
Parallax Propeller


Peter Jakacki

Jan 13, 2013, 1:37:05 AM

For some reason I found myself reading these newsgroups, which I haven't
really used for many years. Although the Parallax Propeller web forums
are extremely active, there never seems to be much discussion elsewhere.
Aren't you all curious as to what you are missing out on?

I have used just about every kind of micro and architecture over the
decades, and while I might still use some small "one buck" micros for
certain tasks and some special ARM chips for higher-end tasks, I have
almost exclusively been using Propeller chips for everything else. The
reason is simple: they are so easy to work with, and there are no
"peripheral modules" to worry about other than the counters and video
hardware per cog. Every pin is general-purpose and any one of the eight
32-bit cores can use it, so you don't have to worry about routing a
special pin, or face the dilemma of using a special pin for one function
but losing the other. The 32-bit CPUs, or cogs, are a breeze to program,
and you can work with a variety of languages besides the easy-to-use Spin
compiler that comes with the chip.

For you Forth enthusiasts, there has been a big flurry of Forth-related
threads on the Propeller forums of late, and there are several versions of
Forth including my own Tachyon Forth. IMHO this is the best way to get
into the Propeller and its hardware. I just can't be bothered trying to
cram functions and debug interrupts on other chips when I can set a cog
to work on a task and still have plenty of resources left over, including
video on-chip.

If you are intrigued or just plain curious, it won't kill you (unless you
are a cat) to have a look at my introduction page, which has links to
projects and the forum etc.
http://goo.gl/VX7pc

There is also a souped-up version of the Propeller, the Propeller II,
coming out in the following months, which has 96 I/O pins, 8 cogs, 128K of
RAM, and 160 MIPS per core.

BTW, I use Silabs C8051F chips as specialised peripherals (ADC etc) that
hang off the "I2C bus" from the Propeller and I can load new firmware
into each chip using just one extra line to program them in-circuit.

*peter*

MK

Jan 13, 2013, 5:26:31 AM
Had a quick look (again) - it's not very fast and it's not very cheap
(compared with a CortexM4 clocked at 150MHz or more). Then there is all
the pain of programming and synching multiple cores which is (I think)
worse than the pain of managing DMA on a typical M4 based micro-controller.
For some applications I'm sure it's a really good fit but I haven't hit
one yet. It's similar in concept to XMOS and Greenchip offerings but
like them suffers from a less than first rank supplier which would worry
most of my customers.

MK


Peter Jakacki

Jan 13, 2013, 7:22:03 AM
Since I've had a lot of experience with the Propeller and ARMs, and I'm
familiar with M4s (still pushing out a design), I can say that unless you
actually sit down with it for an hour or two you will probably continue to
have these misconceptions. Is that the way the Propeller comes across?
I've always wondered about that.

No, your idea of the Propeller is so off the mark. I'm saying that not to
offend, just to set the record straight, although I appreciate the honest
opinion, as it gives me something to work on in explaining what the chip
is and what it is not.

First off, don't try to compare this with an XMOS or GreenArrays (I think
you mean) part, as they are very different. The Propeller is more like 8
identical 32-bit CPUs, each with counter and video hardware and 512 longs
of RAM, all sharing the 32 I/O pins in common and coupled to a multi-port
32kB RAM (the hub). Sounds strange? Okay, but the eight cogs are not
really utilised for "parallel processing"; think of each cog as a CPU, a
virtual peripheral, or both. The way it's used is that you might set up
one cog as an intelligent quad-port UART, another as the VGA cog, another
as keyboard and mouse etc, while maybe only one cog is processing the
application. When you see it in action it is very simple to program, even
for a beginner.
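
To give a rough idea of the shape of a typical application, here is a
C-flavoured sketch only; cog_start() and the driver names are invented
stand-ins for whatever launch mechanism your toolchain provides (COGNEW
in Spin):

/* Sketch of the "one cog per virtual peripheral" model.  All names are
 * illustrative; cog_start() is an assumed helper, not a real API. */

extern void uart_driver(void *params);      /* quad-port UART, bit-banged */
extern void vga_driver(void *params);       /* video generation           */
extern void kbd_mouse_driver(void *params); /* HID handling               */
extern void application(void *params);      /* the actual application     */

/* assumed helper: start `entry` on a free cog with its own parameters */
extern int cog_start(void (*entry)(void *), void *params);

int main(void)
{
    cog_start(uart_driver,      0);   /* each driver owns a whole cog...  */
    cog_start(vga_driver,       0);
    cog_start(kbd_mouse_driver, 0);
    application(0);                   /* ...while this cog runs the app   */
    return 0;
}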

Since each cog has only 512 longs of RAM, which it addresses directly as
both source and destination in each 32-bit instruction, it becomes
necessary to run a virtual machine for high-level application code. This
is what Spin does: it compiles to bytecode in hub RAM, while one or more
cogs run the Spin interpreter. My Tachyon Forth does something similar,
but with much faster runtime speed, and of course the target hosts a Forth
development and runtime system. I use bytecode that directly addresses
the code in the first 256 longs of the Tachyon VM. Each cog runs at a
maximum of 2.5M Forth bytecode instructions/second, and since there are no
interrupts (nor any need for them) each cog runs unencumbered. Realtime
software development and testing has never been easier.
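
If it helps to picture the dispatch, here is a tiny C sketch of the
general idea (not Tachyon itself, just an illustration): each bytecode
value is used directly as an index into a 256-entry table of handlers, so
decode is a single lookup per instruction.

#include <stdint.h>
#include <stddef.h>

typedef void (*op_t)(void);

static int32_t stack[32];
static size_t  sp;

static void op_nop (void) { }
static void op_lit1(void) { stack[sp++] = 1; }                 /* push 1 */
static void op_add (void) { sp--; stack[sp - 1] += stack[sp]; }

static op_t dispatch[256] = { [0x00] = op_nop,
                              [0x01] = op_lit1,
                              [0x02] = op_add };

static void run(const uint8_t *code, size_t len)
{
    for (size_t pc = 0; pc < len; pc++)
        dispatch[code[pc]]();          /* bytecode value = table index */
}

int main(void)
{
    const uint8_t prog[] = { 0x01, 0x01, 0x02 };   /* 1 1 + */
    run(prog, sizeof prog);
    return (int)stack[0];                          /* 2 */
}

In Tachyon the handlers are of course routines sitting in the first 256
longs of the cog rather than C functions, but the single-lookup decode is
the point.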

Hopefully I've put it in some kind of nutshell. What do you think?
Weird? Yes, but it works really well :)

*peter*




Peter Jakacki

Jan 13, 2013, 7:44:03 AM
On Sun, 13 Jan 2013 10:26:31 +0000, MK wrote:

Almost forgot: as for "first rank supplier", I can only say that the big
companies are the ones that let you down, because they upgrade and
suddenly the chip you are using is no longer current, may become hard to
get, and of course gets expensive. Parallax have supported the little guy
just as well as the big, and they have a track record of continuing to
support their product (in many ways) over the years, with assurances as
well. How many companies these days are willing to make their chips
available in DIP as well as SMD simply for "that" market?

I have STM32F4 designs I'm working on, mainly for Ethernet, as well as
other vendors' ARMs, and while the chips are impressive you can't really
compare their raw speed with what the Propeller does and how it does it.

Price is relative too, but 8 bucks for eight fully usable 32-bit cores is
very good value, and the price goes down to under $4.50 in volume.

*peter*

Andrew Haley

Jan 13, 2013, 8:24:35 AM
In comp.lang.forth Peter Jakacki <peterj...@gmail.com> wrote:

> I use bytecode that directly addresses the code in the first 256
> longs of the Tachyon VM. Each cog runs at a maximum of 2.5M Forth
> bytecode instructions/second and since there are no interrupts!! nor
> any need for them then each cog runs unencumbered. Realtime software
> development and testing has never been easier.
>
> Hopefully I've put it in some kind of nutshell. What do you think?
> Weird? Yes, but it works really well :)

I think this looks really good. Fast and simple multi-tasking with
even less overhead than the simplest Forth scheduler. Extremely low
latency. Cheap, simple. I'm sorry I didn't hear about this years
ago.

Andrew.

Peter Jakacki

Jan 13, 2013, 8:55:22 AM
Yes, when you run a task in a cog, that is all it has to do. There is no
need for task switching or interrupts etc. Some of my biggest headaches
have had to do with mysterious glitches which always ended up being traced
back to the wrong interrupts at the wrong time, but only sometimes, just
to make them harder to find.
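
For anyone who hasn't seen the style, the whole of a cog's job can
literally be a loop like this (a C-flavoured sketch only; the register
address and pin mask are made up, not the real Propeller map):

#include <stdint.h>

#define INPUT_REG   (*(volatile uint32_t *)0x40000000u)  /* assumed address */
#define SENSOR_PIN  (1u << 5)                            /* assumed pin     */

volatile uint32_t edge_count;         /* read by the application cog */

void edge_counter_cog(void)
{
    uint32_t prev = INPUT_REG & SENSOR_PIN;

    for (;;) {                        /* this cog never does anything else */
        uint32_t now = INPUT_REG & SENSOR_PIN;
        if (now && !prev)             /* rising edge */
            edge_count++;
        prev = now;
    }
}

The application cog just reads edge_count whenever it likes; nothing ever
preempts the loop, so its timing is completely deterministic.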

The P2, which is due to be released soon, allows each cog to run up to 4
tasks with zero switching overhead; you can even set the switching
priority patterns in a 32-bit mask. P2 also runs 8 times faster than P1
in terms of IPS alone, without taking into account the many enhancements.
Although the P2 has not yet been released, many of us are testing code for
it on FPGA boards such as the DE2-115 or Nanos, because Parallax made the
FPGA binary for the P2 openly available. Can you beat that? I've never
heard of any chip company doing anything even close to that.

*peter*

Albert van der Horst

Jan 13, 2013, 11:24:45 AM
In article <LFxIs.2582$Ow3...@viwinnwfe02.internal.bigpond.com>,
Peter Jakacki <peterj...@gmail.com> wrote:
>On Sun, 13 Jan 2013 10:26:31 +0000, MK wrote:
>
>> On 13/01/2013 06:37, Peter Jakacki wrote:
<SNIP>

>
>[... description of the Spin and Tachyon bytecode VMs snipped ...]
>
>Hopefully I've put it in some kind of nutshell. What do you think?
>Weird? Yes, but it works really well :)

It sounds like GA144 done properly.

>
>*peter*
>
>
>
>
--
Albert van der Horst, UTRECHT,THE NETHERLANDS
Economic growth -- being exponential -- ultimately falters.
albert@spe&ar&c.xs4all.nl &=n http://home.hccnet.nl/a.w.m.van.der.horst

David Brown

Jan 14, 2013, 4:51:39 AM
I don't know about GreenArrays, but this sounds very much like XMOS
devices. There are differences in the details, and the main languages
for the XMOS are C, C++ and XC (sort of C with a few bits removed, and
some parallel processing and XMOS features added) rather than Forth.
But it's the same idea - you have multiple CPUs so that you can make
your peripherals in software in a CPU rather than as dedicated hardware.

And I'd imagine it is similarly "easy" to work with - some things /will/
be easy, but other things will be a lot harder in practice. In
particular, 8 cores/threads sounds great when you start, and you can
write very elegant UARTs, flashing lights, etc. But in the real
application you need more than 8 cores, and you start having to combine
tasks within a single core/thread and the elegance, clarity, and ease of
development go out the window. The other big problem is the small
memories - you start off thinking how cool these devices are that are so
fast you can make USB or Ethernet peripherals in software cores, using
ready-made "software components" from the company's website. But then
you discover that to actually /use/ them for something more than a
flashing lights demo takes more ram than the chips have.


So it's a nice idea for some types of problem - but it is not unique,
and it is only really good for a small number of applications.

Rafael Deliano

Jan 15, 2013, 3:10:08 PM
> you will probably
> continue to have this misconceptions. Is that the way that the Propeller
> comes across? I've always wondered about that.

An unusual/non-mainstream architecture will always be
initially misjudged.

But some of the misconceptions are due to Parallax:
if it has a smallish video generator then it's probably
a retro arcade game machine chip? While the Spin
language is usable, there is nothing people like less
than learning new weird languages.
(
http://www.parallax.com/portals/0/propellerqna/Content/QnaTopics/QnaSpin.htm
"Spin was inspired by portions of C, Delphi, and Python,
and a host of problem/solution scenarios ..." )
It could have easily been an extended BASIC or FORTH.

There are issues in the implementation:
The upper half of the 64k-byte memory map is a 32k-byte ROM
that contains the small 4k-byte Spin interpreter. That
ROM is rather underutilized, filled out with data tables.
This Spin interpreter firmware is encrypted
http://propeller.wikispaces.com/Cracking+Open+the+Propeller+-+Original+Page
which makes implementing other languages on top
of it certainly much fun.
There is not much copy protection for the user,
as the chip boots the application from an external
serial EEPROM into its internal SRAM.

There are issues in the basic concept:
The round-robin hub access doesn't scale very well.
If you go from 8 to 16 cores, the clock has to double,
otherwise speed goes down. So a 100-pin chip
can hardly have 64 cores.
One can think of COG RAM versus HUB RAM as an analogy
to the zero page of an old 6502. But the speed penalty
is much worse: 4 clocks for COG versus 8...23 for HUB.
Identical cores are one-size-fits-all. The Propeller
may be flexible and fast concerning I/O and bit-banging,
but an I/O core that interprets bytecode is probably no
match for even the smallest ARM at executing high-level
language code.

MfG JRD

Peter Jakacki

Jan 16, 2013, 1:06:00 AM
On Mon, 14 Jan 2013 10:51:39 +0100, David Brown wrote:


> [... snip: the similarity to XMOS, "you can write very elegant UARTs,
> flashing lights, etc.", and the point about small on-chip memories ...]
>
> So it's a nice idea for some types of problem - but it is not unique,
> and it is only really good for a small number of applications.
>
>

Well, it seems that unless you try it you can't really judge it, that's
for sure, and in trying to judge it everyone is way off the mark. As a
critique of your critique I would have to give you a very poor mark,
though. To mention flashing lights followed by "etc" is indicative of an
early negative response and an attempt at minimising, as if you were Bill
Gates talking about Linux. As for real applications, I am always
producing real commercial products with this chip, so I think I know
(actually, I do know) what goes into real apps, having worked with a very
wide variety of small to large micros through the decades (and still do).

Having read the rest of your comments I can't see where your fascination
for flashing lights is coming from but I hope you are cured soon :)

BTW, the Propeller does very nice blinking lights along with simultaneous
VGA and graphics rendering along with audio and SD, HID etc while still
handling critical I/O in real deterministic time. The language and tools
were the easiest to learn and use of any processor I've worked with.

Can't you see I'm not trying to sell them? I just didn't want to keep on
being selfish, keeping this good thing all to myself and the secret
Propeller "Illuminati".


*peter*


Peter Jakacki

Jan 16, 2013, 1:26:14 AM
Round-robin access doesn't have to scale; it is what it is, and trying to
judge it based on how it might operate with 64 cores is weird!! Just
judge it as it is. Besides, the successor chip still has 8 cores but
allows 8 longs to be accessed in one "round-robin" cycle. The fixed
round-robin access provides a guaranteed access time for cogs which might
otherwise have been upset by some resource-crazed object.

The smallest or largest ARM has to handle more than one task plus
interrupts etc, so it never achieves its full speed. If everything fits
in the 512 longs of a very code-efficient cog then it is very fast, even
more so because it's not bogged down like an ARM. I can't write any ARM
code that doesn't have to do everything including washing the dishes,
although the ARM does have a large code space without a doubt. The
high-level code rarely needs to run fast though; it's all the low-level
stuff that needs priority, which on an ARM requires some complex interrupt
processing to hopefully service that special interrupt in time. No such
problem ever with the Propeller.

When I first developed code for the ARM it was on the LPC2148, which I
ended up entering into the Philips/NXP 2005 ARM design contest as the
"noPC", with which I was bit-banging VGA graphics under interrupts along
with SD access, audio generation, PS/2 keyboard & mouse as well as serial
etc. That was a lot of hard work on the ARM and left very little
processing time for the application, but doing the same thing on a
Propeller chip is better, easier, and faster.

Perhaps you have been Googling and clicking on some very old links, but
the Spin interpreter source code etc. is openly available and has been for
years, although I don't know why whether it's encrypted or not should be
worthy of any mention.

Higher-end CPU users and those who require secured firmware should take a
look at the forthcoming Propeller 2 due out sometime in the next few
months or so.

*peter*

David Brown

Jan 16, 2013, 7:43:10 AM
I have worked with XMOS chips - but I never claimed to have used the
Parallax (or GreenArray). I have just read some of the information from
the web site, and considered possible applications and problems based on
my XMOS experience - since the XMOS and the Parallax have a similar
philosophy.

I have no doubt that you can do lots of fun things with a Parallax - and
I have no doubt that you can do commercial projects with it. But I also
have no doubt that 8 cores/threads is far too few to be able to dedicate
a core to each task in non-trivial real world applications. And when
you have to multiplex tasks within each core/thread, you have lost much
of the elegance that you had by using the multi-core system.

>
> Having read the rest of your comments I can't see where your fascination
> for flashing lights is coming from but I hope you are cured soon :)
>

Did you not know that every electronics card needs a flashing light, so
that customers will know that it is working?

Seriously, it was obviously just a reference to test or demo software
rather than fully functional software.

> BTW, the Propeller does very nice blinking lights along with simultaneous
> VGA and graphics rendering along with audio and SD, HID etc while still
> handling critical I/O in real deterministic time. The language and tools
> were the easiest to learn and use of any processor I've worked with.
>
> Can't you see I'm not trying to sell them, I just didn't want to keep on
> being selfish keeping this good thing all to myself and the secret
> Propeller "Illuminati".

That's fine - and I am not trying to be critical to the devices (or your
work). I am just pointing out a few realities about the devices, such
as their similarities to the XMOS and limitations that I think users
will quickly encounter. They look like an interesting architecture -
and I am always a fan of considering new architectures and languages -
but I am not convinced that they are a great idea for general use.

>
>
> *peter*
>
>

Peter Jakacki

Jan 16, 2013, 10:27:06 AM
On Wed, 16 Jan 2013 13:43:10 +0100, David Brown wrote:
>
> I have worked with XMOS chips - but I never claimed to have used the
> Parallax (or GreenArray). I have just read some of the information from
> the web site, and considered possible applications and problems based on
> my XMOS experience - since the XMOS and the Parallax have a similar
> philosophy.
>
> I have no doubt that you can do lots of fun things with a Parallax - and
> I have no doubt that you can do commercial projects with it. But I also
> have no doubt that 8 cores/threads is far too few to be able to dedicate
> a core to each task in non-trivial real world applications. And when
> you have to multiplex tasks within each core/thread, you have lost much
> of the elegance that you had by using the multi-core system.
>
> Did you not know that every electronics card needs a flashing light, so
> that customers will know that it is working?
>
> Seriously, it was obviously just a reference to test or demo software
> rather than fully functional software.
>
>
> That's fine - and I am not trying to be critical to the devices (or your
> work). I am just pointing out a few realities about the devices, such
> as their similarities to the XMOS and limitations that I think users
> will quickly encounter. They look like an interesting architecture -
> and I am always a fan of considering new architectures and languages -
> but I am not convinced that they are a great idea for general use.

From what I know about XMOS, and also from those who work with the chip,
it is quite a different beast from the Propeller. The Propeller appears
to be a simpler and easier-to-use chip, and its eight cogs are eight CPUs,
not tasks.

Indeed I know that every PCB needs a flashing light!! How is it that
there are so many designs out there that do not have such a rudimentary
indicator just to tell you that there is power and activity/status? But
indicators do not require the resources of a whole CPU, and normally I run
these directly from code positions, or even from a more general-purpose
timer "task" which looks after multiple timeouts and actions, including
low-priority polling.
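
That timer "task" is nothing fancy; sketched in C it is just a table of
countdowns polled in a loop. get_ticks() stands in for whatever
free-running millisecond counter you have, and the entries are only
examples:

#include <stdint.h>

extern uint32_t get_ticks(void);      /* assumed: free-running ms counter */

typedef struct {
    uint32_t period_ms;
    uint32_t deadline;
    void (*action)(void);
} soft_timer_t;

static void blink_led(void)   { /* toggle the status LED      */ }
static void poll_inputs(void) { /* low-priority input polling */ }

static soft_timer_t timers[] = {
    { 500, 0, blink_led   },
    {  10, 0, poll_inputs },
};

void timer_task(void)
{
    for (;;) {
        uint32_t now = get_ticks();
        for (unsigned i = 0; i < sizeof timers / sizeof timers[0]; i++) {
            if ((int32_t)(now - timers[i].deadline) >= 0) {  /* expired? */
                timers[i].action();
                timers[i].deadline = now + timers[i].period_ms;
            }
        }
    }
}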

Not sure what you are getting at by referring to "non-trivial" real world
applications :) How non-trivial is this? Are all the real-time industrial
control functions, communications and network protocols (wired, RF, and SM
fibre), H-bridge motor and microstepping control, graphic display and
input processing, SD file systems etc trivial? Okay, if so then I think
you might be thinking that just because the Propeller has eight cores
that it is some kind of parallel processing "beastie" but it's actually
classed as a "microcontroller", not a PC killer.

*peter*





Ben Bradley

Jan 18, 2013, 9:07:58 PM
In comp.lang.forth,comp.arch.embedded On Sun, 13 Jan 2013 13:55:22
GMT, Peter Jakacki <peterj...@gmail.com> wrote:


>Yes, when you run a task in a cog then that is all it has to do. There is
>no need for task switching or interrupts etc. Some of my biggest
>headaches had to do with mysterious glitches which always end up being
>traced back to the wrong interrupts at the wrong time, but only
>sometimes, just to make it harder to find.

So you're PREVENTED from using interrupts to make programming
easier.

>
>The P2 which is due to be released soon

... still has zero interrupts.

Yes, I first saw the Propeller mentioned years ago; the 8 32-bit cores
thing sounds nice, but no interrupts was a deal killer for me. A year
or two back (with maybe earlier mention of the P2) I looked on the
"official" support/discussion forums for the Propeller and saw this
longish thread on "why doesn't it have interrupts", and there were
posts there that covered every objection I've had or seen to a
microcontroller not having interrupts, even "why not add interrupts?
It would take very little silicon and you don't have to use 'em if you
don't want to." It's against that designer guru guy's religion or
something.

As far as I know there's no other microcontroller that doesn't have
interrupts, and I can't recall one that didn't. The Apple ][ didn't
use interrupts even though the 6502 had them. Maybe there were some
4-bit microprocessors that didn't have any interrupts.

Paul E. Bennett

Jan 19, 2013, 5:06:24 AM
Ben Bradley wrote:

> U=In comp.lang.forth,comp.arch.embedded On Sun, 13 Jan 2013 13:55:22
> GMT, Peter Jakacki <peterj...@gmail.com> wrote:

[%X]

>>The P2 which is due to be released soon
>
> ... still has zero interrupts.
>
> Yes, I first saw the propellor mentioned years ago, the 8 32-bit cores
> thing sounds nice, but no interrupts was a deal killer for me. A year
> or two back (with maybe earlier mention of the P2) I looked on the
> "official" support/discussion forums for the Propellor and saw this
> longish thread on "why doesn't it have interrupts" and there were
> posts there that covered every objection I've had or seen to a
> microcontroller not having interrupts, even "why not add interrupts?
> It would take very little silicon and you don't have to use 'em if you
> don't want to." It's against that designer guru guy's religion or
> something.
>
> As far as I know there's no other microcontroller that doesn't have
> interrupts, and I can't recall one that didn't. The Apple ][ didn't
> use interrupts even though the 6502 had them. Maybe there were some
> 4-bit microprocessors that didn't have any interrupts.

One question you should ask yourself is why you think a parallel processor
(particularly the mesh-organised ones) really needs to include interrupts.
You have many processors, all the same, simple, no frills. You can afford
to dedicate a processor to dealing with inputs that need to be responded
to rapidly, without the need for interrupts. I know that, to some, it
might seem a waste of a processor, but with heavily parallel chips you can
afford to think about how you allocate I/O around the processor array.

Check back in the history of processors and you will see why the interrupt
was thought to be necessary in the first place. With the world heading
towards much greater use of multi-parallel processors, I suspect the need
for interrupts will wane.

--
********************************************************************
Paul E. Bennett...............<email://Paul_E....@topmail.co.uk>
Forth based HIDECS Consultancy
Mob: +44 (0)7811-639972
Tel: +44 (0)1235-510979
Going Forth Safely ..... EBA. www.electric-boat-association.org.uk..
********************************************************************

Arlet Ottens

Jan 19, 2013, 7:04:46 AM
On 01/19/2013 11:06 AM, Paul E. Bennett wrote:

>> As far as I know there's no other microcontroller that doesn't have
>> interrupts, and I can't recall one that didn't. The Apple ][ didn't
>> use interrupts even though the 6502 had them. Maybe there were some
>> 4-bit microprocessors that didn't have any interrupts.
>
> One question you should ask yourself is why you think a parallel processor
> (particularly the mesh organised ones) really need to include interrupts.
> You have many processors, all the same, simple, no frills. You can afford to
> dedicate a processor to deal with inputs that need to be responded to
> rapidly without the need for interrupts. I know that, to some, it might seem
> a waste of a processor, but with heavily parallel chips you could afford to
> think about how you allocate I/O around the processor array.
>
> Check back in the history of processors and you will see why the interrupt
> was thought to be necessary in the first place. With the world heading to
> the much more use of multi-parallel processor I suspect the need for
> interrupts will wane.

Why not let the end user decide whether interrupts are useful ? Adding
the support is not that complicated, and allows you to have a single
processor performing multiple tasks, and still achieve low latency
response to external events.

If I need a fast and predictable (but very simple) response to an
external event, it is a waste to dedicate an entire processor to the
job. If you don't care about wasting silicon, why not waste some on some
interrupt logic, and offer the best of both worlds ?

David Brown

Jan 19, 2013, 11:12:50 AM
(Please do not snip newsgroups by using "followup to", unless the post
really is off-topic in a group.)


On 19/01/13 11:06, Paul E. Bennett wrote:
> Ben Bradley wrote:
>
>> U=In comp.lang.forth,comp.arch.embedded On Sun, 13 Jan 2013 13:55:22
>> GMT, Peter Jakacki <peterj...@gmail.com> wrote:
>
> [%X]
>
>>> The P2 which is due to be released soon
>>
>> ... still has zero interrupts.
>>
>> [...]
>>
>> As far as I know there's no other microcontroller that doesn't have
>> interrupts, and I can't recall one that didn't. The Apple ][ didn't
>> use interrupts even though the 6502 had them. Maybe there were some
>> 4-bit microprocessors that didn't have any interrupts.

Were there not small Microchip PIC devices without interrupts? The
PIC12 series, or something like that (I didn't use them myself).

>
> One question you should ask yourself is why you think a parallel processor
> (particularly the mesh organised ones) really need to include interrupts.
> You have many processors, all the same, simple, no frills. You can afford to
> dedicate a processor to deal with inputs that need to be responded to
> rapidly without the need for interrupts. I know that, to some, it might seem
> a waste of a processor, but with heavily parallel chips you could afford to
> think about how you allocate I/O around the processor array.
>

That argument might have merit /if/ this chip had many processors. It
only has 8. I think the idea of splitting a design into multiple
simple, semi-independent tasks that all work on their own
cpu/core/thread, in their own little worlds, is very elegant. It can
give great modularisation, re-use of software-components, and easy
testing. But you need /many/ more than 8 threads for such a system with
real-world programs - otherwise you have to combine tasks within the
same thread, and you have lost all the elegance.

So then you might have a system with lots more cores - say 64 cores.
Then you have enough to do quite a number of tasks. But to get that
with a realistic price, power and size, these cores will be very simple
and slow - which means that you can't do tasks that require a single
core running quickly.

What makes a lot more sense is to have a cpu that has hardware support
for a RTOS, and is able to switch rapidly between different tasks. That
way demanding tasks can get the cpu time they need, while you can also
have lots of very simple tasks that give you the modularisation in code
without having to dedicate lots of silicon. The XMOS does a bit of
this, in that it has 8 threads per cpu that can run up to 100 MIPS each
(IIRC), but with a total of 500 MIPS per cpu, and it also has
inter-process communication in hardware.


> Check back in the history of processors and you will see why the interrupt
> was thought to be necessary in the first place. With the world heading to
> the much more use of multi-parallel processor I suspect the need for
> interrupts will wane.
>

Look how many interrupts a modern PC or large embedded system has - they
outnumber the number of cores by 50 to 1 at least. Interrupts are not
going away.

Albert van der Horst

Jan 19, 2013, 11:45:21 AM
In article <tZidnZg9_bafW2fN...@lyse.net>,
David Brown <david...@removethis.hesbynett.no> wrote:
>
>Look how many interrupts a modern PC or large embedded system has - they
>outnumber the number of cores by 50 to 1 at least. Interrupts are not
>going away.

A modern CPU has 10 cores. So you say that they have far more than
500 interrupts. An Intel with an interrupt vector table of 500?
The largest IRQ I have seen fiddling in the BIOS startup screen is
12. I may look at this the wrong way, but 500+ seems excessive.

Groetjes Albert

George Neuner

Jan 19, 2013, 12:29:17 PM
On Fri, 18 Jan 2013 21:07:58 -0500, Ben Bradley
<ben_u_...@etcmail.com> wrote:

>... I first saw the propellor mentioned years ago, the 8 32-bit cores
>thing sounds nice, but no interrupts was a deal killer for me. A year
>or two back (with maybe earlier mention of the P2) I looked on the
>"official" support/discussion forums for the Propellor and saw this
>longish thread on "why doesn't it have interrupts" and there were
>posts there that covered every objection I've had or seen to a
>microcontroller not having interrupts, even "why not add interrupts?
>It would take very little silicon and you don't have to use 'em if you
>don't want to." It's against that designer guru guy's religion or
>something.

There have been many discussions in comp.arch re: this very question.
The consensus has been that interrupts are extremely difficult to
implement properly (in the hardware and/or microcode), and most chips
don't do it right, leading to the occasional unavoidable glitch even
when handler code is written correctly per the CPU documentation.

There also has been much discussion of non-interrupting systems where
cores can be devoted to device handling. The consensus there is that
interrupts per se are not necessary, but such systems still require
inter-processor signaling. There has been considerable debate about
the form(s) such signaling should take.

George

Paul Rubin

Jan 19, 2013, 1:42:36 PM
Ben Bradley <ben_u_...@etcmail.com> writes:
> As far as I know there's no other microcontroller that doesn't have
> interrupts, and I can't recall one that didn't.

The GA144 has no interrupts since you just dedicate a processor to
the event you want to listen for. The processors have i/o ports
(to external pins or to adjacent processors on the chip) that block
on read, so the processor doesn't burn power while waiting for data.

upsid...@downunder.com

Jan 19, 2013, 1:54:22 PM
On Sat, 19 Jan 2013 17:12:50 +0100, David Brown
<david...@removethis.hesbynett.no> wrote:

>
>> Check back in the history of processors and you will see why the interrupt
>> was thought to be necessary in the first place. With the world heading to
>> the much more use of multi-parallel processor I suspect the need for
>> interrupts will wane.
>>
>
>Look how many interrupts a modern PC or large embedded system has - they
>outnumber the number of cores by 50 to 1 at least. Interrupts are not
>going away.

With only 10-50 cores, you would still have to put multiple threads on
each core. I wonder if you could make a pre-emptive kernel without
interrupts? You would end up with some ugly Win 3.x style
co-operative kernel. At least you would need to have some kind of
timer interrupt to handle runaway threads.

Paul Rubin

Jan 19, 2013, 2:04:36 PM
upsid...@downunder.com writes:
> With only 10-50 cores, you still would have to put multiple threads on
> each core. I wonder for you could make a pre-emptive kernel without
> interrupts?

I think we are talking about an embedded microcontroller connected to
a few sensors and pushbuttons or something like that. Not a workstation
or internet server. 8 tasks are probably plenty for many things.

Les Cargill

Jan 19, 2013, 3:18:44 PM
You can still design them as if they were in their own thread and
run them in "series". The now-old CASE tool ObjecTime had a separate
set of screen widgets for organizing separate "executables" into
threads.

In non-case-tool systems, you simply run them one after the other.
This can be round-robin or through a more sophisticated arrangement.
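
In other words, something as plain as this (a minimal sketch; the step
functions are placeholders):

static void uart_step(void)    { /* poll UART, move one byte if ready */ }
static void display_step(void) { /* advance display state machine     */ }
static void control_step(void) { /* one pass of the control loop      */ }

int main(void)
{
    for (;;) {                 /* cooperative round-robin, no preemption */
        uart_step();
        display_step();
        control_step();
    }
}

Each step function has to return quickly (run to completion), and that is
the whole "scheduler".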

> So then you might have a system with lots more cores - say 64 cores.
> Then you have enough to do quite a number of tasks. But to get that
> with a realistic price, power and size, these cores will be very simple
> and slow - which means that you can't do tasks that require a single
> core running quickly.
>
> What makes a lot more sense is to have a cpu that has hardware support
> for a RTOS, and is able to switch rapidly between different tasks.

An RTOS can also be a purely software object. Do you mean essentially
register bank switching?

> That
> way demanding tasks can get the cpu time they need, while you can also
> have lots of very simple tasks that give you the modularisation in code
> without having to dedicate lots of silicon. The XMOS does a bit of
> this, in that it has 8 threads per cpu that can run up to 100 MIPS each
> (IIRC), but with a total of 500 MIPS per cpu, and it also has
> inter-process communication in hardware.
>
>

Shared memory has been around since System V, so...


--
Les Cargill

Les Cargill

Jan 19, 2013, 3:23:06 PM
upsid...@downunder.com wrote:
> On Sat, 19 Jan 2013 17:12:50 +0100, David Brown
> <david...@removethis.hesbynett.no> wrote:
>
>>
>>> Check back in the history of processors and you will see why the interrupt
>>> was thought to be necessary in the first place. With the world heading to
>>> the much more use of multi-parallel processor I suspect the need for
>>> interrupts will wane.
>>>
>>
>> Look how many interrupts a modern PC or large embedded system has - they
>> outnumber the number of cores by 50 to 1 at least. Interrupts are not
>> going away.
>
> With only 10-50 cores, you still would have to put multiple threads on
> each core. I wonder for you could make a pre-emptive kernel without
> interrupts ?


Does it really need to be preemptive?

> You would end up into some ugly Win 3.x style
> co-operative kernel.

There is nothing particularly wrong with cooperative multitasking,
especially for embedded systems.

> At least you would need to have some kind of
> timer interrupt to handle runaway threads.
>

Or you could consider runaway threads to be defects.

--
Les Cargill


upsid...@downunder.com

Jan 19, 2013, 4:05:48 PM
When you consider that each external connection will need a dedicated
task, there are not many tasks left for the actual application.

Anyway, the I/O task needs to inform the main application that the I/O
operation (such as reading a complete frame from the serial port) has
ended. Of course, the main task could poll some shared memory
location(s), burning a lot of power doing that.

Some low-power "wait for interrupt" style instruction would help
reduce the power consumption, in order to avoid the busy polling
(especially if multiple signal sources need to be polled).

Alternatively, the main task requesting a service (such as receiving a
frame) needs to send the request to the I/O task and then go into a
low-power halt state. Upon completion, the I/O task would have to power
up the halted task (hopefully from the same location where it was halted
and not from the restart point :-). Things get ugly if two or more
waiting tasks need to be informed.
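
As a concrete (and deliberately naive) C sketch of that shared-memory
handshake -- all names invented, and a real implementation would need
proper memory barriers rather than bare volatile accesses:

#include <stdint.h>

#define FRAME_MAX 64

volatile uint8_t  frame_buf[FRAME_MAX];
volatile uint32_t frame_len;
volatile uint32_t frame_ready;        /* 0 = empty, 1 = frame available */

void io_core(void)                    /* runs on the dedicated I/O core */
{
    for (;;) {
        /* ... receive a complete frame into frame_buf ... */
        frame_len   = 12;             /* illustrative length only */
        frame_ready = 1;              /* publish it to the main core */
        while (frame_ready)
            ;                         /* wait until the main core consumes it */
    }
}

void main_core(void)                  /* runs on the main core */
{
    for (;;) {
        while (!frame_ready)
            ;                         /* busy poll -- or the "wait for
                                         interrupt" style instruction
                                         discussed above, if you have one */
        /* ... process frame_buf[0..frame_len-1] ... */
        frame_ready = 0;              /* hand the buffer back */
    }
}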

b...@bob.com

Jan 19, 2013, 8:56:01 PM
I use timer interrupts mainly to have many nice accurate timers.

It makes it so easy. I could use polled timers in main() but really
prefer the ease of an interrupt and if it doesn't require any context
switching, what could be easier ?
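
For anyone following along, the pattern is about as simple as interrupt
code gets. A sketch (hooking timer_isr() up to a 1 ms hardware tick is
assumed and vendor-specific):

#include <stdint.h>

#define NUM_TIMERS 4

volatile uint32_t timers_ms[NUM_TIMERS];

void timer_isr(void)                  /* assumed to be called every 1 ms */
{
    for (unsigned i = 0; i < NUM_TIMERS; i++)
        if (timers_ms[i])
            timers_ms[i]--;           /* count each armed timer down */
}

int main(void)
{
    timers_ms[0] = 250;               /* arm a 250 ms timeout */
    for (;;) {
        if (timers_ms[0] == 0) {
            /* ... timeout action ... */
            timers_ms[0] = 250;       /* re-arm */
        }
        /* ... other polled work ... */
    }
}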

boB

Waldek Hebisch

Jan 19, 2013, 9:03:50 PM
In comp.arch.embedded Paul E. Bennett <Paul_E....@topmail.co.uk> wrote:
>
> One question you should ask yourself is why you think a parallel processor
> (particularly the mesh organised ones) really need to include interrupts.
> You have many processors, all the same, simple, no frills. You can afford to
> dedicate a processor to deal with inputs that need to be responded to
> rapidly without the need for interrupts. I know that, to some, it might seem
> a waste of a processor, but with heavily parallel chips you could afford to
> think about how you allocate I/O around the processor array.
>

Forget about I/O -- in modern systems I/O devices frequently are like
specialized processors. Think about coordination between different
processors -- you need to get some info from one processor to another
ASAP, and the info is non-atomic, so just writing it to shared RAM will
not do. And you need an interrupt even in the case of atomic info if the
target processor is doing something else and would not look at the info
otherwise.



--
Waldek Hebisch
heb...@math.uni.wroc.pl

Mark Wills

Jan 20, 2013, 4:05:27 AM
On Jan 19, 9:05 pm, upsided...@downunder.com wrote:
> Alternatively, the main task requesting a service (such as receiving a
> frame) needs to send the request to the I/O task and then go to a low
> power halt state. Upon completing the I/O task would have to power up
> the halted task (hopefully from the same location it was halted and
> not from the restart point :-). Things get ugly, if two or more
> waiting tasks need to be informed.

I don't 100% agree with that description! If you wrote your code like
that then you end up with code written in a parallel style, but that
executes in a serial (i.e. sequential) way!

It's far better if the task that is waiting for a response from the
serial I/O task goes off and does some other work while it is waiting.
Otherwise what you have in reality is a sequential process written in
a parallel style; in other words, it's far more complicated than it
needs to be.

If there's no other work to do, and you put the main task to sleep,
then it's possible that the overall software 'solution' didn't need to
be parallel in the first place - time to re-think the design and
simplify.

Or, to put it another way, if the 'main-task' has to go to sleep while
the serial task is running, then I would say the software has been
designed "the wrong way around" - the *serial task* is the main task,
and the programmer needs to rotate his perception of the problem he is
trying to solve by 180 degrees! He has it backwards.

Okay, enough pedantry from me!

Arlet Ottens

Jan 20, 2013, 6:11:46 AM
On 01/20/2013 10:05 AM, Mark Wills wrote:
> On Jan 19, 9:05 pm, upsided...@downunder.com wrote:
>> Alternatively, the main task requesting a service (such as receiving a
>> frame) needs to send the request to the I/O task and then go to a low
>> power halt state. Upon completing the I/O task would have to power up
>> the halted task (hopefully from the same location it was halted and
>> not from the restart point :-). Things get ugly, if two or more
>> waiting tasks need to be informed.
>
> I don't 100% agree with that description! If you wrote your code like
> that then you end up with code written in a parallel style, but that
> executes in a serial (i.e. sequential) way!
>
> It's far better if the task that is waiting for a response from the
> serial I/O task goes off and does some other work while it is waiting.
> Otherwise what you have in reality is a sequential process written in
> a parallel style; in other words, it's far more complicated than it
> needs to be.

Of course it would be preferable if the tasks could do other work, but
that requires an interrupt mechanism if you want to handle external
events with low and predictable latency.

upsid...@downunder.com

Jan 20, 2013, 6:26:27 AM
On Sun, 20 Jan 2013 01:05:27 -0800 (PST), Mark Wills
<markrob...@yahoo.co.uk> wrote:

>On Jan 19, 9:05 pm, upsided...@downunder.com wrote:
>> Alternatively, the main task requesting a service (such as receiving a
>> frame) needs to send the request to the I/O task and then go to a low
>> power halt state. Upon completing the I/O task would have to power up
>> the halted task (hopefully from the same location it was halted and
>> not from the restart point :-). Things get ugly, if two or more
>> waiting tasks need to be informed.
>
>I don't 100% agree with that description! If you wrote your code like
>that then you end up with code written in a parallel style, but that
>executes in a serial (i.e. sequential) way!
>
>It's far better if the task that is waiting for a response from the
>serial I/O task goes off and does some other work while it is waiting.
>Otherwise what you have in reality is a sequential process written in
>a parallel style; in other words, it's far more complicated than it
>needs to be.

While this is true, quite a lot of protocols are essentially
half-duplex request/response protocols (such as Modbus). While for
instance TCP/IP is capable of full-duplex communication, many
protocols built on TCP/IP are essentially half duplex request/response
(such as http). Of course, the I/O core should be programmed to handle
sending the request and assembling the response in true IBM mainframe
I/O processor SNA style :-)

>If there's no other work to do, and you put the main task to sleep,
>then it's possible that the overall software 'solution' didn't need to
>be parallel in the first place - time to re-think the design and
>simplify.

If the main task is doing something useful between issuing the request
and processing the result, then the question is _how_ or _when_ the main
task is going to process the I/O response without interrupts. Of course,
the main task could poll the I/O status, say, every few hundred
microseconds, but this would force you to insert those polls all around
your application code, making it hard to maintain.

However, if the I/O task (core) is polled at some slow,
application-convenient rate (say 10 ms), the buffering requirements would
be quite large. For a half-duplex protocol, any latencies due to polling
will kill the throughput (especially at high signaling rates).

might be a solution in some cases, I have

>Okay, enough pedantry from me!

The original question was about how to use multiple cores (= tasks in
traditional RTOS speak) without interrupts.

While I understand the problem of implementing proper interrupt
processing in current processors with long pipelines and large caches,
I still think that at least some kind of "wait for interrupt"
mechanism is needed (i.e. flushing the pipeline, invalidating the cache
and going into a low-power sleep mode), together with some mechanism to
reactivate the code from an external pin or by some other core writing a
value to a power-up register. If you do not want to call this an
"interrupt", then that is another story.

upsid...@downunder.com

Jan 20, 2013, 6:53:45 AM
On Sun, 20 Jan 2013 12:11:46 +0100, Arlet Ottens <usen...@c-scape.nl>
wrote:
A system can be implemented in various ways. If the intertask
synchronization and communication is very primitive (e.g. implemented
in hardware only), you easily end up with dozens or even hundreds of
very simple tasks doing a huge number of random communications with
each other, making it hard to manage.

On the other hand, if you are forced to a single task (old style
Unix), you easily end up building some huge select() calls, including
file descriptors from actual files, sockets and serial lines. This
also makes it hard to maintain the system.

There should be some sweet spot between these two extremes. A
sufficiently usable intertask communication system (typically using
interrupts) should be able to handle this. I prefer 5-10 different
tasks, so my fingers are sufficient to keep track of them, without
having to take my socks off to use my toes for counting the rest :-).

David Brown

Jan 20, 2013, 11:49:25 AM
On 19/01/13 17:45, Albert van der Horst wrote:
> In article <tZidnZg9_bafW2fN...@lyse.net>,
> David Brown <david...@removethis.hesbynett.no> wrote:
>>
>> Look how many interrupts a modern PC or large embedded system has - they
>> outnumber the number of cores by 50 to 1 at least. Interrupts are not
>> going away.
>
> A modern CPU has 10 cores. So you say that they have far more than
> 500 interrupts. An Intel with an interrupt vector table of 500?
> The largest IRQ I have seen fiddling in the BIOS startup screen is
> 12. I may look at this the wrong way, but 500+ seems excessive.
>
> Groetjes Albert
>>
>>

The BIOS only shows the IRQ lines available to the original PC, not
those on the "extended" interrupt controller or multiplexed interrupt
sources.

A quick check on my PC shows 36 interrupts in use, with four cores. So
my first guess was a bit of an exaggeration - though I don't know how
many interrupt sources are available. But it is also not hard to find
large microcontrollers with hundreds of interrupt sources and only one or
two cores, giving a ratio of much more than 50 to 1.

Of course, any given system is unlikely to use more than a small
fraction of the interrupt sources - but even something as simple as a
UART can use three interrupts plus some sort of timer. But it will
still be a lot more than the number of cores.

David Schultz

Jan 20, 2013, 12:21:31 PM
On 01/18/2013 08:07 PM, Ben Bradley wrote:
> It's against that designer guru guy's religion or something.

I remember reading an article in Nuts & Volts on the propeller which
discussed the interrupt thing. Digging into the archives...

"There were only a few rules: it had to be fast, it had to be relatively
easy to program, and it had to be able to do multiple tasks without
using interrupts -- the bane of all but the heartiest programmers."
April 2006, page 16.

In other words, interrupts are too hard.

By this standard, I am a hearty programmer.

--
David W. Schultz
http://home.earthlink.net/~david.schultz
Returned for Regrooving


Dave Nadler

Jan 20, 2013, 2:00:27 PM (to Paul_E....@topmail.co.uk)
On Saturday, January 19, 2013 5:06:24 AM UTC-5, Paul E. Bennett wrote:
> ... I suspect the need for interrupts will wane.

And FORTH will rise again !

Sorry, couldn't help myself...

PS: Yes, I have programmed complicated IO processing on a
minicomputer without interrupts. And my next task was adding
an interrupt controller ;-)

David Brown

Jan 20, 2013, 2:32:27 PM
You can certainly do that - but then you don't get the dedication of a
core that lets you get low-latency reactions.

>> So then you might have a system with lots more cores - say 64 cores.
>> Then you have enough to do quite a number of tasks. But to get that
>> with a realistic price, power and size, these cores will be very simple
>> and slow - which means that you can't do tasks that require a single
>> core running quickly.
>>
>> What makes a lot more sense is to have a cpu that has hardware support
>> for a RTOS, and is able to switch rapidly between different tasks.
>
> An RTOS can also be a purely software object. Do mean essentially
> register bank switching?

Bank switching is part of it, but to make a "hardware RTOS" you need a
system to switch tasks (register banks) automatically by task.

>
>> That
>> way demanding tasks can get the cpu time they need, while you can also
>> have lots of very simple tasks that give you the modularisation in code
>> without having to dedicate lots of silicon. The XMOS does a bit of
>> this, in that it has 8 threads per cpu that can run up to 100 MIPS each
>> (IIRC), but with a total of 500 MIPS per cpu, and it also has
>> inter-process communication in hardware.
>>
>>
>
> Shared memory has been around since System V, so...

There is a lot more to inter-process communication than shared memory.
The XMOS implements a message passing system with CSP-style synchronisation.

Walter Banks

Jan 20, 2013, 5:12:17 PM
Interrupts are not going away anytime soon.

There are event riven processors that are essentially all interrupts.

Add run-to-completion (to eliminate preemption overhead) and multiple
cores so that interrupts can use the next available execution unit, and a
lot of processing overhead goes away, with a comparable reduction in
software complexity.

w..


Les Cargill

Jan 20, 2013, 7:08:11 PM
To be sure.

>>> So then you might have a system with lots more cores - say 64 cores.
>>> Then you have enough to do quite a number of tasks. But to get that
>>> with a realistic price, power and size, these cores will be very simple
>>> and slow - which means that you can't do tasks that require a single
>>> core running quickly.
>>>
>>> What makes a lot more sense is to have a cpu that has hardware support
>>> for a RTOS, and is able to switch rapidly between different tasks.
>>
>> An RTOS can also be a purely software object. Do mean essentially
>> register bank switching?
>
> Bank switching is part of it, but to make a "hardware RTOS" you need a
> system to switch tasks (register banks) automatically by task.
>

I presume the code store is shared, so it's a matter
of assigning the "task" represented by entry point <x> to
core <y>.

>>
>>> That
>>> way demanding tasks can get the cpu time they need, while you can also
>>> have lots of very simple tasks that give you the modularisation in code
>>> without having to dedicate lots of silicon. The XMOS does a bit of
>>> this, in that it has 8 threads per cpu that can run up to 100 MIPS each
>>> (IIRC), but with a total of 500 MIPS per cpu, and it also has
>>> inter-process communication in hardware.
>>>
>>>
>>
>> Shared memory has been around since System V, so...
>
> There is a lot more to inter-process communication than shared memory.
> The XMOS implements a message passing system with CSP-style
> synchronisation.
>

Very nice indeed, then. CSP is the right abstraction.


--
Les Cargill

Mel Wilson

Jan 20, 2013, 7:19:45 PM
Walter Banks wrote:

> There are event riven processors that are essentially all interrupts.

There are no spelling mistakes. Only unexpected meanings. Not even
unexpected, sometimes.

Mel.

David Brown

Jan 21, 2013, 3:01:17 AM
On 20/01/13 23:12, Walter Banks wrote:
> Interrupts are not going away anytime soon.
>
> There are event riven processors that are essentially all interrupts.

I have often written programs where the main loop contains nothing but a
"sleep until next interrupt" instruction - it is not uncommon for
low-power systems.
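
The whole of main() in that style is just this (a sketch; __wfi() stands
in for the target's wait-for-interrupt intrinsic, e.g. WFI on a Cortex-M):

extern void __wfi(void);     /* assumed intrinsic: sleep until an interrupt */

int main(void)
{
    /* ... configure peripherals and enable their interrupts here ... */
    for (;;)
        __wfi();             /* wake, let pending handlers run, sleep again */
}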

>
> Add run to completion (to eliminate preemption overhead) and multiple
> cores for interrupts to use the next available execution unit and a lot of
> processing overheads go away with comparable reduction in software
> complexity.
>

If you can eliminate all forms of pre-emption, you can keep many things
simpler - there is no need to worry about how you share data or
resources amongst threads, for example. But of course you can no longer
have low-latency reactions to events - you need to make sure these are
handled in hardware.


Walter Banks

Jan 21, 2013, 9:11:49 PM
Latency in these systems is reduced in two ways: precomputing responses
delivered on an event (your comment), and multiple execution units so
that code is not pre-empted but is executed immediately if there is an
available execution unit.

Hardware-arbitrated execution priority levels are also a feature of these
processors.

The rest is software design to make sure the response time meets the
application requirements. The resulting code has a better average
response time, and that can be important as well.


w..



upsid...@downunder.com

Jan 22, 2013, 2:39:49 AM
On Sun, 20 Jan 2013 17:12:17 -0500, Walter Banks
<wal...@bytecraft.com> wrote:

>> Look how many interrupts a modern PC or large embedded system has - they
>> outnumber the number of cores by 50 to 1 at least. Interrupts are not
>> going away.
>
>Interrupts are not going away anytime soon.
>
>There are event riven processors that are essentially all interrupts.
>
>Add run to completion (to eliminate preemption overhead) and multiple
>cores for interrupts to use the next available execution unit and a lot of
>processing overheads go away with comparable reduction in software
>complexity.

Of course you could design a core which restarts every time an
external or internal interrupt occurs (such as a request packet sent
by another core), runs to completion, and then drops back into a
low-power halt state.

Of course, this works for some problems, but sooner or later you end
up with a hellish state machine which remembers where you were when
the previous interrupt occurred.
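In C terms, that state machine ends up looking something like the sketch
below (the phases and names are hypothetical): every event restarts the
handler, which has to dispatch on state it saved last time.

  #include <stdint.h>

  /* Where we were when the previous interrupt occurred. */
  enum phase { IDLE, ADDR_SENT, DATA_SENT };
  static enum phase phase = IDLE;

  /* Invoked once per event; must run to completion so the core can halt again. */
  void on_event(uint8_t status)
  {
      (void)status;
      switch (phase) {
      case IDLE:      /* event started a transaction     */ phase = ADDR_SENT; break;
      case ADDR_SENT: /* slave acknowledged the address  */ phase = DATA_SENT; break;
      case DATA_SENT: /* transfer finished, back to idle */ phase = IDLE;      break;
      }
  }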

Mel Wilson

unread,
Jan 22, 2013, 10:14:06 AM1/22/13
to
upsid...@downunder.com wrote:

> Of course you could design a core which restarts every time an
> external or internal interrupt occurs (such as a request packet sent
> by another core), runs to completion, and then drops back into a
> low-power halt state.
>
> Of course, this works for some problems, but sooner or later you end
> up with a hellish state machine which remembers where you were when
> the previous interrupt occurred.

It does look as though the AVR TWI interface was designed to be controlled
just that way.
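For reference, a minimal sketch of that style on the AVR, assuming
avr-libc's <util/twi.h> and a master write of a single byte; the slave
address and variable names are just placeholders:

  #include <avr/io.h>
  #include <avr/interrupt.h>
  #include <util/twi.h>

  #define SLAVE_ADDR 0x50          /* placeholder 7-bit address */

  static volatile uint8_t byte_to_send;

  /* The TWI hardware raises TWINT after each bus event; the ISR reads the
     status register and decides the next action - a hardware-driven
     state machine. */
  ISR(TWI_vect)
  {
      switch (TW_STATUS) {
      case TW_START:                       /* START sent: address the slave */
          TWDR = (SLAVE_ADDR << 1) | TW_WRITE;
          TWCR = _BV(TWINT) | _BV(TWEN) | _BV(TWIE);
          break;
      case TW_MT_SLA_ACK:                  /* slave ACKed: send the data byte */
          TWDR = byte_to_send;
          TWCR = _BV(TWINT) | _BV(TWEN) | _BV(TWIE);
          break;
      default:                             /* done (or error): send STOP */
          TWCR = _BV(TWINT) | _BV(TWEN) | _BV(TWSTO);
          break;
      }
  }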

Mel.

rickman

unread,
Jan 23, 2013, 12:29:04 PM1/23/13
to
On 1/19/2013 12:29 PM, George Neuner wrote:
> On Fri, 18 Jan 2013 21:07:58 -0500, Ben Bradley
> <ben_u_...@etcmail.com> wrote:
>
>> ... I first saw the propellor mentioned years ago, the 8 32-bit cores
>> thing sounds nice, but no interrupts was a deal killer for me. A year
>> or two back (with maybe earlier mention of the P2) I looked on the
>> "official" support/discussion forums for the Propellor and saw this
>> longish thread on "why doesn't it have interrupts" and there were
>> posts there that covered every objection I've had or seen to a
>> microcontroller not having interrupts, even "why not add interrupts?
>> It would take very little silicon and you don't have to use 'em if you
>> don't want to." It's against that designer guru guy's religion or
>> something.
>
> There has been much discussion in comp.arch re: this very question.
> The consensus has been that interrupts are extremely difficult to
> implement properly (in the hardware and/or microcode), and most chips
> don't do it right, leading to the occasional unavoidable glitch even
> when handler code is written correctly per the CPU documentation.

How do you define "implement properly" for interrupts? Like most
things, if interrupts are kept simple, they work. It's multi-priority,
multi-task processing that is hard to do right.


> There also has been much discussion of non-interrupting systems where
> cores can be devoted to device handling. The consensus there is that
> interrupts per se are not necessary, but such systems still require
> inter-processor signaling. There has been considerable debate about
> the form(s) such signaling should take.

There is always more than one way to skin a cat. I'm not convinced
interrupts are not necessary, well, maybe I should say "not useful"
instead, how's that?

Rick

rickman

unread,
Jan 23, 2013, 12:40:27 PM1/23/13
to
In my opinion, the lack of timers is one of a number of significant
shortcomings in the GA144. One of the claimed selling features of the
device is the low power it makes possible. But if you need to wait for
a specific amount of time (not at all uncommon in real-time systems,
which many embedded systems are), you have to put a processor into a
spin loop to time it!
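Something like the following is all you can do, and the point is exactly
that the core never halts while it counts (the iteration count would be a
made-up, hand-tuned number):

  #include <stdint.h>

  /* Busy-wait delay: the core stays fully active, burning power, for the
     whole interval it is supposed to be measuring. */
  static void spin_delay(uint32_t iterations)
  {
      volatile uint32_t n = iterations;  /* volatile stops the loop being optimised away */
      while (n--)
          ;
  }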

If you read Chuck Moore's blog he spent some time trying to implement a
video output without a clock. He only gave up that idea when he found
the pixels jittered on the screen because of the vagaries of async
processor timing loops. Had he implemented a simple timer on each
processor driven by... yes, a global chip clock (Oh! The horror!!!) many
timing events would be so much simpler and likely lower power. 5 mW per
core in a spin loop. It doesn't take many of those to add up to
significant power.

In a recent design I considered for the GA144, I found the CPU core
expended more than half its power in the spin loop timing the ADC
converter, and that was at just a 6% duty cycle! This exceeded the
power budget. With an on-chip clock the ADC could have been timed at
very low power, possibly making the design practical.

Rick

Mark Wills

unread,
Jan 24, 2013, 5:03:32 PM1/24/13
to
On Jan 23, 5:29 pm, rickman <gnu...@gmail.com> wrote:
> On 1/19/2013 12:29 PM, George Neuner wrote:
> > On Fri, 18 Jan 2013 21:07:58 -0500, Ben Bradley
> > <ben_u_brad...@etcmail.com>  wrote:
I'm inclined to agree, though I've only had experience with 'classic'
micro-processors in this regard, so maybe my thoughts on the issue are
simply out of date. I can see that if you have a lot of cores you can
effectively make your own interrupt controller by dedicating a core or
more to it. That idea seems to make sense on a simple device like the
GA devices, where each core is very primitive in its own right, so one
can argue that the 'cost' of assigning a core to the task of interrupt
detection is low. However, the idea does not sit well with me when
talking about complex devices such as the Propeller. Dedicating a cog
to interrupt control sounds bonkers to me, especially when a cog has
its own video controller - that's real overkill.

I get the impression that the Propeller is somewhat dumbed-down for
the hobbyist market. I cite its programming language and the lack of
interrupts as two examples. Why couldn't they add a 9th super-simple
core just for interrupts, one that could pipe certain types of
interrupts to certain cogs? Best of both worlds.

The TMS99xx family of processors (very old) has 16 prioritised
cascading interrupts. Probably inherited from mini-computer
architecture. Very very powerful for its day. Since they were
prioritised, a lower level interrupt would not interrupt a higher
level interrupt until the higher level ISR terminated. Makes serving
multiple interrupts an absolute doddle. Not bad for 1976.

None

unread,
Jan 24, 2013, 6:26:05 PM1/24/13
to
rickman <gnu...@gmail.com> writes:
> If you read Chuck Moore's blog he spent some time trying to implement a
> video output without a clock. He only gave up that idea when he found
> the pixels jittered on the screen because of the vagaries of async
> processor timing loops.

I played with the PropTerm and found that the CPU-generated VGA bit stream (a
CPU got dedicated to the task) resulted in displays which always had a little
bit of fuzziness. It worked, and was quite readable, but the sharpness
of a regular PC display really made me aware of the limits of a pure software
approach to analog generation.

Andy

Hugh Aguilar

unread,
Jan 25, 2013, 1:55:53 AM1/25/13
to
On Jan 24, 3:03 pm, Mark Wills <markrobertwi...@yahoo.co.uk> wrote:
> The TMS99xx family of processors (very old) has 16 prioritised
> cascading interrupts. Probably inherited from mini-computer
> architecture. Very very powerful for its day. Since they were
> prioritised, a lower level interrupt would not interrupt a higher
> level interrupt until the higher level ISR terminated. Makes serving
> multiple interrupts an absolute doddle. Not bad for 1976.

Doddle? I've never heard that word before. Is a doddle good or bad?

Mark Wills

unread,
Jan 25, 2013, 2:03:32 AM1/25/13
to
doddle = extremely simple/easy

"Did you manage to fix that bug?"
"Yeah, it was a doddle!"

:-)

Walter Banks

unread,
Jan 25, 2013, 11:49:53 AM1/25/13
to
I have worked on a couple of event-driven ISA designs. Jitter is
visible on displays, but it is equally problematic in control systems.
The best solution that I have seen/used is to have the hardware
transfer out a precomputed value, or latch an input, on the event
(interrupt) trigger. Output values are almost always known in advance.

This minor change has very little impact on the processor silicon
complexity.

A second important performance issue is to have an easily accessed
data area associated with each interrupt source. It means that a lot
of common code (PWM, AC phase control...) can be a single executable.
In some hardware, preloading an index register with the start of that
interrupt's data area gives a significant performance improvement.
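A rough sketch of that pattern in C (the register names and the PWM
arithmetic are purely illustrative); the win is that one handler body
serves every channel, and the output value was computed before the event
arrived:

  #include <stdint.h>

  /* Per-channel context: one instance per interrupt source. */
  struct pwm_ctx {
      volatile uint32_t *out_reg;   /* where the precomputed value gets written */
      uint32_t next_value;          /* prepared in advance, ready at the event  */
      uint32_t increment;
  };

  static struct pwm_ctx channels[4];

  /* One shared handler; the interrupt source selects its own context,
     ideally with the hardware preloading the context pointer for us. */
  void pwm_event(struct pwm_ctx *c)
  {
      *c->out_reg = c->next_value;      /* output first: minimal jitter             */
      c->next_value += c->increment;    /* then compute the *next* value at leisure */
  }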

Walter Banks
Byte Craft Limited


Coos Haak

unread,
Jan 25, 2013, 2:02:51 PM1/25/13
to
On Thu, 24 Jan 2013 23:03:32 -0800 (PST), Mark Wills wrote:
As we say: een fluitje van een cent.
A "flute of a cent" costs nearly nothing and can be made for nearly
nothing. There is a herb (Anthriscus sylvestris) we call Fluitekruid.
Due to near-French purism (hash-tag vs. mot-dièse) we must now write
Fluitenkruid, as if it were plural.

--
Coos

CHForth, 16 bit DOS applications
http://home.hccnet.nl/j.j.haak/forth.html

rickman

unread,
Jan 25, 2013, 9:25:49 AM1/25/13
to
How do you know the display "fuzziness" was due to software timing? I
would expect software timing on a clocked processor to be on par with
other means of timing. There are other aspects of design that could
cause fuzziness or timing ambiguities in the signal.

Rick

None

unread,
Jan 25, 2013, 4:57:10 PM1/25/13
to
rickman <gnu...@gmail.com> writes:
> How do you know the display "fuzziness" was due to software timing? I
> would expect software timing on a clocked processor to be on par with
> other means of timing. There are other aspects of design that could
> cause fuzziness or timing ambiguities in the signal.

If you look at the inner loop driving the output pin, you can do a min/max
skew calculation which ends up with quite a bit of jitter on the table.
The product is the PockeTerm, you can pick one up at:

http://www.brielcomputers.com/wordpress/?cat=25

It's open source, VGA_HiRes_Text.spin is the low level driver for
VGA output. Note it actually uses *two* CPUs, and is some pretty darn
cool assembly code--written by the president of the Propeller company!

Andy Valencia
Home page: http://www.vsta.org/andy/
To contact me: http://www.vsta.org/contact/andy.html

rickman

unread,
Jan 26, 2013, 6:07:58 PM1/26/13
to
I don't follow what causes the skew you mention. Instruction timings
are deterministic, no? If not, trying to time using code is hopeless.
If the timings are deterministic, the skew should not be cumulative
since they are all based on the CPU clock. Is the CPU clock from an
accurate oscillator like a crystal? If it is using an internal RC
clock, again timing to sufficient accuracy is hopeless.

Rick

None

unread,
Jan 26, 2013, 10:09:58 PM1/26/13
to
rickman <gnu...@gmail.com> writes:
> I don't follow what causes the skew you mention. Instruction timings
> are deterministic, no?

The chip has a lower level bit stream engine which the higher level CPU
("cog") is feeding. Well, a pair of cogs. Each cog has local memory
and then a really expensive path through a central arbiter ("hub"). It
fills its image of the scanlines from the shared memory, then has to
feed it via waitvid into the lower level. Note that it's bit stream
engine *per cog*, so you also have to worry about their sync.

So yes, instruction timings are deterministic (although your shared
memory accesses will vary modulo the hub round-robin count). You
need to reach the waitvid before it's your turn to supply the next value.
But given that, this is much like the old wait state sync feeding bytes
to a floppy controller. PLL and waitvid sync are achieved with
magic incantations from Parallax, and it is not 100%.

> If the timings are deterministic, the skew should not be cumulative
> since they are all based on the CPU clock. Is the CPU clock from an
> accurate oscillator like a crystal? If it is using an internal RC
> clock, again timing to sufficient accuracy is hopeless.

The board has a CPU clock from which the PLL derives the video output
frequency. I recall the CPU clock being based on a crystal, but not one
with any consideration for video intervals. And the PLL's are per cog,
again my comment about (potential lack of) global sync.

Anyway, you should buy one and check it out. I'd be curious to hear
if (1) you also observe the same video quality, and (2) if you think it's
the waitvid mechanism, more the PLL->SVGA generation, or the sync issues
of the paired video generators. They even supply the schematic, FWIW.

Mark Wills

unread,
Jan 27, 2013, 3:50:59 AM1/27/13
to
The instruction times are deterministic (presumably; I've never
written code on the Propeller), but when generating video in software,
*per scan line* all possible code paths have to add up to the same
number of cycles in order to completely avoid jitter. That's very hard
to do.

Consider a single scan line that contains text interspersed with
spaces. For the current horizontal position the software has to:

* Determine if background or foreground (i.e. a pixel of text colour)
should be drawn
* If background
* select background colour to video output register
* If foreground
* determine character under current horizontal position
* determine offset (in pixels) into the current line of the
character
* is a pixel to be drawn?
* If yes, load pixel colour
* otherwise, load background colour

The second code path is a lot more complex, containing many more
instructions, yet both code paths have to balance in terms of
execution time (see the sketch below). This is just one example.
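A sketch of what that balancing looks like in C (assuming GCC for the
"nop" padding; the cycle count and helper names are invented, and in
practice this is hand-counted assembly):

  #include <stdint.h>

  static inline void output_pixel(uint32_t colour) { (void)colour; /* drive the pins here */ }
  static inline void pad_cycles(int n) { while (n--) __asm volatile ("nop"); }

  #define BG 0x00u
  #define FG 0xFFu

  /* Emit one pixel; the cheap branch is padded so both branches take the
     same time - otherwise the pixel edges jitter. */
  void emit_pixel(const uint8_t *glyph_row, int column)
  {
      if (glyph_row == 0) {                  /* background only: cheap path, pad it */
          pad_cycles(6);                     /* 6 is a placeholder, tuned by hand   */
          output_pixel(BG);
      } else {                               /* glyph lookup: the expensive path    */
          uint8_t bits = glyph_row[column >> 3];
          output_pixel((bits & (0x80u >> (column & 7))) ? FG : BG);
      }
  }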

This is how video is done on the original Atari VCS console. 100%
software, with the hardware only providing horizontal interrupts (one
per scan line) and VBLNK interrupts, IIRC.

Caveat: The above assumes that there is no interrupt per horizontal
pixel. With interrupts, it's much easier. The Propeller doesn't have
any interrupts so software video generation would be non-trivial to
say the least. The easiest way would be to provide a pixel clock and
use an I/O pin to sync to, as Chuck found out for himself when
implementing video on the GA144.

Andrew Haley

unread,
Jan 27, 2013, 4:30:05 AM1/27/13
to
In comp.lang.forth Mark Wills <markrob...@yahoo.co.uk> wrote:
> Caveat: The above assumes that there is no interrupt per horizontal
> pixel. With interrupts, it's much easier.

I really don't understand why you say this. You need to be able to
sync to a timing pulse; whether this is done with interrupts doesn't
matter.

Andrew.

Dombo

unread,
Jan 27, 2013, 6:58:51 AM1/27/13
to
On 27-Jan-13 9:50, Mark Wills wrote:
On the Atari VCS the software did not have to send out the individual
pixels. The TIA chip had memory for a single scan-line, which the TIA
chip converted to a video signal autonomously. The software just had to
make sure that the right data was loaded into the TIA chip in time for
each scan-line; it could finish doing that before the end of the
scan-line, but not after it. The TIA chip also has a function to stall
the CPU until the start of the next scan line. In other words, the
software had to be fast enough for each possible execution flow, but
did not have to complete in the exact same number of cycles.



Hugh Aguilar

unread,
Jan 28, 2013, 8:13:58 PM1/28/13
to
Maybe the reason why we don't have "doddle" or any similar word in
America, is because we never do anything the simple/easy way here! :-)

David Brown

unread,
Jan 29, 2013, 3:15:19 AM1/29/13
to
I am not sure, but I think "doddle" is perhaps a Scottish term. As far
as I could tell from an online dictionary, the origins are from the
German word for bagpipe...

Another very useful term is "fangle", which is the Scottish word that
perfectly describes the organisation of cables on most embedded
developers' desks.

rickman

unread,
Jan 28, 2013, 9:38:05 PM1/28/13
to
I'm not getting it. I guess the software had to be done this way to
optimize the CPU utilization. The "proper" way to time in software is
to have the video data already calculated in a frame buffer and use spin
loops to time when pixels are shifted out. That way you don't have lots
of processing to figure out the timing for. But you spend most of your
processing time in spin loops. Why was it done this way? To save a few
bucks on video hardware? That's just not an issue now days... unless
you are really obsessive about not using hardware where hardware is
warranted.


> Caveat: The above assumes that there is no interrupt per horizontal
> pixel. With interrupts, it's much easier. The Propeller doesn't have
> any interrupts so software video generation would be non-trivial to
> say the least. The easiest way would be to provide a pixel clock and
> use an I/O pin to sync to, as Chuck found out for himself when
> implementing video on the GA144.

I would have to go back and reread the web pages, but I think Chuck's
original attempt was to time the *entire* frame timing in software with
NO hardware timing at all. He found the timings drifted too much from
temperature (that's what async processors do after all, they are timed
by the silicon delays which vary with temp) so that with the monitor he
was using it would stop working once the board warmed up. I'm surprised
he had to build it to find that out. But I guess he didn't have specs
on the monitor.

His "compromise" to hardware timing was to use a horizontal *line*
interrupt (with a casual use of the word "interrupt", it is really a
wait for a signal) which was driven from the 10 MHz oscillator node,
like you described for the Atari VCS. He still did the pixel timing in a
software loop. With 144 processors it is no big deal to do that... *OR*
he could have sprinkled a few counters around the chip to be used for
*really* low power timing. Each CPU core uses 5 mW when it is running a
simple timing loop. One of the big goals of the chip is to be low power
and software timing is the antithesis of low power in my opinion. But
then you would need an oscillator and a clock tree...

I think there is an optimal compromise between a chip with fully async
CPUs, with teeny tiny memories, no clocks, no peripherals (including
nearly no real memory interface) and a chip with a very small number of
huge CPUs, major clock trees running at very high clock rates, massive
memories (multiple types), a plethora of hardware peripherals and a
maximal bandwidth memory interface. How about an array of many small
CPUs, much like the F18 (or an F32, which rumor has it is under
development), each one with a few kB of memory, with a dedicated idle
timer connected to lower speed clock trees (is one or two small clock
trees a real power problem?), some real hardware peripherals for the
higher speed I/O standards like 100/1000 Mbps Ethernet, real USB
(including USB 3.0), some amount of on chip block RAM and some *real*
memory interface which works at 200 or 300 MHz clock rates?

I get where Chuck is coming from with the minimal CPU thing. I have
said before that I think it is a useful chip in many ways. But so far I
haven't been able to use it. One project faced the memory interface
limitation, and another found that the low-power modes the chip is
supposed to be capable of are too hard to reach when you need to do
real-time stuff at really low power. It only needs a few small
improvements including *real* I/O that can work at a number of voltages
rather than just the core voltage.

Oh yeah, some real documentation on the development system would be
useful too. I think you have to read some three or more documents just
to get started with the tools. I know it was pretty hard to figure it
all out, not that I *actually* figured it out.

Rick

rickman

unread,
Jan 28, 2013, 9:51:04 PM1/28/13
to
Weird, your posts all show up in my reader as replies to your own
messages rather than replies to my posts. The trimming made it hard for
me to figure out just what we were talking about with the odd
connections in my reader.


On 1/26/2013 10:09 PM, None wrote:
> rickman<gnu...@gmail.com> writes:
>> I don't follow what causes the skew you mention. Instruction timings
>> are deterministic, no?
>
> The chip has a lower level bit stream engine which the higher level CPU
> ("cog") is feeding. Well, a pair of cogs. Each cog has local memory
> and then a really expensive path through a central arbiter ("hub"). It
> fills its image of the scanlines from the shared memory, then has to
> feed it via waitvid into the lower level. Note that it's bit stream
> engine *per cog*, so you also have to worry about their sync.

I can't picture the processing with this description. I don't know
about the higher level and lower level CPUs you describe. Are you
saying there is some sort of dedicated hardware in each CPU for video?
Or is this separate from the CPUs? Why a *pair* of COGs? I assume a
COG is the Propeller term for a CPU?


> So yes, instruction timings are deterministic (although your shared
> memory accesses will vary modulo the hub round-robin count). You
> need to reach the waitvid before it's your turn to supply the next value.
> But given that, this is much like the old wait state sync feeding bytes
> to a floppy controller. PLL and waitvid sync are achieved with
> magic incantations from Parallax, and it is not 100%.

Not 100%? What does that mean? Magic? I guess this is the magic smoke
you want to keep from getting out of the chip?


>> If the timings are deterministic, the skew should not be cumulative
>> since they are all based on the CPU clock. Is the CPU clock from an
>> accurate oscillator like a crystal? If it is using an internal RC
>> clock, again timing to sufficient accuracy is hopeless.
>
> The board has a CPU clock from which the PLL derives the video output
> frequency. I recall the CPU clock being based on a crystal, but not one
> with any consideration for video intervals. And the PLL's are per cog,
> again my comment about (potential lack of) global sync.

I still don't know enough about the architecture to know what this
means. I don't care if the CPUs are not coordinated closely. If you
have a video engine providing the clock timing, why would the CPU timing
matter?


> Anyway, you should buy one and check it out. I'd be curious to hear
> if (1) you also observe the same video quality, and (2) if you think it's
> the waitvid mechanism, more the PLL->SVGA generation, or the sync issues
> of the paired video generators. They even supply the schematic, FWIW.

I appreciate your enthusiasm, but I have my own goals and projects. I
am currently oriented towards absurdly low power levels in digital
designs and am working on a design that will require no explicit power
source, it will scavenge power from the environment. I don't think a
Propeller is suitable for such a task is it?

Rick

None

unread,
Jan 29, 2013, 9:10:40 PM1/29/13
to
rickman <gnu...@gmail.com> writes:
> Weird, your posts all show up in my reader as replies to your own
> messages rather than replies to my posts. The trimming made it hard for
> me to figure out just what we were talking about with the odd
> connections in my reader.

Sorry. I'm assuming your reader is threading via the "References" field?
It looks like my posting software is preserving that.

> > The chip has a lower level bit stream engine which the higher level CPU
> > ("cog") is feeding. Well, a pair of cogs. Each cog has local memory
> > and then a really expensive path through a central arbiter ("hub"). It
> > fills its image of the scanlines from the shared memory, then has to
> > feed it via waitvid into the lower level. Note that it's bit stream
> > engine *per cog*, so you also have to worry about their sync.
> I can't picture the processing with this description. I don't know
> about the higher level and lower level CPUs you describe. Are you
> saying there is some sort of dedicated hardware in each CPU for video?
> Or is this separate from the CPUs? Why a *pair* of COGs? I assume a
> COG is the Propeller term for a CPU?

Yes, each cog has its own PLL and "video" bit stream engine (quotes because
they claim it can be used for any sort of analog stream in general). They
needed to use a pair of cogs (CPUs) because of the time it takes to pull
from screen memory as conceived by the ANSI emulation and generate the
scan lines to represent the font plus underline plus cursor. So the idea
is one is doing all that while the other is painting scan lines. Double
buffering, basically.
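So, in effect, something like this sketch (the buffer size and the font
expansion are invented for illustration): one cog fills the hidden
scan-line buffer while the other streams the visible one out.

  #include <stdint.h>

  #define LINE_PIXELS 640

  static uint32_t lines[2][LINE_PIXELS];   /* two scan-line buffers          */
  static volatile int show_idx = 0;        /* the buffer being displayed now */

  /* Renderer side: expand one row of packed glyph bits into the hidden
     buffer, then hand it over. The display side keeps reading
     lines[show_idx]. */
  void render_next_line(const uint8_t *glyph_bits)
  {
      uint32_t *dst = lines[show_idx ^ 1];
      for (int i = 0; i < LINE_PIXELS; i++) {
          uint8_t bit = (glyph_bits[i >> 3] >> (7 - (i & 7))) & 1u;
          dst[i] = bit ? 0xFFFFFFu : 0x000000u;  /* font bit -> pixel colour */
      }
      show_idx ^= 1;                       /* swap once the line is complete */
  }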

> Not 100%? What does that mean? Magic? I guess this is the magic smoke
> you want to keep from getting out of the chip?

Yes, there is no formal/deterministic way to lock the PLL's of the two
cogs. Everybody uses the sample code Parallax provided, and it has
definitely been shown that their "lock" can be skewed.

> I still don't know enough about the architecture to know what this
> means. I don't care if the CPUs are not coordinated closely. If you
> have a video engine providing the clock timing, why would the CPU timing
> matter?

They have *two* video engines. Each is driven from its own PLL, so
the only clock they share is the crystal oscillator.

> > Anyway, you should buy one and check it out. I'd be curious to hear
> > if (1) you also observe the same video quality, and (2) if you think it's
> > the waitvid mechanism, more the PLL->SVGA generation, or the sync issues
> > of the paired video generators. They even supply the schematic, FWIW.
> I appreciate your enthusiasm, but I have my own goals and projects. I
> am currently oriented towards absurdly low power levels in digital
> designs and am working on a design that will require no explicit power
> source, it will scavenge power from the environment. I don't think a
> Propeller is suitable for such a task is it?

Darn, because I'm pretty sure you are much better equipped to drill
down into this than I. :-> But, no way, a Propeller is definitely a
traditional CPU for your purposes.

Anders....@kapsi.spam.stop.fi.invalid

unread,
Jan 30, 2013, 7:19:18 AM1/30/13
to
In comp.arch.embedded David Brown <da...@westcontrol.removethisbit.com> wrote:

> Another very useful term is "fangle", which is the Scottish word that
> perfectly describes the organisation of cables on most embedded
> developers' desks.

Gives a whole new meaning to the phrase "new-fangled".

-a

David Brown

unread,
Jan 30, 2013, 3:18:23 PM1/30/13
to
This would make more sense if I could spell - normally I rely on
Thunderbird's spell checker, but of course that doesn't help here! I
meant to write "fankle" rather than "fangle", and I don't think the
terms are related.

rickman

unread,
Jan 30, 2013, 6:32:59 PM1/30/13
to
On 1/29/2013 9:10 PM, None wrote:
> rickman<gnu...@gmail.com> writes:
>
> Yes, each cog has its own PLL and "video" bit stream engine (quotes because
> they claim it can be used for any sort of analog stream in general). They
> needed to use a pair of cogs (CPU's) because of the time it takes to pull
> from screen memory as conceived by the ANSI emulation and generate the
> scan lines to represent the font plus underline plus cursor. So the idea
> is one is doing all that while the other is painting scan lines. Double
> buffering, basically.

More like splitting the work load between two processors. I did timing
analysis of this (back of the envelope type stuff) for the GA144 and it
would be pretty simple with 100's of MIPs per CPU. I wouldn't think it
would be that hard with any processor running at reasonable clock rates.
Not sure why they need two CPUs. It all depends on the pixel clock
rate. For a terminal (what you seem to be describing) the pixel rate
should be fairly low, 50-80 MHz. With an 8-pixel-wide font that gives a
character rate of 10 MHz max, so I guess that could tax a 100 MIPS
processor.


>> Not 100%? What does that mean? Magic? I guess this is the magic smoke
>> you want to keep from getting out of the chip?
>
> Yes, there is no formal/deterministic way to lock the PLL's of the two
> cogs. Everybody uses the sample code Parallax provided, and it has
> definitely been shown that their "lock" can be skewed.

I don't know what the architecture of this design is. Ideally there
would be no need to lock the two processors.


>> I still don't know enough about the architecture to know what this
>> means. I don't care if the CPUs are not coordinated closely. If you
>> have a video engine providing the clock timing, why would the CPU timing
>> matter?
>
> They have *two* video engines. Each is generated from its own PLL, so
> the first global clock is a crystal oscillator.

I have no image of how or why you would want to use *two* video engines,
although two could be used with one for the char data and one for the
cursor overlay. I also don't know anything about these "video engines".
If they are indeed video engines, they should be doing all the
addressing and fetching from memory. One way a terminal saves memory
bandwidth is by not fetching the same chars again for each line. In an
old implementation I remember seeing they used a shift register to
recycle the same char data for each scan line.


>> I appreciate your enthusiasm, but I have my own goals and projects. I
>> am currently oriented towards absurdly low power levels in digital
>> designs and am working on a design that will require no explicit power
>> source, it will scavenge power from the environment. I don't think a
>> Propeller is suitable for such a task is it?
>
> Darn, because I'm pretty sure you are much better equipped to drill
> down into this than I. :-> But, no way, a Propeller is definitely a
> traditional CPU for your purposes.

I'm happy to discuss this with you.

Rick

Mark Wills

unread,
Jan 31, 2013, 1:57:25 AM1/31/13
to
On Jan 30, 8:18 pm, David Brown <david.br...@removethis.hesbynett.no>
wrote:
Fankle was our "Word of the week" here in the office just last week.
Every week a word is chosen, and it's printed out on A3 and put up on
the wall. It is every engineer's moral obligation to use that word as
many times as he can in every conversation he engages in. It's
hilarious. Makes client meetings so much more entertaining.

Alex McDonald

unread,
Jan 31, 2013, 1:38:39 PM1/31/13
to

Mark Wills

unread,
Jan 31, 2013, 3:34:55 PM1/31/13
to
Fit like, min?

Its nae my shot at word of the week next week, but see when it's my
shot, Stramash it'll be. Ken?

j.m.gr...@gmail.com

unread,
Feb 2, 2013, 6:22:03 PM2/2/13
to
On Tuesday, January 29, 2013 3:51:04 PM UTC+13, rickman wrote:
> I am currently oriented towards absurdly low power levels in digital
> designs and am working on a design that will require no explicit power
> source, it will scavenge power from the environment. I don't think a
> Propeller is suitable for such a task is it?

Depends on your work-load, and just how much power...

The Prop is RAM-loaded, so it avoids the flash cost of many cores, and the data shows ~3.5 uA @ 3.3 V static; being RAM-based it can also scale Vcc below that.
(but it does have a boot-energy, which could be another issue..)

That is likely too high for most "scavenged power" designs, but it also climbs slowly, so it can poll a pin at 60 kHz at ~10 uA.

Some experimental numbers, found via Google, are here:
http://forums.parallax.com/showthread.php/133434-Booting-the-Prop-from-low-voltages-2-of-3-Scavenged-Headphone-Audio-0.42V
and
http://forums.parallax.com/showthread.php/133382-Booting-the-Prop-from-low-voltages-1-of-3-Single-Solar-Cell-0.56V
and then also
http://forums.parallax.com/showthread.php/133455-Booting-the-Prop-from-low-voltages-3-of-3-Meyer-Lemon-0.42V
and also
http://forums.parallax.com/showthread.php/129731-Prop-Limbo!-how-low-%28power-voltage%29-can-it-go!

rickman

unread,
Feb 4, 2013, 12:31:09 AM2/4/13
to
That's very interesting, thanks.

--

Rick
0 new messages