Libraries I looked at include: cogen, weightless, eventlet and
circuits (which isn't exactly coroutine-based, but its event-driven
model was intriguing).
Firstly, are there any others I've missed? And what would the
consensus be on which has the most active community behind it?
> I've been taking a look at the multitude of coroutine libraries
> available for Python, but from the looks of the projects they all seem
> to be rather "quiet". I'd like to pick one up to use on a current
> project but can't deduce which is the most popular/has the largest
> community.
>
> Libraries I looked at include: cogen, weightless, eventlet and
> circuits (which isn't exactly coroutine-based, but its event-driven
> model was intriguing).
>
> Firstly, are there any others I've missed?
There's greenlets
http://pypi.python.org/pypi/greenlet
which I think is also fairly described as "quiet".
-M-
http://pypi.python.org/pypi/gevent
On Aug 23, 10:02 pm, Phillip B Oldham <phillip.old...@gmail.com>
wrote:
Please, please document this! There are a lot of people who would
love to use this but give up when they don't find a guide or something
similar.
Coroutines are built into the language. There's a good talk about
them here: http://www.dabeaz.com/coroutines/
HTH,
~Simon
But what some Python programmers call coroutines aren't really the same
as what the programming community at large would call a coroutine.
Jean-Paul
Really? I'm curious as to the differences. (I just skimmed the entry
for coroutines in Wikipedia and PEP 342, but I'm not fully
enlightened.)
Warm regards,
~Simon
>>> Coroutines are built into the language. There's a good talk
>>> about them here: http://www.dabeaz.com/coroutines/
>>
>> But what some Python programmers call coroutines aren't really
>> the same as what the programming community at large would call
>> a coroutine.
>
> Really? I'm curious as to the differences.
Me too. I read through the presentation above, and it seems to
describe pretty much exactly what we called co-routines both in
school and in the workplace.
Back when I worked on one of the first hand-held cellular
mobile phones, it used co-routines where the number of
coroutines was fixed at 2 (one for each register set in a Z80
CPU). The semantics seem to be identical to the coroutines
described in the presentation.
> (I just skimmed the entry for coroutines in Wikipedia and PEP
> 342, but I'm not fully enlightened.)
--
Grant Edwards grante Yow! I was born in a
at Hostess Cupcake factory
visi.com before the sexual
revolution!
The important difference is that coroutines can switch across multiple
stack frames. Python's "enhanced generators" can still only switch
across one stack frame - ie, from inside the generator to the frame
immediately outside the generator. This means that you cannot use
"enhanced generators" to implement an API like this one:
def doSomeNetworkStuff():
    s = corolib.socket()
    s.connect(('google.com', 80))
    s.sendall('GET / HTTP/1.1\r\nHost: www.google.com\r\n\r\n')
    response = s.recv(8192)
where connect, sendall, and recv don't actually block the entire calling
thread, they only switch away to another coroutine until the underlying
operation completes. With "real" coroutines, you can do this.
Jean-Paul
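[For contrast, a rough sketch of that "multiple stack frames" property, using the greenlet library mentioned earlier in this thread. This assumes greenlet is installed; the function names are purely illustrative.]

```python
# Hypothetical sketch: with greenlet-style "real" coroutines, a helper
# buried several frames deep can switch away, and no "yield" appears
# in any of the callers above it.
from greenlet import greenlet

def deep_helper():
    # A frame below do_work(), yet we can still suspend the whole
    # call stack and hand control back to the main greenlet.
    main.switch('paused deep inside')

def do_work():
    deep_helper()            # an ordinary-looking function call
    return 'done'

main = greenlet.getcurrent()
worker = greenlet(do_work)
first = worker.switch()      # runs until deep_helper() switches back
second = worker.switch()     # resumes inside deep_helper, runs to end
print(first, second)
```

The point is that do_work() never says "yield": the suspension happens invisibly inside deep_helper(), which is exactly what generator-based coroutines cannot do.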
Good point. Being unable to "yield" from inside a function
called by the "main" coroutine can be limiting once you try to
do something non-trivial. I had read about that limitation,
but had forgotten it.
Some "stackless" threading schemes for C (e.g. Protothreads
http://www.sics.se/~adam/pt/) have the same issue, and it does
require some extra effort to work around that limitation.
--
Grant Edwards grante Yow! I have accepted
at Provolone into my life!
visi.com
I might be missing some subtlety of your point, but I've implemented
this functionality using generators in a library called Kaa[1]. In kaa,
your example looks like:
import kaa

@kaa.coroutine()
def do_some_network_stuff():
    s = kaa.Socket()
    yield s.connect('google.com:80')
    yield s.write('GET / HTTP/1.1\nHost: www.google.com\n\n')
    response = yield s.read()

do_some_network_stuff()
kaa.main.run()
Of course, it does require a "coroutine scheduler" be implemented,
which, in kaa, is taken care of by the main loop.
Cheers,
Jason.
[1] The curious can visit http://doc.freevo.org/api/kaa/base/ and
http://freevo.org/kaa/ although a 1.0 hasn't yet been released and the
docs are still rather sketchy.
I specifically left out all "yield" statements in my version, since
that's exactly the point here. :) With "real" coroutines, they're not
necessary - coroutine calls look just like any other call. With
Python's enhanced generators, they are.
Jean-Paul
Yes, I take your point.
Explicitly yielding may not be such a terrible thing, though. It's more
clear when reading such code where it may return back to the scheduler.
This may be important for performance, unless of course the coroutine
implementation supports preemption.
Cheers,
Jason.
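[As a concrete illustration of those well-defined reentry points, here is a minimal round-robin scheduler over plain generators. This is a hypothetical sketch of the bare idea, not kaa's API.]

```python
from collections import deque

def scheduler(tasks):
    # Round-robin over plain generators: each bare "yield" below is a
    # well-defined point where control returns to this loop.
    queue = deque(tasks)
    while queue:
        task = queue.popleft()
        try:
            next(task)           # run until its next explicit yield
        except StopIteration:
            continue             # task finished; drop it
        queue.append(task)       # otherwise put it back in rotation

log = []

def worker(name, count):
    for i in range(count):
        log.append((name, i))
        yield                    # explicit switch point

scheduler([worker('a', 2), worker('b', 2)])
print(log)   # -> [('a', 0), ('b', 0), ('a', 1), ('b', 1)]
```

Because switches can only happen at the yields, a reader can tell at a glance exactly where the scheduler may step in.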
Sure, no value judgement intended, except on the practice of taking
words with well established meanings and re-using them for something
else ;)
Jean-Paul
I think it's the behaviour that's important, and not the specific syntax
needed to implement that behaviour.
In other words, I disagree (if this is what you're suggesting) that
sticking "yield" in front of certain expressions makes it any less a
coroutine.
Now, requiring explicit yields does mean that the coroutine has
specific, well-defined points of reentry. But I don't believe it's a
necessary condition that coroutines allow arbitrary (in the
non-deterministic sense) reentry points, only multiple.
Cheers,
Jason.
> On Wed, 2009-09-23 at 22:07 +0000, exa...@twistedmatrix.com wrote:
>> Sure, no value judgement intended, except on the practice of taking
>> words with well established meanings and re-using them for something
>> else ;)
>
> I think it's the behaviour that's important, and not the specific syntax
> needed to implement that behaviour.
>
> In other words, I disagree (if this is what you're suggesting) that
> sticking "yield" in front of certain expressions makes it any less a
> coroutine.
Indeed, it was my (badly faded with age) recollection that explicit
yielding was what made them coroutines rather than general cooperative
multitasking.
--
Rhodri James *-* Wildebeest Herder to the Masses
Alright, I won't pretend to have any particular insight into what the
fundamental "coroutineness" of a coroutine is.
To me, the difference I outlined in this thread is important because it
is a difference that is visible in the API (almost as if it were some
unusual, extra part of the function's signature) to application code. If
you have a "send" function that is what I have been calling a "real"
coroutine, that's basically invisible. Put another way, if you started
with a normal blocking "send" function, then applications would be using
it without "yield". If you used "real" coroutines to make it
multitasking-friendly, then the same applications that were already
using it would continue to work (at least, they might). However, if you
have something like Python's enhanced generators, then they all break
very obviously, since "send" no longer returns the number of bytes
written, but now returns a generator object, something totally
different.
Now, I would say that there's not a huge amount of value in being
able to make a function into a coroutine behind the application's back.
All kinds of problems can result from this. Others will certainly
disagree with me and say that it's worth more than the cost of the
trouble it might cause. But either way, there's clearly *some*
difference between the "real" coroutine way and the enhanced generators
way.
If you think that's not an important difference, I don't mind. I just
hope I've made it clear why I initially said that enhanced generators
aren't what a lot of people would call coroutines. :)
>
>Now, requiring explicit yields does mean that the coroutine has
>specific, well-defined points of reentry. But I don't believe it's a
>necessary condition that coroutines allow arbitrary (in the
>non-deterministic sense) reentry points, only multiple.
I don't think "non-deterministic" is the right word to use here; at
least, it's not what I was trying to convey as possible in coroutines.
More like "invisible".
That aside, I do think that most people familiar with coroutines from
outside of Python would disagree with this, but I haven't done a formal
survey or anything, so perhaps I'm mistaken.
Jean-Paul
Gotta love the lightning-fast EXX instruction. :-)
Regards
Antoine.
The newest one seems to be diesel, by Simon Willison & Co: http://dieselweb.org/lib
Using it in the above context is about equivalent to slipping a hand grenade
in amongst the other eggs in a nest.
:-)
- Hendrik
EXX accomplished much of the context switch operation. I don't
remember how much RAM was available, but it wasn't much...
--
Grant Edwards grante Yow! And furthermore,
at my bowling average is
visi.com unimpeachable!!!
>e> I specifically left out all "yield" statements in my version, since that's
>e> exactly the point here. :) With "real" coroutines, they're not necessary -
>e> coroutine calls look just like any other call. With Python's enhanced
>e> generators, they are.
The first time I encountered coroutines was in Simula-67. Coroutine
switching was certainly explicit there. IIRC, the keyword was resume.
--
Piet van Oostrum <pi...@cs.uu.nl>
URL: http://pietvanoostrum.com [PGP 8DAE142BE17999C4]
Private email: pi...@vanoostrum.org
Personally, I like the yield. I understand that being forced to
rewrite code to insert the yield is ugly, but the yield makes clear
that the control flow is special. In Scheme there is no syntactic
distinction between ordinary function calls and continuation calls,
but they are quite different, since continuations do not return (!).
I always thought this was a wart of the language, not an advantage.
I'm not sure exactly what "coroutine calls" refers to, but the
"mis-feature" in Python co-routines that's being discussed is
the fact that you can only yield/resume from the main coroutine
function.
You can't call a function that yields control back to the other
coroutine(s). By jumping through some hoops you can get the
same effect, but it's not very intuitive and it sort of "feels
wrong" that the main routine has to know ahead of time when
calling a function whether that function might need to yield or
not.
--
Grant Edwards grante Yow! Where does it go when
at you flush?
visi.com
You mean a "trampoline" function? I.e. you have to call into your
coroutines in a special main function that expects as part of the
yielded value(s) the next coroutine to pass control to, and your
coroutines all need to yield the next coroutine?
~Simon
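[For readers who haven't seen one, a minimal trampoline along these lines might look like the following. This is a hypothetical sketch, not any particular library's API: yielding a generator "calls" it, and yielding a plain value "returns" it to the suspended caller.]

```python
import types

def run(task):
    # Drive nested generators. Convention (hypothetical): yielding a
    # generator suspends the caller and enters the sub-generator;
    # yielding a plain value hands it back to the suspended caller.
    stack = []               # suspended calling generators
    value = None             # value to send into the current task
    while True:
        try:
            result = task.send(value)
        except StopIteration:
            if not stack:
                return       # top-level task finished
            task, value = stack.pop(), None
            continue
        if isinstance(result, types.GeneratorType):
            stack.append(task)            # suspend the caller...
            task, value = result, None    # ...and "call" the subroutine
        elif stack:
            task, value = stack.pop(), result   # "return" a value
        else:
            value = None

def add(x, y):
    yield x + y              # "return" x + y to whoever called us

results = []

def main():
    total = yield add(2, 3)  # looks like a call, but needs yield
    results.append(total)

run(main())
print(results)   # -> [5]
```

The trampoline is what lets a sub-generator suspend on behalf of its caller; without it, each level would have to drive the level below by hand.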
>> You can't call a function that yields control back to the other
>> coroutine(s). By jumping through some hoops you can get the
>> same effect, but it's not very intuitive and it sort of "feels
>> wrong" that the main routine has to know ahead of time when
>> calling a function whether that function might need to yield or
>> not.
>
> You mean a "trampoline" function? I.e. you have to call into your
> coroutines in a special main function that expects as part of the
> yielded value(s) the next coroutine to pass control to, and your
> coroutines all need to yield the next coroutine?
Exactly. Compared to "real" coroutines where a yield statement
can occur anywhere, the trampoline business seems pretty
convoluted.
--
Grant Edwards grante Yow! Do you think the
at "Monkees" should get gas on
visi.com odd or even days?
Not directly, but you can simulate this, or at least some pseudo form of
it which is useful in practice. I suppose you could call this "jumping
through some hoops," but from the point of view of the coroutine, it can
be done completely transparently, managed by the coroutine scheduler.
In kaa, which I mentioned earlier, this might look like:
import kaa

@kaa.coroutine()
def task(name):
    for i in range(10):
        print name, i
        yield kaa.NotFinished  # kind of like a time slice

@kaa.coroutine()
def fetch_google():
    s = kaa.Socket()
    try:
        yield s.connect('google.com:80')
    except:
        print 'Connection failed'
        return
    yield s.write('GET / HTTP/1.1\nHost: google.com\n\n')
    yield (yield s.read())

@kaa.coroutine()
def orchestrate():
    task('apple')
    task('banana')
    page = yield fetch_google()
    print 'Fetched %d bytes' % len(page)

orchestrate()
kaa.main.run()
The two task() coroutines spawned by orchestrate() continue to "run in
the background" while any of the yields in fetch_google() are pending
(waiting on some network resource).
It's true that the yields in fetch_google() aren't yielding control
_directly_ to one of the task() coroutines, but it _is_ doing so
indirectly, via the coroutine scheduler, which runs inside the main
loop.
Cheers,
Jason.
It's nice that I could, because I did. :)
> but from the point of view of the coroutine, it can be done
> completely transparently, managed by the coroutine scheduler.
>
> In kaa, which I mentioned earlier, this might look like:
>
> import kaa
>
> @kaa.coroutine()
> def task(name):
>     for i in range(10):
>         print name, i
>         yield kaa.NotFinished  # kind of like a time slice
>
> @kaa.coroutine()
> def fetch_google():
>     s = kaa.Socket()
>     try:
>         yield s.connect('google.com:80')
That's not completely transparent. The routine fetch_google()
has to know a priori that s.connect() might want to yield and
so has to invoke it with a yield statement. Completely
transparent would be this:
    try:
        s.connect('google.com:80')
    except:
        print 'Connection failed'
        return
>     yield s.write('GET / HTTP/1.1\nHost: google.com\n\n')
>     yield (yield s.read())
Again, you have to know ahead of time which functions might
yield and which ones don't and call them differently. That's
the "hoop". If somewhere in the implementation of a function
you discover a need to yield, you have to modify all the "calls"
all the way up to the top frame.
> It's true that the yields in fetch_google() aren't yielding control
> _directly_ to one of the task() coroutines, but it _is_ doing so
> indirectly, via the coroutine scheduler, which runs inside the main
> loop.
True. But I wouldn't call that transparent.
--
Grant Edwards grante Yow! Can I have an IMPULSE
at ITEM instead?
visi.com
So Kaa is essentially implementing the trampoline function.
If I understand it correctly MyHDL does something similar (to
implement models of hardware components running concurrently.)
http://www.myhdl.org/
With my implementation, tasks that execute asynchronously (which may be
either threads or coroutines) return a special object called an
InProgress object. You always yield such calls.
So you're right, it does require knowing a priori what invocations may
return InProgress objects. But this isn't any extra effort. It's
difficult to write any non-trivial program without knowing a priori what
callables will return, isn't it?
> Completely transparent would be this:
[...]
> try:
> s.connect('google.com:80')
> except:
Jean-Paul made the same argument. In my view, the requirement to yield
s.connect() is a feature, not a bug. Here, IMO explicit truly is better
than implicit. I prefer to know at what specific points my routines may
branch off.
And I maintain that requiring yield doesn't make it any less a
coroutine.
Maybe we can call this an aesthetic difference of opinion?
> Again, you have to know ahead of time which functions might
> yield and which ones don't and call them differently. That's
> the "hoop".
To the extent it should be considered jumping through hoops to find what
any callable returns, all right.
> > It's true that the yields in fetch_google() aren't yielding control
> > _directly_ to one of the task() coroutines, but it _is_ doing so
> > indirectly, via the coroutine scheduler, which runs inside the main
> > loop.
>
> True. But I wouldn't call that transparent.
What I meant by transparent is the fact that yield inside fetch_google()
can "yield to" (indirectly) any active coroutine. It doesn't (and
can't) know which. I was responding to your specific claim that:
> [...] the "mis-feature" in Python co-routines that's being discussed
> is the fact that you can only yield/resume from the main coroutine
> function.
With my implementation this is only half true. It's true that for other
active coroutines to be reentered, "main" coroutines (there can be more
than one in kaa) will need to yield, but once control is passed back to
the coroutine scheduler (which is hooked into the main loop facility),
any active coroutine may be reentered.
Cheers,
Jason.
Essentially, yeah. It doesn't require (or support, depending on your
perspective) a coroutine to explicitly yield the next coroutine to be
reentered, but otherwise I'd say it's the same basic construct.
Cheers,
Jason.
> Jean-Paul made the same argument. In my view, the requirement to yield
> s.connect() is a feature, not a bug. Here, IMO explicit truly is better
> than implicit. I prefer to know at what specific points my routines may
> branch off.
>
> And I maintain that requiring yield doesn't make it any less a
> coroutine.
>
> Maybe we can call this an aesthetic difference of opinion?
Certainly.
You've a very valid point that "transparent" can also mean
"invisible", and stuff happening "invisibly" can be a source of
bugs. All the invisible stuff going on in Perl and C++ has
always caused headaches for me.
--
Grant
Actually even 64k looked pretty good, compared to the 1.5k of RAM and 2k
of PROM for one of my projects, a navigation system for shipboard use.
DaveA
Right. I meant I didn't recall how much RAM was available in
that particular product. Using the shadow register set to
store context is limiting when compared to just pushing
everything onto the stack and then switching to another stack,
but that does require more RAM.
--
Grant
> Actually even 64k looked pretty good, compared to the 1.5k of
> RAM and 2k of PROM for one of my projects, a navigation system
> for shipboard use.
I've worked on projects as recently as the past year that had
only a couple hundred bytes of RAM, and most of it was reserved
for a message buffer.
--
Grant
>GE> On 2009-09-25, Piet van Oostrum <pi...@cs.uu.nl> wrote:
>>>>>>>> exa...@twistedmatrix.com (e) wrote:
>>>
>e> I specifically left out all "yield" statements in my version, since that's
>e> exactly the point here. :) With "real" coroutines, they're not necessary -
>e> coroutine calls look just like any other call. With Python's enhanced
>e> generators, they are.
>>>
>>> The first time I encountered coroutines was in Simula-67. Coroutine
>>> switching was certainly explicit there. IIRC, the keyword was resume.
>GE> I'm not sure exactly what "coroutine calls" refers to, but the
>GE> "mis-feature" in Python co-routines that's being discussed is
>GE> the fact that you can only yield/resume from the main coroutine
>GE> function.
Yes, I know, but the discussion had drifted to making the yield
invisible, if I understood correctly.
>GE> You can't call a function that yields control back to the other
>GE> coroutine(s). By jumping through some hoops you can get the
>GE> same effect, but it's not very intuitive and it sort of "feels
>GE> wrong" that the main routine has to know ahead of time when
>GE> calling a function whether that function might need to yield or
>GE> not.
I know. I think this is an implementation restriction in Python, to make
stack management easier. Although if you lifted this restriction, some
new syntax would have to be invented to distinguish a generator
from a normal function.
> Actually even 64k looked pretty good, compared to the 1.5k of RAM and 2k
> of PROM for one of my projects, a navigation system for shipboard use.
Until you wanted to do hi-res colour graphics, at which
point the video memory took up an inconveniently large
part of the address space.
E.g. on the original BBC, you could either have a
decently large program, *or* decently hi-res graphics,
but not both at the same time. :-(
--
Greg
There is little reason to do that nowadays - one can buy a single-cycle
8032 running at 30 MHz with 16/32/64k of programming flash and 1k of RAM,
as well as some bytes of eeprom, for around US$10-00 in one-off quantities.
- Hendrik
$10 is pretty expensive for a lot of applications. I bet that
processor also uses a lot of power and takes up a lot of board
space. If you've only got $2-$3 in the money budget, 200uA at
1.8V in the power budget, and 6mm X 6mm of board-space, your
choices are limited.
Besides, if you can get by with 256 or 512 bytes of RAM, why pay
4X the price for a 1K part?
Besides which, the 8032 instruction set and development tools
are icky compared to something like an MSP430 or an AVR. ;)
[The 8032 is still head and shoulders above the 8-bit PIC
family.]
--
Grant
There are some key advantages to this transparency, especially in the
case of libraries built on libraries. For example, all the networking
libraries that ship in the Python standard lib are based on the
sockets library. They assume the blocking implementation, but then add
HTTPS, cookie handling, SMTP, and all sorts of higher-level network
protocols.
I want to use non-blocking network I/O for (concurrency) performance.
I don't want to re-write an SMTP lib - my language ships with one.
However, it is not possible for someone to write a non-blocking socket
that is a drop-in replacement for the blocking one in the std lib.
Thus, it is not possible for me to use _any_ of the well-written
libraries that are already part of Python's standard library. They
don't have yields sprinkled throughout, so they can't work with a non-
blocking, co-routine implemented socket. And they certainly aren't
written against the non-blocking I/O APIs.
Thus, the efforts by lots of people to write entire network libraries
that, basically, re-implement the Python standard library, but change
the implementation of 7 methods (bind, listen, accept, connect, send,
recv, close). They end up having to duplicate tens of thousands of
LoC, just to change 7 methods.
That's where transparency would be nice - to enable that separation of
concerns.
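[This is exactly the gap the greenlet-based libraries mentioned in this thread try to close: they monkey-patch the stdlib's blocking modules so existing libraries become cooperative without source changes. A rough sketch, assuming gevent is installed; eventlet's eventlet.monkey_patch() works along similar lines.]

```python
# Sketch: make blocking-style code cooperative without rewriting it.
from gevent import monkey
monkey.patch_all()      # swap blocking stdlib calls (socket, sleep,
                        # ...) for greenlet-aware versions

import time             # looks like the unmodified stdlib to callers
import gevent

log = []

def task(name):
    log.append(name + ' start')
    time.sleep(0.01)    # patched: switches to other greenlets
    log.append(name + ' end')

gevent.joinall([gevent.spawn(task, 'a'), gevent.spawn(task, 'b')])
print(log)              # both tasks start before either finishes
```

No yield appears anywhere in task(), which is why the same trick can, in principle, make the stdlib's higher-level protocol code cooperative too.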
> $10 is pretty expensive for a lot of applications. I bet that
> processor also uses a lot of power and takes up a lot of board
> space. If you've only got $2-$3 in the money budget, 200uA at
> 1.8V in the power budget, and 6mm X 6mm of board-space, your
> choices are limited.
>
> Besides, if you can get by with 256 or 512 bytes of RAM, why pay
> 4X the price for a 1K part?
>
> Besides which, the 8032 instruction set and development tools
> are icky compared to something like an MSP430 or an AVR. ;)
>
> [The 8032 is still head and shoulders above the 8-bit PIC
> family.]
I am biased.
I like the 8031 family.
I have written pre-emptive multitasking systems for it,
as well as state-machine round robin systems.
In assembler.
Who needs tools if you have a half decent macro assembler?
And if the macro assembler is not up to much, then you write your own pre
processor using python.
The 803x bit handling is, in my arrogant opinion, still the best of any
processor. - jump if bit set then clear as an atomic instruction rocks.
:-)
Where do you get such nice projects to work on?
- Hendrik
Assembler macros are indeed a lost art. Back in the day, I
remember seeing some pretty impressive macro libraries layered
2-3 deep. I've done assembler macros as recently as about 2-3
years ago because it was the easiest way to auto-magically
generate lookup tables for use by C programs (macro assemblers
always have a "repeat" directive, and cpp doesn't).
> The 803x bit handling is, in my arrogant opinion, still the
> best of any processor. - jump if bit set then clear as an
> atomic instruction rocks.
The bit-addressing mode was (and still is) cool. However, the
stack implementation hurts pretty badly now that memory is
cheap.
I shouldn't criticize the 8051. I remember switching from the
8048 to the 8051 (8751 actually, at about $300 each) and
thinking it was wonderful. [Anybody who remembers fighting
with the 8048 page boundaries knows what I mean.]
>:-)
>
> Where do you get such nice projects to work on?
Just lucky. :)
--
Grant
> Assembler macros are indeed a lost art. Back in the day, I
> remember seeing some pretty impressive macro libraries layered
> 2-3 deep. I've done assembler macros as recently as about 2-3
> years ago because it was the easiest way to auto-magically
> generate lookup tables for use by C programs (macro assemblers
> always have a "repeat" directive, and cpp doesn't).
>
> > The 803x bit handling is, in my arrogant opinion, still the
> > best of any processor. - jump if bit set then clear as an
> > atomic instruction rocks.
>
> The bit-addressing mode was (and still is) cool. However, the
> stack implementation hurts pretty badly now that memory is
> cheap.
>
> I shouldn't criticize the 8051. I remember switching from the
> 8048 to the 8051 (8751 actually, at about $300 each) and
> thinking it was wonderful. [Anybody who remembers fighting
> with the 8048 page boundaries knows what I mean.]
You were lucky - I started with an 8039 and the 8048 was a step up!
You are right about the stack - there are a lot of implementations now with
two or more data pointers, which make a big difference. If only someone
would build one with a two byte stack pointer that points into movx space,
the thing would fly faster again. It would make a stunning difference to the
multitasking performance if you do not have to store the whole stack. Of
course, if you are mucking around in assembler, then the 128 bytes at the top
of the internal memory is often enough.
This is getting a bit far away from python and coroutines, though. :-)
- Hendrik
Getting away from python in the opposite direction, if you click
http://cufp.galois.com/2008/schedule.html
the second presentation "Controlling Hybrid Vehicles with Haskell"
might interest you. Basically it's about a high level DSL that
generates realtime control code written in C. From the slides:
* 5K lines of Haskell/atom replaced 120K lines of matlab, simulink,
and visual basic.
* 2 months to port simulink design to atom.
* Rules with execution periods from 1ms to 10s all scheduled at
compile time to a 1 ms main loop.
* Atom design clears electronic/sw testing on first pass.
* Currently in vehicle testing with no major issues.
Code is here: http://hackage.haskell.org/package/atom
Blurb: "Atom is a Haskell DSL for designing hard realtime embedded
programs. Based on conditional term rewriting, atom will compile a
collection of atomic state transition rules to a C program with
constant memory use and deterministic execution time."
> Getting away from python in the opposite direction, if you click
>
> http://cufp.galois.com/2008/schedule.html
>
> the second presentation "Controlling Hybrid Vehicles with Haskell"
> might interest you. Basically it's about a high level DSL that
> generates realtime control code written in C. From the slides:
>
> * 5K lines of Haskell/atom replaced 120K lines of matlab, simulink,
> and visual basic.
> * 2 months to port simulink design to atom.
> * Rules with execution periods from 1ms to 10s all scheduled at
> compile time to a 1 ms main loop.
> * Atom design clears electronic/sw testing on first pass.
> * Currently in vehicle testing with no major issues.
>
> Code is here: http://hackage.haskell.org/package/atom
>
> Blurb: "Atom is a Haskell DSL for designing hard realtime embedded
> programs. Based on conditional term rewriting, atom will compile a
> collection of atomic state transition rules to a C program with
> constant memory use and deterministic execution time."
Awesome! Thank you
- Hendrik
> $10 is pretty expensive for a lot of applications. I bet that
> processor also uses a lot of power and takes up a lot of board
> space. If you've only got $2-$3 in the money budget, 200uA at
> 1.8V in the power budget, and 6mm X 6mm of board-space, your
> choices are limited.
>
> Besides, if you can get by with 256 or 512 bytes of RAM, why pay
> 4X the price for a 1K part?
>
> Besides which, the 8032 instruction set and development tools
> are icky compared to something like an MSP430 or an AVR. ;)
>
> [The 8032 is still head and shoulders above the 8-bit PIC
> family.]
I was going to say, you want 256 bytes of RAM, you profligate
so-and-so? Here, have 32 bytes of data space and stop your
whining :-)
--
Rhodri James *-* Wildebeest Herder to the Masses
What? You had 1's? All we had were 0's. And we _liked_ it.
--
Grant
DaveA
> I was going to say, you want 256 bytes of RAM, you profligate
> so-and-so? Here, have 32 bytes of data space and stop your
> whining :-)
My multi tasking is coming on nicely, but I am struggling a bit with the
garbage collection. The Trash bin gets a bit tight at times, so the extra
bytes help a lot when I run more than 500 independent tasks.
:-)
- Hendrik
I've actually started doing that recently, check out http://gevent.org
Feedback is appreciated.