
Most "active" coroutine library project?


Phillip B Oldham

Aug 23, 2009, 11:02:34 AM
to
I've been taking a look at the multitude of coroutine libraries
available for Python, but from the looks of the projects they all seem
to be rather "quiet". I'd like to pick one up to use on a current
project but can't deduce which is the most popular/has the largest
community.

Libraries I looked at include: cogen, weightless, eventlet and
circuits (which isn't exactly coroutine-based but it's event-driven
model was intriguing).

Firstly, are there any others I've missed? And what would the
consensus be on the which has the most active community behind it?

Matthew Woodcraft

Aug 23, 2009, 1:23:06 PM
to
Phillip B Oldham <phillip...@gmail.com> writes:

> I've been taking a look at the multitude of coroutine libraries
> available for Python, but from the looks of the projects they all seem
> to be rather "quiet". I'd like to pick one up to use on a current
> project but can't deduce which is the most popular/has the largest
> community.
>
> Libraries I looked at include: cogen, weightless, eventlet and
> circuits (which isn't exactly coroutine-based but it's event-driven
> model was intriguing).
>
> Firstly, are there any others I've missed?

There's greenlets

http://pypi.python.org/pypi/greenlet

which I think is also fairly described as "quiet".

-M-

Denis

Aug 25, 2009, 12:51:52 AM
to
You can also look at gevent

http://pypi.python.org/pypi/gevent


On Aug 23, 10:02 pm, Phillip B Oldham <phillip.old...@gmail.com>
wrote:

Brian Hammond

Sep 23, 2009, 11:58:03 AM
to
On Aug 25, 12:51 am, Denis <denis.bile...@gmail.com> wrote:
> You can also at gevent
>
> http://pypi.python.org/pypi/gevent

Please, please document this! There are a lot of people who would
love to use this but give up when they don't find a guide or something
similar.

Simon Forman

Sep 23, 2009, 1:00:13 PM
to Phillip B Oldham, pytho...@python.org
> --
> http://mail.python.org/mailman/listinfo/python-list
>

Coroutines are built into the language. There's a good talk about
them here: http://www.dabeaz.com/coroutines/
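
For anyone who hasn't bumped into them yet, the PEP 342 flavour covered
in that talk looks roughly like this minimal sketch:

    # A generator-based "coroutine": values are pushed into it with send().
    def grep(pattern):
        print 'looking for', pattern
        while True:
            line = yield              # suspends here until someone calls send()
            if pattern in line:
                print line

    g = grep('python')
    g.next()                          # prime it: run up to the first yield
    g.send('no match here')           # nothing printed
    g.send('python is fun')           # printed, since it contains the pattern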

HTH,
~Simon

exa...@twistedmatrix.com

Sep 23, 2009, 2:05:44 PM
to pytho...@python.org
On 05:00 pm, sajm...@gmail.com wrote:
>On Sun, Aug 23, 2009 at 11:02 AM, Phillip B Oldham
><phillip...@gmail.com> wrote:
>>--
>>http://mail.python.org/mailman/listinfo/python-list
>
>Coroutines are built into the language. There's a good talk about
>them here: http://www.dabeaz.com/coroutines/

But what some Python programmers call coroutines aren't really the same
as what the programming community at large would call a coroutine.

Jean-Paul

Simon Forman

Sep 23, 2009, 4:16:06 PM
to exa...@twistedmatrix.com, pytho...@python.org
On Wed, Sep 23, 2009 at 2:05 PM, <exa...@twistedmatrix.com> wrote:
> On 05:00 pm, sajm...@gmail.com wrote:
>>
>> On Sun, Aug 23, 2009 at 11:02 AM, Phillip B Oldham
>> <phillip...@gmail.com> wrote:
>>>
>>> --
>>> http://mail.python.org/mailman/listinfo/python-list
>>
>> Coroutines are built into the language.  There's a good talk about
>> them here: http://www.dabeaz.com/coroutines/
>
> But what some Python programmers call coroutines aren't really the same as
> what the programming community at large would call a coroutine.
>
> Jean-Paul

Really? I'm curious as to the differences. (I just skimmed the entry
for coroutines in Wikipedia and PEP 342, but I'm not fully
enlightened.)

Warm regards,
~Simon

Grant Edwards

Sep 23, 2009, 4:41:29 PM
to
On 2009-09-23, Simon Forman <sajm...@gmail.com> wrote:

>>> Coroutines are built into the language.  There's a good talk
>>> about them here: http://www.dabeaz.com/coroutines/
>>
>> But what some Python programmers call coroutines aren't really
>> the same as what the programming community at large would call
>> a coroutine.
>
> Really? I'm curious as to the differences.

Me too. I read through the presentation above, and it seems to
describe pretty much exactly what we called co-routines both in
school and in the workplace.

Back when I worked on one of the first hand-held cellular
mobile phones, it used co-routines where the number of
coroutines was fixed at 2 (one for each register set in a Z80
CPU). The semantics seem to be identical to the coroutines
described in the presentation.

> (I just skimmed the entry for coroutines in Wikipedia and PEP
> 342, but I'm not fully enlightened.)

--
Grant Edwards, grante at visi.com
Yow! I was born in a Hostess Cupcake factory before the sexual revolution!

exa...@twistedmatrix.com

Sep 23, 2009, 4:50:08 PM
to Simon Forman, pytho...@python.org
On 08:16 pm, sajm...@gmail.com wrote:
>On Wed, Sep 23, 2009 at 2:05 PM, <exa...@twistedmatrix.com> wrote:
>[snip]

>>
>>But what some Python programmers call coroutines aren't really the
>>same as
>>what the programming community at large would call a coroutine.
>>
>>Jean-Paul
>
>Really? I'm curious as to the differences. (I just skimmed the entry
>for coroutines in Wikipedia and PEP 342, but I'm not fully
>enlightened.)

The important difference is that coroutines can switch across multiple
stack frames. Python's "enhanced generators" can still only switch
across one stack frame - ie, from inside the generator to the frame
immediately outside the generator. This means that you cannot use
"enhanced generators" to implement an API like this one:

def doSomeNetworkStuff():
    s = corolib.socket()
    s.connect(('google.com', 80))
    s.sendall('GET / HTTP/1.1\r\nHost: www.google.com\r\n\r\n')
    response = s.recv(8192)

where connect, sendall, and recv don't actually block the entire calling
thread, they only switch away to another coroutine until the underlying
operation completes. With "real" coroutines, you can do this.
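
For a concrete (if toy) sketch of the difference, the greenlet package
mentioned elsewhere in this thread lets a plain-looking call suspend the
whole call stack; the scheduler below is deliberately minimal and the
names are made up:

    import greenlet

    def fake_recv():
        # Deep inside an ordinary call chain: switch to the scheduler
        # greenlet.  To the caller this is just a function call that
        # eventually returns a value.
        return scheduler.switch('waiting for data')

    def do_some_network_stuff():
        response = fake_recv()        # no yield anywhere up the stack
        print 'got:', response

    def run_scheduler():
        msg = worker.switch()         # start the worker; it "blocks" in fake_recv
        print 'scheduler saw:', msg   # a real hub would wait on select() here
        worker.switch('hello world')  # resume the worker; fake_recv returns this

    worker = greenlet.greenlet(do_some_network_stuff)
    scheduler = greenlet.greenlet(run_scheduler)
    scheduler.switch()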

Jean-Paul

Grant Edwards

Sep 23, 2009, 4:58:49 PM
to
On 2009-09-23, exa...@twistedmatrix.com <exa...@twistedmatrix.com> wrote:
> On 08:16 pm, sajm...@gmail.com wrote:
>>On Wed, Sep 23, 2009 at 2:05 PM, <exa...@twistedmatrix.com> wrote:
>>[snip]
>>>
>>>But what some Python programmers call coroutines aren't really the
>>>same as
>>>what the programming community at large would call a coroutine.
>>>
>>>Jean-Paul
>>
>>Really? I'm curious as to the differences. (I just skimmed
>>the entry for coroutines in Wikipedia and PEP 342, but I'm not
>>fully enlightened.)
>
> The important difference is that coroutines can switch across
> multiple stack frames. Python's "enhanced generators" can
> still only switch across one stack frame - ie, from inside the
> generator to the frame immediately outside the generator.

Good point. Being unable to "yield" from inside a function
called by the "main" coroutine can be limiting once you try to
do something non-trivial. I had read about that limitation,
but had forgotten it.

Some "stackless" threading schemes for C (e.g. Protothreads
http://www.sics.se/~adam/pt/) have the same issue, and it does
require some extra effort to work around that limitation.

--
Grant Edwards, grante at visi.com
Yow! I have accepted Provolone into my life!

Jason Tackaberry

Sep 23, 2009, 5:40:50 PM
to exa...@twistedmatrix.com, python-list
On Wed, 2009-09-23 at 20:50 +0000, exa...@twistedmatrix.com wrote:
> immediately outside the generator. This means that you cannot use
> "enhanced generators" to implement an API like this one:
>
> def doSomeNetworkStuff():
>     s = corolib.socket()
>     s.connect(('google.com', 80))
>     s.sendall('GET / HTTP/1.1\r\nHost: www.google.com\r\n\r\n')
>     response = s.recv(8192)
>
> where connect, sendall, and recv don't actually block the entire calling
> thread, they only switch away to another coroutine until the underlying
> operation completes. With "real" coroutines, you can do this.

I might be missing some subtlety of your point, but I've implemented
this functionality using generators in a library called Kaa[1]. In kaa,
your example looks like:

import kaa

@kaa.coroutine()
def do_some_network_stuff():
    s = kaa.Socket()
    yield s.connect('google.com:80')
    yield s.write('GET / HTTP/1.1\nHost: www.google.com\n\n')
    response = yield s.read()

do_some_network_stuff()
kaa.main.run()

Of course, it does require a "coroutine scheduler" be implemented,
which, in kaa, is taken care of by the main loop.

Cheers,
Jason.

[1] The curious can visit http://doc.freevo.org/api/kaa/base/ and
http://freevo.org/kaa/ although a 1.0 hasn't yet been released and the
docs are still rather sketchy.

exa...@twistedmatrix.com

Sep 23, 2009, 5:53:47 PM
to Jason Tackaberry, python-list

I specifically left out all "yield" statements in my version, since
that's exactly the point here. :) With "real" coroutines, they're not
necessary - coroutine calls look just like any other call. With
Python's enhanced generators, they are.

Jean-Paul

Jason Tackaberry

Sep 23, 2009, 6:00:35 PM
to exa...@twistedmatrix.com, python-list
On Wed, 2009-09-23 at 21:53 +0000, exa...@twistedmatrix.com wrote:
> I specifically left out all "yield" statements in my version, since
> that's exactly the point here. :) With "real" coroutines, they're not
> necessary - coroutine calls look just like any other call. With
> Python's enhanced generators, they are.

Yes, I take your point.

Explicitly yielding may not be such a terrible thing, though. It's more
clear when reading such code where it may return back to the scheduler.
This may be important for performance, unless of course the coroutine
implementation supports preemption.

Cheers,
Jason.

exa...@twistedmatrix.com

Sep 23, 2009, 6:07:41 PM
to Jason Tackaberry, python-list

Sure, no value judgement intended, except on the practice of taking
words with well established meanings and re-using them for something
else ;)

Jean-Paul

Jason Tackaberry

Sep 23, 2009, 6:18:26 PM
to exa...@twistedmatrix.com, python-list
On Wed, 2009-09-23 at 22:07 +0000, exa...@twistedmatrix.com wrote:
> Sure, no value judgement intended, except on the practice of taking
> words with well established meanings and re-using them for something
> else ;)

I think it's the behaviour that's important, and not the specific syntax
needed to implement that behaviour.

In other words, I disagree (if this is what you're suggesting) that
sticking "yield" in front of certain expressions makes it any less a
coroutine.

Now, requiring explicit yields does mean that the coroutine has
specific, well-defined points of reentry. But I don't believe it's a
necessary condition that coroutines allow arbitrary (in the
non-deterministic sense) reentry points, only multiple.

Cheers,
Jason.

Rhodri James

Sep 23, 2009, 6:32:14 PM
to Jason Tackaberry, exa...@twistedmatrix.com, python-list
On Wed, 23 Sep 2009 23:18:26 +0100, Jason Tackaberry <ta...@urandom.ca>
wrote:

> On Wed, 2009-09-23 at 22:07 +0000, exa...@twistedmatrix.com wrote:
>> Sure, no value judgement intended, except on the practice of taking
>> words with well established meanings and re-using them for something
>> else ;)
>
> I think it's the behaviour that's important, and not the specific syntax
> needed to implement that behaviour.
>
> In other words, I disagree (if this is what you're suggesting) that
> sticking "yield" in front of certain expressions makes it any less a
> coroutine.

Indeed, it was my (badly faded with age) recollection that explicit
yielding was what made them coroutines rather than general cooperative
multitasking.

--
Rhodri James *-* Wildebeest Herder to the Masses

exa...@twistedmatrix.com

Sep 23, 2009, 6:47:51 PM
to Jason Tackaberry, python-list
On 10:18 pm, ta...@urandom.ca wrote:
>On Wed, 2009-09-23 at 22:07 +0000, exa...@twistedmatrix.com wrote:
>>Sure, no value judgement intended, except on the practice of taking
>>words with well established meanings and re-using them for something
>>else ;)
>
>I think it's the behaviour that's important, and not the specific
>syntax
>needed to implement that behaviour.
>
>In other words, I disagree (if this is what you're suggesting) that
>sticking "yield" in front of certain expressions makes it any less a
>coroutine.

Alright, I won't pretend to have any particular insight into what the
fundamental "coroutineness" of a coroutine is.

To me, the difference I outlined in this thread is important because it
is a difference that is visible in the API (almost as if it were some
unusual, extra part of the function's signature) to application code. If
you have a "send" function that is what I have been calling a "real"
coroutine, that's basically invisible. Put another way, if you started
with a normal blocking "send" function, then applications would be using
it without "yield". If you used "real" coroutines to make it
multitasking-friendly, then the same applications that were already
using it would continue to work (at least, they might). However, if you
have something like Python's enhanced generators, then they all break
very obviously, since "send" no longer returns the number of bytes
written, but now returns a generator object, something totally
different.
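
A two-line illustration of that breakage, with a made-up send():

    def send(data):
        # was: return sock.send(data) -- now rewritten as an enhanced generator
        yield 'waiting until the socket is writable'

    result = send('hello')
    print result    # a generator object, not the byte count callers expected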

Now, I would say that there's not a huge amount of value in being
able to make a function into a coroutine behind the application's back.
All kinds of problems can result from this. Others will certainly
disagree with me and say that it's worth more than the cost of the
trouble it might cause. But either way, there's clearly *some*
difference between the "real" coroutine way and the enhanced generators
way.

If you think that's not an important difference, I don't mind. I just
hope I've made it clear why I initially said that enhanced generators
aren't what a lot of people would call coroutines. :)


>
>Now, requiring explicit yields does mean that the coroutine has
>specific, well-defined points of reentry. But I don't believe it's a
>necessary condition that coroutines allow arbitrary (in the
>non-deterministic sense) reentry points, only multiple.

I don't think "non-deterministic" is the right word to use here; at
least, it's not what I was trying to convey as possible in coroutines.
More like "invisible".

That aside, I do think that most people familiar with coroutines from
outside of Python would disagree with this, but I haven't done a formal
survey or anything, so perhaps I'm mistaken.

Jean-Paul

Antoine Pitrou

Sep 24, 2009, 9:42:36 AM
to pytho...@python.org
Grant Edwards <invalid <at> invalid.invalid> writes:
>
> Back when I worked on one of the first hand-held cellular
> mobile phones, it used co-routines where the number of
> coroutines was fixed at 2 (one for each register set in a Z80
> CPU).

Gotta love the lightning-fast EXX instruction. :-)

Regards

Antoine.


Michele Simionato

Sep 24, 2009, 11:13:44 AM
to

The newest one seems to be diesel, by Simon Willison & Co: http://dieselweb.org/lib

Hendrik van Rooyen

Sep 25, 2009, 3:23:01 AM
to pytho...@python.org
On Thursday, 24 September 2009 15:42:36 Antoine Pitrou wrote:
> Grant Edwards <invalid <at> invalid.invalid> writes:
> > Back when I worked on one of the first hand-held cellular
> > mobile phones, it used co-routines where the number of
> > coroutines was fixed at 2 (one for each register set in a Z80
> > CPU).
>
> Gotta love the lightning-fast EXX instruction. :-)

Using it in the above context is about equivalent to slipping a hand grenade
in amongst the other eggs in a nest.

:-)

- Hendrik

Grant Edwards

Sep 25, 2009, 10:22:51 AM
to

EXX accomplished much of the context switch operation. I don't
remember how much RAM was available, but it wasn't much...

--
Grant Edwards, grante at visi.com
Yow! And furthermore, my bowling average is unimpeachable!!!

Piet van Oostrum

Sep 25, 2009, 10:55:04 AM
to
>>>>> exa...@twistedmatrix.com (e) wrote:

>e> I specifically left out all "yield" statements in my version, since that's
>e> exactly the point here. :) With "real" coroutines, they're not necessary -
>e> coroutine calls look just like any other call. With Python's enhanced
>e> generators, they are.

The first time I encountered coroutines was in Simula-67. Coroutine
switching was certainly explicit there. IIRC, the keyword was resume.
--
Piet van Oostrum <pi...@cs.uu.nl>
URL: http://pietvanoostrum.com [PGP 8DAE142BE17999C4]
Private email: pi...@vanoostrum.org

Michele Simionato

Sep 25, 2009, 11:23:09 AM
to
On Sep 23, 11:53 pm, exar...@twistedmatrix.com wrote:
> I specifically left out all "yield" statements in my version, since
> that's exactly the point here. :)  With "real" coroutines, they're not
> necessary - coroutine calls look just like any other call.

Personally, I like the yield. I understand that being forced to rewrite
code to insert the yield is ugly, but the yield makes clear that the
control flow is special. In Scheme there is no syntactic distinction
between ordinary function calls and continuation calls, but they are
quite different since continuations do not return (!). I always thought
this was a wart of the language, not an advantage.

Grant Edwards

Sep 25, 2009, 11:42:48 AM
to
On 2009-09-25, Piet van Oostrum <pi...@cs.uu.nl> wrote:
>>>>>> exa...@twistedmatrix.com (e) wrote:
>
>>e> I specifically left out all "yield" statements in my version, since that's
>>e> exactly the point here. :) With "real" coroutines, they're not necessary -
>>e> coroutine calls look just like any other call. With Python's enhanced
>>e> generators, they are.
>
> The first time I encountered coroutines was in Simula-67. Coroutine
> switching was certainly explicit there. IIRC, the keyword was resume.

I'm not sure exactly what "coroutine calls" refers to, but the
"mis-feature" in Python co-routines that's being discussed is
the fact that you can only yield/resume from the main coroutine
function.

You can't call a function that yields control back to the other
coroutine(s). By jumping through some hoops you can get the
same effect, but it's not very intuitive and it sort of "feels
wrong" that the main routine has to know ahead of time when
calling a function whether that function might need to yield or
not.

--
Grant Edwards, grante at visi.com
Yow! Where does it go when you flush?

Simon Forman

Sep 25, 2009, 12:21:42 PM
to pytho...@python.org

You mean a "trampoline" function? I.e. you have to call into your
coroutines in a special main function that expects as part of the
yielded value(s) the next coroutine to pass control to, and your
coroutines all need to yield the next coroutine?

~Simon

Grant Edwards

Sep 25, 2009, 12:55:09 PM
to
On 2009-09-25, Simon Forman <sajm...@gmail.com> wrote:
> On Fri, Sep 25, 2009 at 11:42 AM, Grant Edwards <inv...@invalid.invalid> wrote:

>> You can't call a function that yields control back to the other
>> coroutine(s).  By jumping through some hoops you can get the
>> same effect, but it's not very intuitive and it sort of "feels
>> wrong" that the main routine has to know ahead of time when
>> calling a function whether that function might need to yield or
>> not.
>
> You mean a "trampoline" function? I.e. you have to call into your
> coroutines in a special main function that expects as part of the
> yielded value(s) the next coroutine to pass control to, and your
> coroutines all need to yield the next coroutine?

Exactly. Compared to "real" coroutines where a yield statement
can occur anywhere, the trampoline business seems pretty
convoluted.
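
For reference, a bare-bones version of that trampoline (a hand-rolled
sketch, not from any particular library) looks something like this:

    def trampoline(task):
        # Drive a stack of nested generators: yielding another generator
        # means "call it", yielding a plain value means "return it" to the
        # generator one level down.
        stack = [task]
        value = None
        while stack:
            try:
                result = stack[-1].send(value)
            except StopIteration:
                stack.pop()
                value = None
                continue
            if hasattr(result, 'send'):   # a sub-generator: push and run it
                stack.append(result)
                value = None
            else:                         # a plain value: pop and pass it down
                stack.pop()
                value = result

    def read_line():
        yield 'fake line of input'        # pretend this waited on a socket

    def handler():
        line = yield read_line()          # must yield to "call" read_line()
        print 'handled:', line

    trampoline(handler())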

--
Grant Edwards, grante at visi.com
Yow! Do you think the "Monkees" should get gas on odd or even days?

Jason Tackaberry

Sep 25, 2009, 2:07:32 PM
to python-list
On Fri, 2009-09-25 at 15:42 +0000, Grant Edwards wrote:
> You can't call a function that yields control back to the other
> coroutine(s). By jumping through some hoops you can get the
> same effect, but it's not very intuitive and it sort of "feels
> wrong" that the main routine has to know ahead of time when
> calling a function whether that function might need to yield or
> not.

Not directly, but you can simulate this, or at least some pseudo form of
it which is useful in practice. I suppose you could call this "jumping
through some hoops," but from the point of view of the coroutine, it can
be done completely transparently, managed by the coroutine scheduler.

In kaa, which I mentioned earlier, this might look like:

import kaa

@kaa.coroutine()
def task(name):
    for i in range(10):
        print name, i
        yield kaa.NotFinished  # kind of like a time slice

@kaa.coroutine()
def fetch_google():
    s = kaa.Socket()
    try:
        yield s.connect('google.com:80')
    except:
        print 'Connection failed'
        return
    yield s.write('GET / HTTP/1.1\nHost: google.com\n\n')
    yield (yield s.read())

@kaa.coroutine()
def orchestrate():
    task('apple')
    task('banana')
    page = yield fetch_google()
    print 'Fetched %d bytes' % len(page)

orchestrate()
kaa.main.run()


The two task() coroutines spawned by orchestrate() continue to "run in
the background" while any of the yields in fetch_google() are pending
(waiting on some network resource).

It's true that the yields in fetch_google() aren't yielding control
_directly_ to one of the task() coroutines, but it _is_ doing so
indirectly, via the coroutine scheduler, which runs inside the main
loop.

Cheers,
Jason.

Grant Edwards

Sep 25, 2009, 2:36:27 PM
to
On 2009-09-25, Jason Tackaberry <ta...@urandom.ca> wrote:
> On Fri, 2009-09-25 at 15:42 +0000, Grant Edwards wrote:
>> You can't call a function that yields control back to the other
>> coroutine(s). By jumping through some hoops you can get the
>> same effect, but it's not very intuitive and it sort of "feels
>> wrong" that the main routine has to know ahead of time when
>> calling a function whether that function might need to yield or
>> not.
>
> Not directly, but you can simulate this, or at least some pseudo form of
> it which is useful in practice. I suppose you could call this "jumping
> through some hoops,"

It's nice that I could, because I did. :)

> but from the point of view of the coroutine, it can be done
> completely transparently, managed by the coroutine scheduler.
>
> In kaa, which I mentioned earlier, this might look like:
>
> import kaa
>
> @kaa.coroutine()
> def task(name):
> for i in range(10):
> print name, i
> yield kaa.NotFinished # kind of like a time slice
>
> @kaa.coroutine()
> def fetch_google():
> s = kaa.Socket()
> try:
> yield s.connect('google.com:80')

That's not completely transparent. The routine fetch_google()
has to know a priori that s.connect() might want to yield and
so has to invoke it with a yield statement. Completely
transparent would be this:

    try:
        s.connect('google.com:80')
    except:
        print 'Connection failed'
        return

> yield s.write('GET / HTTP/1.1\nHost: google.com\n\n')
> yield (yield s.read())

Again, you have to know ahead of time which functions might
yield and which ones don't and call them differently. That's
the "hoop". If somewhere in the implementation of a function
you discover a need to yield, you have to modify all the "calls"
all the way up to the top frame.

> It's true that the yields in fetch_google() aren't yielding control
> _directly_ to one of the task() coroutines, but it _is_ doing so
> indirectly, via the coroutine scheduler, which runs inside the main
> loop.

True. But I wouldn't call that transparent.

--
Grant Edwards, grante at visi.com
Yow! Can I have an IMPULSE ITEM instead?

Simon Forman

Sep 25, 2009, 3:25:32 PM
to Jason Tackaberry, python-list
On Fri, Sep 25, 2009 at 2:07 PM, Jason Tackaberry <ta...@urandom.ca> wrote:
> On Fri, 2009-09-25 at 15:42 +0000, Grant Edwards wrote:
>> You can't call a function that yields control back to the other
>> coroutine(s).  By jumping through some hoops you can get the
>> same effect, but it's not very intuitive and it sort of "feels
>> wrong" that the main routine has to know ahead of time when
>> calling a function whether that function might need to yield or
>> not.
>
> Not directly, but you can simulate this, or at least some pseudo form of
> it which is useful in practice.  I suppose you could call this "jumping
> through some hoops," but from the point of view of the coroutine, it can
> be done completely transparently, managed by the coroutine scheduler.
>
> In kaa, which I mentioned earlier, this might look like:
>
>        import kaa
>
>        @kaa.coroutine()
>        def task(name):
>           for i in range(10):
>              print name, i
>              yield kaa.NotFinished  # kind of like a time slice
>
>        @kaa.coroutine()
>        def fetch_google():
>           s = kaa.Socket()
>           try:
>              yield s.connect('google.com:80')
>           except:
>              print 'Connection failed'
>              return
>           yield s.write('GET / HTTP/1.1\nHost: google.com\n\n')
>           yield (yield s.read())
>
>        @kaa.coroutine()
>        def orchestrate():
>            task('apple')
>            task('banana')
>            page = yield fetch_google()
>            print 'Fetched %d bytes' % len(page)
>
>        orchestrate()
>        kaa.main.run()
>
>
> The two task() coroutines spawned by orchestrate() continue to "run in
> the background" while any of the yields in fetch_google() are pending
> (waiting on some network resource).
>
> It's true that the yields in fetch_google() aren't yielding control
> _directly_ to one of the task() coroutines, but it _is_ doing so
> indirectly, via the coroutine scheduler, which runs inside the main
> loop.
>
> Cheers,
> Jason.
>

So Kaa is essentially implementing the trampoline function.

If I understand it correctly MyHDL does something similar (to
implement models of hardware components running concurrently.)
http://www.myhdl.org/

Jason Tackaberry

Sep 25, 2009, 3:26:31 PM
to pytho...@python.org
On Fri, 2009-09-25 at 18:36 +0000, Grant Edwards wrote:
> That's not completely transparent. The routine fetch_google()
> has to know a priori that s.connect() might want to yield and
> so has to invoke it with a yield statement.

With my implementation, tasks that execute asynchronously (which may be
either threads or coroutines) return a special object called an
InProgress object. You always yield such calls.

So you're right, it does require knowing a priori what invocations may
return InProgress objects. But this isn't any extra effort. It's
difficult to write any non-trivial program without knowing a priori what
callables will return, isn't it?


> Completely transparent would be this:

[...]

>     try:
>         s.connect('google.com:80')
>     except:

Jean-Paul made the same argument. In my view, the requirement to yield
s.connect() is a feature, not a bug. Here, IMO explicit truly is better
than implicit. I prefer to know at what specific points my routines may
branch off.

And I maintain that requiring yield doesn't make it any less a
coroutine.

Maybe we can call this an aesthetic difference of opinion?


> Again, you have to know ahead of time which functions might
> yield and which ones don't and call them differently. That's
> the "hoop".

To the extent it should be considered jumping through hoops to find what
any callable returns, all right.


> > It's true that the yields in fetch_google() aren't yielding control
> > _directly_ to one of the task() coroutines, but it _is_ doing so
> > indirectly, via the coroutine scheduler, which runs inside the main
> > loop.
>
> True. But I wouldn't call that transparent.

What I meant by transparent is the fact that yield inside fetch_google()
can "yield to" (indirectly) any active coroutine. It doesn't (and
can't) know which. I was responding to your specific claim that:

> [...] the "mis-feature" in Python co-routines that's being discussed
> is the fact that you can only yield/resume from the main coroutine
> function.

With my implementation this is only half true. It's true that for other
active coroutines to be reentered, "main" coroutines (there can be more
than one in kaa) will need to yield, but once control is passed back to
the coroutine scheduler (which is hooked into the main loop facility), any
active coroutine may be reentered.

Cheers,
Jason.

Jason Tackaberry

Sep 25, 2009, 3:36:47 PM
to Simon Forman, python-list
On Fri, 2009-09-25 at 15:25 -0400, Simon Forman wrote:
> So Kaa is essentially implementing the trampoline function.

Essentially, yeah. It doesn't require (or support, depending on your
perspective) a coroutine to explicitly yield the next coroutine to be
reentered, but otherwise I'd say it's the same basic construct.

Cheers,
Jason.

Grant Edwards

Sep 25, 2009, 5:07:51 PM
to
On 2009-09-25, Jason Tackaberry <ta...@urandom.ca> wrote:

> Jean-Paul made the same argument. In my view, the requirement to yield
> s.connect() is a feature, not a bug. Here, IMO explicit truly is better
> than implicit. I prefer to know at what specific points my routines may
> branch off.
>
> And I maintain that requiring yield doesn't make it any less a
> coroutine.
>
> Maybe we can call this an aesthetic difference of opinion?

Certainly.

You've a very valid point that "transparent" can also mean
"invisible", and stuff happening "invisibly" can be a source of
bugs. All the invisible stuff going on in Perl and C++ has
always caused headaches for me.

--
Grant

Message has been deleted
Message has been deleted

Dave Angel

Sep 26, 2009, 7:49:27 AM
to Dennis Lee Bieber, pytho...@python.org
Dennis Lee Bieber wrote:
> On Fri, 25 Sep 2009 14:22:51 +0000 (UTC), Grant Edwards
> <inv...@invalid.invalid> declaimed the following in
> gmane.comp.python.general:

>
>
>> EXX accomplished much of the context switch operation. I don't
>> remember how much RAM was available, but it wasn't much...
>>
>>
> Zilog Z80... as with the rest of the "improved" 8080 family -- 64kB
> address space...
>
I knew of one Z80 implementation which gave nearly 128k to the user.
Code was literally in a separate 64k page from data, and there were
special ways to access it, when you needed to do code-modification on
the fly. The 64k bank select was normally chosen on each bus cycle by
status bits from the CPU indicating whether it was part of an
instruction fetch or a data fetch.

Actually even 64k looked pretty good, compared to the 1.5k of RAM and 2k
of PROM for one of my projects, a navigation system for shipboard use.

DaveA

Grant Edwards

Sep 26, 2009, 10:54:08 AM
to
On 2009-09-26, Dennis Lee Bieber <wlf...@ix.netcom.com> wrote:
> On Fri, 25 Sep 2009 14:22:51 +0000 (UTC), Grant Edwards
><inv...@invalid.invalid> declaimed the following in
> gmane.comp.python.general:
>
>>
>> EXX accomplished much of the context switch operation. I don't
>> remember how much RAM was available, but it wasn't much...
>
> Zilog Z80... as with the rest of the "improved" 8080 family --
> 64kB address space...

Right. I meant I didn't recall how much RAM was available in
that particular product. Using the shadow register set to
store context is limiting when compared to just pushing
everything onto the stack and then switching to another stack,
but that does require more RAM.

--
Grant

Grant Edwards

Sep 26, 2009, 10:55:30 AM
to
On 2009-09-26, Dave Angel <da...@ieee.org> wrote:

> Actually even 64k looked pretty good, compared to the 1.5k of
> RAM and 2k of PROM for one of my projects, a navigation system
> for shipboard use.

I've worked on projects as recently as the past year that had
only a couple hundred bytes of RAM, and most of it was reserved
for a message buffer.

--
Grant

Piet van Oostrum

Sep 27, 2009, 6:18:21 AM
to
>>>>> Grant Edwards <inv...@invalid.invalid> (GE) wrote:

>GE> On 2009-09-25, Piet van Oostrum <pi...@cs.uu.nl> wrote:
>>>>>>>> exa...@twistedmatrix.com (e) wrote:
>>>
>e> I specifically left out all "yield" statements in my version, since that's
>e> exactly the point here. :) With "real" coroutines, they're not necessary -
>e> coroutine calls look just like any other call. With Python's enhanced
>e> generators, they are.
>>>
>>> The first time I encountered coroutines was in Simula-67. Coroutine
>>> switching was certainly explicit there. IIRC, the keyword was resume.

>GE> I'm not sure exactly what "coroutine calls" refers to, but the
>GE> "mis-feature" in Python co-routines that's being discussed is
>GE> the fact that you can only yield/resume from the main coroutine
>GE> function.

Yes, I know, but the discussion had drifted to making the yield
invisible, if I understood correctly.

>GE> You can't call a function that yields control back to the other
>GE> coroutine(s). By jumping through some hoops you can get the
>GE> same effect, but it's not very intuitive and it sort of "feels
>GE> wrong" that the main routine has to know ahead of time when
>GE> calling a function whether that function might need to yield or
>GE> not.

I know. I think this is an implementation restriction in Python, to make
stack management easier. Although if you would lift this restriction
some new syntax would have to be invented to distinguish a generator
from a normal function.
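
A tiny example of the restriction: a helper's yield suspends only the
helper itself, so today the outer generator has to re-yield its values
explicitly (the sort of thing the proposed "yield from" of PEP 380 is
meant to address):

    def helper():
        # this yield suspends helper, not whoever called it
        yield 'a value from helper'

    def outer():
        # calling helper() just builds a generator object; to pass its
        # values on to outer's own caller, outer must re-yield them
        for value in helper():
            yield value

    print list(outer())   # ['a value from helper']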

greg

Sep 28, 2009, 1:35:03 AM
to
Dave Angel wrote:

> Actually even 64k looked pretty good, compared to the 1.5k of RAM and 2k
> of PROM for one of my projects, a navigation system for shipboard use.

Until you wanted to do hi-res colour graphics, at which
point the video memory took up an inconveniently large
part of the address space.

E.g. on the original BBC, you could either have a
decently large program, *or* decently hi-res graphics,
but not both at the same time. :-(

--
Greg

Hendrik van Rooyen

Sep 28, 2009, 4:41:40 AM
to Grant Edwards, pytho...@python.org

There is little reason to do that nowadays - one can buy a single-cycle 8032
running at 30 MHz with 16/32/64k of programming flash and 1k of RAM, as well
as some bytes of eeprom for around US$10.00 - in one-off quantities.

- Hendrik

Grant Edwards

Sep 28, 2009, 10:44:48 AM
to

$10 is pretty expensive for a lot of applications. I bet that
processor also uses a lot of power and takes up a lot of board
space. If you've only got $2-$3 in the money budget, 200uA at
1.8V in the power budget, and 6mm X 6mm of board-space, your
choices are limited.

Besides, if you can get by with 256 or 512 bytes of RAM, why pay
4X the price for a 1K part?

Besides which, the 8032 instruction set and development tools
are icky compared to something like an MSP430 or an AVR. ;)

[The 8032 is still head and shoulders above the 8-bit PIC
family.]

--
Grant

Arlo Belshee

Sep 28, 2009, 4:09:38 PM
to
On Sep 25, 2:07 pm, Grant Edwards <inva...@invalid.invalid> wrote:

> On 2009-09-25, Jason Tackaberry <t...@urandom.ca> wrote:
>
> > And I maintain that requiring yield doesn't make it any less a
> > coroutine.
>
> > Maybe we can call this an aesthetic difference of opinion?
>
> Certainly.
>
> You've a very valid point that "transparent" can also mean
> "invisible", and stuff happening "invisibly" can be a source of
> bugs.  All the invisible stuff going on in Perl and C++ has
> always caused headaches for me.

There are some key advantages to this transparency, especially in the
case of libraries built on libraries. For example, all the networking
libraries that ship in the Python standard lib are based on the
sockets library. They assume the blocking implementation, but then add
HTTPS, cookie handling, SMTP, and all sorts of higher-level network
protocols.

I want to use non-blocking network I/O for (concurrency) performance.
I don't want to re-write an SMTP lib - my language ships with one.
However, it is not possible for someone to write a non-blocking socket
that is a drop-in replacement for the blocking one in the std lib.
Thus, it is not possible for me to use _any_ of the well-written
libraries that are already part of Python's standard library. They
don't have yields sprinkled throughout, so they can't work with a non-
blocking, co-routine implemented socket. And they certainly aren't
written against the non-blocking I/O APIs.

Thus, the efforts by lots of people to write entire network libraries
that, basically, re-implement the Python standard library, but change
the implementation of 7 methods (bind, listen, accept, connect, send,
recv, close). They end up having to duplicate tens of thousands of
LoC, just to change 7 methods.

That's where transparency would be nice - to enable that separation of
concerns.
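
For what it's worth, the monkey-patching approach gevent (mentioned
earlier in the thread) takes is roughly this; the host name is only a
placeholder:

    import gevent
    from gevent import monkey
    monkey.patch_socket()   # swap the blocking socket primitives for
                            # cooperative ones

    import smtplib          # plain stdlib SMTP code, unchanged

    def send_mail(host):
        # Reads like ordinary blocking code, but the socket calls inside
        # smtplib now yield to the gevent hub instead of blocking the thread.
        server = smtplib.SMTP(host)
        server.quit()

    jobs = [gevent.spawn(send_mail, 'mail.example.com') for _ in range(3)]
    gevent.joinall(jobs)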

Hendrik van Rooyen

Sep 29, 2009, 10:16:52 AM
to pytho...@python.org
On Monday, 28 September 2009 16:44:48 Grant Edwards wrote:

> $10 is pretty expensive for a lot of applications. I bet that
> processor also uses a lot of power and takes up a lot of board
> space. If you've only got $2-$3 in the money budget, 200uA at
> 1.8V in the power budget, and 6mm X 6mm of board-space, your
> choices are limited.
>
> Besides, if you can get by with 256 or 512 bytes of RAM, why pay
> 4X the price for a 1K part?
>
> Besides which, the 8032 instruction set and development tools
> are icky compared to something like an MSP430 or an AVR. ;)
>
> [The 8032 is still head and shoulders above the 8-bit PIC
> family.]

I am biased.
I like the 8031 family.
I have written pre-emptive multitasking systems for it,
as well as state-machine round robin systems.
In assembler.
Who needs tools if you have a half-decent macro assembler?
And if the macro assembler is not up to much, then you write your own
preprocessor using Python.

The 803x bit handling is, in my arrogant opinion, still the best of any
processor - "jump if bit set, then clear" as an atomic instruction rocks.

:-)

Where do you get such nice projects to work on?

- Hendrik


Grant Edwards

Sep 29, 2009, 10:16:45 PM
to
On 2009-09-29, Hendrik van Rooyen <hen...@microcorp.co.za> wrote:
> On Monday, 28 September 2009 16:44:48 Grant Edwards wrote:
>
>> $10 is pretty expensive for a lot of applications. I bet that
>> processor also uses a lot of power and takes up a lot of board
>> space. If you've only got $2-$3 in the money budget, 200uA at
>> 1.8V in the power budget, and 6mm X 6mm of board-space, your
>> choices are limited.
>>
>> Besides, if you can get by with 256 or 512 bytes of RAM, why pay
>> 4X the price for a 1K part?
>>
>> Besides which, the 8032 instruction set and development tools
>> are icky compared to something like an MSP430 or an AVR. ;)
>>
>> [The 8032 is still head and shoulders above the 8-bit PIC
>> family.]
>
> I am biased.
> I like the 8031 family.
> I have written pre-emptive multitasking systems for it,
> as well as state-machine round robin systems.
> In assembler.
> Who needs tools if you have a half decent macro assembler?

Assembler macros are indeed a lost art. Back in the day, I
remember seeing some pretty impressive macro libraries layered
2-3 deep. I've done assembler macros as recently as about 2-3
years ago because it was the easiest way to auto-magically
generate lookup tables for use by C programs (macro assemblers
always have a "repeat" directive, and cpp doesn't).

> The 803x bit handling is, in my arrogant opinion, still the
> best of any processor. - jump if bit set then clear as an
> atomic instruction rocks.

The bit-addressing mode was (and still is) cool. However, the
stack implementation hurts pretty badly now that memory is
cheap.

I shouldn't criticize the 8051. I remember switching from the
8048 to the 8051 (8751 actually, at about $300 each) and
thinking it was wonderful. [Anybody who remembers fighting
with the 8048 page boundaries knows what I mean.]

>:-)
>
> Where do you get such nice projects to work on?

Just lucky. :)

--
Grant


Hendrik van Rooyen

Sep 30, 2009, 3:25:28 AM
to pytho...@python.org
On Wednesday, 30 September 2009 04:16:45 Grant Edwards wrote:

> Assembler macros are indeed a lost art. Back in the day, I
> remember seeing some pretty impressive macro libraries layered
> 2-3 deep. I've done assembler macros as recently as about 2-3
> years ago because it was the easiest way to auto-magically
> generate lookup tables for use by C programs (macro assemblers
> always have a "repeat" directive, and cpp doesn't).
>
> > The 803x bit handling is, in my arrogant opinion, still the
> > best of any processor. - jump if bit set then clear as an
> > atomic instruction rocks.
>
> The bit-addressing mode was (and still is) cool. However, the
> stack implementation hurts pretty badly now that memory is
> cheap.
>
> I shouldn't criticize the 8051. I remember switching from the
> 8048 to the 8051 (8751 actually, at about $300 each) and
> thinking it was wonderful. [Anybody who remembers fighting
> with the 8048 page boundaries knows what I mean.]

You were lucky - I started with an 8039 and the 8048 was a step up!

You are right about the stack - there are a lot of implementations now with
two or more data pointers, which make a big difference. If only someone
would build one with a two byte stack pointer that points into movx space,
the thing would fly faster again. It would make a stunning difference to the
multitasking performance if you do not have to store the whole stack. Of
course, if you are mucking around in assembler, then the 128 bytes at the top
of the internal memory is often enough.

This is getting a bit far away from python and coroutines, though. :-)

- Hendrik

Paul Rubin

Sep 30, 2009, 3:46:38 AM
to
Hendrik van Rooyen <hen...@microcorp.co.za> writes:
> You were lucky - I started with an 8039 and the 8048 was a step up!
> ....

> This is getting a bit far away from python and coroutines, though. :-)

Getting away from python in the opposite direction, if you click

http://cufp.galois.com/2008/schedule.html

the second presentation "Controlling Hybrid Vehicles with Haskell"
might interest you. Basically it's about a high level DSL that
generates realtime control code written in C. From the slides:

* 5K lines of Haskell/atom replaced 120K lines of matlab, simulink,
and visual basic.
* 2 months to port simulink design to atom.
* Rules with execution periods from 1ms to 10s all scheduled at
compile time to a 1 ms main loop.
* Atom design clears electronic/sw testing on first pass.
* Currently in vehicle testing with no major issues.

Code is here: http://hackage.haskell.org/package/atom

Blurb: "Atom is a Haskell DSL for designing hard realtime embedded
programs. Based on conditional term rewriting, atom will compile a
collection of atomic state transition rules to a C program with
constant memory use and deterministic execution time."

Hendrik van Rooyen

Sep 30, 2009, 12:00:43 PM
to pytho...@python.org
On Wednesday, 30 September 2009 09:46:38 Paul Rubin wrote:

> Getting away from python in the opposite direction, if you click
>
> http://cufp.galois.com/2008/schedule.html
>
> the second presentation "Controlling Hybrid Vehicles with Haskell"
> might interest you. Basically it's about a high level DSL that
> generates realtime control code written in C. From the slides:
>
> * 5K lines of Haskell/atom replaced 120K lines of matlab, simulink,
> and visual basic.
> * 2 months to port simulink design to atom.
> * Rules with execution periods from 1ms to 10s all scheduled at
> compile time to a 1 ms main loop.
> * Atom design clears electronic/sw testing on first pass.
> * Currently in vehicle testing with no major issues.
>
> Code is here: http://hackage.haskell.org/package/atom
>
> Blurb: "Atom is a Haskell DSL for designing hard realtime embedded
> programs. Based on conditional term rewriting, atom will compile a
> collection of atomic state transition rules to a C program with
> constant memory use and deterministic execution time."

Awesome! Thank you

- Hendrik


Rhodri James

Sep 30, 2009, 6:27:02 PM
to pytho...@python.org
On Mon, 28 Sep 2009 15:44:48 +0100, Grant Edwards
<inv...@invalid.invalid> wrote:

> $10 is pretty expensive for a lot of applications. I bet that
> processor also uses a lot of power and takes up a lot of board
> space. If you've only got $2-$3 in the money budget, 200uA at
> 1.8V in the power budget, and 6mm X 6mm of board-space, your
> choices are limited.
>
> Besides, if you can get by with 256 or 512 bytes of RAM, why pay
> 4X the price for a 1K part?
>
> Besides which, the 8032 instruction set and development tools
> are icky compared to something like an MSP430 or an AVR. ;)
>
> [The 8032 is still head and shoulders above the 8-bit PIC
> family.]

I was going to say, you want 256 bytes of RAM, you profligate
so-and-so? Here, have 32 bytes of data space and stop your
whining :-)

--
Rhodri James *-* Wildebeest Herder to the Masses

Grant Edwards

Sep 30, 2009, 7:19:09 PM
to

What? You had 1's? All we had were 0's. And we _liked_ it.

--
Grant


Dave Angel

Sep 30, 2009, 10:43:40 PM
to Grant Edwards, pytho...@python.org
The 1's were left there by the UV light. If you wanted a zero, you had
to pulse it there on purpose.

DaveA

Hendrik van Rooyen

Oct 1, 2009, 3:40:47 AM
to pytho...@python.org
On Thursday, 1 October 2009 00:27:02 Rhodri James wrote:

> I was going to say, you want 256 bytes of RAM, you profligate
> so-and-so? Here, have 32 bytes of data space and stop your
> whining :-)

My multitasking is coming on nicely, but I am struggling a bit with the
garbage collection. The Trash bin gets a bit tight at times, so the extra
bytes help a lot when I run more than 500 independent tasks.

:-)

- Hendrik


Denis

Oct 9, 2009, 4:35:40 AM
to
On Sep 23, 10:58 pm, Brian Hammond
<or.else.it.gets.the.h...@gmail.com> wrote:
> On Aug 25, 12:51 am, Denis <denis.bile...@gmail.com> wrote:
>
> > You can also look at gevent
> >
> > http://pypi.python.org/pypi/gevent
>
> Please, please document this!  There are a lot of people who would
> love to use this but give up when they don't find a guide or something
> similar.

I've actually started doing that recently; check out http://gevent.org
Feedback is appreciated.
