stackless python

John DeWeese, Dec 19, 2001, 3:30:58 AM
Hello, I'm a python newbie interested in using stackless python to support
hundreds of simulation objects for a game. I've checked out various articles
and it sounds great, and I've visited stackless.com. Seems that branch is
becoming a bit dated. I also see that some features such as generators (not
relevant to me) have made it into python 2.2. So, what's up, can anyone
share their experiences regarding stackless python, micro-threads, etc.?

Thanks!


Bill Tate, Dec 19, 2001, 1:38:52 PM
"John DeWeese" <dew...@usc.edu> wrote in message news:<9vpjds$due$1...@usc.edu>...

Real life - The largest implementation I've seen was done by Gordon
McMillan while I was working for a particular company that shall
remain anonymous - CBI applies, unfortunately. Gordon's implementation
was very stable, fast, and scalable. I think you might find some useful
information on Gordon's site - some of it is dated - but it does a
pretty good job of driving home certain key concepts. I don't think
you can go wrong here, given that continuations are so damn lightweight
and they work beautifully with non-blocking sockets. If you're going to
be doing network-based game development, it sounds like a good fit to me.

Michael Hudson, Dec 19, 2001, 4:56:54 PM
"John DeWeese" <dew...@usc.edu> writes:

> Hello, I'm a python newbie interested in using stackless python to
> support hundreds of simulation objects for a game.

Cool.

> I've checked out various articles and it sounds great, and I've
> visited stackless.com. Seems that branch is becoming a bit dated.

Well what's there works well enough, I believe. But yes, you're
right.

> I also see that some features such as generators (not relevant to
> me) have made it into python 2.2. So, what's up, can anyone share
> their experiences regarding stackless python, micro-threads, etc.?

What do you want to know? As you see development on stackless seems
largely to be stalled, but I believe there are plenty of people still
using it.

OTOH, the chances of seeing stackless python or something like it in
the core are currently in the vanishing-to-nil range. We went round
with this one on the newsgroup a few weeks back -- google is your
friend here -- and the conclusion was there was no one willing and able
to do the enormous amount of work required. CT has done the first 90%
of the work, but there's still the other 90% to go, and even if the
work was done, it's far from certain that Guido would accept the
changes anyway.

Cheers,
M.

Courageous, Dec 19, 2001, 5:51:06 PM

>OTOH, the chances of seeing stackless python or something like it in
>the core are currently in the vanishing-to-nil range. We went round
>with this one on the newsgroup a few weeks back -- google is your
>friend here -- and the conclusion was there was no one willing and able
>to do the enormous amount of work required.

A big rub is that it might very well be impossible to get Stackless
to work properly in Jython, and even if it weren't, it would be a
tough sell over there as well.

>CT has done the first 90%
>of the work, but there's still the other 90% to go, and even if the
>work was done, it's far from certain that Guido would accept the
>changes anyway.

It makes the kernel tougher to understand. Programmers with an
understanding of stack manipulation tricks, implementation of cooperative
multithreading frameworks, and Forth will have no problems. But this
group is a rare breed.

If you really want to have a bad day, try implementing something
like continuations in C++. Then, when something goes wrong, try to
debug it. Har, har, har. *cringe*

C//

Donn Cave, Dec 20, 2001, 12:02:03 AM
Quoth "John DeWeese" <dew...@usc.edu>:

Stackless can really do miracles for an application design that has to
deal with asynchronous event handling of some kind. I don't think you
ever get miracles like that for free, though. To work with Stackless, you
really need to keep your head screwed on - in my opinion it isn't for
casual use in every conceivable application. Raw continuations, anyway -
I don't know microthreads; that sword is probably not so double-edged.
Anyway, it's like a whole new language, and one with not much competition -
I don't know, can Scheme do that stuff?

Donn Cave, do...@drizzle.com

John DeWeese, Dec 20, 2001, 12:22:13 AM
Bill Tate wrote:
> What do you want to know? As you see development on stackless seems
> largely to be stalled, but I believe there are plenty of people still
> using it.
>
> OTOH, the chances of seeing stackless python or something like it in
> the core are currently in the vanishing-to-nil range. We went round
> with this one on the newsgroup a few weeks back -- google is your
> friend here -- and the conclusion was there was no one willing and able
> to do the enormous amount of work required. CT has done the first 90%

> of the work, but there's still the other 90% to go, and even if the
> work was done, it's far from certain that Guido would accept the
> changes anyway.

Hmm, it's disappointing that it's falling away without any word from
Christian Tismer since May (at least, on his page). I've read that Google
thread. Do you think there's enough support to carry concurrent versions
forward to the latest Python releases, at least? It would suck if Stackless
is at 2.x and some new modules start requiring Python 2.(x+1).

So my goal is to see how I can take the Stackless idea of micro-threading
and use it to drive all my simulation objects, to get the non-spaghetti
benefits of threading with sufficiently low overhead to support perhaps 100
object scripts. I've found Gordon McMillan's socket example (thanks Bill),
which appears to be very similar in theory... very cool.
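[Editorial sketch: the micro-threaded simulation idea described above can be approximated even with plain generators, each object's script yielding once per simulation tick. All names here are invented for illustration; this is not Stackless code.]

```python
def patrol(entity, waypoints):
    # One simulation object's "script". Each yield ends one tick
    # for this object; the world loop resumes it on the next tick.
    while True:
        for wp in waypoints:
            entity["pos"] = wp
            yield

guards = [{"pos": None} for _ in range(3)]
scripts = [patrol(g, [(0, 0), (1, 0), (1, 1)]) for g in guards]

for tick in range(2):     # advance the world two ticks
    for script in scripts:
        next(script)      # resume each object's script once

print([g["pos"] for g in guards])  # [(1, 0), (1, 0), (1, 0)]
```

Each object keeps its own control state (which waypoint it is heading for) in its suspended frame, which is exactly the non-spaghetti property being asked for.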

Now I just have to learn Python!

Do you guys know of more example code for micro-threads? Hungry for info!

- John


Mike C. Fletcher, Dec 20, 2001, 2:37:22 AM
Unfortunately, none of my micro-threading code is my
own (all done as part of a research project for a
former employer), but I might be able to help out if
you're needing pointers or advice.

I used it to develop a prototype of a distributed VR +
e-commerce + software distribution system.
Micro-threads made it considerably simpler than any
other methodology I've seen.

Good luck,
Mike

--- John DeWeese <dew...@usc.edu> wrote:
...


> Do you guys know of more example code for
> micro-threads? Hungry for info!

...



Michael Hudson, Dec 20, 2001, 5:45:24 AM
"John DeWeese" <dew...@usc.edu> writes:

> Bill Tate wrote:
> > What do you want to know? As you see development on stackless seems
> > largely to be stalled, but I believe there are plenty of people still
> > using it.

That wasn't Bill, that was me.

Cheers,
M.

Steven Majewski, Dec 20, 2001, 3:27:50 PM

The new generators in Python2.2 implement semi-coroutines, not
full coroutines: the limitation is that they always return to
their caller -- they can't take an arbitrary continuation as
a return target.

So you can't do everything you can do in Stackless, or at least,
you can't do it the same way. I'm not sure what the limitations
are yet, but you might be surprised with what you CAN do.
If all the generator objects yield back to the same 'driver'
procedure, then you can do a sort of cooperative multithreading.
In that case, you're executing the generators for their side
effects -- the value returned by yield may be unimportant except
perhaps as a status code.
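[Editorial sketch of the driver-loop idea, in modern spelling (`next(task)`; Python 2.2 wrote `task.next()`). The names are invented for illustration.]

```python
log = []

def ticker(name, count):
    # A "micro-thread": do a little work, then yield control back
    # to the driver. The yielded value is just a status code.
    for i in range(count):
        log.append((name, i))
        yield "running"

def driver(tasks):
    # Round-robin cooperative scheduler: resume each generator in
    # turn, dropping it once it is exhausted.
    while tasks:
        for task in list(tasks):
            try:
                next(task)
            except StopIteration:
                tasks.remove(task)

driver([ticker("a", 3), ticker("b", 2)])
print(log)  # [('a', 0), ('b', 0), ('a', 1), ('b', 1), ('a', 2)]
```

The generators are executed purely for their side effects; the driver interleaves them without any OS threads.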

[ See also Tim Peters' post on the performance advantages of
generators -- you only parse args once for many generator
calls, and you keep and reuse the same stack frame. So I
believe you get some of the same benefits of Stackless
microthreads. ]

Maybe I can find the time to post an example of what I mean.
In the meantime, some previous posts on generators might give
you some ideas.

<http://groups.google.com/groups?q=generators+group:comp.lang.python+author:Majewski&hl=en&scoring=d&as_drrb=b&as_mind=12&as_minm=1&as_miny=2001&as_maxd=20&as_maxm=12&as_maxy=2001&rnum=1&selm=mailman.1008090197.6318.python-list%40python.org>

<http://groups.google.com/groups?hl=en&threadm=mailman.996870501.28562.python-list%40python.org&rnum=6&prev=/groups%3Fas_q%3Dgenerators%26as_ugroup%3Dcomp.lang.python%26as_uauthors%3DMajewski%26as_drrb%3Db%26as_mind%3D12%26as_minm%3D1%26as_miny%3D2001%26as_maxd%3D20%26as_maxm%3D12%26as_maxy%3D2001%26num%3D100%26as_scoring%3Dd%26hl%3Den>

The ungrammatical title of that last thread above:
"Nested generators is the Python equivalent of unix pipe cmds."
might also suggest the solution I'm thinking of.


-- Steve Majewski

John DeWeese, Dec 20, 2001, 4:06:18 PM
> > Bill Tate wrote:
> > > What do you want to know? As you see development on stackless seems
> > > largely to be stalled, but I believe there are plenty of people still
> > > using it.
>
> That wasn't Bill, that was me.

Oops! I wrote a response to you but used Bill's message to reply. Thanks for
your info.

- John


John DeWeese, Dec 20, 2001, 4:16:41 PM
> The new generators in Python2.2 implement semi-coroutines, not
> full coroutines: the limitation is that they always return to
> their caller -- they can't take an arbitrary continuation as
> a return target.
>
> So you can't do everything you can do in Stackless, or at least,
> you can't do it the same way. I'm not sure what the limitations
> are yet, but you might be surprised with what you CAN do.
> If all the generator objects yield back to the same 'driver'
> procedure, then you can do a sort of cooperative multithreading.
> In that case, you're executing the generators for their side
> effects -- the value returned by yield may be unimportant except
> perhaps as a status code.
>
> [ See also Tim Peters' post on the performance advantages of
> generators -- you only parse args once for many generator
> calls, and you keep and reuse the same stack frame. So I
> believe you get some of the same benefits of Stackless
> microthreads. ]
>
> Maybe I can find the time to post an example of what I mean.
> In the mean time, some previous posts on generators might give
> you some ideas.

Thanks, I will do some research on generators. If they are sufficient, I
would be able to stick to the latest python.

- John


Christian Tismer, Dec 31, 2001, 9:58:27 AM
Donn Cave wrote:


I'm pretty sure Scheme can. It has raw continuations, so you can
implement everything with them, why not microthreads?
Continuations can do even more: they are able to create new
control structures for your language. And exactly that is the point
which makes them unsuitable for Python: they are too powerful.
Python has its own control structures, and we don't need a construct
with the power to build new ones. It is oversized, and an oversized
construct turns out to be a drawback in the future.
What Python needs is a secure mechanism to switch frames
at certain times. That is not continuations, but microthreads with
explicit or implicit switching.

I will implement this for Python 2.2, probably with some help
of volunteers.

ciao - chris

--
Christian Tismer :^) <mailto:tis...@tismer.com>
Mission Impossible 5oftware : Have a break! Take a ride on Python's
Kaunstr. 26 : *Starship* http://starship.python.net/
14163 Berlin : PGP key -> http://wwwkeys.pgp.net/
PGP Fingerprint E182 71C7 1A9D 66E9 9D15 D3CC D4D7 93E2 1FAE F6DF
where do you want to jump today? http://www.stackless.com/

Paul Rubin, Dec 31, 2001, 10:28:40 AM
Christian Tismer <tis...@tismer.com> writes:
> I'm pretty sure Scheme can. It has raw continuations, so you can
> implement everything with them, why not microthreads?
> Continuations can do even more: they are able to create new
> control structures for your language. And exactly that is the point
> which makes them unsuitable for Python: they are too powerful.
> Python has its own control structures, and we don't need a construct
> with the power to build new ones. It is oversized, and an oversized
> construct turns out to be a drawback in the future.
> What Python needs is a secure mechanism to switch frames
> at certain times. That is not continuations, but microthreads with
> explicit or implicit switching.

The thing I don't fully understand is that several Scheme
implementations are both smaller and faster than Python. I've been
wondering for a while whether it's time to graft a Python parser onto
a Scheme compiler/evaluator. The limitations of "simple generators"
also seem kind of artificial--maybe Stackless for 2.2 can support
calling a generator from multiple places.

François Pinard, Dec 31, 2001, 11:42:11 AM
[Paul Rubin]

> The thing I don't fully understand is that several Scheme implementations
> are both smaller and faster than Python.

Python is bigger than Scheme, most probably because of all its bundled
libraries. Scheme is more bare by comparison. As for speed, Scheme is
simpler as well, simple enough to allow compilation.

> I've been wondering for a while whether it's time to graft a Python
> parser onto a Scheme compiler/evaluator.

There is a cost to the generality given by all the __SPECIAL__ functions
in Python, and to following the evaluation chain induced by class nesting.
I guess that if you reimplemented Python in Scheme, with all its features,
you would not gain speed over the implementation we currently use.

--
François Pinard http://www.iro.umontreal.ca/~pinard

Christian Tismer, Dec 31, 2001, 11:32:39 AM
Paul Rubin wrote:

> Christian Tismer <tis...@tismer.com> writes:
<snip>


> The thing I don't fully understand is that several Scheme
> implementations are both smaller and faster than Python. I've been
> wondering for a while whether it's time to graft a Python parser onto
> a Scheme compiler/evaluator.


I don't understand this fully, either.
For sure, Scheme has got much more manpower than Python, since
so many university projects have been supporting Scheme.
Many Scheme constructs are compiled into machine code, which
Python doesn't try yet.

And most of the computation time is not spent in Python's evaluator
(which can of course be dramatically sped up, as Py2C has shown),
but in the implementation of all the many objects. The Python C
library is highly optimized, but it is a library, and therefore
there are many calls for every object which cannot easily be
optimized away.

BTW Sam Rushing did some work towards a Scheme-based Python
a while ago.

> The limitations of "simple generators"
> also seem kind of artificial--maybe Stackless for 2.2 can support
> calling a generator from multiple places.


Sure, Stackless generators are not limited. They are first class
objects which can be used everywhere. Just think of ICON's
co-expressions.

Will Ware, Dec 31, 2001, 12:12:43 PM
Donn Cave wrote:
> ... microthreads ... can Scheme do that stuff?

Christian Tismer wrote:
> I'm pretty sure Scheme can. It has raw continuations, so you can
> implement everything with them, why not microthreads?

MzScheme has an excellent microthreading capability. It works great
under Linux, I haven't tried it under Windows.
General info: http://www.cs.rice.edu/CS/PLT/packages/mzscheme/
Threads: http://www.cs.rice.edu/CS/PLT/packages/doc/mzscheme/node91.htm

Tim Peters, Jan 1, 2002, 1:03:05 AM
[Paul Rubin]

> The limitations of "simple generators" also seem kind of artificial--
> maybe Stackless for 2.2 can support calling a generator from multiple
> places.

[Christian Tismer]


> Sure, Stackless generators are not limited. They are first class
> objects which can be used everywhere.

So are 2.2's Simple Generators. Paul is confused if he thinks a Python
generator can't be called from multiple places (although that's not typical
use). Since they're just objects that implement the 2.2 iterator protocol,
we *can't* stop them from getting called from multiple places <wink>.

> Just think of ICON's co-expressions.

Python's generators are between Icon's generators and Icon's co-expressions:
like an Icon coexp, they can be resumed at will independent of control
context (although it's more usual to let "for" loops drive them by magic),
but like an Icon generator they always return control to their resumer. A
subtle advantage of the latter is that semantics in case of unhandled
exception are very clear: an active instance of a Python generator "came
from" an obvious place, and that's the obvious place to pass on an unhandled
exception.

BTW, Neil Schemenauer had no trouble rewriting all "the usual" coroutine
examples to use Simple Generators instead. It remains debatable whether
coexps are actually more powerful; in general, a Python-level dispatch loop
can keep around any number of Simple Generators suspended in midstream
(including chains of generators, recursive or otherwise), and coexp-like
transfer can then be faked by yielding back to the dispatch loop with an
indication of which generator you want to see resumed next. Neil didn't
need all that machinery to handle the examples he tried, though. I expect
that if someone bothered to flesh this all out, the rub would be that
resuming a generator chain N deep (ditto yielding back from one) requires
time proportional to N.
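[Editorial sketch of the dispatch-loop trick described above: each generator yields the key of the generator it wants resumed, and a trampoline does the actual transfer. The names are invented for illustration.]

```python
trace = []

def ping(n):
    for i in range(n):
        trace.append("ping %d" % i)
        yield "pong"  # ask the dispatcher to resume "pong" next

def pong(n):
    for i in range(n):
        trace.append("pong %d" % i)
        yield "ping"

def dispatch(table, start):
    # Trampoline: a coexp-like transfer is faked by yielding, back
    # to this loop, the key of the generator to be resumed next.
    current = start
    while True:
        try:
            current = next(table[current])
        except StopIteration:
            break

dispatch({"ping": ping(2), "pong": pong(2)}, "ping")
print(trace)  # ['ping 0', 'pong 0', 'ping 1', 'pong 1']
```

Control always bounces through the dispatch loop, which is exactly the N-deep resumption cost Tim mentions for chained generators.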

if-you-don't-avoid-the-c-stack-you-have-to-rebuild-it-each-time-ly
y'rs - tim


Donn Cave, Jan 1, 2002, 1:41:32 PM
Quoth Christian Tismer <tis...@tismer.com>:
...

| Continuations can do even more: they are able to create new
| control structures for your language. And exactly that is the point
| which makes them unsuitable for Python: they are too powerful.
| Python has its own control structures, and we don't need a construct
| with the power to build new ones. It is oversized, and an oversized
| construct turns out to be a drawback in the future.
| What Python needs is a secure mechanism to switch frames
| at certain times. That is not continuations, but microthreads with
| explicit or implicit switching.
|
| I will implement this for Python 2.2, probably with some help
| of volunteers.

Could Gordon McMillan's "asyncore turned right-side out" select
dispatcher have been written with the microthread facility you
have in mind?

I am sure there's some way to present that capability in terms of
a pre-defined control structure, instead of raw continuations, but
it's a pity if it has to be given up because it's too powerful.

Donn Cave, do...@drizzle.com

Michael Hudson, Jan 1, 2002, 2:22:12 PM
Paul Rubin <phr-n...@nightsong.com> writes:

> The thing I don't fully understand is that several Scheme
> implementations are both smaller and faster than Python.

Less dynamism, I think. I'm not sure what the standard says about
things like

(define (func x y) (+ x y))
(set! + -)
(display (func 2 3))

but I'd bet at least some implementations would print "5". Certainly
in CL the compiler can know for certain if the + will refer to cl:+ or
not, and rebinding cl:+ is not allowed.
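[Editorial note: Python itself sits on the dynamic end here. A global name in a function body is looked up at call time, so rebinding it changes already-defined callers. A rough Python analogue of the Scheme snippet above, with invented function names since the `+` operator itself cannot be rebound in Python:]

```python
def plus(a, b):
    return a + b

def func(x, y):
    # "plus" is a global name, resolved each time func is called,
    # not once when func is defined.
    return plus(x, y)

print(func(2, 3))        # 5
plus = lambda a, b: a - b
print(func(2, 3))        # -1: func sees the rebinding
```

This late binding is part of what makes CPython hard to compile to fast code without extra analysis.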

> I've been wondering for a while whether it's time to graft a Python
> parser onto a Scheme compiler/evaluator.

Would be an interesting project, but I'd guess that either you
wouldn't get a significant speed up, or the language you'd end up
implementing would have subtle differences from Python as we know it.

Cheers,
M.

Christian Tismer, Jan 1, 2002, 5:13:40 PM
Donn Cave wrote:

> Quoth Christian Tismer <tis...@tismer.com>:
> ...
> | Continuations can do even more: they are able to create new
> | control structures for your language. And exactly that is the point
> | which makes them unsuitable for Python: they are too powerful.
> | Python has its own control structures, and we don't need a construct
> | with the power to build new ones. It is oversized, and an oversized
> | construct turns out to be a drawback in the future.
> | What Python needs is a secure mechanism to switch frames
> | at certain times. That is not continuations, but microthreads with
> | explicit or implicit switching.
> |
> | I will implement this for Python 2.2, probably with some help
> | of volunteers.
>
> Could Gordon McMillan's "asyncore turned right-side out" select
> dispatcher have been written with the microthread facility you
> have in mind?


Yes, I'm pretty sure.

> I am sure there's some way to present that capability in terms of
> a pre-defined control structure, instead of raw continuations, but
> it's a pity if it has to be given up because it's too powerful.

No, I no longer think so.
It has to be shrunk down to the capabilities needed.
Having continuations where one-shot continuations (aka frames with
state) are sufficient is not healthy.
I've been thinking about this for a year now, and finally Guido
convinced me.

Courageous, Jan 1, 2002, 7:05:44 PM

>No, I no longer think so.
>It has to be shrunk down to the capabilities needed.
>Having continuations where one-shot continuations (aka frames with
>state) are sufficient is not healthy.
>I've been thinking about this for a year now, and finally Guido
>convinced me.

It would be unfortunate if all access to continuations goes away.
Will your library at least allow the ability to manipulate them
through C extension functions? I would strongly suggest that you
offer this. It leaves continuation code to the experts and is
highly dissuasive to the casual user, which in practice would mean
very little of the labyrinthine code which use of continuations
results in.

C//

Donn Cave, Jan 1, 2002, 10:07:25 PM
Quoth Courageous <jkr...@san.rr.com>:
(quoting Christian Tismer)

Eww, that sounds like the worst of both worlds to me. Not only do
I lack your confidence in C coders, I find C modules quite a bit more
inscrutable.

I'm inclined to agree with him as far as the text quoted above goes,
though I'm sure he has a more comprehensive sense of what "unhealthy" means.
The trick is defining what's really useful, and shrinking it to that.
If the result will work for nearly every practical use of continuations,
then it's a good deal, and of course by that standard there wouldn't be
any apparent reason to support more from C.

Donn Cave, do...@drizzle.com

Christian Tismer, Jan 2, 2002, 4:37:21 AM
Courageous wrote:


If I'm heading towards integration into the mainstream, I cannot
introduce a backdoor to continuations. They do add a considerable
amount of complexity to the kernel code.
I might add a branch for this later on.
For now, I need to go with one-shot continuations.
That is: after the continuation is run, its state is changed
and it cannot be run again; or rather, it is no longer the same
continuation. As a compromise, I can provide an explicit
clone operation to save a continuation for re-use. Today, this
happens automagically, although it isn't needed in most cases.
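[Editorial aside: ordinary Python generators already behave like the one-shot frames described here. Once a frame has run to exhaustion it cannot be rewound, and a "clone" amounts to building a fresh one. A small sketch with invented names; this is not the Stackless API:]

```python
def counter(n):
    # An ordinary generator: its frame is consumed as it runs.
    i = 0
    while i < n:
        yield i
        i += 1

g = counter(3)
print(list(g))   # [0, 1, 2]
print(list(g))   # []  -- the frame is spent; it cannot be rerun

# An explicit "clone" here just means rebuilding from the arguments:
g2 = counter(3)
print(list(g2))  # [0, 1, 2] again
```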

Happy new year - chris

maxm, Jan 2, 2002, 5:15:14 AM

"Christian Tismer" <tis...@tismer.com> wrote in message
news:mailman.100996428...@python.org...

> If I'm heading towards integration into the mainstream, I cannot
> introduce a backdoor to continuations. They do add a considerable
> amount of complexity to the kernel code.

Anyhoo ... I look forward to trying out the microthreads. They sound like
something really worthwhile.

I do hope that something like it gets into the standard distribution.

regards Max M


Justin Sheehy, Jan 2, 2002, 1:52:47 PM
Michael Hudson <m...@python.net> writes:

>> The thing I don't fully understand is that several Scheme
>> implementations are both smaller and faster than Python.
>
> Less dynamism, I think. I'm not sure what the standard says about
> things like
>
> (define (func x y) (+ x y))
> (set! + -)
> (display (func 2 3))
>
> but I'd bet at least some implementations would print "5".

It's hard to show that _none_ of them do that, but on the ones I have handy:

----
Chez Scheme Version 6.1
Copyright (c) 1998 Cadence Research Systems

> (define (func x y) (+ x y))
> (set! + -)
> (display (func 2 3))

-1
----
Welcome to MzScheme version 103, Copyright (c) 1995-2000 PLT (Matthew Flatt)


> (define (func x y) (+ x y))
> (set! + -)
> (display (func 2 3))

-1
----
$ elk


> (define (func x y) (+ x y))

func
> (set! + -)
#[primitive +]
> (display (func 2 3))
-1
----
(mit scheme)
1 ]=> (define (func x y) (+ x y))

;Value: func

1 ]=> (set! + -)

;Value 1: #[arity-dispatched-procedure 1]

1 ]=> (display (func 2 3))
-1
;Unspecified return value
----
guile> (define (func x y) (+ x y))
guile> (set! + -)
guile> (display (func 2 3))
-1
----

At least a good portion of Scheme implementations support this level
of dynamism. At least a couple of those are pretty darn efficient
compared to CPython. So I'd venture that while there may be "less
dynamism" in some real sense, it isn't as simple a difference as you imply.

-Justin


p.s. - I suspect that scheme48 might not let you assign to "+", from
what I remember of it.

Bengt Richter, Jan 2, 2002, 5:41:59 PM
On Wed, 02 Jan 2002 13:52:47 -0500, Justin Sheehy <jus...@iago.org> wrote:

>Michael Hudson <m...@python.net> writes:
>
>>> The thing I don't fully understand is that several Scheme
>>> implementations are both smaller and faster than Python.
>>
>> Less dynamism, I think. I'm not sure what the standard says about
>> things like
>>
>> (define (func x y) (+ x y))
>> (set! + -)
>> (display (func 2 3))
>>
>> but I'd bet at least some implementations would print "5".
>

FWIW, here's an oldie (came on 5 1/4" floppy in "PC Scheme Trade Edition" book
from MIT press 1990, by Texas Instruments)

Note the difference in results at [3] and [4]:
_________________________________________

PC Scheme Student Edition 3.0
(C) Copyright 1987 by Texas Instruments
All Rights Reserved.

[PCS-DEBUG-MODE is OFF]
[1] (define (func x y) (+ x y))
FUNC
[2] (set! + -)
[WARNING: modifying an `integrable' variable: +]
#<PROCEDURE ->
[3] (display (func 2 3))
5
[4] (+ 2 3)
-1
[5]
_________________________________________

Michael Hudson, Jan 3, 2002, 5:24:23 AM
bo...@accessone.com (Bengt Richter) writes:

> On Wed, 02 Jan 2002 13:52:47 -0500, Justin Sheehy <jus...@iago.org> wrote:
>
> >Michael Hudson <m...@python.net> writes:
> >
> >>> The thing I don't fully understand is that several Scheme
> >>> implementations are both smaller and faster than Python.
> >>
> >> Less dynamism, I think. I'm not sure what the standard says about
> >> things like
> >>
> >> (define (func x y) (+ x y))
> >> (set! + -)
> >> (display (func 2 3))
> >>
> >> but I'd bet at least some implementations would print "5".
> >

> FWIW, here's an oldie (came on 5 1/4" floppy in "PC Scheme Trade
> Edition" book from MIT press 1990, by Texas Instruments)
>
> Note the difference in results at [3] and [4]:

That was what I was expecting, at least some of the time.
[...]

> >It's hard to show that _none_ of them do that, but on the ones I
> >have handy:

[lots of -1's]


> >At least a good portion of Scheme implementations support this level
> >of dynamism. At least a couple of those are pretty darn efficient
> >compared to CPython.

Were any of those compiling the code?

> >So I'd venture that while there may be "less
> >dynamism" in some real sense, it isn't as simple a difference as
> >you imply.

I wouldn't want anyone to think I claimed it was simple; it was just
an example.

It may be almost a social thing; the sort of code one writes in scheme
may be easier to compile efficiently than the sort of code one writes
in Python.

This is getting into meaningless wibble territory anyway, so I'm going
to stop.

Cheers,
M.

Justin Sheehy, Jan 3, 2002, 11:08:18 AM
Michael Hudson <m...@python.net> writes:

>> >At least a good portion of Scheme implementations support this level
>> >of dynamism. At least a couple of those are pretty darn efficient
>> >compared to CPython.
>
> Were any of those compiling the code?

Yes, Chez Scheme performs incremental compilation.

> It may be almost a social thing; the sort of code one writes in scheme
> may be easier to compile efficiently than the sort of code one writes
> in Python.

Could be.

My personal suspicion is that the main reasons for the fact that
Scheme implementations tend to be so much more optimized than Python
are:

1 - Hundreds of graduate students attacking the problem.

2 - Less concern with issues like maintainability and vast portability
of a single core implementation.

I really don't believe the "Python is harder to optimize" argument[1],
either for language definition or social reasons. Other similarly
dynamic languages have successfully produced well-optimized
implementations. I think that it just hasn't been a very high
priority among the relatively small group of people that actually work
on the CPython core.

-Justin

[1] - qualification: There are a few things that Python doesn't have
that do make it harder to optimize than it would be if they were
present, such as type declarations. My point is that other
languages that are also missing these features have not found
them to be insurmountable obstacles.


Courageous, Jan 3, 2002, 12:24:41 PM

>[1] - qualification: There are a few things that Python doesn't have
> that do make it harder to optimize than it would be if they were
> present, such as type declarations. My point is that other
> languages that are also missing these features have not found
> them to be insurmountable obstacles.

And in any case, that's actually only a small part of the performance
problem -- or at least of the performance problem within the order of
magnitude currently faced by Python.

C//

Christian Tismer, Feb 15, 2002, 3:27:15 AM
Paul Rubin wrote:


Python's approach to generators is the cheapest possible
way to get generators. And this implies that the generator
frame is bound to the current context.
Stackless's new dirty little brother goes a completely different
way and can of course generate general generators which
run without any limitation -- at some cost, of course.

Python generators will run at maximum speed, but "stack-based".

Stackless generators will exist as a special case of Stackless
tasklets, which can act as microthreads, coroutines or
generators, at a slightly higher computational cost, but
limited only by the programmer's imagination.

These features will be available by the end of February 2002
at the latest. A way to pickle program state will be implemented
by the end of March 2002.

Kind regards - chris
