
Is Unladen Swallow dead?


laspi

Nov 16, 2010, 5:30:58 PM
There has been little or no activity in this project in the last few
months, and the last comments on their mailing list seem to confirm
that its future is uncertain.
The lack of updates, news, or discussion is also very strange,
especially considering that the merging plan has been approved. Or
hasn't it?

swapnil

Nov 17, 2010, 1:24:51 AM

AFAIK, the merging plan was approved by Guido early this year. I guess
Google is expecting the community to drive the project from here on.
That was the whole idea for merging it to mainline. From my last
conversation with Collin, they are targeting Python 3.3.

John Nagle

Nov 17, 2010, 2:02:11 AM

I think it's dead. They're a year behind on quarterly releases.
The last release was Q3 2009. The project failed to achieve its
stated goal of a 5x speedup. Not even close. More like 1.5x
(http://www.python.org/dev/peps/pep-3146)

The Google blog at
"http://groups.google.com/group/unladen-swallow/browse_thread/thread/f2011129c4414d04"
says, as of November 8, 2010:

"Jeffrey and I have been pulled on to other projects of higher
importance to Google. Unfortunately, no-one from the Python
open-source community has been interested in picking up the merger
work, and since none of the original team is still full-time on the
project, it's moving very slowly. Finishing up the merger into the
py3k-jit branch is a high priority for me this quarter, but what
happens then is an open question."

So Google has pulled the plug on Unladen Swallow. It looks
like they underestimated the difficulty of speeding up the CPython
model. The performance improvement achieved was so low
that cluttering up CPython with a JIT system and LLVM is probably
a loss.

John Nagle


John Ladasky

Nov 17, 2010, 3:49:17 PM
On Nov 16, 2:30 pm, laspi <lorena.aspi...@gmail.com> wrote:
> Is Unladen Swallow dead?

No, it's just resting.

John Nagle

Nov 17, 2010, 5:09:59 PM

For those who don't get that, here's the Monty Python reference:
"http://www.mtholyoke.edu/~ebarnes/python/dead-parrot.htm"

Owner: Oh yes, the, uh, the Norwegian Blue... What's, uh... What's wrong
with it?

Mr. Praline: I'll tell you what's wrong with it, my lad. 'E's dead,
that's what's wrong with it!

Owner: No, no, 'e's uh,...he's resting.

Mr. Praline: Look, matey, I know a dead parrot when I see one, and I'm
looking at one right now.

Owner: No no he's not dead, he's, he's restin'! Remarkable bird, the
Norwegian Blue, idn'it, ay? Beautiful plumage!

Mr. Praline: The plumage don't enter into it. It's stone dead.

Owner: Nononono, no, no! 'E's resting!

Mr. Praline: All right then, if he's restin', I'll wake him up!
(shouting at the cage) 'Ello, Mister Polly Parrot! I've got a lovely
fresh cuttle fish for you if you show...

(owner hits the cage)

Owner: There, he moved!

Mr. Praline: No, he didn't, that was you hitting the cage!

Owner: I never!!

Mr. Praline: Yes, you did!

Owner: I never, never did anything...

Mr. Praline: (yelling and hitting the cage repeatedly) 'ELLO POLLY!!!!!
Testing! Testing! Testing! Testing! This is your nine o'clock alarm call!

(Takes parrot out of the cage and thumps its head on the counter. Throws
it up in the air and watches it plummet to the floor.)

Mr. Praline: Now that's what I call a dead parrot.

(There's more, but you get the idea.)

John Nagle

Alexander Kapps

Nov 17, 2010, 5:51:01 PM
to pytho...@python.org
On 17.11.2010 23:09, John Nagle wrote:
> On 11/17/2010 12:49 PM, John Ladasky wrote:
>> On Nov 16, 2:30 pm, laspi<lorena.aspi...@gmail.com> wrote:
>>> Is Unladen Swallow dead?
>>
>> No, it's just resting.
>
> For those who don't get that, here's the Monty Python reference:
> "http://www.mtholyoke.edu/~ebarnes/python/dead-parrot.htm"

Thank you, John, for making my already-light wallet even lighter; now I
have to go and buy the original English version. It seems the German
translation sucks (it misses a lot) and my copy lacks the original
audio track.

Damn.

:-)

Robert Kern

Nov 17, 2010, 6:39:43 PM
to pytho...@python.org
On 11/17/10 4:51 PM, Alexander Kapps wrote:
> On 17.11.2010 23:09, John Nagle wrote:
>> On 11/17/2010 12:49 PM, John Ladasky wrote:
>>> On Nov 16, 2:30 pm, laspi<lorena.aspi...@gmail.com> wrote:
>>>> Is Unladen Swallow dead?
>>>
>>> No, it's just resting.
>>
>> For those who don't get that, here's the Monty Python reference:
>> "http://www.mtholyoke.edu/~ebarnes/python/dead-parrot.htm"
>
> Thank you, John, for making my already-light wallet even lighter; now I have
> to go and buy the original English version. It seems the German translation
> sucks (it misses a lot) and my copy lacks the original audio track.

They're all (legitimately) on YouTube now.

http://www.youtube.com/watch?v=npjOSLCR2hE

--
Robert Kern

"I have come to believe that the whole world is an enigma, a harmless enigma
that is made terrible by our own mad attempt to interpret it as though it had
an underlying truth."
-- Umberto Eco

Martin Gregorie

Nov 17, 2010, 6:43:27 PM

While you're at it, pick up the video of "Monty Python and the Holy
Grail". The project name, Unladen Swallow, is a reference to the film.


--
martin@   | Martin Gregorie
gregorie. | Essex, UK
org       |

Philip Semanchuk

Nov 17, 2010, 5:30:21 PM
to Python list

On Nov 17, 2010, at 5:09 PM, John Nagle wrote:

> On 11/17/2010 12:49 PM, John Ladasky wrote:
>> On Nov 16, 2:30 pm, laspi<lorena.aspi...@gmail.com> wrote:
>>> Is Unladen Swallow dead?
>>
>> No, it's just resting.
>
> For those who don't get that, here's the Monty Python reference:
> "http://www.mtholyoke.edu/~ebarnes/python/dead-parrot.htm"

A link to the source material:
http://www.youtube.com/user/montypython?blend=1&ob=4#p/c/6FD5A97331C1B802/0/npjOSLCR2hE


--
Cheers
Philip

BartC

Nov 18, 2010, 7:24:22 AM
"John Nagle" <na...@animats.com> wrote in message
news:4ce37e01$0$1666$742e...@news.sonic.net...

> On 11/16/2010 10:24 PM, swapnil wrote:

>> AFAIK, the merging plan was approved by Guido early this year. I guess
>> Google is expecting the community to drive the project from here on.
>> That was the whole idea for merging it to mainline. From my last
>> conversation with Collin, they are targeting Python 3.3
>
> I think it's dead. They're a year behind on quarterly releases.
> The last release was Q3 2009. The project failed to achieve its
> stated goal of a 5x speedup. Not even close. More like 1.5x
> (http://www.python.org/dev/peps/pep-3146)

There must have been good reasons to predict a 5x increase. But why did
it take so long to find out that the approach wasn't going anywhere?

Assuming the 5x speedup was shown to be viable (i.e. performing the same
benchmarks, on the same data, can be done that quickly in another
language, allowing for the overheads associated with Python's dynamic
nature), then what went wrong?

(I've had a look at the benchmarks, with a view to trying some on other
languages, and they seem an extraordinarily difficult bunch to work with.)

> The Google blog at
> "http://groups.google.com/group/unladen-swallow/browse_thread/thread/f2011129c4414d04"
> says, as of November 8, 2010:

> So Google has pulled the plug on Unladen Swallow. It looks
> like they underestimated the difficulty of speeding up the CPython
> model. The performance improvement achieved was so low
> that cluttering up CPython with a JIT system and LLVM is probably
> a loss.

LLVM. Ok, that explains a lot. (LLVM is a huge, complex system).

--
Bartc

brf...@gmail.com

Nov 18, 2010, 12:50:58 PM
to Python List
I do hope that it isn't dead, but it unfortunately seems that way.

- Braden Faulkner

John Nagle

Nov 18, 2010, 1:31:15 PM
On 11/18/2010 4:24 AM, BartC wrote:
> "John Nagle" <na...@animats.com> wrote in message
> news:4ce37e01$0$1666$742e...@news.sonic.net...
>> On 11/16/2010 10:24 PM, swapnil wrote:
>
>>> AFAIK, the merging plan was approved by Guido early this year. I
>>> guess Google is expecting the community to drive the project
>>> from here on. That was the whole idea for merging it to mainline.
>>> From my last conversation with Collin, they are targeting Python
>>> 3.3
>>
>> I think it's dead. They're a year behind on quarterly releases.
>> The last release was Q3 2009. The project failed to achieve its
>> stated goal of a 5x speedup. Not even close. More like 1.5x
>> (http://www.python.org/dev/peps/pep-3146)
>
> There must have been good reasons to predict a 5x increase.

For Java, adding a JIT improved performance by much more than that.
Native-code compilers for LISP have done much better than 5x. The
best Java and LISP compilers approach the speed of C, while CPython
is generally considered to be roughly 60 times slower than C. So
5x probably looked like a conservative goal. For Google, a company
which buys servers by the acre, a 5x speedup would have a big payoff.

> Assuming the 5x speedup was shown to be viable (ie. performing the
> same benchmarks, on the same data, can be done that quickly in any
> other language, and allowing for the overheads associated with
> Python's dynamic nature), then what went wrong?

Python is defined by what a naive interpreter with late binding
and dynamic name lookups, like CPython, can easily implement. Simply
emulating the semantics of CPython with generated code doesn't help
all that much.

Because you can "monkey patch" Python objects from outside the
class, a local compiler, like a JIT, can't turn name lookups into hard
bindings. Nor can it make reliable decisions about the types of
objects. That adds a sizable performance penalty. Short of global
program analysis, the compiler can't tell when code for the hard cases
needs to be generated. So the hard-case code, where you figure out at
run-time, for every use of "+", whether "+" is addition or concatenation,
has to be generated every time. Making that decision is far slower
than doing an add.
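
To make that concrete, here is a toy example (the class name is
invented) of the kind of run-time rebinding that keeps a compiler from
hard-binding "+":

    class Money:
        def __init__(self, cents):
            self.cents = cents
        def __add__(self, other):
            # for now, "+" on Money means addition of amounts
            return Money(self.cents + other.cents)

    def total(a, b):
        return a + b   # no compiler can hard-bind this "+" to Money.__add__

    print(total(Money(2), Money(3)).cents)    # 5

    # Monkey-patch from outside the class: the meaning of "+" for every
    # Money object, at every call site, just changed at run-time.
    Money.__add__ = lambda self, other: Money(0)
    print(total(Money(2), Money(3)).cents)    # 0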

Shed Skin, which analyzes the entire program, including libraries,
on every compilation, can figure out the types of objects and generate
much faster code. Shed Skin has some heavy restrictions, many of which
could be lifted if more work went into that effort. That's one
approach that might work.

I've referred to this problem as "gratuitous hidden dynamism".
Most things which could be changed dynamically in a Python program
usually aren't.

This has been pointed out many times by many people. There's
even a PhD thesis on the topic. Without a few restrictions, so
that a compiler can at least tell when support for the hard cases
is needed, Python cannot be compiled well.

John Nagle

Ben James

Nov 18, 2010, 2:13:10 PM
On 18/11/2010 17:50, brf...@gmail.com wrote:
> I do hope that it isn't dead, but it unfortunately seems that way.
>
> - Braden Faulkner
>

It looks like we'll have to get used to a laden one.

Maybe if we had two, and a strand of creeper...

Jean-Paul Calderone

Nov 18, 2010, 5:59:49 PM

This isn't completely accurate. It *is* possible to write a JIT
compiler for a Python runtime which has fast-path code for the common
case, the case where the meaning of "+" doesn't change between every
opcode. PyPy has produced some pretty good results with this approach.
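
As a rough sketch of the idea (this is not PyPy's actual machinery),
the specialized code for "a + b" guards on the types it expects and
falls back to the generic protocol only when the guard fails:

    def add_specialized(a, b):
        # Guard: specialize for the common case seen when this code
        # was generated.
        if type(a) is int and type(b) is int:
            return int.__add__(a, b)   # fast path: plain integer add
        # Guard failed: deoptimize to the generic "+" dispatch
        # (the full __add__/__radd__ protocol).
        return a + b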

For those who haven't seen it yet, http://speed.pypy.org/ has some
graphs which reflect fairly well on PyPy's performance for benchmarks
that are not entirely dissimilar to real-world code.

Jean-Paul

Mark Wooding

Nov 18, 2010, 8:35:55 PM
John Nagle <na...@animats.com> writes:

> Python is defined by what a naive interpreter with late binding
> and dynamic name lookups, like CPython, can easily implement. Simply
> emulating the semantics of CPython with generated code doesn't help
> all that much.

Indeed.

> Because you can "monkey patch" Python objects from outside the
> class, a local compiler, like a JIT, can't turn name lookups into hard
> bindings. Nor can it make reliable decisions about the types of
> objects.

But it /can/ make guesses. A dynamic runtime doesn't have to predict
everything right in advance; it only has to predict most things sort of
well enough, and fix up the things it got wrong before anyone notices.
For example, a Python compiler could inline a function call if it makes
a note to recompile the calling function if the called function is
modified. Most functions aren't redefined, so this is probably a pretty
good guess.
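
A toy sketch of that bookkeeping (all names invented): record who
inlined whom, and throw away a caller's compiled code when its callee
is redefined.

    compiled = {}     # function name -> specialized code for it
    dependents = {}   # callee name -> callers whose code inlined it

    def note_inline(caller, callee):
        # the compiler calls this when it inlines callee into caller
        dependents.setdefault(callee, set()).add(caller)

    def redefine(callee):
        # redefining a function invalidates every caller that inlined
        # it; each one is recompiled (or interpreted) on its next call
        for caller in dependents.pop(callee, ()):
            compiled.pop(caller, None)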

> That adds a sizable performance penalty. Short of global program
> analysis, the compiler can't tell when code for the hard cases needs
> to be generated.

The right approach is to guess that things are going to be done the easy
way, and then detect when the guess is wrong.

> So the hard-case code, where you figure out at run-time, for every use
> of "+", whether "+" is addition or concatenation, has to be generated
> every time. Making that decision is far slower than doing an add.

There's an old trick here called `inline caching'. The first time a
function is called, compile it so as to assume that types of things are
as you found this time: inline simple methods, and so on. Insert some
quick type checks at the top: is this going to work next time? If not,
take a trap back into the compiler. The traditional approach is to
replace the mispredictions with full dispatches (`monomorphic inline
caching'); the clever approach tolerates a few different types,
dispatching to optimized code for each (`polymorphic inline caching'),
unless there are just too many decision points and you give up.
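
Here is a toy model of a polymorphic inline cache in plain Python (all
names invented; a real PIC lives in the generated machine code):

    class CallSite:
        # one per "obj.meth(...)" site; caches type -> method
        LIMIT = 4                    # past this, the site is megamorphic

        def __init__(self, name):
            self.name = name
            self.cache = {}          # type -> method found for that type

        def call(self, obj, *args):
            meth = self.cache.get(type(obj))
            if meth is None:         # miss: "trap back into the compiler"
                meth = getattr(type(obj), self.name)
                if len(self.cache) < self.LIMIT:
                    self.cache[type(obj)] = meth   # extend the PIC
            return meth(obj, *args)

    class Dog:
        def speak(self): return "woof"

    class Cat:
        def speak(self): return "meow"

    site = CallSite("speak")
    print([site.call(x) for x in (Dog(), Cat(), Dog())])
    # ['woof', 'meow', 'woof']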

There are time/space tradeoffs to be made here too. Fortunately, you
don't have to compile everything super-optimized from the get-go: you
can dynamically identify the inner loops which need special attention,
and get the compiler to really stare hard at them. The rest of the
program might plausibly be left interpreted much of the time for all
anyone will care.

> I've referred to this problem as "gratuitous hidden dynamism".
> Most things which could be changed dynamically in a Python program
> usually aren't.

This is one of the crucial observations for making a dynamic language go
fast; the other is that you still have the compiler around if you
guessed wrong.

An aggressively dynamic runtime has two enormous advantages over batch
compilers such as are traditionally used for C: it gets the entire
program in one go, and it gets to see the real live data that the
program's meant to run against. Given that, I'd expect it to be able to
/beat/ a batch compiler in terms of performance.

> This has been pointed out many times by many people. There's
> even a PhD thesis on the topic. Without a few restrictions, so
> that a compiler can at least tell when support for the hard cases
> is needed, Python cannot be compiled well.

This assumes static compilation. It's the wrong approach for a dynamic
language like Python.

-- [mdw]

Carl Banks

Nov 18, 2010, 9:23:24 PM
On Nov 18, 2:59 pm, Jean-Paul Calderone <calderone.jeanp...@gmail.com>
wrote:

Right. The key is to be able to dispatch on the type of object once
for a given chunk of code, which is possible if you do some kind of
flow path analysis on the function/chunk.

PyPy is starting to look much better of late. I kind of thought their
first approach was wrong (or at least too much), but they seem to have
pushed through it.


Carl Banks

John Nagle

Nov 18, 2010, 10:57:44 PM
On 11/18/2010 5:35 PM, Mark Wooding wrote:
> John Nagle<na...@animats.com> writes:
>
>> Python is defined by what a naive interpreter with late binding
>> and dynamic name lookups, like CPython, can easily implement. Simply
>> emulating the semantics of CPython with generated code doesn't help
>> all that much.
>
> Indeed.
>
>> Because you can "monkey patch" Python objects from outside the
>> class, a local compiler, like a JIT, can't turn name lookups into hard
>> bindings. Nor can it make reliable decisions about the types of
>> objects.
>
> But it /can/ make guesses. A dynamic runtime doesn't have to predict
> everything right in advance; it only has to predict most things sort of
> well enough, and fix up the things it got wrong before anyone notices.
> For example, a Python compiler could inline a function call if it makes
> a note to recompile the calling function if the called function is
> modified. Most functions aren't redefined, so this is probably a pretty
> good guess.
>
>> That adds a sizable performance penalty. Short of global program
>> analysis, the compiler can't tell when code for the hard cases needs
>> to be generated.
>
> The right approach is to guess that things are going to be done the easy
> way, and then detect when the guess is wrong.

That's been done successfully for Self and JavaScript. It's not
easy. See this talk on JaegerMonkey:

http://blog.cdleary.com/2010/09/picing-on-javascript-for-fun-and-profit/

The effort needed to do that for JavaScript is justified by the size
of the installed base.

The Unladen Swallow people had plans to go in that direction, but they
underestimated the size of the job.

John Nagle

Stefan Behnel

Nov 20, 2010, 3:02:47 PM
to pytho...@python.org
Mark Wooding, 19.11.2010 02:35:

> John Nagle writes:
>> This has been pointed out many times by many people. There's
>> even a PhD thesis on the topic. Without a few restrictions, so
>> that a compiler can at least tell when support for the hard cases
>> is needed, Python cannot be compiled well.
>
> This assumes static compilation. It's the wrong approach for a dynamic
> language like Python.

Cython does a pretty good job at that, though. It also optimistically
optimises a couple of things even during static compilation, e.g.
"x.append(y)" hints that "x" is likely a list, even if static analysis
can't prove it.
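
Sketched in plain Python rather than in Cython's generated C, that
optimisation amounts to a guarded fast path:

    def append_call(x, y):
        if type(x) is list:      # optimistic guess: a plain list
            list.append(x, y)    # fast path, no attribute lookup
        else:
            x.append(y)          # generic fallback for anything else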

Stefan
