Python's "only one way to do it" philosophy isn't good?


WaterWalk

Jun 9, 2007, 1:49:03 AM
to
I've just read an article "Building Robust Systems" by Gerald Jay
Sussman. The article is here:
http://swiss.csail.mit.edu/classes/symbolic/spring07/readings/robust-systems.pdf

In it there is a footnote which says:
"Indeed, one often hears arguments against building flexibility into an
engineered system. For example, in the philosophy of the computer
language Python it is claimed: 'There should be one--and preferably
only one--obvious way to do it.'[25] Science does not usually proceed
this way: In classical mechanics, for example, one can construct
equations of motion using Newtonian vectorial mechanics, or using a
Lagrangian or Hamiltonian variational formulation.[30] In the cases
where all three approaches are applicable they are equivalent, but
each has its advantages in particular contexts."

I'm not sure how reasonable this statement is and personally I like
Python's simplicity, power and elegance. So I put it here and hope to
see some inspiring comments.

Gabriel Genellina

Jun 9, 2007, 2:35:48 AM
to pytho...@python.org
On Sat, 09 Jun 2007 02:49:03 -0300, WaterWalk <toolm...@163.com>
wrote:

I think the key is the word you omitted in the subject: "obvious". There
should be one "obvious" way to do it. For what I can remember of my first
love (Physics): if you have a small ball moving inside a spherical cup, it
would be almost crazy to use cartesian orthogonal coordinates and Newton's
laws to solve it - the "obvious" way would be to use spherical coordinates
and the Lagrangian formulation (or at least I hope so - surely
knowledgeable people will find more "obviously" which is the right way).
All classical mechanics formulations may be equivalent, but in certain
cases one is much more suited than the others.


--
Gabriel Genellina

Terry Reedy

Jun 9, 2007, 3:40:42 AM
to pytho...@python.org

"WaterWalk" <toolm...@163.com> wrote in message
news:1181368143.8...@r19g2000prf.googlegroups.com...

| I've just read an article "Building Robust Systems" by Gerald Jay
| Sussman. The article is here:
|
http://swiss.csail.mit.edu/classes/symbolic/spring07/readings/robust-systems.pdf
|
| In it there is a footnote which says:
| "Indeed, one often hears arguments against building flexibility into an
| engineered system. For example, in the philosophy of the computer
| language Python it is claimed:

For him to imply that Python is anti-flexibility is wrong. Very wrong.
He should look in a mirror. See below.

| 'There should be one--and preferably only one--obvious way to do
| it.'[25] Science does not usually proceed this way: In classical
| mechanics, for example, one can construct equations of motion using
| Newtonian vectorial mechanics, or using a Lagrangian or Hamiltonian
| variational formulation.[30] In the cases where all three approaches
| are applicable they are equivalent, but each has its advantages in
| particular contexts."

And in those contexts, one would hope that the method with advantages is
somehow the obvious way to do it. Otherwise beginners might become like
Buridan's ass.

So I dispute that science is as different as he claims. And I do not see
any real value in the statement in that I do not see it saying anything
useful to the reader, at least not in this snippet.

| I'm not sure how reasonable this statement is and personally I like
| Python's simplicity, power and elegance. So I put it here and hope to
| see some inspiring comments.

How much has Mr. Sussman actually programmed in Python and what actual
problems did he find with the *implementation* of the philosophy? Without
something concrete, the complaint is rather bogus.

But here is what I find funny (and a bit maddening): G. J. Sussman is one
of the inventors of the Lisp dialect Scheme, a minimalist language that for
some things has only one way to do it, let alone one obvious way. Scheme
would be like physics with only one of the three ways. After all, if they
are equivalent, only one is needed.

For example, consider scanning the items in a collection. In Python, you
have a choice of recursion (normal or tail), while loops, and for
statements. For statements are usually the obvious way, but the other two
are available for personal taste and for special situations such as
walking a tree (where one might use recursion to write the generator that
can then be used by for loops). In Scheme, I believe you just have
recursion. Since iteration and recursion are equivalent, why have both?
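
A trivial sketch of the three, printing the items of a list:

items = [1, 2, 3]

for item in items:            # the obvious way
    print item

i = 0                         # while loop
while i < len(items):
    print items[i]
    i += 1

def scan(seq):                # recursion
    if seq:
        print seq[0]
        scan(seq[1:])

scan(items)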

Terry Jan Reedy

James Stroud

Jun 9, 2007, 6:16:01 AM
to
Terry Reedy wrote:
> In Python, you have a choice of recursion (normal or tail)

Please explain this. I remember reading on this newsgroup that an
advantage of ruby (wrt python) is that ruby has tail recursion, implying
that python does not. Does python have fully optimized tail recursion as
described in the tail recursion Wikipedia entry? Under what
circumstances can one count on the python interpreter recognizing the
possibility for optimized tail recursion?

James


=====

Disclaimer: Mention of more than one programming language in post does
not imply author's desire to begin language v. language holy battle. The
author does not program in [some or all of the other languages mentioned
aside from the language topical to the newsgroup] and has no opinions on
the merits or shortcomings of said language or languages.

=====

Cousin Stanley

Jun 9, 2007, 6:42:48 AM
to

> ....

> In scheme, I believe you just have recursion.
> ....

Cousin TJR ....

I'm a total scheme rookie starting only about 3 days ago
and one of the mechanisms I went looking for was a technique
for iteration ....

Found the following in the scheme docs about iteration,
supplied via the reduce package ....

"Iterate and reduce are extensions of named-let
for writing loops that walk down one or more sequences
such as the elements of a list or vector, the characters
read from a port, or arithmetic series .... "

The following scheme session illustrates a trivial example ....

> , open reduce
>
> ( define ( list_loop this_list )
    ( iterate loop
      ( ( list* this_item this_list ) )             ; iterate expression
      ( ( new_list '( ) ) )                         ; state expression
      ( loop ( cons ( * 2 this_item ) new_list ) )  ; body expression
      ( reverse new_list ) ) )                      ; final expression
; no values returned
>
> ( define L '( 1 2 3 4 5 ) )
; no values returned
>
> ( define result_i ( list_loop L ) )
; no values returned
>
> result_i
'(2 4 6 8 10)
>

However, just as in Python the map function
might be both easier to code and more readable
in many cases ....

> ( define ( x2 n ) ( * 2 n ) )
; no values returned
>
> ( define result_m ( map x2 L ) )
; no values returned
>
> result_m
'(2 4 6 8 10)

Note ....

No lambdas in my scheme code either .... ;-)
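
For comparison, an equivalent Python session might look like this ....

>>> L = [1, 2, 3, 4, 5]
>>> def x2(n):
...     return 2 * n
...
>>> map(x2, L)
[2, 4, 6, 8, 10]
>>> [2 * n for n in L]    # a list comprehension needs no helper at all
[2, 4, 6, 8, 10]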

--
Stanley C. Kitching
Human Being
Phoenix, Arizona



Bjoern Schliessmann

Jun 9, 2007, 6:53:54 AM
to
Gabriel Genellina wrote:

> For what I can
> remember of my first love (Physics): if you have a small ball
> moving inside a spherical cup, it would be almost crazy to use
> cartesian orthogonal coordinates and Newton's laws to solve it -
> the "obvious" way would be to use spherical coordinates and the
> Lagrangian formulation (or at least I hope so

Yep, that's right.

> - surely knowledgeable people will find more "obviously" which is
> the right way).

No, this case is IMHO almost classical. Movement with planar
constraints can be solved quite easily using Lagrange.

> All classical mechanics formulations may be equivalent, but
> in certain cases one is much more suited than the others.

Or: Lagrange is the only obvious way to describe movement with
constraints.

Regards,


Björn

--
BOFH excuse #80:

That's a great computer you have there; have you considered how it
would work as a BSD machine?

Kay Schluehr

Jun 9, 2007, 7:28:10 AM
to
On Jun 9, 12:16 pm, James Stroud <jstr...@mbi.ucla.edu> wrote:
> Terry Reedy wrote:
> > In Python, you have a choice of recursion (normal or tail)
>
> Please explain this. I remember reading on this newsgroup that an
> advantage of ruby (wrt python) is that ruby has tail recursion, implying
> that python does not.

Proof by rumour? You can use first-class continuations in Ruby to
eliminate tail calls and to define higher-order function wrappers
( like Python decorators ). But I wouldn't call this "fully
optimized".

> Does python have fully optimized tail recursion as
> described in the tail recursion Wikipedia entry?

No.

Terry Reedy

Jun 9, 2007, 12:12:53 PM
to pytho...@python.org

"James Stroud" <jst...@mbi.ucla.edu> wrote in message
news:E5vai.858$TC1...@newssvr17.news.prodigy.net...

| Terry Reedy wrote:
| > In Python, you have a choice of recursion (normal or tail)
|
| Please explain this.

I am working on a paper for Python Papers that will. It was inspired by
the question 'why doesn't Python do tail-recursion optimization'.

tjr

Terry Reedy

Jun 9, 2007, 1:00:11 PM
to pytho...@python.org

"Cousin Stanley" <cousin...@hotmail.com> wrote in message
news:11813857...@sp12lax.superfeed.net...

| > In scheme, I believe you just have recursion.

I was referring to the original minimalist core language developed by Guy
Steele and Sussman and as I remember it being used in the original edition
of SICP (see Wikipedia). I also remember statements explaining (truthfully)
that builtin iteration is not needed because it can be defined in terms of
tail recursion, which in Scheme is required to be optimized to be just as
space efficient.

I see in Wikipedia that Scheme has do loops (all versions?), but I do not
know if that was original or added. If the former, it was de-emphasized.
Hence my belief, even if mistaken.

| Cousin TJR ....
|
| I'm a total scheme rookie starting only about 3 days ago
| and one of the mechanisms I went looking for was a technique
| for iteration ....
|
| Found in the scheme docs about iteration supplied
| via the reduce package ....

Right. An add-on library package, not part of the core;-)

In Python, modules can add functions (and classes, etc), but not statement
syntax, so adding while statements defined in terms of recursion is not
possible.

Scheme is quite elegant and worth learning at least the basics of. My only
point was that Sussman is an odd person to be criticizing (somewhat
mistakenly) Python for being minimalist.

tjr

John Nagle

Jun 9, 2007, 1:58:57 PM
to
Bjoern Schliessmann wrote:
> Gabriel Genellina wrote:
>
>
>>For what I can
>>remember of my first love (Physics): if you have a small ball
>>moving inside a spherical cup, it would be almost crazy to use
>>cartesian orthogonal coordinates and Newton's laws to solve it -
>>the "obvious" way would be to use spherical coordinates and the
>>Lagrangian formulation (or at least I hope so

Having actually solved that problem in simulation, I can report
that it's easier in Cartesian coordinates. I used to use this as
a test of Falling Bodies, one of the first physics engines that
really worked on the hard cases.

Spherical coordinates seem attractive until you have to deal
with friction between the ball and cup. The ball is rotating, too,
and may be slipping with respect to the cup. Then the simple
Physics 101 approach isn't so simple any more.

John Nagle
Animats

Josiah Carlson

Jun 9, 2007, 4:05:05 PM
to
James Stroud wrote:
> Terry Reedy wrote:
>> In Python, you have a choice of recursion (normal or tail)
>
> Please explain this. I remember reading on this newsgroup that an
> advantage of ruby (wrt python) is that ruby has tail recursion, implying
> that python does not. Does python have fully optimized tail recursion as
> described in the tail recursion Wikipedia entry? Under what
> circumstances can one count on the python interpreter recognizing the
> possibility for optimized tail recursion?

Note that Terry said that you could do normal or tail recursion, he
didn't claim that either were optimized. As for why tail calls are not
optimized out, it was decided that being able to have the stack traces
(with variable information, etc.) was more useful than offering tail
call optimization (do what I say).

- Josiah

Message has been deleted

Alexander Schmolck

Jun 9, 2007, 5:42:17 PM
to
Josiah Carlson <josiah....@sbcglobal.net> writes:

> James Stroud wrote:
>> Terry Reedy wrote:
>>> In Python, you have a choice of recursion (normal or tail)
>>
>> Please explain this. I remember reading on this newsgroup that an advantage
>> of ruby (wrt python) is that ruby has tail recursion, implying that python
>> does not. Does python have fully optimized tail recursion as described in
>> the tail recursion Wikipedia entry? Under what circumstances can one count
>> on the python interpreter recognizing the possibility for optimized tail
>> recursion?
>
> Note that Terry said that you could do normal or tail recursion, he didn't
> claim that either were optimized.

Well yeah, but without the implication how do the two words "or tail" add to
the information content of the sentence?

> As for why tail calls are not optimized out, it was decided that being able
> to have the stack traces (with variable information, etc.) was more useful
> than offering tail call optimization

I don't buy this. What's more important, making code not fail arbitrarily (and
thus making approaches to certain problems feasible that otherwise wouldn't
be) or making it a bit easier to debug code that will fail arbitrarily?
Why not only do tail-call optimization in .pyo files and get the best of both
worlds?

> (do what I say).

Where did you say run out of memory and fail? More importantly how do you say
"don't run out of memory and fail"?

'as

Josiah Carlson

Jun 9, 2007, 6:52:32 PM
to
Alexander Schmolck wrote:
> Josiah Carlson <josiah....@sbcglobal.net> writes:
>
>> James Stroud wrote:
>>> Terry Reedy wrote:
>>>> In Python, you have a choice of recursion (normal or tail)
>>> Please explain this. I remember reading on this newsgroup that an advantage
>>> of ruby (wrt python) is that ruby has tail recursion, implying that python
>>> does not. Does python have fully optimized tail recursion as described in
>>> the tail recursion Wikipedia entry? Under what circumstances can one count
>>> on the python interpreter recognizing the possibility for optimized tail
>>> recursion?
>> Note that Terry said that you could do normal or tail recursion, he didn't
>> claim that either were optimized.
>
> Well yeah, but without the implication how do the two words "or tail" add to
> the information content of the sentence?

Normal and tail recursion are different, based upon whether or not one
can technically be considered done with the stack frame.

def normal(arg):
    # body recursion: the multiply still happens after the recursive
    # call returns, so the frame is not yet done
    if arg == 1:
        return 1
    return arg * normal(arg-1)

def tail(arg, default=1):
    # tail recursion: the recursive call is the last thing evaluated,
    # so the frame could in principle be discarded
    if arg == 1:
        return arg * default
    return tail(arg-1, default*arg)


>> As for why tail calls are not optimized out, it was decided that being able
>> to have the stack traces (with variable information, etc.) was more useful
>> than offering tail call optimization
>
> I don't buy this. What's more important, making code not fail arbitrarily (and
> thus making approaches to certain problems feasible that otherwise wouldn't
> be) or making it a bit easier to debug code that will fail arbitrarily?
> Why not only do tail-call optimization in .pyo files and get the best of both
> worlds?

I didn't make the decisions, I'm just reporting what was decided upon.

Personally, I have never found the lack of tail call optimization an
issue for two reasons. The first is because I typically write such
programs in an iterative fashion. And generally for recursion for which
tail call optimization doesn't work (the vast majority of recursive
algorithms I use), I typically write the algorithm recursively first,
verify its correctness, then convert it into an iterative version with
explicit stack. I find it is good practice, and would be necessary
regardless of whether Python did tail call optimization or not.


>> (do what I say).
>
> Where did you say run out of memory and fail? More importantly how do you say
> "don't run out of memory and fail"?

By virtue of Python's underlying implementation, Python "does what I
say", it doesn't "do what I mean". While I may not have explicitly
stated "run out of stack space", the underlying implementation *has*
limited stack space. You are stating that when you write a tail
recursive program, you want Python to optimize it away by destroying the
stack frames. And that's fine. There are tail-call optimization
decorators available, and if you dig into sourceforge, there should even
be a full patch to make such things available in a previous Python.
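
A minimal sketch of how such a decorator can work (a trampoline; the
names here are made up for illustration and are not any particular
package's API):

import functools

class _TailCall(Exception):
    # carries the arguments of the next tail call
    def __init__(self, args, kwargs):
        self.args_ = args
        self.kwargs = kwargs

def tail_recursive(func):
    # wrapper runs func in a single stack frame; each "recursive"
    # call raises _TailCall, which becomes the next loop iteration
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        while True:
            try:
                return func(*args, **kwargs)
            except _TailCall, call:
                args, kwargs = call.args_, call.kwargs
    def recurse(*args, **kwargs):
        raise _TailCall(args, kwargs)
    wrapper.recurse = recurse
    return wrapper

@tail_recursive
def fact(n, acc=1):
    if n <= 1:
        return acc
    return fact.recurse(n - 1, acc * n)    # constant stack depth

print fact(5000)    # plain recursion would hit the recursion limit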

However, Python is not Lisp and is not partially defined by infinite
recursion (see sys.setrecursionlimit() ). Instead, it is limited by the
C call stack (in CPython), and makes guarantees regarding what will
always be available during debugging (the only thing that optimization
currently does in Python at present is to discard docstrings). If you
want to change what is available for debugging (to make things like tail
call optimization possible), you are free to write and submit a PEP.

In the mean time, you may need to do some source conversion.


- Josiah

Steven D'Aprano

Jun 9, 2007, 8:23:41 PM
to
On Sat, 09 Jun 2007 22:42:17 +0100, Alexander Schmolck wrote:

>> As for why tail calls are not optimized out, it was decided that being able
>> to have the stack traces (with variable information, etc.) was more useful
>> than offering tail call optimization
>
> I don't buy this.

Do you mean you don't believe the decision was made, or you don't agree
with the decision?


> What's more important, making code not fail arbitrarily (and
> thus making approaches to certain problems feasible that otherwise
> wouldn't be) or making it a bit easier to debug code that will fail
> arbitrarily? Why not only do tail-call optimization in .pyo files and
> get the best of both worlds?

Are you volunteering? If you are, I'm sure your suggestion will be
welcomed gratefully.


>> (do what I say).
>
> Where did you say run out of memory and fail? More importantly how do
> you say "don't run out of memory and fail"?

If we can live with a certain amount of "arbitrary failures" in simple
arithmetic, then the world won't end if tail recursion isn't optimized
away by the compiler. You can always hand-optimize it yourself.

dont_run_out_of_memory_and_fail = 10**(10**100) # please?


--
Steven.

Steven D'Aprano

Jun 9, 2007, 8:34:21 PM
to
On Sat, 09 Jun 2007 22:52:32 +0000, Josiah Carlson wrote:

> the only thing that optimization
> currently does in Python at present is to discard docstrings

Python, or at least CPython, does more optimizations than that. Aside from
run-time optimizations like interned strings etc., there are a small
number of compiler-time optimizations done.

Running Python with the -O (optimize) flag tells Python to ignore
assert statements. Using -OO additionally removes docstrings.

Regardless of the flag, in function (and class?) definitions like the
following:

def function(args):
    "Doc string"
    x = 1
    s = "this is a string constant"
    "and this string is treated as a comment"
    return s*x

The string-comment is ignored by the compiler just like "real" comments.
(The same doesn't necessarily hold for other data types.)


Some dead code is also optimized away:

>>> def function():
...     if 0:
...         print "dead code"
...     return 2
...
>>> dis.dis(function)
  4           0 LOAD_CONST               1 (2)
              3 RETURN_VALUE


Lastly, in recent versions (starting with 2.5 I believe) Python includes a
peephole optimizer that implements simple constant folding:

# Python 2.4.3
>>> dis.dis(lambda: 1+2)
  1           0 LOAD_CONST               1 (1)
              3 LOAD_CONST               2 (2)
              6 BINARY_ADD
              7 RETURN_VALUE

# Python 2.5
>>> dis.dis(lambda: 1+2)
  1           0 LOAD_CONST               2 (3)
              3 RETURN_VALUE


The above all holds for CPython. Other Pythons may implement other
optimizations.

--
Steven.

James Stroud

Jun 9, 2007, 8:46:20 PM
to
Kay Schluehr wrote:
> On Jun 9, 12:16 pm, James Stroud <jstr...@mbi.ucla.edu> wrote:
>> Terry Reedy wrote:
>>> In Python, you have a choice of recursion (normal or tail)
>> Please explain this. I remember reading on this newsgroup that an
>> advantage of ruby (wrt python) is that ruby has tail recursion, implying
>> that python does not.
>
> Proof by rumour?

"Proof" if you define "proof" by asking for clarification about a vague
recollection of an even more vague posting. I think now I'll prove
Fermat's Last Theorem by hailing a cab.

bruno.des...@gmail.com

Jun 10, 2007, 7:36:35 AM
to
On Jun 9, 12:16 pm, James Stroud <jstr...@mbi.ucla.edu> wrote:
> Terry Reedy wrote:
> > In Python, you have a choice of recursion (normal or tail)
>
> Please explain this. I remember reading on this newsgroup that an
> advantage of ruby (wrt python) is that ruby has tail recursion, implying
> that python does not. Does python have fully optimized tail recursion as
> described in the tail recursion Wikipedia entry? Under what
> circumstances can one count on the python interpreter recognizing the
> possibility for optimized tail recursion?
>

I'm afraid Terry is wrong here, at least if he meant that CPython had
tail recursion *optimization*.

(and just for those who don't know yet, it's not a shortcoming, it's a
design choice.)


Josiah Carlson

Jun 10, 2007, 10:27:27 AM
to
Steven D'Aprano wrote:
> On Sat, 09 Jun 2007 22:52:32 +0000, Josiah Carlson wrote:
>
>> the only thing that optimization
>> currently does in Python at present is to discard docstrings
>
> Python, or at least CPython, does more optimizations than that. Aside from
> run-time optimizations like interned strings etc., there are a small
> number of compiler-time optimizations done.
>
> Running Python with the -O (optimize) flag tells Python to ignore
> assert statements. Using -OO additionally removes docstrings.

Oh yeah, asserts. I never run with -O, and typically don't use asserts,
so having or not having either isn't a big deal for me.

> Regardless of the flag, in function (and class?) definitions like the
> following:
>
> def function(args):
> "Doc string"
> x = 1
> s = "this is a string constant"
> "and this string is treated as a comment"
> return s*x
>
> The string-comment is ignored by the compiler just like "real" comments.
> (The same doesn't necessarily hold for other data types.)

I would guess it is because some other data types may have side-effects.
On the other hand, a peephole optimizer could be written to trim out
unnecessary LOAD_CONST/POP_TOP pairs.

> Some dead code is also optimized away:

Obviously dead code removal happens regardless of optimization level in
current Pythons.

> Lastly, in recent versions (starting with 2.5 I believe) Python includes a
> peephole optimizer that implements simple constant folding:

Constant folding happens regardless of optimization level in current
Pythons.


So really, assert and docstring removals. Eh.

- Josiah

John Nagle

Jun 10, 2007, 12:43:47 PM
to
Josiah Carlson wrote:
> Steven D'Aprano wrote:
>
>> On Sat, 09 Jun 2007 22:52:32 +0000, Josiah Carlson wrote:
>>
>>> the only thing that optimization currently does in Python at present
>>> is to discard docstrings
>>
>>
>> Python, or at least CPython, does more optimizations than that. Aside
>> from
>> run-time optimizations like interned strings etc., there are a small
>> number of compiler-time optimizations done.
>>
>> Running Python with the -O (optimize) flag tells Python to ignore
>> assert statements. Using -OO additionally removes docstrings.
...

>
> I would guess it is because some other data types may have side-effects.
> On the other hand, a peephole optimizer could be written to trim out
> unnecessary LOAD_CONST/POP_TOP pairs.
>
>> Some dead code is also optimized away:
>
> Obviously dead code removal happens regardless of optimization level in
> current Pythons.
>
>> Lastly, in recent versions (starting with 2.5 I believe) Python
>> includes a
>> peephole optimizer that implements simple constant folding:
>
> Constant folding happens regardless of optimization level in current
> Pythons.

> So really, assert and docstring removals. Eh.

It's hard to optimize Python code well without global analysis.
The problem is that you have to make sure that a long list of "weird
things", like modifying code or variables via getattr/setattr, aren't
happening before doing significant optimizations. Without that,
you're doomed to a slow implementation like CPython.

ShedSkin, which imposes some restrictions, is on the right track here.
The __slots__ feature is useful but doesn't go far enough.

I'd suggest defining "simpleobject" as the base class, instead of "object",
which would become a derived class of "simpleobject". Objects descended
directly from "simpleobject" would have the following restrictions:

- "getattr" and "setattr" are not available (as with __slots__)
- All class member variables must be initialized in __init__, or
in functions called by __init__. The effect is like __slots__,
but you don't have to explicitly write declarations.
- Class members are implicitly typed with the type of the first
thing assigned to them. This is the ShedSkin rule. It might
be useful to allow assignments like

self.str = None(string)

to indicate that a slot holds strings, but currently has the null
string.
- Function members cannot be modified after declaration. Subclassing
is fine, but replacing a function member via assignment is not.
This allows inlining of function calls to small functions, which
is a big win.
- Private function members (self._foo and self.__foo) really are
private and are not callable outside the class definition.

You get the idea. This basically means that "simpleobject" objects have
roughly the same restrictions as C++ objects, for which heavy compile time
optimization is possible. Most Python classes already qualify for
"simpleobject". And this approach doesn't require un-Pythonic stuff like
declarations or extra "decorators".
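
For reference, the __slots__ mechanism mentioned above already fixes the
attribute set today, though it enforces none of the other restrictions:

class Point(object):
    __slots__ = ('x', 'y')    # no instance __dict__; attribute set is fixed

    def __init__(self, x, y):
        self.x = x
        self.y = y

p = Point(1.0, 2.0)
p.x = 3.0      # fine: x is a declared slot
# p.z = 4.0    # would raise AttributeError: no new attributes allowed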

With this, the heavy optimizations are possible. Strength reduction. Hoisting
common subexpressions out of loops. Hoisting reference count updates out of
loops. Keeping frequently used variables in registers. And elimination of
many unnecessary dictionary lookups.

Python could get much, much faster. Right now CPython is said to be 60X slower
than C. It should be possible to get at least an order of magnitude over
CPython.

John Nagle

Message has been deleted

Terry Reedy

Jun 10, 2007, 1:38:36 PM
to pytho...@python.org

<bruno.des...@gmail.com> wrote in message
news:1181475395.7...@m36g2000hse.googlegroups.com...

| > Terry Reedy wrote:
| > > In Python, you have a choice of recursion (normal or tail)

[snip Stroud questions]

| I'm afraid Terry is wrong here, at least if he meant that CPython had
| tail recursion *optimization*.

NO!!!
I did not mean that or imply that in any way.

| (and just for those who don't know yet, it's not a shortcoming, it's a
| design choice.)

And I already noted in a followup that I am working on a Python Papers
paper explaining that choice, including Guido's claim that 'for statements
are better'.

So frankly I am a little annoyed that you dragged my name into your answer
to Stroud when you should have succinctly said 'No, Never', or better,
nothing at all, as someone else already did say that. Read more of the
thread before jumping in and ascribing ignorance to people.

Terry Jan Reedy

Steve Howell

Jun 10, 2007, 6:43:05 PM
to John Nagle, pytho...@python.org

--- John Nagle <na...@animats.com> wrote:
>
> With this, the heavy optimizations are possible.
> Strength reduction. Hoisting
> common subexpressions out of loops. Hoisting
> reference count updates out of
> loops. Keeping frequently used variables in
> registers. And elimination of
> many unnecessary dictionary lookups.
>

To the extent that some of these optimizations could
be achieved by writing better Python code, it would
be nice for optimization tools to have a "suggest" mode.
For example, if I code a Fourier series and duplicate
the subexpression n*f*t, it would be nice for a tool
to tell me that I should refactor that expression.
Something like this:

n*f*t should be refactored out of this expression,
assuming multiplication has no desired side effects for
n, f, and t.
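
Concretely, the kind of rewrite such a tool might suggest (an
illustrative sketch, not the output of any actual tool):

from math import sin, cos

def term(n, f, t):
    return sin(n*f*t) + cos(n*f*t)    # n*f*t is computed twice

def term_refactored(n, f, t):
    nft = n*f*t    # hoisted once; assumes * has no side effects here
    return sin(nft) + cos(nft)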




Alexander Schmolck

Jun 10, 2007, 8:28:09 PM
to
Steven D'Aprano <st...@REMOVE.THIS.cybersource.com.au> writes:

> On Sat, 09 Jun 2007 22:42:17 +0100, Alexander Schmolck wrote:
>
>>> As for why tail calls are not optimized out, it was decided that being able
>>> to have the stack traces (with variable information, etc.) was more useful
>>> than offering tail call optimization
>>
>> I don't buy this.
>
> Do you mean you don't believe the decision was made, or you don't agree
> with the decision?

Neither. I don't believe the rationale stated in this thread to be the true
reason.

> Are you volunteering? If you are, I'm sure your suggestion will be welcomed
> gratefully.

I rather doubt it. Guido has stated quite clearly that he's not interested in
incorporating this feature.



>>> (do what I say).
>>
>> Where did you say run out of memory and fail? More importantly how do
>> you say "don't run out of memory and fail"?
>
> If we can live with a certain amount of "arbitrary failures" in simple
> arithmetic,

I prefer not to, and thus when possible avoid using languages where ``a +
b`` is liable to fail arbitrarily (such as C, where the behavior will often be
undefined).

> then the world won't end if tail recursion isn't optimized away by the
> compiler.

I'd personally much rather have arithmetic working properly. Unfortunately
this is not an option at the moment, although quite a few smart people are
working on it and although it is an immensely costly problem.

> You can always hand-optimize it yourself.

Not tail calls, in general, no.

'as

John Nagle

Jun 10, 2007, 9:51:55 PM
to
Steve Howell wrote:
> --- John Nagle <na...@animats.com> wrote:
>
>>With this, the heavy optimizations are possible.
>>Strength reduction. Hoisting
>>common subexpressions out of loops. Hoisting
>>reference count updates out of
>>loops. Keeping frequently used variables in
>>registers. And elimination of
>>many unnecessary dictionary lookups.
>>
> To the extent that some of these optimizations could
> be achieved by writing better Python code, it would
> be nice for optimization tools to have a "suggest" mode.
> For example, if I code a Fourier series and duplicate
> the subexpression n*f*t, it would be nice for a tool
> to tell me that I should refactor that expression.
> Something like this:
>
> n*f*t should be refactored out of this expression,
> assuming multiplication has no desired side effects for
> n, f, and t.

Too labor-intensive. These are well understood optimizations
that can be done quite well automatically. The problem is Python's
gratuitous dynamism - it's hard to tell, when examining code, if
something can be patched by other code elsewhere. Where "setattr"
is allowed, the compiler has to assume side effects almost everywhere.

John Nagle

Steven D'Aprano

Jun 10, 2007, 11:22:28 PM
to
On Mon, 11 Jun 2007 01:28:09 +0100, Alexander Schmolck wrote:

> Steven D'Aprano <st...@REMOVE.THIS.cybersource.com.au> writes:
>
>> On Sat, 09 Jun 2007 22:42:17 +0100, Alexander Schmolck wrote:
>>
>>>> As for why tail calls are not optimized out, it was decided that being able
>>>> to have the stack traces (with variable information, etc.) was more useful
>>>> than offering tail call optimization
>>>
>>> I don't buy this.
>>
>> Do you mean you don't believe the decision was made, or you don't agree
>> with the decision?
>
> Neither. I don't believe the rationale stated in this thread to be the true
> reason.


Don't keep us in suspense. What do you believe is the true reason?


>> Are you volunteering? If you are, I'm sure your suggestion will be welcomed
>> gratefully.
>
> I rather doubt it. Guido has stated quite clearly that he's not
> interested in incorporating this feature.

He's not the only one who gets to make these decisions. But even if he
uses his veto to prevent tail-recursion optimization from being put into
the main branch, there are other options.

>>>> (do what I say).
>>>
>>> Where did you say run out of memory and fail? More importantly how do
>>> you say "don't run out of memory and fail"?
>>
>> If we can live with a certain amount of "arbitrary failures" in simple
>> arithmetic,
>
> I prefer not to, and thus when possible avoid using languages where
> ``a + b`` is liable to fail arbitrarily (such as C, where the behavior
> will often be undefined).

That's not the sort of arbitrary failure I was discussing, but for that
matter Python is one of those languages. Perhaps Python is not the
language for you?

Correct me if I'm wrong, but surely it is C++ that can have arbitrary
behaviour for "a + b", not C?

>> then the world won't end if tail recursion isn't optimized away by the
>> compiler.
>
> I'd personally much rather have arithmetic working properly.
> Unfortunately this is not an option at the moment, although quite a few
> smart people are working on it and although it is an immensely costly
> problem.
>
>> You can always hand-optimize it yourself.
>
> Not tail calls, in general, no.

Sorry, how does that work? You're suggesting that there is an algorithm
which the compiler could follow to optimize away tail-recursion, but human
beings can't follow the same algorithm?

Now I'm confused.


--
Steven.

Paul Rubin

Jun 10, 2007, 11:18:49 PM
to

The usual compiler method is to translate the code into
continuation-passing style and thereby gain tail-recursion
optimization automagically. Of course a human could do the same thing
in principle, but it would result in insanely difficult-to-write,
unreadable and unmaintainable code. At a higher level, there are no
Python features whatsoever, either existing or proposed, that couldn't
be eliminated and left to the human. Iterators? While loops? Who
needs 'em? We could all go back to programming in machine code. But
Python is supposed to make programming easier, not harder.
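
For illustration, here is a factorial in continuation-passing style (a
toy sketch; CPython will still not eliminate these tail calls):

def fact_cps(n, k=lambda r: r):
    # every call below is a tail call: the pending work is carried in
    # the continuation k instead of on the return stack
    if n <= 1:
        return k(1)
    return fact_cps(n - 1, lambda r: k(n * r))

print fact_cps(5)    # prints 120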

Kay Schluehr

Jun 10, 2007, 11:22:23 PM
to
On Jun 11, 12:43 am, Steve Howell <showel...@yahoo.com> wrote:

> To the extent that some of these optimizations could
> be achieved by writing better Python code, it would
> nice for optimization tools to have a "suggest" mode.

Is anyone out there who uses MS Word and doesn't deactivate the
"suggest" mode i.e. Clippy? Maybe someone should write a lucid blog
entry about the failure of "suggest" modes in general or point me to
one. Autocompletion as a typing aid might be considered a
counterexample, but only because you don't really have a choice and
will not be confronted with nonsensical AI guesses.

Doug Phillips

Jun 11, 2007, 12:48:36 AM
to pytho...@python.org
> Is anyone out there who uses MS Word and doesn't deactivate
> the "suggest" mode i.e. Clippy?

Me... I don't install Clippy (or any of his horribly annoying friends)
to start with. :)

On the topic though, the suggest mode of the MS help system is generally
way off-base, even for my 80-yr-old grandmother's needs.

Steve Howell

Jun 10, 2007, 11:47:26 PM
to Kay Schluehr, pytho...@python.org

Unless I'm misunderstanding what you're saying, you
are wildly overinterpreting my use of the phrase
"suggest mode."

I was making the simple suggestion that code
optimizers could suggest opportunities for
optimization that the tools couldn't unilaterally
decide themselves to enforce.

In other words, there are scenarios where an automated
tool has to assume the extreme case of dynamism,
when in fact I, the programmer, know that I'm writing
basically static code within Python, and I simply
forgot to pull a subexpression out of a loop.

Maybe you just need to rant about animated office
supplies every now and then.

And regarding autocompletion--yes, it's an extremely
useful feature.


Josiah Carlson

Jun 11, 2007, 2:34:49 AM
to
John Nagle wrote:
> Josiah Carlson wrote:
[snip]

>> Constant folding happens regardless of optimization level in current
>> Pythons.
>
>> So really, assert and docstring removals. Eh.
>
> It's hard to optimize Python code well without global analysis.
> The problem is that you have to make sure that a long list of "weird
> things", like modifying code or variables via getattr/setattr, aren't
> happening before doing significant optimizations. Without that,
> you're doomed to a slow implementation like CPython.
>
> ShedSkin, which imposes some restrictions, is on the right track here.
> The __slots__ feature is useful but doesn't go far enough.
[snip]

> Python could get much, much faster. Right now CPython is said to be 60X
> slower
> than C. It should be possible to get at least an order of magnitude over
> CPython.

Don't get me wrong; I'm all for adding optimizations, I was merely
expressing that currently, 'python -OO' doesn't really do a whole lot.
I've been a long-time user of psyco, have mucked about with
scipy.weave.inline, and have been a heavy user of Pyrex and C. If there
was a method of offering some simple optimization cues to the Python
runtime to improve its speed, I would be happy to add them when I really
care about speed (I already do more than that when writing Pyrex). The
real question is whether we can get a practical implementation of these
optimizations as easy to use as psyco.


- Josiah

Josiah Carlson

Jun 11, 2007, 2:36:25 AM
to

Thanks to Richie Hindle, there exists a goto for Python implementation
that makes such things quite trivial (assuming one doesn't like
"abusing" break/continue/else). http://entrian.com/goto/ (which, by
the way, is the best April-fools joke ever)


- Josiah

Diez B. Roggisch

Jun 11, 2007, 3:27:35 AM
to


I won't give you the "prove it by doing it"-talk. It's too cheap.

Instead I'd like to say why I don't think that this will buy you much
performance-wise: it's a local optimization only. All it can and will do
is to optimize lookups and storage of attributes - either functions or
values - and calls to methods from within one simpleobject. As long as
expressions stay in their own "soup", things might be ok.

The very moment you mix this with "regular", no-strings-attached python
code, you have to have the full dynamic machinery in place + you need
tons of guarding statements in the optimized code to prevent access
violations.

So in the end, I seriously doubt the performance gains are noticeable.
Instead I'd rather take the pyrex-road, which can go even further
optimizing with some more declarations. But then I at least know exactly
where the boundaries are. As does the compiler.

> Python could get much, much faster. Right now CPython is said to be 60X
> slower
> than C. It should be possible to get at least an order of magnitude over
> CPython.


Regardless of the possibility of speeding it up - why should one want
this? Coding speed is more important than speed of code in 90%+ of all
cases. The other ones - well, if you _really_ want speed, assembler is
the way to go. I'm serious about that. There is one famous mathematical
library author that does code in assembler - because in the end, it's
all about processor architecture and careful optimization for that. [1]

The same is true for e.g. the new Cell architecture, or the
altivec-optimized code in photoshop that still beats the crap out of
Intel processors on PPC-machines.

I'm all for making python faster if it doesn't suffer
functionality-wise. But until there is a proof that something really
speeds up python w/o crippling it, I'm more than skeptical.

Diez

[1] http://math-atlas.sourceforge.net/faq.html#auth

"""
Kazushige Goto
His ev5/ev6 GEMM is used directly by ATLAS if the user answers
"yes" to its use during the configuration procedure on an alpha
processor. This results in a significant speedup over ATLAS's own GEMM
codes, and is the fastest ev5/ev6 implementation we are aware of.
"""

Bruno Desthuilliers

Jun 11, 2007, 4:37:26 AM
to
Terry Reedy a écrit :

> <bruno.des...@gmail.com> wrote in message
> news:1181475395.7...@m36g2000hse.googlegroups.com...
> | > Terry Reedy wrote:
> | > > In Python, you have a choice of recursion (normal or tail)
>
> [snip Stroud questions]
>
> | I'm afraid Terry is wrong here, at least if he meant that CPython had
> | tail recursion *optimization*.
>
> NO!!!
> I did not mean that or imply that in any way.

I understand you didn't mean it, but since the whole point of
tail-recursion is allowing optimisation (else tail-recursion is nothing
else than a subset of recursion), you somehow implied it, even while
that was not your intention.

> | (and just for those who don't know yet, it's not a shortcoming, it's a
> | design choice.)
>
> And I already noted in a followup that I am working on a Python Papers
> paper explaining that choice, including Guido's claim that 'for statements
> are better'.
>
> So frankly I am a little annoyed that you dragged my name into your answer
> to Stroud when you should have succintly said 'No, Never', or better,
> nothing at all, as someone else already did say that. Read more of the
> tread before jumping in and acribing ignorance to people.
>

You're right on the fact that I should have read more of the thread
before posting this (which I usually do), and I do apologize for this.
But please note the second half of the sentence - which puts a strong
precondition on the validity of the first part.


Antoon Pardon

Jun 11, 2007, 6:34:58 AM
to
On 2007-06-09, Terry Reedy <tjr...@udel.edu> wrote:
>
> "WaterWalk" <toolm...@163.com> wrote in message
> news:1181368143.8...@r19g2000prf.googlegroups.com...
>| I've just read an article "Building Robust Systems" by Gerald Jay
>| Sussman. The article is here:
>|
> http://swiss.csail.mit.edu/classes/symbolic/spring07/readings/robust-systems.pdf
>|
>| In it there is a footnote which says:
>| "Indeed, one often hears arguments against building flexibility into an
>| engineered system. For example, in the philosophy of the computer
>| language Python it is claimed:
>
> For him to imply that Python is anti-flexibility is wrong. Very wrong.
> He should look in a mirror. See below.

My impression is that python supporters often enough show
some anti-flexibility attitude.

>| 'There should be one--and preferably only one--obvious way to do
>| it.'[25] Science does not usually proceed this way: In classical
>| mechanics, for example, one can construct equations of motion using
>| Newtonian vectorial mechanics, or using a Lagrangian or Hamiltonian
>| variational formulation.[30] In the cases where all three approaches
>| are applicable they are equivalent, but each has its advantages in
>| particular contexts."
>
> And in those contexts, one would hope that the method with advantages is
> somehow the obvious way to do it. Otherwise beginners might become like
> Buridan's ass.
>
> So I dispute that science is as different as he claims. And I do not see
> any real value in the statement in that I do not see it saying anything
> useful to the reader, at least not in this snippet.

Yes science is different. The difference is the following. Suppose
science knew only Newtonian vectorial mechanics and someone came up
with the Lagrangian approach. Nobody would protest against the new
approach by remarking that there should be only one obvious approach,
implying that by introducing a second approach you give people a
choice, which they will have to think about, so that their decision
about what to use is no longer obvious, as it is when there is only
one option.

Yet these kinds of remarks are made often enough when someone suggests a
change to python.

--
Antoon Pardon

John Nagle

Jun 11, 2007, 1:37:12 PM
to
Diez B. Roggisch wrote:
> Regardless of the possibility of speeding it up - why should one want
> this? Coding speed is more important than speed of code in 90%+ of all
> cases.

When you have to start buying more servers for the server farm,
it's a real pain. I'm actually facing that because Python's HTML
parsing is so slow.

John Nagle

Steve Howell

Jun 11, 2007, 5:27:59 PM
to John Nagle, pytho...@python.org
--- John Nagle <na...@animats.com> wrote:
>
> When you have to start buying more servers for
> the server farm,
> it's a real pain. I'm actually facing that because
> Python's HTML
> parsing is so slow.
>

I have been following this thread for a bit, but
apologies in advance if I didn't read far back enough.

Did you profile the module that you're using to do the
HTML parsing?




Terry Reedy

Jun 11, 2007, 6:07:54 PM
to pytho...@python.org
|> | > Terry Reedy wrote:
|> | > > In Python, you have a choice of recursion (normal or tail)

Bruno


> | I'm afraid Terry is wrong here, at least if he meant that CPython had
> | tail recursion *optimization*.

| Terry Reedy a écrit :


| > NO!!!
| > I did not mean that or imply that in any way.

Bruno


| I understand you didn't mean it, but since the whole point of
| tail-recursion is allowing optimisation

Wrong again. That is a reason, even a primary reason, for some people some
of the time, but not at all the only reason anyone would ever write linear
recursion in tail rather than body (my term) form. So nothing follows from
this false premise.

| (else tail-recursion is nothing else than a subset of recursion)

So? Body-recursion (non-tail-recursion) is also nothing else than a subset
of recursion.

| you somehow implied it, even while that was not your intention.

False in its own right. Any language that allows recursion allows the
subset that happens to constitute tail recursion. Most do not *mandate*
that compilers specially recognize and optimize that subset. So there is
pragmatically no implication that the latter follows from the former.

The reason I specifically mentioned tail recursion is because Prof.
Sussman, who complained about
There should be one-- and preferably only one --obvious way to do it.
-- to quote Brother Tim accurately -- co-developed Scheme. To me, Scheme
promotes tail recursion as the one true way as much or more as anything is
similarly promoted in Python. That mention had nothing in itself to do
with the separate issue of optimizing tail calls.

What Sussman apparently missed is that Tim's main point is that there
should be some rather than no obvious way to do things. The parenthetical
optional secondary desideratum is just that -- optional and secondary.

Terry Jan Reedy

Terry Reedy

Jun 11, 2007, 6:40:08 PM
to pytho...@python.org

"Antoon Pardon" <apa...@forel.vub.ac.be> wrote in message
news:slrnf6q9ah....@rcpc42.vub.ac.be...

| On 2007-06-09, Terry Reedy <tjr...@udel.edu> wrote:
| > For him to imply that Python is anti-flexibility is wrong. Very
| > wrong.
| > He should look in a mirror. See below.
|
| My impression is that python supporters often enough show
| some anti-flexibility attitude.

More so than supporters of most other languages, in particular Scheme?

Here's the situation. Python is making inroads at MIT, Scheme home turf.
The co-developer of Scheme, while writing about some other subject, tosses
in an off-the-wall slam against Python. Someone asks what we here think.
I think that the comment is a crock and the slam better directed, for
instance, at Scheme itself. Hence 'he should look in a mirror'.

| Yes science is different. The difference is the following. Suppose
| science knew only Newtonian vectorial mechanics and someone came up
| with the Lagrangian approach. Nobody would protest against the new
| approach by remarking that there should be only one obvious approach,

The history of science is a history of innovation and resistance to
innovation. Do you have information that the introduction of the
Lagrangian approach was exceptional? Do you really think that no college
student has ever groused about having to learn another approach that is
only equivalent to what he already knows?

| Yet these kinds of remarks are made often enough when someone suggests a
| change to python.

So? Tim wrote 'There should be one-- and preferably only one --obvious way
to do it'. The primary clause is that there should be at least one. The
secondary clause is that once there is a good and obvious way to do
something, we take a hard look before adding another. As it is, there are
already multiple ways to do many things. And there are probably at least
10 suggested innovations for every one accepted.

tjr

Aahz

Jun 11, 2007, 10:16:25 PM
to
In article <mailman.8879.1181405...@python.org>,
Terry Reedy <tjr...@udel.edu> wrote:
>
>"James Stroud" <jst...@mbi.ucla.edu> wrote in message
>news:E5vai.858$TC1...@newssvr17.news.prodigy.net...

>| Terry Reedy wrote:
>| > In Python, you have a choice of recursion (normal or tail)
>|
>| Please explain this.
>
>I am working on a paper for Python Papers that will. It was inspired by
>the question 'why doesn't Python do tail-recursion optimization'.

...with the proof contained in the margins?
--
Aahz (aa...@pythoncraft.com) <*> http://www.pythoncraft.com/

"as long as we like the same operating system, things are cool." --piranha

Michele Simionato

Jun 12, 2007, 12:21:49 AM
to

This is already done in RPython:

http://codespeak.net/pypy/dist/pypy/doc/coding-guide.html#restricted-python

I was at the PyCon It conference the other day and one of the
PyPy people claimed that RPython is up to 300X faster than Python.

Michele Simionato

Antoon Pardon

Jun 12, 2007, 4:34:05 AM
to
On 2007-06-11, Terry Reedy <tjr...@udel.edu> wrote:
>
> "Antoon Pardon" <apa...@forel.vub.ac.be> wrote in message
> news:slrnf6q9ah....@rcpc42.vub.ac.be...
>| On 2007-06-09, Terry Reedy <tjr...@udel.edu> wrote:
>| > For him to imply that Python is anti-flexibility is wrong. Very
> wrong..
>| > He should look in a mirror. See below.
>|
>| My impression is that python supporters often enough show
>| some anti-flexibility attitude.
>
> More so than supporters of most other languages, in particular Scheme?

Well to my knowledge (which could be vastly improved), scheme doesn't
have some Zen-rules that include something like this.

I tried to google for similar remarks in relation to scheme but I
got no results. Maybe your google skills are better.

> Here's the situation. Python is making inroads at MIT, Scheme home turf.
> The co-developer of Scheme, while writing about some other subject, tosses
> in an off-the-wall slam against Python. Someone asks what we here think.
> I think that the comment is a crock and the slam better directed, for
> instance, at Scheme itself. Hence 'he should look in a mirror'.
>
>| Yes science is different. The difference is the following. Suppose
>| science knew only Newtonian vectorial mechanics and someone came up
>| with the Lagrangian approach. Nobody would protest against the new
>| approach by remarking that there should be only one obvious approach,
>
> The history of science is a history of innovation and resistance to
> innovation. Do you have information that the introduction of the
> Lagrangian approach was exceptional? Do you really think that no college
> student has ever groused about having to learn another approach that is
> only equivalent to what he already knows?

Yes the history of science is a history of innovation and resistance.
But the resistance to my knowledge has never used the argument that
there should (preferably) be only one obvious way to do things.

The student example is IMO not appropriate. There is a difference between
preferring not to have to learn something yourself and arguing that
something shouldn't be available in general.

>| Yet these kinds of remarks are made often enough when someone suggests a
>| change to python.
>
> So? Tim wrote 'There should be one-- and preferably only one --obvious way
> to do it'. The primary clause is that there should be at least one. The
> secondary clause is that once there is a good and obvious way to do
> something, we take a hard look before adding another. As it is, there are
> already multiple ways to do many things. And there are probably at least
> 10 suggested innovations for every one accepted.

Yes I know that. But that doesn't stop a lot of python supporters in this
newsgroup from coming up with a variation that suggests that once there is
an obvious way to do something in python, there really is no need any more
to look at ways that do it differently. And if my memory doesn't betray me,
corrections from others to such variations are a rather recent occurrence.

--
Antoon Pardon

Neil Cerutti

Jun 12, 2007, 7:05:24 AM
to
On 2007-06-12, Antoon Pardon <apa...@forel.vub.ac.be> wrote:
> On 2007-06-11, Terry Reedy <tjr...@udel.edu> wrote:
>> More so than supporters of most other languages, in particular
>> Scheme?
>
> Well to my knowledge (which could be vastly improved), scheme
> doesn't have some Zen-rules that include something like this.
>
> I tried to google for similar remarks in relation to scheme but
> I got no results. Maybe your google skills are better.

It's in _The Revised^%d Report on Scheme_, Introduction:

Programming languages should be designed not by piling feature
on top of feature, but by removing the weaknesses and
restrictions that make additional features appear necessary.

Of course, that was written well before Scheme had most of its
current features.

--
Neil Cerutti
These people haven't seen the last of my face. If I go down, I'm going down
standing up. --Chuck Person

Diez B. Roggisch

Jun 12, 2007, 8:46:16 AM
to
John Nagle wrote:

I can't believe that this is a general "python is too slow"-issue. After all,
lxml and cElementTree are _really_ fast - and written in C.

For example in TurboGears, there are two python-only templating systems -
KID & genshi (apart from others). The latter is an order of magnitude faster
than the former. After all, O(n^3) is O(n^3), regardless of the language...

And if only the html-parsing is slow, you might consider creating an
extension for that. Using e.g. Pyrex.

Diez

Terry Reedy

Jun 12, 2007, 2:19:22 PM
to pytho...@python.org

"Antoon Pardon" <apa...@forel.vub.ac.be> wrote in message
news:slrnf6smjs....@rcpc42.vub.ac.be...

| > So? Tim wrote 'There should be one-- and preferably only one --obvious
| > way to do it'. The primary clause is that there should be at least one.
| > The secondary clause is that once there is a good and obvious way to do
| > something, we take a hard look before adding another. As it is, there
| > are already multiple ways to do many things. And there are probably at
| > least 10 suggested innovations for every one accepted.
|
| Yes I know that. But that doesn't stop a lot of python supporters in this
| newsgroup from coming up with a variation that suggests that once there is
| an obvious way to do something in python, there really is no need any more
| to look at ways that do it differently.

Try suggesting on a Lisp or Scheme group that having only one type of
syntax (prefix expressions) lacks something and that they should add
variety in the form of statement syntax ;-) Hint: some Lispers have
bragged here about the simplicity of 'one way to do it' and put Python down
for its mixed syntax. (Of course, this does not mean that some dialects
have not sneaked in lists of statements thru a back door ;-).

Would you really want Python to have a hundred new features every release?

tjr

Anders J. Munch

unread,
Jun 12, 2007, 5:11:45 PM6/12/07
to
Paul Rubin wrote:
> Steven D'Aprano <st...@REMOVE.THIS.cybersource.com.au> writes:
>>> Not tail calls, in general, no.
>> Sorry, how does that work? You're suggesting that there is an algorithm
>> which the compiler could follow to optimize away tail-recursion, but human
>> beings can't follow the same algorithm?
>>
>> Now I'm confused.
>
> The usual compiler method is to translate the code into
> continuation-passing style and thereby gain tail-recursion
> optimization automagically.

There's no need to go into CPS just to optimise tail-recursion. After all,
compilers were optimising tail-calls decades before Appel's work on SML/NJ.
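
For reference, a toy sketch of the CPS form Paul mentions -- a hypothetical
fact_cps example, purely illustrative and not from either post. The point is
that in CPS every call ends up in tail position:

    def fact_cps(n, k=lambda r: r):
        # Instead of returning up the stack, each step hands its result
        # to the continuation k; both calls below are tail calls.
        if n <= 1:
            return k(1)
        return fact_cps(n - 1, lambda r: k(n * r))

    print(fact_cps(5))  # -> 120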

Converting tail-recursion to iteration is trivial, and perfectly reasonable for
a human to do by hand. You add an outer "while True"-loop, the recursive call
becomes a tuple assignment, and other code paths end with a break out of the
loop. Completely mechanical and the resulting code doesn't even look that bad.
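
To illustrate, a minimal sketch of that transformation on a hypothetical
tail-recursive factorial (not an example from the thread):

    # Tail-recursive original.
    def fact(n, acc=1):
        if n <= 1:
            return acc
        return fact(n - 1, acc * n)

    # Hand-converted: outer "while True"-loop, the recursive call becomes
    # a tuple assignment, and the non-recursive path breaks out of the loop.
    def fact_iter(n, acc=1):
        while True:
            if n <= 1:
                result = acc
                break
            n, acc = n - 1, acc * n
        return result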

Like Steven said, tail-call optimisation is not necessary as you can always
hand-optimise it yourself.

- Anders

Steve Howell

unread,
Jun 12, 2007, 6:51:07 PM6/12/07
to Anders J. Munch, pytho...@python.org

--- "Anders J. Munch" <20...@jmunch.dk> wrote:
>
> Converting tail-recursion to iteration is trivial,
> and perfectly reasonable for
> a human to do by hand. You add an outer "while
> True"-loop, the recursive call
> becomes a tuple assignment, and other code paths end
> with a break out of the
> loop. Completely mechanical and the resulting code
> doesn't even look that bad.
>

I have to ask the stupid question. If a human can do
this completely mechanically, why can't a machine?




John Nagle

unread,
Jun 12, 2007, 7:16:35 PM6/12/07
to
Steve Howell wrote:
> --- "Anders J. Munch" <20...@jmunch.dk> wrote:
>
>>Converting tail-recursion to iteration is trivial,
>>and perfectly reasonable for
>>a human to do by hand. You add an outer "while
>>True"-loop, the recursive call
>>becomes a tuple assignment, and other code paths end
>>with a break out of the
>>loop. Completely mechanical and the resulting code
>>doesn't even look that bad.
>>
>
>
> I have to ask the stupid question. If a human can do
> this completely mechanically, why can't a machine?

That's what tail recursion optimization is.

It's not a high-priority optimization for CPython.
The language has good iteration primitives, so iterating
via simple recursion is relatively rare and not, typically,
a performance bottleneck. In LISP, one might iterate over
a list using tail recursion, but in Python, that would be
both silly and inefficient.

John Nagle

Steven D'Aprano

unread,
Jun 12, 2007, 7:32:38 PM6/12/07
to
On Tue, 12 Jun 2007 15:51:07 -0700, Steve Howell wrote:

>
> --- "Anders J. Munch" <20...@jmunch.dk> wrote:
>>
>> Converting tail-recursion to iteration is trivial,
>> and perfectly reasonable for
>> a human to do by hand. You add an outer "while
>> True"-loop, the recursive call
>> becomes a tuple assignment, and other code paths end
>> with a break out of the
>> loop. Completely mechanical and the resulting code
>> doesn't even look that bad.
>>
>
> I have to ask the stupid question. If a human can do
> this completely mechanically, why can't a machine?


They can and do -- some compilers optimize tail-recursion into iteration.

Python doesn't, as a deliberate design decision, because to do so would
lose traceback information.


--
Steven.

Steve Howell

unread,
Jun 12, 2007, 7:38:21 PM6/12/07
to pytho...@python.org

--- Steven D'Aprano
<st...@REMOVE.THIS.cybersource.com.au> wrote:

Ok, that didn't occur to me. What does occur to me,
though, is that tracebacks for recursive algorithms
can get kind of, well, recursive, so I wonder if
there's a tradeoff somewhere.




Alexander Schmolck

unread,
Jun 12, 2007, 8:42:51 PM6/12/07
to
Steven D'Aprano <st...@REMOVE.THIS.cybersource.com.au> writes:

> On Mon, 11 Jun 2007 01:28:09 +0100, Alexander Schmolck wrote:
>
>> Steven D'Aprano <st...@REMOVE.THIS.cybersource.com.au> writes:
>>
>>> On Sat, 09 Jun 2007 22:42:17 +0100, Alexander Schmolck wrote:
>>>
>>>>> As for why tail calls are not optimized out, it was decided that being able
>>>>> to have the stack traces (with variable information, etc.) was more useful
>>>>> than offering tail call optimization
>>>>
>>>> I don't buy this.
>>>
>>> Do you mean you don't believe the decision was made, or you don't agree
>>> with the decision?
>>
>> Neither. I don't believe the rationale stated in this thread to be the true
>> reason.
>
>
> Don't keep us in suspense. What do you believe is the true reason?

It's easier to spot that some rationalization is bogus than to uncover the
true underlying causes; I'm pretty sure it's more a Gestalt thing than a
compelling technical reason (I guess Guido's distaste for Scheme also plays a
role). Not that I discount that out of hand -- maybe all that's great about
Python is due to Guido being exceptionally good at making such judgements.

>>> Are you volunteering? If you are, I'm sure your suggestion will be welcomed
>>> gratefully.
>>
>> I rather doubt it. Guido has stated quite clearly that he's not
>> interested in incorporating this feature.
>
> He's not the only one who gets to make these decisions.

This is news to me. Who else does?

> But even if he uses his veto to prevent tail-recursion optimization from
> being put into the main branch, there are other options.

That don't involve abducting his kids?

>>>>> (do what I say).
>>>>
>>>> Where did you say run out of memory and fail? More importantly how do
>>>> you say "don't run out of memory and fail"?
>>>
>>> If we can live with a certain amount of "arbitrary failures" in simple
>>> arithmetic,
>>
>> I prefer not too, and thus when possible avoid to use languages where
>> ``a + b`` is liable to fail arbitrarily (such as C, where the behavior
>> will often be undefined).
>
> That's not the sort of arbitrary failure I was discussing, but for that
> matter Python is one of those languages.

Apart from floating point arithmetic, simple arithmetic doesn't tend to fail
arbitrarily in Python, as far as I'm aware. You can of course create your very
own classes specifically to get broken arithmetic, but that doesn't strike me
as "simple" arithmetic anymore.

> Perhaps Python is not the language for you?

Do you also happen to know what would be?

> Correct me if I'm wrong, but surely it is C++ that can have arbitrary
> behaviour for "a + b", not C?

``INT_MAX + 1`` can do precisely anything in C.

>>> You can always hand-optimize it yourself.
>>
>> Not tail calls, in general, no.
>
> Sorry, how does that work? You're suggesting that there is an algorithm
> which the compiler could follow to optimize away tail-recursion, but human
> beings can't follow the same algorithm? Now I'm confused.

Does it also confuse you that if I give you a 500x500 matrix A you won't be
able to figure out a single element in A^-1 by doing mental arithmetic (or
using pen and paper), although my computer manages just fine and I'm happy to
give you the algorithm it uses?

'as

Alexander Schmolck

unread,
Jun 12, 2007, 8:45:13 PM6/12/07
to
"Anders J. Munch" <20...@jmunch.dk> writes:

> Like Steven said, tail-call optimisation is not necessary as you can always
> hand-optimise it yourself.

Care to demonstrate on some code written in CPS (a compiler or parser, say)?

'as

John Nagle

unread,
Jun 12, 2007, 11:12:04 PM6/12/07
to
Alexander Schmolck wrote:
> Steven D'Aprano <st...@REMOVE.THIS.cybersource.com.au> writes:
>>On Mon, 11 Jun 2007 01:28:09 +0100, Alexander Schmolck wrote:
>>>Steven D'Aprano <st...@REMOVE.THIS.cybersource.com.au> writes:

>>Don't keep us in suspense. What do you believe is the true reason?
>
>
> It's easier to spot that some rationalization is bogus than to uncover the
> true underlying causes; I'm pretty sure it's more a Gestalt thing than a
> compelling technical reason.

There's a real reason. Remember, functions are dynamically
replaceable. The compiler would have to detect that the function
doesn't modify or replace itself while recursing for this optimization
to be valid. Worst case, another thread could replace the function
while it was recursing, invalidating the tail recursion optimization.
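
A small illustration of the hazard (a hypothetical countdown example, not
John's code):

    def countdown(n):
        if n == 0:
            return "done"
        return countdown(n - 1)    # 'countdown' is looked up at call time

    original = countdown

    def countdown(n):              # rebinds the global name
        return "hijacked"

    print(original(3))             # -> "hijacked", not "done"

An optimizer that turned the recursive call into a jump back to the top of
the original function would silently change this program's behavior.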

John Nagle

Steve Howell

unread,
Jun 13, 2007, 12:15:22 AM6/13/07
to John Nagle, pytho...@python.org

--- John Nagle <na...@animats.com> wrote:
>
> There's a real reason. Remember, functions are
> dynamically
> replaceable. The compiler would have to detect that
> the function
> doesn't modify or replace itself while recursing for
> this optimization
> to be valid. Worst case, another thread could
> replace the function
> while it was recursing, invalidating the tail
> recursion optimization.
>

You would just change the language definition to say
that once you enter f(), any call to f() from within
f() behaves as if the recursively called f() still
points to the originally bound version of f. To want
any other behavior would be absurd, anyhow.

I could see the utility of this restriction even
without optimization.
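
You can get roughly that behavior today by routing the recursive call
through a closure cell instead of the global namespace -- a hypothetical
sketch, along these lines:

    def make_countdown():
        def countdown(n):
            # This 'countdown' resolves in the enclosing scope, so
            # rebinding the global name can't hijack the recursion.
            return "done" if n == 0 else countdown(n - 1)
        return countdown

    countdown = make_countdown()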




Neil Cerutti

unread,
Jun 13, 2007, 8:15:18 AM6/13/07
to
On 2007-06-12, Anders J. Munch <20...@jmunch.dk> wrote:
> Paul Rubin wrote:
>> Steven D'Aprano <st...@REMOVE.THIS.cybersource.com.au> writes:
>>>> Not tail calls, in general, no.
>>> Sorry, how does that work? You're suggesting that there is an
>>> algorithm which the compiler could follow to optimize away
>>> tail-recursion, but human beings can't follow the same
>>> algorithm?
>>>
>>> Now I'm confused.
>>
>> The usual compiler method is to translate the code into
>> continuation-passing style and thereby gain tail-recursion
>> optimization automagically.
>
> There's no need to go into CPS just to optimise tail-recursion.
> After all, compilers were optimising tail-calls decades before
> Appel's work on SML/NJ.
>
> Converting tail-recursion to iteration is trivial, and
> perfectly reasonable for a human to do by hand.

For simple recursive tail calls, yeah, it can be. Translating a
tail-recursive Factorial function into a while loop is easy. But
tail-call optimization technically works for any tail call,
including mutual recursion and non-recursive tail calls. You
can't reasonably hand-optimize away the stack frame for all
tail calls.

def foo(x):
    return bar(x)   # a non-recursive tail call

The only way to hand-optimize the call to bar is to inline it
yourself.
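
Mutual recursion is the same story. A classic pair (hypothetical, not from
the post) -- every call is a tail call, yet there is no single function to
wrap in a loop without merging the two by hand:

    def is_even(n):
        if n == 0:
            return True
        return is_odd(n - 1)

    def is_odd(n):
        if n == 0:
            return False
        return is_even(n - 1)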

--
Neil Cerutti
Will the highways on the Internet become more few? --George W. Bush

Neil Cerutti

unread,
Jun 13, 2007, 8:15:18 AM6/13/07
to
On 2007-06-13, Steve Howell <show...@yahoo.com> wrote:
> You would just change the language definition to say that once
> you enter f(), any call to f() from within f() behaves as if
> the recursively called f() still points to the originally bound
> version of f. To want any other behavior would be absurd,
> anyhow.

There's a reason it's generally referred to as "tail-call"
optimization and not "tail-recursive" optimization. The former is
more general, and, I believe, easier to implement than the
latter.

--
Neil Cerutti
The peace-making meeting scheduled for today has been cancelled due to a
conflict. --Church Bulletin Blooper

Anders J. Munch

unread,
Jun 13, 2007, 12:57:47 PM6/13/07
to
Neil Cerutti wrote:
> On 2007-06-12, Anders J. Munch <20...@jmunch.dk> wrote:
>> Converting tail-recursion to iteration is trivial, and
>> perfectly reasonable for a human to do by hand.
>
> For simple recursive tail calls, yeah, it can be. Translating a
> tail-recursive Factorial function into a while loop is easy. But
> tail-call optimization technically works for any tail-call,
> including mutual recursion, and non-recursive tail-calls. You
> can't reasonably hand-optimize away the stack frame for all
> tail-calls.

I may have misunderstood; I thought we were talking about tail recursion only.
The general tail-call optimisation, where all leaf calls become jumps and the
called function usurps the current stack frame, is a different ballgame
entirely. There's no pure-Python transformation for that, but that still
doesn't mean you need CPS.

General tail-call optimisation is of course completely out-of-bounds for
Python, because it ruins tracebacks -- unlike tail recursion, which could
preserve the traceback information with recursion counters.

- Anders

Anders J. Munch

unread,
Jun 13, 2007, 1:01:02 PM6/13/07
to

I meant tail recursion, not tail-call -- sorry, that was just my fingers
trying to save typing.

- Anders

Neil Cerutti

unread,
Jun 13, 2007, 2:22:54 PM6/13/07
to
On 2007-06-13, Anders J. Munch <20...@jmunch.dk> wrote:
> General tail-call optimisation is of course completely
> out-of-bounds for Python, because it ruins tracebacks -- unlike
> tail recursion, which could preserve the traceback information
> with recursion counters.

Is it really ruined? To use a similar example:

def foo(x):
    bar(x+1)

def bar(x):
    if x > 10:
        raise ValueError
    else:
        foo(x+2)

Today, when I call foo(4), I get something like:

C:\WINNT\system32\cmd.exe /c python temp.py
Traceback (most recent call last):
  File "temp.py", line 529, in <module>
    foo(4)
  File "temp.py", line 521, in foo
    bar(x+1)
  File "temp.py", line 527, in bar
    foo(x+2)
  File "temp.py", line 521, in foo
    bar(x+1)
  File "temp.py", line 527, in bar
    foo(x+2)
  File "temp.py", line 521, in foo
    bar(x+1)
  File "temp.py", line 525, in bar
    raise ValueError
ValueError
shell returned 1

With tail-call optimization you'd get something like:

C:\WINNT\system32\cmd.exe /c python temp.py
Traceback (most recent call last):
  File "temp.py", line 529, in <module>
    foo(4)
  File "temp.py", line 525, in bar
    raise ValueError
ValueError
shell returned 1

What makes the latter harder to work with?

--
Neil Cerutti

Carsten Haese

unread,
Jun 13, 2007, 2:34:59 PM6/13/07
to pytho...@python.org

The fact that you don't see how many call levels down your algorithm got
before throwing an exception. This may be an important clue in debugging
a recursive algorithm.

--
Carsten Haese
http://informixdb.sourceforge.net


Neil Cerutti

unread,
Jun 13, 2007, 2:51:27 PM6/13/07
to
On 2007-06-13, Neil Cerutti <hor...@yahoo.com> wrote:
> On 2007-06-13, Anders J. Munch <20...@jmunch.dk> wrote:
>> General tail-call optimisation is of course completely
>> out-of-bounds for Python, because it ruins tracebacks -- unlike
>> tail recursion, which could preserve the traceback information
>> with recursion counters.
>
> Is it really ruined? To use a similar example:

I found some interesting notes by Alex Martelli pertaining to
tail-call optimisation, and my assumption that tail-call
optimization is easier to implement than tail-recursive
optimization may have been naive. ;)

http://groups.google.com/group/comp.lang.python/msg/1a7cccc103c1bd70?hl=en&

Moreover, there are (or were) technical reasons that you can't do
tail-call optimization in Python, which can't even recognize
tail-calls at compile time. According to Tim Peters:

http://groups.google.com/group/comp.lang.python/msg/ea1de1e35aefb828?hl=en&

--
Neil Cerutti

Terry Reedy

unread,
Jun 13, 2007, 4:49:04 PM6/13/07