
Python's "only one way to do it" philosophy isn't good?


WaterWalk

Jun 9, 2007, 1:49:03 AM
I've just read an article "Building Robust Systems" by Gerald Jay
Sussman. The article is here:
http://swiss.csail.mit.edu/classes/symbolic/spring07/readings/robust-systems.pdf

In it there is a footnote which says:
"Indeed, one often hears arguments against building flexibility into an
engineered system. For example, in the philosophy of the computer
language Python it is claimed: 'There should be one--and preferably
only one--obvious way to do it.'[25] Science does not usually proceed
this way: In classical mechanics, for example, one can construct
equations of motion using Newtonian vectorial mechanics, or using a
Lagrangian or Hamiltonian variational formulation.[30] In the cases
where all three approaches are applicable they are equivalent, but
each has its advantages in particular contexts."

I'm not sure how reasonable this statement is; personally, I like
Python's simplicity, power and elegance. So I post it here and hope to
see some inspiring comments.

Gabriel Genellina

Jun 9, 2007, 2:35:48 AM
to pytho...@python.org
On Sat, 09 Jun 2007 02:49:03 -0300, WaterWalk <toolm...@163.com>
wrote:

I think the key is the word you omitted in the subject: "obvious". There
should be one "obvious" way to do it. From what I can remember of my first
love (Physics): if you have a small ball moving inside a spherical cup, it
would be almost crazy to use Cartesian orthogonal coordinates and Newton's
laws to solve it - the "obvious" way would be to use spherical coordinates
and the Lagrangian formulation (or at least I hope so - surely
knowledgeable people will find the right way more "obvious").
All classical mechanics formulations may be equivalent, but in certain
cases one is much more suited than the others.


--
Gabriel Genellina

Terry Reedy

Jun 9, 2007, 3:40:42 AM
to pytho...@python.org

"WaterWalk" <toolm...@163.com> wrote in message
news:1181368143.8...@r19g2000prf.googlegroups.com...

| I've just read an article "Building Robust Systems" by Gerald Jay
| Sussman. The article is here:
| http://swiss.csail.mit.edu/classes/symbolic/spring07/readings/robust-systems.pdf
|
| In it there is a footnote which says:
| "Indeed, one often hears arguments against building flexibility into an
| engineered system. For example, in the philosophy of the computer
| language Python it is claimed:

For him to imply that Python is anti-flexibility is wrong. Very wrong.
He should look in a mirror. See below.

| 'There should be one--and preferably only one--obvious way to do
| it.'[25] Science does not usually proceed this way: In classical
| mechanics, for example, one can construct equations of motion using
| Newtonian vectorial mechanics, or using a Lagrangian or Hamiltonian
| variational formulation.[30] In the cases where all three approaches
| are applicable they are equivalent, but each has its advantages in
| particular contexts."

And in those contexts, one would hope that the method with advantages is
somehow the obvious way to do it. Otherwise beginners might become like
Buridan's ass.

So I dispute that science is as different as he claims. And I do not see
any real value in the statement, since it does not say anything useful
to the reader, at least not in this snippet.

| I'm not sure how reasonable this statement is and personally I like
| Python's simplicity, power and elegance. So I put it here and hope to
| see some inspiring comments.

How much has Mr. Sussman actually programmed in Python and what actual
problems did he find with the *implementation* of the philosophy? Without
something concrete, the complaint is rather bogus.

But here is what I find funny (and a bit maddening): G. J. Sussman is one
of the inventors of the Lisp dialect Scheme, a minimalist language that for
some things has only one way to do it, let alone one obvious way. Scheme
would be like physics with only one of the three ways. After all, if they
are equivalent, only one is needed.

For example, consider scanning the items in a collection. In Python, you
have a choice of recursion (normal or tail), while loops, and for
statements. For statements are usually the obvious way, but the other two
are available for personal taste and for special situations such as
walking a tree (where one might use recursion to write the generator that
can then be used by for loops). In Scheme, I believe you just have
recursion. Since iteration and recursion are equivalent, why have both?

Terry Jan Reedy

James Stroud

Jun 9, 2007, 6:16:01 AM
Terry Reedy wrote:
> In Python, you have a choice of recursion (normal or tail)

Please explain this. I remember reading on this newsgroup that an
advantage of ruby (wrt python) is that ruby has tail recursion, implying
that python does not. Does python have fully optimized tail recursion as
described in the tail recursion Wikipedia entry? Under what
circumstances can one count on the python interpreter recognizing the
possibility for optimized tail recursion?

James


=====

Disclaimer: Mention of more than one programming language in post does
not imply author's desire to begin language v. language holy battle. The
author does not program in [some or all of the other languages mentioned
aside from the language topical to the newsgroup] and has no opinions on
the merits or shortcomings of said language or languages.

=====

Cousin Stanley

Jun 9, 2007, 6:42:48 AM

> ....

> In scheme, I believe you just have recursion.
> ....

Cousin TJR ....

I'm a total scheme rookie starting only about 3 days ago
and one of the mechanisms I went looking for was a technique
for iteration ....

Found in the scheme docs about iteration supplied
via the reduce package ....

"Iterate and reduce are extensions of named-let
for writing loops that walk down one or more sequences
such as the elements of a list or vector, the characters
read from a port, or arithmetic series .... "

The following scheme session illustrates a trivial example ....

> , open reduce
>
> ( define ( list_loop this_list )
( iterate loop
( ( list* this_item this_list ) ) ; iterate expression
( ( new_list '( ) ) ) ; state expression
( loop ( cons ( * 2 this_item ) new_list ) ) ; body expression
( reverse new_list ) ) ) ; final expression
; no values returned
>
> ( define L '( 1 2 3 4 5 ) )
; no values returned
>
( define result_i ( list_loop L ) )
; no values returned
>
> result_i
'(2 4 6 8 10)
>

However, just as in Python the map function
might be both easier to code and more readable
in many cases ....

> ( define ( x2 n ) ( * 2 n ) )
; no values returned
>
> ( define result_m ( map x2 L ) )
; no values returned
>
> result_m
'(2 4 6 8 10)

Note ....

No lambdas in my scheme code either .... ;-)

--
Stanley C. Kitching
Human Being
Phoenix, Arizona



Bjoern Schliessmann

Jun 9, 2007, 6:53:54 AM
Gabriel Genellina wrote:

> For what I can
> remember of my first love (Physics): if you have a small ball
> moving inside a spherical cup, it would be almost crazy to use
> cartesian orthogonal coordinates and Newton's laws to solve it -
> the "obvious" way would be to use spherical coordinates and the
> Lagrangian formulation (or at least I hope so

Yep, that's right.

> - surely knowledgeable people will find more "obviously" which is
> the right way).

No, this case is IMHO almost classical. Movement with planar
constraints can be solved quite easily using Lagrange.

> All classical mechanics formulations may be equivalent, but
> in certain cases one is much more suited that the others.

Or: Lagrange is the only obvious way to describe movement with
constraints.

Regards,


Björn

--
BOFH excuse #80:

That's a great computer you have there; have you considered how it
would work as a BSD machine?

Kay Schluehr

Jun 9, 2007, 7:28:10 AM
On Jun 9, 12:16 pm, James Stroud <jstr...@mbi.ucla.edu> wrote:
> Terry Reedy wrote:
> > In Python, you have a choice of recursion (normal or tail)
>
> Please explain this. I remember reading on this newsgroup that an
> advantage of ruby (wrt python) is that ruby has tail recursion, implying
> that python does not.

Proof by rumour? You can use first-class continuations in Ruby to
eliminate tail calls and define higher-order function wrappers
(like Python decorators). But I wouldn't call this "fully
optimized".

> Does python have fully optimized tail recursion as
> described in the tail recursion Wikipedia entry?

No.

Terry Reedy

Jun 9, 2007, 12:12:53 PM
to pytho...@python.org

"James Stroud" <jst...@mbi.ucla.edu> wrote in message
news:E5vai.858$TC1...@newssvr17.news.prodigy.net...

| Terry Reedy wrote:
| > In Python, you have a choice of recursion (normal or tail)
|
| Please explain this.

I am working on a paper for Python Papers that will. It was inspired by
the question 'why doesn't Python do tail-recursion optimization'.

tjr

Terry Reedy

Jun 9, 2007, 1:00:11 PM
to pytho...@python.org

"Cousin Stanley" <cousin...@hotmail.com> wrote in message
news:11813857...@sp12lax.superfeed.net...

| > In scheme, I believe you just have recursion.

I was referring to the original minimalist core language developed by Guy
Steele and Sussman and as I remember it being used in the original edition
of SICP (see Wikipedia). I also remember statements explaining (truthfully)
that builtin iteration is not needed because it can be defined in terms of
tail recursion, which in Scheme is required to be optimized to be just as
space efficient.

I see in Wikipedia that Scheme has do loops (all versions?), but I do not
know if that was original or added. If the former, it was de-emphasized.
Hence my belief, even if mistaken.

| Cousin TJR ....
|
| I'm a total scheme rookie starting only about 3 days ago
| and one of the mechanisms I went looking for was a technique
| for iteration ....
|
| Found in the scheme docs about iteration supplied
| via the reduce package ....

Right. An add-on library package, not part of the core;-)

In Python, modules can add functions (and classes, etc), but not statement
syntax, so adding while statements defined in terms of recursion is not
possible.
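[To illustrate the point above: a module can add a while-like *function* defined in terms of tail recursion, even though it cannot add new statement syntax. A toy sketch, not anything from a real library:]

```python
def while_(cond, body, state):
    # a "while statement" as a plain function, defined via tail
    # recursion: threads an explicit state value through each pass
    if not cond(state):
        return state
    return while_(cond, body, body(state))

# count down from 5, accumulating the values seen
final = while_(lambda s: s[0] > 0,
               lambda s: (s[0] - 1, s[1] + [s[0]]),
               (5, []))
assert final == (0, [5, 4, 3, 2, 1])
```

The function works, but it cannot give you the *syntax* of a while statement - which is Terry's point about modules adding functions but not statements.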

Scheme is quite elegant and worth learning at least the basics of. My only
point was that Sussman is an odd person to be criticizing (somewhat
mistakenly) Python for being minimalist.

tjr

John Nagle

Jun 9, 2007, 1:58:57 PM
Bjoern Schliessmann wrote:
> Gabriel Genellina wrote:
>
>
>>For what I can
>>remember of my first love (Physics): if you have a small ball
>>moving inside a spherical cup, it would be almost crazy to use
>>cartesian orthogonal coordinates and Newton's laws to solve it -
>>the "obvious" way would be to use spherical coordinates and the
>>Lagrangian formulation (or at least I hope so

Having actually solved that problem in simulation, I can report
that it's easier in Cartesian coordinates. I used to use this as
a test of Falling Bodies, one of the first physics engines that
really worked on the hard cases.

Spherical coordinates seem attractive until you have to deal
with friction between the ball and cup. The ball is rotating, too,
and may be slipping with respect to the cup. Then the simple
Physics 101 approach isn't so simple any more.

John Nagle
Animats

Josiah Carlson

Jun 9, 2007, 4:05:05 PM
James Stroud wrote:
> Terry Reedy wrote:
>> In Python, you have a choice of recursion (normal or tail)
>
> Please explain this. I remember reading on this newsgroup that an
> advantage of ruby (wrt python) is that ruby has tail recursion, implying
> that python does not. Does python have fully optimized tail recursion as
> described in the tail recursion Wikipedia entry? Under what
> circumstances can one count on the python interpreter recognizing the
> possibility for optimized tail recursion?

Note that Terry said that you could do normal or tail recursion, he
didn't claim that either were optimized. As for why tail calls are not
optimized out, it was decided that being able to have the stack traces
(with variable information, etc.) was more useful than offering tail
call optimization (do what I say).

- Josiah


Alexander Schmolck

Jun 9, 2007, 5:42:17 PM
Josiah Carlson <josiah....@sbcglobal.net> writes:

> James Stroud wrote:
>> Terry Reedy wrote:
>>> In Python, you have a choice of recursion (normal or tail)
>>
>> Please explain this. I remember reading on this newsgroup that an advantage
>> of ruby (wrt python) is that ruby has tail recursion, implying that python
>> does not. Does python have fully optimized tail recursion as described in
>> the tail recursion Wikipedia entry? Under what circumstances can one count
>> on the python interpreter recognizing the possibility for optimized tail
>> recursion?
>
> Note that Terry said that you could do normal or tail recursion, he didn't
> claim that either were optimized.

Well yeah, but without the implication how do the two words "or tail" add to
the information content of the sentence?

> As for why tail calls are not optimized out, it was decided that being able
> to have the stack traces (with variable information, etc.) was more useful
> than offering tail call optimization

I don't buy this. What's more important: making code not fail arbitrarily
(and thus making approaches to certain problems feasible that otherwise
wouldn't be), or making it a bit easier to debug code that will fail
arbitrarily? Why not do tail-call optimization only in .pyo files and get
the best of both worlds?

> (do what I say).

Where did you say run out of memory and fail? More importantly how do you say
"don't run out of memory and fail"?

'as

Josiah Carlson

Jun 9, 2007, 6:52:32 PM
Alexander Schmolck wrote:
> Josiah Carlson <josiah....@sbcglobal.net> writes:
>
>> James Stroud wrote:
>>> Terry Reedy wrote:
>>>> In Python, you have a choice of recursion (normal or tail)
>>> Please explain this. I remember reading on this newsgroup that an advantage
>>> of ruby (wrt python) is that ruby has tail recursion, implying that python
>>> does not. Does python have fully optimized tail recursion as described in
>>> the tail recursion Wikipedia entry? Under what circumstances can one count
>>> on the python interpreter recognizing the possibility for optimized tail
>>> recursion?
>> Note that Terry said that you could do normal or tail recursion, he didn't
>> claim that either were optimized.
>
> Well yeah, but without the implication how do the two words "or tail" add to
> the information content of the sentence?

Normal and tail recursion are different, based upon whether or not one
can technically be considered to be done with the stack frame.

def normal(arg):
    # the multiplication happens *after* the recursive call returns,
    # so the stack frame is still needed
    if arg == 1:
        return 1
    return arg * normal(arg - 1)

def tail(arg, default=1):
    # the recursive call is the very last operation, so the frame
    # could in principle be discarded
    if arg == 1:
        return arg * default
    return tail(arg - 1, default * arg)


>> As for why tail calls are not optimized out, it was decided that being able
>> to have the stack traces (with variable information, etc.) was more useful
>> than offering tail call optimization
>
> I don't buy this. What's more important, making code not fail arbitrarily (and
> thus making approaches to certain problems feasible that otherwise wouldn't
> be) or making it a bit easier to debug code that will fail arbitrarily?
> Why not only do tail-call optimization in .pyo files and get the best of both
> worlds?

I didn't make the decisions, I'm just reporting what was decided upon.

Personally, I have never found the lack of tail call optimization an
issue, for two reasons. The first is that I typically write such
programs in an iterative fashion. And for recursion for which
tail call optimization doesn't work (the vast majority of recursive
algorithms I use), I typically write the algorithm recursively first,
verify its correctness, then convert it into an iterative version with an
explicit stack. I find it is good practice, and it would be necessary
regardless of whether Python did tail call optimization or not.
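[The recursive-to-iterative conversion described above might look like this for a toy tree-summing function; the names are illustrative, not Josiah's actual code.]

```python
def tree_sum_rec(node):
    # recursive version: a node is either a number or a list of nodes;
    # easy to verify, but limited by the interpreter's recursion depth
    if isinstance(node, list):
        return sum(tree_sum_rec(child) for child in node)
    return node

def tree_sum_iter(node):
    # same algorithm with an explicit stack: the stack list replaces
    # the call stack, so arbitrarily deep trees are fine
    total = 0
    stack = [node]
    while stack:
        current = stack.pop()
        if isinstance(current, list):
            stack.extend(current)
        else:
            total += current
    return total

tree = [1, [2, 3], [[4], 5]]
assert tree_sum_rec(tree) == tree_sum_iter(tree) == 15
```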


>> (do what I say).
>
> Where did you say run out of memory and fail? More importantly how do you say
> "don't run out of memory and fail"?

By virtue of Python's underlying implementation, Python "does what I
say", it doesn't "do what I mean". While I may not have explicitly
stated "run out of stack space", the underlying implementation *has*
limited stack space. You are stating that when you write a tail
recursive program, you want Python to optimize it away by destroying the
stack frames. And that's fine. There are tail-call optimization
decorators available, and if you dig into sourceforge, there should even
be a full patch to make such things available in a previous Python.

However, Python is not Lisp and is not partially defined by infinite
recursion (see sys.setrecursionlimit() ). Instead, it is limited by the
C call stack (in CPython), and makes guarantees regarding what will
always be available during debugging (the only thing that optimization
currently does in Python at present is to discard docstrings). If you
want to change what is available for debugging (to make things like tail
call optimization possible), you are free to write and submit a PEP.

In the mean time, you may need to do some source conversion.
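[One common shape for such a decorator is a trampoline: the function returns a zero-argument thunk instead of making the tail call directly, and a driver loop unwinds it in constant stack space. This is a generic sketch, not the specific decorator or SourceForge patch referred to above.]

```python
def trampoline(func):
    # the wrapped function signals a tail call by returning a callable
    # (a thunk); the loop below replaces recursion, keeping the C stack
    # flat.  caveat: real return values must not themselves be callable.
    def wrapper(*args, **kwargs):
        result = func(*args, **kwargs)
        while callable(result):
            result = result()
        return result
    return wrapper

def _countdown(n, acc=0):
    if n == 0:
        return acc
    # the thunk calls the *raw* function, not the wrapper,
    # so nested driver loops never build up
    return lambda: _countdown(n - 1, acc + n)

countdown = trampoline(_countdown)

# 100000 levels of "recursion" without touching sys.setrecursionlimit()
assert countdown(100000) == 100000 * 100001 // 2
```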


- Josiah

Steven D'Aprano

Jun 9, 2007, 8:23:41 PM
On Sat, 09 Jun 2007 22:42:17 +0100, Alexander Schmolck wrote:

>> As for why tail calls are not optimized out, it was decided that being able
>> to have the stack traces (with variable information, etc.) was more useful
>> than offering tail call optimization
>
> I don't buy this.

Do you mean you don't believe the decision was made, or you don't agree
with the decision?


> What's more important, making code not fail arbitrarily (and
> thus making approaches to certain problems feasible that otherwise
> wouldn't be) or making it a bit easier to debug code that will fail
> arbitrarily? Why not only do tail-call optimization in .pyo files and
> get the best of both worlds?

Are you volunteering? If you are, I'm sure your suggestion will be
welcomed gratefully.


>> (do what I say).
>
> Where did you say run out of memory and fail? More importantly how do
> you say "don't run out of memory and fail"?

If we can live with a certain amount of "arbitrary failures" in simple
arithmetic, then the world won't end if tail recursion isn't optimized
away by the compiler. You can always hand-optimize it yourself.

dont_run_out_of_memory_and_fail = 10**(10**100) # please?


--
Steven.

Steven D'Aprano

Jun 9, 2007, 8:34:21 PM
On Sat, 09 Jun 2007 22:52:32 +0000, Josiah Carlson wrote:

> the only thing that optimization
> currently does in Python at present is to discard docstrings

Python, or at least CPython, does more optimizations than that. Aside from
run-time optimizations like interned strings etc., there are a small
number of compile-time optimizations done.

Running Python with the -O (optimize) flag tells Python to ignore
assert statements. Using -OO additionally removes docstrings.

Regardless of the flag, in function (and class?) definitions like the
following:

def function(args):
    "Doc string"
    x = 1
    s = "this is a string constant"
    "and this string is treated as a comment"
    return s*x

The string-comment is ignored by the compiler just like "real" comments.
(The same doesn't necessarily hold for other data types.)


Some dead code is also optimized away:

>>> def function():
...     if 0:
...         print "dead code"
...     return 2
...
>>> dis.dis(function)
  4           0 LOAD_CONST               1 (2)
              3 RETURN_VALUE


Lastly, in recent versions (starting with 2.5 I believe) Python includes a
peephole optimizer that implements simple constant folding:

# Python 2.4.3
>>> dis.dis(lambda: 1+2)
  1           0 LOAD_CONST               1 (1)
              3 LOAD_CONST               2 (2)
              6 BINARY_ADD
              7 RETURN_VALUE

# Python 2.5
>>> dis.dis(lambda: 1+2)
  1           0 LOAD_CONST               2 (3)
              3 RETURN_VALUE

The above all holds for CPython. Other Pythons may implement other
optimizations.

--
Steven.

James Stroud

Jun 9, 2007, 8:46:20 PM
Kay Schluehr wrote:
> On Jun 9, 12:16 pm, James Stroud <jstr...@mbi.ucla.edu> wrote:
>> Terry Reedy wrote:
>>> In Python, you have a choice of recursion (normal or tail)
>> Please explain this. I remember reading on this newsgroup that an
>> advantage of ruby (wrt python) is that ruby has tail recursion, implying
>> that python does not.
>
> Proof by rumour?

"Proof" if you define "proof" by asking for clarification about a vague
recollection of an even more vague posting. I think now I'll prove
Fermat's Last Theorem by hailing a cab.

bruno.des...@gmail.com

Jun 10, 2007, 7:36:35 AM
On Jun 9, 12:16 pm, James Stroud <jstr...@mbi.ucla.edu> wrote:
> Terry Reedy wrote:
> > In Python, you have a choice of recursion (normal or tail)
>
> Please explain this. I remember reading on this newsgroup that an
> advantage of ruby (wrt python) is that ruby has tail recursion, implying
> that python does not. Does python have fully optimized tail recursion as
> described in the tail recursion Wikipedia entry? Under what
> circumstances can one count on the python interpreter recognizing the
> possibility for optimized tail recursion?
>

I'm afraid Terry is wrong here, at least if he meant that CPython had
tail recursion *optimization*.

(and just for those who don't know yet, it's not a shortcoming, it's a
design choice.)


Josiah Carlson

Jun 10, 2007, 10:27:27 AM
Steven D'Aprano wrote:
> On Sat, 09 Jun 2007 22:52:32 +0000, Josiah Carlson wrote:
>
>> the only thing that optimization
>> currently does in Python at present is to discard docstrings
>
> Python, or at least CPython, does more optimizations than that. Aside from
> run-time optimizations like interned strings etc., there are a small
> number of compiler-time optimizations done.
>
> Running Python with the -O (optimize) flag tells Python to ignore
> assert statements. Using -OO additionally removes docstrings.

Oh yeah, asserts. I never run with -O, and typically don't use asserts,
so having or not having either isn't a big deal for me.

> Regardless of the flag, in function (and class?) definitions like the
> following:
>
> def function(args):
>     "Doc string"
>     x = 1
>     s = "this is a string constant"
>     "and this string is treated as a comment"
>     return s*x
>
> The string-comment is ignored by the compiler just like "real" comments.
> (The same doesn't necessarily hold for other data types.)

I would guess it is because some other data types may have side-effects.
On the other hand, a peephole optimizer could be written to trim out
unnecessary LOAD_CONST/POP_TOP pairs.

> Some dead code is also optimized away:

Obviously dead code removal happens regardless of optimization level in
current Pythons.

> Lastly, in recent versions (starting with 2.5 I believe) Python includes a
> peephole optimizer that implements simple constant folding:

Constant folding happens regardless of optimization level in current
Pythons.


So really, assert and docstring removals. Eh.

- Josiah

John Nagle

Jun 10, 2007, 12:43:47 PM
Josiah Carlson wrote:
> Steven D'Aprano wrote:
>
>> On Sat, 09 Jun 2007 22:52:32 +0000, Josiah Carlson wrote:
>>
>>> the only thing that optimization currently does in Python at present
>>> is to discard docstrings
>>
>>
>> Python, or at least CPython, does more optimizations than that. Aside
>> from
>> run-time optimizations like interned strings etc., there are a small
>> number of compiler-time optimizations done.
>>
>> Running Python with the -O (optimize) flag tells Python to ignore
>> assert statements. Using -OO additionally removes docstrings.
...

>
> I would guess it is because some other data types may have side-effects.
> On the other hand, a peephole optimizer could be written to trim out
> unnecessary LOAD_CONST/POP_TOP pairs.
>
>> Some dead code is also optimized away:
>
> Obviously dead code removal happens regardless of optimization level in
> current Pythons.
>
>> Lastly, in recent versions (starting with 2.5 I believe) Python
>> includes a
>> peephole optimizer that implements simple constant folding:
>
> Constant folding happens regardless of optimization level in current
> Pythons.

> So really, assert and docstring removals. Eh.

It's hard to optimize Python code well without global analysis.
The problem is that you have to make sure that a long list of "weird
things", like modifying code or variables via getattr/setattr, aren't
happening before doing significant optimizations. Without that,
you're doomed to a slow implementation like CPython.

ShedSkin, which imposes some restrictions, is on the right track here.
The __slots__ feature is useful but doesn't go far enough.
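[For reference, __slots__ already enforces a piece of the restriction being proposed; a minimal example:]

```python
class Point(object):
    __slots__ = ('x', 'y')  # instances get no __dict__: only x and y exist

    def __init__(self, x, y):
        self.x = x
        self.y = y

p = Point(1, 2)
try:
    p.z = 3          # dynamic attribute creation is rejected
    blocked = False
except AttributeError:
    blocked = True
assert blocked
```

This forbids adding new attributes, but it still requires an explicit declaration and does nothing about replacing methods or about implicit typing - hence "doesn't go far enough".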

I'd suggest defining "simpleobject" as the base class, instead of "object",
which would become a derived class of "simpleobject". Objects descended
directly from "simpleobject" would have the following restrictions:

- "getattr" and "setattr" are not available (as with __slots__)
- All class member variables must be initialized in __init__, or in
  functions called by __init__. The effect is like __slots__, but you
  don't have to explicitly write declarations.
- Class members are implicitly typed with the type of the first thing
  assigned to them. This is the ShedSkin rule. It might be useful to
  allow assignments like

      self.str = None(string)

  to indicate that a slot holds strings, but currently has the null
  string.
- Function members cannot be modified after declaration. Subclassing is
  fine, but replacing a function member via assignment is not. This
  allows inlining of calls to small functions, which is a big win.
- Private function members (self._foo and self.__foo) really are
  private and are not callable outside the class definition.

You get the idea. This basically means that "simpleobject" objects have
roughly the same restrictions as C++ objects, for which heavy compile time
optimization is possible. Most Python classes already qualify for
"simpleobject". And this approach doesn't require un-Pythonic stuff like
declarations or extra "decorators".

With this, the heavy optimizations are possible. Strength reduction. Hoisting
common subexpressions out of loops. Hoisting reference count updates out of
loops. Keeping frequently used variables in registers. And elimination of
many unnecessary dictionary lookups.

Python could get much, much faster. Right now CPython is said to be 60X slower
than C. It should be possible to get at least an order of magnitude over
CPython.

John Nagle


Terry Reedy

Jun 10, 2007, 1:38:36 PM
to pytho...@python.org

<bruno.des...@gmail.com> wrote in message
news:1181475395.7...@m36g2000hse.googlegroups.com...

| > Terry Reedy wrote:
| > > In Python, you have a choice of recursion (normal or tail)

[snip Stroud questions]

| I'm afraid Terry is wrong here, at least if he meant that CPython had
| tail recursion *optimization*.

NO!!!
I did not mean that or imply that in any way.

| (and just for those who don't know yet, it's not a shortcoming, it's a
| design choice.)

And I already noted in a followup that I am working on a Python Papers
paper explaining that choice, including Guido's claim that 'for statements
are better'.

So frankly I am a little annoyed that you dragged my name into your answer
to Stroud when you should have succinctly said 'No, never', or better,
nothing at all, as someone else already did say that. Read more of the
thread before jumping in and ascribing ignorance to people.

Terry Jan Reedy

Steve Howell

Jun 10, 2007, 6:43:05 PM
to John Nagle, pytho...@python.org

--- John Nagle <na...@animats.com> wrote:
>
> With this, the heavy optimizations are possible.
> Strength reduction. Hoisting
> common subexpressions out of loops. Hoisting
> reference count updates out of
> loops. Keeping frequently used variables in
> registers. And elimination of
> many unnecessary dictionary lookups.
>

To the extent that some of these optimizations could
be achieved by writing better Python code, it would
be nice for optimization tools to have a "suggest" mode.
For example, if I code a Fourier series and duplicate
the subexpression n*f*t, it would be nice for a tool
to tell me that I should refactor that expression.
Something like this:

n*f*t should be refactored out of this expression,
assuming multiplication has no desired side effects for
n, f, and t.
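[A hand-applied version of that suggestion might look like this; the function names and constants are hypothetical.]

```python
import math

def fourier_term(n, f, t, a, b):
    # n*f*t appears twice: the duplicated subexpression a tool could flag
    return (a * math.cos(2 * math.pi * n * f * t)
            + b * math.sin(2 * math.pi * n * f * t))

def fourier_term_hoisted(n, f, t, a, b):
    # the suggested refactoring: hoist the common subexpression into
    # a local so it is computed once
    phase = 2 * math.pi * n * f * t
    return a * math.cos(phase) + b * math.sin(phase)

assert abs(fourier_term(3, 2.0, 0.1, 1.0, 0.5)
           - fourier_term_hoisted(3, 2.0, 0.1, 1.0, 0.5)) < 1e-12
```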




Alexander Schmolck

Jun 10, 2007, 8:28:09 PM
Steven D'Aprano <st...@REMOVE.THIS.cybersource.com.au> writes:

> On Sat, 09 Jun 2007 22:42:17 +0100, Alexander Schmolck wrote:
>
>>> As for why tail calls are not optimized out, it was decided that being able
>>> to have the stack traces (with variable information, etc.) was more useful
>>> than offering tail call optimization
>>
>> I don't buy this.
>
> Do you mean you don't believe the decision was made, or you don't agree
> with the decision?

Neither. I don't believe the rationale stated in this thread to be the true
reason.

> Are you volunteering? If you are, I'm sure your suggestion will be welcomed
> gratefully.

I rather doubt it. Guido has stated quite clearly that he's not interested
in incorporating this feature.



>>> (do what I say).
>>
>> Where did you say run out of memory and fail? More importantly how do
>> you say "don't run out of memory and fail"?
>
> If we can live with a certain amount of "arbitrary failures" in simple
> arithmetic,

I prefer not to, and thus when possible avoid using languages where ``a +
b`` is liable to fail arbitrarily (such as C, where the behavior will often
be undefined).

> then the world won't end if tail recursion isn't optimized away by the
> compiler.

I'd personally much rather have arithmetic working properly. Unfortunately
this is not an option at the moment, although quite a few smart people are
working on it and although it is an immensely costly problem.

> You can always hand-optimize it yourself.

Not tail calls, in general, no.

'as

John Nagle

Jun 10, 2007, 9:51:55 PM
Steve Howell wrote:
> --- John Nagle <na...@animats.com> wrote:
>
>>With this, the heavy optimizations are possible.
>>Strength reduction. Hoisting
>>common subexpressions out of loops. Hoisting
>>reference count updates out of
>>loops. Keeping frequently used variables in
>>registers. And elimination of
>>many unnecessary dictionary lookups.
>>
> To the extent that some of these optimizations could
> be achieved by writing better Python code, it would
> nice for optimization tools to have a "suggest" mode.
> For example, if I code a Fourier series and duplicate
> the subexpression n*f*t, it would be nice for a tool
> to tell me that I should refactor that expression.
> Something like this:
>
> n*f*t should be refactored out of this expression,
> assuming multiplication has no desired side effects for
> n, f, and t.

Too labor-intensive. These are well-understood optimizations
that can be done quite well automatically. The problem is Python's
gratuitous dynamism - it's hard to tell, when examining code, if
something can be patched by other code elsewhere. Where "setattr"
is allowed, the compiler has to assume side effects almost everywhere.

John Nagle

Steven D'Aprano

unread,
Jun 10, 2007, 11:22:28 PM6/10/07
to
On Mon, 11 Jun 2007 01:28:09 +0100, Alexander Schmolck wrote:

> Steven D'Aprano <st...@REMOVE.THIS.cybersource.com.au> writes:
>
>> On Sat, 09 Jun 2007 22:42:17 +0100, Alexander Schmolck wrote:
>>
>>>> As for why tail calls are not optimized out, it was decided that being able
>>>> to have the stack traces (with variable information, etc.) was more useful
>>>> than offering tail call optimization
>>>
>>> I don't buy this.
>>
>> Do you mean you don't believe the decision was made, or you don't agree
>> with the decision?
>
> Neither. I don't believe the rationale stated in this thread to be the true
> reason.


Don't keep us in suspense. What do you believe is the true reason?


>> Are you volunteering? If you are, I'm sure your suggestion will be welcomed
>> gratefully.
>
> I rather doubt it. Guido has stated quite clearly that he's not
> interested in incorporating this feature.

He's not the only one who gets to make these decisions. But even if he
uses his veto to prevent tail-recursion optimization from being put into
the main branch, there are other options.

>>>> (do what I say).
>>>
>>> Where did you say run out of memory and fail? More importantly how do
>>> you say "don't run out of memory and fail"?
>>
>> If we can live with a certain amount of "arbitrary failures" in simple
>> arithmetic,
>
> I prefer not to, and thus when possible avoid using languages where
> ``a + b`` is liable to fail arbitrarily (such as C, where the behavior
> will often be undefined).

That's not the sort of arbitrary failure I was discussing, but for that
matter Python is one of those languages. Perhaps Python is not the
language for you?

Correct me if I'm wrong, but surely it is C++ that can have arbitrary
behaviour for "a + b", not C?

>> then the world won't end if tail recursion isn't optimized away by the
>> compiler.
>
> I'd personally much rather have arithmetic working properly.
> Unfortunately this is not an option at the moment, although quite a few
> smart people are working on it and although it is an immensely costly
> problem.
>
>> You can always hand-optimize it yourself.
>
> Not tail calls, in general, no.

Sorry, how does that work? You're suggesting that there is an algorithm
which the compiler could follow to optimize away tail-recursion, but human
beings can't follow the same algorithm?

Now I'm confused.


--
Steven.

Paul Rubin

unread,
Jun 10, 2007, 11:18:49 PM6/10/07
to

The usual compiler method is to translate the code into
continuation-passing style and thereby gain tail-recursion
optimization automagically. Of course a human could do the same thing
in principle, but it would result in insanely difficult-to-write,
unreadable and unmaintainable code. At a higher level, there are no
Python features whatsoever, either existing or proposed, that couldn't
be eliminated and left to the human. Iterators? While loops? Who
needs 'em? We could all go back to programming in machine code. But
Python is supposed to make programming easier, not harder.
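[An illustrative sketch, not from the thread, of the CPS translation being
described: the pending work after each recursive call is packaged into a
continuation, so every call becomes a tail call. CPython still grows the
stack here, since it does not eliminate tail calls.]

```python
# Direct style: the multiply happens *after* the recursive call
# returns, so the call is not in tail position.
def fact(n):
    if n == 0:
        return 1
    return n * fact(n - 1)

# Continuation-passing style: the pending "multiply by n" is captured
# in the continuation k, so every call is now a tail call.
def fact_cps(n, k=lambda r: r):
    if n == 0:
        return k(1)
    return fact_cps(n - 1, lambda r: k(n * r))

print(fact(10), fact_cps(10))   # 3628800 3628800
```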

Kay Schluehr

unread,
Jun 10, 2007, 11:22:23 PM6/10/07
to
On Jun 11, 12:43 am, Steve Howell <showel...@yahoo.com> wrote:

> To the extent that some of these optimizations could
> be achieved by writing better Python code, it would
> nice for optimization tools to have a "suggest" mode.

Is there anyone out there who uses MS Word and doesn't deactivate the
"suggest" mode, i.e. Clippy? Maybe someone should write a lucid blog
entry about the failure of "suggest" modes in general, or point me to
one. Autocompletion as a typing aid might be considered a counterexample,
but only because you don't really have a choice and will not be
confronted with nonsensical AI guesses.

Doug Phillips

unread,
Jun 11, 2007, 12:48:36 AM6/11/07
to pytho...@python.org
> Is there anyone out there who uses MS Word and doesn't deactivate
> the "suggest" mode, i.e. Clippy?

Me... I don't install Clippy (or any of his horribly annoying friends)
to start with. :)

On the topic though, the suggest mode of the MS help system is generally
way off-base, even for my 80-yr-old grandmother's needs.

Steve Howell

unread,
Jun 10, 2007, 11:47:26 PM6/10/07
to Kay Schluehr, pytho...@python.org

Unless I'm misunderstanding what you're saying, you
are wildly overinterpreting my use of the phrase
"suggest mode."

I was making the simple suggestion that code
optimizers could suggest opportunities for
optimization that the tools couldn't unilaterally
decide themselves to enforce.

In other words, there are scenarios where an automated
tool has to assume the extreme case of dynamism,
when in fact I, the programmer, know that I'm writing
basically static code within Python, and I simply
forgot to pull a subexpression out of a loop.
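[A sketch of the kind of loop-invariant subexpression a "suggest" mode could
flag, using a Fourier-style partial sum as above; the function names are
invented for illustration.]

```python
import math

# Naive version: 2 * math.pi * f * t is loop-invariant but recomputed
# on every iteration.
def partial_sum_naive(coeffs, f, t):
    total = 0.0
    for n, a_n in enumerate(coeffs, start=1):
        total += a_n * math.sin(n * 2 * math.pi * f * t)
    return total

# Hand-hoisted version: the invariant subexpression is pulled out.
def partial_sum_hoisted(coeffs, f, t):
    w = 2 * math.pi * f * t
    total = 0.0
    for n, a_n in enumerate(coeffs, start=1):
        total += a_n * math.sin(n * w)
    return total

print(abs(partial_sum_naive([1.0, 0.5], 3.0, 0.1)
          - partial_sum_hoisted([1.0, 0.5], 3.0, 0.1)) < 1e-9)   # True
```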

Maybe you just need to rant about animated office
supplies every now and then.

And regarding autocompletion--yes, it's an extremely
useful feature.

____________________________________________________________________________________
Park yourself in front of a world of choices in alternative vehicles. Visit the Yahoo! Auto Green Center.
http://autos.yahoo.com/green_center/

Josiah Carlson

unread,
Jun 11, 2007, 2:34:49 AM6/11/07
to
John Nagle wrote:
> Josiah Carlson wrote:
[snip]

>> Constant folding happens regardless of optimization level in current
>> Pythons.
>
>> So really, assert and docstring removals. Eh.
>
> It's hard to optimize Python code well without global analysis.
> The problem is that you have to make sure that a long list of "weird
> things", like modifying code or variables via getattr/setattr, aren't
> happening before doing significant optimizations. Without that,
> you're doomed to a slow implementation like CPython.
>
> ShedSkin, which imposes some restrictions, is on the right track here.
> The __slots__ feature is useful but doesn't go far enough.
[snip]

> Python could get much, much faster. Right now CPython is said to be 60X
> slower
> than C. It should be possible to get at least an order of magnitude over
> CPython.

Don't get me wrong; I'm all for adding optimizations, I was merely
expressing that currently, 'python -OO' doesn't really do a whole lot.
I'm a long-time user of psyco, have mucked about with
scipy.weave.inline, and have been a heavy user of Pyrex and C. If there
was a method of offering some simple optimization cues to the Python
runtime to improve its speed, I would be happy to add them when I really
care about speed (I already do more than that when writing Pyrex). The
real question is whether we can get a practical implementation of these
optimizations as easy to use as psyco.


- Josiah

Josiah Carlson

unread,
Jun 11, 2007, 2:36:25 AM6/11/07
to

Thanks to Richie Hindle, there exists a goto for Python implementation
that makes such things quite trivial (assuming one doesn't like
"abusing" break/continue/else). http://entrian.com/goto/ (which, by
the way, is the best April-fools joke ever)


- Josiah

Diez B. Roggisch

unread,
Jun 11, 2007, 3:27:35 AM6/11/07
to


I won't give you the "prove it by doing it"-talk. It's too cheap.

Instead I'd like to say why I don't think that this will buy you much
performance-wise: it's a local optimization only. All it can and will do
is to optimize lookups and storage of attributes - either functions or
values - and calls to methods from within one special object. As long as
expressions stay in their own "soup", things might be ok.

The very moment you mix this with "regular", no-strings-attached python
code, you have to have the full dynamic machinery in place + you need
tons of guarding statements in the optimized code to prevent access
violations.

So in the end, I seriously doubt the performance gains are noticeable.
Instead I'd rather take the pyrex-road, which can go even further
optimizing with some more declarations. But then I at least know exactly
where the boundaries are. As does the compiler.

> Python could get much, much faster. Right now CPython is said to be 60X
> slower
> than C. It should be possible to get at least an order of magnitude over
> CPython.


Regardless of the possibility of speeding it up - why should one want
this? Coding speed is more important than execution speed in 90%+ of all
cases. For the other ones - well, if you _really_ want speed, assembler is
the way to go. I'm serious about that. There is one famous mathematical
library author who codes in assembler - because in the end, it's
all about processor architecture and careful optimization for that. [1]

The same is true for e.g. the new Cell architecture, or the
altivec-optimized code in photoshop that still beats the crap out of
Intel processors on PPC-machines.

I'm all for making python faster if it doesn't suffer
functionality-wise. But until there is a proof that something really
speeds up python w/o crippling it, I'm more than skeptical.

Diez

[1] http://math-atlas.sourceforge.net/faq.html#auth

"""
Kazushige Goto
His ev5/ev6 GEMM is used directly by ATLAS if the user answers
"yes" to its use during the configuration procedure on an alpha
processor. This results in a significant speedup over ATLAS's own GEMM
codes, and is the fastest ev5/ev6 implementation we are aware of.
"""

Bruno Desthuilliers

unread,
Jun 11, 2007, 4:37:26 AM6/11/07
to
Terry Reedy wrote:

> <bruno.des...@gmail.com> wrote in message
> news:1181475395.7...@m36g2000hse.googlegroups.com...
> | > Terry Reedy wrote:
> | > > In Python, you have a choice of recursion (normal or tail)
>
> [snip Stroud questions]
>
> | I'm afraid Terry is wrong here, at least if he meant that CPython had
> | tail recursion *optimization*.
>
> NO!!!
> I did not mean that or imply that in any way.

I understand you didn't mean it, but since the whole point of
tail-recursion is allowing optimisation (else tail-recursion is nothing
else than a subset of recursion), you somehow implied it, even though
that was not your intention.

> | (and just for those who don't know yet, it's not a shortcoming, it's a
> | design choice.)
>
> And I already noted in a followup that I am working on a Python Papers
> paper explaining that choice, including Guido's claim that 'for statements
> are better'.
>
So frankly I am a little annoyed that you dragged my name into your answer
to Stroud when you should have succinctly said 'No, never', or better,
nothing at all, as someone else already did say that. Read more of the
thread before jumping in and ascribing ignorance to people.
>

You're right that I should have read more of the thread
before posting this (which I usually do), and I do apologize for this.
But please note the second half of the sentence - which puts a strong
precondition on the validity of the first part.


Antoon Pardon

unread,
Jun 11, 2007, 6:34:58 AM6/11/07
to
On 2007-06-09, Terry Reedy <tjr...@udel.edu> wrote:
>
> "WaterWalk" <toolm...@163.com> wrote in message
> news:1181368143.8...@r19g2000prf.googlegroups.com...
>| I've just read an article "Building Robust System" by Gerald Jay
>| Sussman. The article is here:
>|
> http://swiss.csail.mit.edu/classes/symbolic/spring07/readings/robust-systems.pdf
>|
>| In it there is a footprint which says:
>| "Indeed, one often hears arguments against building exibility into an
>| engineered sys-
>| tem. For example, in the philosophy of the computer language Python it
>| is claimed:
>
> For him to imply that Python is anti-flexibility is wrong. Very wrong.
> He should look in a mirror. See below.

My impression is that python supporters often enough show
some anti-flexibility attitude.

>| \There should be one|and preferably only one|obvious way to do
>| it."[25] Science does
>| not usually proceed this way: In classical mechanics, for example, one
>| can construct equa-
>| tions of motion using Newtonian vectoral mechanics, or using a
>| Lagrangian or Hamiltonian
>| variational formulation.[30] In the cases where all three approaches
>| are applicable they are
>| equivalent, but each has its advantages in particular contexts."
>
> And in those contexts, one would hope that the method with advantages is
> somehow the obvious way to do it. Otherwise beginners might become like
Buridan's ass.
>
> So I dispute that science is as different as he claims. And I do not see
> any real value in the statement in that I do not see it saying anything
> useful to the reader, at least not in this snippet.

Yes science is different. The difference is the following. Should
science only know the Newtonian vectoral mechanics and someone
would come up with the Lagrangian approach, nobody would protest
against this new approach by remarking that there should only be
one obvious approach, implying that by introducing the second approach
you give the people a choice, which they will have to think about
so their decision what to use is no longer obvious, which it is
if there is only one option.

Yet these kind of remarks are made often enough when someone suggest a
change to python.

--
Antoon Pardon

John Nagle

unread,
Jun 11, 2007, 1:37:12 PM6/11/07
to
Diez B. Roggisch wrote:
> Regardless of the possibility of speeding it up - why should one want
> this? Coding speed is more important than execution speed in 90%+ of all
> cases.

When you have to start buying more servers for the server farm,
it's a real pain. I'm actually facing that because Python's HTML
parsing is so slow.

John Nagle

Steve Howell

unread,
Jun 11, 2007, 5:27:59 PM6/11/07
to John Nagle, pytho...@python.org
--- John Nagle <na...@animats.com> wrote:
>
> When you have to start buying more servers for
> the server farm,
> it's a real pain. I'm actually facing that because
> Python's HTML
> parsing is so slow.
>

I have been following this thread for a bit, but
apologies in advance if I didn't read far back enough.

Did you profile the module that you're using to do the
HTML parsing?




Terry Reedy

unread,
Jun 11, 2007, 6:07:54 PM6/11/07
to pytho...@python.org
|> | > Terry Reedy wrote:
|> | > > In Python, you have a choice of recursion (normal or tail)

Bruno


> | I'm afraid Terry is wrong here, at least if he meant that CPython had
> | tail recursion *optimization*.

| Terry Reedy wrote:


| > NO!!!
| > I did not mean that or imply that in any way.

Bruno


| I understand you didn't mean it, but since the whole point of
| tail-recursion is allowing optimisation

Wrong again. That is a reason, even a primary reason, for some people some
of the time, but not at all the only reason anyone would ever write linear
recursion in tail rather than body (my term) form. So nothing follows from
this false premise.

| (else tail-recursion is nothing else than a subset of recursion)

So? Body-recursion (non-tail-recursion) is also nothing else than a subset
of recursion.

| you somehow implied it, even though that was not your intention.

False in its own right. Any language that allows recursion allows the
subset that happens to constitute tail recursion. Most do not *mandate*
that compilers specially recognize and optimize that subset. So there is
pragmatically no implication that the latter follows from the former.

The reason I specifically mentioned tail recursion is because Prof.
Sussman, who complained about
There should be one-- and preferably only one --obvious way to do it.
-- to quote Brother Tim accurately -- co-developed Scheme. To me, Scheme
promotes tail recursion as the one true way as much as or more than anything
is similarly promoted in Python. That mention had nothing in itself to do
with the separate issue of optimizing tail calls.

What Sussman apparently missed is that Tim's main point is that there
should be some rather than no obvious way to do things. The parenthetical
optional secondary desideratum is just that -- optional and secondary.

Terry Jan Reedy

Terry Reedy

unread,
Jun 11, 2007, 6:40:08 PM6/11/07
to pytho...@python.org

"Antoon Pardon" <apa...@forel.vub.ac.be> wrote in message
news:slrnf6q9ah....@rcpc42.vub.ac.be...

| On 2007-06-09, Terry Reedy <tjr...@udel.edu> wrote:
| > For him to imply that Python is anti-flexibility is wrong. Very wrong.
| > He should look in a mirror. See below.
|
| My impression is that python supporters often enough show
| some anti-flexibility attitude.

More so than supporters of most other languages, in particular Scheme?

Here's the situation. Python is making inroads at MIT, Scheme home turf.
The co-developer of Scheme, while writing about some other subject, tosses
in an off-the-wall slam against Python. Someone asks what we here think.
I think that the comment is a crock and the slam better directed, for
instance, at Scheme itself. Hence 'he should look in a mirror'.

| Yes science is different. The difference is the following. Should
| science only know the Newtonian vectoral mechanics and someone
| would come up with the Lagrangian approach, nobody would protest
| against this new approach by remarking that there should only be
| one obvious approach,

The history of science is a history of innovation and resistance to
innovation. Do you have information that the introduction of the
Lagrangian approach was exceptional? Do you really think that no college
student has ever groused about having to learn another approach that is
only equivalent to what he already knows?

| Yet these kind of remarks are made often enough when someone suggest a
| change to python.

So? Tim wrote 'There should be one-- and preferably only one --obvious way
to do it'. The primary clause is that there should be at least one. The
secondary clause is that once there is a good and obvious way to do
something, we take a hard look before adding another. As it is, there are
already multiple ways to do many things. And there are probably at least
10 suggested innovations for every one accepted.

tjr

Aahz

unread,
Jun 11, 2007, 10:16:25 PM6/11/07
to
In article <mailman.8879.1181405...@python.org>,
Terry Reedy <tjr...@udel.edu> wrote:
>
>"James Stroud" <jst...@mbi.ucla.edu> wrote in message
>news:E5vai.858$TC1...@newssvr17.news.prodigy.net...

>| Terry Reedy wrote:
>| > In Python, you have a choice of recursion (normal or tail)
>|
>| Please explain this.
>
>I am working on a paper for Python Papers that will. It was inspired by
>the question 'why doesn't Python do tail-recursion optimization'.

...with the proof contained in the margins?
--
Aahz (aa...@pythoncraft.com) <*> http://www.pythoncraft.com/

"as long as we like the same operating system, things are cool." --piranha

Michele Simionato

unread,
Jun 12, 2007, 12:21:49 AM6/12/07
to

This is already done in RPython:

http://codespeak.net/pypy/dist/pypy/doc/coding-guide.html#restricted-python

I was at the PyCon It conference the other day and one of the
PyPy people claimed that RPython is up to 300X faster than Python.

Michele Simionato

Antoon Pardon

unread,
Jun 12, 2007, 4:34:05 AM6/12/07
to
On 2007-06-11, Terry Reedy <tjr...@udel.edu> wrote:
>
> "Antoon Pardon" <apa...@forel.vub.ac.be> wrote in message
> news:slrnf6q9ah....@rcpc42.vub.ac.be...
>| On 2007-06-09, Terry Reedy <tjr...@udel.edu> wrote:
>| > For him to imply that Python is anti-flexibility is wrong. Very wrong.
>| > He should look in a mirror. See below.
>|
>| My impression is that python supporters often enough show
>| some anti-flexibility attitude.
>
> More so than supporters of most other languages, in particular Scheme?

Well to my knowledge (which could be vastly improved), scheme doesn't
have some Zen-rules that include something like this.

I tried to google for similar remarks in relation to scheme but I
got no results. Maybe your google skills are better.

> Here's the situation. Python is making inroads at MIT, Scheme home turf.
> The co-developer of Scheme, while writing about some other subject, tosses
> in an off-the-wall slam against Python. Someone asks what we here think.
> I think that the comment is a crock and the slam better directed, for
> instance, at Scheme itself. Hence 'he should look in a mirror'.
>
>| Yes science is different. The difference is the following. Should
>| science only know the Newtonian vectoral mechanics and someone
>| would come up with the Lagrangian approach, nobody would protest
>| against this new approach by remarking that there should only be
>| one obvious approach,
>
> The history of science is a history of innovation and resistance to
> innovation. Do you have information that the introduction of the
> Lagrangian approach was exceptional? Do you really think that no college
> student has ever groused about having to learn another approach that is
> only equivalent to what he already knows?

Yes the history of science is a history of innovation and resistance.
But the resistance to my knowledge has never used the argument that
there should (preferably) be only one obvious way to do things.

The student example is IMO not appropriate. There is a difference between
preferring not to have to learn something yourself and arguing that
something shouldn't be available in general.

>| Yet these kind of remarks are made often enough when someone suggest a
>| change to python.
>
> So? Tim wrote 'There should be one-- and preferably only one --obvious way
> to do it'. The primary clause is that there should at least one. The
> secondary clause is that once there is a good and obvious way to do
> something, we take a hard look before adding another. As it is, there are
> already multiple ways to do many things. And there are probably at least
> 10 suggested innovations for everyone accepted.

Yes I know that. But that doesn't stop a lot of python supporters in this news
group from coming up with a variation that suggests that once there is an
obvious way to do something in Python, there is really no need any more to
look at ways of doing it differently. And if my memory doesn't betray me,
corrections from others to such variations are a rather recent occurrence.

--
Antoon Pardon

Neil Cerutti

unread,
Jun 12, 2007, 7:05:24 AM6/12/07
to
On 2007-06-12, Antoon Pardon <apa...@forel.vub.ac.be> wrote:
> On 2007-06-11, Terry Reedy <tjr...@udel.edu> wrote:
>> More so than supporters of most other languages, in particular
>> Scheme?
>
> Well to my knowledge (which could be vastly improved), scheme
> doesn't have some Zen-rules that include something like this.
>
> I tried to google for similar remarks in relation to scheme but
> I got no results. Maybe your google skills are better.

It's in _The Revised^%d Report on Scheme_, Introduction:

Programming languages should be designed not by piling feature
on top of feature, but by removing the weaknesses and
restrictions that make additional features appear necessary.

Of course, that was written well before Scheme had most of its
current features.

--
Neil Cerutti
These people haven't seen the last of my face. If I go down, I'm going down
standing up. --Chuck Person

Diez B. Roggisch

unread,
Jun 12, 2007, 8:46:16 AM6/12/07
to
John Nagle wrote:

I can't believe that this is a general "Python is too slow" issue. After all,
lxml and cElementTree are _really_ fast - and written in C.

For example in TurboGears, there are two python-only templating systems -
KID & genshi (apart from others). The latter is a magnitude faster than the
former. After all, O(n^3) is O(n^3), regardless of the language...

And if only the html-parsing is slow, you might consider creating an
extension for that. Using e.g. Pyrex.

Diez

Terry Reedy

unread,
Jun 12, 2007, 2:19:22 PM6/12/07
to pytho...@python.org

"Antoon Pardon" <apa...@forel.vub.ac.be> wrote in message
news:slrnf6smjs....@rcpc42.vub.ac.be...

| > So? Tim wrote 'There should be one-- and preferably only one --obvious way
| > to do it'. The primary clause is that there should be at least one. The
| > secondary clause is that once there is a good and obvious way to do
| > something, we take a hard look before adding another. As it is, there are
| > already multiple ways to do many things. And there are probably at least
| > 10 suggested innovations for every one accepted.
|
| Yes I know that. But that doesn't stop a lot of python supporters in this news
| group from coming up with a variation that suggests that once there is an
| obvious way to do something in Python, there is really no need any more to
| look at ways of doing it differently.

Try suggesting on a Lisp or Scheme group that having only one type of
syntax (prefix expressions) lacks something and that they should add
variety in the form of statement syntax ;-) Hint: some Lispers have
bragged here about the simplicity of 'one way to do it' and put Python down
for its mixed syntax. (Of course, this does not mean that some dialects
have not sneaked in lists of statements thru a back door ;-).

Would you really want Python to have a hundred new features every release?

tjr

Anders J. Munch

unread,
Jun 12, 2007, 5:11:45 PM6/12/07
to
Paul Rubin wrote:
> Steven D'Aprano <st...@REMOVE.THIS.cybersource.com.au> writes:
>>> Not tail calls, in general, no.
>> Sorry, how does that work? You're suggesting that there is an algorithm
>> which the compiler could follow to optimize away tail-recursion, but human
>> beings can't follow the same algorithm?
>>
>> Now I'm confused.
>
> The usual compiler method is to translate the code into
> continuation-passing style and thereby gain tail-recursion
> optimization automagically.

There's no need to go into CPS just to optimise tail-recursion. After all,
compilers were optimising tail-calls decades before Appel's work on SML/NJ.

Converting tail-recursion to iteration is trivial, and perfectly reasonable for
a human to do by hand. You add an outer "while True"-loop, the recursive call
becomes a tuple assignment, and other code paths end with a break out of the
loop. Completely mechanical and the resulting code doesn't even look that bad.
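[A minimal sketch of the recipe just described, using gcd as an arbitrary
example:]

```python
# Tail-recursive original: the recursive call is the last thing done.
def gcd_rec(a, b):
    if b == 0:
        return a
    return gcd_rec(b, a % b)

# The mechanical conversion: an outer ``while True`` loop, the
# recursive call turned into a tuple assignment, and the base case
# left as a plain return (or a break out of the loop).
def gcd_iter(a, b):
    while True:
        if b == 0:
            return a
        a, b = b, a % b

print(gcd_rec(1071, 462), gcd_iter(1071, 462))   # 21 21
```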

Like Steven said, tail-call optimisation is not necessary as you can always
hand-optimise it yourself.

- Anders

Steve Howell

unread,
Jun 12, 2007, 6:51:07 PM6/12/07
to Anders J. Munch, pytho...@python.org

--- "Anders J. Munch" <20...@jmunch.dk> wrote:
>
> Converting tail-recursion to iteration is trivial,
> and perfectly reasonable for
> a human to do by hand. You add an outer "while
> True"-loop, the recursive call
> becomes a tuple assignment, and other code paths end
> with a break out of the
> loop. Completely mechanical and the resulting code
> doesn't even look that bad.
>

I have to ask the stupid question. If a human can do
this completely mechanically, why can't a machine?




John Nagle

unread,
Jun 12, 2007, 7:16:35 PM6/12/07
to
Steve Howell wrote:
> --- "Anders J. Munch" <20...@jmunch.dk> wrote:
>
>>Converting tail-recursion to iteration is trivial,
>>and perfectly reasonable for
>>a human to do by hand. You add an outer "while
>>True"-loop, the recursive call
>>becomes a tuple assignment, and other code paths end
>>with a break out of the
>>loop. Completely mechanical and the resulting code
>>doesn't even look that bad.
>>
>
>
> I have to ask the stupid question. If a human can do
> this completely mechanically, why can't a machine?

That's what tail recursion optimization is.

It's not a high-priority optimization for CPython.
The language has good iteration primitives, so iterating
via simple recursion is relatively rare and not, typically,
a performance bottleneck. In LISP, one might iterate over
a list using tail recursion, but in Python, that would be
both silly and inefficient.

John Nagle

Steven D'Aprano

unread,
Jun 12, 2007, 7:32:38 PM6/12/07
to
On Tue, 12 Jun 2007 15:51:07 -0700, Steve Howell wrote:

>
> --- "Anders J. Munch" <20...@jmunch.dk> wrote:
>>
>> Converting tail-recursion to iteration is trivial,
>> and perfectly reasonable for
>> a human to do by hand. You add an outer "while
>> True"-loop, the recursive call
>> becomes a tuple assignment, and other code paths end
>> with a break out of the
>> loop. Completely mechanical and the resulting code
>> doesn't even look that bad.
>>
>
> I have to ask the stupid question. If a human can do
> this completely mechanically, why can't a machine?


They can and do -- some compilers optimize tail-recursion into iteration.

Python doesn't, as a deliberate design decision, because to do so would
lose traceback information.
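[A quick illustration of the traceback information at stake (example code,
not from the thread): each level of the recursion keeps its own frame.]

```python
import traceback

def f(n):
    if n == 0:
        raise ValueError('boom')
    return f(n - 1)   # a tail call

try:
    f(3)
except ValueError:
    tb = traceback.format_exc()

# CPython keeps one frame per call, so the traceback shows all four
# levels of the recursion -- information a tail-call-optimising
# implementation would have collapsed into one frame.
print(tb.count('in f'))   # 4
```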


--
Steven.

Steve Howell

unread,
Jun 12, 2007, 7:38:21 PM6/12/07
to pytho...@python.org

--- Steven D'Aprano
<st...@REMOVE.THIS.cybersource.com.au> wrote:

Ok, that didn't occur to me. What does occur to me,
though, is that tracebacks for recursive algorithms
can get kind of, well, recursive, so I wonder if
there's a tradeoff somewhere.




Alexander Schmolck

unread,
Jun 12, 2007, 8:42:51 PM6/12/07
to
Steven D'Aprano <st...@REMOVE.THIS.cybersource.com.au> writes:

> On Mon, 11 Jun 2007 01:28:09 +0100, Alexander Schmolck wrote:
>
>> Steven D'Aprano <st...@REMOVE.THIS.cybersource.com.au> writes:
>>
>>> On Sat, 09 Jun 2007 22:42:17 +0100, Alexander Schmolck wrote:
>>>
>>>>> As for why tail calls are not optimized out, it was decided that being able
>>>>> to have the stack traces (with variable information, etc.) was more useful
>>>>> than offering tail call optimization
>>>>
>>>> I don't buy this.
>>>
>>> Do you mean you don't believe the decision was made, or you don't agree
>>> with the decision?
>>
>> Neither. I don't believe the rationale stated in this thread to be the true
>> reason.
>
>
> Don't keep us in suspense. What do you believe is the true reason?

It's easier to spot that some rationalization is bogus than to uncover the
true underlying causes; I'm pretty sure it's more a Gestalt thing than a
compelling technical reason (I guess Guido's distaste for scheme also plays a
role). Not that I discount that out of hand -- maybe all that's great about
python is due to Guido being exceptionally good at making such judgements.

>>> Are you volunteering? If you are, I'm sure your suggestion will be welcomed
>>> gratefully.
>>
>> I rather doubt it. Guido has stated quite clearly that his not
>> interested in incorporating this feature.
>
> He's not the only one who gets to make these decisions.

This is news to me. Who else does?

> But even if he uses his veto to prevent tail-recursion optimization from
> being put into the main branch, there are other options.

That don't involve abducting his kids?

>>>>> (do what I say).
>>>>
>>>> Where did you say run out of memory and fail? More importantly how do
>>>> you say "don't run out of memory and fail"?
>>>
>>> If we can live with a certain amount of "arbitrary failures" in simple
>>> arithmetic,
>>
>> I prefer not too, and thus when possible avoid to use languages where
>> ``a + b`` is liable to fail arbitrarily (such as C, where the behavior
>> will often be undefined).
>
> That's not the sort of arbitrary failure I was discussing, but for that
> matter Python is one of those languages.

Apart from floating point arithmetic, simple arithmetic doesn't tend to fail
arbitrarily in python, as far as I'm aware. You can of course create your very
own classes specifically to get broken arithmetic but that doesn't strike me
as "simple" arithmetic anymore.

> Perhaps Python is not the language for you?

Do you also happen to know what would be?

> Correct me if I'm wrong, but surely it is C++ that can have arbitrary
> behaviour for "a + b", not C?

``INT_MAX + 1`` can do precisely anything in C.
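[Aside, for contrast: the analogous expression in Python simply promotes to a
bigger integer, since Python ints are arbitrary-precision.]

```python
import sys

# In C, signed overflow such as INT_MAX + 1 is undefined behaviour.
# Python has no such cliff: the result is just a larger int.
big = sys.maxsize + 1
print(big > sys.maxsize, isinstance(big, int))   # True True
```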

>>> You can always hand-optimize it yourself.
>>
>> Not tail calls, in general, no.
>
> Sorry, how does that work? You're suggesting that there is an algorithm
> which the compiler could follow to optimize away tail-recursion, but human
> beings can't follow the same algorithm? Now I'm confused.

Does it also confuse you that if I give you a 500x500 matrix A you won't be
able to figure out a single element in A^-1 by doing mental arithmetic (or
using pen and paper), although my computer manages just fine and I'm happy to
give you the algorithm it uses?

'as

Alexander Schmolck

unread,
Jun 12, 2007, 8:45:13 PM6/12/07
to
"Anders J. Munch" <20...@jmunch.dk> writes:

> Like Steven said, tail-call optimisation is not necessary as you can always
> hand-optimise it yourself.

Care to demonstrate on some code written in CPS (a compiler or parser, say)?

'as

John Nagle

unread,
Jun 12, 2007, 11:12:04 PM6/12/07
to
Alexander Schmolck wrote:
> Steven D'Aprano <st...@REMOVE.THIS.cybersource.com.au> writes:
>>On Mon, 11 Jun 2007 01:28:09 +0100, Alexander Schmolck wrote:
>>>Steven D'Aprano <st...@REMOVE.THIS.cybersource.com.au> writes:

>>Don't keep us in suspense. What do you believe is the true reason?
>
>
> It's easier to spot that some rationalization is bogus than to uncover the
> true underlying causes; I'm pretty sure it's more a Gestalt thing than a

> compelling technical reason.

There's a real reason. Remember, functions are dynamically
replaceable. The compiler would have to detect that the function
doesn't modify or replace itself while recursing for this optimization
to be valid. Worst case, another thread could replace the function
while it was recursing, invalidating the tail recursion optimization.
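
The dynamic lookup is easy to see in a sketch (the names here are made
up for illustration):

```python
def f(n):
    # the recursive call looks the name 'f' up in the globals each time
    if n == 0:
        return "original"
    return f(n - 1)

g = f                    # keep a reference to the original function
def f(n):                # rebind the module-level name 'f'
    return "replacement"

print(g(3))              # "replacement": the inner call found the new f
```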

John Nagle

Steve Howell

unread,
Jun 13, 2007, 12:15:22 AM6/13/07
to John Nagle, pytho...@python.org

--- John Nagle <na...@animats.com> wrote:
>
> There's a real reason. Remember, functions are
> dynamically
> replaceable. The compiler would have to detect that
> the function
> doesn't modify or replace itself while recursing for
> this optimization
> to be valid. Worst case, another thread could
> replace the function
> while it was recursing, invalidating the tail
> recursion optimization.
>

You would just change the language definition to say
that once you enter f(), any call to f() from within
f() behaves as if the recursively called f() still
points to the originally bound version of f. To want
any other behavior would be absurd, anyhow.

I could see the utility of this restriction even
without optimization.
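
You can already opt into those semantics with a closure in today's
Python; a sketch:

```python
def make_fact():
    def fact(n):
        # 'fact' here resolves to the enclosing scope's binding, fixed
        # when make_fact() ran -- not to the module-level name
        return 1 if n <= 1 else n * fact(n - 1)
    return fact

fact = make_fact()
original = fact
fact = lambda n: -1     # rebinding the global is now harmless to recursion
print(original(5))      # 120
```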




Neil Cerutti

unread,
Jun 13, 2007, 8:15:18 AM6/13/07
to
On 2007-06-12, Anders J. Munch <20...@jmunch.dk> wrote:
> Paul Rubin wrote:
>> Steven D'Aprano <st...@REMOVE.THIS.cybersource.com.au> writes:
>>>> Not tail calls, in general, no.
>>> Sorry, how does that work? You're suggesting that there is an
>>> algorithm which the compiler could follow to optimize away
>>> tail-recursion, but human beings can't follow the same
>>> algorithm?
>>>
>>> Now I'm confused.
>>
>> The usual compiler method is to translate the code into
>> continuation-passing style and thereby gain tail-recursion
>> optimization automagically.
>
> There's no need to go into CPS just to optimise tail-recursion.
> After all, compilers were optimising tail-calls decades before
> Appel's work on SML/NJ.
>
> Converting tail-recursion to iteration is trivial, and
> perfectly reasonable for a human to do by hand.

For simple recursive tail calls, yeah, it can be. Translating a
tail-recursive Factorial function into a while loop is easy. But
tail-call optimization technically works for any tail-call,
including mutual recursion, and non-recursive tail-calls. You
can't reasonably hand-optimize away the stack frame for all
tail-calls.
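
(The easy case looks like this, for reference -- a sketch of the
hand-conversion:)

```python
def fact_rec(n, acc=1):
    # tail-recursive: the recursive call is the very last action
    if n <= 1:
        return acc
    return fact_rec(n - 1, acc * n)

def fact_iter(n, acc=1):
    # the same function with the tail call rewritten as a while loop
    while n > 1:
        n, acc = n - 1, acc * n
    return acc

print(fact_iter(10))  # 3628800
```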

def foo(x):
    bar(x)

The only way to hand-optimize the call to bar is to inline it
yourself.
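
(Well, short of inlining, a trampoline can fake the effect in plain
Python, but only by rewriting every tail call as a returned thunk -- a
sketch:)

```python
def even(n):
    # tail call written as a zero-argument thunk, returned instead of made
    return True if n == 0 else (lambda: odd(n - 1))

def odd(n):
    return False if n == 0 else (lambda: even(n - 1))

def trampoline(value):
    # bounce until the result is no longer callable
    while callable(value):
        value = value()
    return value

print(trampoline(even(10000)))  # True, without 10000 stack frames
```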

--
Neil Cerutti
Will the highways on the Internet become more few? --George W. Bush

Neil Cerutti

unread,
Jun 13, 2007, 8:15:18 AM6/13/07
to
On 2007-06-13, Steve Howell <show...@yahoo.com> wrote:
> You would just change the language definition to say that once
> you enter f(), any call to f() from within f() behaves as if
> the recursively called f() still points to the originally bound
> version of f. To want any other behavior would be absurd,
> anyhow.

There's a reason it's generally referred to as "tail-call"
optimization and not "tail-recursive" optimization. The former is
more general, and, I believe, easier to implement than the
latter.

--
Neil Cerutti
The peace-making meeting scheduled for today has been cancelled due to a
conflict. --Church Bulletin Blooper

Anders J. Munch

unread,
Jun 13, 2007, 12:57:47 PM6/13/07
to
Neil Cerutti wrote:
> On 2007-06-12, Anders J. Munch <20...@jmunch.dk> wrote:
>> Converting tail-recursion to iteration is trivial, and
>> perfectly reasonable for a human to do by hand.
>
> For simple recursive tail calls, yeah, it can be. Translating a
> tail-recursive Factorial function into a while loop is easy. But
> tail-call optimization technically works for any tail-call,
> including mutual recursion, and non-recursive tail-calls. You
> can't reasonably hand-optimize away the stack frame for all
> tail-calls.

I may have misunderstood; I thought we were talking about tail recursion only.
The general tail-call optimisation, where all leaf calls become jumps and the
called function usurps the current stack frame, is a different ballgame
entirely. There's no pure-Python transformation for that, but that still
doesn't mean you need CPS.

General tail-call optimisation is of course completely out-of-bounds for Python,
because it ruins tracebacks. Unlike tail recursion, which could use recursion
counters.

- Anders

Anders J. Munch

unread,
Jun 13, 2007, 1:01:02 PM6/13/07
to

I meant tail recursion, not tail-call, sorry, that was just my fingers trying to
save typing.

- Anders

Neil Cerutti

unread,
Jun 13, 2007, 2:22:54 PM6/13/07
to
On 2007-06-13, Anders J. Munch <20...@jmunch.dk> wrote:
> General tail-call optimisation is of course completely
> out-of-bounds for Python, because it ruins tracebacks. Unlike
> tail recursion, which could use recursion counters.

Is it really ruined? To use a similar example:

def foo(x):
    bar(x+1)

def bar(x):
    if x > 10:
        raise ValueError
    else:
        foo(x+2)

Today, when I call foo(4), I get something like:

C:\WINNT\system32\cmd.exe /c python temp.py
Traceback (most recent call last):
  File "temp.py", line 529, in <module>
    foo(4)
  File "temp.py", line 521, in foo
    bar(x+1)
  File "temp.py", line 527, in bar
    foo(x+2)
  File "temp.py", line 521, in foo
    bar(x+1)
  File "temp.py", line 527, in bar
    foo(x+2)
  File "temp.py", line 521, in foo
    bar(x+1)
  File "temp.py", line 525, in bar
    raise ValueError
ValueError
shell returned 1

With tail-call optimization you'd get something like:

C:\WINNT\system32\cmd.exe /c python temp.py
Traceback (most recent call last):
  File "temp.py", line 529, in <module>
    foo(4)
  File "temp.py", line 525, in bar
    raise ValueError
ValueError
shell returned 1

What makes the latter harder to work with?

--
Neil Cerutti

Carsten Haese

unread,
Jun 13, 2007, 2:34:59 PM6/13/07
to pytho...@python.org

The fact that you don't see how many call levels down your algorithm got
before throwing an exception. This may be an important clue in debugging
a recursive algorithm.

--
Carsten Haese
http://informixdb.sourceforge.net


Neil Cerutti

unread,
Jun 13, 2007, 2:51:27 PM6/13/07
to
On 2007-06-13, Neil Cerutti <hor...@yahoo.com> wrote:
> On 2007-06-13, Anders J. Munch <20...@jmunch.dk> wrote:
>> General tail-call optimisation is of course completely
>> out-of-bounds for Python, because it ruins tracebacks. Unlike
>> tail recursion, which could use recursion counters.
>
> Is it really ruined? To use a similar example:

I found some interesting notes by Alex Martelli pertaining to
tail-call optimisation, and my assumption that tail-call
optimization is easier to implement than tail-recursive
optimization may have been naive. ;)

http://groups.google.com/group/comp.lang.python/msg/1a7cccc103c1bd70?hl=en&

Moreover, there are (or were) technical reasons that you can't do
tail-call optimization in Python, which can't even recognize
tail-calls at compile time. According to Tim Peters:

http://groups.google.com/group/comp.lang.python/msg/ea1de1e35aefb828?hl=en&

--
Neil Cerutti

Terry Reedy

unread,
Jun 13, 2007, 4:49:04 PM6/13/07
to pytho...@python.org

"Steve Howell" <show...@yahoo.com> wrote in message
news:725498....@web33511.mail.mud.yahoo.com...

|
| You would just change the language definition to say
| that once you enter f(), any call to f() from within
| f() behaves as if the recursively called f() still
| points to the originally bound version of f.

I am pretty sure such a context-dependent rule cannot be written as a
context-free grammar rule. In any case, the function object does not exist
when code is being compiled to a code object. So this requires
implementation-dependent post-patching of the code object. R.
Hetchinger(sp?) posted a Cookbook recipe for doing this for CPython.
Anyone wanting the speedup (with CPython) can use it.

tjr

Paul Rubin

unread,
Jun 13, 2007, 10:11:26 PM6/13/07
to
"Diez B. Roggisch" <de...@nospam.web.de> writes:
> And if only the html-parsing is slow, you might consider creating an
> extension for that. Using e.g. Pyrex.

I just tried using BeautifulSoup to pull some fields out of some html
files--about 2 million files, output of a web crawler. It parsed very
nicely at about 5 files per second. Of course Python being Python, I
wanted to run the program a whole lot of times, modifying it based on
what I found from previous runs, and at 5/sec each run was going to
take about 4 days (OK, I probably could have spread it across 5 or so
computers and gotten it to under 1 day, at the cost of more effort to
write the parallelizing code and to scare up extra machines). By
simply treating the html as a big string and using string.find to
locate the fields I wanted, I got it up to about 800 files/second,
which made each run about 1/2 hour. Simplest still would be if Python
just ran about 100x faster than it does, a speedup which is not
outlandish to hope for.
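
The approach was roughly this (a sketch, with made-up helper and field
names):

```python
def extract(html, open_tag, close_tag):
    # naive field extraction: no parsing, just substring search
    start = html.find(open_tag)
    if start == -1:
        return None
    start += len(open_tag)
    end = html.find(close_tag, start)
    return html[start:end] if end != -1 else None

page = "<html><head><title>Crawl result 17</title></head>...</html>"
print(extract(page, "<title>", "</title>"))  # Crawl result 17
```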

John Nagle

unread,
Jun 13, 2007, 11:21:36 PM6/13/07
to
Paul Rubin wrote:
> "Diez B. Roggisch" <de...@nospam.web.de> writes:
>
>>And if only the html-parsing is slow, you might consider creating an
>>extension for that. Using e.g. Pyrex.
>
>
> I just tried using BeautifulSoup to pull some fields out of some html
> files--about 2 million files, output of a web crawler. It parsed very
> nicely at about 5 files per second.

That's about what I'm seeing. And it's the bottleneck of
"sitetruth.com".

> By
> simply treating the html as a big string and using string.find to
> locate the fields I wanted, I got it up to about 800 files/second,
> which made each run about 1/2 hour.

For our application, we have to look at the HTML in some detail,
so we really need it in a tree form.

> Simplest still would be if Python
> just ran about 100x faster than it does, a speedup which is not
> outlandish to hope for.

Right. Looking forward to ShedSkin getting good enough to run
BeautifulSoup.

(Actually, the future of page parsing is probably to use some kind
of stripped-down browser that reads the page, builds the DOM,
runs the startup JavaScript, then lets you examine the DOM. There
are too many pages now that just come through as blank if you don't
run the OnLoad JavaScript.)

John Nagle

Douglas Alan

unread,
Jun 15, 2007, 3:05:24 PM6/15/07
to
"Terry Reedy" <tjr...@udel.edu> writes:

> Try suggesting on a Lisp or Scheme group that having only one type
> of syntax (prefix expressions) lacks something and that they should
> add variety in the form of statement syntax ;-) Hint: some Lispers
> have bragged here about the simplicity of 'one way to do it' and put
> Python down for its mixed syntax. (Of course, this does not mean
> that some dialects have not sneaked in lists of statements thru a
> back door ;-).

Almost all Lisp dialects have an extremely powerful macro mechanism
that lets users and communities extend the syntax of the language in
very general ways. Consequently, dialects such as Scheme try to keep
the core language as simple as possible. Additional ways of doing
things can be loaded in as a library module.

So, a language such as Scheme may have no *obvious* way of doing something,
and yet may provide excellent means to extend the language so that
many obvious ways might be provided.

|>oug

Douglas Alan

unread,
Jun 15, 2007, 4:58:56 PM6/15/07
to
"Terry Reedy" <tjr...@udel.edu> writes:

> My only point was that Sussman is an odd person to be criticizing
> (somewhat mistakingly) Python for being minimalist.

I think that being a language minimalist is very different from
believing that there should be exactly one obvious way to do
everything.

For instance, I believe that Python is now too big, and that much of
what is in the language itself should be replaced with more general
Scheme-like features. Then a good macro mechanism should be
implemented so that all the convenience features of the language can
be implemented via macro definitions in the standard library.

Macros, however, are typically claimed in these parts to violate the
"only one way" manifesto.

|>oug

Douglas Alan

unread,
Jun 15, 2007, 5:05:27 PM6/15/07
to
"Terry Reedy" <tjr...@udel.edu> writes:

> Here's the situation. Python is making inroads at MIT, Scheme home turf.
> The co-developer of Scheme, while writing about some other subject, tosses
> in an off-the-wall slam against Python. Someone asks what we here think.
> I think that the comment is a crock and the slam better directed, for
> instance, at Scheme itself. Hence 'he should look in a mirror'.

You are ignoring the fact that Scheme has a powerful syntax extension
mechanism (i.e., hygienic macros), which means that anyone in the world
can basically extend Scheme to include practically any language
feature they might like it to have.

|>oug

Kay Schluehr

unread,
Jun 15, 2007, 6:06:38 PM6/15/07
to
On 15 Jun., 22:58, Douglas Alan <d...@alum.mit.edu> wrote:

> For instance, I believe that Python is now too big, and that much of
> what is in the language itself should be replaced with more general
> Scheme-like features.
> Then a good macro mechanism should be
> implemented so that all the convenience features of the language can
> be implemented via macro definitions in the standard library.

And why should anyone reimplement the whole standard library using
macro reductions? Because this is the "one obvious way to do it" for
people who are addicted to Scheme?

Douglas Alan

unread,
Jun 15, 2007, 6:36:54 PM6/15/07
to
Kay Schluehr <kay.sc...@gmx.net> writes:

(1) By, "should be replaced", I meant in an ideal world. I'm not
proposing that this be done in the real world anytime soon.

(2) I didn't suggest that a single line of the standard library be
changed. What would need to be changed is the core Python language,
not the standard library. If this idea were implemented, the core
language could be made smaller, and the features that were thereby
removed from the language core could be moved into the standard
library instead.

(3) My reasons for wanting this have nothing to do with being
"addicted to Scheme", which I almost never use. It has to do more
with my language design and implementation aesthetics, and my desire
for a syntax extension mechanism so that I can add my own language
features to Python without having to hack on the CPython source code.

|>oug

Steven D'Aprano

unread,
Jun 15, 2007, 9:25:55 PM6/15/07
to
On Fri, 15 Jun 2007 17:05:27 -0400, Douglas Alan wrote:

> You are ignoring the fact that Scheme has a powerful syntax extension
> mechanism (i.e., hygenic macros), which means that anyone in the world
> can basically extend Scheme to include practically any language
> feature they might like it to have.


You say that like it is a good thing.


--
Steven.

Terry Reedy

unread,
Jun 15, 2007, 11:21:39 PM6/15/07
to pytho...@python.org

"Douglas Alan" <do...@alum.mit.edu> wrote in message
news:lc4pl9g...@gaffa.mit.edu...

| "Terry Reedy" <tjr...@udel.edu> writes:
|
| > Here's the situation. Python is making inroads at MIT, Scheme home
turf.
| > The co-developer of Scheme, while writing about some other subject,
tosses
| > in an off-the-wall slam against Python. Someone asks what we here
think.
| > I think that the comment is a crock and the slam better directed, for
| > instance, at Scheme itself. Hence 'he should look in a mirror'.
|
| You are ignoring the fact that

This prefatory clause is false and as such it turns what was a true
statement into one that is not. Better to leave off such ad hominisms and
stick with the bare true statement.

| Scheme has a powerful syntax extension mechanism

I did not and do not see this as relevant to the main points of my summary
above. Python has powerful extension mechanisms too, but comparing the two
languages on this basis is a whole other topic.

tjr

Douglas Alan

unread,
Jun 15, 2007, 11:44:33 PM6/15/07
to
Steven D'Aprano <st...@REMOVE.THIS.cybersource.com.au> writes:

À chacun son goût (to each his own).

|>oug

Douglas Alan

unread,
Jun 16, 2007, 12:13:27 AM6/16/07
to
"Terry Reedy" <tjr...@udel.edu> writes:

> > You are ignoring the fact that

> This prefatory clause is false and as such it turns what was a true
> statement into one that is not. Better to leave off such ad hominisms and
> stick with the bare true statement.

You went on about how Gerry Sussman's opinion is a crock and how he
should look in the mirror, and then you get bent out of shape over the
phrase, "you are ignoring"??? For the record, "you are ignoring" is
not an ad hominem; "anyone who doesn't know how to spell 'ad hominem'
has the intelligence of a mealworm" is an ad hominem.

> > Scheme has a powerful syntax extension mechanism

> I did not and do not see this as relevant to the main points of my
> summary above. Python has powerful extension mechanisms too, but
> comparing the two languages on this basis is a whole other topic.

How do you know that Prof. Sussman doesn't consider the macro issue to
be essential? Certainly other Lisp aficionados do, as does, I believe
Guy Steele, the other inventor of Scheme.

It appears to me that you are missing the point that having a
minimalist disposition towards programming language design does not
preclude believing that such languages should have features that are
missing from Python.

|>oug

Message has been deleted

Paul Rubin

unread,
Jun 16, 2007, 1:00:26 AM6/16/07
to
Dennis Lee Bieber <wlf...@ix.netcom.com> writes:
> Adding generator expressions, which look identical except that one
> typically has () (or the () of an enclosing function call) while the
> other must have [] just seems to add confusion to the world. I'll
> abstain on "with"... Decorators I've not managed to figure out, and that
> refugee from Dr Moreau -- the inside out conditional expression -- is a
> prime candidate for termination...

Python has really changed its flavour over the past few releases, if
you develop a style that uses the above features (well maybe not the
if-expression) extensively. The learning curve is steeper but you end
up writing cleaner code that's free of corner cases.

Alex Martelli

unread,
Jun 16, 2007, 1:25:38 AM6/16/07
to
Neil Cerutti <hor...@yahoo.com> wrote:

> On 2007-06-12, Antoon Pardon <apa...@forel.vub.ac.be> wrote:
> > On 2007-06-11, Terry Reedy <tjr...@udel.edu> wrote:
> >> More so than supporters of most other languages, in particular
> >> Scheme?
> >
> > Well to my knowledge (which could be vastly improved), scheme
> > doesn't have some Zen-rules that include something like this.
> >
> > I tried to google for similar remarks in relation to scheme but
> > I got no results. Maybe your google skills are better.
>
> It's in _The Revised^%d Report on Scheme_, Introduction:
>
> Programming languages should be designed not by piling feature
> on top of feature, but by removing the weaknesses and
> restrictions that make additional features appear necessary.
>
> Of course, that was written well before Scheme had most of its
> current features.

The "Spirit of C" section in the preface of the ISO Standard for C
phrases this principle as "Provide only one way to do an operation".

Despite the phrasing variations, this commonality goes well with my
perception that, at their roots, Scheme, C and Python share one
philosophical underpinning (one that's extremely rare among programming
languages as a whole) -- an appreciation of SIMPLICITY AND UNIFORMITY as
language characteristics.


Alex

Steven D'Aprano

unread,
Jun 16, 2007, 8:44:47 AM6/16/07
to
On Fri, 15 Jun 2007 22:25:38 -0700, Alex Martelli wrote:

> The "Spirit of C" section in the preface of the ISO Standard for C
> phrases this principle as "Provide only one way to do an operation".

Taken seriously, that rapidly goes to absurdity -- it would mean, for
example, replacing all for loops with while loops.

That's why I get mad at people who misquote (and often misunderstand) the
Zen of Python's OOWTDI, misquoting it as Only One Way To Do It instead of
One Obvious Way To Do It. There can be a hundred unobvious ways to do it,
and it doesn't matter, so long as there is one obvious way.
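
A toy illustration: swapping two names has one obvious spelling and any
number of unobvious ones (a sketch):

```python
a, b = 1, 2
a, b = b, a                        # the obvious way: tuple assignment
# an unobvious way that also works (ints only), swapping them back:
a = a ^ b; b = a ^ b; a = a ^ b    # XOR shuffle
print(a, b)                        # 1 2
```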

Needlessly restricting what a language includes leads to a language that
does things badly. After all, all you really need is a virtual Turing
Machine, and you can do anything Python or Lisp or Perl or C can do. Badly.

I'm probably preaching to the converted, because I'm pretty sure Alex
doesn't believe Python should be a minimalist language with _literally_
only one way to do anything.


> Despite the phrasing variations, this commonality goes well with my
> perception that, at their roots, Scheme, C and Python share one
> philosophical underpinning (one that's extremely rare among programming
> languages as a whole) -- an appreciation of SIMPLICITY AND UNIFORMITY as
> language characteristics.

Out of curiosity, what do you consider some of the worst offenders as far
as overly complex and inconsistent languages go, and why?


--
Steven.

Alex Martelli

unread,
Jun 16, 2007, 9:06:34 AM6/16/07
to
Steven D'Aprano <st...@REMOVE.THIS.cybersource.com.au> wrote:
...

> > perception that, at their roots, Scheme, C and Python share one
> > philosophical underpinning (one that's extremely rare among programming
> > languages as a whole) -- an appreciation of SIMPLICITY AND UNIFORMITY as
> > language characteristics.
>
> Out of curiosity, what do you consider some of the worst offenders as far
> as overly complex and inconsistent languages go, and why?

I think the Original Sin in that regard was PL/I: it tried to have all
the "cool features" of the three widespread languages of the time,
Cobol, Algol _and_ Fortran (and then some), because it aimed to replace
all three and become the "one programming language". As a result, it
tended to have two or more ways to perform any given task, typically
inspired by some of the existing languages, often with the addition of
new ones made out of whole cloth.

PL/I (mostly in various subset and "extended subset" forms) was widely
used in the implementation of Multics, and I believe that the statement
in the "Spirit of C" was at least partly inspired by that experience
(just like "Unix" was originally intended as a pun on "Multics"
underscoring the drastically simpler philosophy of the new OS).


Alex

Neil Cerutti

unread,
Jun 16, 2007, 10:56:03 AM6/16/07
to
On 2007-06-16, Steven D'Aprano <st...@REMOVE.THIS.cybersource.com.au> wrote:
> On Fri, 15 Jun 2007 22:25:38 -0700, Alex Martelli wrote:
>> The "Spirit of C" section in the preface of the ISO Standard
>> for C phrases this principle as "Provide only one way to do an
>> operation".
>
> Taken seriously, that rapidly goes to absurdity -- it would
> mean, for example, replacing all for loops with while loops.

We do sometimes have to use a while loop in Python where we could
use a for loop in some other language. It's not really a big
deal.

> I'm probably preaching to the converted, because I'm pretty
> sure Alex doesn't believe Python should be a minimalist
> language with _literally_ only one way to do anything.

It's making me think of the male model in Zoolander who can't
turn left. ;)

>> Despite the phrasing variations, this commonality goes well
>> with my perception that, at their roots, Scheme, C and Python
>> share one philosophical underpinning (one that's extremely
>> rare among programming languages as a whole) -- an
>> appreciation of SIMPLICITY AND UNIFORMITY as language
>> characteristics.
>
> Out of curiosity, what do you consider some of the worst
> offenders as far as overly complex and inconsistent languages
> go, and why?

I vote for C++ as being astoundingly complex. But it provides
complex features, e.g.,the machanisms it provides to deal with
multiple inheritance, or generic, type-safe code. I don't think
it's inconsistent, though. The complexity of a feature *tends* to
mirror the *real* complexity.

--
Neil Cerutti

Cousin Stanley

unread,
Jun 16, 2007, 11:25:39 AM6/16/07
to

Cousin Alex ....

With regards to PL/I a phrase from an old ( 1969 ) song
named "The Night They Drove Old Dixie Down" comes to mind ....

Ya take what you need and ya leave the rest ....

I really liked programming in pl/1
and did a little over 3 years of Multics time
in San Juan, Puerto Rico ....

http://multicians.org/site-prha.html

--
Stanley C. Kitching
Human Being
Phoenix, Arizona



Paul Rubin

unread,
Jun 16, 2007, 1:51:33 PM6/16/07
to
Neil Cerutti <hor...@yahoo.com> writes:
> I vote for C++ as being astoundingly complex. But it provides
> complex features, e.g., the mechanisms it provides to deal with
> multiple inheritance, or generic, type-safe code.

It gets off-topic but I'm not sure what advantage templates are
supposed to have over ML-like polymorphism.

Douglas Alan

unread,
Jun 16, 2007, 3:26:45 PM6/16/07
to
Dennis Lee Bieber <wlf...@ix.netcom.com> writes:

> Macros? Unfortunately to my world, macros are those things
> found in C, high-powered assemblers, and pre-VBA Office. As such,
> they do anything but keep a language small, and one encounters
> multiple implementations of similar functionality -- each
> implementation the pride of one person, and abhorred by the person
> who now must edit the code.

Comparing C macros to Lisp macros is like comparing a Sawzall to a
scalpel.

Regarding having enough rope to hang yourself, the same claim can be
made about any language abstraction mechanism. E.g., classes are to
data types as macros are to syntax. You can abuse classes and you can
abuse macros. Both abuses will lead to abhorring by your
collaborators. And either abstraction mechanism, when used properly,
will result in more elegant, easier to read and maintain code.

Both abstraction mechanisms also allow language feature exploration to
occur outside of the privileged few who are allowed to commit changes
into the code-base for the interpreter or compiler. I think that this
is one of the things that Gerry Sussman might be getting at when he
talks about how science works. (Or at least that's what I would be
getting at if I were in his shoes.)

In the Lisp community, for instance, there have been lots of language
features that were implemented by people who were not part of the core
language development clique, but that were later widely adopted.
E.g., the Common Lisp OO system with multimethods (CLOS), and the
"loop" macro. These features didn't require any changes to the core
language implementation, and so you can see that Lisp-style macros are
also a huge modularity boon for language implementation.

|>oug

Neil Cerutti

unread,
Jun 16, 2007, 3:53:38 PM6/16/07
to

I don't know that much about ML. I know it does a really nice job
of generic containers, as does C++. But can it 'foo' any type as
easily as C++?

template <class T> T foo(T);

As an aside, templates are abusable for many sorts of valuable
compile-time computations and type arithmetic, though I'd never
write that code myself--it looks like death.

C++ templates allow for fancy duck-typing in C++, which Python
programmers love. The STL is a good example of the power of
duck-typing (though it oughta been called quack-typing).
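
For comparison, duck-typing in Python itself looks like this (a sketch
with made-up names):

```python
def total_length(items):
    # no declared types anywhere: anything with len() will do
    return sum(len(x) for x in items)

print(total_length(["abc", (1, 2), {"a": 1}]))  # 6
```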

--
Neil Cerutti

Alex Martelli

unread,
Jun 16, 2007, 8:14:37 PM6/16/07
to
Cousin Stanley <cousin...@hotmail.com> wrote:
...

> > I think the Original Sin in that regard was PL/I: it tried to have all
...

> > tended to have two or more ways to perform any given task, typically
> > inspired by some of the existing languages, often with the addition of
> > new ones made out of whole cloth.
...

> Cousin Alex ....
>
> With regards to PL/I a phrase from an old ( 1969 ) song
> named "The Night They Drove Old Dixie Down" comes to mind ....
>
> Ya take what you need and ya leave the rest ....
>
> I really liked programming in pl/1

I didn't -- been there, hated that. Each project group ended up
defining (formally or informally) the subset of PL/I it was using -- and
when one had to work with multiple groups, it became yet one more
nightmare to recall which features were in the subset of which group(s).

If there are just 30 "operations", for each of which the language offers
two ways to perform it, that one language becomes over a billion
languages when "subsetted" by picking just one choice for each of the
operations. Absolutely crazy -- total and utter waste of effort on the
part of every programmer, every group, every compiler author.

PL/1 is basically gone, but its legacy of "take what you need and leave
the rest" is unfortunately alive in other languages that are blind to
the enormous advantages of simplicity and uniformity.


Alex

Paul Rubin

unread,
Jun 16, 2007, 9:33:49 PM6/16/07
to
al...@mac.com (Alex Martelli) writes:
> PL/1 is basically gone, but its legacy of "take what you need and leave
> the rest" is unfortunately alive in other languages that are blind to
> the enormous advantages of simplicity and uniformity.

Intercal?

Paul Rubin

unread,
Jun 16, 2007, 11:07:35 PM6/16/07
to
Neil Cerutti <hor...@yahoo.com> writes:
> I don't know that much about ML. I know is does a really nice job
> of generic containers, as does C++. But can it 'foo' any type as
> easily as C++?
>
> template <class T> T foo(T);

I don't know enough C++ to understand what the above means exactly,
but I think the answer is approximately "yes". I actually don't
know ML either, so I'm thinking in terms of Haskell types which
are similar.

Neil Cerutti

unread,
Jun 17, 2007, 8:05:52 AM6/17/07
to
On 2007-06-17, Paul Rubin <http> wrote:
> Neil Cerutti <hor...@yahoo.com> writes:
>> I don't know that much about ML. I know it does a really nice job
>> of generic containers, as does C++. But can it 'foo' any type as
>> easily as C++?
>>
>> template <class T> T foo(T);
>
> I don't know enough C++ to understand what the above means
> exactly,

It means that foo can take an object of any type, T, as a
parameter, and returns an object of type T.

> but I think the answer is approximately "yes". I actually
> don't know ML either, so I'm thinking in terms of Haskell types
> which are similar.

The following academic paper seems to be exactly what we're
looking for:

A comparative study of language support for generic programming
http://faculty.cs.tamu.edu/jarvi/papers/cmp_gp.pdf

According to the paper, ML has a similar level of support for
generic programming, though in C++ three of the eight features
must be provided in code with template meta-programming.
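For comparison (an editorial sketch, not part of the thread): the
Python counterpart of the C++ template above needs no declaration at
all, since every Python function is implicitly generic.

```python
# A plain Python function is generic by default: this `foo`, like the
# C++ `template <class T> T foo(T);` above, accepts a value of any
# type and (here) returns a value of the same type.
def foo(x):
    return x

# The same definition works unchanged for ints, strings, and lists.
assert foo(42) == 42
assert foo("bar") == "bar"
assert foo([1, 2, 3]) == [1, 2, 3]
```

The trade-off, of course, is that Python checks nothing until runtime,
whereas the C++ and ML versions are checked at compile time.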

--
Neil Cerutti

Magnus Lycka

unread,
Jun 18, 2007, 10:44:24 AM6/18/07
to
Alex Martelli wrote:
> PL/1 is basically gone, but its legacy of "take what you need and leave
> the rest" is unfortunately alive in other languages that are blind to
> the enormous advantages of simplicity and uniformity.

Reminds me of RUP... No wonder Ivar Jacobson gave up and started all over.

Douglas Alan

unread,
Jun 18, 2007, 3:41:39 PM6/18/07
to
"Terry Reedy" <tjr...@udel.edu> writes:

> |>oug writes:

>> Scheme has a powerful syntax extension mechanism

> I did not and do not see this as relevant to the main points of my
> summary above. Python has powerful extension mechanisms too, but
> comparing the two languages on this basis is a whole other topic.

Please note that Guy Steele in his abstract for "Rabbit: A Compiler
for SCHEME", specifically mentions that Scheme is designed to be a
minimal language in which, "All of the traditional imperative
constructs [...] as well as many standard LISP constructs [...] are
expressed in macros in terms of the applicative basis set. [...] The
macro approach enables speedy implementation of new constructs as
desired without sacrificing efficiency in the generated code."

http://library.readscheme.org/servlets/cite.ss?pattern=Ste-78b

Do you now see how Scheme's syntax extension mechanism is relevant?

|>oug

Terry Reedy

unread,
Jun 19, 2007, 11:14:00 AM6/19/07
to pytho...@python.org

"Douglas Alan" <do...@alum.mit.edu> wrote in message
news:lcvedlf...@gaffa.mit.edu...

| "Terry Reedy" <tjr...@udel.edu> writes:
|
| > |>oug writes:
|
| >> Scheme has a powerful syntax extension mechanism
|
| > I did not and do not see this as relevant to the main points of my
| > summary above.

The main point of my original post was that the quoted slam at Python was
based on a misquote of Tim Peters and a mischaracterization of Python and
that it was out-of-place in the quoted discussion of physics methods and
that it added nothing to that discussion and should better have been
omitted. *All of this has nothing to do with Scheme.*

At the end, I added as a *side note* the irony that the purported author
was the co-developer of Scheme, another 'minimalist algorithm language
(Wikipedia's characterization) with more uniform syntax than Python and
like Python, also with one preferred way to scan sequences (based on my
memory of Scheme use in the original SICP, co-authored by the same
purported quote author, and also confirmed by Wikipedia).

I do not consider it a slam at Scheme to compare it with Python and see
some similarities. Nor is it a slam at Scheme to suggest that it could
just as well have been the subject of an unfair slam, if only the slammer
were not its co-developer ;-)

| [Steele quote deleted]


| Do you now see how Scheme's syntax extension mechanism is relevant?

No. This just partly explains why Scheme gets away with being minimalist.
I explicitly referred to the core language as delivered and as used in
SICP.

tjr

Douglas Alan

unread,
Jun 19, 2007, 1:38:28 PM6/19/07
to
"Terry Reedy" <tjr...@udel.edu> writes:

> The main point of my original post was that the quoted slam at Python was
> based on a misquote of Tim Peters

But it wasn't based on a "misquote of Tim Peters"; it was based on an
*exact* quotation of Tim Peters.

> and a mischaracterization of Python

I find Sussman's criticism not to be a mischaracterization at all: I
and others have previous mentioned in this very forum our desire to
have (in an ideal world) a good syntax extension facility for Python,
and when we've done so, we've been thoroughly pounced upon by
prominent members of the Python community as just not understanding
the true "Weltanschauung" of Python. This despite the fact that I
have been happily and productively programming in Python for more than
a decade, and the fact that Guido himself has at times mentioned that
he's been idly considering the idea of a syntax extension facility.

The reason given for why macros wouldn't gel with Python's
Weltanschauung has typically been the "only one obvious way" koan, or
some variant of it.

> and that it was out-of-place in the quoted discussion of physics
> methods and that it added nothing to that discussion and should
> better have been omitted. *All of this has nothing to do with
> Scheme.*

I'm not sure what you're getting at. Gerry Sussman has a philosophy
of language design that is different from Python's (at least as it is
commonly expressed around here), and he was using an analogy to help
illuminate what his differences are. His analogy is completely clear
to me, and, I in fact agree with it. I love Python, but I think the
"only one obvious way" philosophy may do more harm than good. It is
certainly used, in my experience, at times, to attempt to squelch
intelligent debate.

> At the end, I added as a *side note* the irony that the purported author
> was the co-developer of Scheme, another 'minimalist algorithm
> language

Sussman's statements are not ironic because Scheme is a language that
is designed to be extended by the end-user (even syntactically), while
keeping the core language minimal. This is a rather different design
philosophy from that of Python.

> (Wikipedia's characterization) with more uniform syntax than Python and
> like Python, also with one preferred way to scan sequences (based on my
> memory of Scheme use in the original SICP, co-authored by the same
> purported quote author, and also confirmed by Wikipedia).

There is no one preferred way to scan sequences in Scheme. In fact,
if you were to take SICP at MIT, as I did when I was a freshman, you
would find that many of the problem sets would require you to solve a
problem in several different ways, so you would learn that there are
typically a number of different reasonable ways to approach a problem.
E.g., one of the first problem sets would have you implement something
both iteratively and recursively. I recall another problem set where
we had to find the way out of a maze first using a depth-first search
and then using a breadth-first search.
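As a hedged illustration (the actual SICP problem sets are not
reproduced here), the iterative-versus-recursive exercise might look
like this in Python:

```python
# A hypothetical example of solving the same problem two ways, in the
# spirit of the course exercises described above.

def factorial_recursive(n):
    # Direct recursive definition: n! = n * (n-1)!
    return 1 if n == 0 else n * factorial_recursive(n - 1)

def factorial_iterative(n):
    # Iterative accumulation of the same product.
    result = 1
    for i in range(2, n + 1):
        result *= i
    return result

assert factorial_recursive(6) == factorial_iterative(6) == 720
```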

> | [Steele quote deleted]
> | Do you now see how Scheme's syntax extension mechanism is relevant?

> No. This just partly explains why Scheme gets away with being
> minimalist. I explicitly referred to the core language as delivered
> and as used in SICP.

I suggest that you haven't yet grokked the Weltanschauung of Scheme.
Scheme aficionados would not typically insist that a proposed language
feature is not good because it violates anything like an "only one
obvious way" rule. Rather they would argue that if it can be
implemented as functions and/or macros, then it *should* be implemented
that way, rather than polluting the core language. The new facility
should then be included in a library.

|>oug

Neil Cerutti

unread,
Jun 19, 2007, 2:04:58 PM6/19/07
to
On 2007-06-19, Douglas Alan <do...@alum.mit.edu> wrote:

> "Terry Reedy" <tjr...@udel.edu> writes:
>> At the end, I added as a *side note* the irony that the
>> purported author was the co-developer of Scheme, another
>> 'minimalist algorithm language
>
> Sussman's statements are not ironic because Scheme is a
> language that is designed to be extended by the end-user (even
> syntactically), while keeping the core language minimal. This
> is a rather different design philosophy from that of Python.

Which version of Scheme, though? Scheme has only formally had macros
since R4RS, and then only as an extension. Macros are a late
addition to Scheme, rather than a founding feature.

Python could conceivably end up in the same position 15 years
from now, with macros a well-established late-comer, as
generators have become.

> I suggest that you haven't yet grokked the Weltanschauung of
> Scheme. Scheme aficionados would not typically insist that a
> proposed language feature is not good because it violates
> anything like an "only one obvious way" rule. Rather they
> would argue that if it can be implemented as functions and/or
> macros, then it *should* be implemented that way, rather than
> polluting the core language. The new facility should then be
> included in a library.

The SRFIs are cool.

The last time I dipped my toe into the Scheme newsgroup, I was
overwhelmed by the many impractical discussions of Scheme's dark
corners. Python is either much more free of dark corners, or else
simply doesn't attract that kind of aficionado.

--
Neil Cerutti
Let us join David and Lisa in the celebration of their wedding and bring their
happiness to a conclusion. --Church Bulletin Blooper

Douglas Alan

unread,
Jun 19, 2007, 5:46:35 PM6/19/07
to
Neil Cerutti <hor...@yahoo.com> writes:

> |>oug writes:

>> Sussman's statements are not ironic because Scheme is a
>> language that is designed to be extended by the end-user (even
>> syntactically), while keeping the core language minimal. This
>> is a rather different design philosophy from that of Python.

> Which version Scheme, though? Scheme has only formally had macros
> since R4RS, and then only as an extension. Macros are an extension
> to Scheme, rather than a founder.

Macros were only *standardized* in Scheme with R4RS. This is because
they wanted to figure out the "right" way to do macros before putting
it in stone. (Common Lisp-like non-hygienic macros were considered
inelegant.) All the major implementations of Scheme that I know of
implemented some form of powerful macro mechanism. N.b., Rabbit,
which was Guy Steele's implementation of Scheme, and completed long,
long before the R4RS standard. (Guy Steele was one of the two
inventors of Scheme.) And, as far as I am aware, the plan was always
to eventually come up with a macro mechanism that was as elegant as
the rest of Scheme. The problem with this approach was that achieving
this daunting goal turned out to take quite a while.

> Python could conceivably end up in the same position 15 years
> from now, with macros a well-established late-comer, as
> generators have become.

That would be very cool. The feeling I get, however, is that there
would be too much complaining from the Python community about how such
a thing would be "un-Pythonic".

> The SRFIs are cool.

> The last time I dipped my toe into the Scheme newsgroup, I was
> overwhelmed by the many impractical discussions of Scheme's dark
> corners. Python is either much more free of dark corners, or else
> simply doesn't attract that kind of aficionado.

I don't really think that Scheme itself has many dark corners -- it's
just that being basically a pristine implementation of lambda
calculus, Scheme lets you directly explore some pretty mind-bending
stuff. I would agree that most of that kind of stuff is not
particularly practical, but it can be fun in a hackerly,
brain-expanding/brain-teaser kind of way.

I think that most people who program in Scheme these days don't do it
to write practical software. They either do it to have fun, or for
academic purposes. On the other hand, most people who program in
Python are trying to get real work done. Which is precisely why I
program a lot in Python and very little in Scheme these days. It's
nice to have the batteries included.

|>oug

Terry Reedy

unread,
Jun 19, 2007, 6:21:02 PM6/19/07
to pytho...@python.org

"Douglas Alan" <do...@alum.mit.edu> wrote in message
news:lcr6o7g...@gaffa.mit.edu...

| But it wasn't based on a "misquote of Tim Peters"; it was based on an
| *exact* quotation of Tim Peters.

My mistake. The misquotation is in the subject line and others' posts here
and in other threads.

| > and a mischaracterization of Python

Nonetheless, picking on and characterizing Tim's statement as
anti-flexibility and un-scientific is to me writing of a sort that I would
not tolerate from my middle-school child.

I checked the context of the quote, page 3 (and 4) of
http://swiss.csail.mit.edu/classes/symbolic/spring07/readings/robust-systems.pdf

"(3) Indeed, one often hears arguments against building flexibility into
an engineered system. For example, in the philosophy of the computer
language Python it is claimed: 'There should be one - and preferably
only one - obvious way to do it.'[25] Science does not usually proceed
this way: In classical mechanics, for example, one can construct
equations of motion using Newtonian vectoral mechanics, or using a
Lagrangian or Hamiltonian variational formulation.[30] In the cases
where all three approaches are applicable they are equivalent, but each
has its advantages in particular contexts."

This footnote is in the context of a section discussing redundancy in
biological systems (and the usual lack thereof in engineered physical
systems). Python is an algorithm language and a tool used to engineer
information systems, which is something different. The next sections are
about exploratory behavior. Languages do not 'behave', let alone
'explore'.

Leaving that aside, if you consider vector mechanics and variational
formulations as roughly analogous to functional versus procedural
programming (or OOP), then Python has both (or all three). Or if you
consider them to be different languages for expressing and solving
problems, then Python is just one language, but one that intentionally
works well with some others. So Python seems to have the sort of
flexibility that he implicitly claims it does not.

The general problems of software inflexibility that he mentioned in a
previous section have nothing specific to do with Python.

When he gets to solutions, one long section (page 13) somewhat specific to
languages, versus applications thereof, is about extensible generic
operations "where it is possible to define what is meant by addition,
multiplication, etc., for new datatypes unimagined by the language
designer." Well, golly gee. Guess what? Not only is Python code generic
unless specialized (with isinstance tests, for instance), but it is highly
extensible for new datatypes, just as Sussman advocates. There is a
special method for just about every syntactic construct and builtin
function. And 3.0 may add a new generic function module to dispatch on
multiple arguments and possibly predicates.
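A minimal sketch of the extensibility described here (the `Vec2` class
is a hypothetical example, not from the thread): special methods let a
user-defined datatype participate in Python's built-in operators.

```python
# A new datatype, unimagined by the language designer, that plugs into
# Python's built-in syntax via special methods, so `+` and `*` just
# work on it.

class Vec2:
    def __init__(self, x, y):
        self.x, self.y = x, y

    def __add__(self, other):      # invoked by v + w
        return Vec2(self.x + other.x, self.y + other.y)

    def __mul__(self, scalar):     # invoked by v * k
        return Vec2(self.x * scalar, self.y * scalar)

    def __eq__(self, other):       # invoked by v == w
        return (self.x, self.y) == (other.x, other.y)

assert Vec2(1, 2) + Vec2(3, 4) == Vec2(4, 6)
assert Vec2(1, 2) * 3 == Vec2(3, 6)
```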

But what naive reader could possibly guess any of this from the single
mention of Python quoted above?

Terry Jan Reedy

Douglas Alan

unread,
Jun 19, 2007, 7:34:27 PM6/19/07
to
"Terry Reedy" <tjr...@udel.edu> writes:

> Nonetheless, picking on and characterizing Tim's statement as
> anti-flexibility and un-scientific is to me writing of a sort that I
> would not tolerate from my middle-school child.

Now it is you who are taking Sussman's comments out of context.
Sussman does not claim that Python is "un-scientific" -- he merely
holds it up as a example of canonical engineering that eschews the
kinds of flexibility that is to be found in biological systems and in
the practice of science. In this regard, Sussman is merely
criticizing engineering "best practices" in general, not Python in
specific, and is arguing that if we want to engineer systems that are
as robust as biological systems then we need to start exploring
radically different approaches to software design.

> Python is an algorithm language and a tool used to engineer
> information systems, which is something different. The next
> sections are about exploratory behavior. Languages do not 'behave',
> let alone 'explore'.

I think you are missing the point. Sussman is making a broad
criticism of software engineering in general, as it is understood
today. The essay in questions was written for a graduate-level MIT
computer science class that aims to explore potential avenues of
research into new languages and approaches that encourage and
facilitate more robust software systems. As good as Python is, it is
still largely an encapsulation of the best ideas about software
engineering as it was understood in the early 80's. We're now 20+
years on, and it behooves our future Computer Science researchers to
consider if we might not be able to do better than we could in 1984.

> So Python seems to have the sort of flexibility that he implicitly
> claims it does not.

Python most certainly does *not* have the type of flexibility that he
is talking about. For instance, one of the things that he talks about
exploring for more robust software systems is predicate dispatching,
which is an extension of multiple dispatch. Although you might be
able to cobble something like this together in Python, it would end up
being very cumbersome to use. (E.g., Guido wrote an essay on doing
multiple dispatch in Python, but you wouldn't actually want to write
Python code that way, because it would be too syntactically
cumbersome.) In dialects of Lisp (such as Scheme), however,
subsystems to explore such alternative programming models can be
written completely within the language (due, in part to their syntax
extension facilities). This is how the Common Lisp Object System came
to be, for instance. CLOS supports all sorts of OO stuff that even
Python doesn't, and yet Lisp without CLOS isn't even an OO language.
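A rough sketch of the sort of "cobbled together" multiple dispatch
alluded to above; all names here (`defmethod`, `collide`, the toy
classes) are hypothetical illustrations, not a real library API.

```python
# Dispatch on the types of *both* arguments via a registry keyed on a
# tuple of types -- workable in Python, but syntactically clumsier
# than a language-level multimethod facility.

_registry = {}

def defmethod(*types):
    def register(fn):
        _registry[types] = fn
        return fn
    return register

def collide(a, b):
    fn = _registry.get((type(a), type(b)))
    if fn is None:
        raise TypeError("no method for %r" % ((type(a), type(b)),))
    return fn(a, b)

class Asteroid: pass
class Ship: pass

@defmethod(Asteroid, Ship)
def _(a, s):
    return "ship destroyed"

@defmethod(Asteroid, Asteroid)
def _(a, b):
    return "asteroids merge"

assert collide(Asteroid(), Ship()) == "ship destroyed"
```

Note what this toy version lacks: inheritance-aware lookup, predicate
guards, and any pleasant surface syntax, which is roughly the point
being made.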

> The general problems of software inflexibility that he mentioned in
> a previous section have nothing specific to do with Python.

Right. And he never said they did.

> When he gets to solutions, one long section (page 13) somewhat
> specific to languages, versus applications thereof, is about
> extensible generic operations "where it is possible to define what
> is meant by addition, multiplication, etc., for new datatypes
> unimagined by the language designer." Well, golly gee. Guess what?
> Not only is Python code generic unless specialized (with isinstance
> tests, for instance), but it is highly extensible for new datatypes,
> just as Sussman advocates. There is a special method for just about
> every syntactic construct and builtin function. And 3.0 may add a
> new generic function module to dispatch on multiple arguments and
> possibly predicates.

You didn't read the paper very carefully. Sussman points out that
traditional OO languages are up to this sort of stuff to some extent,
but not to the extent which he thinks is required to solve future
challenges. He thinks, for instance, that predicate dispatching,
backtracking, and first-class continuations will be required.

|>oug

Steven D'Aprano

unread,
Jun 19, 2007, 8:01:13 PM6/19/07
to
On Tue, 19 Jun 2007 17:46:35 -0400, Douglas Alan wrote:

> I think that most people who program in Scheme these days don't do it
> to write practical software. They either do it to have fun, or for
> academic purposes. On the other hand, most people who program in
> Python are trying to get real work done. Which is precisely why I
> program a lot in Python and very little in Scheme these days. It's
> nice to have the batteries included.

So, once you've succeeded in your campaign to make Python more like
Scheme, what language will you use for getting real work done?

And how long will it take before Schemers start agitating for it to become
more like Scheme?

There is a huge gulf between the claim that Python needs to be more
Scheme-like, and the fact that by your own admission you use Python, not
Scheme, for real work. What benefit will be gained? The ability to
"directly explore some pretty mind-bending stuff ... in a hackerly,
brain-expanding/brain-teaser kind of way"?


--
Steven.

Douglas Alan

unread,
Jun 19, 2007, 8:16:28 PM6/19/07
to
Steven D'Aprano <st...@REMOVE.THIS.cybersource.com.au> writes:

> On Tue, 19 Jun 2007 17:46:35 -0400, Douglas Alan wrote:

>> I think that most people who program in Scheme these days don't do it
>> to write practical software. They either do it to have fun, or for
>> academic purposes. On the other hand, most people who program in
>> Python are trying to get real work done. Which is precisely why I
>> program a lot in Python and very little in Scheme these days. It's
>> nice to have the batteries included.

> So, once you've succeeded in your campaign to make Python more like
> Scheme, what language will you use for getting real work done?

The problem with using Scheme for real work is that it doesn't come
with enough batteries included and there isn't a big enough
community behind it that uses it for real work.

Also, the Scheme standard has progressed at a terribly slow pace. I
have heard that the reason for this is due to the way that its
standardizing committees were set up.

One of the whole reasons to use Lisp is for its extensible syntax, but
it took more than a decade for macros to make it into the Scheme
standard. And without a standard macro system, there was no standard
library -- not even for doing OO programming.

> And how long will it take before Schemers start agitating for it to
> become more like Scheme?

> There is a huge gulf between the claim that Python needs to be more
> Scheme-like, and the fact that by your own admission you use Python,
> not Scheme, for real work. What benefit will be gained? The ability
> to "directly explore some pretty mind-bending stuff ... in a
> hackerly, brain-expanding/brain-teaser kind of way"?

Well, go to MIT and take SICP and then the graduate-level sequel to
the class, Adventures in Advanced Symbolic Programming, and then
you'll see what some of the advantages would be.

A good multimethod system, e.g., would make Python a significantly
nicer language for my purposes, for instance.
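As a historical footnote from the editor: Python did later grow a
standard single-argument generic-function facility,
`functools.singledispatch` (added in Python 3.4, well after this
thread) -- a partial step toward the multimethods discussed here,
since it dispatches only on the type of the first argument.

```python
from functools import singledispatch

# A generic function with per-type implementations registered after
# the fact; unregistered types fall through to the default body.

@singledispatch
def describe(obj):
    return "some object"

@describe.register(int)
def _(obj):
    return "an int"

@describe.register(list)
def _(obj):
    return "a list"

assert describe(3) == "an int"
assert describe([1]) == "a list"
assert describe(3.5) == "some object"
```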

For the record, I have a huge problem with NIH-syndrome, and think
that every programming language in the world could learn a thing or
two from what other languages have gotten right.

|>oug

Steve Howell

unread,
Jun 19, 2007, 9:40:27 PM6/19/07
to Douglas Alan, pytho...@python.org

--- Douglas Alan <do...@alum.mit.edu> wrote:
>
> For the record, I have a huge problem with
> NIH-syndrome, and think
> that every programming language in the world could
> learn a thing or
> two from what other languages have gotten right.
>

Which should include natural languages, in my opinion.
As somebody who has long abandoned Perl, I do think
the following has some elegance:

die if # ...




Paul Rubin

unread,
Jun 19, 2007, 10:22:33 PM6/19/07
to
Steven D'Aprano <st...@REMOVE.THIS.cybersource.com.au> writes:
> So, once you've succeeded in your campaign to make Python more like
> Scheme, what language will you use for getting real work done?
>
> And how long will it take before Schemers start agitating for it to become
> more like Scheme?

While you've dutifully searched the horizon for such intrusion, the
rot has been quietly happening from within ;-). Nested scopes,
first-class functions and closures, internal lambdas, and lazy
evaluation streams (iterators) are all Schemish incursions into
Python. List comprehensions and genexps come from even further in the
functional-programming "beyond". Soon Python will implement a
type system based on the Lambda Cube, well maybe not. :)
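A quick sketch (in modern Python 3 syntax, which postdates this 2007
thread -- `nonlocal` arrived in Python 3.0) of the Schemish features
Paul lists: nested scopes with closures, first-class lambdas, and lazy
iterator streams consumed through genexps.

```python
def make_counter():
    count = 0
    def bump():            # closure over the enclosing scope
        nonlocal count
        count += 1
        return count
    return bump

c = make_counter()
assert (c(), c(), c()) == (1, 2, 3)

square = lambda x: x * x   # a first-class, anonymous function
assert square(7) == 49

def naturals():            # lazy, Scheme-stream-like iterator
    n = 0
    while True:
        yield n
        n += 1

# A genexp filters the infinite stream without ever materializing it.
evens = (n for n in naturals() if n % 2 == 0)
assert [next(evens) for _ in range(4)] == [0, 2, 4, 6]
```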
