I wonder if it's possible to have a Python that's completely (or at
least for the most part) implemented in C, just like PHP - I think
this is where PHP gets its performance advantage. Or maybe I'm wrong
because the core modules that matter are already in C and those Python
files are really a thin wrapper. Anyhow, it would be ideal if Python
had performance similar to Java, with both being interpreted languages.
Jack
Prove it. ;-)
Seriously, switching to more C code will cause development to bog down
because Python is so much easier to write than C.
>I wonder if it's possible to have a Python that's completely (or at
>least for the most part) implemented in C, just like PHP - I think
>this is where PHP gets its performance advantage. Or maybe I'm wrong
>because the core modules that matter are already in C and those Python
>files are really a thin wrapper. Anyhow, it would be ideal if Python
>had performance similar to Java, with both being interpreted languages.
Could you provide some evidence that Python is slower than Java or PHP?
--
Aahz (aa...@pythoncraft.com) <*> http://www.pythoncraft.com/
"Typing is cheap. Thinking is expensive." --Roy Smith
Writing everything in C might be possible - but is a daunting task & not
justified by the results. And wherever the standard library makes use
of the flexibility of Python, it's questionable whether there would be any
performance gain at all.
But what REALLY is questionable is the alleged performance advantage -
how do you back that up? According to the well-known (and surely
limited) computer language shootout
http://shootout.alioth.debian.org/gp4/benchmark.php?test=all&lang=all
Python is roughly 25% faster than PHP. Granted, this is just one
benchmark, with questionable real-life relevance. But where do you get
the impression that PHP is faster than Python, then?
diez
I guess this is subjective :) - that's what I felt in my experience
with web applications developed in Python and PHP. I wasn't able to
find a direct comparison online.
> Seriously, switching to more C code will cause development to bog down
> because Python is so much easier to write than C.
I understand. Python modules implemented in Python - this is how
Python gets its really rich library.
>>I wonder if it's possible to have a Python that's completely (or at
>>least for the most part) implemented in C, just like PHP - I think
>>this is where PHP gets its performance advantage. Or maybe I'm wrong
>>because the core modules that matter are already in C and those Python
>>files are really a thin wrapper. Anyhow, it would be ideal if Python
>>has performance similar to Java, with both being interpreted languages.
>
> Could you provide some evidence that Python is slower than Java or PHP?
I think most Java-Python benchmarks you can find online will indicate
that Java is 3-10 times faster. A few here:
http://mail.python.org/pipermail/python-list/2002-January/125789.html
http://blog.snaplogic.org/?p=55
Here's an article that shows the new version of Ruby is
faster than Python in some aspects (they are catching up :)
http://antoniocangiano.com/2007/11/28/holy-shmoly-ruby-19-smokes-python-away/
> I wonder if it's possible to have a Python that's completely (or at
> least for the most part) implemented in C, just like PHP - I think
> this is where PHP gets its performance advantage. Or maybe I'm wrong
PHP is slower than Python.
Please compare the number of serious bugs and vulnerabilities in PHP and
Python.
> I understand. Python modules implemented in Python - this is how
> Python gets its really rich library.
Correct
Python code is much easier to write and multiple times easier to get
right than C code. Everybody with a few months of Python experience can
contribute to the core but it requires multiple years of C and Python
experience to contribute to the C implementation.
> I think most Java-Python benchmarks you can find online will indicate
> that Java is 3-10 times faster. A few here:
> http://mail.python.org/pipermail/python-list/2002-January/125789.html
> http://blog.snaplogic.org/?p=55
There are lies, damn lies and benchmarks. :)
Pure Python code is not going to beat Java code until the Python core
gets a JIT compiler. If you want fair results you have to either
disable the JIT in Java or use Psyco for Python. Otherwise you are
comparing the quality of one language implementation to the quality of a
JIT compiler.
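(For what it's worth, enabling Psyco is typically a one-liner. The sketch
below is only an illustration of the usual pattern, not code from any of the
benchmarks above; it guards the import so the script still runs where Psyco
isn't installed.)

# Sketch: bind Psyco's specializing compiler if it is available.
try:
    import psyco
    psyco.full()        # or psyco.profile() for a more selective approach
except ImportError:
    pass

def work(n):
    # a CPU-bound pure-Python loop that benefits from specialization
    total = 0
    for i in xrange(n):
        total += i * i
    return total

print work(10 ** 6)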
> Here's an article that shows the new version of Ruby is
> faster than Python in some aspects (they are catching up :)
> http://antoniocangiano.com/2007/11/28/holy-shmoly-ruby-19-smokes-python-away/
The Ruby developers are allowed to be proud. They were able to optimize
some aspects of the implementation to get one algorithm about 14 times
faster. That's good work. But why was it so slow in the first place?
Nevertheless, it is just one algorithm that beats Python in an area that
is well known to be slow. Python's numbers are several times slower
than C code because the overhead of the dynamic language throws lots of
data out of the cache line. If you need fast and highly optimized int
and floating point operations you can rewrite the algorithm in C and
create a Python interface for it.
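(If anyone wants a concrete picture of that last step: one low-effort way to
hook hand-written C into Python is ctypes. The snippet below is only a sketch
and assumes a hypothetical shared library libfast.so exporting
double dot(double *a, double *b, int n); a full extension module or Pyrex
would be the more polished route.)

# Sketch: calling a hand-written C routine through ctypes.
import ctypes

lib = ctypes.CDLL("./libfast.so")      # hypothetical library name
lib.dot.restype = ctypes.c_double

def dot(xs, ys):
    n = len(xs)
    Arr = ctypes.c_double * n          # a C array type of length n
    # ctypes passes array instances as pointers to their first element
    return lib.dot(Arr(*xs), Arr(*ys), n)

print dot([1.0, 2.0, 3.0], [4.0, 5.0, 6.0])   # 32.0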
About half or fewer are modules meant to be imported into programs. The
rest comprise utility programs and test programs. The core interpreter is
all C.
| because the core modules that matter are already in C
Correct. There are about 20 'builtin' modules written in C, either because
they need low level access to the machine or for speed concerns. Third
party modules not included in the standard distribution but definitely part
of the Python universe are also a mix.
If people wrote everything in C for speed, there would be no need for
Python!!
And don't say that you want everyone else to write in C while you enjoy the
pleasures of Python ;-).
tjr
The second article does have a column for Psyco. It helps in some areas
but is still not good enough to stand up against Java. Plus, Psyco is not
mainstream and has stopped development.
I'm also wondering: if Psyco is the right way to go, is there any reason
it's not being integrated into standard Python?
<scooby-whruu??>
What makes you think it has stopped development? I just swung by the
SF project page, and its most recent news post was just 2 months ago.
Psyco may not be in the standard Python distribution, but it is
definitely a fixture of the Python landscape, which is about as close
to main stream as you can get.
-- Paul
Its further development is effectively part of the PyPy project, which
includes some JIT work.
| I'm also wondering: if Psyco is the right way to go, is there any reason
| it's not being integrated into standard Python?
It does not accelerate everything and may slow some things down, it was (is?)
not compatible with everything, it bloats space requirements, it competes
with C/Fortran-coded extensions (such as NumPy), it was originally i386
specific, its development cycle was much faster than Python's release
cycle, ...
The cutoff between what goes in the core/stdlib is arbitrary in borderline
cases, but some cutoff is necessary.
tjr
Maybe because of this line:
"Psyco is a reasonably complete project. I will not continue to
develop it beyond making sure it works with future versions of Python.
My plans for 2006 are to port the techniques implemented in Psyco to
PyPy. PyPy will allow us to build a more flexible JIT specializer,
easier to experiment with, and without the overhead of having to keep
in sync with the evolutions of the Python language."
on 12/10/2007 05:14 AM Jack wrote :
> I wonder if it's possible to have a Python that's completely (or at
> least for the most part) implemented in C, just like PHP - I think
> this is where PHP gets its performance advantage. Or maybe I'm wrong
> because the core modules that matter are already in C and those Python
> files are really a thin wrapper. Anyhow, it would be ideal if Python
> had performance similar to Java, with both being interpreted languages.
To compare the speed of the two you need a lot of normalization; they are
two completely different quantities, so there is no way to make an absolute
comparison.
Both PHP and Python are excellent, but each has some applications in which
it defeats the other.
If you want something really fast, use assembly. What??? Why are you
shocked!! You don't want to pay tax ;)
It's a matter of preference and need after all. Don't be misled by all
the fuss about speed or the mightiness of any programming language; they
are just tools. You are the secret!!
Which "performance advantage" ???
> Or maybe I'm wrong
> because the core modules that matter are already in C and those Python
> files are really a thin wrapper. Anyhow, it would be ideal if Python
> had performance similar to Java, with both being interpreted languages.
<mode="pedantic">
Neither Python nor Java are "interpreted languages", because there's no
such thing as an "interpreted language" - being 'interpreted' (whatever
the definition of 'interpreted') is a quality of an implementation, not
of a language. wrt/ to CPython and Sun's Java implementation, they are
both byte-code compiled - which, according to usual definitions, is not
quite the same thing !-)
</mode>
Now most of the performance difference is due to Java being much less
dynamic than Python, which allows both the compiler and the VM to do many
more optimizations - especially JIT compilation. It's much harder to
implement such optimizations for a language as dynamic as Python (IIRC,
some language/compiler gurus here mentioned that even compiling Python
to native binary code would not buy that much gain).
Actually, it seems that taking the opposite approach - that is, trying
to implement as much as possible of Python in Python - would be more
promising wrt/ possible JIT compilation, cf the Pypy project.
There's some choice nonsense here, albeit on a different topic:
"Coding for wxwidgets, using a QT or GTK bridge, or using TCL/TK is
hardly an optimal solution when writing complex graphical
applications, and Java wins in this area, despite there comically
being many problems with the look and feel of Java applications."
Clearly an individual who hasn't actually used any of the Python GUI
development solutions, given the choice of words: "bridge", "hardly an
optimal solution"; virtually intimating that you'd be doing malloc/
free or new/delete all the time. Plus throwaway remarks of the form
"XYZ wins" tend to suggest beliefs with little substance and a
continual need for self-reassurance on such matters.
Anyway, back to the topic at hand...
> Here's an article that shows the new version of Ruby is
> faster than Python in some aspects (they are catching up :)
>
> http://antoniocangiano.com/2007/11/28/holy-shmoly-ruby-19-smokes-pyth...
It's evident that the next mainstream version of Ruby will have
various optimisations around recursive operations - something that has
generally been rejected for CPython. Of course, the mainstream Ruby
implementation has had a lot of scope for improvement:
http://shootout.alioth.debian.org/gp4sandbox/benchmark.php?test=all&lang=all
What disappoints me somewhat is that most of the people interested in
taking Python performance to the next level are all outside (or on the
outer fringes of) the CPython core development group: PyPy and Shed
Skin are mostly distinct technologies; Psyco integrates with CPython
but hasn't improved the "out of the box" situation; Pyrex is really a
distinct tool, being more like a convenient wrapper generator than a
bolt-on high performance engine for CPython. Language implementations
like that of Lua have seen more progress on integrating solutions for
performance, it would seem.
As for a C-Python of the form requested, I suppose tools like Shed
Skin and RPython fit the bill somewhat, if a transparent solution is
needed where one writes in Python and it magically becomes fairly
efficient C or C++. Otherwise, Pyrex provides more explicit control
over what gets written in C and what remains in Python.
Paul
I'd like to provide some evidence that Python is *faster* than Java.
EVE online...emulate that in JAVA please.
-1 This would seriously muck-up the evolution of the language.
Having a few building blocks written in C provides a basis
for writing very fast pure python (for example, sets, heapq,
itertools).
Beyond those building blocks, it is a step backwards to write in C.
Also, if you really need performance, the traditional solutions are to
use tools like Psyco or Pyrex.
Raymond
Instead of resorting to benchmarks, I recommend that you read the
following:
http://highscalability.com/youtube-architecture
There are no comparisons there, just a sample of what python and
psyco can achieve. For a language that isn't designed with speed in
mind, I think that's quite impressive.
If yes, benchmarks are not an argument. Else, you'll have a hard time
making your point !-)
(hint: doing objective benchmarking is really a difficult art)
> - that's what I felt in my experience
> with web applications developed in Python and PHP. I wasn't able to
> find a direct comparison online.
Could it be the case that you are comparing Python CGI scripts with
mod_php ? Anyway, since php is also usable (hem... maybe not the
appropriate word) outside Apache, it should be quite easy to make
a more serious test ?
Seriously: I never saw any benchmark where php was faster than Python
for any kind of stuff - unless of course you're trying to compare Zope
running as a CGI script with a hello world PHP script run by mod_php.
Then benchmark the time taken for the interpreter (oops, sorry: "VM") to
start !-)
> I understand that the standard Python distribution is considered
> the C-Python. However, the current C-Python is really a combination
> of C and Python implementation. There are about 2000 Python files
> included in the Windows version of Python distribution. I'm not sure
> how much of the C-Python is implemented in C but I think the more
> modules implemented in C, the better performance and lower memory
> footprint it will get.
Donald Knuth, one of the fathers of modern computer science, is famous
for stating that "premature optimization is the root of all evil in
computer science." A typical computer program tends to have
bottlenecks that accounts for more than 90% of the elapsed run time.
Directing your optimizations anywhere else is futile.
Writing a program in C will not improve the speed of your hardware. If
the bottleneck is a hard disk or a network connection, using C will not
change that. Disk I/O is a typical example of that. It is not the
language that determines the speed at which Python or C can read from
a disk. It is the disk itself.
I had a data visualization program that was slowed down by the need to
move hundreds of megabytes of vertex data to video RAM. It would
obviously not help to make the handful of OpenGL calls from C instead
of Python. The problem was the amount of data and the speed of the
hardware (ram or bus). The fact that I used Python instead of C
actually helped to make the problem easier to solve.
We have seen several examples that 'dynamic' and 'interpreted'
languages can be quite efficient: There is an implementation of Common
Lisp - CMUCL - that can compete with Fortran in efficiency for
numerical computing. There are also versions of Lisp that can compete
with the latest versions of JIT-compiled Java, e.g. SBCL and Allegro.
As it happens, SBCL and CMUCL are mostly implemented in Lisp. The issue
of speed for a language like Python has a lot to do with the quality
of the implementation. What really makes CMUCL shine is the compiler
that emits efficient native code on the fly. If it is possible to make
a very fast Lisp, it should be possible to make a very fast Python as
well. I remember people complaining 10 years ago that 'Lisp is so
slow'. A huge effort has been put into making Lisp efficient enough
for AI. I hope Python some day will gain a little from that effort as
well.
We have a Python library that allows us to perform a wide range of
numerical tasks at 'native speed': NumPy (http://www.scipy.org). How
such array libraries can be used to get excellent speedups is
explained here: http://home.online.no/~pjacklam/matlab/doc/mtt/index.html
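(A tiny example of the kind of vectorization those links describe - the point
being that the million-element loop runs inside NumPy's compiled code rather
than in the Python interpreter. Just a sketch, no timings claimed.)

import numpy

def slow_norm2(xs):
    # the interpreter executes every iteration of this loop
    total = 0.0
    for x in xs:
        total += x * x
    return total

def fast_norm2(a):
    return numpy.dot(a, a)      # one call, the loop happens in compiled code

a = numpy.arange(1000000, dtype=float)
print slow_norm2(a)
print fast_norm2(a)             # same result, much faster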
We obviously need more effort to make Python more efficient for CPU
bound tasks. Particularly JIT compilation like Java, compilation like
Lisp or data specialization like Psyco.
But writing larger parts of the standard library in C is not a
solution.
> Nevertheless it is just one algorithm that beats Python in an area that
> is well known to be slow. Python's numbers are several factors slower
> than C code because the overhead of the dynamic language throws lots of
> data out of the cache line. If you need fast and highly optimized int
> and floating point operations you can rewrite the algorithm in C and
> create a Python interface for it.
Lisp is a dynamically typed language. CMUCL can compete with Fortran
for numerical work. SBCL can compete with the Java server VM. If the
current CPython VM throws data out of the cache line, then it must be
a design flaw in the VM.
This last one points to a funny anthology piece of a paper (year 2k,
Python 1.5.2 and Java 1.1 - electronic paper, of course), if you read past
the "benchmark" part (which BTW doesn't pretend to be that serious - the
'do nothing' test is just hilarious). I really like this part (sorry, I
only kept the titles - but you can have a look at the whole text, the url is
below):
"""
* Unresolved Release-Critical Bugs in Java*
1. Don't Use Swing.
(snip rant about Swing memory leaks, just kept this:)
The AWT does this too, but you could probably write an application that
ran for longer than 20 minutes using it.
2. Don't allocate memory.
(snip)
3. Don't use java.lang.String.intern
(snip)
4. Don't expect your app to run
(snip)
5. Don't print anything
(snip)
6. Don't write large apps
(snip)
7. Don't write small apps
(snip)
"""
Heck... This sure looks like a very high price to pay wrt/ "raw speed"
gain !-)
Oh, yes, the url:
http://www.twistedmatrix.com/users/glyph/rant/python-vs-java.html
Yeps, that's it : twisted. No surprise the guy "decided to move Twisted
Reality to python."
It's really worth reading. While I was by that time a total newbie to
programming, and will probably never be anything close to the author, it
looks like we took a similar decision at the same time, and mostly based
on similar observations : *in practice*, Java sucks big time - when
Python JustWorks(tm).
From my .sig database:
"Premature optimization is the root of all evil in programming."
--C.A.R. Hoare (often misattributed to Knuth, who was himself quoting
Hoare)
--
Aahz (aa...@pythoncraft.com) <*> http://www.pythoncraft.com/
Or a lack of time and money. Lisp is one of the older programming
languages around, and at one time had BigBucks(tm) invested in it to try
and make it practically usable.
> Or a lack of time and money. Lisp is one of the older programming
> languages around, and at a time had BigBucks(tm) invested on it to try
> and make it practically usable.
Yes. But strangely enough, the two Lisp implementations that really
kick ass are both free and not particularly old. CMUCL and SBCL prove
that you can make a dynamic language implementation extremely
efficient if you try hard enough. There are also implementations of
Scheme (e.g. Bigloo) that show the same.
> "Premature optimization is the root of all evil in programming."
> --C.A.R. Hoare (often misattributed to Knuth, who was himself quoting
> Hoare)
Oh, it was Hoare? Thanks. Anyway, it doesn't change the argument that
optimizing in the wrong places is a waste of effort.
> >http://antoniocangiano.com/2007/11/28/holy-shmoly-ruby-19-smokes-pyth...
>
> The Ruby developers are allowed to be proud. They were able to optimize
> some aspects of the implementation to get one algorithm about 14 times
> faster. That's good work. But why was it so slow in the first place?
The thing to notice here is that Cangiano spent 31.5 seconds computing
36 Fibonacci numbers in Python and 11.9 seconds doing the same in
Ruby. Those numbers are ridiculous! The only thing they prove is that
Cangiano should not be programming computers. Anyone getting such
results should take a serious look at their algorithm instead of
blaming the language. I don't care if it takes 31.5 seconds to compute
36 Fibonacci numbers in Python 2.5.1 with the dumbest possible
algorithm.
Quite so.
Take something like
http://aspn.activestate.com/ASPN/Cookbook/Python/Recipe/498110
and then modify the Python code from the "Ruby smokes Python" article by
the addition of @memoize(3) to decorate the otherwise unchanged fib
function: the Python runtime drops down to 0.002 seconds.
That is just slightly better than Ruby's 11.9 seconds although I'm sure the
Ruby code would also gain as much from a memoize decorator.
from memoize import memoize

@memoize(3)
def fib(n):
    if n == 0 or n == 1:
        return n
    else:
        return fib(n-1) + fib(n-2)

from time import clock
start = clock()
for i in range(36):
    print "n=%d => %d" % (i, fib(i))
print clock()-start
When I run this (with output directed to a file: I'm not trying to time
windows console speed), the output is:
n=0 => 0
n=1 => 1
n=2 => 1
n=3 => 2
n=4 => 3
n=5 => 5
n=6 => 8
n=7 => 13
n=8 => 21
n=9 => 34
n=10 => 55
n=11 => 89
n=12 => 144
n=13 => 233
n=14 => 377
n=15 => 610
n=16 => 987
n=17 => 1597
n=18 => 2584
n=19 => 4181
n=20 => 6765
n=21 => 10946
n=22 => 17711
n=23 => 28657
n=24 => 46368
n=25 => 75025
n=26 => 121393
n=27 => 196418
n=28 => 317811
n=29 => 514229
n=30 => 832040
n=31 => 1346269
n=32 => 2178309
n=33 => 3524578
n=34 => 5702887
n=35 => 9227465
0.00226425425578
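(For readers who don't want to chase the recipe link: a memoizing decorator of
that general shape only takes a few lines. The sketch below is not the
Cookbook recipe - in particular I'm not reproducing whatever its numeric
argument means - it just shows why the exponential fib collapses once results
are cached.)

def memoize(func):
    cache = {}
    def wrapper(*args):
        # compute each argument tuple at most once
        if args not in cache:
            cache[args] = func(*args)
        return cache[args]
    return wrapper

@memoize
def fib(n):
    if n == 0 or n == 1:
        return n
    return fib(n - 1) + fib(n - 2)

print [fib(i) for i in range(36)][-1]   # 9227465, effectively instantly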
Another point is, the reason the ruby code shows such a performance
increase is because of the way it wraps native (C) types for integers
in the new bytecode compiler; i.e., it's a directed optimization,
which the example code exploits to its full extent. But with
dictionary access, for example, python still creams ruby (by a 2:1
factor in my tests). Speaking as someone who uses both python and
ruby, I can say that ruby 1.9 is approaching python's speed, which is
very cool, but is still not quite as fast as python in general (the
whole "smokes python" bit is just propaganda that utilizes a specific
feature vector, and is generally unhelpful).
Regards,
Jordan
> @memoize(3)
> def fib(n):
>     if n == 0 or n == 1:
>         return n
>     else:
>         return fib(n-1) + fib(n-2)
The thing I would do is:
def fibo(n):
    while 1:
        try:
            return fibo.seq[n]
        except AttributeError:
            fibo.seq = [0, 1, 1]
        except IndexError:
            fibo.seq.append( fibo.seq[-2] + fibo.seq[-1] )
Here are some timings I got on my laptop (1.7 GHz Pentium M, Windows
XP, Python 2.5.1), calculating 36 Fibonacci numbers:
First run, initializing cache: 243 µs
Second run, exploiting cache: 28 µs
Third run, exploiting cache: 27 µs
This is 6 orders of magnitude faster than Cangiano's benchmark. That
is a speed up by a factor of a million.
We're ten years into Python, and it's still a naive interpreter.
It's time for a serious optimizing compiler. Shed Skin is going
in the right direction. But for some reason, people seem to dislike the
Shed Skin effort. Its author writes "Am I the only one seeing the potential
of an implicitly statically typed Python-like-language that runs at
practically the same speed as C++?"
"For a set of 27 non-trivial test programs (at about 7,000 lines in total;
... measurements show a typical speedup of 2-40 times over Psyco, about 10 on
average, and 2-220 times over CPython, about 35 on average." So that's
what's possible.
I'm surprised that Google management isn't pushing Guido towards
doing something about the performance problem.
John Nagle
Although sometimes people seem to think that it goes "optimisation is
the root...". The "premature" bit is significant.
This is an absurd misrepresentation of the state of the Python VM.
> It's time for a serious optimizing compiler. Shed Skin is going
> in the right direction. But for some reason, people seem to dislike the
> Shed Skin effort. Its author writes "Am I the only one seeing the potential
> of an implicitly statically typed Python-like-language that runs at
> practically the same speed as C++?"
>
> "For a set of 27 non-trivial test programs (at about 7,000 lines in total;
> ... measurements show a typical speedup of 2-40 times over Psyco, about 10 on
> average, and 2-220 times over CPython, about 35 on average." So that's
> what's possible.
>
... with roughly a hundredth of the python standard library, and a
bunch of standard python features not even possible. I like
generators, thanks.
If Shed Skin can actually match Python's feature set and provide the
performance it aspires to, that's great, and I may even start using it
then. But in the meantime, hardly anything I write is CPU bound, and
when it is I can easily optimize using other mechanisms. Shed Skin
doesn't give me anything that's worth my time improving it, or worth the
restrictions it places on my code. I think JIT is the future of
optimization anyway.
> I'm surprised that Google management isn't pushing Guido towards
> doing something about the performance problem.
>
Assuming your conclusion (ie, that there's a performance problem to do
something about) doesn't prove your case.
> We have seen several examples that 'dynamic' and 'interpreted'
> languages can be quite efficient: There is an implementation of Common
> Lisp - CMUCL - that can compete with Fortran in efficiency for
> numerical computing. There are also versions of Lisp than can compete
> with the latest versions of JIT-compiled Java, e.g. SBCL and Allegro.
> As it happens, SBCL and CMUCL is mostly implemented in Lisp. The issue
> of speed for a language like Python has a lot to do with the quality
> of the implementation. What really makes CMUCL shine is the compiler
> that emits efficient native code on the fly. If it is possible to make
> a very fast Lisp, it should be possible to make a very fast Python as
> well. I remember people complaining 10 years ago that 'Lisp is so
> slow'. A huge effort has been put into making Lisp efficient enough
> for AI. I hope Python some day will gain a little from that effort as
> well.
I've been told that Torbjörn Lager's implementation of the Brill
tagger in Prolog is remarkably fast, but that it uses some
counter-intuitive arrangements of the predicate and argument
structures in order to take advantage of the way Prolog databases are
indexed.
> On 10 Des, 23:54, Bruno Desthuilliers
> <bdesth.quelquech...@free.quelquepart.fr> wrote:
>
>> Or a lack of time and money. Lisp is one of the older programming
>> languages around, and at a time had BigBucks(tm) invested on it to try
>> and make it practically usable.
>
> Yes. But strangely enough, the two Lisp implementations that really
> kick ass are both free and not particularly old.
Not two, but one -- SBCL is simply a fork of CMUCL. As for their age,
the CMUCL project states that it has been "continually developed since the
early 1980s".
Maybe because it isn't as much a problem as people with C envy assume it
must be? (Disclaimer: I'm not suggesting that John is one of those
people.)
Not that I'd object to anyone else doing the work to speed up Python, but
for the things I use Python for, I've never felt the need to say "Gosh
darn it, my script took twelve milliseconds to run, that's just too
slow!!!". Maybe Google are in the same boat?
Actually, in Google's case, I guess their bottleneck is not Python, but
trying to push around gigabytes of data. That will be slow no matter what
language you write in.
--
Steven.
No, it's not. Shedskin is interesting, but just a small subset of Python
- and without completeness, performance is useless.
The PyPy approach is much more interesting - first create a
full-featured Python itself, then create optimizing backends for it,
also for just a language subset - RPython.
And if possible - which it is only in a very limited set of cases for
non-type-annotated code - identify parts that conform to RPython's
constraints, and compile them just in time.
Diez
Care to provide a less "naive" one ?
(snip usual rant)
> I'm surprised that Google management isn't pushing Guido towards
> doing something about the performance problem.
Could it be possible they don't see Python's perfs as a "problem" ?
Ok, I don't mean there's no room for improvement here. If you feel like
addressing the problem, you're welcome - in case you didn't notice,
Python is free software.
> Shed Skin effort. Its author writes "Am I the only one seeing the potential
> of an implicitly statically typed Python-like-language that runs at
> practically the same speed as C++?"
Don't forget about Pyrex and PyPy's RPython.
By the way, we don't need a hotspot JIT compiler. Lisp has a compiler
invoked by the programmer. We could include optional static typing in
Python, and have an optional static optimizing native compiler for
selected portions of code. That would be easier to implement in the
short run, with JIT-capabilities added later. Pyrex, ShedSkin or
RPython are all good starting points.
Please wait - going to get my gun...
Python 3 will have optional 'type' annotations, where 'type' includes
abstract base classes defined by the interface (methods). So parameters
could be annotated as a Number or Sequence, for instance, which is often more
useful than any particular concrete type. I strongly suspect that
someone will use the annotations for compilation, while others will use
them just for documentation and whatever else.
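(For the curious, the annotations themselves do nothing at runtime; they are
simply stored on the function object, which is what leaves them open to
compilers, checkers or documentation tools. A minimal Python 3 sketch, with
string annotations standing in for whatever ABCs end up being used:)

def scale(seq: "Sequence", factor: "Number") -> list:
    # annotations have no effect on execution
    return [x * factor for x in seq]

print(scale.__annotations__)
# e.g. {'seq': 'Sequence', 'factor': 'Number', 'return': <class 'list'>}
print(scale([1, 2, 3], 10))    # [10, 20, 30]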
tjr
At the risk of hurting some souls: I wrote a type feedback
system a few months ago, which automatically annotates source code
from runtime types. Each run of a program P yields a program TP which
is fully type-annotated on covered branches. Since a possible compiler
might need more hints than just the types of function
parameters, I changed the syntax of Python slightly to display
local type annotations as well. For each Python program P there exist
possibly infinitely many programs TP of an extended language TPython,
called "typed snapshots". Each of those snapshots might be translated
using translation machinery a la ShedSkin. From a software engineering
point of view, early integration with CPython is required, which means
integration of native code on a per-function basis and a possible
fallback to bytecode interpretation in order to preserve duck typing.
In the literature such techniques are called "(Offline) Feedback
Driven Optimization", while "Online" techniques refer to JIT
compilation.
Once TP is derived from P, the program TP might be used for other
purposes as well, because each typed snapshot suggests static
guarantees in a given type system. This might increase the productive
value of refactoring browsers or other tools used for languages with
type systems. I'm not yet sure about type systems for typed snapshots,
but one might interpret them in existing languages with type
systems.
Kay
> Python 3 will have optional 'type' annotations, where 'type' includes
> abstract base classes defined by the interface (methods). So parameters
> could be annotated as a Number or Sequence, for instance, which is more
> useful often than any particular concrete type. I strongly suspect that
> someone will use the annotations for compilation, which others will use
> them just for documentation and whatever else.
I am not sure why a new type annotation syntax was needed in Python 3:
def foobar(a: int, b: list) -> int:
    #whatever
The same thing has been achieved at least three times before, using
the currently valid syntax:
1. ctypes: function attributes define types
def foobar(a,b):
    #whatever
foobar.argtypes = (int,int)
foobar.restype = int
2. .NET interoperability in IronPython: decorators define types
@accepts(int,int)
@returns(int)
def foobar(a,b):
    #whatever
3. PyGPU compiler: default argument values define types (return type
undefined)
def foobar(a = int, b = int):
    #whatever
It is therefore possible to give an optimizing compiler sufficient
information to emit efficient code, even with the current syntax.
Local variables could be inferred by an intelligent type-inference
system like that of Boo. Or one could use attributes or decorators to
attach a type dictionary, {'name': type, ...}, for the type-invariant
local variables. The latter has the advantage of allowing duck-typing
for local variables not present in the static type dict.
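(As a sketch of that "attach a type dictionary" idea - every name here is made
up for illustration - a decorator can simply record the intended types of the
type-invariant locals as metadata. Plain CPython ignores the attribute; a
hypothetical optimizing compiler would be free to read it.)

def localtypes(typemap):
    # hypothetical helper: attach a dict of local-variable types as metadata
    def decorate(func):
        func.localtypes = typemap
        return func
    return decorate

@localtypes({'total': float, 'i': int})
def mean(values):
    total = 0.0
    for i in range(len(values)):
        total += values[i]
    return total / len(values)

print mean([1.0, 2.0, 3.0]), mean.localtypes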
Optional static typing is what makes Lisps like CMUCL efficient for
numerical code. One could achieve the same with Python already today.
Because people care about a feature when there is @syntax. Introducing
syntax in Python is also the way of standardization: not everyone
creates his own informal application level annotation syntax when
there is one and only one recommended way of using annotations.
> > I am not sure why a new type annotation syntax was needed Python 3:
>
> Because people care about a feature when there is @syntax.
Good point; the inverse is not true though: time and time again people
cared about some syntax for properties without any change so far. The
result is a handful of different ways to spell out properties; python
2.6 will add yet another variation
(http://mail.python.org/pipermail/python-dev/2007-October/075057.html).
George
Yes, I'm aware. Curiously, whenever property syntax is discussed the
discussion loses track and is dragged away by needless side
discussions. Just look at Steven Bethard's withdrawn PEP 359 [1] in
which he finally muses about replacing the class statement by the make
statement. So the PEP ended in "abstract nonsense" instead of
clarifying the point.
[1] http://www.python.org/dev/peps/pep-0359/
I vaguely remember a discussion a few years ago, where someone made
the quite reasonable suggestion of introducing some kind of
thunk_statement:
class A(object):
    foo = property:
        def fget(self):
            return self._foo
        def fset(self, value):
            self._foo = value
which was translated as follows:
class A(object):
    def thunk():
        def fget(self):
            return self._foo
        def fset(self, value):
            self._foo = value
        return vars()
    foo = property(**thunk())
    del thunk
Now people started to consider using the compound statement within
expressions as well because the equal sign is used within method
signatures and call syntax. This led to a general discussion about
the expr/statement distinction in Python, about multiline lambdas and
functional style programming. These association graphs are almost
predictable.
I just want to stress that adding type hints _won't_ make programs
faster if you use a good specializing JIT compiler. Psyco in particular
would not benefit from type hints at all (even if you changed Psyco to take
them into account) and would give you exactly the same speed as without
them.
Cheers,
Carl Friedrich Bolz
I really like this formulation. However, its memory consumption is
proportional to the input number. On a system with one gigabyte of
RAM, it computes the Fibonacci number of 100000 in about four seconds.
However, trying to compute 200000, the machine swaps madly, and the
Python interpreter DOSes the Linux kernel solid, making the system
unresponsive. :-|
If all that cache is not reused, building it may be avoided by
appending the following two lines to the above function:
fibo.seq.pop(0)
n -= 1
With this addition, the above system manages to compute the Fibonacci
number of 1000000 (one million) in about 190 seconds. :-)
> This is 6 orders of magnitude faster than Cangiano's benchmark. That
> is a speed up by a factor of a million.
That's really beside the point. Nice OT, anyway. ;-)
--
Nicola Larosa - http://www.tekNico.net/
AtomPub sits in a very strange place, as it has the potential to
disrupt half a dozen or more industry sectors, such as, Enterprise
Content Management, Blogging, Digital/Desktop Publishing and
Archiving, Mobile Web, EAI/WS-* messaging, Social Networks, Online
Productivity tools.
-- Bill de hÓra, July 2007
> Curiously, whenever property syntax is discussed the
> discussion loses track and is dragged away by needless side
> discussions. Just look at Steven Bethard's withdrawn PEP 359 [1] in
> which he finally muses about replacing the class statement by the make
> statement. So the PEP ended in "abstract nonsense" instead of
> clarifying the point.
>
> [1]http://www.python.org/dev/peps/pep-0359/
Ah, the 'make' statement.. I liked (and still do) that PEP, I think it
would have an impact comparable to the decorator syntax sugar, if not
more. Alas, it was too far ahead of its time.. who knows, it might
be revived in some 3.x version.
George
> Ah, the 'make' statement.. I liked (and still do) that PEP, I think it
> would have an impact comparable to the decorator syntax sugar, if not
> more.
I think it is one step closer to Lisp. I believe that it would be
worth considering adding a defmacro statement. Any syntax, including if,
else, for, while, class, lambda, try, except, etc. would be
implemented with defmacros. We would only need a minimalistic syntax,
that would bootstrap a full Python syntax on startup. And as for
speed, we all know how Lisp compares to Python.
You say that as if "one step closer to Lisp" is a worthwhile goal.
Python has not become what it is, and achieved the success it has,
because a bunch of people really wanted to use Lisp but didn't think
other people could handle it.
The goal of these sorts of discussions should be to make Python a
better Python. But what happens far too often (especially with
Lispers, but not just them by any means) is that people want to make
Python into a clone or "better" version of whatever other language
they like.
If you're the sort of person who views lisp as the goal that other
languages should aspire to, and I know many of those people exist and
even frequent this list, then you should probably spend your time and
energy on making Lisp a better Lisp and addressing whatever weaknesses
in Lisp have you using Python instead. Trying to fix Lisp (or
whatever) by transforming Python into it isn't going to make you any
happier, and it's just going to derail any discussion of making Python
a better *Python*.
Programmable syntax is a very powerful concept. However, python is designed not only to be powerful, but simple, and this change would drastically reduce the simplicity of Python. It would cease to be a good beginner's language. If you want a language with different syntax than python, python has wonderful parsing libraries. Use those instead.
My 2¢.
Cheers,
Cliff
> Programmable syntax is a very powerful concept.
You don't have to use the programmable syntax just because it's there.
But I do realize it would be a misfeature if it is abused.
Two points:
* Programmable syntax would make it easier to write an efficient
native compiler. The compiler would only need to know about the small
subset of language used for bootstrapping (i.e. any high-level OOP
constructs could emerge from defmacros).
* Numerical extensions like NumPy create a temporary array when an
expression like '(a+b)*(c+d)' is evaluated. This is because the
overloaded operators do not see the whole expression (see the sketch
after this list). Programmable syntax is a remedy for this.
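(To see why the overloaded operators can't do better, here is a small sketch
with a stand-in array class whose operators log what they do: each operator
only ever sees one pairwise operation, so (a+b)*(c+d) necessarily materializes
two temporaries before the multiply even starts.)

class LoggingArray(object):
    """Stand-in for a NumPy-like array that logs every allocation."""
    def __init__(self, data, name):
        self.data, self.name = data, name
    def _binop(self, other, op, sym):
        result = [op(x, y) for x, y in zip(self.data, other.data)]
        name = "(%s%s%s)" % (self.name, sym, other.name)
        print "allocating temporary for", name
        return LoggingArray(result, name)
    def __add__(self, other):
        return self._binop(other, lambda x, y: x + y, "+")
    def __mul__(self, other):
        return self._binop(other, lambda x, y: x * y, "*")

a, b, c, d = [LoggingArray([1, 2], n) for n in "abcd"]
r = (a + b) * (c + d)    # logs a+b, c+d, and then the product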
> Python has not become what it is, and achieved the success it has,
> because a bunch of people really wanted to use Lisp but didn't think
> other people could handle it.
>
> The goal of these sorts of discussions should be to make Python a
> better Python.
I do not want to use Lisp. The syntax is awkward and strange, and does
not fit in my brain. I cannot read Lisp code and get a mental image of
what it does. Readability is what sets Python apart.
But I do think programmable syntax can make Python a better Python,
just as it made Lisp a better Lisp.
I don't see an indication that anybody but the creator of Psyco
understands the code base. *g*
Guido has stated his opinion about optimizations more than once. My own
opinion as core developer (which is quite similar to Guido's opinion) is:
We are happy and glad for every improvement regarding speed, memory
usage or features if and only if:
* The implementation must be clean, well designed, well documented, well
written and platform independent / supported on all platforms. Python
runs on machines from mobile phones to large mainframes.
* The improvement must NOT hinder or slow down future development at
any cost. If it's so complicated that it might slow down future
development then it's a no-go. It's more important to us to have a clean
and understandable code base than to add hundreds of small improvements
which make debugging a nightmare.
* You are willing to support and fix the improvement for X years where
X is between 4 and INF years.
* The modification must not slow down Python for common uses like a
single threaded, single CPU bound program or small script. This rules
out all existing attempts to remove the GIL from Python since they have
slowed down Python to 50% or less. However Guido said a few months ago
that he would endorse a SMP branch of Python aimed to multi core and
multi threaded apps.
* The code and all its dependencies must be compatible with Python license.
* The code must be written following the C89 standard.
By the way core development is open for everybody. Any patch is
appreciated, starting from fixing a typo in the docs over bug reports,
bug fixes to new features.
Read the PEPs! http://www.python.org/dev/peps/
Subscribe to the mailing lists (I suggest gmane.org)!
Get involved!
Christian
>
> We are happy and glad for every improvement regarding speed, memory
> usage or features if and only if: ...
> ... platform independent / supported on all platforms. Python runs
> on machines from mobile phones to large main frames.
JOOI - there are things in the standard library that are not supported
on all platforms. Why would that be a basis for excluding some
psyco-like package?
Python 2.6 and 3.0 have a more Pythonic way for the problem:
class A(object):
    @property
    def foo(self):
        return self._foo

    @foo.setter
    def foo(self, value):
        self._foo = value

    @foo.deleter
    def foo(self):
        del self._foo

class B(A):
    # one can even overwrite the getter in a subclass
    @foo.getter
    def foo(self):
        return self._foo * 2
Christian
> I vaguely remember a discussion a few years ago, where someone made
> the quite reasonable suggestion of introducing some kind of
> thunk_statement:
>
> class A(object):
>     foo = property:
>         def fget(self):
>             return self._foo
>         def fset(self, value):
>             self._foo = value
That's almost identical to a recipe I had written once upon a time,
without requiring a syntax change: http://aspn.activestate.com/ASPN/Cookbook/Python/Recipe/410698
George
This is by definition Pythonic since it was conceived by the BDFL. It
is also certainly an improvement over the current common practice of
polluting the class namespace with private getters and setters. Still
it's a kludge compared to languages with builtin support for
properties.
George
It would be great if Python could be sped up to SBCL-Lisp speed just by
transforming one parse tree into another. But since Python is
compiled to bytecode, I dare say that surface syntax is not the key
factor ;)
Kay
How exactly is this a kludge? This is almost identical syntax (but
with less indentation) to a C# property declaration. The only thing
that's simpler is auto-generation of trivial accessors via a
declaration, but those are useless in Python, so only the case of
getters and setters that actually do something needs to be addressed.
If the only thing that's not a "kludge" is direct syntax support for a
feature, you've set a pretty high and uselessly arbitrary bar.
In three (at least) ways:
1. The property name ('foo') is repeated *5* times for a single class.
Talk about DRY.
2. Total inconsistency: @property for the getter when it is defined
for the first time, @foo.setter/@foo.deleter for the setter/deleter,
@foo.getter when the getter is redefined. WTF ?!
> This is almost identical syntax (but with less indentation) to a C# property declaration.
3. Less indentation is not an advantage here; typically one wants all
two or three related functions that define the property to stand out
as a separate group, not be mixed with regular public methods.
Sorry, C# wins hands down on this.
George
As a stand-alone package (even shipping with Python), that's not a
problem; my understanding is that other issues have prevented including
Psyco. However, Christian was talking specifically about changes to the
CPython core for performance purposes.
--
Aahz (aa...@pythoncraft.com) <*> http://www.pythoncraft.com/
"Typing is cheap. Thinking is expensive." --Roy Smith
C# properties are thunk statements:
private Object _foo = null;
public Object foo {
    get { return this._foo; }
    set { this._foo = value; }
}
In Python pseudo code this would translate to
foo:
    def get(self): return self._foo
    def set(self, value): self._foo = value
omitting the reference to a property constructor. This is the pure
essence: assign methods not to objects but to object attributes, for
which certain protocols are defined. It could be generalized for GUI
applications using triggers or other dataflow-related bindings. The
"pythonic" solution being mentioned is a rather obscure and convoluted
decorator hack. "Executable pseudocode" reads differently, and for the
latter assertion one doesn't need a BDFL-stamped license. It
demonstrates the cleverness of the programmer more than it clarifies
the issue.
Kay
Then you haven't been reading the right IRC channel recently. ;-)
> Guido has stated his opinion about optimizations more than once. My own
> opinion as core developer (which is quite similar to Guido's opinion) is:
>
> We are happy and glad for every improvement regarding speed, memory
> usage or features if and only if:
>
> * The implementation must be clean, well designed, well documented well
> written and platform independent / supported on all platforms. Python
> runs on machines from mobile phones to large main frames.
Indeed, but there's arguably a certain amount of deadlock around
making unpatched, released versions of Python available in all these
places, unless there's been some activity below the surface in the
python-dev community on things like cross-compilation and not running
distutils using the newly built Python executable (which, as I
remember, was but one of the problems). Your point stands, naturally,
but if there's potential for some real movement on some
uncontroversial issues, and yet we see no movement, one remains
skeptical about getting even slightly controversial improvements into
vanilla CPython.
> * The improvement must NOT hinder or slow down future development at
> all cost. If it's so complicated that it might slow down future
> development than it's a no go. It's more important to us to have a clean
> and understandable code base than to add hundreds of small improvements
> which makes debugging a nightmare.
Perhaps, but what would people prefer: yet more language bolt-ons or
better performance?
> * You are willing to support and fix the improvement for X years where
> X is between 4 and INF years.
Can't argue with this one. ;-)
> * The modification must not slow down Python for common uses like a
> single threaded, single CPU bound program or small script. This rules
> out all existing attempts to remove the GIL from Python since they have
> slowed down Python to 50% or less. However Guido said a few months ago
> that he would endorse a SMP branch of Python aimed to multi core and
> multi threaded apps.
It will be interesting to see what happens with recent work on
improving threading within CPython. As for Psyco (which perhaps offers
concurrency benefits only through instruction-level parallelism, if we
briefly consider that topic), I can understand that just-in-time
compilation can bring certain penalties in terms of memory usage and
initialisation times (as Java virtual machines have demonstrated), but
there's a compelling argument for trying to make such technologies
available to CPython if they can be switched off and won't then incur
such penalties. But we presumably return to the point of people not
wanting to touch code that has anything to do with such features: a
combination of social factors and the priorities of the group.
Paul
What's the right channel? I'm on #python and #python-dev
> Indeed, but there's arguably a certain amount of deadlock around
> making unpatched, released versions of Python available in all these
> places, unless there's been some activity below the surface in the
> python-dev community on things like cross-compilation and not running
> distutils using the newly built Python executable (which, as I
> remember, was but one of the problems). Your point stands, naturally,
> but if there's potential for some real movement on some
> uncontroversial issues, and yet we see no movement, one remains
> skeptical about getting even slightly controversial improvements into
> vanilla CPython.
I don't get your point, especially when you talk about distutils. Please
elaborate.
(C)Python has a well known process to get new features or changes into
the language: Write a PEP, convince enough core developers and/or Guido,
implement the feature. I don't see a PEP about JIT in the list at
http://www.python.org/dev/peps/, do you? :]
Besides, nobody is going to stop you from creating a fork. Christian
Tismer forked off Stackless years ago. It's a successful branch with
useful additions to the language. It never got merged back because
Christian didn't feel right about it.
> Perhaps, but what would people prefer: yet more language bolt-ons or
> better performance?
I prefer a fast, stable and maintainable Python over a faster but unstable one.
> It will be interesting to see what happens with recent work on
> improving threading within CPython. As for Psyco (which perhaps offers
> concurrency benefits only through instruction-level parallelism, if we
> briefly consider that topic), I can understand that just-in-time
> compilation can bring certain penalties in terms of memory usage and
> initialisation times (as Java virtual machines have demonstrated), but
> there's a compelling argument for trying to make such technologies
> available to CPython if they can be switched off and won't then incur
> such penalties. But we presumably return to the point of people not
> wanting to touch code that has anything to do with such features: a
> combination of social factors and the priorities of the group.
Rhamph is working on a GIL-less Python version. It may become a compile
time option someday in the future. Others have worked hard to speed up
other parts of Python. We have multiple pending patches which speed up
small parts of Python. Some are related to peephole (bytecode)
optimizations, other patches speed up attribute access on classes or
globals. The global lookup patch makes globals as fast as locals.
I've done my share for the poor Windows souls when I created the VS 2008
PCbuild9 directory and enabled PGO builds. PGO builds are about 10%
faster than ordinary VS 2008 builds. VS 2008 should be slightly faster
than VS 2003, but I can benchmark it on my machine.
In my opinion an optional JIT as a compile-time or startup option has a good
chance of becoming part of the CPython implementation. You "only" have to
replace ceval.c ... :]
Christian
Part of this readability comes from opinionated choices wrt syntax.
Letting anyone invent their own syntax could well ruin this.
But where are people who might know Psyco likely to hang out? ;-)
Anyway, it remains to be seen what happens, but by reading various
conversations I get the impression that something could be afoot. I
wouldn't want to preempt any announcements, however, so I'll say no
more on the matter.
[Cross-compilation]
> I don't get your point, especially when you talk about distutils. Please
> elaborate.
From memory, once the Python executable is built, there's some kind of
process where modules get built with the newly built Python (perhaps
the rule labelled "Build the shared modules" in the Makefile). This
doesn't go down well when cross-compiling Python.
> (C)Python has a well known process to get new features or changes into
> the language: Write a PEP, convince enough core developers and/or Guido,
> implement the feature. I don't see a PEP about JIT in the list at
> http://www.python.org/dev/peps/, do you? :]
PEPs are very much skewed towards language changes, which then
encourages everyone and their dog to submit language changes, of
course.
> Besides nobody is going to stop you from creating a fork. Christian
> Tismer forked off Stackless years ago. It's a successful branch with
> useful additions to the language. It never got merged back because
> Christian didn't feel right about it.
I think we all appreciate the work done by the core developers to
improve Python's stability and performance; new language features
don't interest me quite as much: it was, after all, possible to write
working systems in Python 1.x, with the addition of Unicode probably
rounding out quite a decent subset of what the language offers today.
The demands for greater performance enhancements than those possible
by modifying the existing virtual machine conservatively may, however,
eventually lead people to consider other paths of development just as
Stackless emerged as a fork in order to offer things that CPython
could not.
I think the pressure to fork Python will only increase over time,
considering the above together with the not inconsiderable impact of
Python 3.0 and the dependencies on Python 2.x out there in lots of
places, typically out of sight (or at least out of the immediate
consideration) of the core developers.
Paul
On Dec 12, 2007 12:57 PM, George Sakkis <george...@gmail.com> wrote:
> 1. The property name ('foo') is repeated *5* times for a single class.
> Talk about DRY.
> 2. Total inconsistency: @property for the getter when it is defined
> for the first time, @foo.setter/@foo.deletter for the setter/deletter,
> @foo.getter when the getter is redefined. WTF ?!
Eww, I agree with George here, with respect to these two points. When
I looked at this my first wtf was the @property and then @foo.getter
business. I really don't mind the current way of doing things: attr =
property(get,set). Other mechanisms can be created with getattr
routines. I don't really like this new syntax at all. Too many @
marks, inconsistencies, and too many foos everywhere. Not to mention
how long it reads. For only getters, it's not bad though, and a
little better than property().
Decorators really don't feel pythonic to me at all, mostly due to the
@ symbol, but it looks really awful in this instance.
What about this, somewhat similar but not ugly syntax:
class A:
    foo = property()
    def foo.get():
        return self._foo
    def foo.delete():
        del self._foo
    def foo.set(val):
        self._foo = val
Defining something with a dot is currently a syntax error. Ok, so
it's still too many foos. At least it's consistent. I'm not really
proposing this btw. I'd rather not introduce more specialized syntax.
How about abusing with:
class A:
    with property("foo"):
        def get
        def set...
There's your thunk, and I really like with, but am saddened that it
has such limited use at the moment. Of course this isn't really what
with is for...
Can anyone tell me what's wrong about the current property() syntax,
besides namespace pollution?
For the record, this is not new syntax. It's implemented this way
specifically to avoid the creation of new syntax for properties.
> Too many @
> marks, inconsistencies, and too many foos everywhere. Not to mention
> how long it reads. For only getters, it's not bad though, and a
> little better than property().
>
I don't feel that it's especially inconsistent, and I like decorators.
Having to write foo everywhere isn't that nice, but it's only mildly
worse than C# to me - I find the extra block levels really atrocious.
> Decorators really don't feel pythonic to me at all, mostly due to the
> @ symbol, but it looks really awful in this instance.
>
> What about this, somewhat similar but not ugly syntax:
>
> class A:
>     foo = property()
>     def foo.get():
>         return self._foo
>     def foo.delete():
>         del self._foo
>     def foo.set(val):
>         self._foo = val
>
> Defining something with a dot is currently a syntax error. Ok, so
> it's still too many foos. At least it's consistent. I'm not really
> proposing this btw. I'd rather not introduce more specialized syntax.
>
> How about abusing with:
>
> class A:
>     with property("foo"):
>         def get
>         def set...
>
> There's your thunk, and I really like with, but am saddened that it
> has such limited use at the moment. Of course this isn't really what
> with is for...
>
> Can anyone tell me what's wrong about the current property() syntax,
> besides namespace pollution?
>
Nothing, except that people prefer decorators and they don't like the
namespace pollution. foo = property() isn't going away and if you
prefer it (I don't) you're free to use it. If you don't like
decorators in general it's fairly obvious that you won't be partial to
a decorator based feature.
It's not that big a deal anyway. Of course, the use case for
properties in Python has a much smaller scope than in C#, and
getter-only properties (which you can create with just @property) are
the majority of those.
> Python 2.6 and 3.0 have a more Pythonic way for the problem:
>
> class A(object):
>     @property
>     def foo(self):
>         return self._foo
>
>     @foo.setter
>     def foo(self, value)
>         self._foo = value
>
>     @foo.deletter
>     def foo(self)
>         del self._foo
>
> class B(A):
>     # one can even overwrite the getter in a subclass
>     @foo.getter
>     def foo(self):
>         return self._foo * 2
>
That would be great if it worked, but it doesn't.
Fixing your typos (missing colons, spelling of deleter, and in B the
decorator needs to refer to A.foo.getter):
Python 3.0a2 (r30a2:59405M, Dec 7 2007, 15:23:28) [MSC v.1500 32 bit
(Intel)] on win32
Type "copyright", "credits" or "license()" for more information.
IDLE 3.0a1
>>> class A(object):
    @property
    def foo(self):
        return self._foo
    @foo.setter
    def foo(self, value):
        self._foo = value
    @foo.deleter
    def foo(self):
        del self._foo
>>> class B(A):
    # one can even overwrite the getter in a subclass
    @A.foo.getter
    def foo(self):
        return self._foo * 2
>>> a = A()
>>> a.foo = 5
>>> a.foo
10
>>> A.__dict__['foo']
<property object at 0x01261F80>
>>> B.__dict__['foo']
<property object at 0x01261F80>
Unfortunately, as currently implemented, getter, setter and deleter just
update the existing property, so the getter defined in B changes how the
property works in A as well. I think the intention may have been that they
should create a new property each time, but this isn't what has been
implemented.
> I don't feel that it's especially inconsistent, and I like decorators.
> Having to write foo everywhere isn't that nice, but it's only mildly
> worse than C# to me - I find the extra block levels really atrocious.
Personally I find properties atrocious and unsafe. One cannot
distinguish between a function call and binding an attribute in a
statement like:
foo.bar = 2 # Does this call a function or bind an attribute?
# Is this foo.setBar(2) or setattr(foo,'bar',2)?
Even worse: if we make a typo, the error will not be detected as the
syntax is still valid. Properties and dynamic binding do not mix.
Thanks for the information! I've talked to Guido and we both agree that
it is a bug. I have a pending fix for it at hand.
Christian
> On 13 Dec, 19:16, "Chris Mellon" <arka...@gmail.com> wrote:
>
>> I don't feel that it's especially inconsistent, and I like decorators.
>> Having to write foo everywhere isn't that nice, but it's only mildly
>> worse than C# to me - I find the extra block levels really atrocious.
>
> Personally I find properties atrocious and unsafe. One cannot
> distinguish between a function call and binding an attribute in a
> statement like:
>
> foo.bar = 2 # Does this call a function or bind an attribute?
> # Is this foo.setBar(2) or setattr(foo,'bar',2)?
Why do you care?
As the class *creator*, you care, but as the class *user*, you shouldn't
need to -- at least assuming it is a well-written class. (You might care
if the class' setter has harmful side-effects, but that's no different
from a class with a __setattr__ method with harmful side-effects.)
> Even worse: if we make a typo, the error will not be detected as the
> syntax is still valid. Properties and dynamic binding do not mix.
I'm not quite sure I understand that criticism. How is that different
from things which are not properties?
foo.baz = 2 # oops, I meant bar
will succeed regardless of whether foo.bar is an attribute or a property.
--
Steven
Unless it's a new style class with __slots__
Christian
>> Unless it's a new style class with __slots__
>
> [....]
>
> Naw, I'll skip the rant this time. ;-)
Wuss! I was looking forward to it :)
Tim Delaney
What a strange observation from someone wanting to introduce defmacros
and customizable syntax in Python....
> One cannot
> distinguish between a function call and binding an attribute in a
> statement like:
FWIW, "binding an attribute" will *always* require some function call...
Properties - or any other computed attributes - are just hooks into the
default __setattr__ implementation so you can customize it.
> foo.bar = 2 # Does this call a function or bind an attribute?
From the client code POV, it binds an attribute - whatever the
implementation is.
From the implementation POV, it will always call a couple of functions.
What's your point, exactly?
> # Is this foo.setBar(2) or setattr(foo,'bar',2)?
Why do you care? Ever heard of the concept of "encapsulation"?
> Even worse: if we make a typo, the error will not be detected as the
> syntax is still valid.
So what? This has nothing to do with properties.
> Properties and dynamic binding do not mix.
Sorry, but IMVHO, this is total bullshit.