http://shootout.alioth.debian.org/u32q/benchmark.php?test=all&lang=all
Python 3.0 is slower than Python 2.5 and 2.6. Lots of code was added or
modified -- code that hasn't been optimized yet. Python 3's new io
library is much slower than the old file type but there will be an
optimized version in Python 3.1. The switch over to unicode is a minor
speed drain, too.
You can look forward to lots of interesting optimizations in Python 3.1,
like threaded code (not to be confused with multi-threading) for the VM
on GCC platforms, 30-bit digits for ints on 64-bit platforms, C
optimization of the IO stack, and more.
Christian
Python 3.0 has a couple of performance issues, mostly in the io subsystem.
The 3.1 release will address the most significant issues.
regards
Steve
--
Steve Holden +1 571 484 6266 +1 800 494 3119
Holden Web LLC http://www.holdenweb.com/
In the real world, people care about the relative speed of programs.
Fine, but the Shootout on Alioth isn't a particularly pythonic one. It
deals almost exclusively with computationally intensive tasks, i.e.
programs where any decent Python developer would either import Psyco or
speed up the code in Cython. As long as that gives you improvements of
100-1000 times almost for free, I wouldn't bother too much with changing
the platform just because someone shows me benchmark results of some code
that I absolutely don't need in my daily work.
Stefan
It deals exclusively with small programs in isolation as if they were
the bottleneck.
> As long as that gives you improvements of
> 100-1000 times almost for free, I wouldn't bother too much with changing
> the platform just because someone shows me benchmark results of some code
> that I absolutely don't need in my daily work.
What examples do you have of 1000x improvement?
Right. But they aren't. So people who draw any conclusions from that
like "C++ is faster than C" or "Ruby is slower than Python" or "Python
is 30 times slower than C" draw the wrong conclusions.
Thorsten
We hear that from time to time on the Cython mailing list. Here's a recent
example of a user who reported an 80-times speed-up before we helped him
straighten out the code, which brought another factor of 20.
http://permalink.gmane.org/gmane.comp.python.cython.devel/4619
Speed-ups on the order of several hundred times are not uncommon for
computation-intensive tasks and large data sets when you move from Python
to Cython, since it generates highly optimised C code.
Stefan
They aren't except when they are!
"Now only the def line and the return line are using Python..." ;-)
So? I did see your smiley, but I hope you are not trying to make a point
here. If you look at the code (especially in this case, it might be more
C-ish in others), you will see that it's Python. It just gets translated to
very fast C code. Cython even has a "pure Python" mode where you can let
your code run in ordinary Python and have Cython compile it if you feel
like it. That gives you the best of both worlds: readable, easy-to-maintain
code that you can compile to C speed when you need high performance.
Coming back to the original topic of this thread, when I look at the code
examples that are compared here, I wouldn't be surprised if Cython could
compile them to a faster executable straight away, without modification.
Just install Cython 0.11 (which is close to release) and add
import pyximport; pyximport.install(pyimport=True)
before importing the benchmarked module. If you want more performance than
what plain compilation gives you by itself, just add a few type
declarations. You can use Python decorators for that, if you prefer keeping
a standard Python program.
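Stefan's decorator style can be sketched as follows. This is only an illustration, not code from the thread: the `harmonic` function is made up, and the `ImportError` shim is my own addition so the snippet runs as ordinary Python even when Cython isn't installed. With Cython present, `@cython.locals` supplies the C type declarations for compilation.

```python
try:
    import cython  # Cython's pure-Python shadow module
except ImportError:
    class _CythonShim:
        # Fallback so the example still runs without Cython installed.
        def __getattr__(self, name):  # cython.int, cython.double, ...
            return object
        @staticmethod
        def locals(**_types):
            def decorator(func):
                return func
            return decorator
    cython = _CythonShim()

@cython.locals(k=cython.int, total=cython.double)
def harmonic(n):
    """Plain Python; Cython can compile it using the declared C types."""
    total = 0.0
    for k in range(1, n + 1):
        total += 1.0 / k
    return total

print(harmonic(3))  # about 1.8333
```

Uncompiled, the decorator is a no-op, which is exactly the point: the same file stays a standard Python program.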
Stefan
Yes, and they care about the cost of programs, and about the
functionality of programs. If I wanted fast code I wouldn't use Python
*or* Ruby (I'd probably use FORTH; C would be a more mainstream
choice) -- but don't expect the program soon. Even where speed does
matter, plain Python against plain Ruby isn't a meaningful comparison,
because a common style in Python (and probably in Ruby too) is to get
the code working and then, if there are any *measured* bottlenecks,
optimise them in C++. That means that in practice there won't be a
perceptible speed difference for the user.
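The workflow Tim describes -- make it work first, then measure before optimising -- can be done with the standard-library profiler. A minimal sketch; `slow_join` is a made-up stand-in for a bottleneck, not anything from the thread:

```python
import cProfile
import io
import pstats

def slow_join(n):
    # Deliberately naive string building: a typical measured bottleneck.
    s = ""
    for i in range(n):
        s += str(i)
    return s

profiler = cProfile.Profile()
profiler.enable()
slow_join(10000)
profiler.disable()

# Show the five most expensive calls; these tell you what, if anything,
# is worth rewriting in C/C++ (or Cython).
stream = io.StringIO()
pstats.Stats(profiler, stream=stream).sort_stats("cumulative").print_stats(5)
print(stream.getvalue())
```

Only functions that actually dominate the profile are candidates for a C++ rewrite; everything else stays in Python.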
--
Tim Rowe
I imagine the author was contrasting the priorities of the developers
of each language implementation. Ruby 1.8 and earlier were regarded as
being very slow; Ruby 1.9 is faster because the developers have used
techniques similar to those employed in the CPython implementation.
Although one shouldn't extrapolate performance improvements from the
difference between two consecutive releases, one can observe that one
set of developers has prioritised performance (admittedly to remedy
deficiencies) while another set has prioritised features.
One is left to wonder how the performance of the next major Ruby
release will compare with the next major Python releases. Such things
were barely worth considering before - maybe that was the author's
point.
Paul
It seems to me that comparing the Py2.5/6 -- Py3.0 change to the
Ruby1.8 -- Ruby1.9 change is a bit like comparing apples and oranges.
The latter apparently was mostly a 'speed up existing features' change
while the former was a 'make major features changes' release. Changing
Python's basic text model from extended ascii to unicode *is* a major
change and one that is still being worked out. Python has had speedup
releases before and will again (3.1). I presume Ruby has had and will
have feature oriented releases.
All of this is about 'current phase of development' rather than the
'priorities of the developers'.
It ignores 'performance of code writing and reading', which certainly is
at least a Python priority and strength. For some people, the 3.0
switch to unicode rather than just ascii identifiers already improves this.
tjr
I think it would be silly to dispute whether or not programs that have
import psyco; psyco.bind are Python programs.
I'm not sure it would be equally silly to dispute whether or not
programs with type declarations have moved away from being Python
programs.
Of course, Cython is still kind-of neat.
i don't have any horse in this race (although i would like the standard
implementation to be as fast as possible), but it may be worth mentioning
that annotations on parameters are available in python 3.0. those, along
with abcs (python 2.6 and 3.0), suggest someone, somewhere, might be
wondering about how to introduce optional type declarations in python one
day...
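For reference, the parameter annotations mentioned above look like this in Python 3 (the `mean` function is my own illustration). The interpreter records annotations in `__annotations__` but does not enforce them; external tools are free to use them, e.g. for optional typing:

```python
def mean(values: "iterable of numbers") -> float:
    # Annotations are stored, not checked: passing a string here would
    # fail at runtime for ordinary reasons, not because of the annotation.
    values = list(values)
    return sum(values) / len(values)

print(mean([1.0, 2.0, 3.0]))  # 2.0
print(mean.__annotations__)
```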
it would also be nice if pypy could actually produce a final, compliant,
fully featured implementation. that (via llvm?) may be the best way to
introduce the kind of adaptive optimisations that java has made so much
mileage from. but i'm just stabbing in the dark - their exact status is a
bit opaque to me.
andrew