Well don't be. There are benchmarks that clearly show IronPython as
faster for selected tests. Other tests show CPython running more quickly.
As always, a benchmark is only really valuable on a typical workload for
the intended application.
regards
Steve
--
Steve Holden +1 571 484 6266 +1 800 494 3119
Holden Web LLC http://www.holdenweb.com/
PyPy is attempting to address this issue via a separate interpreter, but
it's currently just playing catch-up on performance most of the time.
It does have a JIT, and might one day be fast enough to be a usable
replacement for CPython, but it will require a lot of developer-years to
get it there, most likely.
It would be really nice if PyPy could get Python 2.5 running say 5x
faster and then run with that. With that Python would open out into
entire new areas of applicability, becoming reasonable as an embedded
language, or a systems language. Only 2x slower than C would make
Python pretty close to a perfect language...
(far more attractive than a slightly tweaked syntax IMO). That's
probably 5-10 developer years out, though, not counting any distractions
from trying to support Python 3.x.
> If yes, what are IronPython drawbacks vs CPython?
>
Mostly library access from what I understand. Numpy and SciPy, for
instance, are not AFAIK ported to IronPython. Those are the people who
*really* need speed, and without those APIs having "Python" available
faster doesn't really mean much. IronPython has access to the Win32
API, so if you want to use Win32 APIs, rather than the CPython ones,
you're golden, but Numpy/SciPy's interface is *really* elegant for
working with large arrays of data.
If you're trying to write tight numeric loops over gigabyte arrays in raw
Python, a 1.6x speedup isn't really usable... even 5x is only barely
usable. Numpy lets you use the optimized (C) libraries for the
heavy lifting and Python's friendliness where you interact with humans.
If Python were 10x faster you *might* completely rewrite your Numpy in
Python code, but I'd expect that naive Python code would still be beaten
handily by BLAS or what have you under the covers in Numpy.
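The division of labour described above (Python at the edges, optimized C
doing the heavy lifting) can be sketched even without Numpy, using the
C-implemented builtin sum(); Numpy scales the same principle up to
whole-array arithmetic with BLAS underneath. A minimal illustration, not
a Numpy benchmark:

```python
import time

N = 10**6
data = list(range(N))

# Pure-Python loop: every iteration and addition goes through the
# interpreter's bytecode dispatch.
start = time.time()
total_loop = 0
for x in data:
    total_loop += x
t_loop = time.time() - start

# Builtin sum() runs its loop in C -- the same idea Numpy applies
# to whole arrays with optimized libraries under the covers.
start = time.time()
total_sum = sum(data)
t_sum = time.time() - start

assert total_loop == total_sum
print('python loop: %.3fs   builtin sum(): %.3fs' % (t_loop, t_sum))
```

On any recent interpreter the C-side sum() should come out well ahead,
though the exact ratio depends on the implementation being tested.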
If the implementation is hard to explain, it's a bad idea.
If the implementation is easy to explain, it may be a good idea.
are the two lines that tend to preclude CPython ever becoming *really*
fast. Optimizing code is almost always complex and hard to explain.
You need lots and lots of thought to make a compiler smart enough to
wring performance out of naive code, and you need a lot of thought to
reason about what the compiler is going to do under the covers with your
code. IronPython (and Jython, and Parrot) can use the underlying
system's complexity without introducing it into their own project. PyPy
is trying to create the complexity itself (with the advantage of a
smaller problem domain than optimising *every* language).
> And is it possible to use IronPython in Linux?
>
Yes, running on Mono, though again, I don't believe Mono has had the
optimisation effort put in to make it competitive with MS's platforms.
Just my view from out in the boonies,
Mike
Actually, PyPy is just about (within a factor of 2 for most things) as
fast as CPython right now. A bigger hurdle is the availability of
extension modules.
>
>It would be really nice if PyPy could get Python 2.5 running say 5x
>faster and then run with that. With that Python would open out into
>entire new areas of applicability, becoming reasonable as an embedded
>language, or a systems language. Only 2x slower than C would make
>Python pretty close to a perfect language...
That'd be pretty great, certainly. Work on the JIT is continuing and
some PyPy developers have expressed some optimism about having something
which is faster than CPython by the end of this year.
Still, I'd be using PyPy for things today if it had the extension modules
that I rely on now.
Jean-Paul
There is a set of good benchmarks here; the answer is negative:
http://shootout.alioth.debian.org/sandbox/benchmark.php?test=all&lang=iron
Bye,
bearophile
This doesn't look like Mono to me:
IronPython 1.1 (1.1) on .NET 2.0.50727.42
Stefan
This is a second time around that IronPython piqued my interest
sufficiently to create a toy program to benchmark it and I must say
the results are not encouraging:
$ python bench.py
Elapsed time: 1.10 s
$ ipy bench.py
Elapsed time:65.01 s
and here is the benchmark program:
import time

start = time.time()

def foo(a):
    return a * a

data = {}
for i in xrange(10**6):
    data[i] = foo(i)

print 'Elapsed time:%5.2f s' % (time.time() - start)
IronPython implements only a limited subset of Python. [1] Extensions
that depend on C code don't work under IronPython.
Christian
[1]
http://www.codeplex.com/IronPython/Wiki/View.aspx?title=Differences&referringTitle=Home
Could it be because .NET doesn't have arbitrary length integer types
and your little benchmark will create lots of integers > 2**32 ?
What is the result if you replace foo(a) with
def foo(a): return sqrt(a)
--
Arnaud
> Could it be because .NET doesn't have arbitrary length integer types
> and your little benchmark will create lots of integers > 2**32 ?
> What is the result if you replace foo(a) with
> def foo(a): return sqrt(a)
Good observation, in the case above the run times are about the same.
i.
A good reason to not use it.
> and your little benchmark will create lots of integers > 2**32 ?
> What is the result if you replace foo(a) with
>
> def foo(a): return sqrt(a)
>
> --
> Arnaud
I posted a little benchmark some time ago in IronPython's list that
showed a big performance gap between the two implementations (with
CPython much faster than IronPython).
Jim Hugunin replied showing that if the script is made more abstract
(encapsulating code as much as possible into functions), the .NET
framework does a better job of optimizing everything.
So I took Istvan script and made a little modification as follows:
import time

def foo(a):
    return a * a

def do():
    start = time.time()
    data = {}
    for i in xrange(10**6):
        data[i] = foo(i)
    print 'Elapsed time:%5.2f s' % (time.time() - start)

do()  # pre-run to avoid initialization time
do()

import psyco
psyco.full()

print '\nNow with psyco...\n'

do()
do()

input()
The result is that it runs slightly faster in both IP and CP, but
CPython is still faster (around 2x) than IronPython.
However, using psyco simply blows everything out of the water...
These are my results.
IronPython 2.0 Alpha 8:
Elapsed time: 3.14 s
Elapsed time: 3.36 s
CPython 2.5:
Elapsed time: 1.55 s
Elapsed time: 1.58 s
Now with psyco...
Elapsed time: 0.88 s
Elapsed time: 0.80 s
CPython is very fast here because it keeps blocks of allocated integer
objects to reduce the memory overhead. Your test shows that CPython's
highly specialized and optimized memory management for ints is superior
to IronPython's more general memory management. It also shows that
psyco optimized the function call; it probably inlined it.
You could replace foo(i) by i*i and see how much function calls affect
the speed.
Christian
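Christian's suggested experiment, sketched here as a self-contained
script (with the original xrange written as range so it runs on either
major Python line; the absolute timings will of course vary by
implementation):

```python
import time

N = 10**6

def foo(a):
    return a * a

# Version with a Python-level function call per element, as in the
# original benchmark.
start = time.time()
with_call = {}
for i in range(N):
    with_call[i] = foo(i)
t_call = time.time() - start

# Same arithmetic inlined: the difference between the two timings is
# (roughly) pure call overhead.
start = time.time()
inlined = {}
for i in range(N):
    inlined[i] = i * i
t_inline = time.time() - start

assert with_call == inlined
print('with call: %.2fs   inlined: %.2fs' % (t_call, t_inline))
```

Comparing the gap between the two timings on CPython and on IronPython
would show how much of the difference the two runtimes owe to function
call dispatch rather than the arithmetic itself.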
"""
As always, a benchmark is only really valuable on a typical workload for
the intended application.
"""
There is no "better" and "worse" in the general case. Make rational
decisions. Benchmark your applications, don't scheme to make an
arbitrary benchmark run faster.
You are right! I think this shows that IronPython isn't faster than
CPython at all :-) (And it uses more memory).
Bye,
bearophile
Have you actually looked at the version string from IronPython-1.1-Bin.zip
running on Mono?
No.
Running on Debian? Fairly unlikely. :-)
Fuzzyman
http://www.manning.com/foord
>
> Stefan
This has been our experience at Resolver Systems. Code that makes
heavy use of function calls, non-exceptional exception handling or
arithmetic tends to run faster, whereas the built-in types tend to be
slower.
It makes profiling code for optimisation all the more important
because all your usual assumptions about Python performance tend to be
wrong...
Michael Foord
http://www.manning.com/foord
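Michael's point generalizes: measure your own workload rather than
leaning on assumptions. A minimal sketch using the stdlib timeit
module, with two hypothetical micro-workloads that mirror the contrast
he describes (call-heavy versus builtin-type-heavy); the function
names and loop sizes are illustrative, not from any real profile:

```python
import timeit

def call_heavy():
    # Lots of Python-level function calls, little else.
    def inc(x):
        return x + 1
    total = 0
    for _ in range(1000):
        total = inc(total)
    return total

def builtin_heavy():
    # Exercises the built-in dict type instead.
    d = {}
    for i in range(1000):
        d[i] = i
    return len(d)

# timeit accepts a callable directly; number controls repetitions.
for fn in (call_heavy, builtin_heavy):
    t = timeit.timeit(fn, number=200)
    print('%s: %.4f s for 200 runs' % (fn.__name__, t))
```

Running the same script under CPython and IronPython would show which
of your actual hot paths each implementation favours, which is the only
comparison that matters for a given application.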
Well, that *is* what the version string reports for IronPython on Mono
on Linux:
$ uname -sr
Linux 2.6.18-1.2200.fc5smp
$ mono ipy.exe
IronPython 1.1 (1.1) on .NET 2.0.50727.42
Copyright (c) Microsoft Corporation. All rights reserved.
--
Carsten Haese
http://informixdb.sourceforge.net
Why? Would that look like Mono? :)
Stefan
Why? Wasn't .NET supposed to be platform independent code and all that? ;)
Stefan
Why? Because then you'd be doing more than expressing your personal
ignorance.
Another interesting little benchmark of CPython and IronPython. Can't
see the code, but it looks like an implementation of the 11 queens
problem and IronPython comes out a clear winner on this one. (Looks
like no benchmark for psyco.)
http://www.sokoide.com/index.php?itemid=1427
Michael
http://www.manning.com/foord
If you want a more reliable set of benchmarks, take all the shootout
benchmarks, and try them on IronPython on dotnet, on CPython and on
Psyco.
Bye,
bearophile
Go for it!
> Bye,
> bearophile