
Standardized interpreter speed evaluation tool


alexru

Jan 8, 2010, 5:25:18 AM
Is there a standardized interpreter speed evaluation tool? Say I made a
few changes to the interpreter code and want to know whether those
changes made Python any better; which test should I use?

Diez B. Roggisch

Jan 8, 2010, 5:55:35 AM
alexru wrote:

> Is there a standardized interpreter speed evaluation tool? Say I made
> a few changes to the interpreter code and want to know whether those
> changes made Python any better; which test should I use?

Pybench I guess.
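
(For reference: Pybench ships in the CPython source tree under
Tools/pybench, so with a source checkout you would run it roughly like
this; the path is from memory:)

python Tools/pybench/pybench.py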

Diez

Dave Angel

Jan 8, 2010, 6:11:49 AM
to alexru, pytho...@python.org
Not trying to be a smart-aleck, but the test you use should reflect your
definition of the phrase "any better." For example, suppose you decided
that you could speed things up by pre-calculating a few dozen megabytes
of data, and including that in the python.dll. This would change the
memory footprint of Python, and possibly the interpreter's startup/load
time, not just the runtime of whatever loop you are testing.

But I'll assume you know all that and just want to do timings. There
are at least two stdlib functions that can help:

Both live in the time module: time.time() and time.clock() each return
a floating point number of seconds, and you can subtract two readings
within the same program to see how long something takes. (time.time()
is wall-clock time everywhere; time.clock() is wall-clock on Windows
but CPU time on most Unixes.) This approach ignores interpreter
startup, and does nothing to compensate for other processes running on
the system. But it can be very useful, and easy to run. The resolution
of each function varies by OS, so you may have to experiment to see
which one gives the most precision.

import time

start = time.time()
dotest()  # whatever code you want to time
print "Elapsed time", time.time() - start

The timeit module can be used within code for timing, or it can load
and run a test from the command line. It executes the desired code
repeatedly, so you can get some form of averaging, or amortizing. (Note
that a command-line run also pays the interpreter's startup cost, which
might be important, although the figure timeit reports covers only the
statement being measured.)

python -m timeit ...
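
For instance, a minimal sketch of both uses, assuming the dotest()
function from the snippet above:

import timeit

# Run dotest() 1000 times; the setup string is needed because timeit
# executes the statement in a fresh namespace of its own.
total = timeit.timeit("dotest()", setup="from __main__ import dotest",
                      number=1000)
print "Per-call time:", total / 1000

# Or from a shell (mymodule is a hypothetical module holding dotest):
#   python -m timeit -s "import mymodule" "mymodule.dotest()"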

Note that due to system caching, doing multiple runs consecutively may
give different results than ones that are separated by other programs
running. And of course when you recompile and link, the system buffers
will contain an unusual set of data. So there are ways (which I don't
recall) of flushing the system buffers to let programs start on an
equal footing.
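
(One such way on Linux, assuming root privileges; this drops the kernel
page cache so timed programs start with cold buffers:)

import os

# Linux-only and needs root: flush dirty pages to disk, then ask the
# kernel to drop its page, dentry and inode caches.
os.system("sync")
with open("/proc/sys/vm/drop_caches", "w") as f:
    f.write("3\n")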

DaveA

Terry Reedy

Jan 8, 2010, 6:52:21 PM
to pytho...@python.org

The Unladen Swallow project at code.google.com/????, which aims to
change and speed up CPython, has a suite of benchmarks that are perhaps
better than PyBench.
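
(For what it's worth, the suite's runner, perf.py, compares two
interpreter binaries head to head; the invocation below is from memory,
so treat it as an assumption:)

python perf.py /path/to/baseline/python /path/to/patched/python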

Chris Rebert

Jan 8, 2010, 7:02:46 PM
to alexru, pytho...@python.org

Although apparently undocumented, test.pystone is some sort of
interpreter benchmark.
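
For example (the pystones() entry point is from memory, so double-check
it):

from test import pystone

# Run the standard 50,000-loop Pystone benchmark; returns the elapsed
# time and the pystones-per-second figure.
benchtime, stones = pystone.pystones(50000)
print "Time: %.2fs  pystones/second: %.0f" % (benchtime, stones)

# Or from a shell:
#   python -m test.pystone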

Cheers,
Chris
--
http://blog.rebertia.com

Steve Holden

Jan 9, 2010, 7:50:48 AM
to Chris Rebert, pytho...@python.org, alexru
Chris Rebert wrote:
> On Fri, Jan 8, 2010 at 2:25 AM, alexru <tar...@gmail.com> wrote:
> Although apparently undocumented, test.pystone is some sort of
> interpreter benchmark.
>
It's undocumented because it's not considered a representative
benchmark. Sure, you can use it to get *some* idea of relative
performance, but a single program is a very poor tool for such a
complex topic as comparing implementations of a dynamic language.

regards
Steve
--
Steve Holden +1 571 484 6266 +1 800 494 3119
PyCon is coming! Atlanta, Feb 2010 http://us.pycon.org/
Holden Web LLC http://www.holdenweb.com/
UPCOMING EVENTS: http://holdenweb.eventbrite.com/
