Ticket 8949: metronome/django-stones

5 views
Skip to first unread message

Jeremy Dunck

unread,
Sep 9, 2008, 7:24:38 PM9/9/08
to django-d...@googlegroups.com
OK, enough noise on the naming.

Let's talk about what it should be and what should be measured. :)
(I suspect some devs already have a sketch regarding this stuff.
Please share.)

Do we want it to result in one big number like python/Lib/test/pystone.py?

Do we want to provide hooks for apps to supply something to stones for
site-specific stone testing?

Measuring:
* loading
* hello world request cycle
* template parsing
* template rendering
* generic views(?)
* various query mashing (repeated filters, combination, negation, count)
* signaling
* form validation
* cache
* model instantiation/save
* various laziness
* utils (encoding, datastructures

What else?

Also, what about things that affect downstream performance, but don't
affect our runtime, like the HTTP Vary header?

Simon Willison

unread,
Sep 9, 2008, 8:02:02 PM9/9/08
to Django developers
On Sep 10, 12:24 am, "Jeremy Dunck" <jdu...@gmail.com> wrote:
> OK, enough noise on the naming.

(I really like metronome)

> Let's talk about what it should be and what should be measured.  :)
> (I suspect some devs already have a sketch regarding this stuff.
> Please share.)
>
> Do we want it to result in one big number like python/Lib/test/pystone.py?

I don't know much about benchmarking, but it seems to me it would be
most useful if we got one big number and about a dozen other numbers,
one for each category of performance testing. That would make it
easier to see if changes we made had an effect on a particular
subsystem, and also ties in nicely to your next point:

> Do we want to provide hooks for apps to supply something to stones for
> site-specific stone testing?

That seems sensible. It's like unit testing - we'll need code that
finds and loads the benchmarks for Django core, so we may as well get
it to look in user applications as well.

As for what we measure, I think to start off with we just go with the
basics: startup, request cycle, template processing, signals and ORM.
If we get the wrapper mechanism right it will be easy to add further
stuff once we have those covered.

> Also, what about things that affect downstream performance, but don't
> affect our runtime, like the HTTP Vary header?

I say we ignore those entirely. Other tools like YSlow can pick up the
slack there.

Cheers,

Simon

Justin Fagnani

unread,
Sep 9, 2008, 8:58:13 PM9/9/08
to django-d...@googlegroups.com
I think one very important feature is submitting results back to
djangoproject.com for comparison. Since Django is so dependent on
underlying components it'll be very hard to compare results, but at
the very least we can track things like:

CPU type and speed
python version
memory (installed, free, python usage)
OS
loaded python modules

And it might be worthwhile to run something like pybench just to give
a baseline number for comparisons.

With that data it should be a lot easier to make sense of the results.

-Justin

(oh, Metronome is a great name. Meter or Tempo are also good on several levels.)

Eduardo O. Padoan

unread,
Sep 10, 2008, 8:33:09 AM9/10/08
to django-d...@googlegroups.com
On Tue, Sep 9, 2008 at 8:24 PM, Jeremy Dunck <jdu...@gmail.com> wrote:
>
> OK, enough noise on the naming.
>
> Let's talk about what it should be and what should be measured. :)
> (I suspect some devs already have a sketch regarding this stuff.
> Please share.)
>
> Do we want it to result in one big number like python/Lib/test/pystone.py?
>
> Do we want to provide hooks for apps to supply something to stones for
> site-specific stone testing?
>
> Measuring:
> * loading
> * hello world request cycle
> * template parsing
> * template rendering
> * generic views(?)
> * various query mashing (repeated filters, combination, negation, count)
> * signaling
> * form validation
> * cache
> * model instantiation/save
> * various laziness
> * utils (encoding, datastructures
>
> What else?

It'll need benchmark to test the number of requests per second that we
can process, something that could be used to test other frameworks
too, so we can compare Django's performance to, e.g., Turbogear's.


> Also, what about things that affect downstream performance, but don't
> affect our runtime, like the HTTP Vary header?


--
Eduardo de Oliveira Padoan

Jacob Kaplan-Moss

unread,
Sep 10, 2008, 10:41:10 AM9/10/08
to django-d...@googlegroups.com
On Wed, Sep 10, 2008 at 5:33 AM, Eduardo O. Padoan
<eduardo...@gmail.com> wrote:
> It'll need benchmark to test the number of requests per second that we
> can process, something that could be used to test other frameworks
> too, so we can compare Django's performance to, e.g., Turbogear's.

Actually, this is precisely what I *don't* want out of a Django
benchmark. It's nearly impossible to benchmark two totally different
stacks of software against each other, and unless you're *perfect* the
results are meaningless. On top of that, I don't particularly care how
well we compare against other frameworks -- we're both "fast enough"
-- but I care *highly* how Django trends over time.

Jacob

Reply all
Reply to author
Forward
0 new messages