On Sep 10, 12:24 am, "Jeremy Dunck" <jdu...
> OK, enough noise on the naming.
(I really like metronome)
> Let's talk about what it should be and what should be measured. :)
> (I suspect some devs already have a sketch regarding this stuff.
> Please share.)
> Do we want it to result in one big number like python/Lib/test/pystone.py?
I don't know much about benchmarking, but it seems to me it would be
most useful if we got one big number and about a dozen other numbers,
one for each category of performance testing. That would make it
easier to see whether a change we made affected a particular
subsystem.
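To make that concrete, here is a rough sketch of the kind of rollup I
mean (the result format, the per-category averaging and the
geometric-mean "overall" score are just my assumptions, nothing that
exists yet):

    import math
    from collections import defaultdict

    def summarise(results):
        """Roll (category, name, seconds) tuples up into per-category
        average timings plus a single 'overall' geometric mean."""
        by_category = defaultdict(list)
        for category, name, seconds in results:
            by_category[category].append(seconds)
        summary = dict((cat, sum(times) / len(times))
                       for cat, times in by_category.items())
        overall = math.exp(sum(math.log(v) for v in summary.values())
                           / len(summary))
        summary['overall'] = overall
        return summary

That also ties in nicely to your next point: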
> Do we want to provide hooks for apps to supply something to stones for
> site-specific stone testing?
That seems sensible. It's like unit testing - we'll need code that
finds and loads the benchmarks for Django core, so we may as well get
it to look in user applications as well.
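A rough sketch of that discovery step (the benchmarks module name and
the BENCHMARKS attribute are only illustrative; the idea is that an
app opts in by shipping such a module):

    from django.conf import settings

    def load_benchmarks():
        """Collect benchmark callables from every installed app,
        much as the test runner collects tests."""
        benchmarks = []
        for app in settings.INSTALLED_APPS:
            try:
                module = __import__(app + '.benchmarks',
                                    {}, {}, ['BENCHMARKS'])
            except ImportError:
                continue
            benchmarks.extend(getattr(module, 'BENCHMARKS', []))
        return benchmarks

Django core's own benchmarks could be registered the same way, so
there would only be one code path.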
As for what we measure, I think to start off we should stick to the
basics: startup, the request/response cycle, template processing,
signals and the ORM. If we get the wrapper mechanism right, it will be
easy to add further areas once those are covered.
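For the wrapper, something as simple as timing a callable a few times
and keeping the best run would probably do to start (the repeat count
and timing strategy here are placeholders, not a firm proposal):

    import time

    def run_benchmark(func, repeats=5):
        """Call func several times and return the best wall-clock
        time; taking the minimum damps noise from other processes."""
        best = None
        for _ in range(repeats):
            start = time.time()
            func()
            elapsed = time.time() - start
            if best is None or elapsed < best:
                best = elapsed
        return best

Each category above would then just be a set of callables fed through
this wrapper.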
> Also, what about things that affect downstream performance, but don't
> affect our runtime, like the HTTP Vary header?
I say we ignore those entirely. Other tools like YSlow can pick up the