Let's talk about what it should be and what should be measured. :)
(I suspect some devs already have a sketch regarding this stuff.
Please share.)
Do we want it to result in one big number like python/Lib/test/pystone.py?
Do we want to provide hooks so apps can supply their own stones for
site-specific benchmarking?
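Just to make that concrete, here's a minimal sketch of both ideas --
a registration hook plus one pystone-style aggregate score. None of
these names exist in Django; they're made up for the example:

    import math
    import time

    _STONES = {}

    def register_stone(name, func):
        """Hook for apps: register a callable to be timed as a stone."""
        _STONES[name] = func

    def run_stones(iterations=10000):
        """Time every registered stone; return {name: operations/sec}."""
        results = {}
        for name, func in _STONES.items():
            start = time.perf_counter()
            for _ in range(iterations):
                func()
            results[name] = iterations / (time.perf_counter() - start)
        return results

    def djangostones(results):
        """Collapse the per-stone scores into one big number, using a
        geometric mean so no single stone dominates."""
        logs = [math.log(score) for score in results.values()]
        return math.exp(sum(logs) / len(logs))

An app would call register_stone() at import time to plug its own
workload into the suite.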
Measuring:
* loading
* hello world request cycle
* template parsing
* template rendering
* generic views(?)
* various query mashing (repeated filters, combination, negation, count)
* signaling
* form validation
* cache
* model instantiation/save
* various laziness
* utils (encoding, datastructures)
What else?
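For concreteness, here's a rough harness for two items on that list
(template rendering and form validation). The settings.configure()
call is just enough to run the template engine and forms outside a
project; the form, template, and iteration counts are illustrative:

    import time

    from django.conf import settings

    settings.configure(
        TEMPLATES=[{"BACKEND": "django.template.backends.django.DjangoTemplates"}]
    )

    import django
    django.setup()

    from django import forms
    from django.template import Context, Template

    class ContactForm(forms.Form):
        email = forms.EmailField()
        message = forms.CharField()

    def bench(func, iterations=10000):
        """Return how many times per second func() runs."""
        start = time.perf_counter()
        for _ in range(iterations):
            func()
        return iterations / (time.perf_counter() - start)

    tmpl = Template("{% for x in items %}{{ x }}{% endfor %}")
    ctx = Context({"items": range(10)})

    print("template rendering: %8.0f/sec" % bench(lambda: tmpl.render(ctx)))
    print("form validation:    %8.0f/sec" % bench(
        lambda: ContactForm({"email": "a@example.com", "message": "hi"}).is_valid()))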
Also, what about things that affect downstream performance, but don't
affect our runtime, like the HTTP Vary header?
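That's probably a correctness check rather than a timing. For
instance, a tiny sketch asserting that a response carries the right
Vary header, via Django's patch_vary_headers() utility:

    from django.conf import settings
    settings.configure()  # minimal settings so HttpResponse works standalone

    from django.http import HttpResponse
    from django.utils.cache import patch_vary_headers

    response = HttpResponse("hello")
    patch_vary_headers(response, ["Cookie"])
    assert response["Vary"] == "Cookie"  # downstream caches now key on Cookie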
Each run should also record the environment:
* CPU type and speed
* Python version
* memory (installed, free, Python usage)
* OS
* loaded Python modules
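Most of that is available from the standard library. A sketch -- note
that ru_maxrss units vary by platform, and "installed memory" has no
portable stdlib answer:

    import platform
    import sys

    def environment():
        """Record the context that should accompany every result."""
        info = {
            "cpu": platform.processor() or platform.machine(),
            "python": sys.version.split()[0],
            "os": platform.platform(),
            "loaded_modules": sorted(sys.modules),
        }
        try:
            import resource  # Unix-only
            # Peak resident set size: KB on Linux, bytes on Mac OS X.
            info["max_rss"] = resource.getrusage(resource.RUSAGE_SELF).ru_maxrss
        except ImportError:
            pass
        return info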
And it might be worthwhile to run something like pybench just to give
a baseline number for comparisons.
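pybench isn't installed with Python -- it lives under Tools/pybench in
(older) CPython source trees -- so running it means pointing at a
checkout, e.g.:

    import subprocess
    import sys

    # Path assumes the current directory is a CPython source checkout.
    subprocess.run([sys.executable, "Tools/pybench/pybench.py"], check=True)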
With that data it should be a lot easier to make sense of the results.
-Justin
(oh, Metronome is a great name. Meter or Tempo are also good on several levels.)
It'll need a benchmark that tests the number of requests per second
we can process, something that could be used on other frameworks too,
so we can compare Django's performance to that of, e.g., TurboGears.
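A crude, framework-agnostic way to get that number is to hammer one
URL and count (single-threaded, so it really measures per-request
latency; the URL is hypothetical, and a tool like ApacheBench does
this better):

    import time
    import urllib.request

    def requests_per_second(url, n=500):
        """Fire n sequential GETs and report throughput."""
        start = time.perf_counter()
        for _ in range(n):
            with urllib.request.urlopen(url) as resp:
                resp.read()
        return n / (time.perf_counter() - start)

    print("%.1f req/sec" % requests_per_second("http://127.0.0.1:8000/"))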
--
Eduardo de Oliveira Padoan
Actually, this is precisely what I *don't* want out of a Django
benchmark. It's nearly impossible to benchmark two totally different
stacks of software against each other, and unless you're *perfect* the
results are meaningless. On top of that, I don't particularly care how
well we compare against other frameworks -- we're both "fast enough"
-- but I care *highly* how Django trends over time.
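The trend-over-time part needs nothing fancy -- just append every run
to a log keyed by date and Django version. A minimal sketch, assuming
results is a name-to-score mapping like the harness above produces:

    import csv
    import time

    import django

    def record_run(results, path="stones.csv"):
        """Append one timestamped row per stone, for plotting later."""
        with open(path, "a", newline="") as f:
            writer = csv.writer(f)
            for name, score in sorted(results.items()):
                writer.writerow([time.strftime("%Y-%m-%d"),
                                 django.get_version(), name, "%.1f" % score])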
Jacob