First, sorry Stan for having been rude and for using harsh language with you.
What is wrong with a hello world application if you are testing a web application stack?
To Moonlight (or maybe Andriy?). Some thoughts:
- Don't be directly involved in a product you are benchmarking against; that's bad
There is no independent consultant who would do this for free on software that enters the market with no contracts in sight, especially if his or her product is not the best. Since the benchmarks are open source, they are reproducible, so the results are real, even if they might not be relevant, whoever runs them.
- Make a bench that is relevant to the conclusion you are claiming. Printing "Hello world" is not a benchmark that proves one framework is faster than another. Look at graphics card benchmarks: they don't just display a rotating cube at very low resolution with 4 colors
I'm not into 3D graphics that much, but:
- It involves a lot of back and forth between user code and the graphics card, with several levels of caching and complex algorithms; this is not relevant to web dev
- Maybe JIT optimization, or any code optimization that only pays off on heavy programs, hence only relevant to heavy computation and not to web dev per se: if you do a Celery call, it is the same call in every Python web framework
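To illustrate the Celery point with a sketch (a plain function stands in for a real Celery task here, since the broker is irrelevant to the argument; all names are made up): whatever the framework, the view boils down to the same enqueue call, so framework overhead disappears behind it.

```python
# Hypothetical sketch: `enqueue_report` stands in for a Celery task's
# .delay() call (a real setup would need a broker and a worker).
def enqueue_report(user_id):
    # In a real app this would be: report_task.delay(user_id)
    return f"queued report for {user_id}"

# Two views from two hypothetical frameworks: the expensive part,
# the enqueue call, is byte-for-byte identical in both.
def view_framework_a(user_id):
    return enqueue_report(user_id)

def view_framework_b(user_id):
    return enqueue_report(user_id)
```

Benchmarking these two views against each other would mostly measure the shared call, not the frameworks.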
- Be scientifically rigorous. Every framework does work under the hood when you initialize a project, and you can't disregard it because the impact is high: stacks and hooks (auth, sessions, localization) that can be tuned (or not) in your project settings, for instance. So the same "Hello World" (which is only the tip of the iceberg) can run 100 times faster depending on your settings.
Raw speed of a vanilla project is also interesting: if tuning gets a program to better performance, then both frameworks will improve with tuning, which might not change the winner. Getting the best performance on a vanilla project is what every project should aim for, as long as it doesn't involve insane settings: fsync=off or caching every page for one hour isn't a sane default; using request-bound transactions might be, etc.
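A minimal sketch of the settings effect, assuming nothing beyond the stdlib: a bare WSGI "Hello World" versus the same app wrapped in dummy middlewares that simulate the auth/session/localization hooks a full-stack framework installs by default. Same response body, very different cost per request.

```python
import time

# Bare WSGI "Hello World" application.
def hello_app(environ, start_response):
    start_response("200 OK", [("Content-Type", "text/plain")])
    return [b"Hello World"]

def make_middleware(work_units):
    """Return a middleware factory that burns `work_units` of fake work
    per request, standing in for auth/session/i18n overhead (assumption)."""
    def wrap(app):
        def wrapped(environ, start_response):
            for _ in range(work_units):  # simulated per-request hook cost
                pass
            return app(environ, start_response)
        return wrapped
    return wrap

def build_stack(app, middlewares):
    # Wrap the app in each middleware, outermost first.
    for mw in reversed(middlewares):
        app = mw(app)
    return app

def bench(app, n=1000):
    """Time n requests against the app; headers are discarded."""
    environ = {}
    start = time.perf_counter()
    for _ in range(n):
        app(environ, lambda status, headers: None)
    return time.perf_counter() - start

bare = hello_app
loaded = build_stack(hello_app, [make_middleware(10_000)] * 3)
# Both serve the exact same "Hello World", yet the loaded stack is
# much slower, purely because of its default hooks.
```

So a "Hello World" number mostly reflects which hooks the vanilla settings enable, not the framework's intrinsic speed.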
The problem with "Hello World" is that it is way too simple: an insignificant parameter among the numerous, heterogeneous, significant other parameters.
Let's start the discussion for real about benchmarking web frameworks, and build a list of the different configurations we need to test to make it relevant:
- Render a plain template (done)
- Render a cached template
- Render a template with a cached body response
- Render a template with constants provided by the view
- Render a cached template with constants provided by the view
- Render a template with constants provided by the view with cached body response
- Render a template with data from a database query (any query will do the job, we are testing the ORM)
- Render a cached template with data from a database query
- Render a template with a cached query against a database
- Render a cached template with a cached query against a database
- Render a template with a database query and a cached body response
I think some of these tests are identical.
Tell me if these tests are relevant, and whether you know any others.
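As a sketch of what the first few entries could look like in pure Python (`string.Template` standing in for a real template engine, a dict standing in for the cache layer; all names are assumptions, not any framework's API):

```python
import string

TEMPLATE_SOURCE = "Hello, $name! You have $count messages."

# (1) Render a plain template: compile + substitute on every request.
def render_plain(context):
    return string.Template(TEMPLATE_SOURCE).substitute(context)

# (2) Render a cached template: compile once, substitute per request.
_compiled = string.Template(TEMPLATE_SOURCE)

def render_cached_template(context):
    return _compiled.substitute(context)

# (3) Cached body response: skip rendering entirely on a cache hit.
_body_cache = {}

def render_cached_body(context):
    key = tuple(sorted(context.items()))
    if key not in _body_cache:
        _body_cache[key] = render_plain(context)
    return _body_cache[key]

ctx = {"name": "Stan", "count": 3}
# All three variants must produce the same body; only the cost differs.
assert render_plain(ctx) == render_cached_template(ctx) == render_cached_body(ctx)
```

Timing each variant under load would separate template-compilation cost, rendering cost, and cache-lookup cost, which is exactly the kind of breakdown the list above is after.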
I am not saying that this bench is absolutely useless: it can be used to show a regression or a bug, or to test the Apache stack.
Just not to make framework comparisons.
Of course it is: getting used to a new API is possible, and it's a one-off investment, compared to the extra cost, over an equivalent feature-wise framework, that you will keep paying forever if you stick with the same framework.
Choosing framework X over Y doesn't guarantee any success for a project. It's a good thing to know your framework has a limit... that also tells me how effectively one or the other is implemented... I guess it tells that.