FrameworkBenchmarks round 10 results are available


Ludovic Gasc

Apr 21, 2015, 5:53:54 PM
to python...@googlegroups.com
Hi,

For your information, the TechEmpower FrameworkBenchmarks round 10 results are available.

If you want a quick look at the Python results, especially for AsyncIO+aiohttp+API-Hour:

I'm writing a blog post with more details, especially about the future: several results have already been improved in the master branch of FrameworkBenchmarks, since this round is based on a snapshot from a month ago. I'll also list the potential new improvements in the Python ecosystem (a new aiohttp release, a potential AsyncIO booster, articles from lwn.net...).

For the worst result, I made a mistake: I didn't implement the test correctly. Even so, it wouldn't have been as good as the pure JSON result.
For the best result, the next round should be better because the PostgreSQL config has been improved and I've added Redis tests.
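For context, the "pure JSON" test mentioned above just serializes a small dict and returns it over HTTP. A minimal stdlib-only sketch of that style of endpoint (not the actual API-Hour/aiohttp code, just an illustration of how little work the test does) might look like:

```python
import asyncio
import json


def json_payload() -> bytes:
    # The TechEmpower "JSON serialization" test returns exactly this body.
    return json.dumps({"message": "Hello, World!"}).encode("utf-8")


async def handle(reader: asyncio.StreamReader, writer: asyncio.StreamWriter) -> None:
    # Read and discard the request head, then answer with the JSON body.
    await reader.readuntil(b"\r\n\r\n")
    body = json_payload()
    writer.write(
        b"HTTP/1.1 200 OK\r\n"
        b"Content-Type: application/json\r\n"
        b"Content-Length: " + str(len(body)).encode("ascii") + b"\r\n\r\n" + body
    )
    await writer.drain()
    writer.close()


async def main() -> None:
    server = await asyncio.start_server(handle, "127.0.0.1", 8080)
    async with server:
        await server.serve_forever()


# asyncio.run(main())  # uncomment to serve on http://127.0.0.1:8080/
```

Because the per-request work is so small, this kind of test mostly measures the HTTP parsing and event-loop overhead of the stack, which is why it's a useful baseline next to the DB-backed tests.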

The difference between my own DB benchmarks and the DB benchmarks in FrameworkBenchmarks comes from the fact that, in mine, I use a bigger payload and a more complex SQL query.
For round 11, TechEmpower should add a more production-realistic test, with a bigger payload and more DB interactions.

I'll also reorganize the tests to cover more variations: several event loops, more databases, and maybe PyPy3.
My goal with these benchmarks isn't only to find bottlenecks in this Python stack, but also to make sure that this Python stack stays efficient over time.
Most stacks become bloated over time.

Nevertheless, for a first-time participation, I'm really happy with the results. We now have 3-6 months to improve them, if you want to help.

But even if I've taken a bit of the spotlight for the benchmarks, and I've sometimes had ping-pong "fights" on mailing lists or during PyCons (another blog post should be published), I mustn't forget that these results wouldn't be possible without you.

Thanks a lot to everybody for your help, especially:

1. The Ukrainian (+ Russian?) Dream Team: sorry to reduce you to your nationality/native language, but you're over-represented in the AsyncIO community, especially around aiohttp ;-). Thanks for aiohttp/aio-libs, the performance boosts, and the advice from Andrew Svetlov, Nikolay Kim, Alexander Shorin, Alexey Popravka, Yury Selivanov, and all the others.

2. Victor Stinner, for AsyncIO improvements, help, and benchmark tests.

3. Antoine Pitrou, for CPython optimizations like --with-computed-gotos and for having accepted the AsyncIO PEP.

4. Inada Naoki, for all the tips and for challenging me on my benchmarks.

5. All the people I've forgotten from this list, especially the AsyncIO library maintainers.

6. Last but not least: Guido van Rossum, because maintaining a community the size of the Python ecosystem takes political skills as much as technical skills. And also for having implemented and pushed AsyncIO onto the Python scene.

I'm proud to be an atom inside this big community ;-)

And don't forget: the IT world has a bias that Python is slow because you believe it's slow. In absolute terms, on microbenchmarks, it's true.
Nevertheless, with some tips & tricks, it's fast enough for most production use cases, AND with Python you keep the simplicity of coding.
Developer speed matters far more than benchmarks, but unfortunately it's almost impossible to measure.

See you soon at EuroPython!

Ludovic Gasc

Apr 27, 2015, 5:47:46 AM
to python...@googlegroups.com
Hello people,

Thanks to everyone for the suggestions you sent me to improve this e-mail; I've tried to incorporate all the ideas.
I've published more details about the benchmarks and the future improvements in Python:

And I've changed the "thanks" section a little: I've added links to your Twitter/GitHub/blog on your names.
Don't hesitate to contact me privately if you want a different link under your name.

Regards.