framework benchmarks - web2py surprisingly slow?


Jose C

unread,
Sep 25, 2012, 8:01:55 AM9/25/12
to web...@googlegroups.com
Just stumbled across this benchmark:

http://mindref.blogspot.pt/2012/09/python-fastest-web-framework.html

on the python group discussion:
https://groups.google.com/forum/?fromgroups=#!topic/comp.lang.python/yu1_BQZsPPc

The author also notes a memory leak problem with web2py but no specifics that I could see.

Thoughts?

Massimo Di Pierro

unread,
Sep 25, 2012, 8:34:23 AM9/25/12
to web...@googlegroups.com
First of all, that is not an apples-to-apples comparison. For example, some of those frameworks do not support sessions out of the box. web2py has many features enabled by default, while the other frameworks are more bare-bones.

Anyway, on a simple hello world request, without database and without template, web2py is slower than Flask and Bottle because they do nothing beyond serving the request. web2py does more: it prepares an environment, creates the session, parses cookies, parses the accept language, looks for the closest internationalization file and pluralization rules, validates the request, copies the input stream to a temp file, and more.

In a real production environment they are all dominated by template rendering and database connections. The times are very close because DB I/O always dominates everything else.

It is like saying that from 0 to 10mph a moped is better than a car. Of course it is: it weighs less. But from 0 to 100mph the car is better because it has a bigger engine; the moped does not even reach 100mph. Mind you, I am not saying web2py is more bloated (it is smaller). I am saying this is not an apples-to-apples comparison.

Web2py 2.0.x has lots of changes that make it faster.

The memory leak issue is an accusation that has been floating around. The creator of another framework has pointed out that in web2py, if you create a class with a self-reference and a __del__ method, it will create a memory leak. True, but: 1) we do not do it; 2) we tell users not to do it; 3) this is a Python problem, not a web2py problem. In every web framework a class with a self-reference and a __del__ method will cause a memory leak.
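For reference, the pattern under discussion takes only a few lines to reproduce (editor's sketch, not web2py code). On the Python 2 interpreters of this era, a cycle containing an object with __del__ was deemed uncollectable and parked in gc.garbage forever; Python 3.4+ collects it thanks to PEP 442:

```python
import gc

class Cyclic(object):
    # A self-reference plus a __del__ method: the pattern under discussion.
    finalized = False
    def __init__(self):
        self.me = self          # the self-reference creates a cycle
    def __del__(self):
        Cyclic.finalized = True

obj = Cyclic()
del obj        # refcount stays above zero because of the cycle
gc.collect()   # Python 2: object lands in gc.garbage forever (the leak)
               # Python 3.4+ (PEP 442): the finalizer runs, cycle is freed
print(len(gc.garbage), Cyclic.finalized)  # -> 0 True on Python 3.4+
```

On a 2012-era Python 2 the same run would instead leave the object in gc.garbage, which is the leak being described.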

Massimo

Michele Comitini

unread,
Sep 25, 2012, 8:46:09 AM9/25/12
to web...@googlegroups.com
Massimo,

the best answer would be another benchmark :-)
Seriously, we should make a "hello world" that goes blazing fast.
I agree that speed in a benchmark is not everything, but it could give
some hints on optimizations to be done.
The issue in this case could simply be a file lock. (session.forget()?)
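To make the file-lock hypothesis concrete, here is a toy mock of what session.forget() buys (editor's illustration only; FakeSession and handle_request are made-up stand-ins, not web2py's real Session class or request teardown):

```python
import os
import tempfile

class FakeSession(dict):
    """Toy stand-in for web2py's session object. forget() marks the
    session so the save-and-lock step is skipped entirely."""
    forgotten = False
    def forget(self):
        self.forgotten = True

def handle_request(session, storage_dir):
    # Model of request teardown: a framework that persists sessions
    # opens (and locks) a per-session file on every single request.
    if not session.forgotten:
        with open(os.path.join(storage_dir, 'session_1234'), 'w') as f:
            f.write(repr(dict(session)))   # the real thing also locks here
    return 'Hello World!'

with tempfile.TemporaryDirectory() as d:
    s = FakeSession()
    s.forget()
    handle_request(s, d)
    print(os.listdir(d))   # -> [] : no session file, no lock to contend on
```

Under high concurrency, skipping that per-request open-write-lock cycle is exactly the kind of saving a "hello world" benchmark magnifies.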


mic


Jose C

unread,
Sep 25, 2012, 9:12:04 AM9/25/12
to
Hi Massimo,

I too agree that benchmarks, like statistics, can be very deceptive. 

The point is, comparing just the 2 frameworks I'm personally interested in (and which I would have imagined had similar startup overheads), i.e. web2py and django, you see web2py getting 686 requests compared to django's 15,346!  That's a massive difference and, like Michele, I wonder if there is something to be learnt from this and some optimization that might help future versions.  The numbers certainly look bad to any newcomer going through the process of choosing a framework.

On the memory leak issue, the author says he hit it running the simple "hello world" script test.  I imagine he's not creating a class with a self-reference, as you mentioned, in his simple test.

Perhaps one of the devs could try to simulate the test (the author seems to have released all the test code and setup scripts) and see whether the memory leak is indeed present.

P.S. I do realize that even django doesn't have sessions enabled by default and wouldn't be surprised if that factor alone accounts for the difference.  A person selecting a framework up front won't know that, though.  Perhaps Massimo should point it out in the author's blog comments, specifically all the setup work web2py does to make the framework real-world usable.

Massimo Di Pierro

unread,
Sep 25, 2012, 10:02:40 AM9/25/12
to web...@googlegroups.com
I agree we should try to reproduce those benchmarks, because something is clearly very wrong.
I cannot find the code used for them, so I added a comment asking for it.

Marin Pranjić

unread,
Sep 25, 2012, 10:52:17 AM9/25/12
to web...@googlegroups.com

Anthony

unread,
Sep 25, 2012, 10:53:37 AM9/25/12
to web...@googlegroups.com

Michele Comitini

unread,
Sep 25, 2012, 11:16:35 AM9/25/12
to web...@googlegroups.com
First try should be adding a session.forget() and seeing what happens...

N.B. The benchmark in question really uses 4 concurrent processes
through uwsgi, so suspect #1 is a file lock.

Correctly, the test was set up on two multicore machines, one running the
server and one the client, so real concurrency comes into play.

mic



Bruno Rocha

unread,
Sep 25, 2012, 12:26:01 PM9/25/12
to web...@googlegroups.com
Well, maybe the situation is really bad. Recently I ran into some memory problems, so I did a lot of tests: I disabled the cache, I also disabled the sessions, but the memory problem continued. So I delegated the problem to uWSGI. I had to set limit-as to 512; web2py/uwsgi memory usage grew very fast, and when it hit 512MB uWSGI recycled the workers. I also needed to set reload-on-as and reload-on-rss to recycle individual workers.  I tested the same app running on Apache and with Apache I had no problems; SCGI also had no problems. The problems only happened with uwsgi + nginx.

I figured out the right config[1], but note that this config is a workaround that makes uWSGI reload its workers every time web2py hits the memory usage limit. My application is not trivial (I use the model-less approach, and maybe I am leaking memory in my own classes). With the correct uWSGI config my app is running very well.

Massimo Di Pierro

unread,
Sep 25, 2012, 2:11:31 PM9/25/12
to web...@googlegroups.com
Yes. They are probably creating a session file at every request.

Niphlod

unread,
Sep 25, 2012, 3:50:08 PM9/25/12
to web...@googlegroups.com
nope. web2py doesn't save a session if no changes to the session have been made, so that's not the case.
PS: give the guy some slack; it is one of the most documented benchmarks around. However, there is obviously something wrong in his setup. My numbers are not even remotely comparable.

I had no time to set up 2 rigs for the tests, but I figured "what the heck".
I have my rig: a full-blown desktop installation of Ubuntu 10.04, Python 2.6.5, no virtualenvs (commented out the relevant lines in the config files (uwsgi.ini)). The machine is an Intel i7 920 2.6 GHz with 12 GB RAM. Apache Bench is executed on the same machine (Version 2.3 <$Revision: 655654 $>).

uwsgi 1.2.6 lying around, web.py 0.37 and of course web2py 2.0.9.
 
Downloaded the repo and set up 4 tests:
1) web2py the way the test has been made (to set a "reference")
2) web.py the way the test has been made (to set a "reference")
3) web2py with "session.forget()" just before "return 'Hello World!'", compiled app, no default "internal redirection" (the way the web2py test is supposed to be run)

Attached the results

For those who don't want to read, here's the summary:
1) Rps:    3190.58  99% ms   369
2) Rps:    5416.62  99% ms   230
3) Rps:    3304.90  99% ms   359

So, still on the tail, but not that far.

@all: benchmarks are useful, and a documented one like this is the best thing you can get, but remember that no app is a "hello world".
I'm going to test django for everyone's mental sanity. Wait a few minutes.
bench.txt

Niphlod

unread,
Sep 25, 2012, 5:26:04 PM9/25/12
to web...@googlegroups.com
updated with:
4) django Rps: 13331.91  99% ms   86
5) web2py "plain cheating mode" (read the comments) Rps: 4719.26  99% ms   250

Just to stress it again: web.py (minimal framework - apples) vs web2py (full framework - oranges) is not a fair comparison.
Hack one single file for 10 minutes - no thinking required - to get a fairer comparison between apples and oranges, and voilà: a +50% bump in "performance".
I know we're still far from django (a "fairer" competitor), but as soon as you start using databases, forms, templates and cookies/sessions, the gap collapses.

bench.txt

Jose C

unread,
Sep 25, 2012, 5:37:22 PM9/25/12
to
Interesting... did you also happen to note the memory usage on runs 1,3 & 5?  Any sign of the "large memory leak" that the author of the benchmark says he encountered, and that Bruno also alluded to in a previous post?

Massimo Di Pierro

unread,
Sep 25, 2012, 5:51:05 PM9/25/12
to web...@googlegroups.com
Very useful information.

Niphlod

unread,
Sep 25, 2012, 6:36:25 PM9/25/12
to web...@googlegroups.com


On Tuesday, September 25, 2012 11:37:01 PM UTC+2, Jose C wrote:
Interesting... did you also happen to note the memory usage on runs 1,3 & 5?  Any sign of the "large memory leak" that the author of the benchmark says he encountered, and that Bruno also alluded to in a previous post?


Nope. A leaking web2py would have worried a lot of users if it showed up in a simple app like this (i.e. if it leaks while processing this app, the "bases" of the web2py code are leaking somewhere and everyone would have noticed).

However, here are the results in "used MB of RAM" on my rig. NB: all the "graceful reloading" options for workers hitting a limit were removed; the frameworks are free to expand to whatever RAM they want.
Here "1 round of ab" is a bench with 1000 concurrent requests out of a total of 2 million:
- initial 1417 (just the desktop, firefox, exaile playing music :P)
- uwsgi started 1440
- while doing 1st round of ab 1510-1513
- finished 1st round (uwsgi still active, but ab is not there anymore) 1498
- while doing 2nd round of ab 1507-1510
- finished 2nd round 1494
- terminated all 1420

So no leaks (apparently). Cases 1-3-5 behave exactly in the same way.

case 2)
- initial 1428
- uwsgi started  1429
- 1st round 1453-1457
- finished 1st round 1447
- 2nd round 1470-1472
- finished 2nd round 1452
- terminated all 1427

case 4)
- initial 1428
- started 1430
- 1st round 1475-1497
- finished 1st round 1460
- 2nd round 1497-1507
- finished 2nd round 1466
- terminated uwsgi 1432
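The numbers above track system-wide RAM; a per-process cross-check is possible from the standard library (editor's sketch on POSIX, not the method used in the thread; note ru_maxrss is kilobytes on Linux but bytes on macOS):

```python
import resource
import sys

def peak_rss_mb():
    """Peak resident set size of this process, in MB."""
    peak = resource.getrusage(resource.RUSAGE_SELF).ru_maxrss
    if sys.platform == 'darwin':
        return peak / (1024.0 * 1024.0)   # macOS reports bytes
    return peak / 1024.0                  # Linux reports kilobytes

before = peak_rss_mb()
blob = b'x' * (50 * 1024 * 1024)   # touch ~50 MB so it shows in the peak
after = peak_rss_mb()
print(before, after)               # the delta should be roughly 50 MB
```

Sampling this inside each worker between rounds of ab would show a leak as a peak that keeps climbing with the request count.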

Jose C

unread,
Sep 26, 2012, 3:25:54 AM9/26/12
to web...@googlegroups.com
> Nope. A leaking web2py would have worried a lot of users if it showed up in a simple app like this (i.e. if it leaks while processing this app, the "bases" of the web2py code are leaking somewhere and everyone would have noticed)

You'd expect so... although I'm not sure how many users have apps handling 1,000 concurrent connections.  (As an aside, it would be nice if any power users out there could give some feedback on their experience.)   However, the author of the article states on comp.lang.python, as well as in his blog in response to Massimo, that "... during test I have noticed huge memory leak."

You mentioned you'd hacked one of the files.  Can you confirm that when you checked the memory usage it was the standard version of web2py 2.0.9, and not your hacked version of the code? I'd imagine the test would need to be run with sessions on and web2py out of the box if we're to have a chance of reproducing the memory leak.

Thanks.

Niphlod

unread,
Sep 26, 2012, 8:45:46 AM9/26/12
to web...@googlegroups.com
The tests are done with a simple "hello, world" app. Cases 1 and 3 differ in having sessions enabled or not. Case 5 is done with a "hacked" gluon/main.py. So case 1 should reproduce the behaviour the author described (standard web2py, sessions enabled).
None of them showed any memory leak. A memory leak generally grows with the number of requests, not with concurrent accesses; however, with 1000 concurrent connections and 2 rounds of 2 million requests (that is 4 million in total), no evident memory consumption showed up.

Massimo Di Pierro

unread,
Sep 26, 2012, 10:12:11 AM9/26/12
to web...@googlegroups.com, andriy.k...@live.com
First of all kudos to Andriy,

He created excellent testing code, was very responsive, and really took the time to understand some of the web2py code. Moreover, he is the author of the excellent wheezy.web framework.

He just emailed me that he has rebuilt his testing environment and has updated the benchmarks:


The memory leak is gone! I am not sure about the cause, but I suspect he had an older web2py version installed via pip that was creating problems.

We still score last, but the numbers are closer to those Niphlod got. Anyway, this does not concern me because, although this is a simple "hello world" test, web2py does more than the others (session, T, URL and IP validation, etc.) and is expected to be slower. The difference, as Niphlod says, washes away in real-life applications. Yet we can probably do better with some simple tweaks, and we should pursue that. Niphlod's numbers still look better by almost a factor of 2, so something else is going on too.

Andriy also posted template benchmarks:

So if we compare web2py with Django, you see that web2py is slower on "hello world" but has faster templates. As you can see, the time to render one template page dominates the time to serve "hello world". Of course wheezy.web smokes everybody else on both tests, and that is something we should try to understand. We should also try to port gluino to wheezy.web.

Massimo

Alec Taylor

unread,
Sep 26, 2012, 12:21:36 PM9/26/12
to web...@googlegroups.com, andriy.k...@live.com
Thanks Massimo. I saw Andriy posting on comp.lang.python and recommended he benchmark web2py.

I would be very interested to see whether web2py can catch up to the others. I think that performance compared with Flask should be our main goal.

A port of gluino to wheezy.web would definitely be amazing :)

Massimo Di Pierro

unread,
Sep 26, 2012, 12:50:03 PM9/26/12
to web...@googlegroups.com, andriy.k...@live.com
We should definitely try to understand and improve these results, but I am not convinced that matching the speed of others on "hello world" benchmarks should be a goal. This just tells me that the others do very little for you beyond the web server. If you turn on sessions and templates in the other frameworks (except wheezy, perhaps) you lose most of the advantage. If one really needs something so barebones, one should just use wsgi.

Here is an example: one month ago a user with uWSGI+web2py had a problem. People accessing the site from iPad devices were having their sessions mixed up. They would go to the site and find themselves logged in as other users. We tried to understand it and found that he was using a buggy version of uWSGI that was caching cookies. This was due to the fact that the iPads had IPv6 addresses that were not properly parsed and appeared as "unknown"; uWSGI was passing web2py cached session cookies with an "unknown" client IP.

In order to prevent this we decided not to trust the web server and added client IP validation in web2py. Other frameworks do not validate the client IP. Should we forfeit these checks because others do not do them and we want to appear faster in the benchmarks? I say no.
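The kind of sanity check described above can be sketched with the modern standard library (editor's illustration; web2py's actual validation lives in gluon and predates the ipaddress module, so this is not its real code):

```python
import ipaddress

def is_plausible_client_ip(remote_addr):
    """Reject client addresses that are not valid IP literals.
    A buggy front end that hands us 'unknown' fails cleanly instead
    of becoming a cache key for someone else's session cookie."""
    try:
        ipaddress.ip_address(remote_addr)   # accepts IPv4 and IPv6
        return True
    except ValueError:
        return False

print(is_plausible_client_ip('192.168.0.1'))   # -> True
print(is_plausible_client_ip('2001:db8::1'))   # -> True (IPv6, the iPad case)
print(is_plausible_client_ip('unknown'))       # -> False
```

The cost of such a check is a few microseconds per request, which is part of the overhead the "hello world" benchmarks are measuring.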

Massimo

Bruno Rocha

unread,
Sep 26, 2012, 1:02:11 PM9/26/12
to web...@googlegroups.com
Should we forfeit these checks because others do not do them and we want to appear faster in the benchmarks? I say no.

+1 

I completely agree with this. After some months working with another framework, I'd say that speed is not a problem compared to the hard way of getting things done.

With fine-tuned uwsgi+nginx I have all the web2py advantages and also the speed, because uWSGI takes care of respawning my problematic workers; it is seamless to my users and the app works very well.

"Premature optimization is the root of all evil!" I recently came to understand this sentence: I tried to build a high-performance app and lost more time on that than on my app's core. In the end I solved the performance issues with server deployment plus web2py best practices: use of cache, session.forget, migrate=False and wise use of models.

Ovidio Marinho

unread,
Sep 26, 2012, 2:31:19 PM9/26/12
to web...@googlegroups.com
Is this web development, or a comparison of which Formula 1 car is fastest, or which rifle has the longest range? A baseless comparison.


       Ovidio Marinho Falcao Neto
                Web Developer
             ovid...@gmail.com 
          ovidio...@itjp.net.br
                 ITJP - itjp.net.br
               83   8826 9088 - Oi
               83   9334 0266 - Claro
                        Brasil
              




Marin Pranjić

unread,
Sep 26, 2012, 2:59:24 PM9/26/12
to web...@googlegroups.com
This benchmark is useless.

Why don't we make a benchmark with a more complex app?

Marin

Massimo Di Pierro

unread,
Sep 26, 2012, 3:13:14 PM9/26/12
to web2py Web Framework, Andriy Kornatskyy
Good point. Forwarding to the mailing list. Feel free to join, even if only to pitch in on this discussion.
On the web2py list we do not mind learning more about new frameworks.

Massimo

On Sep 26, 2012, at 1:36 PM, Andriy Kornatskyy wrote:

>
> Massimo,
>
> Let me please add my two cents as to why a simple benchmark like `hello world` is important.
>
> Before I step in, a link to live demo:
>
> http://wheezy.pythonanywhere.com/
>
> The home page of this application is rendered at the speed of a `hello world` application.
>
> hello world - 23318 rps
> home page - 21144 rps (see http://mindref.blogspot.com/2012/07/python-fastest-template.html at real world example section)
>
> With content caching:
> http://packages.python.org/wheezy.http/userguide.html#content-cache
>
> and cache dependency:
>
> http://packages.python.org/wheezy.caching/userguide.html#cachedependency
>
> any of your dynamic pages gets `hello world` application performance.
>
> Please share this with your team as you find this appropriate.
>
> Thanks.
>
> Andriy Kornatskyy
>
> P.S. Asserts over things like IP address are essential, but must still be optional. Once the bug in the application server has been fixed, those checks make no sense. Paranoia? Why not simply certify certain application servers as TRUSTED... there is not so much to assert to say this is a bad idea.
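The content-caching idea Andriy links to can be sketched as a decorator (editor's toy version in the spirit of wheezy.http's content cache; this is not wheezy's actual API). On a hit the handler body is skipped entirely, which is why a cached dynamic page serves at "hello world" speed:

```python
import functools
import time

def content_cache(ttl):
    """Cache a handler's rendered body for ttl seconds, keyed on its
    arguments. A hit returns the stored body without re-rendering."""
    def decorator(handler):
        store = {}
        @functools.wraps(handler)
        def wrapper(*key):
            entry = store.get(key)
            if entry is not None and entry[0] > time.monotonic():
                return entry[1]                    # cache hit: no rendering
            body = handler(*key)
            store[key] = (time.monotonic() + ttl, body)
            return body
        return wrapper
    return decorator

renders = []

@content_cache(ttl=60)
def home(user):
    renders.append(user)        # stands in for expensive template rendering
    return '<h1>Hello %s</h1>' % user

home('ana'); home('ana'); home('bob')
print(len(renders))             # -> 2 : 'ana' was rendered only once
```

A real implementation would also need cache invalidation, which is what the linked CacheDependency mechanism provides.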

Michele Comitini

unread,
Sep 26, 2012, 6:08:16 PM9/26/12
to web...@googlegroups.com
let me benchmark...

I have been running curl-loader against Rocket on localhost and I am seeing interesting things.




2012-09-26 23:34:25,972 - web2py - ERROR - Traceback (most recent call last):
  File "/var/tmp/web2py/gluon/main.py", line 439, in wsgibase
    request.uuid = request.compute_uuid() # requires client
  File "/var/tmp/web2py/gluon/globals.py", line 111, in compute_uuid
    web2py_uuid())
  File "/var/tmp/web2py/gluon/utils.py", line 139, in web2py_uuid
    ubytes = [ord(c) for c in os.urandom(16)] # use /dev/urandom if possible
OSError: [Errno 24] Too many open files: '/dev/urandom'


The above appears after a minute or so of running curl-loader with a "light" config:

########### GENERAL SECTION ################################

BATCH_NAME= need for speed
CLIENTS_NUM_MAX=200 # Same as CLIENTS_NUM
#CLIENTS_NUM_START=10
CLIENTS_RAMPUP_INC=5
INTERFACE   =eth0
NETMASK=16  
IP_ADDR_MIN= 127.0.0.1
IP_ADDR_MAX= 127.0.0.1
#IP_SHARED_NUM=3
CYCLES_NUM= -1
URLS_NUM= 1

########### URL SECTION ####################################

URL_SHORT_NAME="need for speed"
REQUEST_TYPE=GET
TIMER_URL_COMPLETION = 5000
TIMER_AFTER_URL_SLEEP = 500


You can see the impact in the table below.
The lines below the * * * * * * are the summary of the lines above.
"D" and "D-2xx" are the important columns: respectively, response time including errors and response time on 200 codes.
The D-2xx stays almost constant; what changes is D, i.e. errors increase because of ticket generation and all the error management.
Anyway, the D-2xx is always around 4ms.

RunTime(sec) Appl Clients Req 1xx 2xx 3xx 4xx 5xx Err T-Err D D-2xx Ti To
0 H/F 5 5 0 0 0 0 0 0 0 0 0 0 705
0 H/F/S 5 0 0 0 0 0 0 0 0 0 0 0 0
3 H/F 15 55 0 60 0 0 0 0 0 6 6 6900 2585
3 H/F/S 15 0 0 0 0 0 0 0 0 0 0 0 0
6 H/F 30 103 0 93 0 0 0 0 0 9 9 10695 4841
6 H/F/S 30 0 0 0 0 0 0 0 0 0 0 0 0
9 H/F 45 170 0 173 0 0 0 0 10 4 4 19895 8460
9 H/F/S 45 0 0 0 0 0 0 0 0 0 0 0 0
12 H/F 60 235 0 223 0 0 0 0 0 4 4 25645 11045
12 H/F/S 60 0 0 0 0 0 0 0 0 0 0 0 0
15 H/F 75 284 0 275 0 0 0 0 11 4 4 31625 13865
15 H/F/S 75 0 0 0 0 0 0 0 0 0 0 0 0
18 H/F 90 286 0 274 0 0 0 0 8 4 4 31510 13818
18 H/F/S 90 0 0 0 0 0 0 0 0 0 0 0 0
21 H/F 105 283 0 266 0 0 0 0 24 7 7 30590 14429
21 H/F/S 105 0 0 0 0 0 0 0 0 0 0 0 0
24 H/F 120 280 0 267 0 0 0 0 33 5 5 30705 14711
24 H/F/S 120 0 0 0 0 0 0 0 0 0 0 0 0
27 H/F 135 334 0 349 0 0 0 0 36 4 4 40135 17390
27 H/F/S 135 0 0 0 0 0 0 0 0 0 0 0 0
30 H/F 150 414 0 383 0 0 0 0 46 4 4 44045 21620
30 H/F/S 150 0 0 0 0 0 0 0 0 0 0 0 0
33 H/F 165 315 0 276 0 0 0 0 36 9 9 31740 16497
33 H/F/S 165 0 0 0 0 0 0 0 0 0 0 0 0
36 H/F 180 261 0 260 0 0 0 0 62 5 5 29900 15181
36 H/F/S 180 0 0 0 0 0 0 0 0 0 0 0 0
39 H/F 195 386 0 387 0 0 0 0 100 4 4 44505 22842
39 H/F/S 195 0 0 0 0 0 0 0 0 0 0 0 0
42 H/F 200 402 0 402 0 0 0 0 49 5 5 46230 21150
42 H/F/S 200 0 0 0 0 0 0 0 0 0 0 0 0
45 H/F 200 334 0 318 0 0 0 0 87 7 7 36570 19834
45 H/F/S 200 0 0 0 0 0 0 0 0 0 0 0 0
48 H/F 200 355 0 374 0 0 0 0 83 4 4 43010 20586
48 H/F/S 200 0 0 0 0 0 0 0 0 0 0 0 0
51 H/F 200 328 0 306 0 0 0 0 66 5 5 35190 18518
51 H/F/S 200 0 0 0 0 0 0 0 0 0 0 0 0
54 H/F 200 353 0 362 0 0 0 0 114 5 5 41630 21949
54 H/F/S 200 0 0 0 0 0 0 0 0 0 0 0 0
57 H/F 200 350 0 333 0 0 0 0 39 5 5 38295 18283
57 H/F/S 200 0 0 0 0 0 0 0 0 0 0 0 0
60 H/F 200 247 0 211 0 0 59 0 120 9 6 42948 15651
60 H/F/S 200 0 0 0 0 0 0 0 0 0 0 0 0
63 H/F 200 130 0 38 0 0 111 0 46 95 25 39693 7191
63 H/F/S 200 0 0 0 0 0 0 0 0 0 0 0 0
66 H/F 200 139 0 20 0 0 125 94 54 174 53 42772 6721
66 H/F/S 200 0 0 0 0 0 0 0 0 0 0 0 0
69 H/F 200 157 0 50 0 0 94 131 6 65 14 35693 7379
69 H/F/S 200 0 0 0 0 0 0 0 0 0 0 0 0
72 H/F 200 243 0 36 0 0 74 51 2 139 20 27986 11609
72 H/F/S 200 0 0 0 0 0 0 0 0 0 0 0 0
75 H/F 200 136 0 36 0 0 106 90 100 114 27 38060 7144
75 H/F/S 200 0 0 0 0 0 0 0 0 0 0 0 0
78 H/F 200 169 0 10 0 0 153 33 3 113 75 49596 8084
78 H/F/S 200 0 0 0 0 0 0 0 0 0 0 0 0
81 H/F 200 242 0 5 0 0 144 129 25 133 60 46532 12079
81 H/F/S 200 0 0 0 0 0 0 0 0 0 0 0 0
84 H/F 200 180 0 0 0 0 178 80 94 112 0 56366 8507
84 H/F/S 200 0 0 0 0 0 0 0 0 0 0 0 0
87 H/F 200 179 0 0 0 0 178 1 15 110 0 56127 8507
87 H/F/S 200 0 0 0 0 0 0 0 0 0 0 0 0
90 H/F 200 282 0 0 0 0 179 157 30 109 0 56683 14570
90 H/F/S 200 0 0 0 0 0 0 0 0 0 0 0 0
93 H/F 200 171 0 0 0 0 167 38 6 120 0 53123 8131
93 H/F/S 200 0 0 0 0 0 0 0 0 0 0 0 0
96 H/F 200 167 0 0 0 0 168 20 129 124 0 53200 7990
96 H/F/S 200 0 0 0 0 0 0 0 0 0 0 0 0
99 H/F 200 195 0 0 0 0 166 114 61 121 0 52327 9964
99 H/F/S 200 0 0 0 0 0 0 0 0 0 0 0 0
102 H/F 200 215 0 0 0 0 172 1 14 116 0 54706 10199
102 H/F/S 200 0 0 0 0 0 0 0 0 0 0 0 0
105 H/F 200 170 0 0 0 0 167 165 23 131 0 52883 7990
105 H/F/S 200 0 0 0 0 0 0 0 0 0 0 0 0
108 H/F 200 175 0 0 0 0 169 80 5 119 0 53516 8225
108 H/F/S 200 0 0 0 0 0 0 0 0 0 0 0 0
111 H/F 200 168 0 0 0 0 169 58 49 125 0 53516 7990
111 H/F/S 200 0 0 0 0 0 0 0 0 0 0 0 0
114 H/F 200 172 0 0 0 0 165 170 5 124 0 52250 8131
114 H/F/S 200 0 0 0 0 0 0 0 0 0 0 0 0
117 H/F 200 93 0 0 0 0 83 13 2 317 0 26283 4512
117 H/F/S 200 0 0 0 0 0 0 0 0 0 0 0 0
120 H/F 200 87 0 0 0 0 49 151 37 558 0 15516 4183
120 H/F/S 200 0 0 0 0 0 0 0 0 0 0 0 0
123 H/F 200 51 0 0 0 0 49 56 29 570 0 15516 2444
123 H/F/S 200 0 0 0 0 0 0 0 0 0 0 0 0
126 H/F 200 57 0 0 0 0 54 23 84 565 0 17100 2726
126 H/F/S 200 0 0 0 0 0 0 0 0 0 0 0 0
* * * * * * * * * * * * * *
RunTime(sec) Appl Clients Req 1xx 2xx 3xx 4xx 5xx Err T-Err D D-2xx Ti To
129 H/F 200 9412 0 5787 0 0 3031 1717 1854 48 4 37853 11402
129 H/F/S 200 0 0 0 0 0 0 0 0 0 0 0 0

mic


Massimo Di Pierro

unread,
Sep 26, 2012, 6:20:03 PM9/26/12
to web...@googlegroups.com
Are you telling us that, under heavy load, trying to generate a new session_id at every request using the OS entropy generator (/dev/urandom) results in too many open files, causes tickets, and produces the apparent slowdown?

What do you suggest?

For testing purposes, what if you replace

    try:
        ubytes = [ord(c) for c in os.urandom(16)] # use /dev/urandom if possible
        bytes = [bytes[i] ^ ubytes[i] for i in range(16)]
    except NotImplementedError:
        pass

with

    try:
        ubytes = [ord(c) for c in os.urandom(16)] # use /dev/urandom if possible
        bytes = [bytes[i] ^ ubytes[i] for i in range(16)]
    except NotImplementedError:
        pass
    except OSError:
        pass

What if you completely comment out the try...except? Perhaps access to urandom is the bottleneck.
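The proposed fallback can be sketched as a self-contained function (editor's sketch; web2py_uuid_sketch is a made-up name and this is not the actual gluon.utils.web2py_uuid): mix os.urandom entropy into software-PRNG bytes when available, and degrade gracefully when the OS refuses, e.g. with EMFILE ("too many open files") under heavy load:

```python
import os
import random
import uuid

def web2py_uuid_sketch():
    """Build a 16-byte UUID seed from a software PRNG, XORed with
    OS entropy when os.urandom succeeds. If /dev/urandom cannot be
    opened (OSError) we keep the PRNG-only bytes instead of failing
    the whole request with a ticket."""
    rbytes = [random.randrange(256) for _ in range(16)]
    try:
        ubytes = os.urandom(16)          # may open /dev/urandom on POSIX
        rbytes = [b ^ u for b, u in zip(rbytes, bytearray(ubytes))]
    except (NotImplementedError, OSError):
        pass                             # fall back to the PRNG alone
    return str(uuid.UUID(bytes=bytes(bytearray(rbytes))))

print(web2py_uuid_sketch())              # a random UUID string each run
```

The trade-off is weaker entropy for the rare requests served during descriptor exhaustion, against never turning an OS resource limit into an application error.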

Massimo

Niphlod

unread,
Sep 26, 2012, 6:36:46 PM9/26/12
to web...@googlegroups.com

On Thursday, September 27, 2012 12:20:03 AM UTC+2, Massimo Di Pierro wrote:
Are you telling us that under heavy load, trying to generate a new session_id at every request using the os entropy generator (/dev/urandom), results in too many files open, causes tickets, and this produces the apparent slow down? 

What do you suggest?

 
this is not the case with uwsgi... no errors show up. For production sites handling a lot of concurrent requests Rocket is not recommended.

Commenting out the part generating request.uuid in main.py was part of "the hack" for case 5). Just commenting that out yields about 50 more reqs/sec. The big "speed bump" (~400 reqs/sec more) comes from removing all the session logic.
 

Michele Comitini

unread,
Sep 26, 2012, 7:18:23 PM9/26/12
to web...@googlegroups.com
I confirm that it is a Rocket issue.  After a while it starts leaving CLOSE_WAIT sockets around until it consumes all available file handles.
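This ties the two symptoms together: every leaked CLOSE_WAIT socket holds a file descriptor, and once the process hits its RLIMIT_NOFILE ceiling even opening /dev/urandom fails with the "Too many open files" OSError seen in the traceback earlier. The ceiling is easy to inspect (editor's sketch, POSIX only):

```python
import resource

# Soft limit: the current per-process cap on open file descriptors.
# Hard limit: the maximum the soft limit may be raised to.
soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
print(soft, hard)   # e.g. 1024 4096 on a stock Linux desktop of the era
```

Raising the soft limit only delays the failure; the real fix is closing the leaked sockets in the server.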

mic


Massimo Di Pierro

unread,
Sep 27, 2012, 8:37:23 AM9/27/12
to web...@googlegroups.com
Tim, the creator of Rocket, has not been very responsive recently. Should we revert to the CherryPy wsgiserver?

Michele Comitini

unread,
Sep 28, 2012, 5:57:47 AM9/28/12
to web...@googlegroups.com
If we have to change, could we come up with a list of good candidates, including cherrypy, and have a voting session?

mic




Niphlod

unread,
Sep 28, 2012, 6:37:00 AM9/28/12
to web...@googlegroups.com
Well, I don't know whether leaving sockets open under such high concurrency is reason enough to change, but CherryPy is definitely more active. When Tim rejoins we could switch back as soon as the bugs are fixed.
@mic: is there really an alternative to cherrypy.wsgiserver (multiplatform, pure Python, SSL support, rock-solid thread-enabled WSGI server)?

Massimo Di Pierro

unread,
Sep 28, 2012, 8:37:28 AM9/28/12
to web...@googlegroups.com
+1

Massimo Di Pierro

unread,
Sep 28, 2012, 8:38:10 AM9/28/12
to web...@googlegroups.com
Anyway, to my knowledge CherryPy is the only pure-Python single-file SSL-enabled web server, apart from Rocket. Are there others?

Derek

unread,
Sep 28, 2012, 1:40:15 PM9/28/12
to web...@googlegroups.com
Waitress? But it's not a single file.

Vasile Ermicioi

unread,
Sep 28, 2012, 2:19:07 PM9/28/12
to web...@googlegroups.com
yes, waitress is great - async and threads combined, and all that in pure python :)

Niphlod

unread,
Sep 28, 2012, 2:31:25 PM9/28/12
to web...@googlegroups.com
waitress does not support ssl

Rakesh Singh

unread,
Sep 30, 2012, 1:30:10 PM9/30/12
to web...@googlegroups.com
I've been playing with the Tornado web server recently, and it's quite impressive (size-, performance- and feature-wise).
Though perhaps overkill for web2py?

Massimo Di Pierro

unread,
Sep 30, 2012, 1:43:37 PM9/30/12
to web...@googlegroups.com
We can only package pure python web servers. Tornado does not meet this requirement.

Derek

unread,
Oct 1, 2012, 2:51:08 PM10/1/12
to web...@googlegroups.com
I think we might all be asking for the same thing then: can we make the web server pluggable, like the databases are now?  So if I install Tornado, or Gunicorn, or whatever, I can just specify that in a config file, or perhaps in the little GUI that comes up I can specify which web server I prefer to use.
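The pluggable-server idea boils down to a registry mapping a config name to a start function, which is roughly how web2py's anyserver.py works. A toy sketch (editor's illustration; the server names are real projects but the start functions here are placeholders, not real bindings):

```python
# Map a name from a config file to a callable that starts that server.
def start_rocket(app, port):
    return 'rocket serving on %d' % port     # placeholder, no real server

def start_tornado(app, port):
    return 'tornado serving on %d' % port    # placeholder, no real server

SERVERS = {'rocket': start_rocket, 'tornado': start_tornado}

def run(app, server='rocket', port=8000):
    """Dispatch to the configured server, failing loudly on a typo."""
    try:
        return SERVERS[server](app, port)
    except KeyError:
        raise ValueError('unknown server %r; pick one of %s'
                         % (server, sorted(SERVERS)))

print(run(None, server='tornado'))   # -> tornado serving on 8000
```

The registry keeps the default (Rocket, shipped pure Python) intact while letting a deployment opt into anything installed locally.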

Bruno Rocha

unread,
Oct 1, 2012, 3:00:40 PM10/1/12
to web...@googlegroups.com
Take a look at web2py/anyserver.py; you can run a lot of web servers with it. I especially like bjoern, it is very fast.

Massimo Di Pierro

unread,
Oct 1, 2012, 4:58:14 PM10/1/12
to web...@googlegroups.com
It is not pure Python.

Among the pure-Python ones, Rocket is really good. We just made some improvements to it.

Massimo

Derek

unread,
Oct 1, 2012, 5:13:31 PM10/1/12
to web...@googlegroups.com
Yeah, we get it. Pure Python is not a requirement for most of the people who use this software. It is important to you, and I can understand your design decision to keep it that way, but still, we were just talking about alternatives, not a new default.

Alan

unread,
Oct 1, 2012, 5:22:22 PM10/1/12
to web...@googlegroups.com
Massimo
Given that neither has received upstream updates in a while, isn't it a question of what's more stable for web2py?
What was the reason for moving from cherrypy to rocket in the first place?

Are the fixes for rocket something the web2py community can take on (as a rocket fork, I guess) or do they have to go upstream?

Alan

Ricardo Pedroso

unread,
Oct 1, 2012, 5:27:02 PM10/1/12
to web...@googlegroups.com
On Sun, Sep 30, 2012 at 6:43 PM, Massimo Di Pierro
<massimo....@gmail.com> wrote:
> We can only package pure python web servers. Tornado does not meet this
> requirement.


Tornado is pure Python and very fast - and even much faster with PyPy.

It only has an optional C module for those who want to use epoll on
Linux/Python 2.5; if this module is not available it will use select.
This is the logic in tornado at the end of ioloop.py:

# Choose a poll implementation. Use epoll if it is available, fall back to
# select() for non-Linux platforms
if hasattr(select, "epoll"):
    # Python 2.6+ on Linux
    _poll = select.epoll
elif hasattr(select, "kqueue"):
    # Python 2.6+ on BSD or Mac
    _poll = _KQueue
else:
    try:
        # Linux systems with our C module installed
        import epoll
        _poll = _EPoll
    except Exception:
        # All other systems
        import sys
        if "linux" in sys.platform:
            logging.warning("epoll module not found; using select()")
        _poll = _Select

It can be put in a single .py module. I once did it with tornado
version 1.2.1, and if I recall the size was around 40-50k of code.

The big difference from Rocket is that it is not a threaded server. The
recommended way is to run one instance per CPU - this can be handled
automatically by tornado.

Ricardo

Massimo Di Pierro

unread,
Oct 1, 2012, 6:28:45 PM10/1/12
to web...@googlegroups.com
Does it support ssl?
We need to benchmark it for non-linux environments. Also I am not sure about the PycURL dependence for python2.5.
Can you send us instructions to build a single-file tornado server?

Massimo

tomasz bandura

unread,
Oct 2, 2012, 2:30:06 AM10/2/12
to web...@googlegroups.com
Hello,

Maybe I am not quite on topic, but...

back to the beginning of the subject: I found performance tests for web.py, which in my opinion should achieve good results (because it is simple).
The tests are also primitive (just 'hello world'), but prepared for a different environment: Apache with various WSGI configurations.

And the result: with the best configuration (WSGIScriptAlias) the performance of web.py is only 10% worse than that of static files.

So I do not know what to think about the tests described in the first post...


Best Regards,
Tomasz






Ricardo Pedroso

unread,
Oct 2, 2012, 9:12:33 PM10/2/12
to web...@googlegroups.com
On Mon, Oct 1, 2012 at 11:28 PM, Massimo Di Pierro
<massimo....@gmail.com> wrote:
> Does it support ssl?

Yes.

> We need to benchmark it for non-linux environments. Also I am not sure about
> the PycURL dependence for python2.5.

PycURL is only needed for the HTTP client part; to act as a server
there is no need for it.


> Can you send us instructions to build a single-file tornado server?

As I said, I once did a single-file build of tornado 1.2.1.
It wasn't an automatic build with some script magic; it was made by hand,
and joining the essential parts was surprisingly easy.

Today tornado is at version 2.4, so I spent some time rebuilding
the single-file tornado. It was easier and smaller with version 1.2,
but it is still easy.

I put it on github (https://github.com/rpedroso/motor) with some instructions.


It was tested only on Linux, so it may or may not work on other systems (I hope it does).

I put the results of two ab benchmarks (Rocket vs monolithic tornado)
in the README of web2py trunk.


Ricardo

Massimo Di Pierro

unread,
Oct 3, 2012, 12:56:24 AM10/3/12
to web...@googlegroups.com
Please open a ticket with your proposal and the link. This is a serious possibility!
Can the monolithic build be automated?

Niphlod

unread,
Oct 3, 2012, 4:14:29 AM10/3/12
to web...@googlegroups.com
I was excited too; I never thought tornado could run on Windows, where no epoll/kqueue is available.
I'm OK with having a "normal" webserver on Windows (performance-wise, I expect it to be at least as fast as rocket/cherrypy) with a speedup under Linux. Then I bumped into this:

https://github.com/rpedroso/motor/blob/master/motor.py#L596

No Windows support for fork! Additionally, has the "tornado team" fully adopted the idea of supporting tornado under Windows?

Niphlod

unread,
Oct 3, 2012, 4:40:35 AM10/3/12
to web...@googlegroups.com
Ok, with some modifications it can be run on Windows with anyserver.py. So I remain with only one doubt... is tornado on Windows fully tested?

    def motor(app, address, **options):
        import motor
        app = motor.WSGIContainer(app)
        http_server = motor.HTTPServer(app)
        http_server.listen(address=address[0], port=address[1])
        #http_server.start(2)
        motor.IOLoop.instance().start()

@Ricardo: the part issuing the fork was https://github.com/rpedroso/motor/blob/master/t_wsgi.py#L19. When you replace it with http_server.start() it's fine (i.e. it doesn't try to call os.fork()). However, in "normal" tornado recipes (and in anyserver.py) http_server.start() is normally never used... I'm a tornado newbie; is that line useful for something (apart from having two processes started, of course)?

Ricardo Pedroso

unread,
Oct 3, 2012, 4:58:30 AM10/3/12
to web...@googlegroups.com
I was about to ask you to test with listen() instead of start().
start(1) should be OK too.

start(n) will fork n processes; when n=0 it will automatically fork
one process per available CPU.

From what I see in the code, on Windows only one process is supported.
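
For reference, the pre-forking that start(0) performs can be sketched with the stdlib alone (an illustrative approximation, not tornado's actual code; os.fork is unavailable on Windows, which is why only one process is supported there):

```python
import os
from multiprocessing import cpu_count

def fork_processes(num_processes=0):
    # Tornado-style pre-fork: 0 means "one child per CPU".
    if num_processes == 0:
        num_processes = cpu_count()
    children = []
    for task_id in range(num_processes):
        pid = os.fork()  # not available on Windows
        if pid == 0:
            # Child process: return its id and go run the server loop.
            return task_id
        children.append(pid)
    # Parent process: wait for all children to exit.
    for pid in children:
        os.waitpid(pid, 0)
    return None

print("CPUs available:", cpu_count())
```

With tornado you would call listen() first and then something like this, so each child inherits the listening socket and the kernel load-balances accepted connections among them.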

Ricardo

Ricardo Pedroso

unread,
Oct 3, 2012, 4:58:45 AM10/3/12
to web...@googlegroups.com

Ricardo Pedroso

unread,
Oct 3, 2012, 5:04:51 AM10/3/12
to web...@googlegroups.com
On Wed, Oct 3, 2012 at 5:56 AM, Massimo Di Pierro
<massimo....@gmail.com> wrote:
> Please open a ticket your your proposal and the link. This is a serious
> possibility!
> Can the monolitic build be automated?

I guess so, but I don't know how without entering the lex/yacc world.
Probably not worth it.
Suggestions?

Ricardo

Niphlod

unread,
Oct 3, 2012, 5:35:06 AM10/3/12
to web...@googlegroups.com
Well, it seems to work. I'd need to know if there is some way to test whether the whole web2py framework works on this, but a normal app seems to do pretty fine.
Windows Vista (aaargh!), PortablePython 2.7.3 32-bit, T8100 2.10GHz, 3 GB RAM, adjusted softcron in anyserver.py (to allow removal) in order to keep things smooth, removed the logging of every request in tornado's console (commented out line 2214 of motor.py).

Performance-wise: same app started with anyserver, one run loads cherrypy and the other tornado, 2 concurrent benches (one for static files, the other for the "hello world" app); no memory leaks for either.

ab -c 100 -n 1000 (it is Windows after all; bumping to -c 1000 -n 1000000 would be pointless at this point)

/app/default/index   rps: cherrypy 14.90, tornado 132
/app/static/test.css rps: cherrypy 23.71, tornado 174


David Marko

unread,
Oct 3, 2012, 5:39:21 AM10/3/12
to web...@googlegroups.com
The speed difference is huge!

Dne středa, 3. října 2012 11:35:06 UTC+2 Niphlod napsal(a):

Ricardo Pedroso

unread,
Oct 3, 2012, 6:05:54 AM10/3/12
to web...@googlegroups.com
I guess cherrypy spawns some threads, right? To be fair, can you
adjust it to use only one thread/process to match tornado?
I think you will see some improvement in cherrypy's rps in this
particular bench, since the OS/Python thread context switching is avoided.

I think web2py should not choose a webserver only in terms of speed,
but should focus on the simplest, most complete, pure-Python webserver.
My 2 cents.

Ricardo

Niphlod

unread,
Oct 3, 2012, 6:31:43 AM10/3/12
to web...@googlegroups.com
Me too. The default webserver is not, usually, meant for production anyway. I started the test with the idea "let's see if tornado on Windows (i.e. without the epoll/kqueue goodies, falling back to select) can at least be comparable with a threaded server".

PS: cherrypy (and rocket) use a separate thread for every request. If I got that part right, limiting the threadpool to max 1 means that no concurrency at all is achieved. Unfortunately, it seems that I missed a step. Limiting using

server = wsgiserver.CherryPyWSGIServer(address, app, numthreads=1, max=1)

seems to raise rps to 68.93; 99% of the requests are served within 3078 ms.

server = wsgiserver.CherryPyWSGIServer(address, app, numthreads=10, max=20)

drops to 14.95 rps, while 99% of the requests are served within 8034 ms.

If someone can explain this to me I'll be glad to offer him a beer if he passes near Milan.

Ricardo Pedroso

unread,
Oct 3, 2012, 7:44:21 AM10/3/12
to web...@googlegroups.com
On Wed, Oct 3, 2012 at 11:31 AM, Niphlod <nip...@gmail.com> wrote:
> me too. The default webserver is not - usually - meant for production
> anyway. I started the test with the idea "let's see if tornado on windows
> (i.e. without all the goodies of select/epoll) can be at least comparable
> with a threaded server".
>
> PS: cherrypy (and rocket) uses a separate thread for every request. If I got
> that part right, limiting the threadpool to max 1 means that no concurrency
> at all is achieved. Unfortunately, it seems that I missed a step. Limiting
> using
>
> server = wsgiserver.CherryPyWSGIServer(address, app, numthreads=1, max=1)
>
> seems to raise rps to 68.93. 99% of the requests are served in 3078 ms.
>
> server = wsgiserver.CherryPyWSGIServer(address, app, numthreads=10, max=20)
>
> stops to rps 14.95 while the 99% of the requests are served in 8034 ms.
>
> If someone can explain this to me I'll be glad to offer him a beer if he
> passes nearby Milan.
>

Think about a bar with 10 barmen and two
beer faucets (I had to search to find out what they are called -
http://www.homebrewery.com/images/chrome-beer-faucet.jpg).
At some point the barmen start starving each other trying to handle all
the beer requests.

I guess it's the thread context switch penalty plus the GIL.
Try cherrypy with:
numthreads=2, max=2
You should get about the same performance as a single thread, or better.


A good reading that can give you some enlightenment:

http://www.slideshare.net/cjgiridhar/pycon11-python-threads-dive-into-gil-9315128
www.dabeaz.com/python/GIL.pdf
http://en.wikipedia.org/wiki/Context_switch

Ricardo

Niphlod

unread,
Oct 3, 2012, 8:30:56 AM10/3/12
to web...@googlegroups.com
I got that part (barmen and beers are nice :P).
I envisioned that threaded servers start one thread per request, and that the thread closes when the request ends.
So, if only 1 thread is available at most, basically all requests have to be served serially (the server queues them and passes them to the "execution environment" only one at a time). With that in mind, without exaggerating the numbers (e.g. 100 threads), I thought that the achievable concurrency in threaded servers scaled semi-linearly with the number of available threads, but it seems that this is not the case.
I expected at least a higher response time (measured from the moment ab issues the request to the moment it receives the first byte back from the server) with 1 thread max.

BTW, with 2 threads rps is 26.08, 99% served within 7566 ms.

Michele Comitini

unread,
Oct 3, 2012, 8:36:17 AM10/3/12
to web...@googlegroups.com
2012/10/3 Niphlod <nip...@gmail.com>

> I thought that the achievable concurrency in threaded servers related semi-linearly with the number of available threads, but it seems that this is not the case. 

the cause is the GIL.  You could test that on pypy, which has no GIL, but I never tried.
CPython does better with forked servers.

mic




Niphlod

unread,
Oct 3, 2012, 9:09:12 AM10/3/12
to web...@googlegroups.com
Just for the sake of discussion: are you all saying that in threaded servers on CPython the best thing is to have a single thread? Why on earth should anyone support a thread pool in the first place if that's the case? Cherrypy existed long before pypy, so it's not a matter of "we provided a thread pool just for pypy and jython". Also, everywhere on the net it is recommended to have a rather "high" number of threads when cherrypy/rocket is used in production (as in "use 10 to 64 min threads on CPython").
Is it only for concurrent long-running requests (i.e. 500-700 ms)?

PS: ab -c 1000 -n 10000 on tornado-motor fails after ~7000 requests; cherrypy finishes without issues (both with 1-1 (89 rps) and 10-20 threads (18 rps)).

Massimo Di Pierro

unread,
Oct 3, 2012, 11:49:56 AM10/3/12
to web...@googlegroups.com
Threads in Python are slower than a single thread on multicore machines (i.e. any modern computer). So 2 threads on 2 cores can be almost twice as slow instead of twice as fast because of the GIL. Yet there are advantages. If a thread blocks (because it is streaming data or doing a computation), a multithreaded server is still responsive. A non-threaded server can only do one thing at a time.

In some languages, on multicore, concurrency means speed. In Python this is not true for threads.

One can tweak things to make benchmarks faster for simple apps by serializing all requests, but this is not good in a real-life scenario where you need concurrency.
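
Both halves of this can be seen with a small stdlib experiment (a sketch; exact timings vary by machine): CPU-bound work gains nothing from threads because of the GIL, while blocking waits (sleep standing in for a socket or db wait) release the GIL and overlap almost perfectly:

```python
import threading
import time

def cpu_task():
    # Pure-Python loop: holds the GIL while running.
    n = 200_000
    while n:
        n -= 1

def io_task():
    # time.sleep releases the GIL, like a socket or db wait would.
    time.sleep(0.2)

def run_serial(task, count):
    start = time.perf_counter()
    for _ in range(count):
        task()
    return time.perf_counter() - start

def run_threaded(task, count):
    start = time.perf_counter()
    threads = [threading.Thread(target=task) for _ in range(count)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return time.perf_counter() - start

print("cpu  serial %.3fs threaded %.3fs" % (run_serial(cpu_task, 4), run_threaded(cpu_task, 4)))
print("i/o  serial %.3fs threaded %.3fs" % (run_serial(io_task, 4), run_threaded(io_task, 4)))
```

On a typical CPython the I/O case finishes roughly four times faster threaded, while the CPU case does not improve (and often gets slightly worse from context switching), which is exactly the trade-off a threaded server like Rocket makes.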

Niphlod

unread,
Oct 3, 2012, 12:18:40 PM10/3/12
to web...@googlegroups.com
Tested on Windows both the "hello world" and a complete app's index page; results are quite similar... in some ways, at least on Windows, it seems that having a few threads, even in a relatively high-concurrency environment, leads to faster response times (forget about requests served per second; I'm also talking about new requests served in n seconds).

My point is: if a normal web app ships responses in at most 1 second (slower would be a pain in the *** for the users navigating to those pages), then having the user wait 5 sec because his request has been queued (because only a few threads are actually serving pages) or having the user wait 7 sec because the server is "busy" switching threads equals the user waiting n seconds either way.
Tests seem to point to the fact that on this computer, with 1000 concurrent requests served by a few threads (down to just one), users would wait on average less than having them served by 10 to 20 threads (and this is the bit getting me a little confused). This happens both on "hello world" super-fast responses and on a complete "index" page (complex db query, some math, a little bit of markup, no session, returns the response (40.2kb of html) in something like 800ms). BTW, as always, the more "real" the app is, the more the gap between tornado and rocket/cherrypy narrows.
Motor seems to handle concurrency better, if not "pushed" too high (then it stops responding).

Knowing that the server is holding back the response to user A:
- because it has put the request away in its queue and forgotten about it (it is processing requests coming from B, who requested the page before), or
- because it is currently processing A's request along with 20 others coming from users [B-Z]

is fine, but then again "academically" I expected it to behave better with 10 threads than with 1.

I'm beginning to think that ab on Windows doesn't behave the way it's supposed to, but alas, ab.exe has shipped for years with the Apache win32 build.

PS, back on the thread topic: motor looks good, also on Windows; it's just not as stable as rocket or cherrypy.

Massimo Di Pierro

unread,
Oct 3, 2012, 1:33:59 PM10/3/12
to web...@googlegroups.com
There is an old post by Tim, who did extensive benchmarks with Rocket. He discusses at length the problems with using ab for testing. Unfortunately I cannot find the post.

Ricardo Pedroso

unread,
Oct 3, 2012, 1:37:45 PM10/3/12
to web...@googlegroups.com
On Wed, Oct 3, 2012 at 2:09 PM, Niphlod <nip...@gmail.com> wrote:

> PS: ab -c 1000 -n 10000 on tornado-motor fails after ~7000 requests,
> cherrypy finishes without issues (both with 1-1 (89 rps) and 10-20
> threads(18 rps)).

tornado fails because on Windows it's using select(), and select doesn't scale
well. That is the reason Linux/BSD implemented epoll/kqueue:
to avoid the limitations of select. Windows has IOCP, but it is not
supported by tornado, nor by the Python standard library.

There is, at least, one attempt to bring IOCP to python using ctypes
http://code.google.com/p/python-iocp/

Ricardo

Niphlod

unread,
Oct 3, 2012, 4:17:02 PM10/3/12
to web...@googlegroups.com
I know of the limitations of tornado's architecture on Windows: before today I didn't even know it was possible to run tornado on Win :D

To sum up, the problem is: web2py needs a pure-Python webserver that supports SSL. It needs to be runnable on Windows, Mac and Linux (possibly on Solaris too). Raw performance measured with ab doesn't really matter given how it's meant to be used: if you need thousands of requests in parallel you won't run web2py in "development" mode; you'll probably end up on unix behind a webserver (apache, nginx) or gunicorn, circus, uwsgi, etc., after facing other problems first (db access, to say the least).
Anyway, of course, faster is better.
There are some folks (me, for example) who are constrained and can't use those "tools" in production on linux or behind apache (corporate rules madness, Windows environment, etc).
For intranet web apps, rocket is totally fine: concurrent users are never over 30 people in my case, and I can afford having web2py in "deployment" mode while running on rocket (or motor :P).
The big bottleneck is always the database for 90% of apps.
Rocket and cherrypy are battle-tested on Windows; tornado isn't (I can't find an "official" endorsement of deploying tornado on Windows in production). For users who need to run web2py on different webservers under linux, there are plenty of possibilities with scripts (apache, nginx, uwsgi, etc) or anyserver.py (you can mount web2py in pretty much whatever wsgi runner you'd like).

For me, the bases are covered with either rocket or cherrypy. As long as we can patch and support rocket while Tim regains some time to spend on it, I'm fine (after all, I think Massimo has spent little time on it).
Tornado (in the motor "incarnation") seems to need some more refinement to reach "battle tested" status (I don't know how much time you spent on it, but I'm glad you did anyway). Performance-wise, these tests showed that it is a valid alternative (given the due refinements).
As long as there is someone willing to support motor, periodically check in new code and patches from the tornado source, and test it on all platforms, I'll be fine with replacing rocket with motor: the main point here is that it may run slower (as long as it compares with rocket) but must not crash under any condition (select() or epoll availability doesn't really matter from this "perspective").
If the time needed to include and support motor (given a performance gain > 25% for normal apps) is more than the time virtually spent on rocket patches, I think all the people who need it can turn to what web2py already provides, using anyserver.py (pip install tornado; python anyserver.py -s tornado is easy even for newbies).

LightDot

unread,
Oct 3, 2012, 4:51:30 PM10/3/12
to web...@googlegroups.com
I just noticed that Andriy has updated the article with an "Isolated Benchmark" yesterday. I quote: "In order to provide more reliable benchmark I get rid of application server and network boundary. As a result I simulated a valid WSGI request and isolated calls just to framework alone"

The article is still at the same address: http://mindref.blogspot.com/2012/09/python-fastest-web-framework.html

And the source code for the new test is here: https://bitbucket.org/akorn/helloworld/src/tip/benchmark.py

Regards,
Ales

On Tuesday, September 25, 2012 2:01:55 PM UTC+2, Jose C wrote:
Just stumbled across this benchmark:

http://mindref.blogspot.pt/2012/09/python-fastest-web-framework.html

on the python group discussion:
https://groups.google.com/forum/?fromgroups=#!topic/comp.lang.python/yu1_BQZsPPc

The author also notes a memory leak problem with web2py but no specifics that I could see.

Thoughts?

Niphlod

unread,
Oct 3, 2012, 5:28:18 PM10/3/12
to web...@googlegroups.com
No news there. We know that web2py comes packed with more features by default.