I recently started working at a new company where we are going to develop a
web application with high security requirements and a large number of users.
I've already looked into the security issues, and it seems those won't
really be a problem to solve. However, I can't really find
anything about load/stress testing of the CherryPy server.
In order to compare CherryPy with the other options, I was looking for
some kind of document where connection, receive, and send
times are specified.
Of course it's possible to perform these tests ourselves, but it would
be a waste of time if such information already exists. Does anyone know
whether such information is already available, and where it's located?
Thanks in advance for your help,
Richard Mendes
Check out:
http://www.cherrypy.org/wiki/CherryPySpeed
Since I've started using CherryPy with mod_wsgi, I've
found CherryPy to be pretty much irrelevant to the
performance of my web app: it's the database that's
the bottleneck. I suspect that this will be the case
with virtually any web app.
A
Now I was able to show something of how the server would handle
itself under heavy traffic.
I did some tests with 'ab' from Apache as well, and the load handling
was alright, although the server
errored when more than a hundred users connected simultaneously and
tried to get data.
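For reference, a test like that can also be reproduced in plain Python, similar in spirit to `ab -n 100 -c 100`. This is only a sketch: the throwaway local server and all names here are my own, and in a real run you would point `url` at the CherryPy instance under test instead.

```python
import http.server
import threading
import urllib.request
from concurrent.futures import ThreadPoolExecutor

def hit(url):
    # One simulated client: GET the URL, report whether we got a 200 back.
    try:
        with urllib.request.urlopen(url, timeout=5) as resp:
            return resp.status == 200
    except OSError:
        return False

# Throwaway local server so the sketch is self-contained; replace with
# the address of the CherryPy server you actually want to stress.
server = http.server.ThreadingHTTPServer(
    ("127.0.0.1", 0), http.server.SimpleHTTPRequestHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()
url = "http://127.0.0.1:%d/" % server.server_address[1]

# Fire 100 requests from 100 threads at once.
with ThreadPoolExecutor(max_workers=100) as pool:
    results = list(pool.map(hit, [url] * 100))

print("successful responses: %d/100" % sum(results))
server.shutdown()
```

Counting failed responses at different concurrency levels gives a rough picture of where the server starts dropping connections.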
Richard
My application is used by about 7,000 people per day,
peaking at about 25 hits per second to CherryPy, and
the performance has been absolutely excellent. The
area I'm struggling with right now is scaling the
database. (I'm thinking about memcached for that.)
I wouldn't like to be trying to do this with one of
the more heavyweight frameworks, and I am speaking from
some painful experience!
A
If you haven't already, you should try tuning the server by playing
with the following config settings:
server.socket_queue
--------------------------
This tells the app server how many connections to queue while the
worker threads are busy handling other requests. Once the socket
queue is full, any further connections will be refused. The default is
5, IIRC. See the following post for some eye-opening results from
tweaking server.socket_queue.
http://www.aminus.org/blogs/index.php/fumanchu/2007/10/02/cherrypy_3_request_queue_size
server.thread_pool
------------------------
The number of worker threads available to handle requests. The
default is 10. More threads will allow more simultaneous requests to
be handled, albeit with greater resource consumption. But unused
resources are wasted resources, right? ;-)
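As a sketch, the two settings above might be applied like this. The values are hypothetical, and the exact key spellings vary between CherryPy versions (CherryPy 3 spells the queue setting server.socket_queue_size, I believe), so check your version's docs.

```python
# Hypothetical tuning values; key names follow this post, but verify
# them against your CherryPy version (CherryPy 3 uses
# 'server.socket_queue_size' for the first one).
tuning = {
    "server.socket_queue_size": 30,  # connections queued while all workers are busy (default 5)
    "server.thread_pool": 30,        # worker threads handling requests (default 10)
}

# In a CherryPy 3 app this would be applied with something like:
#   import cherrypy
#   cherrypy.config.update(tuning)
```

Raising the queue size mostly affects how burst traffic is absorbed, while the thread pool governs sustained concurrency, so it's worth benchmarking the two independently.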
Also note that the CherryPySpeed wiki page is quite out of date. It
looks like it was written for CherryPy 2.0. Maybe someday I (or
someone else) will get a chance to update it.
In the meantime, take a look at
http://www.aminus.org/blogs/index.php/fumanchu/2006/12/23/cherrypy_3_has_fastest_wsgi_server_yet
That is nearly a year old, but it at least has some numbers on
CherryPy 3's WSGI server. There is a host of other good info on
Robert's blog. Have a look around there.
HTH,
Christian
http://www.dowski.com
It seems a lot of tuning can be done to create optimal
server conditions. I will read up on all these suggestions and start
tuning the server config.
Judging by this, CherryPy might be the best option to
use in our case; now to see if this will convince the system architect and
project manager.
Thanks.
Richard