Benchmark it with a WSGI hello world script and see whether that
behaves normally.
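A minimal sketch of such a hello world WSGI script (pointed at by WSGIScriptAlias; the exact deployment path is up to you):

```python
# Minimal WSGI app for benchmarking raw Apache/mod_wsgi throughput
# with Django taken out of the loop entirely.
def application(environ, start_response):
    body = b'Hello World!'
    status = '200 OK'
    headers = [('Content-Type', 'text/plain'),
               ('Content-Length', str(len(body)))]
    start_response(status, headers)
    # WSGI apps return an iterable of byte strings.
    return [body]
```

If ab against this still pegs the CPU, the problem is in Apache/MPM configuration or the instance itself, not in Django.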
Your MPM settings are a bit broken as well. You have:
StartServers 2
MinSpareServers 2
MaxSpareServers 5
MaxClients 50
MaxRequestsPerChild 3000
ServerLimit 8
Keepalive off
HostnameLookups Off
You have said MaxClients should be 50, yet a ServerLimit of 8 means that
MaxClients will actually be capped at 8. So you do not even have enough
capacity to proxy 10 requests at once and use all the daemon processes.
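A consistent version of those prefork settings might look like this (the values are illustrative, not a recommendation for your hardware):

```apache
# prefork MPM: ServerLimit must be >= MaxClients, otherwise
# MaxClients is silently capped at ServerLimit.
StartServers          2
MinSpareServers       2
MaxSpareServers       5
ServerLimit          50
MaxClients           50
MaxRequestsPerChild 3000
```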
If your Django application is thread safe, I would also recommend not
using multiple single-threaded processes, but a single multithreaded
process instead.
By using multiple single threaded processes, you are likely seeing the
startup cost of loading Django and initialising each instance when you
are doing the testing.
I suggest at least dropping the number of daemon processes down to 3.
Even that will likely still handle requests adequately, and you avoid
the startup costs of extra processes you do not need.
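As a sketch, a single multithreaded daemon process would be configured along these lines (the process group name, thread count, and script path are placeholders):

```apache
# One multithreaded daemon process instead of several single-threaded
# ones -- assumes the Django application is thread safe.
WSGIDaemonProcess myapp processes=1 threads=15
WSGIProcessGroup myapp
WSGIScriptAlias / /srv/myapp/django.wsgi
```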
> Of note, I'm seeing this in my apache logs:
> [Sun Sep 19 18:04:58 2010] [error] Exception KeyError:
> KeyError(-1218693376,) in <module 'threading' from '/usr/lib/python2.6/
> threading.pyc'> ignored
You can ignore it; it is only a warning. It was eliminated in mod_wsgi 3.X.
Graham
I'm load testing a Django 1.2.1/Apache (prefork)/mod_wsgi (daemon)
configuration on an AWS small instance (Ubuntu 10.04) with apache
bench, and seeing extremely high CPU load (using uptime and vmstat) at
low concurrent requests. Note that I'm using a trivial out-of-the-box
Django project/app with a simple "hello world" view (no DBs, etc.).
CPU is at 100% even with an apache bench concurrency value of 2. I'm
running apache bench from a different AWS instance in the same region/
zone. Ideas on what's the problem, or how I should continue to debug
this?
> I appreciate the feedback. I've tried the hello world wsgi program,
> with similar results:
>
> Using an apache bench concurrency value of 10 (ab -c 10 -n 10000
> "my_url"), I'm still seeing CPU loads (with top and uptime) above
> 3 and 0% idle time (with MaxClients = 3, and processes=3 threads=1).
>
> I've established that as expected, setting MaxClients = 10, and
> processes=10 threads=1 causes CPU Load to skyrocket, up to 9. I'm
> running 10000 requests, so I'm not just seeing the initial startup CPU
> spike.
>
> Questions:
>
> 1) Are these simply the results I should expect with an AWS small
> instance (1.7 GB memory, 1 virtual core, which according to Amazon is
> a 1.0-1.2 GHz 2007 Opteron or 2007 Xeon processor)?
>
> 2) Am I banging on my server too hard? I know it's all about expected
> traffic, but is my apache bench test an unrealistic baseline?
>
> 3) I'm using prefork MPM since Apache is also running PHP, which
> doesn't work with worker MPM. With prefork MPM, can I still run
> multithreaded WSGI (i.e. processes=1 threads=10)? I'm uncertain
> whether they're independent. In any case, I'm seeing similar results
> when I try.
You could run PHP with Nginx like this: http://wiki.nginx.org/NginxFcgiExample, then you're not stuck with prefork.
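A minimal sketch along the lines of that wiki page, assuming a PHP FastCGI backend is already listening on 127.0.0.1:9000 (the document root is a placeholder):

```nginx
# Hand .php requests to a FastCGI backend, so Apache no longer
# needs mod_php and is free to use a threaded MPM.
location ~ \.php$ {
    fastcgi_pass   127.0.0.1:9000;
    fastcgi_index  index.php;
    fastcgi_param  SCRIPT_FILENAME /var/www$fastcgi_script_name;
    include        fastcgi_params;
}
```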
~Carl
> --
> You received this message because you are subscribed to the Google Groups "modwsgi" group.
> To post to this group, send email to mod...@googlegroups.com.
> To unsubscribe from this group, send email to modwsgi+u...@googlegroups.com.
> For more options, visit this group at http://groups.google.com/group/modwsgi?hl=en.
>
>
--
-------------------------------------------------------------------------------
Carl J. Nobile (Software Engineer)
carl....@gmail.com
-------------------------------------------------------------------------------