Are you using Apache prefork MPM or worker MPM? If worker, how many
threads in each process? Also, what do you have as the maximum number of
child processes in your Apache configuration? Finally, how many Django
instances are you hosting?
There have been a few comments recently about issues with using
memcached under worker MPM because each thread creates its own
connection to the memcached servers. If that is the case, it could be
possible to exhaust the maximum number of connections a memcached
server will accept. I still need to look into this to work out what
actually happens, but feedback on your setup would help with that.
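As a rough sketch of the arithmetic (the numbers below are made up, and
I'm assuming one memcached connection per thread and memcached's
default limit of 1024 connections):

    # hypothetical worker MPM figures -- adjust to your own configuration
    apache_children = 16       # worker MPM child processes
    threads_per_child = 25     # ThreadsPerChild
    django_instances = 3       # separate sites sharing the same memcached

    # one connection per thread per instance
    total_connections = apache_children * threads_per_child * django_instances
    print total_connections    # 1200, over memcached's default -c 1024

With prefork each child has only one thread, so the same calculation is
just the number of child processes per site.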
BTW, does everyone use the pure Python memcached client module, i.e.,
memcache? I have seen comments to the effect that the C client is three
times faster, although if you then want object marshaling on top of
that it would slow things down a bit.
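For reference, by the pure Python client I mean something used along
these lines (just a minimal sketch; the server address is a placeholder):

    import memcache

    # the client pickles non-string values automatically, which is the
    # object marshaling overhead I mentioned
    mc = memcache.Client(['127.0.0.1:11211'], debug=0)
    mc.set('some_key', {'answer': 42}, time=300)
    print mc.get('some_key')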
Graham
> BTW, does everyone use the pure Python memcached client module, i.e.,
> memcache? I have seen comments to the effect that the C client is three
> times faster, although if you then want object marshaling on top of
> that it would slow things down a bit.
cmemcache still seems to segfault unacceptably (i.e. when memcached goes
down).
--
Jarek Zgoda
Skype: jzgoda | GTalk: zg...@jabber.aster.pl | voice: +48228430101
"We read Knuth so you don't have to." (Tim Peters)
I'm using prefork, as recommended. Settings are as follows:
StartServers 20
MinSpareServers 20
MaxSpareServers 40
ServerLimit 300
MaxClients 300
MaxRequestsPerChild 4000
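So, assuming one memcached connection per Apache child, the worst case
for this setup would be something like:

    # prefork children are single-threaded, so connections track MaxClients
    max_clients = 300                       # MaxClients / ServerLimit above
    connections_per_memcached = max_clients * 1
    # i.e. at most 300 connections per memcached server from this site,
    # well under memcached's default limit of 1024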