low performance of FastCGI deployment


Alexander Boldakov

Mar 12, 2007, 8:56:48 AM
to Django users
Hello all,

My Django application runs slower under Apache+FastCGI or
Lighttpd+FastCGI than under the Django development HTTP server: page
generation takes roughly 1.0 second under FastCGI versus 0.6 seconds
under the development server.

I've tried different combinations of 'prefork' and 'threaded' modes, a
Unix domain socket versus a TCP socket, and starting the FastCGI server
manually versus letting the web server start it, as described on the
Django FastCGI documentation page, but nothing helped.

If you have any idea how to solve this problem, I will greatly
appreciate it!

Django version is 0.95. Python version is 2.4.4. Apache version is
2.2.3. Flup version is 0.5.

Alexander Boldakov

Atilla

Mar 12, 2007, 9:10:32 AM
to django...@googlegroups.com
I am running a system with basically the same versions of all the
software packages as you. It is in production; it performed very well
under stress testing, and there have been no performance issues so far.
I am using Apache + FastCGI, server-managed.

The first thing you might want to look at is the Apache configuration.
Which MPM are you using for Apache: prefork or worker/threaded? Are you
loading Apache modules that you don't actually need? Are
MaxRequestsPerChild and other similar server directives set up
properly? Note that the FastCGI process also serves a finite number of
requests before being recycled, but unless you've changed the default
values you should be OK.

Joseph Heck

Mar 12, 2007, 1:32:46 PM
to django...@googlegroups.com
Have you done any profiling to see where your bottlenecks are? There's a decent set of notes on profiling Django at http://code.djangoproject.com/wiki/ProfilingDjango and an even more detailed writeup at http://www.rkblog.rk.edu.pl/w/p/django-profiling-hotshot-and-kcachegrind/.
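
For example, a minimal per-request profiler along the lines those pages
describe might look like this (hotshot-based; the middleware class name
and the /tmp output path are just placeholders):

    import time
    import hotshot

    class ProfileMiddleware(object):
        """Profile a single view when the URL contains ?prof."""

        def process_view(self, request, view_func, view_args, view_kwargs):
            if 'prof' not in request.GET:
                return None  # fall through to normal view handling
            path = '/tmp/django-%d.prof' % int(time.time())
            profiler = hotshot.Profile(path)
            try:
                # Runs the view while recording timings to `path`; inspect
                # the file later with hotshot.stats.load(path).
                return profiler.runcall(view_func, request,
                                        *view_args, **view_kwargs)
            finally:
                profiler.close()

Add it to MIDDLEWARE_CLASSES while testing and load the resulting .prof
files with hotshot.stats to see where the time goes.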

Apache+mod_python and Lighttpd+FastCGI have both rendered (very simple) pages in the sub-100ms range for me. I'd look at the code and see exactly where the bottlenecks are happening.

-joe

Alexander Boldakov

Mar 14, 2007, 5:39:24 AM
to Django users
Thanks to all for the useful advice!

I've done some profiling with a simple Django application, and it
showed that the bottleneck is flup. My Django application often serves
big HTML/XML pages (about 1-2 MB), and in such cases the overhead of a
FastCGI protocol implementation written in Python becomes tangible.
That's why a Django application deployed under Apache+FastCGI or
Lighttpd+FastCGI runs slower than under the Django development server
or Apache+mod_python.

I found the python-fastcgi package, which is a wrapper around the Open
Market FastCGI C Library/SDK
(http://cheeseshop.python.org/pypi/python-fastcgi). I then changed
django.core.servers.fastcgi to use the WSGI server from python-fastcgi
instead of the one from flup, and the performance improved greatly! It
became even faster than Apache+mod_python.
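
Roughly, the change boils down to something like the following sketch
(not my exact patch; the flup call matches flup 0.5, while the
python-fastcgi class and argument names are taken from that package's
description and may not be exact):

    import os
    os.environ['DJANGO_SETTINGS_MODULE'] = 'mysite.settings'  # placeholder

    from django.core.handlers.wsgi import WSGIHandler

    def run_with_flup(socket_path):
        # What django.core.servers.fastcgi effectively does now: the whole
        # FastCGI protocol is handled in pure Python by flup.
        from flup.server.fcgi import WSGIServer
        WSGIServer(WSGIHandler(), bindAddress=socket_path).run()

    def run_with_python_fastcgi():
        # python-fastcgi wraps the Open Market C library, so the protocol
        # handling happens in C.  Class and keyword names are assumptions.
        import fastcgi
        server = fastcgi.ForkingWSGIServer(WSGIHandler(), workers=5)
        server.serve_forever()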

The Django deployment documentation states that flup is required as
the FastCGI library. Are there any flup features that Django relies on?
Does anyone have experience deploying a Django application with FastCGI
but without flup?

Alex


Ivan Sagalaev

Mar 14, 2007, 6:46:24 AM
to django...@googlegroups.com
Alexander Boldakov wrote:
> The Django deployment documentation states that flup is required as
> the FastCGI library. Are there any flup features that Django relies on?

Nothing specific. Flup is recommended because it's pure Python and
easier to install.

Also, the documentation *strongly* recommends not serving media files
from Django but using a separate server instead. Serving media through
a framework will always be slower and should be avoided. I suspect that
even your fix of using a faster FastCGI implementation will still fail
you once you get more users. The thing is that you need a separate
process for each download, and a Django process tends to take about
15-20 MB of memory. Multiply that by the number of concurrent users and
you have pretty much exhausted your memory just copying bytes into a
socket.
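
For illustration only: the usual way out is to let the front-end server
copy the bytes while Django just does the access check, using an
X-Sendfile-style response header. The header name and the lighttpd
option below vary between servers and versions, so treat them as
assumptions:

    from django.http import HttpResponse, HttpResponseForbidden

    def download(request, filename):
        # Permission check happens in Django; the actual file is streamed
        # by the front-end server (e.g. lighttpd with "allow-x-send-file"
        # enabled for this FastCGI backend).
        if not request.user.is_authenticated():
            return HttpResponseForbidden()
        response = HttpResponse(mimetype='application/octet-stream')
        # Path is illustrative; sanitize `filename` in real code.
        response['X-Sendfile'] = '/var/media/%s' % filename
        return response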

P.S. I was struggling with a similar problem last year and wrote a
two-part writeup that I think you'll be able to read in Russian:
http://softwaremaniacs.org/blog/2006/04/18/controlled-download/
http://softwaremaniacs.org/blog/2006/04/18/controlled-download-2/

Alexander Boldakov

Mar 14, 2007, 9:38:05 AM
to Django users
The web pages served by my Django application are not static media
files but dynamically generated content (the result of applying an XSLT
transformation to XML data retrieved from an XML database). I am
considering an architecture with a public site serving a static version
of the data and a private dynamic site, but synchronizing the two
systems is not trivial in my case. Though I agree that serving big
dynamic content will break down as the number of users grows.

P.S. Your writeup is very exciting and interesting!

Atilla

Mar 15, 2007, 4:25:26 PM
to django...@googlegroups.com

What kind of XML tools are you using to apply the XSLTs? This can be
quite a heavy task, and combined with the large output sizes it will
make your processes stay in memory quite long. LibXML is VERY fast when
it comes to XML processing, but even with it the processing can get
quite heavy as your user base grows.
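
For instance, if the lxml bindings to libxml2/libxslt are an option for
you, the transformation itself is short, and compiling the stylesheet
once per process avoids repeating that cost on every request (file
names below are placeholders):

    from lxml import etree

    # Parse and compile the stylesheet once; the XSLT object is reusable.
    transform = etree.XSLT(etree.parse('stylesheet.xsl'))

    def render(xml_string):
        doc = etree.fromstring(xml_string)   # XML fetched from the database
        result = transform(doc)              # apply the compiled transform
        return etree.tostring(result)        # serialized HTML/XML output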

In any case, consider caching the results of the transformations, or
reusable parts of them. Any piece of the output that you can reuse and
cache can significantly improve your performance. If you can cache the
whole output, even better: then it's as simple as enabling Django's
cache middleware.
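
A minimal sketch of the low-level variant, assuming a memcached backend
(CACHE_BACKEND = 'memcached://127.0.0.1:11211/' in settings) and an
illustrative key scheme and timeout:

    from django.core.cache import cache

    def transformed_page(doc_id, transform):
        key = 'xslt-output-%s' % doc_id      # illustrative key scheme
        output = cache.get(key)
        if output is None:
            output = transform(doc_id)       # the expensive XSLT step
            cache.set(key, output, 15 * 60)  # keep it for 15 minutes
        return output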

If your servers are struggling and you have a user base that's rather
big and needs proper performance, you'll need more machines to serve
your content. In that case Memcached caching will really benefit you,
as the cached output will be shared between your server nodes.

How big is the application you're writing, and what is its target?
