What is the ideal web server to use with Django?


akshat

May 13, 2015, 1:15:18 AM
to django...@googlegroups.com
I am new to Django. I am building an app which will have to handle several concurrent requests. Which web server is suitable for this? I have read that Apache's performance degrades under high load.

Mario Gudelj

May 13, 2015, 3:11:17 AM
to django...@googlegroups.com
The most common setups I've come across are Nginx + Gunicorn and Nginx + uWSGI. Nginx + Gunicorn is really easy to set up and it will probably be sufficient. You just need to run enough Gunicorn workers for your amount of traffic. I really like this setup: http://michal.karzynski.pl/blog/2013/06/09/django-nginx-gunicorn-virtualenv-supervisor/
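For context, each Gunicorn worker simply loads a WSGI callable; Django's wsgi.py exposes an equivalent `application` object. A minimal sketch (the module layout and message are made up for illustration):

```python
# A bare WSGI callable, equivalent in shape to what Django's wsgi.py
# provides and what each Gunicorn worker loads and calls per request.
def application(environ, start_response):
    body = b"Hello from a Gunicorn worker\n"
    start_response("200 OK", [
        ("Content-Type", "text/plain"),
        ("Content-Length", str(len(body))),
    ])
    return [body]
```

You would serve it with something like `gunicorn --workers 3 myproject.wsgi:application` (module path hypothetical) and put Nginx in front as a reverse proxy.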

On 13 May 2015 at 11:12, akshat <akshatw...@gmail.com> wrote:
I am new to Django. I am building a app which will have to handle several concurrent requests. Which web server is suitable for this? I have read that Apache's performance degrades on high load.

--
You received this message because you are subscribed to the Google Groups "Django users" group.
To unsubscribe from this group and stop receiving emails from it, send an email to django-users...@googlegroups.com.
To post to this group, send email to django...@googlegroups.com.
Visit this group at http://groups.google.com/group/django-users.
To view this discussion on the web visit https://groups.google.com/d/msgid/django-users/4b0d96b1-76b8-4068-92e2-cbdbdc55e609%40googlegroups.com.
For more options, visit https://groups.google.com/d/optout.

kk

May 13, 2015, 6:58:03 AM
to django...@googlegroups.com

I would suggest using Nginx.

happy hacking.
Krishnakant.

On Wednesday 13 May 2015 06:42 AM, akshat wrote:
I am new to Django. I am building a app which will have to handle several concurrent requests. Which web server is suitable for this? I have read that Apache's performance degrades on high load.

Tom Evans

May 13, 2015, 8:37:38 AM
to django...@googlegroups.com
On Wed, May 13, 2015 at 2:12 AM, akshat <akshatw...@gmail.com> wrote:
> I am new to Django. I am building a app which will have to handle several
> concurrent requests. Which web server is suitable for this?

Any and all.

>I have read that
> Apache's performance degrades on high load.

That is absolute nonsense.

Cheers

Tom

termopro

May 13, 2015, 8:55:27 AM
to django...@googlegroups.com

> > I have read that Apache's performance degrades on high load.
>
> That is absolute nonsense.

Doesn't Apache create a new process for each request, thus eating memory when serving large amounts of static files during traffic peaks?

Tom Evans

May 13, 2015, 9:03:23 AM
to django...@googlegroups.com
No.

Cheers

Tom

termopro

May 13, 2015, 9:06:47 AM
to django...@googlegroups.com

> No.

OK :)

James Schneider

May 13, 2015, 9:46:40 AM
to django...@googlegroups.com

If you get enough traffic to trounce a (web server of choice) installation, you are probably making enough money to hire an expert with that system to tune it properly or to recommend adding resources.

Don't get bogged down in Apache vs. Nginx vs. uWSGI, etc. You're nowhere near that point if you're asking that question.

The web server platform is usually not the cause of site slowness, especially if it is tuned correctly (number of worker threads, memory allocation, etc.). Optimizing the application itself normally provides the largest gains.
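As one example of the kind of tuning meant here, Gunicorn's documentation suggests starting from roughly (2 x CPU cores) + 1 worker processes and adjusting from there; a tiny sketch, with that heuristic as the only assumption:

```python
import multiprocessing

def suggested_workers(cpu_count=None):
    # Common starting heuristic for process-based WSGI servers
    # (Gunicorn's docs suggest 2*cores + 1); a starting point to
    # measure against, not a rule.
    if cpu_count is None:
        cpu_count = multiprocessing.cpu_count()
    return cpu_count * 2 + 1
```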

The three most common configurations I've seen are Apache/mod_wsgi, Nginx/uWSGI, and Nginx/Gunicorn. All have varying degrees of difficulty, strengths, extended functionality (i.e., allowing configuration via environment variables for things like your SECRET_KEY), documentation, and support.

Research all of them and determine a) which one is easiest for you to implement/configure/maintain/secure, and b) whether your site warrants the extra work of doing so rather than using a pre-built environment such as Heroku.

Of course, the Django docs have recommendations:

https://docs.djangoproject.com/en/dev/howto/deployment/

-James


reduxionist

May 14, 2015, 3:36:14 AM
to django...@googlegroups.com
That's only evidence that a lot of people don't know how to "performance-tune" (a.k.a. configure) Apache.

The question you asked Tom was "Doesn't Apache create new process for each request [thus eating memory when serving large amounts of static files during traffic peaks]?", and the reason that Tom correctly answers "No" is that as far as "serving large amounts of static files" goes you should be using mpm-worker (multi-threaded Apache) which most definitely does not spawn a new process for each request.

The reason for those search results is that mpm-prefork does, however, spawn a process per request, but it is only needed for non-thread-safe environments (most notoriously mod_php) and you shouldn't have to use it as long as you've been a good coder and avoided global state in your Django app (e.g. keep request-specific shared-state thread-local).
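To illustrate the "keep request-specific shared state thread-local" point, here is a minimal sketch using Python's `threading.local` (the names are hypothetical, not Django API): each worker thread sees only its own copy of the state, so concurrent requests in a threaded (mpm-worker style) deployment cannot clobber each other the way a plain module-level global would.

```python
import threading

# Request-specific state kept thread-local instead of in a bare global.
_request_state = threading.local()

def set_current_user(username):
    _request_state.user = username

def get_current_user():
    return getattr(_request_state, "user", None)

def handle_request(username, results, index):
    # Simulates one request being served on its own thread.
    set_current_user(username)
    results[index] = get_current_user()

# Two "requests" served concurrently: each thread sees only its own user.
results = [None, None]
threads = [
    threading.Thread(target=handle_request, args=("alice", results, 0)),
    threading.Thread(target=handle_request, args=("bob", results, 1)),
]
for t in threads:
    t.start()
for t in threads:
    t.join()
```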

I think the reason a lot of people seem to run mpm-prefork is just that it's the default multi-processing module for Apache on most (all?) *nix platforms and they don't know any better.

Hope that helps explain away that myth! :)

Best wishes,
Jonathan

Tom Evans

May 14, 2015, 9:27:12 AM
to django...@googlegroups.com
On Thu, May 14, 2015 at 4:36 AM, reduxionist <jonathan...@gmail.com> wrote:
> The question you asked Tom was "Doesn't Apache create new process for each
> request [thus eating memory when serving large amounts of static files
> during traffic peaks]?", and the reason that Tom correctly answers "No" is
> that as far as "serving large amounts of static files" goes you should be
> using mpm-worker (multi-threaded Apache) which most definitely does not
> spawn a new process for each request.
>
> The reason for those search results is that mpm-prefork does, however, spawn
> a process per request,

No, really, it does not. It only spawns a new process when there are
no available workers to process an incoming request, and you have not
reached the maximum number of workers that you have configured it to
start. You can configure it to start all the worker processes you want
when it starts up, and never to kill them off, and it will never spawn
a new process.
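Concretely, prefork's pool bounds are all configuration; a sketch using Apache 2.4 directive names (the numbers are illustrative only, not a recommendation):

```apache
# Pre-fork the whole pool at startup and never reap idle workers,
# so no new processes are spawned under load.
<IfModule mpm_prefork_module>
    StartServers             64
    MinSpareServers          64
    MaxSpareServers          64
    MaxRequestWorkers        64
    MaxConnectionsPerChild    0
</IfModule>
```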

Apache processes are small, unless you do daft things like embed your
web application in each worker process (mod_php style). This is the
source of the main complaint, "Apache is eating all my memory": it isn't;
the web application you've embedded into Apache is eating all your memory.

All of this is irrelevant for django, because with Apache you should
use mod_wsgi in daemon mode, which separates out your web application
processes from the web server.

> but it is only needed for non-thread-safe
> environments (most notoriously mod_php) and you shouldn't have to use it as
> long as you've been a good coder and avoided global state in your Django app
> (e.g. keep request-specific shared-state thread-local).
>
> I think the reason a lot of people seem to run mpm-prefork is just that it's
> the default multi-processing module for Apache on most (all?) *nix platforms
> and they don't know any better.

Quite. We run a pair of Apache 2.4 reverse proxies in front of all of
our (400+) domains, serving around 40 million requests per day,
providing SSL termination and static file serving. We use event MPM
and we have it scaled to support a peak of 2048 simultaneous
connections. Load on the server never goes above 0.2, memory usage
never goes above 1GB for the entire OS + applications, the rest of the
RAM is used by the OS to cache the aforementioned static files.

On our app servers we typically use Apache with worker MPM and
mod_wsgi, although we have a few nginx+uwsgi sites, and I would dearly
love some time to play around with a circusd + chaussette + celery
setup.

The choice of web server is, these days, irrelevant. If it uses too
much memory or can't handle enough users, it is never the fault of the
web server, but instead of your application and/or configuration.
Which is why I return to my original advice:

> I am new to Django. I am building a app which will have to handle several
> concurrent requests. Which web server is suitable for this?

Any and all.

Leave the fanboyism to the phone guys.

Cheers

Tom

Avraham Serour

May 14, 2015, 9:41:02 AM
to django...@googlegroups.com

My main reason for recommending nginx is the config files: they are simpler than Apache's.
When developing an application you shouldn't spend much time configuring the web server;
only once you've reached enough traffic does it make sense to invest time in it.


Jonathan Barratt

May 14, 2015, 1:57:59 PM
to django...@googlegroups.com
> On 14 May 2015, at 05:26, Tom Evans <teva...@googlemail.com> wrote:
>
>> On Thu, May 14, 2015 at 4:36 AM, reduxionist <jonathan...@gmail.com> wrote:
<snip>
>> The reason for those search results is that mpm-prefork does, however, spawn
>> a process per request,
>
> No, really, it does not. It only spawns a new process when there are
> no available workers to process an incoming request, and you have not
> reached the maximum number of workers that you have configured it to
> start.

Oops, sorry, I was totally wrong about that.

> You can configure it to start all the worker processes you want
> when it starts up, and never to kill them off, and it will never spawn
> a new process.
>
> Apache processes are small, unless you do daft things like embed your
> web application in each worker process (mod_php style). This is the
> main complaint "Apache is eating all my memory" - it isn't, your web
> application you've embedded into Apache is eating all your memory.
>
> All of this is irrelevant for django, because with Apache you should
> use mod_wsgi in daemon mode, which separates out your web application
> processes from the web server.
<snip>
>> I think the reason a lot of people seem to run mpm-prefork is just that it's
>> the default multi-processing module for Apache on most (all?) *nix platforms
>> and they don't know any better.
>
> Quite. We run a pair of Apache 2.4 reverse proxies in front of all of
> our (400+) domains, serving around 40 million requests per day,
> providing SSL termination and static file serving. We use event MPM
> and we have it scaled to support a peak of 2048 simultaneous
> connections. Load on the server never goes above 0.2, memory usage
> never goes above 1GB for the entire OS + applications, the rest of the
> RAM is used by the OS to cache the aforementioned static files.
>
> On our app servers we typically use Apache with worker MPM and
> mod_wsgi, although we have a few nginx+uwsgi sites, and I would dearly
> love some time to play around with a circusd + chausette + celery
> setup.
>
> The choice of web server is, these days, irrelevant. If it uses too
> much memory or can't handle enough users, it is never the fault of the
> web server, but instead of your application and/or configuration.

Thanks for the awesome write-up, Tom, and for correcting my error. My apologies to all for contributing to any FUD!

Yours gratefully,
Jonathan

Marc Aymerich

May 14, 2015, 3:19:13 PM
to django-users
Hi Tom,
I'd never heard of circusd and chaussette, but they sound interesting.

I believe the advantage of this setup pays off when you have several
independent low-traffic applications, because you don't want all of
them to have preforked wsgi worker processes, or even a master
process each. I think uwsgi can do that (emperor mode, cheaper subsystem),
but circusd will let you do the same with celery (celery needs
at least one master per pool). Is this the main point? I'm asking
because I can only find a couple of blog posts, and the
documentation is concise; neither mentions the real tangible advantage
over traditional deployments.


>
> The choice of web server is, these days, irrelevant. If it uses too
> much memory or can't handle enough users, it is never the fault of the
> web server, but instead of your application and/or configuration.
> Which is why I return to my original advice:
>
> > I am new to Django. I am building a app which will have to handle several
> > concurrent requests. Which web server is suitable for this?
>
> Any and all.
>
> Leave the fanboyism to the phone guys.
>
> Cheers
>
> Tom
>




--
Marc

Andrew Farrell

May 15, 2015, 3:19:49 AM
to akshat, django...@googlegroups.com
Akshat,

I second the recommendation to go with gunicorn+nginx, for the same reason Avraham gives. Digital Ocean has a good walkthrough of how to set that up.


One thing, though: I would do it in stages when you are just starting out. That way you always have some piece working, and when something isn't working you have a good guess where the problem is:
1) First run through the tutorial with just a sqlite database and the built-in webserver on port 8000.
2) Get nginx to serve a static html file and confirm you can see its logs (probably in /var/log/nginx)
3) Get nginx to be able to proxy_pass from port 80 to the built-in webserver on port 8000
4) Get gunicorn able to serve on port 8000
5) Get nginx able to proxy to gunicorn on port 8000
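For steps 3 and 5, the nginx side boils down to a small server block; this is only a sketch, with a placeholder domain and a made-up static path:

```nginx
server {
    listen 80;
    server_name example.com;           # placeholder domain

    location /static/ {
        alias /srv/myproject/static/;  # hypothetical STATIC_ROOT
    }

    location / {
        # Points at the built-in dev server in step 3,
        # then at Gunicorn on the same port in step 5.
        proxy_pass http://127.0.0.1:8000;
        proxy_set_header Host $host;
    }
}
```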

And then you can also get the postgres database working: first connect to it with manage.py dbshell, then run migrations and use the built-in webserver against it, and then bring up the full stack.

Welcome aboard the django train,
Andrew


Tom Evans

May 15, 2015, 8:33:33 AM
to django...@googlegroups.com
On Thu, May 14, 2015 at 4:18 PM, Marc Aymerich <glic...@gmail.com> wrote:
> On Thu, May 14, 2015 at 11:26 AM, Tom Evans <teva...@googlemail.com> wrote:
>> On our app servers we typically use Apache with worker MPM and
>> mod_wsgi, although we have a few nginx+uwsgi sites, and I would dearly
>> love some time to play around with a circusd + chausette + celery
>> setup.
>
>
> Hi Tom,
> never heard about circusd and chaussette, but it sounds interesting.
>
> I believe the advantage of this setup pays when you have several
> independent low-traffic applications, because you don't want all of
> them to have preforked wsgi worker processes, not even the master
> process. I think uwsgi can do that (emperor mode, cheaper subsystem),
> but circusd will allow you to do the same which celery (celery needs
> at least one master per pool). Is this the main point? I'm asking
> because I can only find just a couple of blogposts and the
> documentation is concise, neither mention the real tangible advantage
> over traditional deployments.
>

There are a few very cool things I like about circusd + chaussette.
Chaussette allows you to serve over a unix socket, and this allows
circusd to easily spin up a new backend (with different code) and
transfer requests to that unix socket, whilst leaving the old backend
still running.

This means zero downtime when doing a code upgrade, and instant
failback to the old application if for some reason you don't like what
was pushed.

The second thing is that circusd is a process manager like
supervisord, but it allows for dynamic operation: you can run celery
worker processes underneath it and spin worker processes up or down
as load requires.

The third is that circusd can be driven through a web interface, which
allows for simple day-to-day use and also simplifies how admins and
sysadmins interact with it. It's very easy for our sysadmins to
control things they can just fire HTTP requests at.

Cheers

Tom

Avraham Serour

May 15, 2015, 8:51:10 AM
to django...@googlegroups.com
> chaussette allows you to run over a unix socket, and this allows
> circusd to easily spin up a new backend (with different code) and
> transfer requests to that unix socket, whilst leaving the old backend
> still running.

This was the killer feature that got me using uwsgi: you can reload your new django code without uwsgi closing the socket. In the worst case some requests may see a little delay, but there is no downtime.

I once even did a crazy thing: a form that updated and reloaded its own django code (git pull and uwsgi reload). It worked great: the user would click the button to update the system, the request would come in, and the user would get the updated version on reload (because the form response is a redirect).

My only problem was doing the same trick on Cygwin, but that's another story.




Ilya Kazakevich

May 15, 2015, 1:15:38 PM
to django...@googlegroups.com
Hi.

I believe the best installation is Apache + mod_wsgi for Django, with nginx for static files.

In some scenarios Apache's performance may be your bottleneck, but:
1) there are a lot of ways to tune it. Read about "Multi-Processing Modules", for example.
2) 99% of web applications have a different bottleneck, not the web server.
3) really huge installations should use clusters and scale horizontally. In such scenarios web server performance is not an issue (unless you work for Google or Facebook) :)

Marc Aymerich

May 15, 2015, 8:36:53 PM
to django-users
Thanks Tom, that is really cool. I really appreciate your comments on this!


--
Marc