Nginx + WSGI == Blazing Fast!


David Cancel

Jan 7, 2008, 10:55:41 AM
to web.py
I know... I know... Simple benchmarks mean nothing, but I couldn't
help playing with the new(ish) mod_wsgi module for my favorite web
server, Nginx.

Nginx: http://nginx.net/
Nginx mod_wsgi module: http://wiki.codemongers.com/NginxNgxWSGIModule

I tested Nginx against the recommended setup of Lighttpd/FastCGI.
These very simple and flawed tests were run on Debian Etch under
virtualization (Parallels) on my MacBook Pro. Hey, I said they were
flawed... :-)

The results show Nginx/WSGI performing 3x as fast as Lighttpd/FastCGI:
over 1,000 requests per second!

I tested both with Keep-Alives on and off. I'm not sure why Nginx/WSGI
performed 2x as fast with keep-alives on.
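
For what it's worth, the headline ratios can be checked directly from
the requests-per-second means in the full results below (a quick
sketch; the figures are copied from the ab output later in this post):

```python
# Requests/sec means taken from the ab runs reported below.
nginx_keepalive = 1030.93      # Nginx 0.5.34, keep-alives on
nginx_no_keepalive = 420.52    # Nginx 0.5.34, keep-alives off
lighttpd_keepalive = 344.71    # Lighttpd 1.4.13, keep-alives on

# "3x as fast as Lighttpd/FastCGI" (both with keep-alives on)
print(round(nginx_keepalive / lighttpd_keepalive, 2))   # ~2.99

# "2x as fast with keep-alives on" (Nginx vs. itself)
print(round(nginx_keepalive / nginx_no_keepalive, 2))   # ~2.45
```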

*********** Full results below *************

--------------------------------------------
Nginx 0.5.34 - Keepalives On
---------------------------------------------
ab -c 10 -n 1000 -k http://10.211.55.4/wsgi-webpy/david
This is ApacheBench, Version 1.3d <$Revision: 1.73 $> apache-1.3

Server Software: nginx/0.5.34
Server Hostname: 10.211.55.4
Server Port: 80

Document Path: /wsgi-webpy/david
Document Length: 14 bytes

Concurrency Level: 10
Time taken for tests: 0.970 seconds
Complete requests: 1000
Failed requests: 0
Broken pipe errors: 0
Keep-Alive requests: 1001
Total transferred: 136136 bytes
HTML transferred: 14014 bytes
** Requests per second: 1030.93 [#/sec] (mean) **
Time per request: 9.70 [ms] (mean)
Time per request: 0.97 [ms] (mean, across all concurrent requests)
Transfer rate: 140.35 [Kbytes/sec] received

Connection Times (ms)
              min  mean[+/-sd] median   max
Connect:        0     0    0.4      0     5
Processing:     1     9    4.3      9    26
Waiting:        0     9    4.2      9    25
Total:          1     9    4.3      9    26

Percentage of the requests served within a certain time (ms)
50% 9
66% 11
75% 12
80% 13
90% 15
95% 17
98% 20
99% 22
100% 26 (last request)

--------------------------------------------
Nginx 0.5.34 - No Keepalives
---------------------------------------------
ab -c 10 -n 1000 http://10.211.55.4/wsgi-webpy/david
This is ApacheBench, Version 1.3d <$Revision: 1.73 $> apache-1.3

Server Software: nginx/0.5.34
Server Hostname: 10.211.55.4
Server Port: 80

Document Path: /wsgi-webpy/david
Document Length: 14 bytes

Concurrency Level: 10
Time taken for tests: 2.378 seconds
Complete requests: 1000
Failed requests: 0
Broken pipe errors: 0
Total transferred: 131131 bytes
HTML transferred: 14014 bytes
** Requests per second: 420.52 [#/sec] (mean) **
Time per request: 23.78 [ms] (mean)
Time per request: 2.38 [ms] (mean, across all concurrent requests)
Transfer rate: 55.14 [Kbytes/sec] received

Connection Times (ms)
              min  mean[+/-sd] median   max
Connect:        0     4    2.9      3    26
Processing:     8    19    8.8     18   136
Waiting:        0    19    8.8     17   135
Total:          8    23    8.9     21   142

Percentage of the requests served within a certain time (ms)
50% 21
66% 24
75% 26
80% 28
90% 34
95% 40
98% 45
99% 47
100% 142 (last request)

*********************************************************************

--------------------------------------------
Lighttpd 1.4.13 - Keepalives On
---------------------------------------------
ab -c 10 -n 1000 -k http://10.211.55.4/david
This is ApacheBench, Version 1.3d <$Revision: 1.73 $> apache-1.3

Server Software: lighttpd/1.4.13
Server Hostname: 10.211.55.4
Server Port: 80

Document Path: /david
Document Length: 14 bytes

Concurrency Level: 10
Time taken for tests: 2.901 seconds
Complete requests: 1000
Failed requests: 1
(Connect: 0, Length: 1, Exceptions: 0)
Broken pipe errors: 0
Keep-Alive requests: 942
Total transferred: 138711 bytes
HTML transferred: 14001 bytes
** Requests per second: 344.71 [#/sec] (mean) **
Time per request: 29.01 [ms] (mean)
Time per request: 2.90 [ms] (mean, across all concurrent requests)
Transfer rate: 47.81 [Kbytes/sec] received

Connection Times (ms)
              min  mean[+/-sd] median   max
Connect:        0     0    1.1      0    21
Processing:     3    28   29.3     22   385
Waiting:        3    28   29.3     22   385
Total:          3    28   29.3     22   385

Percentage of the requests served within a certain time (ms)
50% 22
66% 26
75% 31
80% 34
90% 48
95% 60
98% 100
99% 164
100% 385 (last request)

--------------------------------------------
Lighttpd 1.4.13 - No Keepalives
---------------------------------------------
ab -c 10 -n 1000 http://10.211.55.4/david
This is ApacheBench, Version 1.3d <$Revision: 1.73 $> apache-1.3

Server Software: lighttpd/1.4.13
Server Hostname: 10.211.55.4
Server Port: 80

Document Path: /david
Document Length: 14 bytes

Concurrency Level: 10
Time taken for tests: 4.017 seconds
Complete requests: 1000
Failed requests: 1
(Connect: 0, Length: 1, Exceptions: 0)
Broken pipe errors: 0
Total transferred: 134269 bytes
HTML transferred: 14029 bytes
** Requests per second: 248.94 [#/sec] (mean) **
Time per request: 40.17 [ms] (mean)
Time per request: 4.02 [ms] (mean, across all concurrent requests)
Transfer rate: 33.43 [Kbytes/sec] received

Connection Times (ms)
              min  mean[+/-sd] median   max
Connect:        0     3    4.9      2    68
Processing:     3    36   49.6     28   852
Waiting:        2    35   49.6     28   852
Total:          3    39   50.1     30   855

Percentage of the requests served within a certain time (ms)
50% 30
66% 36
75% 41
80% 44
90% 61
95% 87
98% 148
99% 252
100% 855 (last request)

rkm...@gmail.com

Jan 7, 2008, 11:29:24 AM
to we...@googlegroups.com
Hi David,
Thanks for the benchmarks. Can you share the configuration files you
used for both Nginx and Lighttpd?

David Cancel

Jan 7, 2008, 12:02:16 PM
to web.py
Here are the configuration files and the code used for both tests.

--------------------------------------------
Configuration file for Lighttpd
--------------------------------------------
server.modules = (
    "mod_rewrite",
    "mod_fastcgi",
)
server.document-root = "/var/www/"
server.errorlog      = "/var/log/lighttpd/error.log"
server.pid-file      = "/var/run/lighttpd.pid"

## virtual directory listings
dir-listing.encoding = "utf-8"
server.dir-listing   = "enable"
server.username      = "www-data"
server.groupname     = "www-data"

fastcgi.server = ( "/code-fastcgi.py" =>
    (( "socket"    => "/tmp/fastcgi.socket",
       "bin-path"  => "/var/www/code-fastcgi.py",
       "max-procs" => 1
    ))
)

url.rewrite-once = (
    "^/favicon.ico$" => "/static/favicon.ico",
    "^/static/(.*)$" => "/static/$1",
    "^/(.*)$"        => "/code-fastcgi.py/$1",
)
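
To make the routing explicit: the rewrite-once rules send favicon and
/static/ requests straight to files, and everything else into the
FastCGI script. A rough Python simulation of that first-match dispatch
(the rule list mirrors the config above; this is an illustration, not
lighttpd's actual rewrite engine):

```python
import re

# Mirrors url.rewrite-once above; Python uses \1 where lighttpd uses $1.
RULES = [
    (r'^/favicon.ico$', r'/static/favicon.ico'),
    (r'^/static/(.*)$', r'/static/\1'),
    (r'^/(.*)$',        r'/code-fastcgi.py/\1'),
]

def rewrite(path):
    # rewrite-once applies only the first matching rule, then stops
    for pattern, replacement in RULES:
        if re.match(pattern, path):
            return re.sub(pattern, replacement, path)
    return path

print(rewrite('/david'))           # /code-fastcgi.py/david
print(rewrite('/static/app.css'))  # /static/app.css
```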

--------------------------------------------
Configuration file for Nginx
--------------------------------------------
worker_processes 2;
error_log logs/error.log info;
pid logs/nginx.pid;

events {
    worker_connections 1024;
}

env HOME;
env PYTHONPATH=/usr/bin/python;

http {
    include conf/mime.types;
    default_type application/octet-stream;

    sendfile on;
    keepalive_timeout 65;

    wsgi_python_optimize 2;
    wsgi_python_executable /usr/bin/python;
    wsgi_python_home /usr/;
    wsgi_enable_subinterpreters on;

    server {
        listen 80;
        server_name localhost;

        include conf/wsgi_vars;

        location / {
            #client_body_buffer_size 50;
            wsgi_pass /usr/local/nginx/nginx.py;

            wsgi_pass_authorization off;
            wsgi_script_reloading on;
            wsgi_use_main_interpreter on;
        }

        location /wsgi {
            #client_body_buffer_size 50;
            wsgi_var TEST test;
            wsgi_var FOO bar;
            wsgi_var EMPTY "";
            # override existing HTTP_ variables
            wsgi_var HTTP_USER_AGENT "nginx";
            wsgi_var HTTP_COOKIE $http_cookie;

            wsgi_pass /usr/local/nginx/nginx-2.py main;

            wsgi_pass_authorization on;
            wsgi_script_reloading off;
            wsgi_use_main_interpreter off;
        }

        location /wsgi-webpy {
            wsgi_pass /usr/local/nginx/webpy-code.py;
        }
    }
}

--------------------------------------------
Code for Lighttpd
--------------------------------------------
#!/usr/bin/env python

import web

urls = (
    '/(.*)', 'hello'
)

class hello:
    def GET(self, name):
        i = web.input(times=1)
        if not name: name = 'world'
        for c in xrange(int(i.times)):
            print 'Hello,', name+'!'

if __name__ == "__main__": web.run(urls, globals())

--------------------------------------------
Code for Nginx
--------------------------------------------
import web

urls = (
    '/(.*)', 'hello'
)

class hello:
    def GET(self, name):
        i = web.input(times=1)
        if not name: name = 'world'
        for c in xrange(int(i.times)):
            print 'Hello,', name+'!'

application = web.wsgifunc(web.webpyfunc(urls, globals()))
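
For anyone wondering what `wsgi_pass` actually invokes: the module
loads the script and calls a callable named `application` (or whatever
name is given as the second argument, as with `main` in the config
above) following the standard WSGI interface. A minimal hand-rolled
sketch of an equivalent hello app without web.py (hypothetical, not
the author's code; note that for a name like `david` the body is the
same 14 bytes that ab reports as the Document Length):

```python
def application(environ, start_response):
    # Everything after the leading slash is the name; default to 'world'.
    name = environ.get('PATH_INFO', '/').lstrip('/') or 'world'
    body = ('Hello, %s!\n' % name).encode('utf-8')
    start_response('200 OK', [('Content-Type', 'text/plain'),
                              ('Content-Length', str(len(body)))])
    return [body]
```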


Graham Dumpleton

Jan 7, 2008, 8:23:10 PM
to web.py

I realise people love to show benchmarks with Keep-Alive enabled
because they make things look much better, but results from using
Keep-Alive are pretty bogus, as they generally bear no relationship to
reality. You are just never going to get a browser client sending
hundreds of requests down the same socket connection, for a start.

In reality what is more likely to happen is you get one request
against your WSGI application and then you might get subsequent
requests for linked images from the page. But then, if the page has
been visited before, it is likely that those linked images are already
cached by the browser and so even that will not happen. Thus, in the
majority of cases you will get one request only and the socket
connection will not even be used.

This is in part why, for a high-performance web site, you are better
off hosting your media files on a different server. The main server
hosting the WSGI application would have Keep-Alive turned off so that
socket connections don't linger and consume resources. The media
server, on the other hand, could quite happily use Keep-Alive, as it
is the media files linked from a page that are most likely to produce
serialised requests on the same socket connection.

The only time that Keep-Alive may be valid in testing for dynamic web
applications is where one is trying to remove the overhead of the
socket connection from the picture so as to evaluate the overheads of
any internal mechanisms applicable to the hosting mechanism.

For example, in contrasting the overheads of mod_python or embedded
mod_wsgi vs. systems which need to do a subsequent proxy to a further
process, such as mod_proxy, mod_wsgi daemon mode, or fastcgi, scgi,
and ajp solutions. I.e., use Keep-Alive so that it is easier to see
what the overhead of that proxying actually is. You really need to
know what you are looking for in doing that, though.
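
That overhead-isolation idea can be illustrated with the Nginx numbers
earlier in the thread: comparing the mean per-request time with and
without keep-alives gives a rough estimate of what each TCP connection
setup/teardown cost in this particular test (a back-of-the-envelope
sketch, not a rigorous measurement):

```python
# Mean time per request across all concurrent requests (ms),
# derived from the requests/sec figures in the Nginx runs above.
t_keepalive    = 1000.0 / 1030.93   # ~0.97 ms, connection reused
t_no_keepalive = 1000.0 / 420.52    # ~2.38 ms, new connection each time

# The difference approximates the per-request connection overhead.
overhead_ms = t_no_keepalive - t_keepalive
print(round(overhead_ms, 2))   # ~1.41
```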

Also be mindful that nginx mod_wsgi isn't necessarily seen as a
front-facing web server, as I don't think it is really set up to also
handle static file serving at the same time. Thus, it is more seen as
something that would sit behind mod_proxy, or maybe in parallel to a
media server. If your only option is behind mod_proxy, so the
front-end server is serving static files, then you really need to take
that mod_proxy hop into consideration when doing testing.

Anyway, the results are pretty meaningless, as your bottleneck isn't
going to be the network, but the WSGI application itself or any
database it accesses.

Unless a solution has really bad performance relative to others, you
are always better off using the mechanism you find easier to work
with, or which has the specific features you need. :-)

Graham

David Cancel

Jan 7, 2008, 9:17:49 PM
to web.py
As I said, it was a flawed benchmark, as are all such attempts IMO.
The reason for my quick test was to see what difference, if any, there
was between one interface option and another. This is probably the
only useful aspect of a benchmark; I assume that's why you've used
such benchmarks yourself when testing mod_wsgi.

The results were interesting (to me) given the ease of integration
with Nginx vs. Lighttpd. Also, since I tested both options using
keep-alives, it was not an attempt to make things look better. An
interesting observation from testing both options with keep-alives on
was Nginx's more than 2x improvement (keep-alives on vs. off) versus
the much smaller improvement seen by Lighttpd. That is something worth
looking into a bit more, and the results may reinforce Nginx's
attractiveness as an asset server (images and other media).

Correct that paths served by nginx's mod_wsgi aren't where you want to
serve media files, but Nginx is itself a great front-facing web
server; that is what most people use it for (as a proxy). My instance
was configured to serve static files via Nginx and only pass a certain
path to mod_wsgi. In this case you can use keep-alives to serve both
media files and dynamic content from this single front-facing web
server. I agree, though, that if possible you should split up the two
operations for optimal performance.

Agreed: Nginx is the mechanism that I find easiest to work with.

Benchmarks are an interesting thing in that everyone agrees they're
flawed, yet everyone loves to perform them and comment on them.
Interesting indeed.

David



Rolf Dergham

Mar 30, 2017, 3:19:18 AM
to web.py, dca...@gmail.com
I understand that lighttpd is single-threaded, whereas nginx runs multiple worker processes. So if you have multiple CPU cores, nginx would be faster, but it would also use correspondingly more CPU resources.