hello world: Gevent versus Node.js


VP

Apr 28, 2011, 5:35:15 PM
to gevent: coroutine-based Python network library
Hi there, I'm a newbie and I don't blog, so I'm not sure if this
post is appropriate for this group. Anyhow, I've recently become
interested in the kind of things that Node.js and Gevent do.

Background:
I came across this blog post
(http://entitycrisis.blogspot.com/2011/04/pyramid-vs-nodejs_08.html),
which has been reposted a couple of times. Looking into it, I think
the author's conclusion that Node.js beat Gevent was misleading. So I
did my own test. Granted, a hello-world test can only tell so much,
but I think there are interesting observations.

The problem with the test in that blog is that Node.js by default does
not write to stdout, whereas Gevent logs every request. For a
hello-world app, that makes a big difference.

The test: the latest Node.js and Gevent, on Debian Squeeze.

Node.js:
var util = require('util'),
    http = require('http');

http.createServer(function (req, res) {
  res.writeHead(200, {'Content-Type': 'text/plain'});
  res.write('Hello world!');
  res.end();
}).listen(3004);

/* server started */
util.puts('> hello world running on port 3004');


Gevent:
from gevent import wsgi

class WebServer(object):
    def application(self, environ, start_response):
        start_response("200 OK", [])
        return ["Hello world!"]

if __name__ == "__main__":
    app = WebServer()
    wsgi.WSGIServer(('', 8888), application=app.application,
                    log=None).serve_forever()

Notice that I set log=None on the Gevent server to prevent it from
writing to stdout on every request.

===

The result is kind of interesting....

For ab -n 10000 -c 5, Gevent does better than Node.js

Result from Node.js:


Concurrency Level: 5
Time taken for tests: 3.750 seconds
Complete requests: 10000
Failed requests: 0
Write errors: 0
Total transferred: 760000 bytes
HTML transferred: 120000 bytes
Requests per second: 2666.39 [#/sec] (mean)
Time per request: 1.875 [ms] (mean)
Time per request: 0.375 [ms] (mean, across all concurrent requests)
Transfer rate: 197.90 [Kbytes/sec] received

Percentage of the requests served within a certain time (ms)
50% 2
66% 2
75% 2
80% 2
90% 3
95% 3
98% 3
99% 4
100% 16 (longest request)

---
And result for Gevent:

Concurrency Level: 5
Time taken for tests: 3.442 seconds
Complete requests: 10000
Failed requests: 0
Write errors: 0
Total transferred: 1470000 bytes
HTML transferred: 120000 bytes
Requests per second: 2905.48 [#/sec] (mean)
Time per request: 1.721 [ms] (mean)
Time per request: 0.344 [ms] (mean, across all concurrent requests)
Transfer rate: 417.09 [Kbytes/sec] received

Percentage of the requests served within a certain time (ms)
50% 2
66% 2
75% 2
80% 2
90% 2
95% 2
98% 2
99% 2
100% 3 (longest request)
---

As you can see, Gevent actually did better, processing more requests
per second. But what is really interesting in this test is the 100th
percentile: Node.js took up to 16 ms to serve its slowest request,
whereas Gevent took only 3 ms. On Node.js, those requests waited
significantly longer than the rest.

==================

To explore this a little further, I increased the concurrency tenfold.

Test: ab -n 10000 -c 50

Result for Node.js:

Concurrency Level: 50
Time taken for tests: 3.561 seconds
Complete requests: 10000
Failed requests: 0
Write errors: 0
Total transferred: 760000 bytes
HTML transferred: 120000 bytes
Requests per second: 2808.56 [#/sec] (mean)
Time per request: 17.803 [ms] (mean)
Time per request: 0.356 [ms] (mean, across all concurrent requests)
Transfer rate: 208.45 [Kbytes/sec] received

Connection Times (ms)
min mean[+/-sd] median max
Connect: 0 0 0.1 0 2
Processing: 3 17 8.8 17 45
Waiting: 3 17 8.8 17 45
Total: 3 18 8.8 18 46

Percentage of the requests served within a certain time (ms)
50% 18
66% 22
75% 25
80% 26
90% 29
95% 31
98% 33
99% 37
100% 46 (longest request)

-----

Result for Gevent:

Concurrency Level: 50
Time taken for tests: 3.606 seconds
Complete requests: 10000
Failed requests: 0
Write errors: 0
Total transferred: 1470000 bytes
HTML transferred: 120000 bytes
Requests per second: 2772.88 [#/sec] (mean)
Time per request: 18.032 [ms] (mean)
Time per request: 0.361 [ms] (mean, across all concurrent requests)
Transfer rate: 398.06 [Kbytes/sec] received

Connection Times (ms)
min mean[+/-sd] median max
Connect: 0 0 0.1 0 2
Processing: 7 18 0.5 18 20
Waiting: 7 18 0.5 18 20
Total: 7 18 0.4 18 20

Percentage of the requests served within a certain time (ms)
50% 18
66% 18
75% 18
80% 18
90% 18
95% 18
98% 18
99% 18
100% 20 (longest request)

----

As you can see, Gevent served slightly fewer requests per second this
time; I would attribute that to system fluctuations. What is
interesting to me is that Gevent is very fair to all requests,
whereas Node.js isn't.

Even for the slowest request, Gevent took only 20ms, whereas Node.js
took 46ms. Further, if you look at the requests from the 50th to the
100th percentile, Gevent consistently served them in 18ms, whereas
Node.js took between 18 and 46ms. For these requests, I think Gevent
wins hands down.

Apache Bench doesn't show it, but we can deduce that Node.js serves
the requests below the 50th percentile very quickly.

--

Pushing concurrency to 100: ab -n 10000 -c 100, same story:

Node.js
Requests per second: 2763.72 [#/sec] (mean)
Time per request: 36.183 [ms] (mean)

50% 35
66% 44
75% 50
80% 53
90% 59
95% 62
98% 67
99% 72
100% 963 (longest request)

Gevent:
Requests per second: 2726.76 [#/sec] (mean)
Time per request: 36.674 [ms] (mean)
50% 37
66% 37
75% 37
80% 37
90% 37
95% 37
98% 37
99% 37
100% 38 (longest request)

------

For concurrency = 200, it's the same story, but Gevent's performance
degrades badly, while Node.js still keeps roughly the same performance.

Node.js
Requests per second: 2750.41 [#/sec] (mean)
Time per request: 72.716 [ms] (mean)

50% 50
66% 63
75% 70
80% 74
90% 81
95% 90
98% 938
99% 973
100% 1261 (longest request)


Gevent:
Requests per second: 1807.85 [#/sec] (mean)
Time per request: 110.629 [ms] (mean)

50% 49
66% 49
75% 49
80% 49
90% 50
95% 50
98% 72
99% 249
100% 5485 (longest request)

CryptWizard

Apr 29, 2011, 1:14:16 AM
to gev...@googlegroups.com
Good catch.

Matt Billenstein

Apr 29, 2011, 9:22:46 PM
to gev...@googlegroups.com
I don't know if I'd read very much into the latency and fairness results here
-- this is a bad benchmark IMO -- it basically simulates a cpu-bound sort of
workload whereas most web app servers are I/O bound (waiting on databases and
various other internal services). I mean, why not throw nginx into the race?
It's really built for this sorta thing anyway; serving static content...

I think the key observation even from the original test is that regardless of
which server wins, they're comparable in terms of performance. So the choice
on which to use basically falls to library support or personal preference or
maybe something else entirely.

I personally dislike writing Javascript or using callbacks, but I can see how a
frontend web guy would naturally be drawn to node.js - and the jit there does
make it interesting -- I hope PyPy brings some of this technology to Python in
a more mainstream manner eventually.

m

--
Matt Billenstein
ma...@vazor.com
http://www.vazor.com/

Andy

Apr 30, 2011, 12:00:28 PM
to gevent: coroutine-based Python network library
Very interesting. Thanks for posting.

Do you know why gevent's performance degrades so badly at high
concurrency (200)? Is that due to the difference between libevent and
libev (which node.js uses)?

VP

May 1, 2011, 5:19:41 PM
to gevent: coroutine-based Python network library


On Apr 29, 8:22 pm, Matt Billenstein <m...@vazor.com> wrote:
> I don't know if I'd read very much into the latency and fairness results here
> -- this is a bad benchmark IMO -- it basically simulates a cpu-bound sort of
> workload whereas most web app servers are I/O bound (waiting on databases and
> various other internal services).

I think you are right that most web apps are database oriented. But
as they are non-blocking platforms, the strength of Node.js and Gevent
is not in these database-oriented applications. Things such as
chatrooms, or highly interactive and/or concurrent apps, are where
Node.js and Gevent shine (I think). As such, I think these
"hello-world" tests are a little more meaningful than in those other
I/O-bound cases.

Matt Billenstein

May 1, 2011, 5:43:28 PM
to gev...@googlegroups.com

If your message queue is in the same python process -- maybe so, but even for
chat, you're likely talking to redis or some other server for queuing messages
where it turns into the same sort of 'waiting on the db' workload...

m

Andy

May 2, 2011, 1:12:56 AM
to gevent: coroutine-based Python network library
But even for database oriented workload it is possible to access the
database asynchronously, no? So after sending a database query,
instead of blocking and waiting for a response, the python process is
free to process another request. There's no longer a need to have 1
process per request. That should improve throughput and memory
consumption.
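
The idea above can be sketched with gevent itself. This is a minimal
illustration, not a real driver: fake_db_query and the 0.1s delay are
made-up stand-ins for a database round trip.

```python
import time

import gevent


def fake_db_query(i):
    # gevent.sleep stands in for waiting on a database response;
    # while one greenlet waits, the hub runs the others
    gevent.sleep(0.1)
    return i * 2


start = time.time()
jobs = [gevent.spawn(fake_db_query, i) for i in range(10)]
gevent.joinall(jobs)
elapsed = time.time() - start

print(sorted(job.value for job in jobs))
print(elapsed < 0.5)  # the ten waits overlap instead of taking ~1s serially
```

One process handles all ten "queries" concurrently, which is exactly
the throughput/memory argument: no thread or process per request.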

Matt Billenstein

May 2, 2011, 2:25:17 AM
to gev...@googlegroups.com
On Sun, May 01, 2011 at 10:12:56PM -0700, Andy wrote:
> > > I don't know if I'd read very much into the latency and fairness results here
> > > -- this is a bad benchmark IMO -- it basically simulates a cpu-bound sort of
> > > workload whereas most web app servers are I/O bound (waiting on databases and
> > > various other internal services).
> >
> > I think you are right that most web apps are database oriented. But
> > as they are non-blocking platforms, the strength of Node.js and Gevent
> > are not in these database oriented applications. Things such as
> > chatrooms or highly interactive and/or concurrent are where Node.js
> > and Gevent shine (I think). As such, I think these "hello-world"
> > tests are a little more meaningful than in those other I/O bound
> > cases.
>
> But even for database oriented workload it is possible to access the
> database asynchronously, no? So after sending a database query,
> instead of blocking and waiting for a response, the python process is
> free to process another request. There's no longer a need to have 1
> process per request. That should improve throughput and memory
> consumption.

This is done with threads -- so the thread blocks and the os schedules another
one to service another request -- threads are rather heavy though and why
things like gevent exist.
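
That thread-per-request model can be sketched with the standard
library; handle_request and the 0.05s delay here are made-up stand-ins
for a blocking request handler:

```python
import threading
import time


def handle_request(results, i):
    # time.sleep stands in for blocking I/O; the OS schedules
    # other threads while this one waits
    time.sleep(0.05)
    results[i] = i


results = {}
threads = [threading.Thread(target=handle_request, args=(results, i))
           for i in range(20)]
start = time.time()
for t in threads:
    t.start()
for t in threads:
    t.join()
elapsed = time.time() - start

print(len(results))   # all 20 requests were served
print(elapsed < 0.5)  # the 20 blocking waits overlapped
```

The waits overlap just as with greenlets, but each OS thread carries
its own stack and scheduling cost, which is the "threads are rather
heavy" point.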

I was trying to highlight how I think this benchmark is only interesting in
showing that at the core node.js and python/gevent are comparable in
performance and that saying gevent is "more fair" to requests on this type of
benchmark/workload using these results isn't conclusive IMO...

VP

May 2, 2011, 10:32:35 AM
to gevent: coroutine-based Python network library
> I think this benchmark is only interesting in
> showing that at the core node.js and python/gevent are comparable in
> performance and that saying gevent is "more fair" to requests on this type of
> benchmark/workload using these results isn't conclusive IMO...

I am in agreement with you that database access will likely be the
bottleneck in many applications. But I still think that Gevent is
fairer to its requests than Node.js at the very core. Let's take
another look:

With 100 concurrent connections:

Node.js
Requests per second: 2763.72 [#/sec] (mean)
Time per request: 36.183 [ms] (mean)

50% 35
66% 44
75% 50
80% 53
90% 59
95% 62
98% 67
99% 72
100% 963 (longest request)

Gevent:
Requests per second: 2726.76 [#/sec] (mean)
Time per request: 36.674 [ms] (mean)
50% 37
66% 37
75% 37
80% 37
90% 37
95% 37
98% 37
99% 37
100% 38 (longest request)


From what I see, Gevent took no more than 38ms across the board,
whereas for Node.js 10% of the requests took almost 1 second. Unless
you say that these tests were technically inaccurate or flawed in
some way, 1 second is a perceivable delay for real-time apps. Surely
database blocking, etc., will add to this, but I think that at the
core level, a 1-second processing time for 10% of the requests is
something to ponder.

Andy

May 2, 2011, 11:04:39 AM
to gevent: coroutine-based Python network library

> This is done with threads -- so the thread blocks and the os schedules another
> one to service another request -- threads are rather heavy though and why
> things like gevent exist.

Not sure what you mean here. Are you saying database access is always
blocking? But isn't that what monkey.patch_all() is for? It turns your
(pure Python) database driver into a non-blocking driver. There are
also native non-blocking drivers like gevent-MySQL. So why would
database access block the thread?
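
What monkey.patch_all() does can be checked directly. A minimal
sketch, assuming gevent is installed:

```python
from gevent import monkey
monkey.patch_all()  # must run before anything else imports socket

import socket  # now resolves to gevent's cooperative implementation

# sanity check: gevent records which stdlib modules it has patched,
# and socket.socket now comes from a gevent module
print(monkey.is_module_patched('socket'))   # True
print('gevent' in socket.socket.__module__)  # True
```

After patching, a pure-Python driver's recv()/send() calls yield to
other greenlets instead of blocking the whole process.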


Matt Billenstein

May 2, 2011, 1:26:33 PM
to gev...@googlegroups.com

Almost 1 second for 10%? Where are you getting that? The 963ms is the longest
single request (it says it right there...) which falls in the 100th
percentile. Would be interesting to run it 10 times and compare the results --
I'm curious if that outlier is just some spurious system event...

Good benchmarking is really really hard -- you can't expect to learn much when
you invest so little - that was my only point...

VP

May 2, 2011, 2:28:57 PM
to gevent: coroutine-based Python network library
Sorry, my bad, I meant to say 1%. That quickly grew to a few more
percent of the requests at higher concurrency (look at the other
results). I do not believe what you see is spurious system events,
because this is not the first time I've carried out this test, and it
wasn't just one run.

I also agree with you that this test is not comprehensive. Please
feel free to post test results that are more meaningful.

Best.