Oh, what a procrastination opportunity, I could not resist :)!
I decided to set up my experiment using node.js itself as the client.
I suspect that 'ab' is not optimized for c > 1k, but I might be wrong.
Either way, using node as the client gives me full control over what
is going on.
My experiment itself is set up to have the client open as many
connections as possible, and keep count of the open connections and
messages received. My server script just accepts incoming connections
and sends them a message every 10 seconds.
Code:
http://gist.github.com/243632
Here are my results:
Simultaneous connections: 10071 (peak)
Memory used: 50236K (at 10071 connections)
KB / connection: 4.99
Environment: Ubuntu 9.10 running inside VirtualBox with 512 MB RAM on
my laptop
I also used the same linux kernel tcp settings and ulimit
configuration as outlined in the post cited above
(http://www.metabrew.com/article/a-million-user-comet-application-with-mochiweb-part-1/).
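For reference, this is the kind of tuning that article walks through: raising the per-process file-descriptor limit and a couple of kernel knobs, since each TCP connection costs a descriptor. The exact values below are my assumptions, not the article's; check it for the real numbers:

```shell
# Raise the per-process open-file limit for this shell
# (each open TCP connection consumes one file descriptor).
ulimit -n 999999

# Kernel-wide limits (run as root). Values are illustrative
# assumptions; see the metabrew article for its settings.
sysctl -w fs.file-max=999999
sysctl -w net.ipv4.ip_local_port_range="1024 65535"
```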
In addition to monitoring the number of connections from client.js, I
also used a customized version of Richard's bash voodoo to measure
connections and memory usage from the outside:
SERVERPID=`pgrep -f 'node server.js'`; while [ 1 ]; do \
  NUMCON=`netstat -n | awk '/ESTABLISHED/ && $4=="127.0.0.1:8000"' | wc -l`; \
  MEM=`ps -o rss= -p $SERVERPID`; \
  echo -e "`date`\t`date +%s`\t$MEM\t$NUMCON"; \
  sleep 1; \
done
Please take my results with a grain of salt as I only did 10-20 runs;
however, my results were somewhat consistent. What happened is that
after just about 10k connections, the server would suddenly start
dropping connections like crazy and end the whole show with a
"Segmentation Fault" error. I'm not entirely sure what is causing
this. I tried to vary the message sending frequency and a few other
things I could think of, but I was not able to overcome the issue (it
may even be located on the client side, but the client script does not
segfault so the server is more suspicious).
Anyway, if my results are somewhat relevant, I think they are pretty
impressive. 10k open connections is pretty cool, and 4.99 KB /
connection seems very competitive for this kind of scenario.
I would love for some people other than me to repeat my test. It
really only takes a few minutes to set up the kernel tuning and
install the test scripts if you already have node installed. (Don't
bother trying on OS X; I was not able to go above 1017 connections
there before getting a "(libev) select: Invalid argument" error,
presumably because libev falls back to a select() backend there,
which is capped at FD_SETSIZE = 1024 descriptors.)
Also if you have some theory about the segfault and the limit I hit,
please let me know. Ryan is gone for a while, but I'm sure he will add
some more info to this discussion when he comes back.
-- Felix Geisendörfer aka the_undefined
> > ab -n 10000 -c 1000
> > http://127.0.0.1:8080/