How would you serve 100,000 simultaneous comet requests with Node?

How would you serve 100,000 simultaneous comet requests with Node? Simon Willison 11/25/09 2:34 AM
I'm trying to figure out if / how it would be possible to serve
"broadcast" comet long polling to 100,000 users simultaneously using
Node. This is currently just an academic thought exercise, but if it's
possible and not ridiculously expensive there are plenty of
interesting potential applications.

The two key challenges seem to be:

1. How many simultaneous comet connections can a single machine
handle, and how can a machine be configured for maximum connections?
2. What's the most effective way to load test / benchmark this?

With a reliable method of getting as many parallel connections out of
a machine as possible, scaling up to a specific number is just a case
of adding more boxes (or EC2 nodes or whatever).

So far I've just messed around on my MacBook Pro with the 2 second
delay hello world example, Apache bench, various ulimit settings and
pretty unsatisfactory results - I can't get any higher than around "ab
-t 10 -c 300" before ab starts throwing "apr_socket_recv: Connection
reset by peer (54)". This is after running "ulimit -n 10000" before
both starting the node server and kicking off the "ab" command.

I'm not sure if the apr_socket_recv error is caused by Node or by
Apache Bench. I'm pretty sure my MacBook Pro isn't the ideal
environment for running these kinds of tests.

What should I be doing to accurately stress test Node from a parallel
connection (as opposed to requests-per-second) point of view?

I noticed that the Node HTTP server has a backlog hard coded to 128 -
is this relevant to what I'm trying to do?

Finally, is long polling even the correct solution for broadcast to
100,000 users? With long polling, every single browser receives an
event message and has to re-open a new HTTP connection at the same
time. Getting hit by 100,000 new HTTP requests at the same instant
sounds scary. Can Node handle that? Can anything handle that? Should I
be thinking about comet streaming techniques instead?
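(To frame the question, the long-poll loop on each client is roughly the following sketch; longPoll and fetchOnce are illustrative names, not a real API.)

```javascript
// Each long-poll response is handled and a new request is opened
// immediately afterwards - so a broadcast to N clients triggers N
// near-simultaneous reconnects, which is the reconnect storm described
// above.
function longPoll(fetchOnce, onEvent, rounds, done) {
  if (rounds === 0) return done();
  fetchOnce(function (event) {
    onEvent(event);                                  // deliver the broadcast
    longPoll(fetchOnce, onEvent, rounds - 1, done);  // reconnect right away
  });
}

// demo with a fake transport that "responds" on the next tick
var delivered = 0;
longPoll(
  function (cb) { setImmediate(function () { cb({ msg: 'hello' }); }); },
  function () { delivered++; },
  5,
  function () { console.log('events delivered:', delivered); } // prints "events delivered: 5"
);
```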

Cheers,

Simon Willison
Re: How would you serve 100,000 simultaneous comet requests with Node? Simon Willison 11/25/09 2:45 AM
I probably should have read this before posting:

http://www.metabrew.com/article/a-million-user-comet-application-with-mochiweb-part-1/

I'm still very interested in hearing people's opinions on doing this
with Node.
Re: How would you serve 100,000 simultaneous comet requests with Node? Simon Willison 11/25/09 8:51 AM
On Nov 25, 2:58 pm, Kam Kasravi <kamkasr...@yahoo.com> wrote:
> I think key to this approach is to keep the connections open and use non-blocking io to
> read / write to the connections. Within the javascript you should do continuations using yield.

Aha - I've used yield in Python but didn't realise V8 supported it. Do
you have a simple example of using yield to write events out to a
response stream in Node?

Cheers,

Simon
Re: How would you serve 100,000 simultaneous comet requests with Node? Tautologistics 11/26/09 6:32 AM
How would yield help here? You'd still need something (node?) to
iterate your generators. I guess a setInterval(function ()
{ target.next(); }, 0) would do the trick while still allowing a
return to the event loop but I think one could do this just as easily
without yield.
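(For what it's worth, V8 had no generator support at the time. On a runtime that does support them, the "pump the generator from a timer" idea sketched above looks like this; a minimal sketch, names illustrative.)

```javascript
// A generator yields one chunk per step; a timer "drives" it, returning
// control to the event loop between steps - the setInterval(target.next)
// pattern described above.
function* chunks() {
  for (let i = 1; i <= 3; i++) yield 'chunk ' + i;
}

function drive(gen, onChunk, onDone) {
  const timer = setInterval(() => {
    const step = gen.next();
    if (step.done) {
      clearInterval(timer);
      onDone();
    } else {
      onChunk(step.value);
    }
  }, 0);
}

drive(chunks(), c => console.log(c), () => console.log('done'));
```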

Also, I've been prodding node to see what its concurrency limits are
(I've got an erlang project that I would love to switch over to node,
at least for the proof of concept). On OSX I've tested to 1000
concurrent connections but there are some caveats that will eventually
need to be addressed (connection rate causes crashes, buggy behavior
when passing the host and backlog params to server.listen()). I'll
post up the results and the test methods a little later...
Re: How would you serve 100,000 simultaneous comet requests with Node? Chris Winberry 11/26/09 7:42 AM
Quick update, on Ubuntu 9.04 running under VirtualBox I am at 5000
concurrent connections and climbing. The page being served sends a few
bytes once a second for 50 seconds. What's also interesting is that a
rough plot of memory usage shows only about 10K of memory used per
connection.

I'm moving the Linux testing over to a physical box later today to see
what can really be gotten out of node.
Re: How would you serve 100,000 simultaneous comet requests with Node? Simon Willison 11/26/09 8:51 AM
On Nov 26, 3:42 pm, Tautologistics <cpt.obvi...@gmail.com> wrote:
> Quick update, on Ubuntu 9.04 running under VirtualBox I am at 5000
> concurrent connections and climbing.

What did you do to get that kind of concurrency - just a standard Node
server and a high ulimit setting? Which benchmarking tool are you
using?

Thanks,

Simon
Re: How would you serve 100,000 simultaneous comet requests with Node? Chris Winberry 11/26/09 9:43 AM
I still plan to post something more thorough but in the meantime,
here's what I did.

The script (sleep.js):
-------------------------------------
var sys = require('sys'),
  http = require('http');

var i = 1;

http.createServer(function(req, res) {
        sys.puts(i++);
        res.sendHeader(200, {'Content-Type': 'text/html'});
        res.sendBody('HELLO');
        var sendCount = 1;
        var intHandle = setInterval(function () {
                sendCount++;
                res.sendBody("" + sendCount);
                if (sendCount > 50) {
                        clearInterval(intHandle);
                        res.finish();
                }
        }, 1000)
}).listen(8080);
-------------------------------------

The node server was run using the following commands (OSX used a
ulimit value of 9999):
> sudo bash
> ulimit -n 99999
> su chris
> ulimit -n
> node sleep.js

I noticed problems when the connection rate was too fast (esp. on OSX)
and ab had epoll problems with too many connections as well so I split
the testing into multiple processes (shells). Each shell then ran the
following commands (on OSX the ab -c param was 100 and the -n param
was 1000):
> sudo bash
> ulimit -n 9999
> su chris
> ulimit -n
> ab -n 10000 -c 1000 http://127.0.0.1:8080/

Every once in a while I'd run the following to check if the
response from the server was still something sane:
> echo -e "GET / HTTP/1.0\r\r" | nc 127.0.0.1 8080

BTW - there's something nutty with node here because the netcat
command fails under OSX. Also noticed it fails if the HTTP/1.0 isn't
all uppercase.

Anyway, six shells under Linux gives us one server and 5000 open
client connections. And what's really cool is that all responses take
almost exactly 50sec so there's evidence that the timing/
responsiveness isn't hosed.
Re: How would you serve 100,000 simultaneous comet requests with Node? Felix Geisendoerfer 11/26/09 12:48 PM
Oh, what a procrastination opportunity, I could not resist : )!

I decided to set up my experiment using node.js itself as the client.
I suspect that 'ab' is not optimized for c > 1k, but I might be wrong.
Either way, using node as the client gives me full control over what
is going on.

My experiment itself is set up to have the client open as many
connections as possible, and keep count of the open connections and
messages received. My server script just accepts incoming connections
and sends them a message every 10 seconds.

Code: http://gist.github.com/243632

Here are my results:

Simultaneous connections: 10071 (peak)
Memory used: 50236K (at 10071 connections)
KB / connection: 4.99
Environment: Ubuntu 9.10 running inside virtual box with 512 MB Ram on
my Laptop

I also used the same linux kernel tcp settings and ulimit
configuration as outlined in the post cited above (http://
www.metabrew.com/article/a-million-user-comet-application-with-mochiweb-part-1/).

In addition to monitoring the number of connections from client.js, I
also used a customized version of Richard's bash voodoo to measure
connections and memory usage from the outside:

SERVERPID=`pgrep -f 'node server.js'`
while [ 1 ]; do
  NUMCON=`netstat -n | awk '/ESTABLISHED/ && $4=="127.0.0.1:8000"' | wc -l`
  MEM=`ps -o rss= -p $SERVERPID`
  echo -e "`date`\t`date +%s`\t$MEM\t$NUMCON"
  sleep 1
done

Please take my results with a grain of salt as I only did 10-20 runs,
however my results were somewhat consistent. What happened is that
after just about 10k connections, the server would suddenly start
dropping connections like crazy and end the whole show with a
"Segmentation Fault" error. I'm not entirely sure what is causing
this. I tried to vary the message sending frequency and a few other
things I could think of, but I was not able to overcome the issue (it
may even be located on the client side, but the client script does not
segfault so the server is more suspicious).

Anyway, if my results are somewhat relevant, I think they are pretty
impressive. 10k open connections is pretty cool, and 4.99kb /
connection seems very competitive for this kind of scenario.

I would love for some people other than me to repeat my test. It
really only takes a few minutes to set up the kernel tuning and
install the test scripts if you already have node installed. (Don't
bother trying on OSX, at least I was not able to go above 1017
connections there until I got a "(libev) select: Invalid argument"
error).

Also if you have some theory about the segfault and the limit I hit,
please let me know. Ryan is gone for a while, but I'm sure he will add
some more info to this discussion when he comes back.

-- Felix Geisendörfer aka the_undefined
Re: How would you serve 100,000 simultaneous comet requests with Node? ryan 11/26/09 12:58 PM
- OSX sucks. Don't use Macintosh to do benchmarks like this - the
Linux version of node, which uses epoll() will scale much better.

- I haven't done any very large scale concurrency tests on node.
Interested in hearing what people find.

- broadcasting single updates to 100,000 people with http long-poll
sounds a bit extreme. I'm not sure if that's possible on any system.
(Michael Carter should discuss his experience on the subject here!)

- long-poll is pretty good - afaik the browsers will use keep-alive to
make new requests, so you don't have the overhead of constructing a
TCP connection each time. The only thing to be said against long-poll
is that HTTP requests are a bit verbose - in this way websockets will be
better.
Re: How would you serve 100,000 simultaneous comet requests with Node? Joel Perras (jperras) 11/26/09 1:03 PM
> What happened is that
> after just about 10k connection, the server would suddenly start
> dropping connections like crazy and end the whole show with a
> "Segmentation Fault" error. I'm not entirely sure what is causing
> this.

Felix, meet the C10k problem: http://www.kegel.com/c10k.html &
http://en.wikipedia.org/wiki/C10k_problem

Awesome post, by the way. Guess I'm going to have to delve into
node.js a bit more seriously at some point ;-).
-jperras
Re: How would you serve 100,000 simultaneous comet requests with Node? ryan 11/26/09 1:10 PM
> Please take my results with a grain of salt as I only did 10-20 runs,
> however my results were somewhat consistent. What happened is that
> after just about 10k connection, the server would suddenly start
> dropping connections like crazy and end the whole show with a
> "Segmentation Fault" error. I'm not entirely sure what is causing
> this. I tried to vary the message sending frequency and a few other
> things I could think of, but I was not able to overcome the issue (it
> may even be located on the client side, but the client script does not
> segfault so the server is more suspicious).

Would be cool if you could gdb it and find out what's happening.
Otherwise, I'll check this out when I get back.
Re: How would you serve 100,000 simultaneous comet requests with Node? pxh 11/26/09 1:18 PM
Had a go at running it through gdb and got:
root@seagull:~/code/javascript/node# ulimit -n 999999
root@seagull:~/code/javascript/node# cd 243632/
root@seagull:~/code/javascript/node/243632# node server.js
^C
root@seagull:~/code/javascript/node/243632# node server.js ^C
root@seagull:~/code/javascript/node/243632# gdb node
GNU gdb (GDB) 7.0-ubuntu
Copyright (C) 2009 Free Software Foundation, Inc.
License GPLv3+: GNU GPL version 3 or later <http://gnu.org/licenses/gpl.html>
This is free software: you are free to change and redistribute it.
There is NO WARRANTY, to the extent permitted by law.  Type "show copying"
and "show warranty" for details.
This GDB was configured as "x86_64-linux-gnu".
For bug reporting instructions, please see:
<http://www.gnu.org/software/gdb/bugs/>...
Reading symbols from /usr/local/bin/node...done.
(gdb) run server.js
Starting program: /usr/local/bin/node server.js
[Thread debugging using libthread_db enabled]
[New Thread 0x7ffff7eca910 (LWP 9970)]

Program received signal SIGSEGV, Segmentation fault.
0x0000000000499066 in ev_timer_again ()
(gdb) bt
#0  0x0000000000499066 in ev_timer_again ()
#1  0x00000000004a1baa in evcom_stream_close ()
#2  0x0000000000494066 in node::Connection::Close(v8::Arguments const&) ()
#3  0x00000000004c00ed in v8::internal::Builtin_HandleApiCall(v8::internal::Arguments) ()
#4  0x00007fffd2ea61aa in ?? ()
#5  0x00007fffd2ea6141 in ?? ()
#6  0x00007fffffffd3a0 in ?? ()
#7  0x00007fffffffd3e0 in ?? ()
#8  0x00007fffd2ef5aa4 in ?? ()
#9  0x00007fffd2545261 in ?? ()
#10 0x00007ffff7f507d9 in ?? ()
#11 0x00007fffd28099f1 in ?? ()
#12 0x00007fffd2550df1 in ?? ()
#13 0x00007fffffffd418 in ?? ()
#14 0x00007fffd2eb7e58 in ?? ()
#15 0x00007fffd249ae79 in ?? ()
#16 0x00007fffd28099f1 in ?? ()
#17 0x00007fffd2eb7dc1 in ?? ()
#18 0x0000000500000000 in ?? ()
#19 0x0000000000000000 in ?? ()
(gdb)

Cheers.

2009/11/27 Ryan Dahl <coldre...@gmail.com>

--

You received this message because you are subscribed to the Google Groups "nodejs" group.
To post to this group, send email to nod...@googlegroups.com.
To unsubscribe from this group, send email to nodejs+un...@googlegroups.com.

For more options, visit this group at http://groups.google.com/group/nodejs?hl=en.

--
Home - http://www.piersharding.com
xmpp:...@ompka.net
mailto:pi...@ompka.net

Re: How would you serve 100,000 simultaneous comet requests with Node? pxh 11/26/09 1:45 PM
and a little more detail when built with symbols properly (doh):

Program received signal SIGSEGV, Segmentation fault.
0x0000000000499106 in ev_timer_again (w=0x7ffff7f4c219) at ../deps/libev/ev.c:2602
2602              ANHE_at_cache (timers [ev_active (w)]);
(gdb) b
Breakpoint 1 at 0x499106: file ../deps/libev/ev.c, line 2602.
(gdb) bt
#0  0x0000000000499106 in ev_timer_again (w=0x7ffff7f4c219) at ../deps/libev/ev.c:2602
#1  0x00000000004a1c4a in evcom_stream_attach (stream=0x7ffff7f4c191) at ../deps/evcom/evcom.c:1228
#2  evcom_stream_close (stream=0x7ffff7f4c191) at ../deps/evcom/evcom.c:1111
#3  0x0000000000494106 in node::Connection::Close(v8::Arguments const&) ()
#4  0x00000000004c1a7d in v8::internal::Builtin_HandleApiCall(v8::internal::Arguments) ()

Cheers.

2009/11/27 Piers Harding <piers....@gmail.com>
Re: How would you serve 100,000 simultaneous comet requests with Node? Chris Winberry 11/26/09 3:44 PM
Not exactly a hard limit but an order of magnitude where server apps hit
a wall (esp. where there's a thread per connection/request). I'm
already at 12K connections without a significant problem under the
virtual linux instance. The problem I'm hitting now is a latency one
where a connection times out. I'm doing a clean linux install on a
physical box now and will use a separate machine (or two) for driving
the clients. I'm going to make a guess and say that I should be able
to hit 24 - 28K if I configure things correctly.

On Nov 26, 4:03 pm, "Joel Perras (jperras)" <joel.per...@gmail.com>
wrote:
> > What happened is that
> > after just about 10k connection, the server would suddenly start
> > dropping connections like crazy and end the whole show with a
> > "Segmentation Fault" error. I'm not entirely sure what is causing
> > this.
>
> Felix, meet the C10k problem:http://www.kegel.com/c10k.html&http://en.wikipedia.org/wiki/C10k_problem
Re: How would you serve 100,000 simultaneous comet requests with Node? Felix Geisendoerfer 11/27/09 1:14 AM
Tautologistics: Were you able to get above 12k connections? That's the
number I initially hit a wall with, but later on I discovered that I
was rapidly losing existing connections after 10k clients, so the 2k
additional connections that occurred until the segfault were
meaningless.

I would love to hear your latest results!

-- Felix Geisendörfer aka the_undefined

Re: How would you serve 100,000 simultaneous comet requests with Node? Chris Winberry 11/27/09 4:59 AM
I'm actually having more problems with the physical linux box than the
virtual one! Once I figure out what's going on I'll update my results.

I may also switch to an erlang test client so I can tailor how the
responses are verified and get the reporting I want.
Re: How would you serve 100,000 simultaneous comet requests with Node? Ricardo Tomasi 11/27/09 1:32 PM
That's great Felix, thanks! At first I got up to 15k connections with
your script. After unrolling the client creation loop (2 iterations)
and increasing my system's port range, it topped out at nearly 37k
connections, at that point I started getting "(evcom) connect Cannot
assign requested address" errors until the server crashed with "V8
FATAL ERROR. CALL_AND_RETRY_2 Allocation failed - process out of
memory".

I had 2.7 gb of free memory, so that amounts to around 70k per
connection, IF it was using all that. There's clearly something wrong
here, I think V8 could not allocate all the memory left (I'm kind of a
linux noob, so I couldn't manage to plot memory usage for the server
process). I can reach 20,000 connections using only 700mb (35k/conn).

Anyway, that was pretty satisfying, I guess I won't be worried with
node.js' performance for the time being. 5000 req/s and 36k
connections is more than enough for my projects. All of them,
combined :) And this was a crude test, there is much room for
optimizations.

cheers
ricardo

Re: How would you serve 100,000 simultaneous comet requests with Node? Ricardo Tomasi 11/27/09 1:44 PM
Worked out the memory usage: 400mb for 20k clients (20k per
connection); it goes up to roughly 1gb at 36.7k connections (27k per
connection). Is it possible node is not completely freeing up memory
from dropped connections (lots during the process)?
Re: How would you serve 100,000 simultaneous comet requests with Node? ryan 11/27/09 1:50 PM
On Fri, Nov 27, 2009 at 10:44 PM, Ricardo Tomasi <ricar...@gmail.com> wrote:
> Worked out the memory usage: 400mb for 20k clients (20k per
> connection) it goes up to roughly 1gb at 36.7k connections (27k per
> client). Is it possible node is not completely freeing up memory from
> dropped connections (lots during the process)?

The V8 GC is pretty lazy - it likes to keep stuff around for a while.
It's possible there is a memory leak, but I doubt it.
Re: How would you serve 100,000 simultaneous comet requests with Node? billywhizz 11/27/09 3:36 PM
Hi Ricardo. I have run felix's script on a Centos 5.4 32 bit VM (on
VMWare Server) with 1GB ram and 2 2.66 GHz processors from a quad core
CPU allocated to it. I ran the client and server scripts on the same
VM and both of them ate up 100% of a CPU consistently. I am seeing the
same behaviour described above where the number of connections tops
out between 11k and 12k and then starts moving back down to between 7k
and 8k where it stayed until i got bored...

I've made all the sysctl and ulimit changes suggested here but the
sysctl changes don't seem to have any effect over the default
behaviour.

Can you go into a bit more detail regarding what changes you made to
the client script to get 30k+ connections? or if anyone can suggest
what i should investigate in order to get to the bottom of the issue,
that would be great.

I think node is really exciting. does anyone know of any plans to
integrate the following libraries btw?

- memcached binary protocol
- xml/xslt processing

i am working myself on a node library for memcached telnet interface,
but this is a crappy interface and no good for production systems... I
presume the binary protocol wouldn't be possible to integrate into
node without wrapping a c/c++ library in some way?

Great forum btw!

Re: How would you serve 100,000 simultaneous comet requests with Node? billywhizz 11/27/09 6:27 PM
ok, i've done some more testing with the same setup as outlined above.

i also get the segfault on the server after about 12 minutes of
operation. when the segfault happened, the server app had 526MB of RSS/
private ram allocated, 566MB virtual. The box has 1 GB and top was
showing it as having a little over 154MB free when the seg fault
happened. it's a bit strange that it faulted with only 566 MB virtual
mapped and still over 150MB available free.

The actual error that came from the server is the same as the one
reported by Ricardo: V8 FATAL ERROR. CALL_AND_RETRY_2 Allocation
failed - process out of memory.

It seems to me that it has to be a memory leak on the server side as
the client happily chugged away with its RSS size fluctuating between
60 and 80 MB, which would indicate the GC working as expected in V8.

I logged some extra stats on the client side to see what was happening
with the connections. It looks like the following happens when the max
connections threshold (approx 11k for my setup) is reached:

- The server continues accepting new connections
- No HTTP status codes other than 200/OK are ever returned from the
get request
- In the background, the server starts dropping the earliest created
connections and continues to do this, while continuing to accept new
connections
- Eventually, some sort of equilibrium is reached with a consistent
number of approx 7500 active connections at any one time
- This dropping and accepting of connections continues until the seg
fault happens

you can see a graph of the results of my testing here:
http://nodejs.googlegroups.com/web/stress-test.gif?gsc=le-p2gsAAADfo6nxZjimO68OJQq2lJmq

i can post up the scripts i used if anyone else wants to try. the
client script will output the stats in that graph in csv format...

Re: How would you serve 100,000 simultaneous comet requests with Node? billywhizz 11/27/09 6:35 PM
i've uploaded the client script i used here:
http://nodejs.googlegroups.com/web/client.js?gsc=SeQkrRYAAACYlaAqk1B4YZalaD-BVn15u4w3FxcUuKmQnM9jeHwPGA
Re: How would you serve 100,000 simultaneous comet requests with Node? billywhizz 11/27/09 7:46 PM
more info...

if i removed the sendBody lines from the server.js script, it happily
works away until approx 34k connections are active, after which it
starts failing on the client side with (evcom) connect Cannot assign
requested address.

var http = require('http');
var iRuns = 0;

var server = http.createServer(function(req, res) {
  res.sendHeader(200, {'Content-Type': 'text/plain'});
  setInterval(function() {
    iRuns++;
    //res.sendBody("Hello\n");
    //res.flush();
  }, 1000);
});
server.listen(8080);
Re: How would you serve 100,000 simultaneous comet requests with Node? Chris Winberry 11/27/09 8:37 PM
Have you tried creating additional IFs and spreading the connections
amongst them?
Re: How would you serve 100,000 simultaneous comet requests with Node? Chris Winberry 11/27/09 8:56 PM
Still working on this too =)

I'm currently hitting a peak of 24K but I believe my problem is that
I'm running a good chunk of the clients on the same machine. I'll be
getting some other boxes added as clients this weekend.

One thing I realized is that my server code was rather slow due to all
the anonymous functions flying around. I also send data much less
frequently to focus on testing total connections and not overall event
processing speed. This results in much less memory usage (~180MB @ 22K
connections).

What's promising about this all is that:
  1) The memory usage plateaus
  2) Once the clients are removed, the latency and CPU usage quickly
drops

Here's the current script:
http://github.com/tautologistics/node_loadtest/blob/master/sleep.js

Collecting stats from the server is done via the url http://localhost:8080/cstat
Or from the REPL: cstat();

Using this command:
(while x=1; do
  curl -s -w " | CT: %{time_connect}ms | FB: %{time_starttransfer}ms | TT: %{time_total}ms \n" \
    http://metanoia:8080/cstat >> metanoia.log
  sleep 1
done)

I get the following log format (could switch to CSV or something
later):

NOW: 1259381660 | OC: 1 | TC: 1 | CT: 0.010ms | FB: 0.013ms | TT: 0.013ms
NOW: 1259381721 | OC: 2770 | TC: 5372 | CT: 0.005ms | FB: 0.015ms | TT: 0.015ms
NOW: 1259381812 | OC: 10085 | TC: 38230 | CT: 0.010ms | FB: 0.059ms | TT: 0.059ms
NOW: 1259381987 | OC: 13630 | TC: 175528 | CT: 0.004ms | FB: 0.071ms | TT: 0.071ms
NOW: 1259382031 | OC: 16061 | TC: 215399 | CT: 0.006ms | FB: 0.024ms | TT: 0.025ms
NOW: 1259382141 | OC: 17463 | TC: 327497 | CT: 0.009ms | FB: 0.070ms | TT: 0.070ms
NOW: 1259382175 | OC: 22996 | TC: 372244 | CT: 0.004ms | FB: 0.069ms | TT: 0.069ms
NOW: 1259382191 | OC: 23059 | TC: 394120 | CT: 2.162ms | FB: 2.317ms | TT: 2.317ms

NOW = current unix timestamp
OC = open connections
TC = total connections
CT = connect time to fetch /cstat
FB = time to first byte for /cstat
TT = total time to serve /cstat

Re: How would you serve 100,000 simultaneous comet requests with Node? Chris Winberry 11/27/09 9:16 PM
Wow, I'm tired.

RE: the fixed script, the original fired multiple events per
connection and used anonymous functions; now, there's one event fired
to serve the content after a delay and the only anonymous function has
very little in it.

And it should have read:

What's promising about this all is that:
  1) The memory usage always plateaus when the connection count
stabilizes

Re: How would you serve 100,000 simultaneous comet requests with Node? billywhizz 11/27/09 9:34 PM
what do you mean by IFs? sorry if i'm being an idiot...
Re: How would you serve 100,000 simultaneous comet requests with Node? billywhizz 11/27/09 9:57 PM
i tried to run your script with the client from the other example and
the connections topped out at around 5000 on the same setup. it then
evened out at around 3600...
Re: How would you serve 100,000 simultaneous comet requests with Node? billywhizz 11/27/09 10:14 PM
ok. i tried the same experiment i did earlier, except this time with 2
clients. one running on the same box as the server and another on a
remote box. managed to get 10k+ connections active on each client
giving a total of 20k on the server. am going to test with more
clients to see what happens. am now thinking there is something wrong
on the client side...
Re: How would you serve 100,000 simultaneous comet requests with Node? Erik Corry 11/28/09 4:35 AM
Unless node goes in and changes it, the max heap size for V8 is 1GByte
on a 64 bit system and half a GByte on a 32 bit system.  If you really
have heaps of that size then garbage collections are going to start
taking a very long time.  I'm not sure whether it's enough to cause
dropped connections.

There's a V8 flag (how do you pass those when using node?) called
--trace-gc which will give you some info (on stdout) about how many
garbage collections are happening and how long they are taking.

Re: How would you serve 100,000 simultaneous comet requests with Node? Chris Winberry 11/28/09 5:10 AM
IFs -> Interfaces.
Re: How would you serve 100,000 simultaneous comet requests with Node? Chris Winberry 11/28/09 9:59 AM
Updated the client and server test scripts:

http://github.com/tautologistics/node_loadtest/blob/master/client.js

http://github.com/tautologistics/node_loadtest/blob/master/server.js

Clients are reused for connections and the REPL has the commands add()
and rem() to control the number of running clients.
Re: How would you serve 100,000 simultaneous comet requests with Node? Chris Winberry 11/28/09 11:32 AM
Still hitting a limit at about 23 - 24K connections. Memory usage is
at ~180MB so that still puts per connection memory usage at < 10K.

I'm now building an EC2 image that can host the client or server and
then try the load test using 2+ client machines against a dedicated
server.
Re: How would you serve 100,000 simultaneous comet requests with Node? Stefan Scholl 12/16/09 11:47 PM
On 27 Nov., 22:50, Ryan Dahl <coldredle...@gmail.com> wrote:
> On Fri, Nov 27, 2009 at 10:44 PM, Ricardo Tomasi <ricardob...@gmail.com> wrote:
> > Worked out the memory usage: 400mb for 20k clients (20k per
> > connection) it goes up to roughly 1gb at 36.7k connections (27k per
> > client). Is it possible node is not completely freeing up memory from
> > dropped connections (lots during the process)?
>
> The V8 GC is pretty lazy - it likes to keep stuff around for a while.
> It's possible there is a memory leak, but I doubt it.

New to node.js. I was testing the simple Hello World script from the
article on http://simonwillison.net/2009/Nov/23/node/ with ApacheBench
and was wondering the same.

The memory usage was rising after each run of ApacheBench.

Searched this group for memory leaks and most of them seem to be
fixed.
It's very unlikely that this simple script leaks.

Is there a way to start V8's GC "by hand" to check for leaks in case
I'm writing something more complex?

Re: [nodejs] Re: How would you serve 100,000 simultaneous comet requests with Node? lpsantil 12/17/09 9:27 AM
This was recently covered on the v8-users list.

http://groups.google.com/group/v8-users/browse_thread/thread/21283f6178febab/cb23293c8a705b59?lnk=gst&q=beal#cb23293c8a705b59

The skinny:

void V8::RunGC() {
        while( IdleNotification() )
                ;
}

-L
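Louis's snippet is V8 embedder API (C++). From a plain Node script, a rough equivalent on later Node versions is to start the process with `--expose-gc` (which makes `global.gc()` available) and compare heap usage around a forced collection — a sketch, assuming that flag and `process.memoryUsage()`:

```javascript
// Hedged sketch: run with `node --expose-gc leakcheck.js` on later Node
// versions so global.gc() is defined, then compare heap usage before and
// after a forced full collection instead of waiting for V8's lazy GC.
function heapAfterGC() {
  if (typeof global.gc === 'function') {
    global.gc(); // force a collection; no-op fallback without --expose-gc
  }
  return process.memoryUsage().heapUsed;
}

const before = heapAfterGC();
const garbage = [];
for (let i = 0; i < 1e5; i++) garbage.push({ i }); // allocate some objects
garbage.length = 0; // make them collectable again
const after = heapAfterGC();
console.log('heap delta after GC:', after - before, 'bytes');
```

A stable delta across repeated runs suggests no leak; a steadily growing one is worth investigating.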

> --
>
> You received this message because you are subscribed to the Google Groups "nodejs" group.
> To post to this group, send email to nod...@googlegroups.com.
> To unsubscribe from this group, send email to nodejs+un...@googlegroups.com.
> For more options, visit this group at http://groups.google.com/group/nodejs?hl=en.

Re: How would you serve 100,000 simultaneous comet requests with Node? Felix Geisendoerfer 1/18/10 3:21 AM
A client asked me how many Comet connections node could handle, so I
decided to give it try again today.

To make a long story short, I was able to modify my client/server
scripts so that node would no longer segfault, smoothly reaching 64510
parallel connections!

http://gist.github.com/243632

There are two things I changed:

a) Disabled the timeout for the underlying tcp connections
b) Always create 10 new connections in parallel and only attempt new
connections once the previous ones are established. Node still
segfaults if I increase the number of parallel connection starts.
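The batching in (b) can be sketched like this (a hypothetical rampUp helper, not Felix's actual script; `connect` stands in for whatever opens a single TCP connection and calls back once it is established):

```javascript
// Hedged sketch of the ramp-up strategy: keep at most `batchSize`
// connection attempts in flight, and only launch the next batch once the
// current one is fully established.
function rampUp(total, batchSize, connect, done) {
  let launched = 0;
  let established = 0;

  function launchBatch() {
    const n = Math.min(batchSize, total - launched);
    if (n === 0) return done(established); // nothing left to launch
    let pending = n;
    for (let i = 0; i < n; i++) {
      launched++;
      connect(() => {
        established++;
        if (--pending === 0) {
          // batch fully established: either finish or start the next one
          if (established === total) done(established);
          else launchBatch();
        }
      });
    }
  }

  launchBatch();
}

// Example with a stub that "connects" immediately:
rampUp(25, 10, cb => cb(), n => console.log('established', n)); // prints: established 25
```

With a real `connect` wrapping e.g. `net.connect`, the callback would fire on the socket's 'connect' event, throttling the attack on the server to ten in-flight attempts at a time.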

I was running my tests on a ridiculous "High-Memory Quadruple Extra
Large" EC2 instance with 68.4 GB of memory (why not?) using Ubuntu 9.04.
Here are some numbers:

* From 0 to 64510 connections in 51 seconds
* Minimal memory usage at 64510 connections: 364828 kb (5.66 kb /
connection)
* Peak memory usage at 64510 connections: 452732 kb (7.02 kb /
connection)
* Memory oscillation is due to v8's lazy garbage collection
* There appear to be no noticeable memory leaks
* The test happily ran for 2h 52m, I stopped it after that because
nothing interesting seemed to happen anymore
* During this time a total of 43281192 "Hello\n" messages were
received by the 64510 clients
* An average of 4194 messages / sec was received (ideally this value
would have been closer to 6451, as each client was supposed to be sent
a message every 10 sec)

Conclusion:

64510 is the maximum number of connections due to the available ports
on the system, so the test had to stop there. However, I do think node
could have a chance at the 1 million comet user record [1] set by
Richard Jones with Erlang. But I decided to wait for the net2 branch
to be ready before going through the trouble of setting this up.
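For context on that port ceiling, a sketch of the relevant Linux tunables (not from the thread; the default range shown is an assumption that varies by kernel):

```shell
# The ~64k cap comes from the 16-bit source-port space between one client
# IP and one server IP:port. Widening the ephemeral port range only helps
# up to 65535 ports per address pair:
sysctl net.ipv4.ip_local_port_range              # often "32768 61000"
sudo sysctl -w net.ipv4.ip_local_port_range="1024 65535"
# Beyond that, more client IPs (or more server IP:port pairs) are needed,
# which is the approach taken in the mochiweb million-user experiment [1].
```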

v8's occasional garbage collection does not seem to take longer than 1
second for the most part (otherwise the messages/last-sec counter would
drop to 0 several times in the logs; it only does so once, after
reaching 64510 connections). So there is hope it won't become a problem
for high-performance apps.

Response time variation seems big (as indicated by the oscillation in
the message / last-sec counter), but this test is not really setup to
measure it so I can't say much further about it.

There is probably still a bug in node's connection handling that can
cause a segfault if one tries to establish new connections too
aggressively. But I challenge everybody to trigger this in a real-
world scenario that is not a DoS attack. Either way, I'll retest that
when the net2 branch is merged.

Overall I am very impressed with node handling such a load, and the
memory footprint in particular seems excellent.

Oh, and I'd love to hear more thoughts on the topic! : )

[1] http://www.metabrew.com/article/a-million-user-comet-application-with-mochiweb-part-3/

--fg

PS: My log files can be found here: http://gist.github.com/279936

Re: How would you serve 100,000 simultaneous comet requests with Node? Chris Winberry 1/18/10 6:32 AM
Thanks for the update, that's some great info! Wish I had more
personal time to explore this area too.

If you have a little more time, could you try creating another IF,
binding the server to all IFs, and distributing client connections
between the IPs? Curious to see whether any internal magic numbers are
encountered or if 65k+ is smooth sailing.

--
Sent from my mobile device

Re: How would you serve 100,000 simultaneous comet requests with Node? Felix Geisendoerfer 1/18/10 7:16 AM
> If you have a little more time, could you try creating another IF,
> binding the server to all IFs, and distributing client connections
> between the IPs?

Afaik node's current tcp module doesn't allow me to specify the
interface/IP from which to connect to a server. I didn't see a quick
way to hack it in (probably due to my almost non-existent C++ skills),
so I decided to wait for the net2 branch to come along.

Is there a way to assign a given interface to a process? In that case
I could just bring up multiple client.js processes.

> Curious to see if there's any internal magic numbers
> that are encountered or if 65k+ is smooth sailing.

I am hopeful! But in theory there should be no limitation other than
bugs.

--fg


Re: [nodejs] Re: How would you serve 100,000 simultaneous comet requests with Node? Rasmus Andersson 1/18/10 7:18 AM
Very interesting numbers. Thanks for taking the time to share this.

2010/1/18 Felix Geisendörfer <fe...@debuggable.com>:

--
Rasmus Andersson

Re: [nodejs] Re: How would you serve 100,000 simultaneous comet requests with Node? zimbatm 1/18/10 8:46 AM
2010/1/18 Felix Geisendörfer <fe...@debuggable.com>:

>> If you have a little more time, could you try creating another IF,
>> binding the server to all IFs, and distributing client connections
>> between the IPs?
>
> Afaik node's current tcp module doesn't allow me to specify the
> interface/IP from which to connect to a server. I didn't see a quick
> way to hack it in (probably due to my almost non-existing C++ skills),
> so I decided to wait for the net2 branch to come along.
>
> Is there are way to tell assign a given interface to a process? In
> that case I could just bring up multiple client.js processes.

While you can't bind to a specific interface, you can bind to the
interface's IP address. Or just bind to the "*" host to use all
interfaces.

--
  Jonas Pfenniger (zimbatm) <jo...@pfenniger.name>

Re: How would you serve 100,000 simultaneous comet requests with Node? Marcel Mitsuto F. S. 6/6/12 10:28 AM
which node.js version was this benchmark run upon? 

I'm getting this output:
/opt/node/bin/node client.js

node.js:134
        throw e; // process.nextTick error, or 'error' event on first tick
        ^
TypeError: Object #<ClientRequest> has no method 'finish'
    at addClient (/home/arq_msugano/bench/client.js:23:11)
    at Object.<anonymous> (/home/arq_msugano/bench/client.js:32:3)
    at Module._compile (module.js:402:26)
    at Object..js (module.js:408:10)
    at Module.load (module.js:334:31)
    at Function._load (module.js:293:12)
    at Array.<anonymous> (module.js:421:10)
    at EventEmitter._tickCallback (node.js:126:26)

any thoughts? 

tya,
Re: How would you serve 100,000 simultaneous comet requests with Node? mscdex 6/6/12 12:59 PM
On Jun 6, 1:28 pm, "Marcel Mitsuto F. S." <mits...@gmail.com> wrote:
> which node.js version was this benchmark run upon?

This thread is over 2 years old and node has changed a lot since
then ;-)

As for what node version? Maybe 0.1.x?
Re: How would you serve 100,000 simultaneous comet requests with Node? Oleg Efimov (Sannis) 6/6/12 1:38 PM
That was wild times :D

On Wednesday, June 6, 2012 at 23:59:00 UTC+4, mscdex wrote:
Re: How would you serve 100,000 simultaneous comet requests with Node? Marcel Mitsuto F. S. 6/6/12 3:30 PM
I've made it work now.

I can spawn as many connections as the kernel TCP stack permits, but then node crashes when it hits the limit.

thanks!
Re: [nodejs] Re: How would you serve 100,000 simultaneous comet requests with Node? Ben Noordhuis 6/6/12 3:34 PM
On Thu, Jun 7, 2012 at 12:30 AM, Marcel Mitsuto F. S. <mit...@gmail.com> wrote:
> I can spawn as many connections as the kernel TCP stack permits, but then
> node crashes when it gets to the limit.

Can you specify 'crash'? If you mean an EMFILE or ENFILE error, that's
expected. If you mean 'segmentation fault' or something like that,
that's a bug.
Re: [nodejs] Re: How would you serve 100,000 simultaneous comet requests with Node? Marcel Mitsuto F. S. 7/2/12 2:37 PM
Sorry, you're right. Node exits with an EMFILE warning.
