Response buffering and scalability?


Preston L. Bannister

Mar 27, 2011, 4:39:53 PM
to rin...@googlegroups.com
Thinking about the differences between RingoJS and node.js, and what helps solve the C10K problem...

When a web server has to deal with a very large number of connections, the event-driven model has a large advantage in that there is only one thread (or a small fixed number) handling requests. Fewer threads mean (much!) less memory usage, and far more code running within the processor cache.

The thread-per-connection model runs into trouble when there are a large number of incoming requests, and when 1) request processing includes blocking operations, and 2) the time required for the client to absorb the response is significant (due to latency and/or limited throughput).

Without changes to the programming model, we cannot do much about (1).

On the other hand, we might be able to do quite a lot about (2). This may be more of a Jetty question, but can we configure RingoJS to buffer responses, allow the thread to terminate immediately, and allow response transmission to complete after the thread has terminated?

(Not sure about Comet ... must a thread always be dedicated during the idle time?)

Hannes Wallnoefer

Mar 27, 2011, 6:04:29 PM
to rin...@googlegroups.com
2011/3/27 Preston L. Bannister <preston....@gmail.com>:

> Thinking about the differences between RingoJS and node.js, and what helps
> solve the C10K problem...
> When a web server has to deal with a very large number of connections, the
> event-driven model has a large advantage in that there is only one thread
> (or a small fixed number) handling requests. Fewer threads means (much!)
> less memory usage, and far more code running within the processor cache.

There are so many myths around single-threaded event-loop server
performance. Of course fewer threads mean less memory usage, but it
also means server latency will increase badly once you get some load.
See <http://hns.github.com/2010/09/29/benchmark2.html> for an example
of that effect. Not to mention the question of how to keep your other
x CPU cores busy.

I'm not saying the JVM is perfect (it could be less memory hungry, for
example). But having highly scalable threads and being able to do both
blocking and non-blocking I/O seems very hard to beat if you want to
build scalable servers.

If you don't trust me on this, here are two posts from quite reputable
folks that come to similar conclusions:

http://amix.dk/blog/post/19577
http://www.olympum.com/java/java-aio-vs-nodejs/

> The thread-per-connection model runs into trouble when there are a large
> number of incoming requests, and when 1) request processing includes
> blocking operations, and 2) the net latency required for the client to
> absorb the response is significant (due to latency and/or throughput).
> Without changes to the programming model, we cannot do much about (1).
> On the other hand, we might be able to do quite a lot about (2). This may be
> more of a Jetty question, but can we configure RingoJS to buffer responses,
> allow the thread to terminate immediately, and allow response transmission
> to complete after the thread has terminated?

Ringo is well equipped for both issues you mention. Jetty's default
SelectChannelConnector uses non-blocking I/O, so any I/O before a
request is ready to be processed, or after a response is committed and
buffered, will not run on a dedicated thread. And with Jetty (or any
Servlet 3.0 container) you can detach the current request and resume
later when waiting for I/O or other events.
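To make the buffered write path concrete, here is a toy sketch in plain JavaScript (purely illustrative; `handle`, `selectorPass`, and the 4-byte socket "window" are invented for the example, not Jetty's API). The handler fills a buffer and returns immediately; a selector loop later drains the buffer to a slow client in small chunks, with no handler thread held in between:

```javascript
// Handler produces the full response into a buffer and returns immediately;
// from here on no dedicated thread is tied to this request.
function handle(request) {
  return { buffer: "Hello, " + request.name + "!", sent: "" };
}

// Simulated selector pass: write as much as the "socket" will accept,
// and report whether anything remains to flush.
function selectorPass(response, socketWindow) {
  var chunk = response.buffer.slice(0, socketWindow);
  response.buffer = response.buffer.slice(socketWindow);
  response.sent += chunk;
  return response.buffer.length > 0;
}

var res = handle({ name: "world" });  // handler "thread" is done here
while (selectorPass(res, 4)) {}       // slow client absorbs 4 bytes per pass
console.log(res.sent);                // -> "Hello, world!"
```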

See http://hns.github.com/2010/07/02/versatility.html for an
introduction, or
https://github.com/hns/stick/blob/master/examples/continuation/app.js#L24-36
for a more advanced approach using JS 1.7 generators and continuations.
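The generator/continuation trick can be sketched in a few lines of plain JavaScript (illustrative only: `run`, `waitForEvent`, and `pending` are invented names, not Ringo's API, and the sketch uses modern `function*` syntax rather than the old JS 1.7 form). The handler yields where it would otherwise block; the runner parks a resume callback, and nothing holds a thread until some later event calls it:

```javascript
// Toy continuation runner: drives a generator, resuming it when an async
// step completes. Each yielded value is a function receiving the resume
// callback.
function run(genFn, done) {
  var gen = genFn();
  function step(value) {
    var next = gen.next(value);           // resume the generator
    if (next.done) { done(next.value); return; }
    next.value(step);                     // hand it the resume callback
  }
  step(undefined);
}

var pending = null;                       // stands in for Jetty parking a continuation

function waitForEvent() {
  // "Detach": just remember how to resume; no thread is held meanwhile.
  return function (resume) { pending = resume; };
}

var result = null;
run(function* () {
  var msg = yield waitForEvent();         // request suspends here
  return "got: " + msg;
}, function (r) { result = r; });

// Later, some other event (e.g. a Comet push) resumes the parked request:
pending("hello");
console.log(result);                      // -> "got: hello"
```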

Hannes
