dB. | Moscow - Geneva - Seattle - New York
dblock.org - @dblockdotorg
I meant to ask, looking at the second link: is this running on a separate thread behind a select()?
For example, suppose I have a request that takes 20 seconds. The first request comes in, the loop runs once, EM accepts it, and we're now processing.
Then a second request comes in. Does the loop run again, or do we wait 19 seconds or so before another accept() is called?
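To make the question concrete, here is a minimal sketch of the kind of single-threaded select/accept loop being asked about. This is an illustration only, not Thin's or EM's actual internals; the handler, port, and response are made up:

    require 'socket'

    # Toy single-threaded select/accept loop. While handle() runs,
    # control never returns to IO.select, so a second connection
    # just sits in the kernel's accept backlog.
    def handle(client)
      sleep 20                         # simulate a 20-second request
      client.write "HTTP/1.1 200 OK\r\nContent-Length: 2\r\n\r\nok"
    end

    server = TCPServer.new(3000)
    loop do
      IO.select([server])              # block until a connection is pending
      client = server.accept
      handle(client)                   # no further select/accept runs until this returns
      client.close
    end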
Wow, and so, since we're on the Thin mailing list, why is Thin using EM? It sounds like EM only does one good thing in this scenario, which is to accept something faster and sit on it, well ... sometimes.
For a web server, in the single-threaded case wouldn't we be better off with a basic loop: blocking accept, process; blocking accept, process?
And for the MT case a basic thread pool: select / accept -> pass to a thread from the pool?
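For what it's worth, the thread-pool shape being asked about could look roughly like this in Ruby. This is a sketch under assumptions (pool size, port, and the trivial response are invented), not a proposal for Thin itself:

    require 'socket'

    # One acceptor (the main loop) feeding a fixed pool of workers
    # through a queue. Ruby's Queue is thread-safe; pop blocks until
    # the acceptor pushes a connection.
    POOL_SIZE = 4
    queue  = Queue.new
    server = TCPServer.new(3000)

    POOL_SIZE.times do
      Thread.new do
        loop do
          client = queue.pop           # wait for the acceptor to hand us a socket
          client.write "HTTP/1.1 200 OK\r\nContent-Length: 2\r\n\r\nok"
          client.close
        end
      end
    end

    loop do
      queue << server.accept           # accept quickly, process on a worker
    end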
I want to thank everyone for the detailed explanations. This has been very helpful.

I was asking this in the context of Heroku load balancing. We went down the wrong path when first trying to measure the actual wait time inside the web server, blocked on an already-processing request, by monkey-patching Thin. We ended up with what's described in http://artsy.github.com/blog/2013/02/17/impact-of-heroku-routing-mesh-and-random-routing instead, which has its drawbacks.

We'll get cooperation from the Heroku router at some point, but have also been exploring options like closing the listening socket after a request has come in (see https://github.com/dblock/1aat-ruby), causing the router to try another dyno. I know at least two people who went down the path of trying to get EM to do the same, and now I completely understand why they have not been successful at it.
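For readers following along, the one-at-a-time idea can be condensed to something like this. It's a sketch only (port, response, and error handling are assumptions, not the gem's exact code), and it relies on the router retrying another dyno when the connection is refused:

    require 'socket'

    # Accept exactly one connection at a time: close the listening
    # socket while processing, so new connections are refused and a
    # retrying router moves on to another dyno; reopen when done.
    loop do
      server = TCPServer.new(3000)     # Ruby sets SO_REUSEADDR on server sockets
      client = server.accept
      server.close                     # stop listening while we work
      client.write "HTTP/1.1 200 OK\r\nContent-Length: 2\r\n\r\nok"
      client.close
    end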
Unfortunately, most operating systems don't respect this number exactly and will adjust it dynamically, usually to something larger than the value you set. So it's nothing more than a hint.
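For concreteness, this is how an explicit backlog can be requested through Ruby's Socket API (port and sizes here are arbitrary); per the above, treat the value as a hint the kernel may round up or clamp:

    require 'socket'

    server = Socket.new(:INET, :STREAM)
    server.setsockopt(:SOCKET, :REUSEADDR, true)
    server.bind(Addrinfo.tcp('0.0.0.0', 3000))
    server.listen(1)                   # ask for a backlog of one pending connection;
                                       # the kernel treats this as a hint, not a contract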
On Mon, Feb 18, 2013 at 9:49 PM, James Tucker <jftu...@gmail.com> wrote:

On Feb 18, 2013, at 11:23 AM, Daniel Doubrovkine <dbl...@dblock.org> wrote:

> [the "I want to thank everyone for the detailed explanations" message, quoted in full above]

You don't need to close the socket; you just need to tune the accept backlog. If the Heroku balancer is written at all sensibly (which I suspect it is), then it'll have a short TCP connection timeout set and will move on to another dyno when the backlog is full. The patch isn't that hard at all; you'll mainly have to deal with how to pass the options without breaking the existing APIs.

cheers
dB.

On Mon, Feb 18, 2013 at 1:31 PM, James Tucker <jftu...@gmail.com> wrote:
On Feb 17, 2013, at 12:16 PM, Daniel Doubrovkine <dbl...@dblock.org> wrote:

> Wow, and so, since we're on the Thin mailing list, why is Thin using EM? It sounds like EM only does one good thing in this scenario, which is to accept something faster and sit on it, well ... sometimes.

EM doesn't "sit on it". EM is pretty clear about "don't block the reactor"; Thin breaches this. Actually, that's not fair on Thin either: Thin doesn't breach this, applications do.

> For a web server, in the single-threaded case wouldn't we be better off with a basic loop: blocking accept, process; blocking accept, process?

See Unicorn, etc.

> And for the MT case a basic thread pool: select / accept -> pass to a thread from the pool?

This is subject to some load-balancing, latency, and throughput concerns as well. It is also possible (and often simpler) to use a shared accept model. Please be aware, as I am trying to make clear: any time you pre-accept, you're creating the potential for a bad case of the scenario the OP was talking about.

Either of the latter solutions is still not immune to these problems, again depending on the layer at which the load balancer operates. A low-layer round-robin load balancer, and in some particularly evil cases even a weighted high-layer load balancer, will still make bad decisions unless you make your response rates stable. Without additional configuration you're using constant algorithms against what is essentially non-constant data, so the likelihood of error cases is inherently high.

In the Unicorn case of these scenarios, your OS is actually saving you from some of these problems.
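To illustrate the shared accept model mentioned above (roughly the Unicorn shape, heavily simplified; worker count, port, and response are invented), each forked worker blocks in accept() on the same pre-fork listening socket, so a worker only takes a connection when it is actually free:

    require 'socket'

    server = TCPServer.new(3000)       # one listening socket, opened before forking

    4.times do
      fork do
        loop do
          client = server.accept       # the kernel hands each connection to one idle worker
          client.write "HTTP/1.1 200 OK\r\nContent-Length: 2\r\n\r\nok"
          client.close
        end
      end
    end

    Process.waitall                    # parent just supervises the workers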