2 dynos available... If I have one tab open on /wait, then according to Heroku I should not get a timeout on / if I just hit reload over and over in my browser... but I do...
2011-02-16T12:07:03-08:00 heroku[web.2]: State changed from starting to up
2011-02-16T12:07:04-08:00 heroku[web.1]: State changed from starting to up
2011-02-16T12:09:44-08:00 heroku[router]: Error H12 (Request timeout) -
On Wednesday, 16 February 2011 at 18:01, Tim W wrote:
It is not identical to what Heroku is providing. The Heroku mesh seems to blindly send a request to a dyno, no matter the current status of that dyno; the queue is at the dyno level. Passenger holds back the request until a process is available.
With Passenger you do not end up in the situation noted below, whereas with Heroku you do. (With Passenger, request Y gets served OK; with Heroku, request Y gets the H12 error.)
Quoted from the Passenger docs (this is what happens if you have global queuing turned off in Passenger, and what always happens with Heroku):

----------------------------------------------------------------------

The situation looks like this:
Backend process A: [*    ]  (1 request in queue)
Backend process B: [***  ]  (3 requests in queue)
Backend process C: [***  ]  (3 requests in queue)
Backend process D: [***  ]  (3 requests in queue)

Each process is currently serving short-running requests.
Phusion Passenger will forward the next request to backend process A. A will now have 2 items in its queue. We’ll mark this new request with an X:
Backend process A: [*X   ]  (2 requests in queue)
Backend process B: [***  ]  (3 requests in queue)
Backend process C: [***  ]  (3 requests in queue)
Backend process D: [***  ]  (3 requests in queue)
Assuming that B, C and D still aren’t done with their current request, the next HTTP request - let’s call this Y - will be forwarded to backend process A as well, because it has the least number of items in its queue:
Backend process A: [*XY  ]  (3 requests in queue)
Backend process B: [***  ]  (3 requests in queue)
Backend process C: [***  ]  (3 requests in queue)
Backend process D: [***  ]  (3 requests in queue)
But if request X happens to be a long-running request that needs 60 seconds to complete, then we'll have a problem. Y won't be processed for at least 60 seconds. It would have been a better idea if Y had been forwarded to process B, C or D instead, because they only have short-living requests in their queues.
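To make the difference concrete, here is a tiny Ruby sketch of the scenario above (timings assumed for illustration: X takes 60 seconds, every other queued request takes 1 second):

```ruby
# Sketch of the two dispatch strategies described above. Process A holds a
# short request plus the 60-second request X; B, C and D each hold three
# 1-second requests. All timings are assumed, not measured.
queues = {
  a: [1, 60],       # short request, then long-running X
  b: [1, 1, 1],
  c: [1, 1, 1],
  d: [1, 1, 1],
}

# Per-process queues (as Heroku's mesh is described in this thread):
# Y is appended to the shortest queue (A) and cannot start until
# everything ahead of it on A has finished.
shortest = queues.min_by { |_, q| q.length }
per_process_wait = shortest.last.sum          # 1 + 60 = 61 seconds

# Global queue (Passenger's default): Y is held in one shared queue and
# starts as soon as *any* process drains its backlog.
global_wait = queues.values.map(&:sum).min    # B/C/D drain in 3 seconds

puts "per-process wait: #{per_process_wait}s" # 61s -- well past a 30s H12
puts "global-queue wait: #{global_wait}s"     # 3s
```

With a 30-second router timeout, the 61-second per-process wait is exactly the situation where Y comes back as an H12 even though three dynos were nearly idle.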
On Feb 16, 12:50 pm, Neil Middleton <neil.middle...@gmail.com> wrote:
Is this not identical to what Heroku provides, though? Your global queue is your application's dynos, and the routing mesh will send requests to whichever dynos are idle, with the wait being the backlog.
The only difference I can see is that Passenger won't, by default, spit back any requests that take longer than 30 seconds.
If global queuing is turned on, then Phusion Passenger will use a global queue that’s shared between all backend processes. If an HTTP request comes in, and all the backend processes are still busy, then Phusion Passenger will wait until at least one backend process is done, and will then forward the request to that process.
The default is on.
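For reference, on a self-hosted Passenger install this behaviour is controlled by a single directive (not something you can set on Heroku itself); a minimal Apache sketch:

```apache
# Enable Passenger's shared (global) request queue for this app.
# This is the Passenger 2.x/3.x directive; later versions dropped it
# and made global queuing the only mode.
PassengerUseGlobalQueue on
```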
On Feb 16, 12:36 pm, Neil Middleton <neil.middle...@gmail.com> wrote:
AFAIK Passenger does have a similar concept with running processes (having a default of six running processes, which are comparable to 6 dynos).
On Wednesday, 16 February 2011 at 16:55, Tim W wrote:
Thanks, I will give rack-timeout a try.
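For anyone else trying this, a minimal rack-timeout setup looks roughly like the following. The exact API has varied across versions of the gem, so check its README; the 10-second value is just an example:

```ruby
# Gemfile
gem "rack-timeout"

# config/initializers/rack_timeout.rb
# Abort requests that run longer than 10 seconds (example value) so a
# slow request raises inside the app instead of silently hitting H12.
Rack::Timeout.timeout = 10
```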
So it seems like the routing mesh is not as sophisticated as Heroku lets on?
On Feb 16, 11:45 am, Neil Middleton <neil.middle...@gmail.com> wrote:
The dyno is still running the long request, successfully. It's only the routing mesh that has returned the timeout error to the user. Therefore, the dyno is still in your 'grid' and ready for new requests.
In my experience, this does not seem to be the case. We have several admin features in our app that, when requested with certain params, can take longer than 30s to run. (I am working on ways to get these in check and into the background.) When a user trips one of these long-running requests, Heroku appears to queue additional requests to the same dyno, and those requests time out, even though there are plenty of other dynos available to handle them.
Is the statement on the Heroku website true or false? It does not appear that Heroku actively monitors the dynos to see if they are busy with a long running request. Is there a better way to handle this situation?