Is there a way to track the length of the http client queue?


Vladimir Ivanov

Feb 27, 2018, 4:35:46 PM
to Tornado Web Server
subj.

I have an application built on Tornado that mostly uses 3 different APIs internally.
The problem is I can see a lot of HTTP 599: Timeout in request queue errors registered.
After googling around I understood that the reason could be in the `request_timeout`, `connect_timeout` and `max_clients` values.
I could imagine setting `request_timeout` and `connect_timeout` in the range of 10 to 60 seconds in order to give my customers feedback in a reasonable amount of time. But how can I figure out the `max_clients` value?

I believe there should be a way to track the HTTP client queue length, send metrics (e.g. every 5 seconds), draw a graph in Grafana, and probably set some alerts so I could tune the `max_clients` value.

What is the best way to do it? Or what is the maximum reasonable `max_clients` value I could use?

Best regards,
Vladimir

Ben Darnell

Mar 2, 2018, 4:17:19 PM
to python-...@googlegroups.com
On Tue, Feb 27, 2018 at 4:35 PM Vladimir Ivanov <icu...@gmail.com> wrote:
subj.

I have an application built on Tornado that mostly uses 3 different APIs internally.
The problem is I can see a lot of HTTP 599: Timeout in request queue errors registered.
After googling around I understood that the reason could be in the `request_timeout`, `connect_timeout` and `max_clients` values.
I could imagine setting `request_timeout` and `connect_timeout` in the range of 10 to 60 seconds in order to give my customers feedback in a reasonable amount of time. But how can I figure out the `max_clients` value?

I believe there should be a way to track the HTTP client queue length, send metrics (e.g. every 5 seconds), draw a graph in Grafana, and probably set some alerts so I could tune the `max_clients` value.

There is not currently a documented or portable way to track the size of the request queue (you could look at len(http_client.queue) for SimpleAsyncHTTPClient, or len(http_client._requests) for CurlAsyncHTTPClient, if you don't mind that these internals might change out from under you).
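As a concrete illustration of that caveat, here is a minimal sketch of sampling the undocumented `.queue` attribute of SimpleAsyncHTTPClient on a timer. `report_metric` and `start_sampler` are hypothetical names, and `.queue` is an internal detail that may change between Tornado releases:

```python
from tornado.httpclient import AsyncHTTPClient
from tornado.ioloop import PeriodicCallback

def report_metric(name, value):
    # Placeholder: push to your metrics backend (StatsD, Graphite, ...) here.
    print(name, value)

def sample_queue_length(http_client):
    # `.queue` is an implementation detail of SimpleAsyncHTTPClient and
    # may change or disappear in a future Tornado release; fail soft.
    queue = getattr(http_client, "queue", None)
    if queue is not None:
        report_metric("httpclient.queue_length", len(queue))

def start_sampler(http_client, interval_ms=5000):
    # Call this once the IOLoop is running; samples every 5 seconds
    # by default, matching the cadence suggested above.
    PeriodicCallback(lambda: sample_queue_length(http_client),
                     interval_ms).start()
```

Graphing that metric over time should show whether the queue grows faster than requests drain, which is exactly when the 599s appear.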

Consider implementing your own queuing layer with `tornado.queues` and a `tornado.locks.Semaphore`. This will let you monitor the queue however you want, and also do smarter scheduling, replacing the AsyncHTTPClient's simple FIFO queue with a priority queue.
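One way such a layer could look, as a sketch rather than an official Tornado pattern: a `Semaphore` caps concurrency, and an explicit counter of waiters gives you a queue-length metric you control. `ThrottledHTTPClient` and its `queued` attribute are hypothetical names, not Tornado API:

```python
from tornado.httpclient import AsyncHTTPClient
from tornado.locks import Semaphore

class ThrottledHTTPClient:
    """Wrap AsyncHTTPClient behind a Semaphore so the number of
    waiting requests is directly observable."""

    def __init__(self, max_concurrent=10):
        self._client = AsyncHTTPClient()
        self._semaphore = Semaphore(max_concurrent)
        self.queued = 0  # requests waiting for a slot; export as a metric

    async def fetch(self, request, **kwargs):
        self.queued += 1
        await self._semaphore.acquire()  # wait for a free slot
        self.queued -= 1
        try:
            return await self._client.fetch(request, **kwargs)
        finally:
            self._semaphore.release()
```

With this in place you can sample `client.queued` every few seconds, and swapping the plain semaphore for a `tornado.queues.PriorityQueue` of pending requests would give you the smarter scheduling mentioned above.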
 

What is the best way to do it? Or what is the maximum reasonable `max_clients` value I could use?

How many concurrent connections do the APIs that you are using allow? Divide that by the number of server processes you run. For example, if an API allows 100 concurrent connections and you run 4 processes, a `max_clients` of 25 per process is a reasonable ceiling.

-Ben
 

Best regards,
Vladimir
