QueuedThreadPool is always unbounded in Dropwizard

Olve Hansen

May 14, 2014, 10:49:23 AM5/14/14
to dropwiz...@googlegroups.com
We are trying to configure our DW services so that they return a 503 pushback when reaching a limit, but we are unable to do so. I have tried all the max-threads and accept-queue settings, etc.
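For reference, this is roughly what we have been trying in the 0.7 config (a sketch with illustrative values, not our exact file):

```yaml
server:
  maxThreads: 150
  minThreads: 10
  applicationConnectors:
    - type: http
      port: 8080
      acceptQueueSize: 100
```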

It seems that both DW 0.6.x and 0.7 suffer from this, as
com.yammer.metrics.jetty.InstrumentedQueuedThreadPool (0.6)
com.codahale.metrics.jetty9.InstrumentedQueuedThreadPool (0.7)
do not provide setters for these constructor parameters.

In 0.6 I might be able to set org.eclipse.jetty.util.thread.QueuedThreadPool#setMaxQueued before the pool starts, but in 0.7, which uses Jetty 9, this parameter must be set in the constructor.


This issue actually makes Jetty in DW a bit unsafe, as it is not possible to limit the number of connections Jetty will accept through configuration.

According to http://wiki.eclipse.org/Jetty/Howto/Configure_Connectors, acceptQueueSize actually configures how large the queue for the acceptor threads is, not how many jobs can be queued for the connection threads.

Of course I might be wrong about this, but the values in our tests clearly suggest this is the issue. E.g. I saw 5200 active connections and 150 active threads while having acceptQueueSize set to 100 and maxThreads to 150.

It would be nice if I could configure this value instead of creating filters to handle it.

Best regards, 
Olve

Olve Hansen

May 15, 2014, 9:30:03 AM5/15/14
to dropwiz...@googlegroups.com
A little update: it seems that setting this value causes Jetty to use java.util.concurrent.ArrayBlockingQueue instead of org.eclipse.jetty.util.BlockingArrayQueue.

According to a discussion[1], this is a bad thing, as the Jetty queue is much faster than the java.util.concurrent version.

I'll go back to the drawing board with this. It would be nice to have Jetty enforce this limit instead of adding a filter with a semaphore or something similar.
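For what it's worth, the filter-with-a-semaphore idea, stripped of the servlet plumbing, is just a non-blocking tryAcquire around request handling. A minimal sketch (the RequestGate class and the return codes are illustrative, not Jetty or servlet API):

```java
import java.util.concurrent.Semaphore;

// Bounds the number of in-flight requests; callers that cannot get a
// permit are shed with a 503 instead of queueing unboundedly.
class RequestGate {
    private final Semaphore permits;

    RequestGate(int maxInFlight) {
        this.permits = new Semaphore(maxInFlight);
    }

    /** Returns the HTTP status a filter would send: 200 if handled, 503 if saturated. */
    int handle(Runnable request) {
        if (!permits.tryAcquire()) {
            return 503; // Service Unavailable: shed load immediately
        }
        try {
            request.run();
            return 200;
        } finally {
            permits.release();
        }
    }
}
```

In a real servlet filter the 503 branch would call sendError on the response and skip the rest of the chain; the concurrency logic stays the same.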

--
Olve

Ryan Kennedy

May 23, 2014, 12:13:42 AM5/23/14
to dropwiz...@googlegroups.com
Have you looked at setting server.applicationConnectors.acceptQueueSize (in a DW 0.7.0 application)? Looking at the DW 0.7.0 code, it does use BlockingArrayQueue internally. So I wonder if the default acceptQueueSize (0) leaves the OS accept queue unbounded, meaning it doesn't matter how much you bound your internal queues.
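If that reading is right, acceptQueueSize would be the TCP listen backlog handed to the OS, the same parameter as the backlog argument on a plain Java server socket (a sketch of the concept only, not Dropwizard code; note the OS may round the value up or impose its own cap):

```java
import java.net.ServerSocket;

public class BacklogDemo {
    public static void main(String[] args) throws Exception {
        // The second constructor argument is the TCP listen backlog: how many
        // completed connections the OS will hold before the server accept()s
        // them. This is the layer acceptQueueSize configures, as opposed to
        // the thread pool's internal job queue.
        try (ServerSocket server = new ServerSocket(0, 1)) {
            System.out.println("listening on port " + server.getLocalPort());
        }
    }
}
```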

I haven't actually tried this out yet. I just found myself in a similar area of the code recently and remembered this thread.

Ryan



Olve Hansen

May 23, 2014, 3:50:21 AM5/23/14
to dropwiz...@googlegroups.com
I am currently migrating the service we are load-testing to DW 0.7, so I will investigate this. I did manage to set up a bounded queue for 0.6.2 as well (local build of DW, with the slower queue: https://github.com/olvesh/dropwizard/tree/v0.6.2-max-queued ), but the problem is that when the queue limit is reached, Jetty simply refuses connections. I am hoping to get Jetty to serve a 503 when this happens.

It looks like adding and configuring the LowResourceMonitor (new in 9.x, I think) might help. I did try to set max idle times for threads in Jetty 8 (in DW 0.6.2), but I didn't find it had any effect; there has to be some worker in Jetty watching over these threads and inspecting their idle times, which the LowResourceMonitor provides.

Also, this thread pool is shared by all connectors, so if it is exhausted, the health checks (i.e. all of the admin interface) will fail as well.

It sure isn't easy to configure for high load. I will post updates on my progress.

--
Olve

Ryan Kennedy

May 23, 2014, 11:44:00 AM5/23/14
to dropwiz...@googlegroups.com
I played around with this a bit last night as well. Some of the configuration values are hard to find, and some of them will break your app on startup. Set minThreads = 1, maxThreads = 2, and maxQueuedRequests = 2, and watch it blow up on startup with:

WARN  [2014-05-23 15:40:01,310] org.eclipse.jetty.util.component.AbstractLifeCycle: FAILED org.eclipse.jetty.server.Server@6ac6724c: java.util.concurrent.RejectedExecutionException: org.eclipse.jetty.io.SelectorManager$ManagedSelector@6d8ad46a keys=0 selected=0

because the acceptors are still configured to Runtime.getRuntime().availableProcessors() and there aren't enough slots in the BlockingArrayQueue to hold them all.
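A rule of thumb that would avoid this failure mode: maxThreads must cover the acceptor and selector threads across all connectors, plus at least one worker, so when shrinking the pool, pin the acceptor/selector counts down as well. A sketch (the acceptorThreads/selectorThreads connector fields are assumed here, not verified against 0.7):

```yaml
server:
  minThreads: 8
  maxThreads: 16          # must exceed acceptors + selectors across all connectors
  maxQueuedRequests: 64
  applicationConnectors:
    - type: http
      port: 8080
      acceptorThreads: 1  # assumed field; the default scales with CPU count
      selectorThreads: 2  # assumed field
```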

I'd love to see some folks sink some cycles into documentation/tutorials and/or code changes showing how to run Dropwizard at high concurrency. Even just having some tests to ascertain the upper bounds on concurrent connections/requests would be great. I've long wished we had better built-in support for capacity and utilization measurement and monitoring in Dropwizard. There are some underlying metrics, but they're incomplete and difficult to interpret.

Ryan

Olve Hansen

May 23, 2014, 5:57:15 PM5/23/14
to dropwiz...@googlegroups.com

Yeah, I saw that as well, but didn't chase down the reason why it blew up.

What puzzled me the most was that the max idle time settings didn't seem to have any effect. I had very long request times with my artificial setup, and even with a low max idle time I didn't see any connections being closed, only timeouts.

I will report what I find out using the low resource monitor on DW 0.7.

Ryan Kennedy

May 23, 2014, 6:22:45 PM5/23/14
to dropwiz...@googlegroups.com
I'm pretty sure "max idle time" controls how long threads idle (not handling a request) before they get shut down, shrinking the thread pool. There's a separate timeout that controls how long a request can go without sending output back to the client before that connection gets severed, but I forget what it's called.
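In 0.7 config terms, the two timeouts would look something like this (a sketch; the field names idleThreadTimeout and idleTimeout are assumed from the 0.7 server settings, not verified):

```yaml
server:
  idleThreadTimeout: 1 minute     # how long a pool thread may idle before being reaped
  applicationConnectors:
    - type: http
      port: 8080
      idleTimeout: 30 seconds     # how long a connection may be silent before being severed
```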

Obviously given the number of knobs and dials we could do with a bit of labeling and explanation. Any takers? ;)

Ryan