Connection burst is leading to timeouts


Andy Villiger

Mar 10, 2014, 11:26:35 AM
to hika...@googlegroups.com
I'm using Hikari in Scala together with MySQL, Spray (HTTP server) and Slick. I'm having a strange problem for which I did not find any solution: if I request many connections (db.withSession) in a short time frame with an HTTP benchmarking tool (https://github.com/wg/wrk), all connections time out - it seems Hikari is blocking forever. This only occurs if multiple parallel connections are thrown at Hikari. The application then hangs forever; every request after the HTTP benchmark burst times out until I restart the application itself. However, it doesn't occur if I use the default driver/datasource without HikariCP.

I don't have a clue why this is happening... any ideas?

Andy Villiger

Mar 10, 2014, 11:28:06 AM
to hika...@googlegroups.com
I had this problem with older versions as well as with the new 1.3.2 version.

Brett Wooldridge

Mar 10, 2014, 11:38:21 AM
Can you try with the development branch version?  We fixed an issue with "stampede"
demands on the pool, though at the time we thought it theoretical.

git clone https://github.com/brettwooldridge/HikariCP.git
cd HikariCP
git checkout dev
mvn install

Then add version 1.3.3-SNAPSHOT as your dependency, or use HikariCP-1.3.3-SNAPSHOT.jar
from the target directory.  We'd be interested to hear the results.
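
If you build with sbt rather than Maven (just an assumption, given the Scala stack), the equivalent dependency would look roughly like this in build.sbt, after the local mvn install above:

// build.sbt -- hedged sketch; mvn install publishes the snapshot to the local Maven repository
resolvers += Resolver.mavenLocal
libraryDependencies += "com.zaxxer" % "HikariCP" % "1.3.3-SNAPSHOT"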

Can you capture a thread dump at the time of the issue?

Brett Wooldridge

Mar 10, 2014, 11:54:27 AM
to hika...@googlegroups.com
One more thought... the Play framework, also Scala-based, has similar issues with BoneCP due to Scala's use of thread pools for asynchronous execution. One difference is that a conventional DataSource has no bound on the number of connections that can be created, but a pool will cap connections and block threads once the configured maximum is reached. What happens in the case of Scala is that if, for example, the core thread pool has 4 threads and the pool has a max of 50 connections, then once 50 connections are in use, the next 4 requests for connections will block -- and once the 4 core threads are blocked, everything comes to a halt. None of the existing code holding the initial 50 connections will ever be executed again (because the threads are blocked), so no connections will ever be released to free up the blocked threads. In that case, to use a pool effectively, the maximum pool size must be greater than the maximum number of anticipated concurrent requests. To test that theory, you can try setting the maximum pool size to something very large, like 10,000.
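
To make that concrete, here is a minimal, hypothetical Scala sketch (not the application's actual code; names, sizes and the DataSource are made up) showing how a 4-thread execution context plus a capped, blocking pool can lock up:

object DeadlockSketch {
  import java.util.concurrent.Executors
  import javax.sql.DataSource
  import scala.concurrent.{ExecutionContext, Future}

  // Hypothetical: a pooled DataSource capped at 50 connections, configured elsewhere.
  val dataSource: DataSource = ???

  // The "core" execution context with only 4 worker threads, as in the example above.
  implicit val fourThreads: ExecutionContext =
    ExecutionContext.fromExecutorService(Executors.newFixedThreadPool(4))

  // Acquisition and release run as separate tasks on the same 4-thread context.
  def handleRequest(): Future[Unit] =
    Future(dataSource.getConnection())       // task 1: may block inside the pool
      .map { conn =>                          // task 2: uses and then releases the connection
        try { /* run queries */ () } finally conn.close()
      }

  // Flood with requests: the first 50 acquisitions succeed and enqueue their release tasks
  // behind hundreds of pending acquisition tasks; the next 4 acquisitions then block all
  // 4 worker threads inside getConnection(), so the release tasks can never run, no
  // connection is ever returned, and everything halts -- exactly as described above.
  def main(args: Array[String]): Unit = (1 to 1000).foreach(_ => handleRequest())
}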

Brett Wooldridge

Mar 10, 2014, 8:38:08 PM
to hika...@googlegroups.com
As an aside, we (the HikariCP project) are working to solve this issue for the Play Framework, but you might take a similar approach or suggest it to the Slick guys.  Basically, requests for Connections should return Promises/Futures and should not block.  The actual call to getConnection() on the DataSource should occur in a private thread pool, not the core thread pool.  Requests for Connection promises need to be queued up internally and processed single-file.  We're experimenting with this now.  While it might sound like a bad idea to dispatch Connection requests single-file, the reality is that a request to the pool when a connection is available will return in nanoseconds.  The area we are focusing on is when the pool has capacity (is not at max) but has no available connections.  Creating new connections single-file might introduce latency (but only in this edge case), but then again it may not, since connection setup on the database side may not be effectively parallelized either.

Anyway, "reactive" (non-blocking/event-based) systems like Scala's ExecutionContext concept all face this challenge when dealing with blocking resources.  But I think it is the job of frameworks like Play or Slick to provide the proper abstraction to avoid deadlock/livelock in applications.  We hope to assist Play in this effort.
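
A rough sketch of that idea (illustrative only, not the actual HikariCP or Play code): the blocking getConnection() call is pushed onto a small private executor, so callers only ever receive a Future and the core threads are never parked.

object AsyncAcquire {
  import java.sql.Connection
  import java.util.concurrent.Executors
  import javax.sql.DataSource
  import scala.concurrent.{ExecutionContext, Future}

  // A private single-threaded executor: connection requests queue up here and are
  // processed single-file, without ever blocking the caller's execution context.
  private val acquirer: ExecutionContext =
    ExecutionContext.fromExecutorService(Executors.newSingleThreadExecutor())

  // Callers get a Future immediately; the blocking call runs on the private thread.
  def acquire(ds: DataSource): Future[Connection] =
    Future(ds.getConnection())(acquirer)
}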


Andy Villiger

Mar 17, 2014, 4:41:37 AM
to hika...@googlegroups.com
Thank you, Brett, for all your help and the information you came up with. Unfortunately the jump to version 1.3.3 did not help.

Here is the thread dump of the application at the time of blocking:

2014-03-17 09:34:57
Full thread dump Java HotSpot(TM) 64-Bit Server VM (24.45-b08 mixed mode):

"api-akka.actor.default-dispatcher-32" prio=5 tid=0x00007fbfc343a000 nid=0x740f waiting on condition [0x000000011273b000]
   java.lang.Thread.State: WAITING (parking)
at sun.misc.Unsafe.park(Native Method)
- parking to wait for  <0x00000007806feba0> (a akka.dispatch.ForkJoinExecutorConfigurator$AkkaForkJoinPool)
at scala.concurrent.forkjoin.ForkJoinPool.scan(ForkJoinPool.java:2075)
at scala.concurrent.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
at scala.concurrent.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)

   Locked ownable synchronizers:
- None

"api-akka.actor.default-dispatcher-30" prio=5 tid=0x00007fbfc3199800 nid=0x7c0b waiting on condition [0x0000000112b47000]
   java.lang.Thread.State: WAITING (parking)
at sun.misc.Unsafe.park(Native Method)
- parking to wait for  <0x00000007806feba0> (a akka.dispatch.ForkJoinExecutorConfigurator$AkkaForkJoinPool)
at scala.concurrent.forkjoin.ForkJoinPool.scan(ForkJoinPool.java:2075)
at scala.concurrent.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
at scala.concurrent.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)

   Locked ownable synchronizers:
- None

"RMI TCP Connection(2)-127.0.0.1" daemon prio=5 tid=0x00007fbfc2967000 nid=0x9403 runnable [0x0000000113bfa000]
   java.lang.Thread.State: RUNNABLE
at java.net.SocketInputStream.socketRead0(Native Method)
at java.net.SocketInputStream.read(SocketInputStream.java:152)
at java.net.SocketInputStream.read(SocketInputStream.java:122)
at java.io.BufferedInputStream.fill(BufferedInputStream.java:235)
at java.io.BufferedInputStream.read(BufferedInputStream.java:254)
- locked <0x0000000781a0efd8> (a java.io.BufferedInputStream)
at java.io.FilterInputStream.read(FilterInputStream.java:83)
at sun.rmi.transport.tcp.TCPTransport.handleMessages(TCPTransport.java:538)
at sun.rmi.transport.tcp.TCPTransport$ConnectionHandler.run0(TCPTransport.java:811)
at sun.rmi.transport.tcp.TCPTransport$ConnectionHandler.run(TCPTransport.java:670)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:744)

   Locked ownable synchronizers:
- <0x0000000781a0f228> (a java.util.concurrent.ThreadPoolExecutor$Worker)

"JMX server connection timeout 55" daemon prio=5 tid=0x00007fbfc58ef000 nid=0x9203 in Object.wait() [0x000000011371d000]
   java.lang.Thread.State: TIMED_WAITING (on object monitor)
at java.lang.Object.wait(Native Method)
- waiting on <0x0000000781a10ff8> (a [I)
at com.sun.jmx.remote.internal.ServerCommunicatorAdmin$Timeout.run(ServerCommunicatorAdmin.java:168)
- locked <0x0000000781a10ff8> (a [I)
at java.lang.Thread.run(Thread.java:744)

   Locked ownable synchronizers:
- None

"RMI Scheduler(0)" daemon prio=5 tid=0x00007fbfc296a000 nid=0x7307 waiting on condition [0x000000011361a000]
   java.lang.Thread.State: TIMED_WAITING (parking)
at sun.misc.Unsafe.park(Native Method)
- parking to wait for  <0x0000000781a11018> (a java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject)
at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:226)
at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2082)
at java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1090)
at java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:807)
at java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1068)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:744)

   Locked ownable synchronizers:
- None

"RMI TCP Connection(1)-127.0.0.1" daemon prio=5 tid=0x00007fbfc3936800 nid=0x7007 runnable [0x0000000113462000]
   java.lang.Thread.State: RUNNABLE
at java.net.SocketInputStream.socketRead0(Native Method)
at java.net.SocketInputStream.read(SocketInputStream.java:152)
at java.net.SocketInputStream.read(SocketInputStream.java:122)
at java.io.BufferedInputStream.fill(BufferedInputStream.java:235)
at java.io.BufferedInputStream.read(BufferedInputStream.java:254)
- locked <0x0000000781a11110> (a java.io.BufferedInputStream)
at java.io.FilterInputStream.read(FilterInputStream.java:83)
at sun.rmi.transport.tcp.TCPTransport.handleMessages(TCPTransport.java:538)
at sun.rmi.transport.tcp.TCPTransport$ConnectionHandler.run0(TCPTransport.java:811)
at sun.rmi.transport.tcp.TCPTransport$ConnectionHandler.run(TCPTransport.java:670)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:744)

   Locked ownable synchronizers:
- <0x0000000781a0f398> (a java.util.concurrent.ThreadPoolExecutor$Worker)

"RMI TCP Accept-0" daemon prio=5 tid=0x00007fbfc2964000 nid=0x370b runnable [0x000000011335f000]
   java.lang.Thread.State: RUNNABLE
at java.net.PlainSocketImpl.socketAccept(Native Method)
at java.net.AbstractPlainSocketImpl.accept(AbstractPlainSocketImpl.java:398)
at java.net.ServerSocket.implAccept(ServerSocket.java:530)
at java.net.ServerSocket.accept(ServerSocket.java:498)
at sun.management.jmxremote.LocalRMIServerSocketFactory$1.accept(LocalRMIServerSocketFactory.java:52)
at sun.rmi.transport.tcp.TCPTransport$AcceptLoop.executeAcceptLoop(TCPTransport.java:388)
at sun.rmi.transport.tcp.TCPTransport$AcceptLoop.run(TCPTransport.java:360)
at java.lang.Thread.run(Thread.java:744)

   Locked ownable synchronizers:
- None

"api-akka.actor.default-dispatcher-29" prio=5 tid=0x00007fbfc2d5d800 nid=0x8e13 waiting on condition [0x000000011232f000]
   java.lang.Thread.State: WAITING (parking)
at sun.misc.Unsafe.park(Native Method)
- parking to wait for  <0x00000007806feba0> (a akka.dispatch.ForkJoinExecutorConfigurator$AkkaForkJoinPool)
at scala.concurrent.forkjoin.ForkJoinPool.scan(ForkJoinPool.java:2075)
at scala.concurrent.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
at scala.concurrent.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)

   Locked ownable synchronizers:
- None

"Attach Listener" daemon prio=5 tid=0x00007fbfc2dca000 nid=0x3d0f waiting on condition [0x0000000000000000]
   java.lang.Thread.State: RUNNABLE

   Locked ownable synchronizers:
- None

"ForkJoinPool-3-worker-5" daemon prio=5 tid=0x00007fbfc335f000 nid=0x8407 waiting for monitor entry [0x0000000113159000]
   java.lang.Thread.State: BLOCKED (on object monitor)
at java.io.BufferedInputStream.read(BufferedInputStream.java:253)
- waiting to lock <0x00000007819f8140> (a java.io.BufferedInputStream)
at com.redis.IO$class.readLine(IO.scala:88)
at com.redis.RedisClient.readLine(RedisClient.scala:60)
at com.redis.Reply$class.receive(RedisProtocol.scala:110)
at com.redis.RedisClient.receive(RedisClient.scala:60)
at com.redis.R$class.asBulk(RedisProtocol.scala:121)
at com.redis.RedisClient.asBulk(RedisClient.scala:60)
at com.redis.StringOperations$$anonfun$get$1.apply(StringOperations.scala:15)
at com.redis.StringOperations$$anonfun$get$1.apply(StringOperations.scala:15)
at com.redis.Redis$class.send(RedisClient.scala:21)
at com.redis.RedisClient.send(RedisClient.scala:60)
at com.redis.StringOperations$class.get(StringOperations.scala:15)
at com.redis.RedisClient.get(RedisClient.scala:60)
at com.starmind.api.cache.RedisManager.get(Redis.scala:27)
at com.starmind.api.cache.CacheDistributed$.getCache(CacheDistributed.scala:13)
at com.starmind.api.model.repositories.TokenManager$$anonfun$get$1.apply(TokenRepository.scala:57)
at com.starmind.api.model.repositories.TokenManager$$anonfun$get$1.apply(TokenRepository.scala:57)
at scala.concurrent.impl.Future$PromiseCompletingRunnable.liftedTree1$1(Future.scala:24)
at scala.concurrent.impl.Future$PromiseCompletingRunnable.run(Future.scala:24)
at scala.concurrent.impl.ExecutionContextImpl$$anon$3.exec(ExecutionContextImpl.scala:107)
at scala.concurrent.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
at scala.concurrent.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339)
at scala.concurrent.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
at scala.concurrent.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)

   Locked ownable synchronizers:
- None

"ForkJoinPool-3-worker-1" daemon prio=5 tid=0x00007fbfc3686800 nid=0x680f waiting for monitor entry [0x0000000112f53000]
   java.lang.Thread.State: BLOCKED (on object monitor)
at java.io.BufferedInputStream.read(BufferedInputStream.java:253)
- waiting to lock <0x00000007819f8140> (a java.io.BufferedInputStream)
at com.redis.IO$class.readLine(IO.scala:88)
at com.redis.RedisClient.readLine(RedisClient.scala:60)
at com.redis.Reply$$anonfun$4.applyOrElse(RedisProtocol.scala:69)
at com.redis.Reply$$anonfun$4.applyOrElse(RedisProtocol.scala:63)
at scala.PartialFunction$OrElse.apply(PartialFunction.scala:162)
at com.redis.Reply$class.receive(RedisProtocol.scala:114)
at com.redis.RedisClient.receive(RedisClient.scala:60)
at com.redis.R$class.asBulk(RedisProtocol.scala:121)
at com.redis.RedisClient.asBulk(RedisClient.scala:60)
at com.redis.StringOperations$$anonfun$get$1.apply(StringOperations.scala:15)
at com.redis.StringOperations$$anonfun$get$1.apply(StringOperations.scala:15)
at com.redis.Redis$class.send(RedisClient.scala:21)
at com.redis.RedisClient.send(RedisClient.scala:60)
at com.redis.StringOperations$class.get(StringOperations.scala:15)
at com.redis.RedisClient.get(RedisClient.scala:60)
at com.starmind.api.cache.RedisManager.get(Redis.scala:27)
at com.starmind.api.cache.CacheDistributed$.getCache(CacheDistributed.scala:13)
at com.starmind.api.model.repositories.TokenManager$$anonfun$get$1.apply(TokenRepository.scala:57)
at com.starmind.api.model.repositories.TokenManager$$anonfun$get$1.apply(TokenRepository.scala:57)
at scala.concurrent.impl.Future$PromiseCompletingRunnable.liftedTree1$1(Future.scala:24)
at scala.concurrent.impl.Future$PromiseCompletingRunnable.run(Future.scala:24)
at scala.concurrent.impl.ExecutionContextImpl$$anon$3.exec(ExecutionContextImpl.scala:107)
at scala.concurrent.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
at scala.concurrent.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339)
at scala.concurrent.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
at scala.concurrent.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)

   Locked ownable synchronizers:
- None

"ForkJoinPool-3-worker-3" daemon prio=5 tid=0x00007fbfc5912000 nid=0x8607 waiting for monitor entry [0x0000000113056000]
   java.lang.Thread.State: BLOCKED (on object monitor)
at java.io.BufferedInputStream.read(BufferedInputStream.java:253)
- waiting to lock <0x00000007819f8140> (a java.io.BufferedInputStream)
at com.redis.IO$class.readLine(IO.scala:88)
at com.redis.RedisClient.readLine(RedisClient.scala:60)
at com.redis.Reply$class.receive(RedisProtocol.scala:110)
at com.redis.RedisClient.receive(RedisClient.scala:60)
at com.redis.R$class.asBulk(RedisProtocol.scala:121)
at com.redis.RedisClient.asBulk(RedisClient.scala:60)
at com.redis.StringOperations$$anonfun$get$1.apply(StringOperations.scala:15)
at com.redis.StringOperations$$anonfun$get$1.apply(StringOperations.scala:15)
at com.redis.Redis$class.send(RedisClient.scala:21)
at com.redis.RedisClient.send(RedisClient.scala:60)
at com.redis.StringOperations$class.get(StringOperations.scala:15)
at com.redis.RedisClient.get(RedisClient.scala:60)
at com.starmind.api.cache.RedisManager.get(Redis.scala:27)
at com.starmind.api.cache.CacheDistributed$.getCache(CacheDistributed.scala:13)
at com.starmind.api.model.repositories.TokenManager$$anonfun$get$1.apply(TokenRepository.scala:57)
at com.starmind.api.model.repositories.TokenManager$$anonfun$get$1.apply(TokenRepository.scala:57)
at scala.concurrent.impl.Future$PromiseCompletingRunnable.liftedTree1$1(Future.scala:24)
at scala.concurrent.impl.Future$PromiseCompletingRunnable.run(Future.scala:24)
at scala.concurrent.impl.ExecutionContextImpl$$anon$3.exec(ExecutionContextImpl.scala:107)
at scala.concurrent.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
at scala.concurrent.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339)
at scala.concurrent.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
at scala.concurrent.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)

   Locked ownable synchronizers:
- None

"Hikari Housekeeping Timer" daemon prio=5 tid=0x00007fbfc2b16800 nid=0x8c03 in Object.wait() [0x0000000113af7000]
   java.lang.Thread.State: TIMED_WAITING (on object monitor)
at java.lang.Object.wait(Native Method)
- waiting on <0x00000007813ba2c8> (a java.util.TaskQueue)
at java.util.TimerThread.mainLoop(Timer.java:552)
- locked <0x00000007813ba2c8> (a java.util.TaskQueue)
at java.util.TimerThread.run(Timer.java:505)

   Locked ownable synchronizers:
- None

"MySQL Statement Cancellation Timer" daemon prio=5 tid=0x00007fbfc346a800 nid=0x8a03 in Object.wait() [0x00000001139f4000]
   java.lang.Thread.State: WAITING (on object monitor)
at java.lang.Object.wait(Native Method)
- waiting on <0x00000007812d3018> (a java.util.TaskQueue)
at java.lang.Object.wait(Object.java:503)
at java.util.TimerThread.mainLoop(Timer.java:526)
- locked <0x00000007812d3018> (a java.util.TaskQueue)
at java.util.TimerThread.run(Timer.java:505)

   Locked ownable synchronizers:
- None

"Hikari Housekeeping Timer" daemon prio=5 tid=0x00007fbfc4484000 nid=0x8803 in Object.wait() [0x000000011129c000]
   java.lang.Thread.State: TIMED_WAITING (on object monitor)
at java.lang.Object.wait(Native Method)
- waiting on <0x00000007813c9460> (a java.util.TaskQueue)
at java.util.TimerThread.mainLoop(Timer.java:552)
- locked <0x00000007813c9460> (a java.util.TaskQueue)
at java.util.TimerThread.run(Timer.java:505)

   Locked ownable synchronizers:
- None

"ForkJoinPool-3-worker-7" daemon prio=5 tid=0x00007fbfc31fe800 nid=0x8203 runnable [0x0000000112e50000]
   java.lang.Thread.State: RUNNABLE
at java.net.SocketInputStream.socketRead0(Native Method)
at java.net.SocketInputStream.read(SocketInputStream.java:152)
at java.net.SocketInputStream.read(SocketInputStream.java:122)
at java.io.BufferedInputStream.fill(BufferedInputStream.java:235)
at java.io.BufferedInputStream.read(BufferedInputStream.java:254)
- locked <0x00000007819f8140> (a java.io.BufferedInputStream)
at com.redis.IO$class.readLine(IO.scala:88)
at com.redis.RedisClient.readLine(RedisClient.scala:60)
at com.redis.Reply$class.receive(RedisProtocol.scala:110)
at com.redis.RedisClient.receive(RedisClient.scala:60)
at com.redis.R$class.asBulk(RedisProtocol.scala:121)
at com.redis.RedisClient.asBulk(RedisClient.scala:60)
at com.redis.StringOperations$$anonfun$get$1.apply(StringOperations.scala:15)
at com.redis.StringOperations$$anonfun$get$1.apply(StringOperations.scala:15)
at com.redis.Redis$class.send(RedisClient.scala:21)
at com.redis.RedisClient.send(RedisClient.scala:60)
at com.redis.StringOperations$class.get(StringOperations.scala:15)
at com.redis.RedisClient.get(RedisClient.scala:60)
at com.starmind.api.cache.RedisManager.get(Redis.scala:27)
at com.starmind.api.cache.CacheDistributed$.getCache(CacheDistributed.scala:13)
at com.starmind.api.model.repositories.TokenManager$$anonfun$get$1.apply(TokenRepository.scala:57)
at com.starmind.api.model.repositories.TokenManager$$anonfun$get$1.apply(TokenRepository.scala:57)
at scala.concurrent.impl.Future$PromiseCompletingRunnable.liftedTree1$1(Future.scala:24)
at scala.concurrent.impl.Future$PromiseCompletingRunnable.run(Future.scala:24)
at scala.concurrent.impl.ExecutionContextImpl$$anon$3.exec(ExecutionContextImpl.scala:107)
at scala.concurrent.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
at scala.concurrent.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339)
at scala.concurrent.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
at scala.concurrent.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)

   Locked ownable synchronizers:
- None

"api-akka.io.pinned-dispatcher-23" prio=5 tid=0x00007fbfc366f000 nid=0x8003 runnable [0x0000000112d4d000]
   java.lang.Thread.State: RUNNABLE
at sun.nio.ch.KQueueArrayWrapper.kevent0(Native Method)
at sun.nio.ch.KQueueArrayWrapper.poll(KQueueArrayWrapper.java:200)
at sun.nio.ch.KQueueSelectorImpl.doSelect(KQueueSelectorImpl.java:103)
at sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:87)
- locked <0x00000007811f4ea0> (a sun.nio.ch.Util$2)
- locked <0x00000007811f4e90> (a java.util.Collections$UnmodifiableSet)
- locked <0x000000078116a978> (a sun.nio.ch.KQueueSelectorImpl)
at sun.nio.ch.SelectorImpl.select(SelectorImpl.java:98)
at sun.nio.ch.SelectorImpl.select(SelectorImpl.java:102)
at akka.io.SelectionHandler$ChannelRegistryImpl$$anon$3.tryRun(SelectionHandler.scala:114)
at akka.io.SelectionHandler$ChannelRegistryImpl$Task.run(SelectionHandler.scala:215)
at akka.io.SelectionHandler$ChannelRegistryImpl$$anon$3.run(SelectionHandler.scala:147)
at akka.util.SerializedSuspendableExecutionContext.run$1(SerializedSuspendableExecutionContext.scala:68)
at akka.util.SerializedSuspendableExecutionContext.run(SerializedSuspendableExecutionContext.scala:72)
at akka.dispatch.TaskInvocation.run(AbstractDispatcher.scala:41)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:744)

   Locked ownable synchronizers:
- <0x000000078116a670> (a java.util.concurrent.ThreadPoolExecutor$Worker)

"Hashed wheel timer #1" prio=5 tid=0x00007fbfc4372000 nid=0x7e03 waiting on condition [0x0000000112c4a000]
   java.lang.Thread.State: TIMED_WAITING (sleeping)
at java.lang.Thread.sleep(Native Method)
at org.jboss.netty.util.HashedWheelTimer$Worker.waitForNextTick(HashedWheelTimer.java:483)
at org.jboss.netty.util.HashedWheelTimer$Worker.run(HashedWheelTimer.java:392)
at org.jboss.netty.util.ThreadRenamingRunnable.run(ThreadRenamingRunnable.java:108)
at java.lang.Thread.run(Thread.java:744)

   Locked ownable synchronizers:
- None

"api-akka.actor.default-dispatcher-21" prio=5 tid=0x00007fbfc4357800 nid=0x7a03 waiting on condition [0x0000000112a44000]
   java.lang.Thread.State: WAITING (parking)
at sun.misc.Unsafe.park(Native Method)
- parking to wait for  <0x00000007806feba0> (a akka.dispatch.ForkJoinExecutorConfigurator$AkkaForkJoinPool)
at scala.concurrent.forkjoin.ForkJoinPool.scan(ForkJoinPool.java:2075)
at scala.concurrent.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
at scala.concurrent.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)

   Locked ownable synchronizers:
- None

"api-akka.actor.default-dispatcher-20" prio=5 tid=0x00007fbfc4356800 nid=0x7803 waiting on condition [0x0000000112941000]
   java.lang.Thread.State: TIMED_WAITING (parking)
at sun.misc.Unsafe.park(Native Method)
- parking to wait for  <0x00000007806feba0> (a akka.dispatch.ForkJoinExecutorConfigurator$AkkaForkJoinPool)
at scala.concurrent.forkjoin.ForkJoinPool.idleAwaitWork(ForkJoinPool.java:2135)
at scala.concurrent.forkjoin.ForkJoinPool.scan(ForkJoinPool.java:2067)
at scala.concurrent.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
at scala.concurrent.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)

   Locked ownable synchronizers:
- None

"api-akka.actor.default-dispatcher-19" prio=5 tid=0x00007fbfc3af0000 nid=0x7603 waiting on condition [0x000000011283e000]
   java.lang.Thread.State: WAITING (parking)
at sun.misc.Unsafe.park(Native Method)
- parking to wait for  <0x00000007806feba0> (a akka.dispatch.ForkJoinExecutorConfigurator$AkkaForkJoinPool)
at scala.concurrent.forkjoin.ForkJoinPool.scan(ForkJoinPool.java:2075)
at scala.concurrent.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
at scala.concurrent.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)

   Locked ownable synchronizers:
- None

"DestroyJavaVM" prio=5 tid=0x00007fbfc2a28000 nid=0x1b03 waiting on condition [0x0000000000000000]
   java.lang.Thread.State: RUNNABLE

   Locked ownable synchronizers:
- None

"api-akka.actor.default-dispatcher-17" prio=5 tid=0x00007fbfc432a800 nid=0x6e03 waiting on condition [0x0000000112638000]
   java.lang.Thread.State: WAITING (parking)
at sun.misc.Unsafe.park(Native Method)
- parking to wait for  <0x00000007806feba0> (a akka.dispatch.ForkJoinExecutorConfigurator$AkkaForkJoinPool)
at scala.concurrent.forkjoin.ForkJoinPool.scan(ForkJoinPool.java:2075)
at scala.concurrent.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
at scala.concurrent.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)

   Locked ownable synchronizers:
- None

"api-akka.actor.default-dispatcher-16" prio=5 tid=0x00007fbfc4332000 nid=0x6c03 waiting on condition [0x0000000112535000]
   java.lang.Thread.State: WAITING (parking)
at sun.misc.Unsafe.park(Native Method)
- parking to wait for  <0x00000007806feba0> (a akka.dispatch.ForkJoinExecutorConfigurator$AkkaForkJoinPool)
at scala.concurrent.forkjoin.ForkJoinPool.scan(ForkJoinPool.java:2075)
at scala.concurrent.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
at scala.concurrent.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)

   Locked ownable synchronizers:
- None

"api-akka.actor.default-dispatcher-15" prio=5 tid=0x00007fbfc4331000 nid=0x6a03 waiting on condition [0x0000000112432000]
   java.lang.Thread.State: WAITING (parking)
at sun.misc.Unsafe.park(Native Method)
- parking to wait for  <0x00000007806feba0> (a akka.dispatch.ForkJoinExecutorConfigurator$AkkaForkJoinPool)
at scala.concurrent.forkjoin.ForkJoinPool.scan(ForkJoinPool.java:2075)
at scala.concurrent.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
at scala.concurrent.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)

   Locked ownable synchronizers:
- None

"api-akka.actor.default-dispatcher-13" prio=5 tid=0x00007fbfc3a6b000 nid=0x6603 waiting on condition [0x000000011222c000]
   java.lang.Thread.State: WAITING (parking)
at sun.misc.Unsafe.park(Native Method)
- parking to wait for  <0x00000007806feba0> (a akka.dispatch.ForkJoinExecutorConfigurator$AkkaForkJoinPool)
at scala.concurrent.forkjoin.ForkJoinPool.scan(ForkJoinPool.java:2075)
at scala.concurrent.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
at scala.concurrent.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)

   Locked ownable synchronizers:
- None

"api-akka.remote.default-remote-dispatcher-12" prio=5 tid=0x00007fbfc34e2800 nid=0x6403 waiting on condition [0x0000000112129000]
   java.lang.Thread.State: WAITING (parking)
at sun.misc.Unsafe.park(Native Method)
- parking to wait for  <0x000000078086c6c8> (a akka.dispatch.ForkJoinExecutorConfigurator$AkkaForkJoinPool)
at scala.concurrent.forkjoin.ForkJoinPool.scan(ForkJoinPool.java:2075)
at scala.concurrent.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
at scala.concurrent.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)

   Locked ownable synchronizers:
- None

"New I/O server boss #6" prio=5 tid=0x00007fbfc2a40000 nid=0x6203 runnable [0x0000000112026000]
   java.lang.Thread.State: RUNNABLE
at sun.nio.ch.KQueueArrayWrapper.kevent0(Native Method)
at sun.nio.ch.KQueueArrayWrapper.poll(KQueueArrayWrapper.java:200)
at sun.nio.ch.KQueueSelectorImpl.doSelect(KQueueSelectorImpl.java:103)
at sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:87)
- locked <0x00000007808830f0> (a sun.nio.ch.Util$2)
- locked <0x00000007808830e0> (a java.util.Collections$UnmodifiableSet)
- locked <0x0000000780882d18> (a sun.nio.ch.KQueueSelectorImpl)
at sun.nio.ch.SelectorImpl.select(SelectorImpl.java:98)
at sun.nio.ch.SelectorImpl.select(SelectorImpl.java:102)
at org.jboss.netty.channel.socket.nio.NioServerBoss.select(NioServerBoss.java:163)
at org.jboss.netty.channel.socket.nio.AbstractNioSelector.run(AbstractNioSelector.java:212)
at org.jboss.netty.channel.socket.nio.NioServerBoss.run(NioServerBoss.java:42)
at org.jboss.netty.util.ThreadRenamingRunnable.run(ThreadRenamingRunnable.java:108)
at org.jboss.netty.util.internal.DeadLockProofWorker$1.run(DeadLockProofWorker.java:42)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:744)

   Locked ownable synchronizers:
- <0x0000000780882ae0> (a java.util.concurrent.ThreadPoolExecutor$Worker)

"New I/O worker #5" prio=5 tid=0x00007fbfc35c1000 nid=0x6003 runnable [0x0000000111f23000]
   java.lang.Thread.State: RUNNABLE
at sun.nio.ch.KQueueArrayWrapper.kevent0(Native Method)
at sun.nio.ch.KQueueArrayWrapper.poll(KQueueArrayWrapper.java:200)
at sun.nio.ch.KQueueSelectorImpl.doSelect(KQueueSelectorImpl.java:103)
at sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:87)
- locked <0x0000000780882538> (a sun.nio.ch.Util$2)
- locked <0x0000000780882528> (a java.util.Collections$UnmodifiableSet)
- locked <0x0000000780882408> (a sun.nio.ch.KQueueSelectorImpl)
at sun.nio.ch.SelectorImpl.select(SelectorImpl.java:98)
at org.jboss.netty.channel.socket.nio.SelectorUtil.select(SelectorUtil.java:68)
at org.jboss.netty.channel.socket.nio.AbstractNioSelector.select(AbstractNioSelector.java:415)
at org.jboss.netty.channel.socket.nio.AbstractNioSelector.run(AbstractNioSelector.java:212)
at org.jboss.netty.channel.socket.nio.AbstractNioWorker.run(AbstractNioWorker.java:89)
at org.jboss.netty.channel.socket.nio.NioWorker.run(NioWorker.java:178)
at org.jboss.netty.util.ThreadRenamingRunnable.run(ThreadRenamingRunnable.java:108)
at org.jboss.netty.util.internal.DeadLockProofWorker$1.run(DeadLockProofWorker.java:42)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:744)

   Locked ownable synchronizers:
- <0x00000007808801c0> (a java.util.concurrent.ThreadPoolExecutor$Worker)

"New I/O worker #4" prio=5 tid=0x00007fbfc34e7800 nid=0x5e03 runnable [0x0000000111e20000]
   java.lang.Thread.State: RUNNABLE
at sun.nio.ch.KQueueArrayWrapper.kevent0(Native Method)
at sun.nio.ch.KQueueArrayWrapper.poll(KQueueArrayWrapper.java:200)
at sun.nio.ch.KQueueSelectorImpl.doSelect(KQueueSelectorImpl.java:103)
at sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:87)
- locked <0x00000007808800d8> (a sun.nio.ch.Util$2)
- locked <0x00000007808800c8> (a java.util.Collections$UnmodifiableSet)
- locked <0x000000078087ffa8> (a sun.nio.ch.KQueueSelectorImpl)
at sun.nio.ch.SelectorImpl.select(SelectorImpl.java:98)
at org.jboss.netty.channel.socket.nio.SelectorUtil.select(SelectorUtil.java:68)
at org.jboss.netty.channel.socket.nio.AbstractNioSelector.select(AbstractNioSelector.java:415)
at org.jboss.netty.channel.socket.nio.AbstractNioSelector.run(AbstractNioSelector.java:212)
at org.jboss.netty.channel.socket.nio.AbstractNioWorker.run(AbstractNioWorker.java:89)
at org.jboss.netty.channel.socket.nio.NioWorker.run(NioWorker.java:178)
at org.jboss.netty.util.ThreadRenamingRunnable.run(ThreadRenamingRunnable.java:108)
at org.jboss.netty.util.internal.DeadLockProofWorker$1.run(DeadLockProofWorker.java:42)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:744)

   Locked ownable synchronizers:
- <0x000000078087f9f8> (a java.util.concurrent.ThreadPoolExecutor$Worker)

"New I/O boss #3" prio=5 tid=0x00007fbfc35cb800 nid=0x5c03 runnable [0x0000000111d1d000]
   java.lang.Thread.State: RUNNABLE
at sun.nio.ch.KQueueArrayWrapper.kevent0(Native Method)
at sun.nio.ch.KQueueArrayWrapper.poll(KQueueArrayWrapper.java:200)
at sun.nio.ch.KQueueSelectorImpl.doSelect(KQueueSelectorImpl.java:103)
at sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:87)
- locked <0x0000000780899140> (a sun.nio.ch.Util$2)
- locked <0x0000000780899130> (a java.util.Collections$UnmodifiableSet)
- locked <0x0000000780898ef0> (a sun.nio.ch.KQueueSelectorImpl)
at sun.nio.ch.SelectorImpl.select(SelectorImpl.java:98)
at org.jboss.netty.channel.socket.nio.SelectorUtil.select(SelectorUtil.java:68)
at org.jboss.netty.channel.socket.nio.AbstractNioSelector.select(AbstractNioSelector.java:415)
at org.jboss.netty.channel.socket.nio.AbstractNioSelector.run(AbstractNioSelector.java:212)
at org.jboss.netty.channel.socket.nio.NioClientBoss.run(NioClientBoss.java:42)
at org.jboss.netty.util.ThreadRenamingRunnable.run(ThreadRenamingRunnable.java:108)
at org.jboss.netty.util.internal.DeadLockProofWorker$1.run(DeadLockProofWorker.java:42)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:744)

   Locked ownable synchronizers:
- <0x0000000780898b38> (a java.util.concurrent.ThreadPoolExecutor$Worker)

"New I/O worker #2" prio=5 tid=0x00007fbfc2a2a800 nid=0x5a03 runnable [0x0000000111c1a000]
   java.lang.Thread.State: RUNNABLE
at sun.nio.ch.KQueueArrayWrapper.kevent0(Native Method)
at sun.nio.ch.KQueueArrayWrapper.poll(KQueueArrayWrapper.java:200)
at sun.nio.ch.KQueueSelectorImpl.doSelect(KQueueSelectorImpl.java:103)
at sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:87)
- locked <0x000000078083db50> (a sun.nio.ch.Util$2)
- locked <0x000000078083db60> (a java.util.Collections$UnmodifiableSet)
- locked <0x000000078083db00> (a sun.nio.ch.KQueueSelectorImpl)
at sun.nio.ch.SelectorImpl.select(SelectorImpl.java:98)
at org.jboss.netty.channel.socket.nio.SelectorUtil.select(SelectorUtil.java:68)
at org.jboss.netty.channel.socket.nio.AbstractNioSelector.select(AbstractNioSelector.java:415)
at org.jboss.netty.channel.socket.nio.AbstractNioSelector.run(AbstractNioSelector.java:212)
at org.jboss.netty.channel.socket.nio.AbstractNioWorker.run(AbstractNioWorker.java:89)
at org.jboss.netty.channel.socket.nio.NioWorker.run(NioWorker.java:178)
at org.jboss.netty.util.ThreadRenamingRunnable.run(ThreadRenamingRunnable.java:108)
at org.jboss.netty.util.internal.DeadLockProofWorker$1.run(DeadLockProofWorker.java:42)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:744)

   Locked ownable synchronizers:
- <0x000000078083dc58> (a java.util.concurrent.ThreadPoolExecutor$Worker)

"New I/O worker #1" prio=5 tid=0x00007fbfc2a36000 nid=0x5803 runnable [0x0000000111b17000]
   java.lang.Thread.State: RUNNABLE
at sun.nio.ch.KQueueArrayWrapper.kevent0(Native Method)
at sun.nio.ch.KQueueArrayWrapper.poll(KQueueArrayWrapper.java:200)
at sun.nio.ch.KQueueSelectorImpl.doSelect(KQueueSelectorImpl.java:103)
at sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:87)
- locked <0x000000078083e108> (a sun.nio.ch.Util$2)
- locked <0x000000078083e0f8> (a java.util.Collections$UnmodifiableSet)
- locked <0x000000078083dfd8> (a sun.nio.ch.KQueueSelectorImpl)
at sun.nio.ch.SelectorImpl.select(SelectorImpl.java:98)
at org.jboss.netty.channel.socket.nio.SelectorUtil.select(SelectorUtil.java:68)
at org.jboss.netty.channel.socket.nio.AbstractNioSelector.select(AbstractNioSelector.java:415)
at org.jboss.netty.channel.socket.nio.AbstractNioSelector.run(AbstractNioSelector.java:212)
at org.jboss.netty.channel.socket.nio.AbstractNioWorker.run(AbstractNioWorker.java:89)
at org.jboss.netty.channel.socket.nio.NioWorker.run(NioWorker.java:178)
at org.jboss.netty.util.ThreadRenamingRunnable.run(ThreadRenamingRunnable.java:108)
at org.jboss.netty.util.internal.DeadLockProofWorker$1.run(DeadLockProofWorker.java:42)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:744)

   Locked ownable synchronizers:
- <0x000000078083dda8> (a java.util.concurrent.ThreadPoolExecutor$Worker)

"api-akka.remote.default-remote-dispatcher-4" prio=5 tid=0x00007fbfc2a1a800 nid=0x5603 waiting on condition [0x0000000111a14000]
   java.lang.Thread.State: TIMED_WAITING (parking)
at sun.misc.Unsafe.park(Native Method)
- parking to wait for  <0x000000078086c6c8> (a akka.dispatch.ForkJoinExecutorConfigurator$AkkaForkJoinPool)
at scala.concurrent.forkjoin.ForkJoinPool.idleAwaitWork(ForkJoinPool.java:2135)
at scala.concurrent.forkjoin.ForkJoinPool.scan(ForkJoinPool.java:2067)
at scala.concurrent.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
at scala.concurrent.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)

   Locked ownable synchronizers:
- None

"api-akka.actor.default-dispatcher-3" prio=5 tid=0x00007fbfc429f000 nid=0x5403 waiting on condition [0x0000000111911000]
   java.lang.Thread.State: WAITING (parking)
at sun.misc.Unsafe.park(Native Method)
- parking to wait for  <0x00000007806feba0> (a akka.dispatch.ForkJoinExecutorConfigurator$AkkaForkJoinPool)
at scala.concurrent.forkjoin.ForkJoinPool.scan(ForkJoinPool.java:2075)
at scala.concurrent.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
at scala.concurrent.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)

   Locked ownable synchronizers:
- None

"api-akka.actor.default-dispatcher-2" prio=5 tid=0x00007fbfc352c800 nid=0x5203 waiting on condition [0x000000011180e000]
   java.lang.Thread.State: WAITING (parking)
at sun.misc.Unsafe.park(Native Method)
- parking to wait for  <0x00000007806feba0> (a akka.dispatch.ForkJoinExecutorConfigurator$AkkaForkJoinPool)
at scala.concurrent.forkjoin.ForkJoinPool.scan(ForkJoinPool.java:2075)
at scala.concurrent.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
at scala.concurrent.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)

   Locked ownable synchronizers:
- None

"api-scheduler-1" prio=5 tid=0x00007fbfc32fe800 nid=0x5003 sleeping[0x000000011170b000]
   java.lang.Thread.State: TIMED_WAITING (sleeping)
at java.lang.Thread.sleep(Native Method)
at akka.actor.LightArrayRevolverScheduler.waitNanos(Scheduler.scala:226)
at akka.actor.LightArrayRevolverScheduler$$anon$8.nextTick(Scheduler.scala:405)
at akka.actor.LightArrayRevolverScheduler$$anon$8.run(Scheduler.scala:375)
at java.lang.Thread.run(Thread.java:744)

   Locked ownable synchronizers:
- None

"Service Thread" daemon prio=5 tid=0x00007fbfc4003800 nid=0x4c03 runnable [0x0000000000000000]
   java.lang.Thread.State: RUNNABLE

   Locked ownable synchronizers:
- None

"C2 CompilerThread1" daemon prio=5 tid=0x00007fbfc280b000 nid=0x4a03 waiting on condition [0x0000000000000000]
   java.lang.Thread.State: RUNNABLE

   Locked ownable synchronizers:
- None

"C2 CompilerThread0" daemon prio=5 tid=0x00007fbfc302f800 nid=0x4803 waiting on condition [0x0000000000000000]
   java.lang.Thread.State: RUNNABLE

   Locked ownable synchronizers:
- None

"Signal Dispatcher" daemon prio=5 tid=0x00007fbfc3016800 nid=0x4603 runnable [0x0000000000000000]
   java.lang.Thread.State: RUNNABLE

   Locked ownable synchronizers:
- None

"Finalizer" daemon prio=5 tid=0x00007fbfc3850000 nid=0x3303 in Object.wait() [0x00000001104d0000]
   java.lang.Thread.State: WAITING (on object monitor)
at java.lang.Object.wait(Native Method)
- waiting on <0x00000007810cf7a0> (a java.lang.ref.ReferenceQueue$Lock)
at java.lang.ref.ReferenceQueue.remove(ReferenceQueue.java:135)
- locked <0x00000007810cf7a0> (a java.lang.ref.ReferenceQueue$Lock)
at java.lang.ref.ReferenceQueue.remove(ReferenceQueue.java:151)
at java.lang.ref.Finalizer$FinalizerThread.run(Finalizer.java:189)

   Locked ownable synchronizers:
- None

"Reference Handler" daemon prio=5 tid=0x00007fbfc384d800 nid=0x3103 in Object.wait() [0x00000001103cd000]
   java.lang.Thread.State: WAITING (on object monitor)
at java.lang.Object.wait(Native Method)
- waiting on <0x00000007810cf448> (a java.lang.ref.Reference$Lock)
at java.lang.Object.wait(Object.java:503)
at java.lang.ref.Reference$ReferenceHandler.run(Reference.java:133)
- locked <0x00000007810cf448> (a java.lang.ref.Reference$Lock)

   Locked ownable synchronizers:
- None

"VM Thread" prio=5 tid=0x00007fbfc384d000 nid=0x2f03 runnable 

"GC task thread#0 (ParallelGC)" prio=5 tid=0x00007fbfc380f000 nid=0x2703 runnable 

"GC task thread#1 (ParallelGC)" prio=5 tid=0x00007fbfc3810000 nid=0x2903 runnable 

"GC task thread#2 (ParallelGC)" prio=5 tid=0x00007fbfc3810800 nid=0x2b03 runnable 

"GC task thread#3 (ParallelGC)" prio=5 tid=0x00007fbfc3811000 nid=0x2d03 runnable 

"VM Periodic Task Thread" prio=5 tid=0x00007fbfc4025000 nid=0x4e03 waiting on condition 

JNI global references: 251

Brett Wooldridge

Mar 17, 2014, 6:10:57 AM
Did you read the previous post about the pool size?  In a reactive framework like Scala's, the number of connections in the pool must be greater than the maximum number of simultaneous requests you expect to handle.  This is true regardless of the pool -- the same would hold for BoneCP or C3P0.  If you throw 1000 requests at Scala, you had better have maximumPoolSize set to 1000, or Scala will lock up solid.
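
For reference, setting that programmatically might look like the sketch below (setter names per the HikariCP version in use; the MySQL data source class is just an example, and the other properties are omitted):

import com.zaxxer.hikari.{HikariConfig, HikariDataSource}

val config = new HikariConfig()
config.setDataSourceClassName("com.mysql.jdbc.jdbc2.optional.MysqlDataSource")  // example only
config.setMaximumPoolSize(1000)  // must cover the peak number of simultaneous requests
val pooledDataSource = new HikariDataSource(config)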

You mentioned you were using a benchmarking tool, but you did not mention how many simultaneous requests you are sending.

Andy Villiger

Mar 17, 2014, 6:17:27 AM
to hika...@googlegroups.com
I am using wrk (https://github.com/wg/wrk) with 10 connections and 10 threads. So there are 10 concurrent connections.

I already set maximumPoolSize to 10,000 – and it did not help. I see a short spike in CPU usage in Java VisualVM, and after the spike the CPU usage drops to zero, with wrk knocking on closed doors. If I set wrk to use only one thread, everything works well.

Brett Wooldridge

Mar 17, 2014, 6:49:41 AM
to hika...@googlegroups.com
I'm on the train now, but I'll try to set up a similar environment to reproduce the issue when I get to my PC. Is there any key configuration information I should be aware of?

Andy Villiger

Mar 17, 2014, 7:05:17 AM
to hika...@googlegroups.com
It's built like this:

Spray handles the connection in the first place. The incoming request is then routed by an Akka RoundRobinPool to one of a few Spray HttpServiceActors (which are basically Akka actors). The DB connections are requested from HikariCP inside a Future, so the main application is not blocked in the meantime (using Scala's global execution context). The connection is acquired with db.withSession(...), where db is the object created by JdbcDatabase.forDataSource(new HikariDataSource(...)). My Hikari config is basically the default config, except for the newly introduced maximumPoolSize, which is set to 10,000.
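
For reference, a minimal sketch of roughly that wiring (not the real code: the query is a placeholder, the standard Slick 2.x JdbcBackend.Database is assumed for the JdbcDatabase object mentioned above, and the HikariConfig's data source class, user and password are omitted):

import com.zaxxer.hikari.{HikariConfig, HikariDataSource}
import scala.concurrent.ExecutionContext.Implicits.global
import scala.concurrent.Future
import scala.slick.jdbc.JdbcBackend.Database

// Defaults plus the large maximumPoolSize described above.
val hikariConfig = new HikariConfig()
hikariConfig.setMaximumPoolSize(10000)
val db = Database.forDataSource(new HikariDataSource(hikariConfig))

// Inside a Spray route / HttpServiceActor: the blocking withSession call is wrapped in a
// Future resolved on the global execution context.
def countUsers(): Future[Int] = Future {
  db.withSession { implicit session =>
    // ... a Slick query would run here ...
    0
  }
}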

Brett Wooldridge

Mar 17, 2014, 7:10:23 AM
Have you tried with the C3P0 or Tomcat pools?  It might be a good quick check while I set up the test environment.  I assume something is releasing the connections?  Even Futures can block, because they are served by the global execution pool, which by default has only as many threads as there are cores (often 4).  What is needed, and what we are creating for Play, is a dedicated execution pool for connection acquisition that does not block execution by the global pool.
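
A sketch of that idea as applied on the application side (assumed names and pool size, not the Play integration itself): give the blocking JDBC work its own execution context, so the global pool's threads are never parked in getConnection() or withSession.

import java.util.concurrent.Executors
import scala.concurrent.{ExecutionContext, Future}

// Dedicated executor for blocking JDBC work; size it around the connection pool's maximum
// so acquisition can block here without starving the global execution context.
val dbContext: ExecutionContext =
  ExecutionContext.fromExecutorService(Executors.newFixedThreadPool(20))

def onDb[A](work: => A): Future[A] = Future(work)(dbContext)

// Usage: onDb { db.withSession { implicit s => /* query */ } }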

-Brett

Andy Villiger

Mar 17, 2014, 7:38:42 AM
to hika...@googlegroups.com
Yes, we were using c3p0 in the first place, but switched to HikariCP for exactly this reason. Obviously it didn't help.

I'm right now looking into an issue with the Redis client – who knows, maybe I was searching in the wrong place. I'll come back to you after I've resolved the issue with Redis. Just wanted to mention that before you put a lot of work into setting up something like my setup.

Andy Villiger

Mar 17, 2014, 9:11:28 AM
to hika...@googlegroups.com
I have indeed fixed the issue. I was able to narrow it down to Redis – it had nothing to do with Hikari. Sorry, Brett, for all the hassle I've caused you.

Brett Wooldridge

Mar 17, 2014, 9:20:31 AM
to hika...@googlegroups.com
Here is the basic rule of thumb; follow it and you are likely to avoid most Scala deadlocks:

"Anything that 1) can block, and 2) has a resource limit, should execute outside of the global execution context."

Blocking code is OK -- such as file I/O -- as long as it doesn't have a resource limit.  By which I mean, it cannot get into a condition whereby it blocks indefinitely.

That includes HikariCP or any other JDBC pool.  Redis has non-blocking clients available, but JDBC has no definition for asynchronous behavior.  Even asynchronously-backed JDBC drivers like the new pgjdbc-ng for PostgreSQL are non-blocking underneath but still expose a blocking facade to the user -- so the effect is the same.

I would verify in the debugger that connection.close() is getting called on every request.  If you did set HikariCP to 10k connections, and you are limiting the load-testing tool to 10 threads x 10 connections, then either something else is binding up the global execution context, or connections are not being freed.

However, 10k connections is no long-term solution.  I would definitely look at how to get those futures resolved in a different execution context.

Brett Wooldridge

Mar 17, 2014, 9:25:22 AM
to hika...@googlegroups.com
No problem, glad to hear the issue is resolved.  Though remember, as noted in my windy post, executing against a blocking pool in the global execution context is a deadlock waiting in the wings.  You really don't want to have to set the connection pool so high.  But if you're resolving those futures in the global execution context, as soon as concurrent requests tip past the pool max, you've got a deadlock -- a situation easily avoided by resolving those futures in a different context, so the global context can continue to retire requests and free up connections to feed back into the pool.

-Brett


Andy Villiger

Mar 17, 2014, 9:35:36 AM
to hika...@googlegroups.com
You are right. But it seems to work even in the global context. For testing purposes I've set the maximumPoolSize to 1 and configured wrk to use 100 threads running 200 simultaneous connections – it's five times slower, but still reasonably fast.

Brett Wooldridge

Mar 17, 2014, 9:45:57 AM
to hika...@googlegroups.com
I'm both shocked and pleased at the same time.  :-)

Brett Wooldridge

Mar 17, 2014, 9:14:34 PM
Well, I slept on it overnight and somehow I'm still not convinced.  With a connection pool of 1, and using the global execution context, it should still be possible to deadlock.  This is something you definitely want to either prove is not possible or verify that it is possible.  It is going to be very timing-sensitive.

My suggestion: set the pool maximum to 1, run the server on a box with at least 2 cores (4 would be better), and make sure to run the load generation on a separate box from the server.  If the load can run for, say, 30 minutes without deadlocking, I'll be convinced.

-Brett
