Dear All,
I need some help interpreting stack traces that I see when my system is under load. On Peter Veentjer's advice I am using an EntryProcessor, and I am quite happy with its performance. I am now load testing my application and trying to explain some of the stack traces I am seeing. In particular, I would like to know how to speed this stuff up. :-)
In the stack dumps taken during the load test I find roughly equal numbers of the two trace forms shown below. Some lines are elided to keep the traces short and readable.
stack trace form 1:
"http-bio-80-exec-915" daemon prio=10 tid=0x0000000002542000 nid=0x45c1 waiting on condition [0x00007fc301999000]
java.lang.Thread.State: TIMED_WAITING (parking)
at sun.misc.Unsafe.park(Native Method)
- parking to wait for <0x00000000f9222898> (a java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject)
at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:226)
at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2082)
at java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467)
at com.hazelcast.spi.impl.InvocationImpl$InvocationFuture.waitForResponse(InvocationImpl.java:326)
at com.hazelcast.spi.impl.InvocationImpl$InvocationFuture.get(InvocationImpl.java:294)
at com.hazelcast.spi.impl.InvocationImpl$InvocationFuture.get(InvocationImpl.java:286)
at com.hazelcast.map.proxy.MapProxySupport.executeOnKeyInternal(MapProxySupport.java:592)
at com.hazelcast.map.proxy.MapProxyImpl.executeOnKeyInternal(MapProxyImpl.java:44)
at com.hazelcast.map.proxy.MapProxyImpl.executeOnKey(MapProxyImpl.java:485)
at org.kjkoster.foo.entities.SeatAllocator.buySeats(SeatAllocator.java:241)
at org.kjkoster.foo.api.HazelcastAPI.buySeats(HazelcastAPI.java:188)
at org.kjkoster.foo.servlets.TicketServlet.doPost(TicketServlet.java:97)
in tomcat...
- locked <0x00000000ff7634e8> (a org.apache.tomcat.util.net.SocketWrapper)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
in java...
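If I read trace form 1 correctly, the HTTP worker thread is parked on a blocking-queue poll until the remote node delivers the operation's response. A simplified, self-contained model of that pattern, using only java.util.concurrent (class and method names here are my own, not Hazelcast's actual code):

```java
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.TimeUnit;

// Rough model of the waitForResponse() pattern in trace form 1: the calling
// thread parks on a queue poll until an I/O thread posts the remote response.
public class ResponseWait {
    private final LinkedBlockingQueue<Object> responses = new LinkedBlockingQueue<>();

    // Called by the I/O thread when the remote node answers.
    public void offerResponse(Object response) {
        responses.offer(response);
    }

    // Called by the HTTP worker thread; this poll() is the frame that shows
    // up as LinkedBlockingQueue.poll in the stack dump.
    public Object waitForResponse(long timeout, TimeUnit unit) throws InterruptedException {
        return responses.poll(timeout, unit);
    }

    public static void main(String[] args) throws InterruptedException {
        ResponseWait w = new ResponseWait();
        new Thread(() -> w.offerResponse("seat-42")).start();
        System.out.println(w.waitForResponse(5, TimeUnit.SECONDS)); // prints seat-42
    }
}
```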
stack trace form 2:
"http-bio-80-exec-896" daemon prio=10 tid=0x0000000001b1f000 nid=0x45a3 waiting on condition [0x00007fc302eae000]
java.lang.Thread.State: TIMED_WAITING (parking)
at sun.misc.Unsafe.park(Native Method)
- parking to wait for <0x00000000f8551398> (a java.util.concurrent.Semaphore$NonfairSync)
at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:226)
at java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedNanos(AbstractQueuedSynchronizer.java:1033)
at java.util.concurrent.locks.AbstractQueuedSynchronizer.tryAcquireSharedNanos(AbstractQueuedSynchronizer.java:1326)
at java.util.concurrent.Semaphore.tryAcquire(Semaphore.java:588)
at com.hazelcast.spi.impl.OperationServiceImpl.waitForBackups(OperationServiceImpl.java:636)
at com.hazelcast.spi.impl.InvocationImpl$InvocationFuture.waitForBackupsAndGetResponse(InvocationImpl.java:393)
at com.hazelcast.spi.impl.InvocationImpl$InvocationFuture.get(InvocationImpl.java:298)
at com.hazelcast.spi.impl.InvocationImpl$InvocationFuture.get(InvocationImpl.java:286)
at com.hazelcast.map.proxy.MapProxySupport.executeOnKeyInternal(MapProxySupport.java:592)
at com.hazelcast.map.proxy.MapProxyImpl.executeOnKeyInternal(MapProxyImpl.java:44)
at com.hazelcast.map.proxy.MapProxyImpl.executeOnKey(MapProxyImpl.java:485)
at org.kjkoster.foo.entities.SeatAllocator.buySeats(SeatAllocator.java:241)
at org.kjkoster.foo.api.HazelcastAPI.buySeats(HazelcastAPI.java:188)
at org.kjkoster.foo.servlets.TicketServlet.doPost(TicketServlet.java:97)
in tomcat...
- locked <0x00000000f4a85918> (a org.apache.tomcat.util.net.SocketWrapper)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
in java...
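Trace form 2 looks different: here the thread parks on a Semaphore in waitForBackups(), so it seems to be waiting for the backup replica on the other node to be acknowledged, not for the operation result itself. A simplified model of what I think that does (again, names are mine, not Hazelcast's):

```java
import java.util.concurrent.Semaphore;
import java.util.concurrent.TimeUnit;

// Rough model of the waitForBackups() pattern in trace form 2: the invoking
// thread tryAcquire()s one permit per expected backup and parks until each
// backup node acknowledges the replica.
public class BackupWait {
    private final Semaphore acks = new Semaphore(0);

    // Called when a backup node confirms it has stored the replica.
    public void notifyBackupComplete() {
        acks.release();
    }

    // Called by the invoking thread; this is the Semaphore.tryAcquire frame
    // visible in the stack dump.
    public boolean waitForBackups(int backupCount, long timeout, TimeUnit unit)
            throws InterruptedException {
        return acks.tryAcquire(backupCount, timeout, unit);
    }

    public static void main(String[] args) throws InterruptedException {
        BackupWait w = new BackupWait();
        new Thread(w::notifyBackupComplete).start(); // one backup in a two-node cluster
        System.out.println(w.waitForBackups(1, 5, TimeUnit.SECONDS)); // prints true
    }
}
```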
There are other traces, but only one or two instances of each. The two above show up dozens of times in every stack dump I take under load.
From reading the code, it looks like my app ends up blocked in executeOnKey(), waiting for the other node to respond (there are only two nodes in the cluster). Is that observation correct?
Where do I start to improve performance here? Any ideas?
Kees Jan