Impact on Performance due to LockSupport.park while using Map.lock()


Shivi Garg

Nov 6, 2019, 5:25:36 AM
to Hazelcast
Hi Team,

We are using the Hazelcast distributed object IMap for locking via the IMap.tryLock() API. We have to obtain and release a lock for each REST call, and we have observed a significant impact on our REST call latency. Thread dump analysis shows that most of the time is spent in LockSupport.park(), as seen in the following threads:

"qtp1073885358-4211" - Thread t@4211
   java.lang.Thread.State: WAITING
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.park(LockSupport.java:304)
        at com.hazelcast.spi.impl.AbstractInvocationFuture.get(AbstractInvocationFuture.java:161)
        at com.hazelcast.concurrent.lock.LockProxySupport.tryLock(LockProxySupport.java:138)
        at com.hazelcast.map.impl.proxy.MapProxyImpl.tryLock(MapProxyImpl.java:483)

   java.lang.Thread.State: WAITING
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.park(LockSupport.java:304)
        at com.hazelcast.internal.util.concurrent.MPSCQueue.takeAll(MPSCQueue.java:231)
        at com.hazelcast.internal.util.concurrent.MPSCQueue.take(MPSCQueue.java:153)
        at com.hazelcast.spi.impl.operationservice.impl.InboundResponseHandlerSupplier$ResponseThread.doRun(InboundResponseHandlerSupplier.java:283)
        at com.hazelcast.spi.impl.operationservice.impl.InboundResponseHandlerSupplier$ResponseThread.run(InboundResponseHandlerSupplier.java:272)

   java.lang.Thread.State: WAITING
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.park(LockSupport.java:304)
        at com.hazelcast.internal.util.concurrent.MPSCQueue.takeAll(MPSCQueue.java:231)
        at com.hazelcast.internal.util.concurrent.MPSCQueue.take(MPSCQueue.java:153)
        at com.hazelcast.spi.impl.operationexecutor.impl.OperationQueueImpl.take(OperationQueueImpl.java:85)
        at com.hazelcast.spi.impl.operationexecutor.impl.OperationThread.run(OperationThread.java:105)


These calls are consuming a lot of time.
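For reference, the per-request pattern described above looks roughly like the sketch below. It is a minimal illustration, not our actual code: a local ReentrantLock stands in for the Hazelcast IMap so the example runs without a cluster, and the key name is hypothetical. With Hazelcast, the equivalent calls would be map.tryLock(key, timeout, unit) and map.unlock(key).

```java
import java.util.concurrent.TimeUnit;
import java.util.concurrent.locks.ReentrantLock;

public class LockPerRequestSketch {
    // Local stand-in for the distributed lock held in an IMap.
    private static final ReentrantLock lock = new ReentrantLock();

    public static boolean handleRequest(String key) {
        boolean acquired = false;
        try {
            // Bound the wait so a contended lock cannot stall the REST call forever.
            acquired = lock.tryLock(500, TimeUnit.MILLISECONDS);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
            return false;
        }
        if (!acquired) {
            return false; // lock contended: caller can retry or fail fast
        }
        try {
            // ... critical section: the work the lock protects ...
            return true;
        } finally {
            lock.unlock(); // always release, even if the handler throws
        }
    }

    public static void main(String[] args) {
        // "order-42" is a hypothetical lock key for illustration only.
        System.out.println("handled: " + handleRequest("order-42"));
    }
}
```

Note that every tryLock() here is a blocking network round-trip in the distributed case, which is why it shows up as LockSupport.park() in the dumps.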

We found https://github.com/hazelcast/hazelcast/issues/14826, where this is reported as a bug and marked to be fixed in the 4.0 release.

When will this issue be fixed? What can be done to improve the performance in the meantime?

Thanks,
Shivi Garg

İhsan Demir

Nov 6, 2019, 7:25:30 AM
to haze...@googlegroups.com
Can you explain why you need to lock at each call?

How did you verify that most of the time is spent in that call? That path is executed for every invocation, so it may be normal to see it in thread dumps. Did you time the call itself?
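Timing the call directly, as suggested above, can be sketched like this. This is an assumption-laden illustration: a local ReentrantLock stands in for the IMap so it runs without a cluster; with Hazelcast the timed call would be map.tryLock(key, timeout, unit) followed by map.unlock(key).

```java
import java.util.concurrent.TimeUnit;
import java.util.concurrent.locks.ReentrantLock;

public class LockTiming {
    // Times one tryLock/unlock cycle and returns the elapsed nanoseconds,
    // so lock latency can be separated from the rest of the request.
    public static long timeLockOnceNanos(ReentrantLock lock) {
        long start = System.nanoTime();
        boolean acquired = false;
        try {
            acquired = lock.tryLock(500, TimeUnit.MILLISECONDS);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        long elapsed = System.nanoTime() - start;
        if (acquired) {
            lock.unlock();
        }
        return elapsed;
    }

    public static void main(String[] args) {
        long nanos = timeLockOnceNanos(new ReentrantLock());
        System.out.println("tryLock took " + nanos + " ns");
    }
}
```

Logging this per request (or feeding it into a histogram) shows whether the latency really comes from lock acquisition or from elsewhere in the call path.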




--

Ihsan Demir
Software Engineer, Clients Team
Mahir İz Cad. No:35, Altunizade, İstanbul 
ih...@hazelcast.com
skype: idemir

Shivi Garg

Nov 7, 2019, 8:30:24 AM
to haze...@googlegroups.com
We need to lock on each call because we are using IMap for distributed locking across instances.

I figured this out using VisualVM profiling.

