You don't have to use the lock() API for locking. Here is another way:
ConcurrentMap lockMap = Hazelcast.getMap("locks");
if (lockMap.putIfAbsent(lockKey, thisMember) == null) {
    // you got the lock
    try {
        // ... do the work that requires the lock ...
    } finally {
        lockMap.remove(lockKey);
    }
}
What is good about this?
1. You can persist the locks by using a MapStore for this map.
2. You can set a TTL for the locks, so locks are auto-released after the TTL expires.
3. You can call lockMap.get(lockKey) to get the lock owner (if needed).
What is bad about this?
1. You won't have tryLock(timeout) support.
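The putIfAbsent idiom above doesn't depend on anything Hazelcast-specific; as a sanity check, here is a self-contained sketch of the same pattern against a plain ConcurrentHashMap (the map key and member names are made up for illustration):

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

public class PutIfAbsentLockDemo {
    // putIfAbsent returns null only for the first caller;
    // that caller "owns" the lock until it removes the entry.
    static boolean tryAcquire(ConcurrentMap<String, String> locks, String key, String owner) {
        return locks.putIfAbsent(key, owner) == null;
    }

    static void release(ConcurrentMap<String, String> locks, String key) {
        locks.remove(key);
    }

    public static void main(String[] args) {
        ConcurrentMap<String, String> locks = new ConcurrentHashMap<>();
        if (tryAcquire(locks, "orders", "member-1")) {
            try {
                // critical section: lockMap.get(key) tells you the owner
                System.out.println("owner: " + locks.get("orders")); // prints "owner: member-1"
            } finally {
                release(locks, "orders");
            }
        }
    }
}
```

Note that a second caller's putIfAbsent returns the existing owner (non-null), so it simply skips the critical section, which is exactly the missing-tryLock(timeout) limitation described above.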
-talip
Say you have two nodes of Hazelcast (or any other clustered lock
manager) and say there is a network problem between the two. Each node
will keep maintaining the locks independently, which will allow two
different processes to acquire the 'write-lock'. In a network
partitioning (split-brain) scenario, lock consistency cannot be
guaranteed when the lock manager is clustered.
> In addition, if that
> thread/process dies, we should release the lock so other threads/
> processes can write to the table.
Hazelcast will detect the death of the lock owner's process and
release the locks owned by that node, but it cannot detect the user's
dead threads yet.
> Considering that use case, would you recommend using the Lock API or
> the method that you described?
No. I wouldn't unless you relax the network partitioning requirement
or handle it somehow.
-talip
I would go with the Lock approach because locks are auto-released when the
lock owner process dies. With the map.putIfAbsent() approach, you will
have to listen to membership events and remove (release) the locks
owned by the dead member yourself.
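As a sketch of what that manual cleanup would look like, assuming (as in the putIfAbsent example above) that each lock entry stores its owning member as the value; the member names are hypothetical, and in a real cluster this method would be invoked from a membership listener's member-removed callback:

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

public class LockCleanupDemo {
    // Sweep the locks map and release every lock still owned by the dead member.
    static int releaseLocksOf(ConcurrentMap<String, String> locks, String deadMember) {
        int released = 0;
        for (String key : locks.keySet()) {
            // remove(key, value) deletes the entry only if it is still
            // owned by deadMember, so a lock re-acquired by a live member
            // in the meantime is left alone.
            if (locks.remove(key, deadMember)) {
                released++;
            }
        }
        return released;
    }

    public static void main(String[] args) {
        ConcurrentMap<String, String> locks = new ConcurrentHashMap<>();
        locks.put("table-a", "member-1");
        locks.put("table-b", "member-2");
        // Simulate the membership event for member-1 dying:
        System.out.println(releaseLocksOf(locks, "member-1") + " lock(s) released"); // prints "1 lock(s) released"
    }
}
```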
So definitely go with locks, but even here you have two options: global
locks and a locks-map.
1. Global Locks
Lock lock = Hazelcast.getLock(keyToLock);
lock.lock();
try {
    // ... critical section ...
} finally {
    lock.unlock();
}
This approach is fine when you have only tens of locks, because each
global lock instance is managed cluster-wide, and you will have to call
lock.destroy() to terminate it (so it can be garbage collected).
2. Locks-Map
IMap lockMap = Hazelcast.getMap("locks");
lockMap.lock(keyToLock);
try {
    // ... critical section ...
} finally {
    lockMap.unlock(keyToLock);
}
You use this map only for locks. Locks created here are very cheap and
are garbage collected automatically when there is no lock owner and no
one waiting on the lock. You can have millions of locks with this
approach; they are very lightweight and need no maintenance. This is my
favorite.
-talip
--
You received this message because you are subscribed to the Google Groups "Hazelcast" group.
To view this discussion on the web visit https://groups.google.com/d/msg/hazelcast/-/12KgGZ2cDyIJ.
What's the relation between lock consistency and being production-ready? Locks are consistent unless you have a split in your network. At that point Hazelcast selects availability instead of consistency and keeps going. You may want to have a membership listener, and whenever a node leaves the cluster, you can stop your own application to preserve consistency.
-fuad
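One way to act on that suggestion is a flag that the membership callback flips when a node leaves, which the application checks before doing any lock-protected work. This is only a self-contained sketch; in a real deployment the callback would be driven by Hazelcast's membership events, and the member name here is hypothetical:

```java
import java.util.concurrent.atomic.AtomicBoolean;

public class ConsistencyGuard {
    private final AtomicBoolean clusterIntact = new AtomicBoolean(true);

    // In a real deployment this would be invoked from the membership
    // listener's member-removed callback.
    public void onMemberRemoved(String memberId) {
        clusterIntact.set(false);
    }

    // The application checks this before entering a lock-protected section.
    public boolean maySafelyAcquireLocks() {
        return clusterIntact.get();
    }

    public static void main(String[] args) {
        ConsistencyGuard guard = new ConsistencyGuard();
        System.out.println(guard.maySafelyAcquireLocks()); // prints "true": cluster intact
        guard.onMemberRemoved("member-2");                 // simulated node departure
        System.out.println(guard.maySafelyAcquireLocks()); // prints "false": stop work to preserve consistency
    }
}
```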