Seeing multiple concurrent locks on a distributed lock


dnedde

May 18, 2012, 11:53:11 AM
to haze...@googlegroups.com
Hi,

I'm seeing a case where the same lock can be held on two different machines at the same time.  I perform a ping before attempting the distributed lock so that I don't try to lock during a split-brain scenario.  That said, there may be windows where communication is disrupted after the ping check and before the lock().
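In case it helps, the calling pattern on our side is roughly the following check-then-lock sequence. This is a simplified, self-contained sketch: pingPeer() is a stand-in for our real reachability check, and a plain ReentrantLock stands in for the Hazelcast distributed lock. The gap between the ping and lock() is exactly the window I mention above:

```java
import java.util.concurrent.locks.ReentrantLock;

public class Main {
    // Placeholder for the real peer reachability check (ICMP ping in our design).
    static boolean pingPeer() { return true; }

    // Stands in for the Hazelcast distributed lock in this sketch.
    static final ReentrantLock lock = new ReentrantLock();

    public static void main(String[] args) {
        if (!pingPeer()) {
            throw new IllegalStateException("peer unreachable, possible split brain; refusing to lock");
        }
        // Window: the network can be disrupted between the ping above and lock() below.
        lock.lock();
        try {
            System.out.println("locked=" + lock.isHeldByCurrentThread());
        } finally {
            lock.unlock();
        }
    }
}
```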

In the logs below, each line is tagged with the machine name, "dg0" or "dg1".

I had the Hazelcast logging set to FINEST, but I don't really understand the Hazelcast log messages or how they might relate to my problem.  Any help here would be appreciated.

Thanks,

Dave

---------------------


# Starting the logs with the Hazelcast logging messages and the successful locks right before where the problem happened.

0516+12:44:24.027 dg1 Hazelcast: CMap /169.254.0.2:5701 [dev] Address[169.254.0.2:5701] mapRecords: 1 indexes: 0 [hz._hzInstance_1_dev.scheduled.thread-1]
0516+12:44:24.027 dg1 Hazelcast: CMap /169.254.0.2:5701 [dev] c:__hz_Locks Cleanup , dirty:0, purge:1, evict:0, unknown:0, stillOwned:0, backupPurge:0 [hz._hzInstance_1_dev.scheduled.thread-1]
0516+12:44:24.590 dg0 Hazelcast: CMap /169.254.0.1:5701 [dev] Address[169.254.0.1:5701] mapRecords: 1 indexes: 0 [hz._hzInstance_1_dev.scheduled.thread-3]

0516+12:44:25.278 dg0 [d3a30188] DistributedLock  Completed calling lock() for DistributedLock@7614d6[owner=null,lockName=SERIALIZE_ALL_LOCK],locked=true  [Thread-1435]
  # Lock ok

0516+12:44:35.022 dg1 Hazelcast: CMap /169.254.0.2:5701 [dev] Address[169.254.0.2:5701] mapRecords: 0 indexes: 0 [hz._hzInstance_1_dev.scheduled.thread-2]
0516+12:44:35.022 dg1 Hazelcast: CMap /169.254.0.2:5701 [dev] c:__hz_Locks Cleanup , dirty:0, purge:0, evict:0, unknown:0, stillOwned:0, backupPurge:0 [hz._hzInstance_1_dev.scheduled.thread-2]
0516+12:44:35.590 dg0 Hazelcast: CMap /169.254.0.1:5701 [dev] c:__hz_Locks Cleanup , dirty:0, purge:0, evict:0, unknown:0, stillOwned:0, backupPurge:0 [hz._hzInstance_1_dev.scheduled.thread-4]
0516+12:44:35.591 dg0 Hazelcast: CMap /169.254.0.1:5701 [dev] Address[169.254.0.1:5701] mapRecords: 1 indexes: 0 [hz._hzInstance_1_dev.scheduled.thread-4]
0516+12:44:46.021 dg1 Hazelcast: CMap /169.254.0.2:5701 [dev] Address[169.254.0.2:5701] mapRecords: 0 indexes: 0 [hz._hzInstance_1_dev.scheduled.thread-2]
0516+12:44:46.021 dg1 Hazelcast: CMap /169.254.0.2:5701 [dev] c:__hz_Locks Cleanup , dirty:0, purge:0, evict:0, unknown:0, stillOwned:0, backupPurge:0 [hz._hzInstance_1_dev.scheduled.thread-2]
0516+12:44:46.590 dg0 Hazelcast: CMap /169.254.0.1:5701 [dev] Address[169.254.0.1:5701] mapRecords: 1 indexes: 0 [hz._hzInstance_1_dev.scheduled.thread-2]
0516+12:44:46.590 dg0 Hazelcast: CMap /169.254.0.1:5701 [dev] c:__hz_Locks Cleanup , dirty:0, purge:0, evict:0, unknown:0, stillOwned:0, backupPurge:0 [hz._hzInstance_1_dev.scheduled.thread-2]
0516+12:44:57.021 dg1 Hazelcast: CMap /169.254.0.2:5701 [dev] Address[169.254.0.2:5701] mapRecords: 0 indexes: 0 [hz._hzInstance_1_dev.scheduled.thread-2]
0516+12:44:57.021 dg1 Hazelcast: CMap /169.254.0.2:5701 [dev] c:__hz_Locks Cleanup , dirty:0, purge:0, evict:0, unknown:0, stillOwned:0, backupPurge:0 [hz._hzInstance_1_dev.scheduled.thread-2]

0516+12:44:57.282 dg1 [301e08c7] DistributedLock  Calling lock() for DistributedLock@14115fb[owner=null,lockName=SERIALIZE_ALL_LOCK]  [Thread-181]
  # Thread-181 tries to lock, Thread-1435 currently has lock - ok
0516+12:44:57.284 dg0 [d3a30188] DistributedLock  Completed calling unlock() for DistributedLock@7614d6[owner=null,lockName=SERIALIZE_ALL_LOCK]  [Thread-1435]
  # Thread-1435 releases lock - ok
0516+12:44:57.284 dg1 [301e08c7] DistributedLock  Completed calling lock() for DistributedLock@14115fb[owner=null,lockName=SERIALIZE_ALL_LOCK],locked=true  [Thread-181]
  # Thread-181 lock() call returns with true - ok
0516+12:44:57.309 dg1 [301e08c7] DistributedLock  Completed calling unlock() for DistributedLock@14115fb[owner=null,lockName=SERIALIZE_ALL_LOCK]  [Thread-181]
  # Thread-181 unlocks ok

0516+12:44:57.590 dg0 Hazelcast: CMap /169.254.0.1:5701 [dev] Address[169.254.0.1:5701] mapRecords: 1 indexes: 0 [hz._hzInstance_1_dev.scheduled.thread-3]
0516+12:44:57.590 dg0 Hazelcast: CMap /169.254.0.1:5701 [dev] c:__hz_Locks Cleanup , dirty:0, purge:1, evict:0, unknown:0, stillOwned:0, backupPurge:0 [hz._hzInstance_1_dev.scheduled.thread-3]
0516+12:45:08.021 dg1 Hazelcast: CMap /169.254.0.2:5701 [dev] Address[169.254.0.2:5701] mapRecords: 1 indexes: 0 [hz._hzInstance_1_dev.scheduled.thread-2]
0516+12:45:08.021 dg1 Hazelcast: CMap /169.254.0.2:5701 [dev] c:__hz_Locks Cleanup , dirty:0, purge:1, evict:0, unknown:0, stillOwned:0, backupPurge:0 [hz._hzInstance_1_dev.scheduled.thread-2]
0516+12:45:08.590 dg0 Hazelcast: CMap /169.254.0.1:5701 [dev] Address[169.254.0.1:5701] mapRecords: 0 indexes: 0 [hz._hzInstance_1_dev.scheduled.thread-3]
0516+12:45:08.590 dg0 Hazelcast: CMap /169.254.0.1:5701 [dev] c:__hz_Locks Cleanup , dirty:0, purge:0, evict:0, unknown:0, stillOwned:0, backupPurge:0 [hz._hzInstance_1_dev.scheduled.thread-3]
0516+12:45:19.020 dg1 Hazelcast: CMap /169.254.0.2:5701 [dev] c:__hz_Locks Cleanup , dirty:0, purge:0, evict:0, unknown:0, stillOwned:0, backupPurge:0 [hz._hzInstance_1_dev.scheduled.thread-2]
0516+12:45:19.021 dg1 Hazelcast: CMap /169.254.0.2:5701 [dev] Address[169.254.0.2:5701] mapRecords: 0 indexes: 0 [hz._hzInstance_1_dev.scheduled.thread-2]
0516+12:45:19.590 dg0 Hazelcast: CMap /169.254.0.1:5701 [dev] Address[169.254.0.1:5701] mapRecords: 0 indexes: 0 [hz._hzInstance_1_dev.scheduled.thread-1]
0516+12:45:19.590 dg0 Hazelcast: CMap /169.254.0.1:5701 [dev] c:__hz_Locks Cleanup , dirty:0, purge:0, evict:0, unknown:0, stillOwned:0, backupPurge:0 [hz._hzInstance_1_dev.scheduled.thread-1]
0516+12:45:29.022 dg1 Hazelcast: CMap /169.254.0.2:5701 [dev] Address[169.254.0.2:5701] mapRecords: 0 indexes: 0 [hz._hzInstance_1_dev.scheduled.thread-1]
0516+12:45:29.022 dg1 Hazelcast: CMap /169.254.0.2:5701 [dev] c:__hz_Locks Cleanup , dirty:0, purge:0, evict:0, unknown:0, stillOwned:0, backupPurge:0 [hz._hzInstance_1_dev.scheduled.thread-1]
0516+12:45:30.590 dg0 Hazelcast: CMap /169.254.0.1:5701 [dev] Address[169.254.0.1:5701] mapRecords: 0 indexes: 0 [hz._hzInstance_1_dev.scheduled.thread-1]
0516+12:45:30.590 dg0 Hazelcast: CMap /169.254.0.1:5701 [dev] c:__hz_Locks Cleanup , dirty:0, purge:0, evict:0, unknown:0, stillOwned:0, backupPurge:0 [hz._hzInstance_1_dev.scheduled.thread-1]
0516+12:45:40.021 dg1 Hazelcast: CMap /169.254.0.2:5701 [dev] Address[169.254.0.2:5701] mapRecords: 0 indexes: 0 [hz._hzInstance_1_dev.scheduled.thread-1]
0516+12:45:40.021 dg1 Hazelcast: CMap /169.254.0.2:5701 [dev] c:__hz_Locks Cleanup , dirty:0, purge:0, evict:0, unknown:0, stillOwned:0, backupPurge:0 [hz._hzInstance_1_dev.scheduled.thread-1]
0516+12:45:40.590 dg0 Hazelcast: CMap /169.254.0.1:5701 [dev] Address[169.254.0.1:5701] mapRecords: 0 indexes: 0 [hz._hzInstance_1_dev.scheduled.thread-2]
0516+12:45:40.590 dg0 Hazelcast: CMap /169.254.0.1:5701 [dev] c:__hz_Locks Cleanup , dirty:0, purge:0, evict:0, unknown:0, stillOwned:0, backupPurge:0 [hz._hzInstance_1_dev.scheduled.thread-2]
0516+12:45:51.021 dg1 Hazelcast: CMap /169.254.0.2:5701 [dev] Address[169.254.0.2:5701] mapRecords: 0 indexes: 0 [hz._hzInstance_1_dev.scheduled.thread-1]
0516+12:45:51.021 dg1 Hazelcast: CMap /169.254.0.2:5701 [dev] c:__hz_Locks Cleanup , dirty:0, purge:0, evict:0, unknown:0, stillOwned:0, backupPurge:0 [hz._hzInstance_1_dev.scheduled.thread-1]
0516+12:45:51.590 dg0 Hazelcast: CMap /169.254.0.1:5701 [dev] Address[169.254.0.1:5701] mapRecords: 0 indexes: 0 [hz._hzInstance_1_dev.scheduled.thread-2]
0516+12:45:51.590 dg0 Hazelcast: CMap /169.254.0.1:5701 [dev] c:__hz_Locks Cleanup , dirty:0, purge:0, evict:0, unknown:0, stillOwned:0, backupPurge:0 [hz._hzInstance_1_dev.scheduled.thread-2]
0516+12:45:54.580 dg0 Hazelcast: PartitionManager /169.254.0.1:5701 [dev] Checking partition table for repartitioning... [hz._hzInstance_1_dev.scheduled.thread-3]
0516+12:46:01.590 dg0 Hazelcast: CMap /169.254.0.1:5701 [dev] Address[169.254.0.1:5701] mapRecords: 0 indexes: 0 [hz._hzInstance_1_dev.scheduled.thread-4]
0516+12:46:01.590 dg0 Hazelcast: CMap /169.254.0.1:5701 [dev] c:__hz_Locks Cleanup , dirty:0, purge:0, evict:0, unknown:0, stillOwned:0, backupPurge:0 [hz._hzInstance_1_dev.scheduled.thread-4]
0516+12:46:02.020 dg1 Hazelcast: CMap /169.254.0.2:5701 [dev] c:__hz_Locks Cleanup , dirty:0, purge:0, evict:0, unknown:0, stillOwned:0, backupPurge:0 [hz._hzInstance_1_dev.scheduled.thread-1]
0516+12:46:02.021 dg1 Hazelcast: CMap /169.254.0.2:5701 [dev] Address[169.254.0.2:5701] mapRecords: 0 indexes: 0 [hz._hzInstance_1_dev.scheduled.thread-1]
0516+12:46:12.021 dg1 Hazelcast: CMap /169.254.0.2:5701 [dev] Address[169.254.0.2:5701] mapRecords: 0 indexes: 0 [hz._hzInstance_1_dev.scheduled.thread-3]
0516+12:46:12.021 dg1 Hazelcast: CMap /169.254.0.2:5701 [dev] c:__hz_Locks Cleanup , dirty:0, purge:0, evict:0, unknown:0, stillOwned:0, backupPurge:0 [hz._hzInstance_1_dev.scheduled.thread-3]
0516+12:46:12.589 dg0 Hazelcast: CMap /169.254.0.1:5701 [dev] c:__hz_Locks Cleanup , dirty:0, purge:0, evict:0, unknown:0, stillOwned:0, backupPurge:0 [hz._hzInstance_1_dev.scheduled.thread-4]
0516+12:46:12.590 dg0 Hazelcast: CMap /169.254.0.1:5701 [dev] Address[169.254.0.1:5701] mapRecords: 0 indexes: 0 [hz._hzInstance_1_dev.scheduled.thread-4]
0516+12:46:23.020 dg1 Hazelcast: CMap /169.254.0.2:5701 [dev] c:__hz_Locks Cleanup , dirty:0, purge:0, evict:0, unknown:0, stillOwned:0, backupPurge:0 [hz._hzInstance_1_dev.scheduled.thread-1]
0516+12:46:23.021 dg1 Hazelcast: CMap /169.254.0.2:5701 [dev] Address[169.254.0.2:5701] mapRecords: 0 indexes: 0 [hz._hzInstance_1_dev.scheduled.thread-1]
0516+12:46:23.591 dg0 Hazelcast: CMap /169.254.0.1:5701 [dev] Address[169.254.0.1:5701] mapRecords: 0 indexes: 0 [hz._hzInstance_1_dev.scheduled.thread-3]
0516+12:46:23.591 dg0 Hazelcast: CMap /169.254.0.1:5701 [dev] c:__hz_Locks Cleanup , dirty:0, purge:0, evict:0, unknown:0, stillOwned:0, backupPurge:0 [hz._hzInstance_1_dev.scheduled.thread-3]
0516+12:46:33.618 dg0 Hazelcast: CMap /169.254.0.1:5701 [dev] Address[169.254.0.1:5701] mapRecords: 0 indexes: 0 [hz._hzInstance_1_dev.scheduled.thread-4]
0516+12:46:33.618 dg0 Hazelcast: CMap /169.254.0.1:5701 [dev] c:__hz_Locks Cleanup , dirty:0, purge:0, evict:0, unknown:0, stillOwned:0, backupPurge:0 [hz._hzInstance_1_dev.scheduled.thread-4]
0516+12:46:34.021 dg1 Hazelcast: CMap /169.254.0.2:5701 [dev] Address[169.254.0.2:5701] mapRecords: 0 indexes: 0 [hz._hzInstance_1_dev.scheduled.thread-2]
0516+12:46:34.021 dg1 Hazelcast: CMap /169.254.0.2:5701 [dev] c:__hz_Locks Cleanup , dirty:0, purge:0, evict:0, unknown:0, stillOwned:0, backupPurge:0 [hz._hzInstance_1_dev.scheduled.thread-2]
0516+12:46:44.590 dg0 Hazelcast: CMap /169.254.0.1:5701 [dev] Address[169.254.0.1:5701] mapRecords: 0 indexes: 0 [hz._hzInstance_1_dev.scheduled.thread-2]
0516+12:46:44.590 dg0 Hazelcast: CMap /169.254.0.1:5701 [dev] c:__hz_Locks Cleanup , dirty:0, purge:0, evict:0, unknown:0, stillOwned:0, backupPurge:0 [hz._hzInstance_1_dev.scheduled.thread-2]
0516+12:46:45.021 dg1 Hazelcast: CMap /169.254.0.2:5701 [dev] Address[169.254.0.2:5701] mapRecords: 0 indexes: 0 [hz._hzInstance_1_dev.scheduled.thread-1]
0516+12:46:45.021 dg1 Hazelcast: CMap /169.254.0.2:5701 [dev] c:__hz_Locks Cleanup , dirty:0, purge:0, evict:0, unknown:0, stillOwned:0, backupPurge:0 [hz._hzInstance_1_dev.scheduled.thread-1]
0516+12:46:55.590 dg0 Hazelcast: CMap /169.254.0.1:5701 [dev] Address[169.254.0.1:5701] mapRecords: 0 indexes: 0 [hz._hzInstance_1_dev.scheduled.thread-1]
0516+12:46:55.590 dg0 Hazelcast: CMap /169.254.0.1:5701 [dev] c:__hz_Locks Cleanup , dirty:0, purge:0, evict:0, unknown:0, stillOwned:0, backupPurge:0 [hz._hzInstance_1_dev.scheduled.thread-1]
0516+12:46:56.021 dg1 Hazelcast: CMap /169.254.0.2:5701 [dev] Address[169.254.0.2:5701] mapRecords: 0 indexes: 0 [hz._hzInstance_1_dev.scheduled.thread-1]
0516+12:46:56.021 dg1 Hazelcast: CMap /169.254.0.2:5701 [dev] c:__hz_Locks Cleanup , dirty:0, purge:0, evict:0, unknown:0, stillOwned:0, backupPurge:0 [hz._hzInstance_1_dev.scheduled.thread-1]
0516+12:47:01.011 dg1 Hazelcast: PartitionManager /169.254.0.2:5701 [dev] Checking partition table for repartitioning... [hz._hzInstance_1_dev.scheduled.thread-3]
0516+12:47:06.590 dg0 Hazelcast: CMap /169.254.0.1:5701 [dev] c:__hz_Locks Cleanup , dirty:0, purge:0, evict:0, unknown:0, stillOwned:0, backupPurge:0 [hz._hzInstance_1_dev.scheduled.thread-2]
0516+12:47:06.608 dg0 Hazelcast: CMap /169.254.0.1:5701 [dev] Address[169.254.0.1:5701] mapRecords: 0 indexes: 0 [hz._hzInstance_1_dev.scheduled.thread-2]
0516+12:47:07.021 dg1 Hazelcast: CMap /169.254.0.2:5701 [dev] Address[169.254.0.2:5701] mapRecords: 0 indexes: 0 [hz._hzInstance_1_dev.scheduled.thread-1]
0516+12:47:07.021 dg1 Hazelcast: CMap /169.254.0.2:5701 [dev] c:__hz_Locks Cleanup , dirty:0, purge:0, evict:0, unknown:0, stillOwned:0, backupPurge:0 [hz._hzInstance_1_dev.scheduled.thread-1]


# Problems start here

0516+12:47:11.988 dg1 [301e08c8] DistributedLock  Completed calling lock() for DistributedLock@14115fb[owner=null,lockName=SERIALIZE_ALL_LOCK],locked=true  [Thread-182]
  # Thread-182 locks - ok
  # dg0 is pingable at this point


0516+12:47:17.021 dg1 Hazelcast: CMap /169.254.0.2:5701 [dev] Address[169.254.0.2:5701] mapRecords: 1 indexes: 0 [hz._hzInstance_1_dev.scheduled.thread-1]
0516+12:47:17.021 dg1 Hazelcast: CMap /169.254.0.2:5701 [dev] c:__hz_Locks Cleanup , dirty:0, purge:0, evict:0, unknown:0, stillOwned:0, backupPurge:0 [hz._hzInstance_1_dev.scheduled.thread-1]
0516+12:47:17.590 dg0 Hazelcast: CMap /169.254.0.1:5701 [dev] Address[169.254.0.1:5701] mapRecords: 0 indexes: 0 [hz._hzInstance_1_dev.scheduled.thread-2]
0516+12:47:17.590 dg0 Hazelcast: CMap /169.254.0.1:5701 [dev] c:__hz_Locks Cleanup , dirty:0, purge:0, evict:0, unknown:0, stillOwned:0, backupPurge:0 [hz._hzInstance_1_dev.scheduled.thread-2]
0516+12:47:28.021 dg1 Hazelcast: CMap /169.254.0.2:5701 [dev] c:__hz_Locks Cleanup , dirty:0, purge:0, evict:0, unknown:0, stillOwned:0, backupPurge:0 [hz._hzInstance_1_dev.scheduled.thread-4]
0516+12:47:28.022 dg1 Hazelcast: CMap /169.254.0.2:5701 [dev] Address[169.254.0.2:5701] mapRecords: 1 indexes: 0 [hz._hzInstance_1_dev.scheduled.thread-4]
0516+12:47:28.590 dg0 Hazelcast: CMap /169.254.0.1:5701 [dev] Address[169.254.0.1:5701] mapRecords: 0 indexes: 0 [hz._hzInstance_1_dev.scheduled.thread-4]
0516+12:47:28.590 dg0 Hazelcast: CMap /169.254.0.1:5701 [dev] c:__hz_Locks Cleanup , dirty:0, purge:0, evict:0, unknown:0, stillOwned:0, backupPurge:0 [hz._hzInstance_1_dev.scheduled.thread-4]
0516+12:47:39.021 dg1 Hazelcast: CMap /169.254.0.2:5701 [dev] Address[169.254.0.2:5701] mapRecords: 1 indexes: 0 [hz._hzInstance_1_dev.scheduled.thread-4]
0516+12:47:39.021 dg1 Hazelcast: CMap /169.254.0.2:5701 [dev] c:__hz_Locks Cleanup , dirty:0, purge:0, evict:0, unknown:0, stillOwned:0, backupPurge:0 [hz._hzInstance_1_dev.scheduled.thread-4]
0516+12:47:39.593 dg0 Hazelcast: CMap /169.254.0.1:5701 [dev] c:__hz_Locks Cleanup , dirty:0, purge:0, evict:0, unknown:0, stillOwned:0, backupPurge:0 [hz._hzInstance_1_dev.scheduled.thread-4]
0516+12:47:39.594 dg0 Hazelcast: CMap /169.254.0.1:5701 [dev] Address[169.254.0.1:5701] mapRecords: 0 indexes: 0 [hz._hzInstance_1_dev.scheduled.thread-4]


0516+12:47:48.301 dg0 [d3a301c6] DistributedLock  Calling lock() for DistributedLock@7614d6[owner=null,lockName=SERIALIZE_ALL_LOCK]  [Thread-1455]
0516+12:47:48.302 dg0 [d3a301c6] DistributedLock  Completed calling lock() for DistributedLock@7614d6[owner=null,lockName=SERIALIZE_ALL_LOCK],locked=true  [Thread-1455]
  # Thread-1455 gets a lock
  # This lock should not have been allowed, as it is currently held by Thread-182
  # What if the control plane is disrupted after a member gets the lock?  Does the master still grant the lock without checking the peer, even though the peer could have acquired the lock before the disruption?
  # What if we check that the master is reachable, lock() blocks on a local lock, and by the time the local lock is released, the master is no longer reachable?
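The second question above can be demonstrated with a self-contained sketch (plain ReentrantLock and an AtomicBoolean standing in for the distributed lock and the reachability state; none of these names are from the real code). The waiter checks reachability, then blocks on lock(); reachability is lost while it is parked, yet lock() still succeeds:

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.atomic.AtomicBoolean;
import java.util.concurrent.locks.ReentrantLock;

public class Main {
    public static void main(String[] args) throws InterruptedException {
        ReentrantLock lock = new ReentrantLock();          // stands in for the distributed lock
        AtomicBoolean masterReachable = new AtomicBoolean(true);
        CountDownLatch checked = new CountDownLatch(1);

        lock.lock();                                       // another thread already holds the lock
        Thread waiter = new Thread(() -> {
            if (!masterReachable.get()) return;            // reachability check passes here...
            checked.countDown();
            lock.lock();                                   // ...then we block on the held lock
            try {
                // By the time lock() returns, reachability may have changed.
                System.out.println("acquired while masterReachable=" + masterReachable.get());
            } finally {
                lock.unlock();
            }
        });
        waiter.start();
        checked.await();
        Thread.sleep(100);                                 // let the waiter park on lock()
        masterReachable.set(false);                        // "network" goes away while it waits
        lock.unlock();                                     // holder releases; waiter acquires anyway
        waiter.join();
    }
}
```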

0516+12:48:16.200 dg0 [d3a301c6] DistributedLock  Completed calling unlock() for DistributedLock@7614d6[owner=null,lockName=SERIALIZE_ALL_LOCK]  [Thread-1455]
  # Thread-1455 unlocked - ok

0516+12:48:16.200 dg0 [d3a301c8] DistributedLock  Completed calling lock() for DistributedLock@7614d6[owner=null,lockName=SERIALIZE_ALL_LOCK],locked=true  [Thread-1457]
  # this lock should not have been allowed, currently held by Thread-182

0516+12:48:16.200 dg1 [301e08cb] DistributedLock  Calling lock() for DistributedLock@14115fb[owner=Thread-182,lockName=SERIALIZE_ALL_LOCK]  [Thread-185]
  # lock is correctly blocked because Thread-182 has it
  # dg0 is pingable at this time

0516+12:48:16.299 dg0 [d3a301c9] DistributedLock  Completed calling lock() for DistributedLock@7614d6[owner=null,lockName=SERIALIZE_ALL_LOCK],locked=true  [Thread-1458]
  # this lock should not have been allowed, currently held by Thread-182

0516+12:48:16.459 dg0 [d3a301ca] DistributedLock  Calling lock() for DistributedLock@7614d6[owner=Thread-1458,lockName=SERIALIZE_ALL_LOCK]  [Thread-1459]
  # lock is blocked.  It should be blocked because Thread-182 holds the lock,
  # but it looks like it is blocked because Thread-1458 holds it

0516+12:48:18.825 dg0 Same config [0 lines]. dg0/tmp/sfc-captures/0207, scattered to: [DCF-ROOT(25.031s)]  [DCFRootHAHandler]
  # this lock should not have been allowed, currently held by Thread-182

0516+12:48:26.227 dg1 Hazelcast: ConcurrentMapManager /169.254.0.2:5701 [dev] Still no response! Request{name='c:__hz_Locks',CONCURRENT_MAP_LOCK, redoCount='0', callId='188', lockThreadId='137'} [Thread-185]
0516+12:48:26.513 dg0 Hazelcast: ConcurrentMapManager /169.254.0.1:5701 [dev] Still no response! Request{name='c:__hz_Locks',CONCURRENT_MAP_LOCK, redoCount='0', callId='412', lockThreadId='255'} [Thread-1459]

0516+12:48:28.949 dg0 [d3a301ca] DistributedLock  Completed calling lock() for DistributedLock@7614d6[owner=null,lockName=SERIALIZE_ALL_LOCK],locked=true  [Thread-1459]
0516+12:48:28.951 dg0 Completed config to dg0/tmp/sfc-captures/0205   [Thread-1458]
0516+12:48:29.006 dg0 [d3a301e1] DistributedLock  Completed calling lock() for DistributedLock@7614d6[owner=null,lockName=SERIALIZE_ALL_LOCK],locked=true  [Thread-1461]
  # this lock should not have been allowed, currently held by Thread-182

0516+12:48:29.007 dg0 Completed config to dg0/tmp/sfc-captures/0206   [Thread-1459]
0516+12:48:29.180 dg0 Same config [0 lines]. dg0/tmp/sfc-captures/0208, scattered to: NONE  [FireCallbackHandler]
0516+12:48:32.080 dg0 Same config [0 lines]. dg0/tmp/sfc-captures/0209, scattered to: [DCF-ROOT(24.131s)]  [DCFRootHAHandler]
0516+12:48:33.227 dg1 [301e08c8] DistributedLock  Completed calling unlock() for DistributedLock@14115fb[owner=null,lockName=SERIALIZE_ALL_LOCK]  [Thread-182]
  # Thread-182 unlocks

0516+12:48:33.227 dg1 [301e08cb] DistributedLock  Completed calling lock() for DistributedLock@14115fb[owner=null,lockName=SERIALIZE_ALL_LOCK],locked=true  [Thread-185]
  # ok

0516+12:48:33.229 dg1 Completed config to dg1/tmp/sfc-captures/0152   [Thread-182]


Mehmet Dogan

May 24, 2012, 3:38:55 AM
to haze...@googlegroups.com
I could not understand the flow here. Are you creating a split-brain scenario, or just testing lock/unlock on a two-node cluster? That is, is this issue related to split brain or not?

@mmdogan








dnedde

May 24, 2012, 6:33:58 AM
to haze...@googlegroups.com
This is not a split-brain scenario. I only mention the part of my design that pings between nodes to demonstrate that I don't think this is split brain.

Mehmet Dogan

Sep 20, 2012, 8:29:38 AM
to haze...@googlegroups.com
This issue may be the same as issue 267.  We are able to see multiple threads acquiring the same lock at the same time.

Also see thread;


@mmdogan




On Thu, May 24, 2012 at 1:33 PM, dnedde <dave...@gmail.com> wrote:
This is not a split brain scenario.  I only mention part of my design that does pings between nodes to demonstrate that I don't think it is a split brain.