I am prototyping using a MultiMap for storing grouped lists of tasks to be processed, with a LocalEntryListener on the map to make sure that workers know when there is work to be done. I have a couple of questions related to this and my observations from prototyping.

1) I am registering my interest in receiving local entries as:

    Hazelcast.getMultiMap( MAP_NAME ).addLocalEntryListener( this );

This subscribes me to the map and makes the local member available for accepting a partition of said map (as I understand it). Is there inherently a race condition between subscribing to the map and adding the local entry listener? Is there a possibility that local entries will be added before the listener is in place? I have not observed this behavior, but it looks a bit contentious at a glance.
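To illustrate what I mean by the race, here is a toy model (plain Java, not Hazelcast code; `NotifyingStore` is a made-up stand-in): if puts can be delivered between the moment the member starts owning entries and the moment the listener is registered, those entries are never seen by the listener.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Consumer;

// Hypothetical stand-in for a map that notifies listeners on put.
// Illustrates why "subscribe to the map" and "add the listener"
// need to be effectively atomic from the listener's point of view.
class NotifyingStore {
    private final List<Consumer<String>> listeners = new ArrayList<>();
    private final List<String> entries = new ArrayList<>();

    synchronized void addListener(Consumer<String> l) {
        listeners.add(l);
    }

    synchronized void put(String value) {
        entries.add(value);
        // only listeners registered *before* this put are notified
        for (Consumer<String> l : listeners) {
            l.accept(value);
        }
    }
}

public class ListenerRaceDemo {
    public static void main(String[] args) {
        NotifyingStore store = new NotifyingStore();
        List<String> seen = new ArrayList<>();

        store.put("before-registration");   // delivered to nobody: silently missed
        store.addListener(seen::add);
        store.put("after-registration");    // delivered to our listener

        System.out.println(seen);           // [after-registration]
    }
}
```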
2) In the case of partition migration I have observed that the new partition owner does not necessarily receive a call for each key/value pair in the MultiMap, though I believe that it does get called for each key (which seems reasonable to me, but is different than the behavior I have observed when not migrating). Is this expected?
3) In a few circumstances I have observed cases where an object is not removed from the underlying list on the backup node (we are using the default backup count of 1). This happens very rarely (in my last two tests it happened 2 and 3 times respectively out of ~350,000 total entries), but I am fairly certain of the cause and result. The steps of the symptom are:

a) Instance A receives ID1/value1 via its local entry listener
b) Instance A processes ID1/value1
c) Instance A removes ID1/value1 and sees the removal on its local entry listener
d) some time passes... (multiple minutes sometimes)
e) Instance A is killed
f) Instance B receives ownership of the partition containing ID1
g) Instance B receives ID1/value2 via its local entry listener (this is potentially not part of the migration)
h) Instance B processes ID1/value1 (previously worked)
i) Instance B removes ID1/value1 and sees the removal on its local entry listener
j) Instance B processes ID1/value2
k) Instance B removes ID1/value2 and sees the removal on its local entry listener

In my test scenario I am using a random distribution of 1000 keys while generating 100 new events per second on each JVM, with a ramp up to 4 total JVMs running and a ramp down to 1, followed by halting production of events on the last machine and finally exiting when everything has been processed. With heavy logging and a simple log-parsing utility I observe that all expected events occur only once, except in these couple of cases where the same item is worked twice. I am certain we can work around this, but it seemed worth noting.
4) I have not observed any cases where an item that was being worked has had its partition moved while working, but I am not entirely confident that this is a given. I have been simulating a 5ms "IO" work time for worked items, but it will definitely be more dynamic in reality. I was imagining I would need a global mutex around keys such that two processes would never work the same key at the same time, even during migration (due to a new member being added); does that seem like a necessity?

In general my process is to make a queue of keys to work from the keys given to a local process manager via the local entry listener, and to work them with a pool of worker threads (locally ensuring that only one thread operates on a given key at a time for safety). Since the "work" is fast in my test case the queue never builds up significantly, but it might spike in a real environment and then slowly drain down. In such a case, if a new member is added and a partition migrated, will the member that previously "owned" the partition get a call on its local entry listener indicating removal of the key, or is a MigrationListener needed (or a different strategy)? Does a distributed Queue work better for this scenario (ignoring that it does not group like updates, which is desirable to reduce persistence times)?

Sorry for the laundry list.

Thanks in advance,
--Zack
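P.S. In case it helps, a minimal plain-Java sketch of the per-key serialization described in 4) (names are made up, and this is not our actual code): values are queued per key, and a key is handed to at most one pool thread at a time, so per-key order is preserved while distinct keys run concurrently.

```java
import java.util.ArrayDeque;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.Queue;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.function.BiConsumer;

// Hypothetical local process manager: at most one worker thread
// operates on a given key at a time; values for an in-flight key queue up.
class KeySerializedExecutor {
    private final ExecutorService pool;
    private final Map<String, Queue<String>> pending = new HashMap<>();
    private final BiConsumer<String, String> work;

    KeySerializedExecutor(int threads, BiConsumer<String, String> work) {
        this.pool = Executors.newFixedThreadPool(threads);
        this.work = work;
    }

    synchronized void submit(String key, String value) {
        Queue<String> q = pending.get(key);
        if (q != null) {        // key already in flight: just enqueue the value
            q.add(value);
            return;
        }
        pending.put(key, new ArrayDeque<>());
        pool.execute(() -> run(key, value));
    }

    private void run(String key, String value) {
        String next = value;
        while (next != null) {
            work.accept(key, next);   // only this thread touches `key` right now
            synchronized (this) {
                Queue<String> q = pending.get(key);
                next = q.poll();
                if (next == null) pending.remove(key);  // release the key
            }
        }
    }

    void shutdown() throws InterruptedException {
        pool.shutdown();
        pool.awaitTermination(5, TimeUnit.SECONDS);
    }
}

public class KeyQueueDemo {
    public static void main(String[] args) throws InterruptedException {
        List<String> log =
            java.util.Collections.synchronizedList(new java.util.ArrayList<>());
        KeySerializedExecutor exec =
            new KeySerializedExecutor(4, (k, v) -> log.add(k + "=" + v));
        exec.submit("ID1", "value1");
        exec.submit("ID1", "value2");  // queued behind value1, same worker thread
        exec.submit("ID2", "value1");  // may run concurrently on another thread
        exec.shutdown();
        System.out.println(log);
    }
}
```

Note this only serializes work within one JVM; it says nothing about two members working the same key across a migration, which is the global-mutex question above.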
(As a work-around, I added an "already processed" flag to my objects to mitigate this issue.)
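(That flag can be as simple as a concurrent set of already-claimed id/value pairs checked before working an item. A sketch, with hypothetical names; in a distributed setting the state would of course need to live on the object or in a replicated structure, as above:)

```java
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;

// Sketch of an "already processed" guard: the first caller to claim an
// (id, value) pair wins; a redelivered duplicate is skipped.
class ProcessedGuard {
    private final Set<String> done = ConcurrentHashMap.newKeySet();

    /** Returns true exactly once per unique id/value pair. */
    boolean tryClaim(String id, String value) {
        // Set.add is atomic and returns false if the pair was already claimed.
        return done.add(id + "\u0000" + value);
    }
}

public class ProcessedGuardDemo {
    public static void main(String[] args) {
        ProcessedGuard guard = new ProcessedGuard();
        System.out.println(guard.tryClaim("ID1", "value1")); // true
        System.out.println(guard.tryClaim("ID1", "value1")); // false: duplicate skipped
        System.out.println(guard.tryClaim("ID1", "value2")); // true: new value, new work
    }
}
```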
--
Joe P
On Sep 23, 2011, at 12:54 AM, Fuad Malikov wrote:
[snip]
>
> On Wed, Sep 14, 2011 at 9:01 PM, Zack Radick <zra...@conducivetech.com> wrote:
>
[snip]
>> 2) In the case of partition migration I have observed that the new partition does not necessarily receive a call for each key/value pair in the multi map, though I believe that it does get called for each key (which seems reasonable to me but is different than the behavior I have observed when not migrating). Is this expected?
>
> Local listeners will not be triggered if an entry is migrated.
Fuad,

Thanks for your responses! I wanted to clarify one thing in regards to your comment below:

>> 2) In the case of partition migration I have observed that the new partition does not necessarily receive a call for each key/value pair in the multi map, though I believe that it does get called for each key (which seems reasonable to me but is different than the behavior I have observed when not migrating). Is this expected?
>
> Local listeners will not be triggered if an entry is migrated.

In a migration, local listeners don't get ANY calls? I thought it looked like they were being called once per key that migrated (with one of the values), but it could be that my key space is small enough that I was triggering work on them shortly afterward anyway.

Is the intention that they should get called? If not, what is the preferred way to handle migration with local listeners?

Thanks!
--Zack