Hi,

I'm trying to implement a simple persistence solution where each node in the cluster persists its locally owned map entries to its local disk. The problem I'm having: when one node goes down, the other nodes take over responsibility for the failed node's map entries and should persist those entries to disk. Unfortunately, I'm having difficulties identifying which map entries should be stored.

Here is my simplified setup:
- Hazelcast version 2.5
- 2 nodes
- 1 map with 1 sync backup
- MapStore using write-through (write-delay-seconds = 0)

Since each node backs up the other, each node always has all the data, but is only responsible for the locally owned part. During normal operation, each node writes all its locally owned map entries to disk (MapStore.store() is called). So far so good.

Now, when I kill one node, the other node should take over all responsibility, and therefore also write *all* the data to disk (this includes its previously owned map entries as well as the map entries which were previously owned by the killed node). What I would expect is that MapStore.store() is called on the still-running node for the data owned by the killed node. Unfortunately, this does not happen. Why?

What I could also live with is that MigrationEvents are fired, upon which I could react and persist the data. Unfortunately, this does not happen either. Why?

The only events that fire after I kill one node are MembershipEvents. Even though this might not be too nice, I could still try to identify the map entries which should be persisted, but it seems that calling IMap.flush() on a map with a write-through MapStore does not do anything.

It looks like I'm lost! Can you help?

Cheers,
Lukas
--
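For reference, the setup described above corresponds roughly to this Hazelcast 2.x map configuration (a sketch; the map name "myMap" and the MapStore class name are placeholders for your own):

```xml
<map name="myMap">
    <!-- 1 synchronous backup: with 2 nodes, each node holds all the data -->
    <backup-count>1</backup-count>
    <map-store enabled="true">
        <!-- placeholder: your MapStore implementation writing to local disk -->
        <class-name>com.example.LocalDiskMapStore</class-name>
        <!-- 0 = write-through: store() is called synchronously on each put -->
        <write-delay-seconds>0</write-delay-seconds>
    </map-store>
</map>
```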
You received this message because you are subscribed to the Google Groups "Hazelcast" group.
To post to this group, send email to haze...@googlegroups.com.
To unsubscribe from this group, send email to hazelcast+...@googlegroups.com.
Visit this group at http://groups.google.com/group/hazelcast?hl=en-US.
For more options, visit https://groups.google.com/groups/opt_out.
First of all, Hazelcast persistence is designed for centralized data stores. So you will eventually run into problems if you try to simulate distributed storage.
Hmm, that's a shame. Please see the last post in this thread: https://groups.google.com/d/topic/hazelcast/mT7buSyuTFY/discussion where Talip also thinks that distributed storage should be possible.
Why is MapStore.store() not called after migration? Since the data has already been inserted into the centralized data store, Hazelcast avoids inserting the same data twice.
Yep, I see. For centralized storage this makes sense. On the other hand, for distributed storage this would make my life a lot easier :-) If you were to support distributed storage, you could probably add a configuration option to a map (or any other distributed data structure you support) to control whether MapStore.store() should be called whenever a node takes over responsibility for a certain set of keys.
Why can't I listen for migrations? You can listen for migrations, but when a node dies, no migration occurs: the backup data simply becomes the owned data. So, as you already said, you can only listen for membership events, which unfortunately do not tell you which partitions are now re-owned.
Yes, true as well. On the other hand, it would not be difficult to derive the change of owned partitions during/after membership events. E.g., on each node, I could simply remember my owned partitions after a membership event and compute the difference when the next membership event occurs.
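The diff itself is plain set arithmetic. A minimal sketch of the idea (the Hazelcast API calls for obtaining the currently owned partition IDs are omitted; the partition-ID sets here are assumed to come from the PartitionService on each membership event):

```java
import java.util.Arrays;
import java.util.HashSet;
import java.util.Set;

public class PartitionDiff {

    // Returns the partition IDs present in `current` but not in `previous`,
    // i.e. the partitions this node newly owns after a membership change.
    // These are the partitions whose entries would need re-persisting.
    static Set<Integer> newlyOwned(Set<Integer> previous, Set<Integer> current) {
        Set<Integer> diff = new HashSet<>(current);
        diff.removeAll(previous);
        return diff;
    }

    public static void main(String[] args) {
        // Hypothetical example: before the other node died, this node owned
        // partitions 0-2; after the membership event it owns all of 0-5.
        Set<Integer> before = new HashSet<>(Arrays.asList(0, 1, 2));
        Set<Integer> after = new HashSet<>(Arrays.asList(0, 1, 2, 3, 4, 5));
        // Partitions 3, 4, 5 were taken over and should be persisted.
        System.out.println(newlyOwned(before, after));
    }
}
```

Each node would snapshot its owned-partition set inside its MembershipListener and run this diff on the next event.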
Another problem you did not mention: what will you do after a cluster restart? Partitions will be re-distributed, so each node will own different partitions than it did before the restart. Each node's local storage and its partitions will then conflict.
Well, that is surprising! Just to be sure, let me ask again: there is no way to shut down and restart a cluster (using the exact same node layout) and keep the partition distribution? Because if there is not, distributed storage (i.e. local on each node) loses a lot of its appeal...
To summarise: currently Hazelcast persistence is designed for a centralised database, and simulating local data stores is problematic.
Okay. What persistence solution would you recommend then? A relational database (Oracle, MySQL, etc.) is hardly an option, because they do not scale to millions of operations per second.

Cheers,
Lukas
Yes, it could be possible, but the big problem is how to handle the situation after a cluster restart.
Yes, NoSQL ones probably fit better.