2 nodes, 1 superclient.
The MapStore writes to a (shared) DB. Write-delay is set to 60 seconds, backup count to 1.
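(For reference, a minimal sketch of the member config described above, in Hazelcast 2.x API style; the map name "test" and the MapStore class are assumptions for illustration:)

import com.hazelcast.config.Config;
import com.hazelcast.config.MapConfig;
import com.hazelcast.config.MapStoreConfig;
import com.hazelcast.core.Hazelcast;
import com.hazelcast.core.HazelcastInstance;

public class NodeSetup {
    public static void main(String[] args) {
        MapStoreConfig storeConfig = new MapStoreConfig();
        storeConfig.setEnabled(true);
        storeConfig.setClassName("com.example.DbMapStore"); // hypothetical MapStore writing to the shared DB
        storeConfig.setWriteDelaySeconds(60);               // write-behind with a 60-second delay

        MapConfig mapConfig = new MapConfig("test");
        mapConfig.setBackupCount(1);
        mapConfig.setMapStoreConfig(storeConfig);

        Config config = new Config();
        config.addMapConfig(mapConfig);

        // Run this on both nodes; the superclient is started the same way,
        // but with -Dhazelcast.super.client=true set.
        HazelcastInstance node = Hazelcast.newHazelcastInstance(config);
    }
}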
1. Start both nodes and wait until they have found each other and shared partitioning info.
2. Start the superclient and use it to insert 10 million values into a map. Both nodes insert the values into the DB using storeAll (see the sketch after this list).
3. Wait for everything to be written to the DB, observe the last call to storeAll, then wait another 5 minutes to be sure.
4. getLifecycleService().shutdown() on the superclient; shutdown is immediate.
5. getLifecycleService().shutdown() on one node; shutdown is immediate.
Here is the question/problem:
6. getLifecycleService().shutdown() on the remaining node. Watch as it proceeds to do a storeAll(10 million values).
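(For concreteness, a rough sketch of steps 2-6; node1, node2 and superClient are the HazelcastInstances from the setup above, and com.hazelcast.core.IMap is imported:)

static void runTest(HazelcastInstance superClient,
                    HazelcastInstance node1,
                    HazelcastInstance node2) {
    IMap<Long, String> map = superClient.getMap("test");
    for (long i = 0; i < 10_000_000L; i++) {
        map.put(i, "value-" + i); // write-behind: the owning nodes batch these into storeAll
    }
    // ... wait until the last storeAll has run (step 3) ...
    superClient.getLifecycleService().shutdown(); // step 4: returns immediately
    node1.getLifecycleService().shutdown();       // step 5: returns immediately
    node2.getLifecycleService().shutdown();       // step 6: blocks in storeAll(10 million values)
}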
Is this how it should be? If yes... assuming there are 20 nodes instead, and 1 billion values instead of 10 million... then shutting down all nodes would take a very long time indeed, exceeding the default value of hazelcast.graceful.shutdown.max.wait (600 seconds). I'm not sure what happens then, but I'm guessing it would be bad?
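(For reference, that property can also be set programmatically; a sketch with an arbitrary value:)

Config config = new Config();
config.setProperty("hazelcast.graceful.shutdown.max.wait", "3600"); // seconds; the default is 600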
Does it have something to do with the write-delay? Surely it can't think that the entire map is dirty, since the entries have already been stored and no changes have occurred since?
--
By the way, in the new version 3.0, this problem will not exist.
Why? Do you already have plans for what will change in the persistence area with version 3.0? (I looked for something like a roadmap, but could not find anything.)

Thanks,
Lukas
--
That's good. :)
A somewhat related question: disaster recovery with no geographical redundancy.
A DB-backed Hazelcast cluster using MapStore, with a write-delay set.
The DB explodes and there are no backups... nothing. We start over with an empty database, but all Hazelcast instances are still running.
Currently I think this can be recovered from by simply shutting down all Hazelcast instances: everything gets written to the MapStore when the last node shuts down (as it does now).
However, if 3.0 stops writing everything to the MapStore on last-node shutdown... an alternative would be to connect a new client and have it issue a command to mark everything as dirty, then issue a flush.
Right now, the only way I can think of to do this would be to lock the map, iterate through it, and manually write everything to the DB. Any better ideas? :)
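Something like this, I suppose (a sketch; "store" is a hypothetical DAO that writes straight to the new, empty DB):

static void restoreAll(HazelcastInstance hz, DbMapStore store) {
    IMap<Long, String> map = hz.getMap("test");
    for (Long key : map.keySet()) {
        map.lock(key);                      // keep the entry stable while we copy it
        try {
            store.store(key, map.get(key)); // manual write to the DB
        } finally {
            map.unlock(key);
        }
    }
}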
Oh, and in case I misunderstood your question: this does not happen with single-node clusters (as far as I remember from testing).