class CompositeKey {
String fieldOne;
String fieldTwo;
}
map.put(new CompositeKey(fieldOne, fieldTwo), rowObject);
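Since the composite key is used as a map key, it needs equals() and hashCode(), and Hazelcast keys must be serializable. A minimal sketch of what the class above would look like filled out:

```java
import java.io.Serializable;
import java.util.Objects;

// Map keys are compared by equals()/hashCode() and serialized over the
// wire, so a composite key needs all three pieces.
class CompositeKey implements Serializable {
    private final String fieldOne;
    private final String fieldTwo;

    CompositeKey(String fieldOne, String fieldTwo) {
        this.fieldOne = fieldOne;
        this.fieldTwo = fieldTwo;
    }

    @Override
    public boolean equals(Object o) {
        if (this == o) return true;
        if (!(o instanceof CompositeKey)) return false;
        CompositeKey other = (CompositeKey) o;
        return Objects.equals(fieldOne, other.fieldOne)
            && Objects.equals(fieldTwo, other.fieldTwo);
    }

    @Override
    public int hashCode() {
        return Objects.hash(fieldOne, fieldTwo);
    }
}
```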
Besides these:
1. Make sure that when your cluster starts, each member owns the same
number of partitions. Otherwise the data may not fit in the memory of
the node holding/owning most of the partitions.
2. Make sure you have enough memory per machine. Calculate each entry's
cost = (backup-count + 1) x (key.binary.size + value.binary.size +
440 bytes).
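As a worked example of that formula (the key/value sizes and entry count here are assumptions for illustration, not measurements):

```java
// Per-entry cost from the formula above:
// (backup-count + 1) x (key bytes + value bytes + 440 bytes overhead)
class CapacityEstimate {
    static long entryCost(int backupCount, long keyBytes, long valueBytes) {
        return (backupCount + 1L) * (keyBytes + valueBytes + 440L);
    }

    static long totalCost(long entryCount, int backupCount,
                          long keyBytes, long valueBytes) {
        return entryCount * entryCost(backupCount, keyBytes, valueBytes);
    }
}
```

With one backup, a 60-byte key, and a 1000-byte value, each entry costs 2 x (60 + 1000 + 440) = 3000 bytes, so a million rows need roughly 3 GB across the cluster.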
This will help, but here is a faster way of doing it:
On each node, read all the key fields (fieldOne, fieldTwo) from the database.
For each key:
Create theKey = new CompositeKey(fieldOne, fieldTwo)
Check whether theKey is locally owned:
Hazelcast.getPartitionService().getPartition(theKey).getOwner().localMember()
If it is locally owned, read the entire row and put it into the map:
map.put(theKey, rowObject);
We have done this for millions of rows. The benefit is that every node
puts only the keys it owns locally, so the cost of each put is minimal.
Every node participates in the loading process and does only local
puts. You just have to make sure that no partition migration happens
during that process (listen for migration events); if one does, loop
through all the keys one more time and check that each exists.
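The loading loop above can be sketched as follows. A real cluster would use Hazelcast's getPartitionService() call for the ownership check; here ownership is simulated by hashing the key across a node count so the sketch is self-contained, and the in-memory "database" map stands in for the real row reads:

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

class LocalLoader {
    // Simulated ownership check. On a real cluster this would be:
    // Hazelcast.getPartitionService().getPartition(key).getOwner().localMember()
    static boolean isLocallyOwned(String key, int nodeIndex, int nodeCount) {
        return Math.floorMod(key.hashCode(), nodeCount) == nodeIndex;
    }

    // Each node scans all keys but reads and puts only the rows it owns.
    static Map<String, String> loadOwnedRows(List<String> allKeys,
                                             Map<String, String> database,
                                             int nodeIndex, int nodeCount) {
        Map<String, String> map = new HashMap<>();
        for (String key : allKeys) {
            if (isLocallyOwned(key, nodeIndex, nodeCount)) {
                // Only now read the full row; unowned keys are skipped cheaply.
                map.put(key, database.get(key));
            }
        }
        return map;
    }
}
```

Run across all nodes, the per-node maps are disjoint and together cover every row, which is what makes the parallel load cheap: no node ever does a remote put.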
-talip