Hi,
Sorry for the late reply. My earlier heartbeat-failure problem turned out to be caused by using a pipeline from multiple threads. I have now added a mutex so that only one pipeline writes into a given map at a time.
After fixing that, I am facing another challenge and would like a suggestion.
Hazelcast Server: Java 4.1.1 Enterprise IMDG
Hazelcast Client: C++ 4.0.0
CLUSTER: 3 nodes, each with:
CPU: 16 cores
RAM: 64 GB
Min heap: 32 GB
Max heap: 32 GB
I am writing ~200 million records into a single map from my client program, running over about 3 hours. After roughly 100 million writes the members start throwing GC/OOM errors, and then the nodes crash.
I want to understand: shouldn't the load be divided equally among all 3 nodes? What setting controls that?
What hardware configuration would I need to hold this much data without such crashes?
Since this is deployed inside a bank's network, we cannot copy-paste logs, configuration, etc.
Thanks!
Abhishek