I have a problem loading a large data set (~60M entries) into a single IMap. Each entry is 122 bytes (22-byte key + 100-byte value), so the full payload is about 7.5 GB (~15 GB with backups).
I used Amazon ECS to run the cluster on 5 m4.large EC2 instances (8 GB RAM per node). The heap size was set to 7 GB for all Hazelcast instances (I also tried 5 GB).
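For completeness, here is my sizing arithmetic as a small sketch. It counts raw payload only and deliberately ignores JVM object headers and Hazelcast's per-entry bookkeeping, which I suspect account for the extra memory:

```java
public class SizeEstimate {
    public static void main(String[] args) {
        long entries = 60_000_000L;
        long entryBytes = 22 + 100;                 // key + value payload, no overhead
        long nodes = 5;
        double rawGb = entries * entryBytes / 1e9;  // raw payload, ~7.3 GB
        double withBackupGb = rawGb * 2;            // backup-count=1 doubles it, ~14.6 GB
        double perNodeGb = withBackupGb / nodes;    // raw payload share per member, ~2.9 GB
        System.out.printf("raw=%.1f GB, with backup=%.1f GB, per node=%.1f GB%n",
                rawGb, withBackupGb, perNodeGb);
    }
}
```

So each member should hold roughly 3 GB of raw payload, well under the 7 GB heap, yet a node still runs out of memory.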
During the map insertion one of the nodes stops abruptly, without any exception or error message, but with the following log:
INFO: [10.0.0.144]:5701 [dev] [3.8.3] processors=2, physical.memory.total=7.8G, physical.memory.free=639.6M, swap.space.total=0, swap.space.free=0, heap.memory.used=5.5G, heap.memory.free=375.6M, heap.memory.total=5.9G, heap.memory.max=6.9G, heap.memory.used/total=93.74%, heap.memory.used/max=80.14%, minor.gc.count=153, minor.gc.time=45618ms, major.gc.count=21, major.gc.time=65026ms, load.process=0.06%, load.system=0.01%, load.systemAverage=2.00%, thread.count=36, thread.peakCount=40, cluster.timeDiff=-3, event.q.size=0, executor.q.async.size=0, executor.q.client.size=0, executor.q.query.size=0, executor.q.scheduled.size=0, executor.q.io.size=0, executor.q.system.size=0, executor.q.operations.size=0, executor.q.priorityOperation.size=0, operations.completed.count=9031259, executor.q.mapLoad.size=0, executor.q.mapLoadAllKeys.size=0, executor.q.cluster.size=0, executor.q.response.size=0, operations.running.count=0, operations.pending.invocations.percentage=0.00%, operations.pending.invocations.count=0, proxy.count=0, clientEndpoint.count=1, connection.active.count=5, client.connection.count=1, connection.count=4
Java HotSpot(TM) 64-Bit Server VM warning: INFO: os::commit_memory(0x00000006f6c00000, 615514112, 0) failed; error='Cannot allocate memory' (errno=12)
#
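For reference, the map currently uses a configuration close to the defaults, roughly this fragment of hazelcast.xml (the map name is illustrative):

```xml
<map name="entries">
    <backup-count>1</backup-count>
    <in-memory-format>BINARY</in-memory-format>
    <eviction-policy>NONE</eviction-policy>
</map>
```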
Does the Hazelcast runtime pre-allocate physical memory? What are the best practices for loading millions of entries into a single map? Which map and cluster configurations are preferred for data of this size?