Hazelcast memory allocation


Papp István

Aug 10, 2017, 7:53:32 AM
to Hazelcast
Hi All,

I have some problems loading large-size data (~60M entries) into a single IMap. Every entry is 122 bytes (22-byte key + 100-byte value), so the full size is about 7.5 GB (~15 GB with backups).
I used Amazon ECS to run the cluster with 5 m4.large EC2 instances (8 GB RAM per node). The heap size was set to 7 GB for all Hazelcast instances (I tried 5 GB as well).

During the map insertion period one of the nodes simply stops, without an exception or error message, but with the following log:

INFO: [10.0.0.144]:5701 [dev] [3.8.3] processors=2, physical.memory.total=7.8G, physical.memory.free=639.6M, swap.space.total=0, swap.space.free=0, heap.memory.used=5.5G, heap.memory.free=375.6M, heap.memory.total=5.9G, heap.memory.max=6.9G, heap.memory.used/total=93.74%, heap.memory.used/max=80.14%, minor.gc.count=153, minor.gc.time=45618ms, major.gc.count=21, major.gc.time=65026ms, load.process=0.06%, load.system=0.01%, load.systemAverage=2.00%, thread.count=36, thread.peakCount=40, cluster.timeDiff=-3, event.q.size=0, executor.q.async.size=0, executor.q.client.size=0, executor.q.query.size=0, executor.q.scheduled.size=0, executor.q.io.size=0, executor.q.system.size=0, executor.q.operations.size=0, executor.q.priorityOperation.size=0, operations.completed.count=9031259, executor.q.mapLoad.size=0, executor.q.mapLoadAllKeys.size=0, executor.q.cluster.size=0, executor.q.response.size=0, operations.running.count=0, operations.pending.invocations.percentage=0.00%, operations.pending.invocations.count=0, proxy.count=0, clientEndpoint.count=1, connection.active.count=5, client.connection.count=1, connection.count=4
Java HotSpot(TM) 64-Bit Server VM warning: INFO: os::commit_memory(0x00000006f6c00000, 615514112, 0) failed; error='Cannot allocate memory' (errno=12)

Does the Hazelcast runtime pre-allocate physical memory? What are the best practices for loading millions of entries into a single map? What map and cluster configurations are preferred for this size of data?
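For the bulk-load question, a common pattern is feeding the map in fixed-size `putAll` batches instead of one `put` per entry, so each batch costs one round-trip per member rather than one per entry. A minimal sketch follows; the batch size of 10,000 and the `HashMap` stand-in are assumptions so the snippet runs without a cluster, but since `IMap` extends `java.util.Map`, the same helper applies to a Hazelcast map unchanged.

```java
import java.util.HashMap;
import java.util.LinkedHashMap;
import java.util.Map;

public class BatchedLoad {
    // Copy source into target in putAll batches of batchSize entries.
    // Against a Hazelcast IMap, each putAll is a single bulk operation.
    static <K, V> void loadInBatches(Map<K, V> target, Map<K, V> source, int batchSize) {
        Map<K, V> batch = new LinkedHashMap<>();
        for (Map.Entry<K, V> e : source.entrySet()) {
            batch.put(e.getKey(), e.getValue());
            if (batch.size() == batchSize) {
                target.putAll(batch);
                batch.clear();
            }
        }
        if (!batch.isEmpty()) {
            target.putAll(batch); // flush the remainder
        }
    }

    public static void main(String[] args) {
        Map<Integer, String> source = new LinkedHashMap<>();
        for (int i = 0; i < 25; i++) {
            source.put(i, "v" + i);
        }
        Map<Integer, String> target = new HashMap<>(); // stand-in for hz.getMap("...")
        loadInBatches(target, source, 10);
        System.out.println(target.size()); // prints 25
    }
}
```

Batching also smooths the allocation rate on the members, which matters when the heap is as close to physical RAM as it is here.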

Thanks for the response(s)

Best wishes,
PI


PI

Aug 10, 2017, 9:30:15 AM
to Hazelcast
Hi All,

I wanted to load a large amount of data (60M entries) into a single IMap (with 1 backup of each entry). Every entry is 122 bytes (22-byte key + 100-byte value), so the full size of the data was about 15 GB with backups.
I used ECS to run the cluster with 5 m4.large instances (8 GB RAM per node), and the heap size was set to 7 GB for each Hazelcast instance.
During the load, one of the Hazelcast instances failed with no exception or error message, but with the following log:

INFO: [10.0.0.144]:5701 [dev] [3.8.3] processors=2, physical.memory.total=7.8G, physical.memory.free=639.6M, swap.space.total=0, swap.space.free=0, heap.memory.used=5.5G, heap.memory.free=375.6M, heap.memory.total=5.9G, heap.memory.max=6.9G, heap.memory.used/total=93.74%, heap.memory.used/max=80.14%, minor.gc.count=153, minor.gc.time=45618ms, major.gc.count=21, major.gc.time=65026ms, load.process=0.06%, load.system=0.01%, load.systemAverage=2.00%, thread.count=36, thread.peakCount=40, cluster.timeDiff=-3, event.q.size=0, executor.q.async.size=0, executor.q.client.size=0, executor.q.query.size=0, executor.q.scheduled.size=0, executor.q.io.size=0, executor.q.system.size=0, executor.q.operations.size=0, executor.q.priorityOperation.size=0, operations.completed.count=9031259, executor.q.mapLoad.size=0, executor.q.mapLoadAllKeys.size=0, executor.q.cluster.size=0, executor.q.response.size=0, operations.running.count=0, operations.pending.invocations.percentage=0.00%, operations.pending.invocations.count=0, proxy.count=0, clientEndpoint.count=1, connection.active.count=5, client.connection.count=1, connection.count=4
Java HotSpot(TM) 64-Bit Server VM warning: INFO: os::commit_memory(0x00000006f6c00000, 615514112, 0) failed; error='Cannot allocate memory' (errno=12)

Do Hazelcast instances pre-allocate physical memory? If yes, how can we configure the allocation strategy? What are the best practices for estimating Hazelcast memory usage and the necessary cluster size?
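On the pre-allocation question: by default the HotSpot JVM only reserves the full -Xmx address range and lets the OS commit physical pages lazily as the heap grows, so the os::commit_memory failure above (errno=12, ENOMEM) means the OS had no free memory left when the heap tried to expand. The commit behavior is controlled by JVM flags, not by Hazelcast. An illustrative set of options (the sizes are assumptions for an 8 GB box, not an official recommendation):

```
# Illustrative HotSpot options (assumed sizes, not a Hazelcast setting):
-Xms6g -Xmx6g        # equal initial/max heap: no mid-load growth to fail
-XX:+AlwaysPreTouch  # touch every heap page at JVM start, so a commit
                     # failure happens at startup instead of during the load
```

A 7 GB max heap on an 8 GB instance leaves under 1 GB for metaspace, thread stacks, and the OS itself, which fits the ENOMEM seen here.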

Thanks for the help!
Best wishes,
PI

Peter Veentjer

Aug 10, 2017, 10:46:55 AM
to haze...@googlegroups.com
There is quite a lot of overhead for a map entry. I believe it is more than 200 bytes; at least it was that amount the last time I checked.

So a single entry's memory consumption is (at least) 200 + 122 = 322 bytes.

Total data use without backups:

60M * 322 B = 19.3 GB

With backups that is 38.6 GB.

Across the 5-node cluster in question that gives 7.7 GB per member.

So your 7 GB of heap isn't sufficient.
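The arithmetic above can be reproduced as a quick sizing sketch. The 200 B per-entry overhead is the estimate from this reply, not an official Hazelcast figure, and the 5 members match the cluster from the original posts.

```java
public class MapSizing {
    public static void main(String[] args) {
        long entries = 60_000_000L;
        long payloadBytes = 22 + 100;   // key + value
        long overheadBytes = 200;       // assumed per-entry on-heap overhead
        int backups = 1;
        int members = 5;                // cluster size from the original posts

        long perEntry = payloadBytes + overheadBytes;
        double totalGb = (double) entries * perEntry * (1 + backups) / 1e9;
        double perMemberGb = totalGb / members;

        System.out.printf("per entry: %d B%n", perEntry);        // 322 B
        System.out.printf("with backups: %.1f GB%n", totalGb);   // 38.6 GB
        System.out.printf("per member: %.1f GB%n", perMemberGb); // 7.7 GB
    }
}
```

Either way the conclusion holds: the required heap per member meets or exceeds the 7 GB configured, before GC headroom is even considered.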



