java.lang.OutOfMemoryError: GC overhead limit exceeded


Dinesh Kumar

Feb 5, 2020, 5:30:18 AM
to Hazelcast
Hi All,

In my application I am using Hazelcast to store some session details. Below is my Hazelcast configuration:

@Bean
  public Config hazelcastConfig() {
    String managementCenterURL = env.getProperty(IDManagementConstants.MANAGEMENT_CENTER_URL);
    ManagementCenterConfig managementCenterConfig = new ManagementCenterConfig();
    managementCenterConfig.setEnabled(true);
    managementCenterConfig.setUrl(managementCenterURL);
    NetworkConfig nConfig = new NetworkConfig().setPort(5701);
    MapConfig mapConfig =
        new MapConfig().setEvictionPolicy(EvictionPolicy.LRU).setTimeToLiveSeconds(
            Integer.parseInt(env.getProperty(IDManagementConstants.SESSION_EVICTION_TIME)));
    Config config = new Config();
    config.setManagementCenterConfig(managementCenterConfig);
    config.setNetworkConfig(nConfig);
    config.getGroupConfig().setName(env.getProperty(IDManagementConstants.HAZELCAST_CLUSTER_NAME));
    config.setInstanceName("Session").getMapConfigs().put("IDManagement", mapConfig);
    return config;
  }

I am getting the error below every 2 or 3 days after starting the application:

2020-01-30 08:01:06.816  INFO 17480 --- [n.HealthMonitor] c.h.internal.diagnostics.HealthMonitor   : [10.2.233.99]:5703 [authenticate6] [3.11.1] processors=4, physical.memory.total=16.0G, physical.memory.free=8.6G, swap.space.total=18.4G, swap.space.free=9.2G, heap.memory.used=199.5M, heap.memory.free=28.5M, heap.memory.total=228.0M, heap.memory.max=228.0M, heap.memory.used/total=87.51%, heap.memory.used/max=87.51%, minor.gc.count=278, minor.gc.time=6442ms, major.gc.count=48, major.gc.time=22676ms, load.process=14.04%, load.system=29.38%, load.systemAverage=n/a thread.count=80, thread.peakCount=96, cluster.timeDiff=-41, event.q.size=0, executor.q.async.size=0, executor.q.client.size=0, executor.q.query.size=0, executor.q.scheduled.size=0, executor.q.io.size=0, executor.q.system.size=0, executor.q.operations.size=0, executor.q.priorityOperation.size=0, operations.completed.count=1443646, executor.q.mapLoad.size=0, executor.q.mapLoadAllKeys.size=0, executor.q.cluster.size=0, executor.q.response.size=0, operations.running.count=0, operations.pending.invocations.percentage=0.00%, operations.pending.invocations.count=0, proxy.count=0, clientEndpoint.count=0, connection.active.count=28, client.connection.count=0, connection.count=21
2020-01-30 08:01:40.611  WARN 17480 --- [MC.State.Sender] c.h.i.m.ManagementCenterService          : [10.2.233.99]:5703 [authenticate6] [3.11.1] Hazelcast Management Center Service will be shutdown due to exception.
java.lang.OutOfMemoryError: GC overhead limit exceeded
 at java.lang.Class.getDeclaredMethods0(Native Method) ~[na:1.8.0_144]
 at java.lang.Class.privateGetDeclaredMethods(Unknown Source) ~[na:1.8.0_144]
 at java.lang.Class.privateGetMethodRecursive(Unknown Source) ~[na:1.8.0_144]
 at java.lang.Class.getMethod0(Unknown Source) ~[na:1.8.0_144]
 at java.lang.Class.getMethod(Unknown Source) ~[na:1.8.0_144]
 at com.hazelcast.internal.management.TimedMemberStateFactoryHelper.get(TimedMemberStateFactoryHelper.java:153) ~[hazelcast-3.11.1.jar!/:3.11.1]
 at com.hazelcast.internal.management.TimedMemberStateFactoryHelper.createRuntimeProps(TimedMemberStateFactoryHelper.java:133) ~[hazelcast-3.11.1.jar!/:3.11.1]
 at com.hazelcast.internal.management.TimedMemberStateFactory.createMemberState(TimedMemberStateFactory.java:190) ~[hazelcast-3.11.1.jar!/:3.11.1]
 at com.hazelcast.internal.management.TimedMemberStateFactory.createTimedMemberState(TimedMemberStateFactory.java:125) ~[hazelcast-3.11.1.jar!/:3.11.1]
 at com.hazelcast.internal.management.ManagementCenterService$PrepareStateThread.run(ManagementCenterService.java:420) ~[hazelcast-3.11.1.jar!/:3.11.1]
2020-01-30 08:02:28.619 ERROR 17480 --- [IO.thread-out-1] h.n.t.TcpIpConnectionChannelErrorHandler : [10.2.233.99]:5703 [authenticate6] [3.11.1] GC overhead limit exceeded
java.lang.OutOfMemoryError: GC overhead limit exceeded
2020-01-30 08:02:42.029 ERROR 17480 --- [ration.thread-0] c.h.s.i.o.impl.OperationExecutorImpl     : [10.2.233.99]:5703 [authenticate6] [3.11.1] Failed to process: com.hazelcast.spi.impl.operationexecutor.impl.TaskBatch@1f2947d on: hz.Session.partition-operation.thread-0
java.lang.OutOfMemoryError: GC overhead limit exceeded
2020-01-30 08:04:55.923 ERROR 17480 --- [alina-utility-1] org.apache.catalina.core.ContainerBase   : Exception processing background thread
java.util.concurrent.ExecutionException: java.lang.OutOfMemoryError: GC overhead limit exceeded
 at java.util.concurrent.FutureTask.report(Unknown Source) [na:1.8.0_144]
 at java.util.concurrent.FutureTask.get(Unknown Source) [na:1.8.0_144]
 at org.apache.catalina.core.ContainerBase.threadStart(ContainerBase.java:1269) ~[tomcat-embed-core-9.0.16.jar!/:9.0.16]
 at org.apache.catalina.core.ContainerBase$ContainerBackgroundProcessorMonitor.run(ContainerBase.java:1315) [tomcat-embed-core-9.0.16.jar!/:9.0.16]
 at java.util.concurrent.Executors$RunnableAdapter.call(Unknown Source) [na:1.8.0_144]
 at java.util.concurrent.FutureTask.runAndReset(Unknown Source) [na:1.8.0_144]
 at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(Unknown Source) [na:1.8.0_144]
 at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(Unknown Source) [na:1.8.0_144]
 at java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source) [na:1.8.0_144]
 at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source) [na:1.8.0_144]
 at org.apache.tomcat.util.threads.TaskThread$WrappingRunnable.run(TaskThread.java:61) [tomcat-embed-core-9.0.16.jar!/:9.0.16]
 at java.lang.Thread.run(Unknown Source) [na:1.8.0_144]
Caused by: java.lang.OutOfMemoryError: GC overhead limit exceeded
2020-01-30 08:04:32.457 ERROR 17480 --- [ration.thread-1] c.h.s.i.o.impl.OperationExecutorImpl     : [10.2.233.99]:5703 [authenticate6] [3.11.1] Failed to process: com.hazelcast.spi.impl.operationexecutor.impl.TaskBatch@cb50f44 on: hz.Session.partition-operation.thread-1
java.lang.OutOfMemoryError: GC overhead limit exceeded
2020-01-30 08:04:23.129 ERROR 17480 --- [IO.thread-out-2] h.n.t.TcpIpConnectionChannelErrorHandler : [10.2.233.99]:5703 [authenticate6] [3.11.1] GC overhead limit exceeded
java.lang.OutOfMemoryError: GC overhead limit exceeded
2020-01-30 08:04:18.372 ERROR 17480 --- [ration.thread-1] c.h.s.i.o.impl.OperationExecutorImpl     : [10.2.233.99]:5703 [authenticate6] [3.11.1] GC overhead limit exceeded

Can anyone please help me resolve this issue?

Regards,
Dinesh

Enes Akar

Feb 5, 2020, 11:53:31 AM
to Hazelcast
It looks like your Hazelcast instance requires more memory than you have reserved. You should increase the heap size that you reserve for the Hazelcast instance.
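For reference, heap size is controlled with the standard JVM flags; a minimal sketch, assuming the application ships as an executable jar (the `app.jar` name is a placeholder):

```shell
# Raise the heap from the ~228 MB shown in the health log to a fixed 1 GB.
# Setting -Xms equal to -Xmx avoids heap-resizing pauses under load.
java -Xms1g -Xmx1g -jar app.jar
```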

--
You received this message because you are subscribed to the Google Groups "Hazelcast" group.
To unsubscribe from this group and stop receiving emails from it, send an email to hazelcast+...@googlegroups.com.
To view this discussion on the web visit https://groups.google.com/d/msgid/hazelcast/25365ea4-5479-4b5d-bf4a-29dfe5704dad%40googlegroups.com.


--

Enes Akar
CTO, Hazelcast Cloud


Dimitri Rakitine

Feb 5, 2020, 9:27:06 PM
to Hazelcast
>  I am getting the error below every 2 or 3 days after starting the application,

If it happens every 2-3 days, why not enable a heap dump on OOME and see what is going on? If there is a leak, increasing the heap size will only prolong the inevitable.
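For anyone following along, the heap-dump-on-OOME behaviour is enabled with standard HotSpot flags; a sketch, where the dump path and jar name are placeholders:

```shell
# Dump the heap to a file the moment an OutOfMemoryError is thrown,
# so the dominant objects can be inspected afterwards.
java -Xmx1g \
     -XX:+HeapDumpOnOutOfMemoryError \
     -XX:HeapDumpPath=/var/log/heapdumps \
     -jar app.jar
```

The resulting `.hprof` file can then be opened in a tool such as Eclipse MAT or VisualVM to see which objects retain most of the heap.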

Enes Akar

Feb 5, 2020, 11:45:02 PM
to haze...@googlegroups.com
Dimitri is right. If it is still happening despite an increased heap, then you need to check the heap dump for a possible memory leak.


Dimitri Rakitine

Feb 6, 2020, 12:00:34 AM
to Hazelcast
That's a bit of a misquote. If an OOME happens on a daily or bi-daily basis, it definitely warrants an investigation.


On Wednesday, February 5, 2020 at 11:45:02 PM UTC-5, Enes Akar wrote:
Dimitri is right. If it is still happening despite an increased heap, then you need to check the heap dump for a possible memory leak.

Ahmet Mircik

Feb 6, 2020, 1:42:40 AM
to Hazelcast

Hi Dinesh,
I want to point out a potential cause of the OOME. In your map configuration, you set an eviction policy but did not set a max size. When you don't set it, the map uses Integer.MAX_VALUE as the default, which means the map only starts evicting after reaching Integer.MAX_VALUE entries on a single node. If setting the max size to Integer.MAX_VALUE is not intentional, I recommend configuring a smaller max size to be safe from OOME.

MapConfig mapConfig =
        new MapConfig().setEvictionPolicy(EvictionPolicy.LRU).setTimeToLiveSeconds(
            Integer.parseInt(env.getProperty(IDManagementConstants.SESSION_EVICTION_TIME)));
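In Hazelcast 3.x terms, that could look like the following sketch; the 10,000-entry limit, the hard-coded TTL, and the PER_NODE policy are illustrative placeholders, not recommendations:

```java
import com.hazelcast.config.EvictionPolicy;
import com.hazelcast.config.MapConfig;
import com.hazelcast.config.MaxSizeConfig;

// Same map config as before, but with an explicit per-node entry cap so
// LRU eviction kicks in long before the heap is exhausted.
MapConfig mapConfig = new MapConfig()
    .setEvictionPolicy(EvictionPolicy.LRU)
    .setTimeToLiveSeconds(1800) // your SESSION_EVICTION_TIME value
    .setMaxSizeConfig(new MaxSizeConfig(10_000, MaxSizeConfig.MaxSizePolicy.PER_NODE));
```

MaxSizeConfig also supports heap-relative policies such as USED_HEAP_PERCENTAGE if a fixed entry count is hard to estimate.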

Guido Medina

Feb 6, 2020, 3:33:53 AM
to haze...@googlegroups.com
To exclude the possibility of a one-in-a-million scenario being triggered by your environment, I would do the following:
- Update your JDK 8. I see you are using update 144; the latest free Oracle JDK 8 is update 212, and the latest OpenJDK 8 is around update 242.
- Update your Hazelcast from 3.11.1 to 3.11.6. Who knows, maybe your one-in-a-million environment uncovered a bug? Not saying you did, but keeping everything as patched as possible is a good way to rule out known bugs.
- Can you share your JVM parameters? I have seen plenty of OOMs, but never such a weird error. Lately I have seen people go crazy with GC threads; I saw a configuration the other day with 22 GC threads, and I honestly can't guess where people get the idea that the garbage collector needs so many threads, especially when using G1GC.

Guido Medina

Feb 6, 2020, 3:45:35 AM
to haze...@googlegroups.com
It could also be a Tomcat problem; go to version 9.0.30.

Can Gencer

Feb 6, 2020, 4:01:26 AM
to haze...@googlegroups.com
"heap.memory.total=228.0M, heap.memory.max=228.0M" — to me it seems like a total heap of 228 MB is too little for Hazelcast plus Spring.

Guido Medina

Feb 6, 2020, 4:22:17 AM
to haze...@googlegroups.com
Yeah, that heap size is way too low. Reading your log, it looks like you are using Spring Boot 2.1.3, which depends on Hazelcast 3.11.1 and Tomcat 9.0.16.
Upgrading to Spring Boot 2.1.12 will also bump Hazelcast to 3.11.6 and Tomcat to 9.0.30. You also want at least 1 GB of heap to be safe.
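If the project uses the Spring Boot starter parent in Maven, that bump is a one-line change; a sketch of the relevant pom.xml fragment:

```xml
<!-- pom.xml: moving the parent version pulls in the managed
     Hazelcast and Tomcat versions along with it -->
<parent>
  <groupId>org.springframework.boot</groupId>
  <artifactId>spring-boot-starter-parent</artifactId>
  <version>2.1.12.RELEASE</version>
</parent>
```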

Saikat Sur

Feb 7, 2020, 4:32:31 AM
to Hazelcast
You have scope to increase the heap size here, as you have 8.6 G of free physical memory. Please try increasing the heap to, say, 2 GB. Also try hazelcast.mc.cache.max.size=512 (the default is 768), and allocate 75 to 80% of total memory for heap use; at present it is too high at 87.51%.

Try once with JAVA_OPTS='-Dhazelcast.mc.cache.max.size=512 -XX:MaxRAMPercentage=80' -m 2048m

.. Saikat