java.lang.RuntimeException: java.util.concurrent.ExecutionException: java.lang.OutOfMemoryError: GC


Sudhanshu Lenka

Mar 19, 2018, 5:39:57 AM
to Druid User

Hi all,

My task is failing with "GC overhead limit exceeded".

JVM configuration:

MiddleManager : -Xms1g  -Xmx1g
Historical    : -Xms8g  -Xmx8g
Overlord      : -Xms3g  -Xmx3g
Coordinator   : -Xms3g  -Xmx3g
Broker        : -Xms24g -Xmx24g


Please let me know the best configuration for my production system.

I have 4 machines with 16 cores, 2 machines with 8 cores, and 2 machines with 32 cores. Why is this GC overhead exception happening? I also need to ingest around 3 GB of data every day.


2018-03-19T09:11:52,194 ERROR [task-runner-0-priority-0] io.druid.indexing.overlord.ThreadPoolTaskRunner - Exception while running task[KafkaIndexTask{id=index_kafka_ERS_b54c935c05f2a77_gahfjaam, type=index_kafka, dataSource=ERS}]
java.lang.RuntimeException: java.util.concurrent.ExecutionException: java.lang.OutOfMemoryError: GC overhead limit exceeded
at com.google.common.base.Throwables.propagate(Throwables.java:160) ~[guava-16.0.1.jar:?]
at io.druid.segment.realtime.appenderator.AppenderatorDriver.persist(AppenderatorDriver.java:266) ~[druid-server-0.11.0-iap2.jar:0.11.0-iap2]
at io.druid.indexing.kafka.KafkaIndexTask.run(KafkaIndexTask.java:524) ~[?:?]
at io.druid.indexing.overlord.ThreadPoolTaskRunner$ThreadPoolTaskRunnerCallable.call(ThreadPoolTaskRunner.java:436) [druid-indexing-service-0.11.0-iap2.jar:0.11.0-iap2]
at io.druid.indexing.overlord.ThreadPoolTaskRunner$ThreadPoolTaskRunnerCallable.call(ThreadPoolTaskRunner.java:408) [druid-indexing-service-0.11.0-iap2.jar:0.11.0-iap2]
at java.util.concurrent.FutureTask.run(FutureTask.java:266) [?:1.8.0_161]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) [?:1.8.0_161]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) [?:1.8.0_161]
at java.lang.Thread.run(Thread.java:748) [?:1.8.0_161]
Suppressed: org.apache.kafka.common.KafkaException: Failed to close kafka consumer
at org.apache.kafka.clients.consumer.KafkaConsumer.close(KafkaConsumer.java:1575) ~[?:?]
at org.apache.kafka.clients.consumer.KafkaConsumer.close(KafkaConsumer.java:1526) ~[?:?]
at org.apache.kafka.clients.consumer.KafkaConsumer.close(KafkaConsumer.java:1504) ~[?:?]
at io.druid.indexing.kafka.KafkaIndexTask.run(KafkaIndexTask.java:601) ~[?:?]
at io.druid.indexing.overlord.ThreadPoolTaskRunner$ThreadPoolTaskRunnerCallable.call(ThreadPoolTaskRunner.java:436) [druid-indexing-service-0.11.0-iap2.jar:0.11.0-iap2]
at io.druid.indexing.overlord.ThreadPoolTaskRunner$ThreadPoolTaskRunnerCallable.call(ThreadPoolTaskRunner.java:408) [druid-indexing-service-0.11.0-iap2.jar:0.11.0-iap2]
at java.util.concurrent.FutureTask.run(FutureTask.java:266) [?:1.8.0_161]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) [?:1.8.0_161]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) [?:1.8.0_161]
at java.lang.Thread.run(Thread.java:748) [?:1.8.0_161]
Caused by: java.lang.OutOfMemoryError: GC overhead limit exceeded
Caused by: java.util.concurrent.ExecutionException: java.lang.OutOfMemoryError: GC overhead limit exceeded
at java.util.concurrent.FutureTask.report(FutureTask.java:122) ~[?:1.8.0_161]
at java.util.concurrent.FutureTask.get(FutureTask.java:192) ~[?:1.8.0_161]
at io.druid.segment.realtime.appenderator.AppenderatorDriver.persist(AppenderatorDriver.java:258) ~[druid-server-0.11.0-iap2.jar:0.11.0-iap2]
... 7 more
Caused by: java.lang.OutOfMemoryError: GC overhead limit exceeded
2018-03-19T09:11:52,204 INFO [task-runner-0-priority-0] io.druid.indexing.overlord.TaskRunnerUtils - Task [index_kafka_ERS_b54c935c05f2a77_gahfjaam] status changed to [FAILED].
2018-03-19T09:11:52,207 INFO [task-runner-0-priority-0] io.druid.indexing.worker.executor.ExecutorLifecycle - Task completed with status: {
  "id" : "index_kafka_ERS_b54c935c05f2a77_gahfjaam",
  "status" : "FAILED",
  "duration" : 242158
}

Regards,
Sudhanshu Lenka

Jonathan Wei

Mar 20, 2018, 5:36:02 PM
to druid...@googlegroups.com
The heap size for Kafka indexing tasks is controlled by `druid.indexer.runner.javaOpts`; you'll probably need to adjust the heap settings there. The `-Xmx` values you set for the MiddleManager itself only apply to the MiddleManager process, not to the peon JVMs it spawns to run the indexing tasks.
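For example, you could raise the peon heap in the MiddleManager's `runtime.properties` along these lines (the sizes below are illustrative assumptions, not recommendations; tune them to your data volume and available RAM):

```properties
# MiddleManager runtime.properties (illustrative values)
# JVM options passed to each peon (indexing task) process
druid.indexer.runner.javaOpts=-server -Xms2g -Xmx2g -XX:MaxDirectMemorySize=4g -Duser.timezone=UTC -Dfile.encoding=UTF-8
```

After changing this property, restart the MiddleManager so newly spawned tasks pick up the larger heap.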

