java.io.IOException: Cannot allocate memory

qiumi...@bytedance.com

Oct 28, 2017, 1:29:32 PM
to Druid User
Hi all,
     Recently I have been using Tranquility to push data to Druid, and I am getting this error from the peon tasks:
     10:42:10.467 [coordinator_handoff_scheduled_0] INFO io.druid.segment.realtime.plumber.CoordinatorBasedSegmentHandoffNotifier - Still waiting for Handoff for Segments : [[SegmentDescriptor{interval=2017-10-28T10:30:00.000Z/2017-10-28T10:40:00.000Z, version='2017-10-28T10:31:28.228Z', partitionNumber=94}]]
10:42:26.222 [toutiao-2017-10-28T10:30:00.000Z-persist-n-merge] ERROR io.druid.segment.realtime.plumber.RealtimePlumber - Failed to persist merged index[toutiao]: {class=io.druid.segment.realtime.plumber.RealtimePlumber, exceptionType=class java.io.IOException, exceptionMessage=Cannot allocate memory, interval=2017-10-28T10:30:00.000Z/2017-10-28T10:40:00.000Z}
java.io.IOException: Cannot allocate memory
	at java.io.FileOutputStream.writeBytes(Native Method) ~[?:1.8.0_131]
	at java.io.FileOutputStream.write(FileOutputStream.java:326) ~[?:1.8.0_131]
	at java.io.BufferedOutputStream.write(BufferedOutputStream.java:122) ~[?:1.8.0_131]
	at com.google.common.io.CountingOutputStream.write(CountingOutputStream.java:53) ~[guava-16.0.1.jar:?]
	at java.io.FilterOutputStream.write(FilterOutputStream.java:97) ~[?:1.8.0_131]
	at io.druid.segment.data.GenericIndexedWriter.write(GenericIndexedWriter.java:144) ~[druid-processing-0.10.1.jar:0.10.1]
	at io.druid.segment.StringDimensionMergerV9.writeMergedValueMetadata(StringDimensionMergerV9.java:172) ~[druid-processing-0.10.1.jar:0.10.1]
	at io.druid.segment.IndexMergerV9.writeDimValueAndSetupDimConversion(IndexMergerV9.java:546) ~[druid-processing-0.10.1.jar:0.10.1]
	at io.druid.segment.IndexMergerV9.makeIndexFiles(IndexMergerV9.java:177) ~[druid-processing-0.10.1.jar:0.10.1]
	at io.druid.segment.IndexMerger.merge(IndexMerger.java:434) ~[druid-processing-0.10.1.jar:0.10.1]
	at io.druid.segment.IndexMerger.mergeQueryableIndex(IndexMerger.java:218) ~[druid-processing-0.10.1.jar:0.10.1]
	at io.druid.segment.IndexMerger.mergeQueryableIndex(IndexMerger.java:206) ~[druid-processing-0.10.1.jar:0.10.1]
	at io.druid.segment.realtime.plumber.RealtimePlumber$2.doRun(RealtimePlumber.java:415) [druid-server-0.10.1.jar:0.10.1]
	at io.druid.common.guava.ThreadRenamingRunnable.run(ThreadRenamingRunnable.java:42) [druid-common-0.10.1.jar:0.10.1]
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) [?:1.8.0_131]
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) [?:1.8.0_131]
	at java.lang.Thread.run(Thread.java:748) [?:1.8.0_131]

   Not all tasks had this error, just one or two out of 160 partitions. My task launch parameters:

druid.indexer.runner.javaOpts=-server -Xmx2g -XX:MaxDirectMemorySize=1500m -XX:+UseG1GC -Duser.timezone=UTC -Dfile.encoding=UTF-8 -Djava.util.logging.manager=org.apache.logging.log4j.jul.LogManager
druid.indexer.fork.property.druid.processing.buffer.sizeBytes=268435456
druid.indexer.fork.property.druid.processing.numThreads=2

   Each partition is only about 400 MB, so the memory I allocated to the peon task should be enough. Please help me, thanks.
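
   For reference, my rough accounting of the memory one peon needs with these settings. This assumes the usual Druid sizing of direct memory needed = (numThreads + numMergeBuffers + 1) * buffer.sizeBytes, and that numMergeBuffers is left at a default of 2; I have not verified those assumptions against 0.10.1, so please correct me if they are wrong:

	heap (-Xmx2g)                                = 2048 MB
	direct memory: (2 + 2 + 1) * 256 MB          = 1280 MB   (within MaxDirectMemorySize=1500m)
	------------------------------------------------------
	nominal per-peon footprint                   ~ 3.3 GB, plus metaspace and native overhead

   So, as far as I can tell, each individual task should stay well within its configured limits.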

Gian Merlino

Nov 2, 2017, 1:54:44 AM
to druid...@googlegroups.com
Wild guess, but you can try applying this patch, which will also be available in Druid 0.11.0: https://github.com/druid-io/druid/pull/4684. It eliminates a possible memory leak during indexing.
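
If it helps, a back-port onto a 0.10.1 source checkout might look roughly like the sketch below (untested; "pr-4684" is just a local branch label I picked here, and since the PR was made against master you may need to resolve conflicts by hand):

	git clone https://github.com/druid-io/druid.git
	cd druid
	git checkout -b 0.10.1-patched druid-0.10.1   # start from the 0.10.1 release tag
	git fetch origin pull/4684/head:pr-4684       # GitHub exposes every PR under pull/<number>/head
	git log --oneline pr-4684                     # the PR's own commits are at the tip of this branch
	git cherry-pick <those commits>               # apply them onto 0.10.1, fixing up any conflicts
	mvn clean package -DskipTests                 # rebuild; the tarball should land under distribution/target/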

If you want to try an already-built distribution, this patch is already included in the Imply distribution available at https://imply.io/get-started. The full list of patches we've applied is in the release notes at https://docs.imply.io/2.3.7/release.

Gian
