java.lang.OutOfMemoryError: Direct buffer memory


Zhihui Jiao

Aug 28, 2016, 11:07:44 PM
to Druid User
A batch index task failed with java.lang.OutOfMemoryError: Direct buffer memory on version 0.9.1.1, with this MiddleManager config:

# Task launch parameters
druid.indexer.runner.javaCommand=/usr/lib/jvm/java-7-openjdk-amd64/bin/java
druid.indexer.runner.javaOpts=-server -Xmx1g -Duser.timezone=UTC -Dfile.encoding=UTF-8 -Djava.util.logging.manager=org.apache.logging.log4j.jul.LogManager
druid.indexer.task.baseTaskDir=/srv/nbs/0/druid/task/

# HTTP server threads
druid.server.http.numThreads=25

# Processing threads and buffers
druid.processing.buffer.sizeBytes=256870912
druid.processing.numThreads=2

The exception log:

2016-08-28T18:16:48,145 ERROR [task-runner-0-priority-0] io.druid.indexing.overlord.ThreadPoolTaskRunner - Uncaught Throwable while running task[IndexTask{id=index_test_datasource_2016-08-28T17:19:00.934Z, type=index, dataSource=test_datasource}]
java.lang.OutOfMemoryError: Direct buffer memory
at java.nio.Bits.reserveMemory(Bits.java:631) ~[?:1.7.0_111]
at java.nio.DirectByteBuffer.<init>(DirectByteBuffer.java:123) ~[?:1.7.0_111]
at java.nio.ByteBuffer.allocateDirect(ByteBuffer.java:306) ~[?:1.7.0_111]
at io.druid.segment.CompressedPools$4.get(CompressedPools.java:100) ~[druid-processing-0.9.1.1.jar:0.9.1.1]
at io.druid.segment.CompressedPools$4.get(CompressedPools.java:93) ~[druid-processing-0.9.1.1.jar:0.9.1.1]
at io.druid.collections.StupidPool.take(StupidPool.java:64) ~[druid-common-0.9.1.1.jar:0.9.1.1]
at io.druid.segment.CompressedPools.getByteBuf(CompressedPools.java:108) ~[druid-processing-0.9.1.1.jar:0.9.1.1]
at io.druid.segment.data.CompressedObjectStrategy.fromByteBuffer(CompressedObjectStrategy.java:286) ~[druid-processing-0.9.1.1.jar:0.9.1.1]
at io.druid.segment.data.CompressedObjectStrategy.fromByteBuffer(CompressedObjectStrategy.java:42) ~[druid-processing-0.9.1.1.jar:0.9.1.1]
at io.druid.segment.data.GenericIndexed$BufferIndexed._get(GenericIndexed.java:225) ~[druid-processing-0.9.1.1.jar:0.9.1.1]
at io.druid.segment.data.GenericIndexed$1.get(GenericIndexed.java:300) ~[druid-processing-0.9.1.1.jar:0.9.1.1]
at io.druid.segment.data.CompressedVSizeIntsIndexedSupplier$CompressedVSizeIndexedInts.loadBuffer(CompressedVSizeIntsIndexedSupplier.java:383) ~[druid-processing-0.9.1.1.jar:0.9.1.1]
at io.druid.segment.data.CompressedVSizeIntsIndexedSupplier$CompressedVSizeIndexedInts.get(CompressedVSizeIntsIndexedSupplier.java:344) ~[druid-processing-0.9.1.1.jar:0.9.1.1]
at io.druid.segment.column.SimpleDictionaryEncodedColumn.getSingleValueRow(SimpleDictionaryEncodedColumn.java:65) ~[druid-processing-0.9.1.1.jar:0.9.1.1]
at io.druid.segment.QueryableIndexIndexableAdapter$2$1.next(QueryableIndexIndexableAdapter.java:257) ~[druid-processing-0.9.1.1.jar:0.9.1.1]
at io.druid.segment.QueryableIndexIndexableAdapter$2$1.next(QueryableIndexIndexableAdapter.java:177) ~[druid-processing-0.9.1.1.jar:0.9.1.1]
at com.google.common.collect.TransformedIterator.next(TransformedIterator.java:48) ~[guava-16.0.1.jar:?]
at com.google.common.collect.Iterators$PeekingImpl.peek(Iterators.java:1162) ~[guava-16.0.1.jar:?]
at com.metamx.common.guava.MergeIterator$1.compare(MergeIterator.java:46) ~[java-util-0.27.9.jar:?]
at com.metamx.common.guava.MergeIterator$1.compare(MergeIterator.java:42) ~[java-util-0.27.9.jar:?]
at java.util.PriorityQueue.siftUpUsingComparator(PriorityQueue.java:649) ~[?:1.7.0_111]
at java.util.PriorityQueue.siftUp(PriorityQueue.java:627) ~[?:1.7.0_111]
at java.util.PriorityQueue.offer(PriorityQueue.java:329) ~[?:1.7.0_111]
at java.util.PriorityQueue.add(PriorityQueue.java:306) ~[?:1.7.0_111]
at com.metamx.common.guava.MergeIterator.<init>(MergeIterator.java:55) ~[java-util-0.27.9.jar:?]
at com.metamx.common.guava.MergeIterable.iterator(MergeIterable.java:49) ~[java-util-0.27.9.jar:?]
at io.druid.collections.CombiningIterable.iterator(CombiningIterable.java:95) ~[druid-common-0.9.1.1.jar:0.9.1.1]
at io.druid.segment.IndexMergerV9.mergeIndexesAndWriteColumns(IndexMergerV9.java:680) ~[druid-processing-0.9.1.1.jar:0.9.1.1]
at io.druid.segment.IndexMergerV9.makeIndexFiles(IndexMergerV9.java:222) ~[druid-processing-0.9.1.1.jar:0.9.1.1]
at io.druid.segment.IndexMerger.merge(IndexMerger.java:423) ~[druid-processing-0.9.1.1.jar:0.9.1.1]
at io.druid.segment.IndexMerger.mergeQueryableIndex(IndexMerger.java:244) ~[druid-processing-0.9.1.1.jar:0.9.1.1]
at io.druid.segment.IndexMerger.mergeQueryableIndex(IndexMerger.java:217) ~[druid-processing-0.9.1.1.jar:0.9.1.1]
at io.druid.indexing.common.index.YeOldePlumberSchool$1.finishJob(YeOldePlumberSchool.java:191) ~[druid-indexing-service-0.9.1.1.jar:0.9.1.1]
at io.druid.indexing.common.task.IndexTask.generateSegment(IndexTask.java:415) ~[druid-indexing-service-0.9.1.1.jar:0.9.1.1]
at io.druid.indexing.common.task.IndexTask.run(IndexTask.java:221) ~[druid-indexing-service-0.9.1.1.jar:0.9.1.1]
at io.druid.indexing.overlord.ThreadPoolTaskRunner$ThreadPoolTaskRunnerCallable.call(ThreadPoolTaskRunner.java:436) [druid-indexing-service-0.9.1.1.jar:0.9.1.1]
at io.druid.indexing.overlord.ThreadPoolTaskRunner$ThreadPoolTaskRunnerCallable.call(ThreadPoolTaskRunner.java:408) [druid-indexing-service-0.9.1.1.jar:0.9.1.1]
at java.util.concurrent.FutureTask.run(FutureTask.java:262) [?:1.7.0_111]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) [?:1.7.0_111]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) [?:1.7.0_111]
at java.lang.Thread.run(Thread.java:745) [?:1.7.0_111]
2016-08-28T18:16:48,152 ERROR [main] io.druid.cli.CliPeon - Error when starting up.  Failing.
java.lang.RuntimeException: java.util.concurrent.ExecutionException: java.lang.OutOfMemoryError: Direct buffer memory
at com.google.common.base.Throwables.propagate(Throwables.java:160) ~[guava-16.0.1.jar:?]
at io.druid.indexing.worker.executor.ExecutorLifecycle.join(ExecutorLifecycle.java:211) ~[druid-indexing-service-0.9.1.1.jar:0.9.1.1]
at io.druid.cli.CliPeon.run(CliPeon.java:287) [druid-services-0.9.1.1.jar:0.9.1.1]
at io.druid.cli.Main.main(Main.java:105) [druid-services-0.9.1.1.jar:0.9.1.1]
Caused by: java.util.concurrent.ExecutionException: java.lang.OutOfMemoryError: Direct buffer memory
at com.google.common.util.concurrent.AbstractFuture$Sync.getValue(AbstractFuture.java:299) ~[guava-16.0.1.jar:?]
at com.google.common.util.concurrent.AbstractFuture$Sync.get(AbstractFuture.java:286) ~[guava-16.0.1.jar:?]
at com.google.common.util.concurrent.AbstractFuture.get(AbstractFuture.java:116) ~[guava-16.0.1.jar:?]
at io.druid.indexing.worker.executor.ExecutorLifecycle.join(ExecutorLifecycle.java:208) ~[druid-indexing-service-0.9.1.1.jar:0.9.1.1]
... 2 more
Caused by: java.lang.OutOfMemoryError: Direct buffer memory
at java.nio.Bits.reserveMemory(Bits.java:631) ~[?:1.7.0_111]
at java.nio.DirectByteBuffer.<init>(DirectByteBuffer.java:123) ~[?:1.7.0_111]
at java.nio.ByteBuffer.allocateDirect(ByteBuffer.java:306) ~[?:1.7.0_111]
at io.druid.segment.CompressedPools$4.get(CompressedPools.java:100) ~[druid-processing-0.9.1.1.jar:0.9.1.1]
at io.druid.segment.CompressedPools$4.get(CompressedPools.java:93) ~[druid-processing-0.9.1.1.jar:0.9.1.1]
at io.druid.collections.StupidPool.take(StupidPool.java:64) ~[druid-common-0.9.1.1.jar:0.9.1.1]
at io.druid.segment.CompressedPools.getByteBuf(CompressedPools.java:108) ~[druid-processing-0.9.1.1.jar:0.9.1.1]
at io.druid.segment.data.CompressedObjectStrategy.fromByteBuffer(CompressedObjectStrategy.java:286) ~[druid-processing-0.9.1.1.jar:0.9.1.1]
at io.druid.segment.data.CompressedObjectStrategy.fromByteBuffer(CompressedObjectStrategy.java:42) ~[druid-processing-0.9.1.1.jar:0.9.1.1]
at io.druid.segment.data.GenericIndexed$BufferIndexed._get(GenericIndexed.java:225) ~[druid-processing-0.9.1.1.jar:0.9.1.1]
at io.druid.segment.data.GenericIndexed$1.get(GenericIndexed.java:300) ~[druid-processing-0.9.1.1.jar:0.9.1.1]
at io.druid.segment.data.CompressedVSizeIntsIndexedSupplier$CompressedVSizeIndexedInts.loadBuffer(CompressedVSizeIntsIndexedSupplier.java:383) ~[druid-processing-0.9.1.1.jar:0.9.1.1]
at io.druid.segment.data.CompressedVSizeIntsIndexedSupplier$CompressedVSizeIndexedInts.get(CompressedVSizeIntsIndexedSupplier.java:344) ~[druid-processing-0.9.1.1.jar:0.9.1.1]
at io.druid.segment.column.SimpleDictionaryEncodedColumn.getSingleValueRow(SimpleDictionaryEncodedColumn.java:65) ~[druid-processing-0.9.1.1.jar:0.9.1.1]
at io.druid.segment.QueryableIndexIndexableAdapter$2$1.next(QueryableIndexIndexableAdapter.java:257) ~[druid-processing-0.9.1.1.jar:0.9.1.1]
at io.druid.segment.QueryableIndexIndexableAdapter$2$1.next(QueryableIndexIndexableAdapter.java:177) ~[druid-processing-0.9.1.1.jar:0.9.1.1]
at com.google.common.collect.TransformedIterator.next(TransformedIterator.java:48) ~[guava-16.0.1.jar:?]
at com.google.common.collect.Iterators$PeekingImpl.peek(Iterators.java:1162) ~[guava-16.0.1.jar:?]
at com.metamx.common.guava.MergeIterator$1.compare(MergeIterator.java:46) ~[java-util-0.27.9.jar:?]
at com.metamx.common.guava.MergeIterator$1.compare(MergeIterator.java:42) ~[java-util-0.27.9.jar:?]
at java.util.PriorityQueue.siftUpUsingComparator(PriorityQueue.java:649) ~[?:1.7.0_111]
at java.util.PriorityQueue.siftUp(PriorityQueue.java:627) ~[?:1.7.0_111]
at java.util.PriorityQueue.offer(PriorityQueue.java:329) ~[?:1.7.0_111]
at java.util.PriorityQueue.add(PriorityQueue.java:306) ~[?:1.7.0_111]
at com.metamx.common.guava.MergeIterator.<init>(MergeIterator.java:55) ~[java-util-0.27.9.jar:?]
at com.metamx.common.guava.MergeIterable.iterator(MergeIterable.java:49) ~[java-util-0.27.9.jar:?]
at io.druid.collections.CombiningIterable.iterator(CombiningIterable.java:95) ~[druid-common-0.9.1.1.jar:0.9.1.1]
at io.druid.segment.IndexMergerV9.mergeIndexesAndWriteColumns(IndexMergerV9.java:680) ~[druid-processing-0.9.1.1.jar:0.9.1.1]
at io.druid.segment.IndexMergerV9.makeIndexFiles(IndexMergerV9.java:222) ~[druid-processing-0.9.1.1.jar:0.9.1.1]
at io.druid.segment.IndexMerger.merge(IndexMerger.java:423) ~[druid-processing-0.9.1.1.jar:0.9.1.1]
at io.druid.segment.IndexMerger.mergeQueryableIndex(IndexMerger.java:244) ~[druid-processing-0.9.1.1.jar:0.9.1.1]
at io.druid.segment.IndexMerger.mergeQueryableIndex(IndexMerger.java:217) ~[druid-processing-0.9.1.1.jar:0.9.1.1]
at io.druid.indexing.common.index.YeOldePlumberSchool$1.finishJob(YeOldePlumberSchool.java:191) ~[druid-indexing-service-0.9.1.1.jar:0.9.1.1]
at io.druid.indexing.common.task.IndexTask.generateSegment(IndexTask.java:415) ~[druid-indexing-service-0.9.1.1.jar:0.9.1.1]
at io.druid.indexing.common.task.IndexTask.run(IndexTask.java:221) ~[druid-indexing-service-0.9.1.1.jar:0.9.1.1]
at io.druid.indexing.overlord.ThreadPoolTaskRunner$ThreadPoolTaskRunnerCallable.call(ThreadPoolTaskRunner.java:436) ~[druid-indexing-service-0.9.1.1.jar:0.9.1.1]
at io.druid.indexing.overlord.ThreadPoolTaskRunner$ThreadPoolTaskRunnerCallable.call(ThreadPoolTaskRunner.java:408) ~[druid-indexing-service-0.9.1.1.jar:0.9.1.1]
at java.util.concurrent.FutureTask.run(FutureTask.java:262) ~[?:1.7.0_111]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) ~[?:1.7.0_111]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) ~[?:1.7.0_111]
at java.lang.Thread.run(Thread.java:745) ~[?:1.7.0_111]

Nishant Bangarwa

Aug 29, 2016, 3:36:44 AM
to Druid User
Try increasing your direct memory limit by adding -XX:MaxDirectMemorySize=4g to your druid.indexer.runner.javaOpts.
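For reference, applied to the launch parameters from the first post, the line might look like this (the 4g value is only a starting point; tune it to what the machine can spare):

```
druid.indexer.runner.javaOpts=-server -Xmx1g -XX:MaxDirectMemorySize=4g -Duser.timezone=UTC -Dfile.encoding=UTF-8 -Djava.util.logging.manager=org.apache.logging.log4j.jul.LogManager
```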

--
You received this message because you are subscribed to the Google Groups "Druid User" group.
To unsubscribe from this group and stop receiving emails from it, send an email to druid-user+...@googlegroups.com.
To post to this group, send email to druid...@googlegroups.com.
To view this discussion on the web visit https://groups.google.com/d/msgid/druid-user/ff4c1de3-ce9e-4829-a71a-a39944d290e3%40googlegroups.com.
For more options, visit https://groups.google.com/d/optout.

Zhihui Jiao

Aug 29, 2016, 5:52:01 AM
to Druid User
Thanks Nishant, I can increase the limit, but I want to know how to set this value properly.

I have some other batch tasks whose datasets are much bigger, but they didn't hit this exception. The setting differences are a bigger heap (-Xmx2g) and a bigger processing buffer (druid.processing.buffer.sizeBytes=536870912).

My question is: how do these settings (-XX:MaxDirectMemorySize, -Xmx, and druid.processing.buffer.sizeBytes) work together in Druid?

Fangjin Yang

Aug 29, 2016, 7:29:23 PM
to Druid User
Off-heap memory is used for intermediate computations. Notably, topNs and groupBys require it to avoid overflowing the JVM heap.
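As a rough sizing guideline (from the Druid configuration docs of that era, not an exact requirement, since index merging allocates additional direct buffers on top of this), direct memory should be at least (druid.processing.numThreads + 1) * druid.processing.buffer.sizeBytes. Plugging in the config from the first post:

```python
# Back-of-the-envelope direct-memory check for the peon:
# each processing thread holds one processing buffer, plus one spare.
buffer_size_bytes = 256_870_912          # druid.processing.buffer.sizeBytes
num_threads = 2                          # druid.processing.numThreads
min_direct_bytes = buffer_size_bytes * (num_threads + 1)
print(min_direct_bytes)                  # 770612736, i.e. roughly 735 MiB
```

If -XX:MaxDirectMemorySize is not set explicitly, the JVM picks its own default, which may be well below what this works out to.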

Gian Merlino

Aug 29, 2016, 10:47:50 PM
to druid...@googlegroups.com
Do you have a lot of columns, maybe? There's a small (64KB IIRC) buffer allocated during index merging for each column of each partial index. Usually this doesn't add up to much, but it can be an issue if you have a lot of columns.
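To put that in perspective, a hypothetical estimate of how those merge-time buffers add up (the column and partial-index counts below are made-up examples, and the 64 KB figure is Gian's recollection):

```python
# Hypothetical merge-buffer estimate: one small buffer per column
# of each partial index being merged.
columns = 100            # example: a wide datasource
partial_indexes = 50     # example: intermediate indexes in the merge
per_column_kb = 64       # Gian's recalled per-column buffer size
total_mb = columns * partial_indexes * per_column_kb / 1024
print(total_mb)          # 312.5 MB of direct memory for this example
```

With only a handful of columns, as in this thread, the total stays small, which is why this usually isn't the culprit.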

Gian

Zhihui Jiao

Aug 30, 2016, 6:57:47 AM
to Druid User
Hi Gian, 

No, the segments only have 7 columns and 1 to 3 metrics. I found some hs_err_pidXXX.log files on one of the machines today. I think the exception may have been thrown when several batch tasks were assigned to the same machine, and the machine had less memory than the tasks needed.

