BufferUnderflowException while running groupBy query


SAURABH JAIN

Oct 3, 2016, 12:19:41 PM
to Druid User
We are receiving a BufferUnderflowException while running a groupBy query on one of the historical nodes. The issue seems to be with the segments in the queried interval, because when we run the same query on another interval we get results.

Does anybody know which segment could be causing this issue and how to fix it? We are using the Kafka indexing service to ingest the data.
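
Is there a good way to enumerate the segments in that interval? I was thinking of running a segmentMetadata query like the one below with "merge": false so that results (or failures) come back per segment. This is just a sketch on my part; the day-long interval and the analysisTypes are only examples.

{
  "queryType": "segmentMetadata",
  "dataSource": "prism-data-10",
  "intervals": ["2016-10-01T00:00:00.000Z/2016-10-02T00:00:00.000Z"],
  "merge": false,
  "analysisTypes": ["size", "interval"]
}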

Here is the query:

{
  "queryType": "groupBy",
  "dataSource": {
    "type": "table",
    "name": "prism-data-10"
  },
  "intervals": {
    "type": "LegacySegmentSpec",
    "intervals": ["2016-10-01T00:33:00.000Z/2016-10-01T00:33:00.000Z"]
  },
  "filter": {
    "type": "and",
    "fields": [{
      "type": "selector",
      "dimension": "lang",
      "value": "ENGLISH",
      "extractionFn": null
    }]
  },
  "granularity": {
    "type": "all"
  },
  "dimensions": [{
    "type": "default",
    "dimension": "title",
    "outputName": "title"
  }, {
    "type": "default",
    "dimension": "category",
    "outputName": "category"
  }, {
    "type": "default",
    "dimension": "score",
    "outputName": "score"
  }, {
    "type": "default",
    "dimension": "group_id",
    "outputName": "group_id"
  }, {
    "type": "default",
    "dimension": "published_dt",
    "outputName": "published_dt"
  }, {
    "type": "default",
    "dimension": "author",
    "outputName": "author"
  }, {
    "type": "default",
    "dimension": "shortened_url",
    "outputName": "shortened_url"
  }],
  "aggregations": [{
    "type": "doubleSum",
    "name": "fullstory_total_time",
    "fieldName": "fullstory_total_time"
  }, {
    "type": "longSum",
    "name": "total_fullstory_view",
    "fieldName": "total_fullstory_view"
  }, {
    "type": "longSum",
    "name": "total_like_count",
    "fieldName": "total_like_count"
  }, {
    "type": "longMax",
    "name": "total_share_views",
    "fieldName": "total_share_views"
  }, {
    "type": "doubleSum",
    "name": "total_short_time",
    "fieldName": "total_short_time"
  }, {
    "type": "longSum",
    "name": "total_short_views",
    "fieldName": "total_short_views"
  }, {
    "type": "hyperUnique",
    "name": "distinct_user",
    "fieldName": "distinct_user"
  }, {
    "type": "doubleMax",
    "name": "total_vid_length",
    "fieldName": "total_vid_length"
  }, {
    "type": "longSum",
    "name": "total_bookmark",
    "fieldName": "total_bookmark"
  }, {
    "type": "longSum",
    "name": "total_share_click",
    "fieldName": "total_share_click"
  }, {
    "type": "longMax",
    "name": "is_ab",
    "fieldName": "is_ab"
  }, {
    "type": "longMax",
    "name": "ab_variants",
    "fieldName": "ab_variants"
  }, {
    "type": "longSum",
    "name": "total_toss_clicked",
    "fieldName": "total_toss_clicked"
  }, {
    "type": "longSum",
    "name": "total_toss_opened",
    "fieldName": "total_toss_opened"
  }, {
    "type": "longSum",
    "name": "total_noti_shown",
    "fieldName": "total_noti_shown"
  }, {
    "type": "longSum",
    "name": "total_noti_opened",
    "fieldName": "total_noti_opened"
  }, {
    "type": "longSum",
    "name": "total_video_views",
    "fieldName": "total_video_views"
  }, {
    "type": "hyperUnique",
    "name": "distinct_hash_Id",
    "fieldName": "distinct_hash_Id"
  }, {
    "type": "longSum",
    "name": "total_ts_valid",
    "fieldName": "total_ts_valid"
  }, {
    "type": "longSum",
    "name": "total_full_ts_valid",
    "fieldName": "total_full_ts_valid"
  }],
  "postAggregations": [{
    "type": "arithmetic",
    "name": "avg_full_story_time",
    "fn": "/",
    "fields": [{
      "type": "fieldAccess",
      "name": "fullstory_total_time",
      "fieldName": "fullstory_total_time"
    }, {
      "type": "fieldAccess",
      "name": "total_full_ts_valid",
      "fieldName": "total_full_ts_valid"
    }],
    "ordering": null
  }, {
    "type": "arithmetic",
    "name": "avg_short_time",
    "fn": "/",
    "fields": [{
      "type": "fieldAccess",
      "name": "total_short_time",
      "fieldName": "total_short_time"
    }, {
      "type": "fieldAccess",
      "name": "total_ts_valid",
      "fieldName": "total_ts_valid"
    }],
    "ordering": null
  }, {
    "type": "arithmetic",
    "name": "per_like_count",
    "fn": "*",
    "fields": [{
      "type": "arithmetic",
      "name": "avg_like_count",
      "fn": "/",
      "fields": [{
        "type": "fieldAccess",
        "name": "total_like_count",
        "fieldName": "total_like_count"
      }, {
        "type": "hyperUniqueCardinality",
        "name": "distinct_user",
        "fieldName": "distinct_user"
      }],
      "ordering": null
    }, {
      "type": "constant",
      "name": "per_value",
      "value": 100
    }],
    "ordering": null
  }, {
    "type": "arithmetic",
    "name": "per_share_click",
    "fn": "*",
    "fields": [{
      "type": "arithmetic",
      "name": "avg_share_click",
      "fn": "/",
      "fields": [{
        "type": "fieldAccess",
        "name": "total_share_click",
        "fieldName": "total_share_click"
      }, {
        "type": "hyperUniqueCardinality",
        "name": "distinct_user",
        "fieldName": "distinct_user"
      }],
      "ordering": null
    }, {
      "type": "constant",
      "name": "per_value",
      "value": 100
    }],
    "ordering": null
  }, {
    "type": "arithmetic",
    "name": "per_share_views",
    "fn": "*",
    "fields": [{
      "type": "arithmetic",
      "name": "avg_share_views",
      "fn": "/",
      "fields": [{
        "type": "fieldAccess",
        "name": "total_share_views",
        "fieldName": "total_share_views"
      }, {
        "type": "hyperUniqueCardinality",
        "name": "distinct_user",
        "fieldName": "distinct_user"
      }],
      "ordering": null
    }, {
      "type": "constant",
      "name": "per_value",
      "value": 100
    }],
    "ordering": null
  }, {
    "type": "arithmetic",
    "name": "per_bookmark",
    "fn": "*",
    "fields": [{
      "type": "arithmetic",
      "name": "avg_bookmark",
      "fn": "/",
      "fields": [{
        "type": "fieldAccess",
        "name": "total_bookmark",
        "fieldName": "total_bookmark"
      }, {
        "type": "hyperUniqueCardinality",
        "name": "distinct_user",
        "fieldName": "distinct_user"
      }],
      "ordering": null
    }, {
      "type": "constant",
      "name": "per_value",
      "value": 100
    }],
    "ordering": null
  }, {
    "type": "arithmetic",
    "name": "per_toss_clicked",
    "fn": "*",
    "fields": [{
      "type": "arithmetic",
      "name": "avg_toss_clicked",
      "fn": "/",
      "fields": [{
        "type": "fieldAccess",
        "name": "total_toss_clicked",
        "fieldName": "total_toss_clicked"
      }, {
        "type": "hyperUniqueCardinality",
        "name": "distinct_user",
        "fieldName": "distinct_user"
      }],
      "ordering": null
    }, {
      "type": "constant",
      "name": "per_value",
      "value": 100
    }],
    "ordering": null
  }, {
    "type": "arithmetic",
    "name": "per_toss_opened",
    "fn": "*",
    "fields": [{
      "type": "arithmetic",
      "name": "avg_toss_opened",
      "fn": "/",
      "fields": [{
        "type": "fieldAccess",
        "name": "total_toss_opened",
        "fieldName": "total_toss_opened"
      }, {
        "type": "hyperUniqueCardinality",
        "name": "distinct_user",
        "fieldName": "distinct_user"
      }],
      "ordering": null
    }, {
      "type": "constant",
      "name": "per_value",
      "value": 100
    }],
    "ordering": null
  }],
  "having": null,
  "limitSpec": {
    "type": "default",
    "columns": [{
      "dimension": "total_short_views",
      "direction": "DESCENDING",
      "dimensionComparator": {
        "type": "lexicographic"
      }
    }],
    "limit": 50
  },
  "context": {
    "chunkPeriod": "P1D",
    "queryId": "25a57d43-ebdd-4dca-9f4e-eff3f02c5465",
    "bySegment": true
  },
  "descending": false
}

Output:

{
  "error": "Unknown exception",
  "errorMessage": null,
  "errorClass": "java.nio.BufferUnderflowException",
  "host": "druid-historical-001.c.inshorts-1374.internal:8083"
}

Thanks,
Saurabh

Gian Merlino

Oct 3, 2016, 12:49:09 PM
to druid...@googlegroups.com
Hey Saurabh,

Can you check out the logs on druid-historical-001.c.inshorts-1374.internal and see if you can get the full stack trace of that exception?

Gian


SAURABH JAIN

Oct 3, 2016, 1:22:56 PM
to Druid User
Hey,

The full stack trace is below. Just FYI, we are using the 0.9.2-rc1 build.

2016-10-03T17:20:43,407 ERROR [processing-9] io.druid.query.GroupByMergedQueryRunner - Exception with one of the sequences!
java.nio.BufferUnderflowException
    at java.nio.Buffer.nextGetIndex(Buffer.java:506) ~[?:1.8.0_101]
    at java.nio.DirectByteBuffer.getShort(DirectByteBuffer.java:590) ~[?:1.8.0_101]
    at io.druid.query.aggregation.hyperloglog.HyperLogLogCollector.fold(HyperLogLogCollector.java:393) ~[druid-processing-0.9.2-rc1.jar:0.9.2-rc1]
    at io.druid.query.aggregation.hyperloglog.HyperUniquesBufferAggregator.aggregate(HyperUniquesBufferAggregator.java:65) ~[druid-processing-0.9.2-rc1.jar:0.9.2-rc1]
    at io.druid.query.groupby.GroupByQueryEngine$RowUpdater.updateValues(GroupByQueryEngine.java:237) ~[druid-processing-0.9.2-rc1.jar:0.9.2-rc1]
    at io.druid.query.groupby.GroupByQueryEngine$RowUpdater.updateValues(GroupByQueryEngine.java:200) ~[druid-processing-0.9.2-rc1.jar:0.9.2-rc1]
    at io.druid.query.groupby.GroupByQueryEngine$RowUpdater.updateValues(GroupByQueryEngine.java:200) ~[druid-processing-0.9.2-rc1.jar:0.9.2-rc1]
    at io.druid.query.groupby.GroupByQueryEngine$RowUpdater.updateValues(GroupByQueryEngine.java:200) ~[druid-processing-0.9.2-rc1.jar:0.9.2-rc1]
    at io.druid.query.groupby.GroupByQueryEngine$RowUpdater.updateValues(GroupByQueryEngine.java:200) ~[druid-processing-0.9.2-rc1.jar:0.9.2-rc1]
    at io.druid.query.groupby.GroupByQueryEngine$RowUpdater.updateValues(GroupByQueryEngine.java:200) ~[druid-processing-0.9.2-rc1.jar:0.9.2-rc1]
    at io.druid.query.groupby.GroupByQueryEngine$RowUpdater.updateValues(GroupByQueryEngine.java:200) ~[druid-processing-0.9.2-rc1.jar:0.9.2-rc1]
    at io.druid.query.groupby.GroupByQueryEngine$RowUpdater.updateValues(GroupByQueryEngine.java:200) ~[druid-processing-0.9.2-rc1.jar:0.9.2-rc1]
    at io.druid.query.groupby.GroupByQueryEngine$RowUpdater.access$100(GroupByQueryEngine.java:150) ~[druid-processing-0.9.2-rc1.jar:0.9.2-rc1]
    at io.druid.query.groupby.GroupByQueryEngine$RowIterator.next(GroupByQueryEngine.java:378) ~[druid-processing-0.9.2-rc1.jar:0.9.2-rc1]
    at io.druid.query.groupby.GroupByQueryEngine$RowIterator.next(GroupByQueryEngine.java:293) ~[druid-processing-0.9.2-rc1.jar:0.9.2-rc1]
    at com.metamx.common.guava.BaseSequence.accumulate(BaseSequence.java:67) ~[java-util-0.27.10.jar:?]
    at com.metamx.common.guava.ConcatSequence$1.accumulate(ConcatSequence.java:46) ~[java-util-0.27.10.jar:?]
    at com.metamx.common.guava.ConcatSequence$1.accumulate(ConcatSequence.java:42) ~[java-util-0.27.10.jar:?]
    at com.metamx.common.guava.MappingAccumulator.accumulate(MappingAccumulator.java:39) ~[java-util-0.27.10.jar:?]
    at com.metamx.common.guava.FilteringAccumulator.accumulate(FilteringAccumulator.java:40) ~[java-util-0.27.10.jar:?]
    at com.metamx.common.guava.MappingAccumulator.accumulate(MappingAccumulator.java:39) ~[java-util-0.27.10.jar:?]
    at com.metamx.common.guava.BaseSequence.accumulate(BaseSequence.java:67) ~[java-util-0.27.10.jar:?]
    at com.metamx.common.guava.MappedSequence.accumulate(MappedSequence.java:40) ~[java-util-0.27.10.jar:?]
    at com.metamx.common.guava.ResourceClosingSequence.accumulate(ResourceClosingSequence.java:38) ~[java-util-0.27.10.jar:?]
    at com.metamx.common.guava.FilteredSequence.accumulate(FilteredSequence.java:42) ~[java-util-0.27.10.jar:?]
    at com.metamx.common.guava.MappedSequence.accumulate(MappedSequence.java:40) ~[java-util-0.27.10.jar:?]
    at com.metamx.common.guava.ResourceClosingSequence.accumulate(ResourceClosingSequence.java:38) ~[java-util-0.27.10.jar:?]
    at com.metamx.common.guava.ConcatSequence.accumulate(ConcatSequence.java:40) ~[java-util-0.27.10.jar:?]
    at com.metamx.common.guava.ResourceClosingSequence.accumulate(ResourceClosingSequence.java:38) ~[java-util-0.27.10.jar:?]
    at io.druid.query.MetricsEmittingQueryRunner$1.accumulate(MetricsEmittingQueryRunner.java:104) ~[druid-processing-0.9.2-rc1.jar:0.9.2-rc1]
    at com.metamx.common.guava.MappedSequence.accumulate(MappedSequence.java:40) ~[java-util-0.27.10.jar:?]
    at com.metamx.common.guava.Sequences$1.accumulate(Sequences.java:90) ~[java-util-0.27.10.jar:?]
    at com.metamx.common.guava.Sequences.toList(Sequences.java:113) ~[java-util-0.27.10.jar:?]
    at io.druid.query.BySegmentQueryRunner.run(BySegmentQueryRunner.java:56) ~[druid-processing-0.9.2-rc1.jar:0.9.2-rc1]
    at io.druid.query.MetricsEmittingQueryRunner$1.accumulate(MetricsEmittingQueryRunner.java:104) ~[druid-processing-0.9.2-rc1.jar:0.9.2-rc1]
    at io.druid.query.spec.SpecificSegmentQueryRunner$2$1.call(SpecificSegmentQueryRunner.java:87) ~[druid-processing-0.9.2-rc1.jar:0.9.2-rc1]
    at io.druid.query.spec.SpecificSegmentQueryRunner.doNamed(SpecificSegmentQueryRunner.java:171) ~[druid-processing-0.9.2-rc1.jar:0.9.2-rc1]
    at io.druid.query.spec.SpecificSegmentQueryRunner.access$400(SpecificSegmentQueryRunner.java:41) ~[druid-processing-0.9.2-rc1.jar:0.9.2-rc1]
    at io.druid.query.spec.SpecificSegmentQueryRunner$2.doItNamed(SpecificSegmentQueryRunner.java:162) ~[druid-processing-0.9.2-rc1.jar:0.9.2-rc1]
    at io.druid.query.spec.SpecificSegmentQueryRunner$2.accumulate(SpecificSegmentQueryRunner.java:80) ~[druid-processing-0.9.2-rc1.jar:0.9.2-rc1]
    at io.druid.query.CPUTimeMetricQueryRunner$1.accumulate(CPUTimeMetricQueryRunner.java:81) ~[druid-processing-0.9.2-rc1.jar:0.9.2-rc1]
    at com.metamx.common.guava.Sequences$1.accumulate(Sequences.java:90) ~[java-util-0.27.10.jar:?]
    at io.druid.query.GroupByMergedQueryRunner$1$1.call(GroupByMergedQueryRunner.java:118) [druid-processing-0.9.2-rc1.jar:0.9.2-rc1]
    at io.druid.query.GroupByMergedQueryRunner$1$1.call(GroupByMergedQueryRunner.java:111) [druid-processing-0.9.2-rc1.jar:0.9.2-rc1]
    at java.util.concurrent.FutureTask.run(FutureTask.java:266) [?:1.8.0_101]
    at io.druid.query.PrioritizedListenableFutureTask.run(PrioritizedExecutorService.java:271) [druid-processing-0.9.2-rc1.jar:0.9.2-rc1]
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) [?:1.8.0_101]
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) [?:1.8.0_101]
    at java.lang.Thread.run(Thread.java:745) [?:1.8.0_101]

2016-10-03T17:20:43,409 ERROR [processing-9] com.google.common.util.concurrent.Futures$CombinedFuture - input future failed.
java.nio.BufferUnderflowException
    at java.nio.Buffer.nextGetIndex(Buffer.java:506) ~[?:1.8.0_101]
    at java.nio.DirectByteBuffer.getShort(DirectByteBuffer.java:590) ~[?:1.8.0_101]
    at io.druid.query.aggregation.hyperloglog.HyperLogLogCollector.fold(HyperLogLogCollector.java:393) ~[druid-processing-0.9.2-rc1.jar:0.9.2-rc1]
    at io.druid.query.aggregation.hyperloglog.HyperUniquesBufferAggregator.aggregate(HyperUniquesBufferAggregator.java:65) ~[druid-processing-0.9.2-rc1.jar:0.9.2-rc1]
    at io.druid.query.groupby.GroupByQueryEngine$RowUpdater.updateValues(GroupByQueryEngine.java:237) ~[druid-processing-0.9.2-rc1.jar:0.9.2-rc1]
    at io.druid.query.groupby.GroupByQueryEngine$RowUpdater.updateValues(GroupByQueryEngine.java:200) ~[druid-processing-0.9.2-rc1.jar:0.9.2-rc1]
    at io.druid.query.groupby.GroupByQueryEngine$RowUpdater.updateValues(GroupByQueryEngine.java:200) ~[druid-processing-0.9.2-rc1.jar:0.9.2-rc1]
    at io.druid.query.groupby.GroupByQueryEngine$RowUpdater.updateValues(GroupByQueryEngine.java:200) ~[druid-processing-0.9.2-rc1.jar:0.9.2-rc1]
    at io.druid.query.groupby.GroupByQueryEngine$RowUpdater.updateValues(GroupByQueryEngine.java:200) ~[druid-processing-0.9.2-rc1.jar:0.9.2-rc1]
    at io.druid.query.groupby.GroupByQueryEngine$RowUpdater.updateValues(GroupByQueryEngine.java:200) ~[druid-processing-0.9.2-rc1.jar:0.9.2-rc1]
    at io.druid.query.groupby.GroupByQueryEngine$RowUpdater.updateValues(GroupByQueryEngine.java:200) ~[druid-processing-0.9.2-rc1.jar:0.9.2-rc1]
    at io.druid.query.groupby.GroupByQueryEngine$RowUpdater.updateValues(GroupByQueryEngine.java:200) ~[druid-processing-0.9.2-rc1.jar:0.9.2-rc1]
    at io.druid.query.groupby.GroupByQueryEngine$RowUpdater.access$100(GroupByQueryEngine.java:150) ~[druid-processing-0.9.2-rc1.jar:0.9.2-rc1]
    at io.druid.query.groupby.GroupByQueryEngine$RowIterator.next(GroupByQueryEngine.java:378) ~[druid-processing-0.9.2-rc1.jar:0.9.2-rc1]
    at io.druid.query.groupby.GroupByQueryEngine$RowIterator.next(GroupByQueryEngine.java:293) ~[druid-processing-0.9.2-rc1.jar:0.9.2-rc1]
    at com.metamx.common.guava.BaseSequence.accumulate(BaseSequence.java:67) ~[java-util-0.27.10.jar:?]
    at com.metamx.common.guava.ConcatSequence$1.accumulate(ConcatSequence.java:46) ~[java-util-0.27.10.jar:?]
    at com.metamx.common.guava.ConcatSequence$1.accumulate(ConcatSequence.java:42) ~[java-util-0.27.10.jar:?]
    at com.metamx.common.guava.MappingAccumulator.accumulate(MappingAccumulator.java:39) ~[java-util-0.27.10.jar:?]
    at com.metamx.common.guava.FilteringAccumulator.accumulate(FilteringAccumulator.java:40) ~[java-util-0.27.10.jar:?]
    at com.metamx.common.guava.MappingAccumulator.accumulate(MappingAccumulator.java:39) ~[java-util-0.27.10.jar:?]
    at com.metamx.common.guava.BaseSequence.accumulate(BaseSequence.java:67) ~[java-util-0.27.10.jar:?]
    at com.metamx.common.guava.MappedSequence.accumulate(MappedSequence.java:40) ~[java-util-0.27.10.jar:?]
    at com.metamx.common.guava.ResourceClosingSequence.accumulate(ResourceClosingSequence.java:38) ~[java-util-0.27.10.jar:?]
    at com.metamx.common.guava.FilteredSequence.accumulate(FilteredSequence.java:42) ~[java-util-0.27.10.jar:?]
    at com.metamx.common.guava.MappedSequence.accumulate(MappedSequence.java:40) ~[java-util-0.27.10.jar:?]
    at com.metamx.common.guava.ResourceClosingSequence.accumulate(ResourceClosingSequence.java:38) ~[java-util-0.27.10.jar:?]
    at com.metamx.common.guava.ConcatSequence.accumulate(ConcatSequence.java:40) ~[java-util-0.27.10.jar:?]
    at com.metamx.common.guava.ResourceClosingSequence.accumulate(ResourceClosingSequence.java:38) ~[java-util-0.27.10.jar:?]
    at io.druid.query.MetricsEmittingQueryRunner$1.accumulate(MetricsEmittingQueryRunner.java:104) ~[druid-processing-0.9.2-rc1.jar:0.9.2-rc1]
    at com.metamx.common.guava.MappedSequence.accumulate(MappedSequence.java:40) ~[java-util-0.27.10.jar:?]
    at com.metamx.common.guava.Sequences$1.accumulate(Sequences.java:90) ~[java-util-0.27.10.jar:?]
    at com.metamx.common.guava.Sequences.toList(Sequences.java:113) ~[java-util-0.27.10.jar:?]
    at io.druid.query.BySegmentQueryRunner.run(BySegmentQueryRunner.java:56) ~[druid-processing-0.9.2-rc1.jar:0.9.2-rc1]
    at io.druid.query.MetricsEmittingQueryRunner$1.accumulate(MetricsEmittingQueryRunner.java:104) ~[druid-processing-0.9.2-rc1.jar:0.9.2-rc1]
    at io.druid.query.spec.SpecificSegmentQueryRunner$2$1.call(SpecificSegmentQueryRunner.java:87) ~[druid-processing-0.9.2-rc1.jar:0.9.2-rc1]
    at io.druid.query.spec.SpecificSegmentQueryRunner.doNamed(SpecificSegmentQueryRunner.java:171) ~[druid-processing-0.9.2-rc1.jar:0.9.2-rc1]
    at io.druid.query.spec.SpecificSegmentQueryRunner.access$400(SpecificSegmentQueryRunner.java:41) ~[druid-processing-0.9.2-rc1.jar:0.9.2-rc1]
    at io.druid.query.spec.SpecificSegmentQueryRunner$2.doItNamed(SpecificSegmentQueryRunner.java:162) ~[druid-processing-0.9.2-rc1.jar:0.9.2-rc1]
    at io.druid.query.spec.SpecificSegmentQueryRunner$2.accumulate(SpecificSegmentQueryRunner.java:80) ~[druid-processing-0.9.2-rc1.jar:0.9.2-rc1]
    at io.druid.query.CPUTimeMetricQueryRunner$1.accumulate(CPUTimeMetricQueryRunner.java:81) ~[druid-processing-0.9.2-rc1.jar:0.9.2-rc1]
    at com.metamx.common.guava.Sequences$1.accumulate(Sequences.java:90) ~[java-util-0.27.10.jar:?]
    at io.druid.query.GroupByMergedQueryRunner$1$1.call(GroupByMergedQueryRunner.java:118) ~[druid-processing-0.9.2-rc1.jar:0.9.2-rc1]
    at io.druid.query.GroupByMergedQueryRunner$1$1.call(GroupByMergedQueryRunner.java:111) ~[druid-processing-0.9.2-rc1.jar:0.9.2-rc1]
    at java.util.concurrent.FutureTask.run(FutureTask.java:266) [?:1.8.0_101]
    at io.druid.query.PrioritizedListenableFutureTask.run(PrioritizedExecutorService.java:271) [druid-processing-0.9.2-rc1.jar:0.9.2-rc1]
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) [?:1.8.0_101]
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) [?:1.8.0_101]
    at java.lang.Thread.run(Thread.java:745) [?:1.8.0_101]




SAURABH JAIN

Oct 8, 2016, 9:10:45 PM
to Druid User
Hey,

This is caused by this pull request: https://github.com/druid-io/druid/pull/3314. Can someone tell us how to fix it?

Gian Merlino

Oct 11, 2016, 12:57:10 PM
to druid...@googlegroups.com
Hey Saurabh,

I filed this issue for 0.9.2 based on your report: https://github.com/druid-io/druid/issues/3560. We'll look into it before 0.9.2 is finalized. You could try reverting #3314 in your local branch, or using the stable release 0.9.1.1 instead of 0.9.2-rc1.

Thanks again for the report.

Gian


Gian Merlino

Oct 15, 2016, 2:45:23 PM
to druid...@googlegroups.com
Saurabh,

Could you try applying https://github.com/druid-io/druid/pull/3578 to your middleManagers and see if that prevents this from happening again? My guess is your previously generated segments are corrupt in some way and probably need to be thrown out or manually repaired.
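
If you do end up throwing them out, the usual route is to disable the bad segments through the coordinator and then submit a kill task to the overlord to remove them from metadata and deep storage. A rough task spec would look like the following; the interval here is only an example, so narrow it to the affected segments:

{
  "type": "kill",
  "dataSource": "prism-data-10",
  "interval": "2016-10-01T00:00:00.000Z/2016-10-02T00:00:00.000Z"
}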

Gian