Historical Node: GroupBy query gives error, Timeseries does not


Alexander Makarenko

Mar 23, 2015, 9:11:58 PM
to druid...@googlegroups.com
Hello!

We use Druid 0.7.0.

Segments are hourly.

When I run the following GroupBy query on a Historical node:

{
  "queryType": "groupBy",
  "dataSource": "d018",
  "granularity": "all",
  "dimensions": ["d18"],
  "aggregations": [
     { "type": "count", "name": "#" }
  ],
  "context": {
    "useCache": false,
    "populateCache": false
  },
  "intervals": ["2015-03-14T10:00/2015-03-14T11:00"]
}

I get this error:

2015-03-24T00:42:20,038 ERROR [processing-0] io.druid.query.GroupByParallelQueryRunner - Exception with one of the sequences!
java.lang.IndexOutOfBoundsException
at java.nio.Buffer.checkIndex(Buffer.java:538) ~[?:1.7.0_55]
at java.nio.DirectByteBuffer.getInt(DirectByteBuffer.java:675) ~[?:1.7.0_55]
at io.druid.segment.data.VSizeIndexedInts.get(VSizeIndexedInts.java:119) ~[druid-processing-0.7.0.jar:0.7.0]
at io.druid.segment.column.SimpleDictionaryEncodedColumn.getSingleValueRow(SimpleDictionaryEncodedColumn.java:61) ~[druid-processing-0.7.0.jar:0.7.0]
at io.druid.segment.QueryableIndexStorageAdapter$CursorSequenceBuilder$1$1$2$1.iterator(QueryableIndexStorageAdapter.java:345) ~[druid-processing-0.7.0.jar:0.7.0]
at io.druid.query.groupby.GroupByQueryEngine$RowUpdater.updateValues(GroupByQueryEngine.java:189) ~[druid-processing-0.7.0.jar:0.7.0]
at io.druid.query.groupby.GroupByQueryEngine$RowUpdater.access$100(GroupByQueryEngine.java:142) ~[druid-processing-0.7.0.jar:0.7.0]
at io.druid.query.groupby.GroupByQueryEngine$RowIterator.next(GroupByQueryEngine.java:367) ~[druid-processing-0.7.0.jar:0.7.0]
at io.druid.query.groupby.GroupByQueryEngine$RowIterator.next(GroupByQueryEngine.java:285) ~[druid-processing-0.7.0.jar:0.7.0]
at com.metamx.common.guava.BaseSequence.makeYielder(BaseSequence.java:104) ~[java-util-0.26.14.jar:?]
at com.metamx.common.guava.BaseSequence.toYielder(BaseSequence.java:81) ~[java-util-0.26.14.jar:?]
at com.metamx.common.guava.ConcatSequence.makeYielder(ConcatSequence.java:93) ~[java-util-0.26.14.jar:?]
at com.metamx.common.guava.ConcatSequence.toYielder(ConcatSequence.java:72) ~[java-util-0.26.14.jar:?]
at com.metamx.common.guava.ResourceClosingSequence.toYielder(ResourceClosingSequence.java:41) ~[java-util-0.26.14.jar:?]
at com.metamx.common.guava.YieldingSequenceBase.accumulate(YieldingSequenceBase.java:34) ~[java-util-0.26.14.jar:?]
at io.druid.query.MetricsEmittingQueryRunner$1.accumulate(MetricsEmittingQueryRunner.java:102) ~[druid-processing-0.7.0.jar:0.7.0]
at io.druid.query.MetricsEmittingQueryRunner$1.accumulate(MetricsEmittingQueryRunner.java:102) ~[druid-processing-0.7.0.jar:0.7.0]
at io.druid.query.spec.SpecificSegmentQueryRunner$2$1.call(SpecificSegmentQueryRunner.java:85) ~[druid-processing-0.7.0.jar:0.7.0]
at io.druid.query.spec.SpecificSegmentQueryRunner.doNamed(SpecificSegmentQueryRunner.java:169) ~[druid-processing-0.7.0.jar:0.7.0]
at io.druid.query.spec.SpecificSegmentQueryRunner.access$400(SpecificSegmentQueryRunner.java:39) ~[druid-processing-0.7.0.jar:0.7.0]
at io.druid.query.spec.SpecificSegmentQueryRunner$2.doItNamed(SpecificSegmentQueryRunner.java:160) ~[druid-processing-0.7.0.jar:0.7.0]
at io.druid.query.spec.SpecificSegmentQueryRunner$2.accumulate(SpecificSegmentQueryRunner.java:78) ~[druid-processing-0.7.0.jar:0.7.0]
at io.druid.query.GroupByParallelQueryRunner$1$1.call(GroupByParallelQueryRunner.java:115) [druid-processing-0.7.0.jar:0.7.0]
at io.druid.query.GroupByParallelQueryRunner$1$1.call(GroupByParallelQueryRunner.java:106) [druid-processing-0.7.0.jar:0.7.0]
at java.util.concurrent.FutureTask.run(FutureTask.java:262) [?:1.7.0_55]
at io.druid.query.PrioritizedExecutorService$PrioritizedListenableFutureTask.run(PrioritizedExecutorService.java:202) [druid-processing-0.7.0.jar:0.7.0]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) [?:1.7.0_55]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) [?:1.7.0_55]
at java.lang.Thread.run(Thread.java:744) [?:1.7.0_55]
2015-03-24T00:42:20,039 WARN [qtp715982425-48] io.druid.server.QueryResource - Exception occurred on request [GroupByQuery{limitSpec=NoopLimitSpec, dimFilter=null, granularity=AllGranularity, dimensions=[DefaultDimensionSpec{dimension='d18', outputName='d18'}], aggregatorSpecs=[CountAggregatorFactory{name='#'}], postAggregatorSpecs=[], limitFn=identity}]
java.lang.IndexOutOfBoundsException (stack trace identical to the one above)
2015-03-24T00:42:20,039 ERROR [qtp715982425-48] io.druid.server.QueryResource - Exception handling request: {class=io.druid.server.QueryResource, exceptionType=class java.lang.IndexOutOfBoundsException, exceptionMessage=null, exception=java.lang.IndexOutOfBoundsException, query=GroupByQuery{limitSpec=NoopLimitSpec, dimFilter=null, granularity=AllGranularity, dimensions=[DefaultDimensionSpec{dimension='d18', outputName='d18'}], aggregatorSpecs=[CountAggregatorFactory{name='#'}], postAggregatorSpecs=[], limitFn=identity}, peer=10.25.2.118}
java.lang.IndexOutOfBoundsException (stack trace identical to the one above)

As you can see, I am querying only one segment.

When I change the query type to timeseries, the same query returns the correct result.

This is very annoying: because of this one bad segment, I can't run large GroupBy queries whose interval includes it.

The Historical node loads this segment without problems:

2015-03-23T20:56:25,637 INFO [main] io.druid.server.coordination.ZkCoordinator - Loading segment cache file [/var/run/druid/historical/info_dir/d018_2015-03-14T10:00:00.000Z_2015-03-14T11:00:00.000Z_2015-03-14T10:00:00.000Z]
2015-03-23T20:56:26,388 INFO [ZkCoordinator-0] io.druid.server.coordination.ZkCoordinator - Loading segment d018_2015-03-14T10:00:00.000Z_2015-03-14T11:00:00.000Z_2015-03-14T10:00:00.000Z
2015-03-23T20:56:26,784 INFO [main] io.druid.server.coordination.BatchDataSegmentAnnouncer - Announcing segment[d018_2015-03-14T10:00:00.000Z_2015-03-14T11:00:00.000Z_2015-03-14T10:00:00.000Z] at path[/druid/d0/segments/10.18.3.44:7020/2015-03-23T20:56:26.782Z2]

Also, the corresponding index.zip file is not corrupted (checked with zip -T).
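Beyond zip -T, a deeper check re-validates every member's CRC. Python's standard zipfile module can do this; the archive below is a stand-in built on the fly, since the real segment path isn't shown here:

```python
import os
import tempfile
import zipfile

# Build a tiny archive to stand in for the segment's index.zip (placeholder contents).
workdir = tempfile.mkdtemp()
path = os.path.join(workdir, "index.zip")
with zipfile.ZipFile(path, "w") as zf:
    zf.writestr("version.bin", b"\x00\x00\x00\x09")

# testzip() re-reads every member and checks its CRC; it returns the name of
# the first corrupt member, or None if the archive is fully intact.
with zipfile.ZipFile(path) as zf:
    bad = zf.testzip()

if bad is None:
    print("archive OK")
else:
    print("corrupt member:", bad)
```

Note that a clean CRC check only rules out on-disk corruption of the zip; it says nothing about whether the column data inside the segment is internally consistent.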

What could be the reason for this behavior?

Fangjin Yang

Mar 23, 2015, 11:12:49 PM
to druid...@googlegroups.com
Hi Alexander, a few questions.

1) Your query doesn't map directly to a timeseries, as you are grouping on the d18 dimension. If you issue a query with dimensions set to an empty list, do you get the same result as the timeseries?
2) If you issue a topN over the same interval (BTW, topNs are significantly faster than groupBys for single dimension groupBys and are highly recommended for that use case), what do you see?
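For (1), a sanity-check groupBy with an empty dimensions list would be the original query with only the dimensions field changed, e.g.:

{
  "queryType": "groupBy",
  "dataSource": "d018",
  "granularity": "all",
  "dimensions": [],
  "aggregations": [
     { "type": "count", "name": "#" }
  ],
  "context": {
    "useCache": false,
    "populateCache": false
  },
  "intervals": ["2015-03-14T10:00/2015-03-14T11:00"]
}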

Alexander Makarenko

Mar 24, 2015, 12:28:24 AM
to druid...@googlegroups.com
Hi, Fangjin.

1) If I make a GroupBy query with an empty dimensions list, I get the same results as for the timeseries query, i.e. it works OK.

2) Well, the query below gives the same {"error": "null exception"}:

{
  "queryType": "topN",
  "dataSource": "d018",
  "granularity": "hour",
  "dimension": "d18",
  "threshold": 10,
  "metric": "m1",
  "aggregations": [
    {
      "type": "longSum",
      "name": "m1LongSum",
      "fieldName": "m1"
    }
  ],
  "context": {
    "useCache": false,
    "populateCache": false
  },
  "intervals": ["2015-03-14T10:00/2015-03-14T11:00"]
}

Errors:

2015-03-24T04:29:32,749 ERROR [processing-0] io.druid.query.ChainedExecutionQueryRunner - Exception with one of the sequences!
java.lang.IndexOutOfBoundsException
at java.nio.Buffer.checkIndex(Buffer.java:538) ~[?:1.7.0_55]
at java.nio.DirectByteBuffer.getInt(DirectByteBuffer.java:675) ~[?:1.7.0_55]
at io.druid.segment.data.VSizeIndexedInts.get(VSizeIndexedInts.java:119) ~[druid-processing-0.7.0.jar:0.7.0]
at io.druid.segment.column.SimpleDictionaryEncodedColumn.getSingleValueRow(SimpleDictionaryEncodedColumn.java:61) ~[druid-processing-0.7.0.jar:0.7.0]
at io.druid.segment.QueryableIndexStorageAdapter$CursorSequenceBuilder$1$1$2$1.get(QueryableIndexStorageAdapter.java:339) ~[druid-processing-0.7.0.jar:0.7.0]
at io.druid.query.topn.PooledTopNAlgorithm.scanAndAggregate(PooledTopNAlgorithm.java:205) ~[druid-processing-0.7.0.jar:0.7.0]
at io.druid.query.topn.PooledTopNAlgorithm.scanAndAggregate(PooledTopNAlgorithm.java:35) ~[druid-processing-0.7.0.jar:0.7.0]
at io.druid.query.topn.BaseTopNAlgorithm.run(BaseTopNAlgorithm.java:89) ~[druid-processing-0.7.0.jar:0.7.0]
at io.druid.query.topn.TopNMapFn.apply(TopNMapFn.java:55) ~[druid-processing-0.7.0.jar:0.7.0]
at io.druid.query.topn.TopNMapFn.apply(TopNMapFn.java:25) ~[druid-processing-0.7.0.jar:0.7.0]
at io.druid.query.topn.TopNQueryEngine$1.apply(TopNQueryEngine.java:80) ~[druid-processing-0.7.0.jar:0.7.0]
at io.druid.query.topn.TopNQueryEngine$1.apply(TopNQueryEngine.java:75) ~[druid-processing-0.7.0.jar:0.7.0]
at com.metamx.common.guava.MappingYieldingAccumulator.accumulate(MappingYieldingAccumulator.java:57) ~[java-util-0.26.14.jar:?]
at com.metamx.common.guava.FilteringYieldingAccumulator.accumulate(FilteringYieldingAccumulator.java:69) ~[java-util-0.26.14.jar:?]
at com.metamx.common.guava.MappingYieldingAccumulator.accumulate(MappingYieldingAccumulator.java:57) ~[java-util-0.26.14.jar:?]
at com.metamx.common.guava.BaseSequence.makeYielder(BaseSequence.java:104) ~[java-util-0.26.14.jar:?]
at com.metamx.common.guava.BaseSequence.toYielder(BaseSequence.java:81) ~[java-util-0.26.14.jar:?]
at com.metamx.common.guava.MappedSequence.toYielder(MappedSequence.java:46) ~[java-util-0.26.14.jar:?]
at com.metamx.common.guava.ResourceClosingSequence.toYielder(ResourceClosingSequence.java:41) ~[java-util-0.26.14.jar:?]
at com.metamx.common.guava.FilteredSequence.toYielder(FilteredSequence.java:52) ~[java-util-0.26.14.jar:?]
at com.metamx.common.guava.MappedSequence.toYielder(MappedSequence.java:46) ~[java-util-0.26.14.jar:?]
at com.metamx.common.guava.FilteredSequence.toYielder(FilteredSequence.java:52) ~[java-util-0.26.14.jar:?]
at com.metamx.common.guava.ResourceClosingSequence.toYielder(ResourceClosingSequence.java:41) ~[java-util-0.26.14.jar:?]
at com.metamx.common.guava.YieldingSequenceBase.accumulate(YieldingSequenceBase.java:34) ~[java-util-0.26.14.jar:?]
at io.druid.query.MetricsEmittingQueryRunner$1.accumulate(MetricsEmittingQueryRunner.java:102) ~[druid-processing-0.7.0.jar:0.7.0]
at io.druid.query.MetricsEmittingQueryRunner$1.accumulate(MetricsEmittingQueryRunner.java:102) ~[druid-processing-0.7.0.jar:0.7.0]
at io.druid.query.spec.SpecificSegmentQueryRunner$2$1.call(SpecificSegmentQueryRunner.java:85) ~[druid-processing-0.7.0.jar:0.7.0]
at io.druid.query.spec.SpecificSegmentQueryRunner.doNamed(SpecificSegmentQueryRunner.java:169) ~[druid-processing-0.7.0.jar:0.7.0]
at io.druid.query.spec.SpecificSegmentQueryRunner.access$400(SpecificSegmentQueryRunner.java:39) ~[druid-processing-0.7.0.jar:0.7.0]
at io.druid.query.spec.SpecificSegmentQueryRunner$2.doItNamed(SpecificSegmentQueryRunner.java:160) ~[druid-processing-0.7.0.jar:0.7.0]
at io.druid.query.spec.SpecificSegmentQueryRunner$2.accumulate(SpecificSegmentQueryRunner.java:78) ~[druid-processing-0.7.0.jar:0.7.0]
at com.metamx.common.guava.Sequences.toList(Sequences.java:113) ~[java-util-0.26.14.jar:?]
at io.druid.query.ChainedExecutionQueryRunner$1$1$1.call(ChainedExecutionQueryRunner.java:130) [druid-processing-0.7.0.jar:0.7.0]
at io.druid.query.ChainedExecutionQueryRunner$1$1$1.call(ChainedExecutionQueryRunner.java:120) [druid-processing-0.7.0.jar:0.7.0]
at java.util.concurrent.FutureTask.run(FutureTask.java:262) [?:1.7.0_55]
at io.druid.query.PrioritizedExecutorService$PrioritizedListenableFutureTask.run(PrioritizedExecutorService.java:202) [druid-processing-0.7.0.jar:0.7.0]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) [?:1.7.0_55]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) [?:1.7.0_55]
at java.lang.Thread.run(Thread.java:744) [?:1.7.0_55]
2015-03-24T04:29:32,750 WARN [qtp715982425-21] io.druid.server.QueryResource - Exception occurred on request [TopNQuery{dataSource='d018', dimensionSpec=DefaultDimensionSpec{dimension='d18', outputName='d18'}, topNMetricSpec=NumericTopNMetricSpec{metric='m1'}, threshold=10, querySegmentSpec=LegacySegmentSpec{intervals=[2015-03-14T10:00:00.000Z/2015-03-14T11:00:00.000Z]}, dimFilter=null, granularity='DurationGranularity{length=3600000, origin=0}', aggregatorSpecs=[LongSumAggregatorFactory{fieldName='m1', name='m1LongSum'}], postAggregatorSpecs=[]}]
java.lang.IndexOutOfBoundsException (stack trace identical to the one above)
2015-03-24T04:29:32,751 ERROR [qtp715982425-21] io.druid.server.QueryResource - Exception handling request: {class=io.druid.server.QueryResource, exceptionType=class java.lang.IndexOutOfBoundsException, exceptionMessage=null, exception=java.lang.IndexOutOfBoundsException, query=TopNQuery{dataSource='d018', dimensionSpec=DefaultDimensionSpec{dimension='d18', outputName='d18'}, topNMetricSpec=NumericTopNMetricSpec{metric='m1'}, threshold=10, querySegmentSpec=LegacySegmentSpec{intervals=[2015-03-14T10:00:00.000Z/2015-03-14T11:00:00.000Z]}, dimFilter=null, granularity='DurationGranularity{length=3600000, origin=0}', aggregatorSpecs=[LongSumAggregatorFactory{fieldName='m1', name='m1LongSum'}], postAggregatorSpecs=[]}, peer=10.25.2.118}
java.lang.IndexOutOfBoundsException (stack trace identical to the one above)

Fangjin Yang

Mar 24, 2015, 7:17:33 PM
to druid...@googlegroups.com
Hi Alexander, do you see the same problem when you issue topNs over any other dimension? I am just curious if the d18 dimension has valid values.

Alexander Makarenko

Mar 25, 2015, 2:16:26 AM
to Fangjin Yang, druid...@googlegroups.com
Hi Fangjin. I get the same errors if I change the dimension (I tried all of them) or the metric.

--
You received this message because you are subscribed to a topic in the Google Groups "Druid User" group.
To unsubscribe from this topic, visit https://groups.google.com/d/topic/druid-user/kl0zoPGqZJI/unsubscribe.
To unsubscribe from this group and all its topics, send an email to druid-user+...@googlegroups.com.
To post to this group, send email to druid...@googlegroups.com.
To view this discussion on the web visit https://groups.google.com/d/msgid/druid-user/11f992e5-e52c-43df-bd8f-5d8c13f45232%40googlegroups.com.
For more options, visit https://groups.google.com/d/optout.

Fangjin Yang

Mar 25, 2015, 10:45:47 PM
to druid...@googlegroups.com, fangj...@gmail.com
Hi Alexander, do you have a couple of sample rows of your data? Can you also share the ingestion schema you used to ingest the data?

Alexander Makarenko

Mar 26, 2015, 9:11:21 AM
to druid...@googlegroups.com, fangj...@gmail.com
Hi Fangjin. I think I sent you the details privately (though I can't see the post in the thread now).

Fangjin Yang

Mar 26, 2015, 8:04:23 PM
to druid...@googlegroups.com, fangj...@gmail.com
Hi Alexander, a few more questions.

Are you ingesting current-time data or historical data? It appears to be current-time data.

I don't think this matters, but if you remove d15 from the list of dimensions, do you see the same error?

Can you send me the actual events you are using? The events you've sent me don't include a timestamp, and with a timestamp added I cannot reproduce the error.

-- FJ

Alexander Makarenko

Mar 26, 2015, 9:45:41 PM
to druid...@googlegroups.com, fangj...@gmail.com
I will remove d15 and get back to you.

Fangjin Yang

Apr 6, 2015, 6:37:59 PM
to druid...@googlegroups.com, fangj...@gmail.com
Hi Alexander, just following up if you are still having issues with this. I wonder if you are seeing this behavior for a single segment or if all of your segments are behaving the same way.

Alexander Makarenko

Apr 6, 2015, 7:00:31 PM
to Fangjin Yang, druid...@googlegroups.com
Hi, Fangjin. It's OK now. The issue was with only one segment. I removed the d15 dimension from the ingestion schema and have not seen such issues since.
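For reference, the change was simply dropping d15 from the dimensions list in the spec's dimensionsSpec, roughly like this (dimension names other than d15 and d18 are placeholders):

"dimensionsSpec": {
  "dimensions": ["d14", "d16", "d17", "d18"]
}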
