Re: [druid-user] Kafka ingestion failing when done with supervisor task with exception: Could not allocate segment


Ben Vogan

Nov 17, 2016, 11:03:43 AM11/17/16
to druid...@googlegroups.com
Hi Rajnandini,

I have seen a similar error after running a Hadoop compaction job and then receiving an event within the compacted timeframe.  Something about how the Hadoop indexing task wrote out the segments made it impossible for the Kafka indexer to append extra data (probably something to do with partitioning).  I worked around the issue by configuring my supervisor to ignore data older than 2 weeks, and only compacting data older than 2 weeks.  I have no idea whether there are other situations in which this error presents, but hopefully this helps you out.
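For what it's worth, a sketch of how I expressed that in the supervisor spec — the `lateMessageRejectionPeriod` ioConfig property is what makes the supervisor drop events older than the given period; everything else in this fragment (datasource name, topic, broker address) is just a placeholder, and your spec will have more fields:

```json
{
  "type": "kafka",
  "dataSchema": {
    "dataSource": "my_datasource"
  },
  "ioConfig": {
    "topic": "my_topic",
    "consumerProperties": {
      "bootstrap.servers": "kafka-broker:9092"
    },
    "lateMessageRejectionPeriod": "P2W"
  }
}
```

With `"P2W"` (an ISO 8601 period), events timestamped more than two weeks before the task's start are rejected, so the Kafka indexer never tries to allocate segments inside the already-compacted interval.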

Regards,
--Ben

On Wed, Nov 16, 2016 at 11:18 PM, rajnandini ranbhare <ranbh...@gmail.com> wrote:
Hello,

I'm using the Kafka indexing service for Druid with a supervisor spec. The ingestion tasks are failing with this exception:

com.metamx.common.ISE: Could not allocate segment for row with timestamp[2014-03-01T06:32:33.000+05:30]
	at io.druid.indexing.kafka.KafkaIndexTask.run(KafkaIndexTask.java:427) ~[?:?]
	at io.druid.indexing.overlord.ThreadPoolTaskRunner$ThreadPoolTaskRunnerCallable.call(ThreadPoolTaskRunner.java:436) [druid-indexing-service-0.9.1.1.jar:0.9.1.1]
	at io.druid.indexing.overlord.ThreadPoolTaskRunner$ThreadPoolTaskRunnerCallable.call(ThreadPoolTaskRunner.java:408) [druid-indexing-service-0.9.1.1.jar:0.9.1.1]
	at java.util.concurrent.FutureTask.run(FutureTask.java:262) [?:1.7.0_111]
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) [?:1.7.0_111]
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) [?:1.7.0_111]
	at java.lang.Thread.run(Thread.java:745) [?:1.7.0_111]
2016-11-16T18:26:16,958 INFO [task-runner-0-priority-0] io.druid.indexing.overlord.TaskRunnerUtils - Task [index_kafka_datasource-kafka_9e6203861689342_adpipppk] status changed to [FAILED].
2016-11-16T18:26:16,960 INFO [task-runner-0-priority-0] io.druid.indexing.worker.executor.ExecutorLifecycle - Task completed with status: {
  "id" : "index_kafka_datasource-kafka_9e6203861689342_adpipppk",
  "status" : "FAILED",
  "duration" : 698
}

Why is this exception being raised, and how can I resolve it?

Thanks,
Rajnandini

--
You received this message because you are subscribed to the Google Groups "Druid User" group.
To unsubscribe from this group and stop receiving emails from it, send an email to druid-user+unsubscribe@googlegroups.com.
To post to this group, send email to druid...@googlegroups.com.
To view this discussion on the web visit https://groups.google.com/d/msgid/druid-user/a8a5d709-77b0-4383-bd8b-e012530e6c8b%40googlegroups.com.
For more options, visit https://groups.google.com/d/optout.



--

BENJAMIN VOGAN | Data Platform Team Lead
 
The indispensable app that rewards you for shopping.

Nishant Bangarwa

Nov 18, 2016, 2:07:11 AM11/18/16
to druid...@googlegroups.com
Hi Ben, 
Yes, you are correct: the batch job by default generates non-extendable shard specs, which can lead to this error.
We have introduced a new flag in 0.9.2, forceExtendableShardSpecs, which can be set to true to avoid this:
http://druid.io/docs/0.9.2-rc3/ingestion/batch-ingestion.html
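For reference, a sketch of where that flag sits in a Hadoop batch ingestion spec — `forceExtendableShardSpecs` goes in the tuningConfig; the datasource name here is a placeholder and a real spec needs the full dataSchema and ioConfig sections:

```json
{
  "type": "index_hadoop",
  "spec": {
    "dataSchema": {
      "dataSource": "my_datasource"
    },
    "tuningConfig": {
      "type": "hadoop",
      "forceExtendableShardSpecs": true
    }
  }
}
```

With this set, the batch job writes segments with extendable shard specs, so a later Kafka indexing task can append to the same intervals instead of failing with "Could not allocate segment".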

