Periodic problems with publishing segments with a KIS task

Павел Кураев

Oct 12, 2018, 8:59:35 AM
to Druid User
Hi!
In our cluster, some indexing tasks fail to publish segments. I found two error messages: one in the task log and one in the Overlord log.
One segment, dsp_traf_supervisor_2018-10-12T11:15:00.000Z_2018-10-12T11:30:00.000Z_2018-10-12T11:15:00.092Z_12, runs into a duplicate key error in the pendingSegments table.
I have no idea why this is happening. Could someone help me figure out what's causing it?
Druid version is 0.12.3.

Task log example:

2018-10-12T11:18:30,091 INFO [publish-0] io.druid.segment.realtime.appenderator.BaseAppenderatorDriver - Publishing segments with commitMetadata[AppenderatorDriverMetadata{segments={index_kafka_dsp_traf_supervisor_6d81720c3fb7dec_0=[io.druid.segment.realtime.appenderator.SegmentWithState@14d04136, io.druid.segment.realtime.appenderator.SegmentWithState@2b7b26b, io.druid.segment.realtime.appenderator.SegmentWithState@18a57b3c]}, lastSegmentIds={index_kafka_dsp_traf_supervisor_6d81720c3fb7dec_0=dsp_traf_supervisor_2018-10-12T11:15:00.000Z_2018-10-12T11:30:00.000Z_2018-10-12T11:15:00.092Z_12}, callerMetadata={nextPartitions=KafkaPartitions{topic='druid_queue', partitionOffsetMap={12=6430979856}}, publishPartitions=KafkaPartitions{topic='druid_queue', partitionOffsetMap={12=6430979856}}}}]: [DataSegment{size=14241658, shardSpec=NumberedShardSpec{partitionNum=12, partitions=0}, metrics=[merics], version='2018-10-12T11:15:00.092Z', loadSpec={type=>hdfs, path=>hdfs://nameservice/druid/segments/dsp_traf_supervisor/20181012T111500.000Z_20181012T113000.000Z/2018-10-12T11_15_00.092Z/12_b4531fa6-ff31-4f77-8ec0-622576357348_index.zip}, interval=2018-10-12T11:15:00.000Z/2018-10-12T11:30:00.000Z, dataSource='dsp_traf_supervisor', binaryVersion='9'}, DataSegment{size=116478, shardSpec=NumberedShardSpec{partitionNum=38, partitions=0}, metrics=[metrics], version='2018-10-12T10:45:00.042Z', loadSpec={type=>hdfs, path=>hdfs://nameservice/druid/segments/dsp_traf_supervisor/20181012T104500.000Z_20181012T110000.000Z/2018-10-12T10_45_00.042Z/38_0206ea77-a14a-4089-a378-d953b7136e46_index.zip}, interval=2018-10-12T10:45:00.000Z/2018-10-12T11:00:00.000Z, dataSource='dsp_traf_supervisor', binaryVersion='9'}, DataSegment{size=109597332, shardSpec=NumberedShardSpec{partitionNum=19, partitions=0}, metrics=[auctions, visitors_unique, bet_count, win_count, impression_count, win_price, price_fl, click_count, sum_bets, auction_quote, marker], dimensions=["dimensions"], version='2018-10-12T11:00:00.076Z', loadSpec={type=>hdfs, path=>hdfs://nameservice/druid/segments/dsp_traf_supervisor/20181012T110000.000Z_20181012T111500.000Z/2018-10-12T11_00_00.076Z/19_7eed516a-5458-4518-8cb4-38c0b5173f88_index.zip}, interval=2018-10-12T11:00:00.000Z/2018-10-12T11:15:00.000Z, dataSource='dsp_traf_supervisor', binaryVersion='9'}]
2018-10-12T11:18:30,095 INFO [publish-0] io.druid.indexing.kafka.KafkaIndexTask - Publishing with isTransaction[true].
2018-10-12T11:18:30,096 INFO [publish-0] io.druid.indexing.common.actions.RemoteTaskActionClient - Performing action for task[index_kafka_dsp_traf_supervisor_6d81720c3fb7dec_cpfkmaoi]: SegmentInsertAction{segments=[DataSegment{size=14241658, shardSpec=NumberedShardSpec{partitionNum=12, partitions=0}, metrics=[metrics], dimensions=["dimensions"], version='2018-10-12T11:15:00.092Z', loadSpec={type=>hdfs, path=>hdfs://nameservice/druid/segments/dsp_traf_supervisor/20181012T111500.000Z_20181012T113000.000Z/2018-10-12T11_15_00.092Z/12_b4531fa6-ff31-4f77-8ec0-622576357348_index.zip}, interval=2018-10-12T11:15:00.000Z/2018-10-12T11:30:00.000Z, dataSource='dsp_traf_supervisor', binaryVersion='9'}, DataSegment{size=116478, shardSpec=NumberedShardSpec{partitionNum=38, partitions=0}, metrics=[metrics], dimensions=["dims"], version='2018-10-12T10:45:00.042Z', loadSpec={type=>hdfs, path=>hdfs://nameservice/druid/segments/dsp_traf_supervisor/20181012T104500.000Z_20181012T110000.000Z/2018-10-12T10_45_00.042Z/38_0206ea77-a14a-4089-a378-d953b7136e46_index.zip}, interval=2018-10-12T10:45:00.000Z/2018-10-12T11:00:00.000Z, dataSource='dsp_traf_supervisor', binaryVersion='9'}, DataSegment{size=109597332, shardSpec=NumberedShardSpec{partitionNum=19, partitions=0}, metrics=[auctions, visitors_unique, bet_count, win_count, impression_count, win_price, price_fl, click_count, sum_bets, auction_quote, marker], dimensions=["dims"], version='2018-10-12T11:00:00.076Z', loadSpec={type=>hdfs, path=>hdfs://nameservice/druid/segments/dsp_traf_supervisor/20181012T110000.000Z_20181012T111500.000Z/2018-10-12T11_00_00.076Z/19_7eed516a-5458-4518-8cb4-38c0b5173f88_index.zip}, interval=2018-10-12T11:00:00.000Z/2018-10-12T11:15:00.000Z, dataSource='dsp_traf_supervisor', binaryVersion='9'}], startMetadata=KafkaDataSourceMetadata{kafkaPartitions=KafkaPartitions{topic='druid_queue', partitionOffsetMap={12=6429749689}}}, endMetadata=KafkaDataSourceMetadata{kafkaPartitions=KafkaPartitions{topic='druid_queue', partitionOffsetMap={12=6430979856}}}}
2018-10-12T11:18:30,108 INFO [publish-0] io.druid.indexing.common.actions.RemoteTaskActionClient - Submitting action for task[index_kafka_dsp_traf_supervisor_6d81720c3fb7dec_cpfkmaoi] to overlord: [SegmentInsertAction{segments=[DataSegment{size=14241658, shardSpec=NumberedShardSpec{partitionNum=12, partitions=0}, metrics=[metrics], dimensions=["dims"], version='2018-10-12T11:15:00.092Z', loadSpec={type=>hdfs, path=>hdfs://nameservice/druid/segments/dsp_traf_supervisor/20181012T111500.000Z_20181012T113000.000Z/2018-10-12T11_15_00.092Z/12_b4531fa6-ff31-4f77-8ec0-622576357348_index.zip}, interval=2018-10-12T11:15:00.000Z/2018-10-12T11:30:00.000Z, dataSource='dsp_traf_supervisor', binaryVersion='9'}, DataSegment{size=116478, shardSpec=NumberedShardSpec{partitionNum=38, partitions=0}, metrics=[metrics], dimensions=["dims"], version='2018-10-12T10:45:00.042Z', loadSpec={type=>hdfs, path=>hdfs://nameservice/druid/segments/dsp_traf_supervisor/20181012T104500.000Z_20181012T110000.000Z/2018-10-12T10_45_00.042Z/38_0206ea77-a14a-4089-a378-d953b7136e46_index.zip}, interval=2018-10-12T10:45:00.000Z/2018-10-12T11:00:00.000Z, dataSource='dsp_traf_supervisor', binaryVersion='9'}, DataSegment{size=109597332, shardSpec=NumberedShardSpec{partitionNum=19, partitions=0}, metrics=[auctions, visitors_unique, bet_count, win_count, impression_count, win_price, price_fl, click_count, sum_bets, auction_quote, marker], dimensions=["dims"], version='2018-10-12T11:00:00.076Z', loadSpec={type=>hdfs, path=>hdfs://nameservice/druid/segments/dsp_traf_supervisor/20181012T110000.000Z_20181012T111500.000Z/2018-10-12T11_00_00.076Z/19_7eed516a-5458-4518-8cb4-38c0b5173f88_index.zip}, interval=2018-10-12T11:00:00.000Z/2018-10-12T11:15:00.000Z, dataSource='dsp_traf_supervisor', binaryVersion='9'}], startMetadata=KafkaDataSourceMetadata{kafkaPartitions=KafkaPartitions{topic='druid_queue', partitionOffsetMap={12=6429749689}}}, endMetadata=KafkaDataSourceMetadata{kafkaPartitions=KafkaPartitions{topic='druid_queue', partitionOffsetMap={12=6430979856}}}}].
2018-10-12T11:18:30,124 INFO [publish-0] io.druid.segment.realtime.appenderator.BaseAppenderatorDriver - Transaction failure while publishing segments, removing them from deep storage and checking if someone else beat us to publishing.
2018-10-12T11:18:30,128 INFO [publish-0] io.druid.storage.hdfs.HdfsDataSegmentKiller - Killing segment[dsp_traf_supervisor_2018-10-12T11:15:00.000Z_2018-10-12T11:30:00.000Z_2018-10-12T11:15:00.092Z_12] mapped to path[hdfs://nameservice/druid/segments/dsp_traf_supervisor/20181012T111500.000Z_20181012T113000.000Z/2018-10-12T11_15_00.092Z/12_b4531fa6-ff31-4f77-8ec0-622576357348_index.zip]
2018-10-12T11:18:30,153 INFO [publish-0] io.druid.storage.hdfs.HdfsDataSegmentKiller - Killing segment[dsp_traf_supervisor_2018-10-12T10:45:00.000Z_2018-10-12T11:00:00.000Z_2018-10-12T10:45:00.042Z_38] mapped to path[hdfs://nameservice/druid/segments/dsp_traf_supervisor/20181012T104500.000Z_20181012T110000.000Z/2018-10-12T10_45_00.042Z/38_0206ea77-a14a-4089-a378-d953b7136e46_index.zip]
2018-10-12T11:18:30,172 INFO [publish-0] io.druid.storage.hdfs.HdfsDataSegmentKiller - Killing segment[dsp_traf_supervisor_2018-10-12T11:00:00.000Z_2018-10-12T11:15:00.000Z_2018-10-12T11:00:00.076Z_19] mapped to path[hdfs://nameservice/druid/segments/dsp_traf_supervisor/20181012T110000.000Z_20181012T111500.000Z/2018-10-12T11_00_00.076Z/19_7eed516a-5458-4518-8cb4-38c0b5173f88_index.zip]
2018-10-12T11:18:30,190 INFO [publish-0] io.druid.indexing.common.actions.RemoteTaskActionClient - Performing action for task[index_kafka_dsp_traf_supervisor_6d81720c3fb7dec_cpfkmaoi]: SegmentListUsedAction{dataSource='dsp_traf_supervisor', intervals=[2018-10-12T10:45:00.000Z/2018-10-12T11:30:00.000Z]}
2018-10-12T11:18:30,192 INFO [publish-0] io.druid.indexing.common.actions.RemoteTaskActionClient - Submitting action for task[index_kafka_dsp_traf_supervisor_6d81720c3fb7dec_cpfkmaoi] to overlord: [SegmentListUsedAction{dataSource='dsp_traf_supervisor', intervals=[2018-10-12T10:45:00.000Z/2018-10-12T11:30:00.000Z]}].
2018-10-12T11:18:30,248 WARN [publish-0] io.druid.segment.realtime.appenderator.BaseAppenderatorDriver - Failed publish, not removing segments: [DataSegment{size=14241658, shardSpec=NumberedShardSpec{partitionNum=12, partitions=0}, metrics=[metrics], dimensions=["dims"], version='2018-10-12T11:15:00.092Z', loadSpec={type=>hdfs, path=>hdfs://nameservice/druid/segments/dsp_traf_supervisor/20181012T111500.000Z_20181012T113000.000Z/2018-10-12T11_15_00.092Z/12_b4531fa6-ff31-4f77-8ec0-622576357348_index.zip}, interval=2018-10-12T11:15:00.000Z/2018-10-12T11:30:00.000Z, dataSource='dsp_traf_supervisor', binaryVersion='9'}, DataSegment{size=116478, shardSpec=NumberedShardSpec{partitionNum=38, partitions=0}, metrics=[metrics], dimensions=["dims"], version='2018-10-12T10:45:00.042Z', loadSpec={type=>hdfs, path=>hdfs://nameservice/druid/segments/dsp_traf_supervisor/20181012T104500.000Z_20181012T110000.000Z/2018-10-12T10_45_00.042Z/38_0206ea77-a14a-4089-a378-d953b7136e46_index.zip}, interval=2018-10-12T10:45:00.000Z/2018-10-12T11:00:00.000Z, dataSource='dsp_traf_supervisor', binaryVersion='9'}, DataSegment{size=109597332, shardSpec=NumberedShardSpec{partitionNum=19, partitions=0}, metrics=[metrics], dimensions=["dims"], version='2018-10-12T11:00:00.076Z', loadSpec={type=>hdfs, path=>hdfs://nameservice/druid/segments/dsp_traf_supervisor/20181012T110000.000Z_20181012T111500.000Z/2018-10-12T11_00_00.076Z/19_7eed516a-5458-4518-8cb4-38c0b5173f88_index.zip}, interval=2018-10-12T11:00:00.000Z/2018-10-12T11:15:00.000Z, dataSource='dsp_traf_supervisor', binaryVersion='9'}]
io.druid.java.util.common.ISE: Failed to publish segments.
at io.druid.segment.realtime.appenderator.BaseAppenderatorDriver.lambda$publishInBackground$8(BaseAppenderatorDriver.java:578) ~[druid-server-0.12.3-iap6.jar:0.12.3-iap6]
at java.util.concurrent.FutureTask.run(FutureTask.java:266) [?:1.8.0_171]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) [?:1.8.0_171]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) [?:1.8.0_171]
at java.lang.Thread.run(Thread.java:748) [?:1.8.0_171]
2018-10-12T11:18:30,252 ERROR [publish-0] io.druid.indexing.kafka.KafkaIndexTask - Error while publishing segments for sequence[SequenceMetadata{sequenceName='index_kafka_dsp_traf_supervisor_6d81720c3fb7dec_0', sequenceId=0, startOffsets={12=6429749689}, endOffsets={12=6430979856}, assignments=[], sentinel=false, checkpointed=true}]
io.druid.java.util.common.ISE: Failed to publish segments.
at io.druid.segment.realtime.appenderator.BaseAppenderatorDriver.lambda$publishInBackground$8(BaseAppenderatorDriver.java:578) ~[druid-server-0.12.3-iap6.jar:0.12.3-iap6]
at java.util.concurrent.FutureTask.run(FutureTask.java:266) [?:1.8.0_171]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) [?:1.8.0_171]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) [?:1.8.0_171]
at java.lang.Thread.run(Thread.java:748) [?:1.8.0_171]
2018-10-12T11:18:30,257 INFO [task-runner-0-priority-0] io.druid.segment.realtime.appenderator.AppenderatorImpl - Shutting down immediately...
2018-10-12T11:18:30,260 INFO [task-runner-0-priority-0] io.druid.server.coordination.BatchDataSegmentAnnouncer - Unannouncing segment[dsp_traf_supervisor_2018-10-12T11:15:00.000Z_2018-10-12T11:30:00.000Z_2018-10-12T11:15:00.092Z_12] at path[/druid_new/segments/imply103:8109/imply103:8109_indexer-executor_hot_tier_2018-10-12T11:01:22.807Z_e93f05822d2649ce916eeafcadbf6e4a0]
2018-10-12T11:18:30,261 INFO [task-runner-0-priority-0] io.druid.server.coordination.BatchDataSegmentAnnouncer - Unannouncing segment[dsp_traf_supervisor_2018-10-12T10:45:00.000Z_2018-10-12T11:00:00.000Z_2018-10-12T10:45:00.042Z_38] at path[/druid_new/segments/imply103:8109/imply103:8109_indexer-executor_hot_tier_2018-10-12T11:01:22.807Z_e93f05822d2649ce916eeafcadbf6e4a0]
2018-10-12T11:18:30,263 INFO [task-runner-0-priority-0] io.druid.server.coordination.BatchDataSegmentAnnouncer - Unannouncing segment[dsp_traf_supervisor_2018-10-12T11:00:00.000Z_2018-10-12T11:15:00.000Z_2018-10-12T11:00:00.076Z_19] at path[/druid_new/segments/imply103:8109/imply103:8109_indexer-executor_hot_tier_2018-10-12T11:01:22.807Z_e93f05822d2649ce916eeafcadbf6e4a0]
2018-10-12T11:18:30,263 INFO [task-runner-0-priority-0] io.druid.curator.announcement.Announcer - unannouncing [/druid_new/segments/imply103:8109/imply103:8109_indexer-executor_hot_tier_2018-10-12T11:01:22.807Z_e93f05822d2649ce916eeafcadbf6e4a0]
2018-10-12T11:18:30,279 INFO [task-runner-0-priority-0] io.druid.segment.realtime.firehose.ServiceAnnouncingChatHandlerProvider - Unregistering chat handler[index_kafka_dsp_traf_supervisor_6d81720c3fb7dec_cpfkmaoi]
2018-10-12T11:18:30,279 INFO [task-runner-0-priority-0] io.druid.curator.discovery.CuratorDruidNodeAnnouncer - Unannouncing [DiscoveryDruidNode{druidNode=DruidNode{serviceName='druid/middlemanager', host='imply103', port=-1, plaintextPort=8109, enablePlaintextPort=true, tlsPort=-1, enableTlsPort=false}, nodeType='peon', services={dataNodeService=DataNodeService{tier='hot_tier', maxSize=0, type=indexer-executor, priority=0}, lookupNodeService=LookupNodeService{lookupTier='dont_exist_loookup_tier'}}}].
2018-10-12T11:18:30,279 INFO [task-runner-0-priority-0] io.druid.curator.announcement.Announcer - unannouncing [/druid_new/internal-discovery/peon/imply103:8109]
2018-10-12T11:18:30,283 INFO [task-runner-0-priority-0] io.druid.curator.discovery.CuratorDruidNodeAnnouncer - Unannounced [DiscoveryDruidNode{druidNode=DruidNode{serviceName='druid/middlemanager', host='imply103', port=-1, plaintextPort=8109, enablePlaintextPort=true, tlsPort=-1, enableTlsPort=false}, nodeType='peon', services={dataNodeService=DataNodeService{tier='hot_tier', maxSize=0, type=indexer-executor, priority=0}, lookupNodeService=LookupNodeService{lookupTier='dont_exist_loookup_tier'}}}].
2018-10-12T11:18:30,284 INFO [task-runner-0-priority-0] io.druid.server.coordination.CuratorDataSegmentServerAnnouncer - Unannouncing self[DruidServerMetadata{name='imply103:8109', hostAndPort='imply103:8109', hostAndTlsPort='null', maxSize=0, tier='hot_tier', type=indexer-executor, priority=0}] at [/druid_new/announcements/imply103:8109]
2018-10-12T11:18:30,284 INFO [task-runner-0-priority-0] io.druid.curator.announcement.Announcer - unannouncing [/druid_new/announcements/imply103:8109]
2018-10-12T11:18:30,288 ERROR [task-runner-0-priority-0] io.druid.indexing.kafka.KafkaIndexTask - Encountered exception while running task.
java.util.concurrent.ExecutionException: io.druid.java.util.common.ISE: Failed to publish segments.
at com.google.common.util.concurrent.AbstractFuture$Sync.getValue(AbstractFuture.java:299) ~[guava-16.0.1.jar:?]
at com.google.common.util.concurrent.AbstractFuture$Sync.get(AbstractFuture.java:286) ~[guava-16.0.1.jar:?]
at com.google.common.util.concurrent.AbstractFuture.get(AbstractFuture.java:116) ~[guava-16.0.1.jar:?]
at io.druid.indexing.kafka.KafkaIndexTask.runInternal(KafkaIndexTask.java:785) ~[druid-kafka-indexing-service-0.12.3-iap6.jar:0.12.3-iap6]
at io.druid.indexing.kafka.KafkaIndexTask.run(KafkaIndexTask.java:358) [druid-kafka-indexing-service-0.12.3-iap6.jar:0.12.3-iap6]
at io.druid.indexing.overlord.ThreadPoolTaskRunner$ThreadPoolTaskRunnerCallable.call(ThreadPoolTaskRunner.java:457) [druid-indexing-service-0.12.3-iap6.jar:0.12.3-iap6]
at io.druid.indexing.overlord.ThreadPoolTaskRunner$ThreadPoolTaskRunnerCallable.call(ThreadPoolTaskRunner.java:429) [druid-indexing-service-0.12.3-iap6.jar:0.12.3-iap6]
at java.util.concurrent.FutureTask.run(FutureTask.java:266) [?:1.8.0_171]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) [?:1.8.0_171]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) [?:1.8.0_171]
at java.lang.Thread.run(Thread.java:748) [?:1.8.0_171]
Caused by: io.druid.java.util.common.ISE: Failed to publish segments.
at io.druid.segment.realtime.appenderator.BaseAppenderatorDriver.lambda$publishInBackground$8(BaseAppenderatorDriver.java:578) ~[druid-server-0.12.3-iap6.jar:0.12.3-iap6]
... 4 more
2018-10-12T11:18:30,295 INFO [task-runner-0-priority-0] io.druid.indexing.overlord.TaskRunnerUtils - Task [index_kafka_dsp_traf_supervisor_6d81720c3fb7dec_cpfkmaoi] status changed to [FAILED].
2018-10-12T11:18:30,298 INFO [task-runner-0-priority-0] io.druid.indexing.worker.executor.ExecutorLifecycle - Task completed with status: {
  "id" : "index_kafka_dsp_traf_supervisor_6d81720c3fb7dec_cpfkmaoi",
  "status" : "FAILED",
  "duration" : 1029268,
  "errorMsg" : "java.util.concurrent.ExecutionException: io.druid.java.util.common.ISE: Failed to publish segments.\n..."
}

And the related message in overlord.log:
2018-10-12T11:15:23,434 WARN [qtp612854398-134] io.druid.java.util.common.RetryUtils - Failed on try 5, retrying in 19,245ms.
org.skife.jdbi.v2.exceptions.CallbackFailedException: org.skife.jdbi.v2.exceptions.UnableToExecuteStatementException: org.postgresql.util.PSQLException: ERROR: duplicate key value violates unique constraint "druid_pendingsegments_pkey"
  Detail: Key (id)=(dsp_traf_supervisor_2018-10-12T11:15:00.000Z_2018-10-12T11:30:00.000Z_2018-10-12T11:15:00.092Z_12) already exists. [statement:"INSERT INTO druid_pendingSegments (id, dataSource, created_date, start, \"end\", sequence_name, sequence_prev_id, sequence_name_prev_id_sha1, payload) VALUES (:id, :dataSource, :created_date, :start, :end, :sequence_name, :sequence_prev_id, :sequence_name_prev_id_sha1, :payload)", located:"INSERT INTO druid_pendingSegments (id, dataSource, created_date, start, \"end\", sequence_name, sequence_prev_id, sequence_name_prev_id_sha1, payload) VALUES (:id, :dataSource, :created_date, :start, :end, :sequence_name, :sequence_prev_id, :sequence_name_prev_id_sha1, :payload)", rewritten:"INSERT INTO druid_pendingSegments (id, dataSource, created_date, start, "end", sequence_name, sequence_prev_id, sequence_name_prev_id_sha1, payload) VALUES (?, ?, ?, ?, ?, ?, ?, ?, ?)", arguments:{ positional:{}, named:{sequence_prev_id:'',payload:[123, 34, 100, 97, 116, 97, 83, 111, 117, 114, 99, 101, 34, 58, 34, 100, 115, 112, 95, 116, 114, 97, 102, 95, 115, 117, 112, 101, 114, 118, 105, 115, 111, 114, 34, 44, 34, 105, 110, 116, 101, 114, 118, 97, 108, 34, 58, 34, 50, 48, 49, 56, 45, 49, 48, 45, 49, 50, 84, 49, 49, 58, 49, 53, 58, 48, 48, 46, 48, 48, 48, 90, 47, 50, 48, 49, 56, 45, 49, 48, 45, 49, 50, 84, 49, 49, 58, 51, 48, 58, 48, 48, 46, 48, 48, 48, 90, 34, 44, 34, 118, 101, 114, 115, 105, 111, 110, 34, 58, 34, 50, 48, 49, 56, 45, 49, 48, 45, 49, 50, 84, 49, 49, 58, 49, 53, 58, 48, 48, 46, 48, 57, 50, 90, 34, 44, 34, 115, 104, 97, 114, 100, 83, 112, 101, 99, 34, 58, 123, 34, 116, 121, 112, 101, 34, 58, 34, 110, 117, 109, 98, 101, 114, 101, 100, 34, 44, 34, 112, 97, 114, 116, 105, 116, 105, 111, 110, 78, 117, 109, 34, 58, 49, 50, 44, 34, 112, 97, 114, 116, 105, 116, 105, 111, 110, 115, 34, 58, 48, 125, 125],start:'2018-10-12T11:15:00.000Z',end:'2018-10-12T11:30:00.000Z',id:'dsp_traf_supervisor_2018-10-12T11:15:00.000Z_2018-10-12T11:30:00.000Z_2018-10-12T11:15:00.092Z_12',created_date:'2018-10-12T11:15:23.418Z',dataSource:'dsp_traf_supervisor',sequence_name:'index_kafka_dsp_traf_supervisor_827bb8282302a59_0',sequence_name_prev_id_sha1:'46CF081194E631F94FD6879CEF3C3A3AF4C5068F'}, finder:[]}]
at org.skife.jdbi.v2.DBI.withHandle(DBI.java:284) ~[jdbi-2.63.1.jar:2.63.1]
at org.skife.jdbi.v2.DBI.inTransaction(DBI.java:329) ~[jdbi-2.63.1.jar:2.63.1]
at io.druid.metadata.SQLMetadataConnector$3.call(SQLMetadataConnector.java:158) ~[druid-server-0.12.3-iap6.jar:0.12.3-iap6]
at io.druid.java.util.common.RetryUtils.retry(RetryUtils.java:63) [java-util-0.12.3-iap6.jar:0.12.3-iap6]
at io.druid.metadata.SQLMetadataConnector.retryTransaction(SQLMetadataConnector.java:162) [druid-server-0.12.3-iap6.jar:0.12.3-iap6]
at io.druid.metadata.IndexerSQLMetadataStorageCoordinator.allocatePendingSegment(IndexerSQLMetadataStorageCoordinator.java:397) [druid-server-0.12.3-iap6.jar:0.12.3-iap6]
at io.druid.indexing.common.actions.SegmentAllocateAction.tryAllocate(SegmentAllocateAction.java:272) [druid-indexing-service-0.12.3-iap6.jar:0.12.3-iap6]
at io.druid.indexing.common.actions.SegmentAllocateAction.tryAllocateFirstSegment(SegmentAllocateAction.java:225) [druid-indexing-service-0.12.3-iap6.jar:0.12.3-iap6]
at io.druid.indexing.common.actions.SegmentAllocateAction.perform(SegmentAllocateAction.java:169) [druid-indexing-service-0.12.3-iap6.jar:0.12.3-iap6]
at io.druid.indexing.common.actions.SegmentAllocateAction.perform(SegmentAllocateAction.java:56) [druid-indexing-service-0.12.3-iap6.jar:0.12.3-iap6]
at io.druid.indexing.common.actions.LocalTaskActionClient.submit(LocalTaskActionClient.java:69) [druid-indexing-service-0.12.3-iap6.jar:0.12.3-iap6]
at io.druid.indexing.overlord.http.OverlordResource$3.apply(OverlordResource.java:402) [druid-indexing-service-0.12.3-iap6.jar:0.12.3-iap6]
at io.druid.indexing.overlord.http.OverlordResource$3.apply(OverlordResource.java:391) [druid-indexing-service-0.12.3-iap6.jar:0.12.3-iap6]
at io.druid.indexing.overlord.http.OverlordResource.asLeaderWith(OverlordResource.java:926) [druid-indexing-service-0.12.3-iap6.jar:0.12.3-iap6]
at io.druid.indexing.overlord.http.OverlordResource.doAction(OverlordResource.java:388) [druid-indexing-service-0.12.3-iap6.jar:0.12.3-iap6]
at sun.reflect.GeneratedMethodAccessor299.invoke(Unknown Source) ~[?:?]
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[?:1.8.0_171]
at java.lang.reflect.Method.invoke(Method.java:498) ~[?:1.8.0_171]
at com.sun.jersey.spi.container.JavaMethodInvokerFactory$1.invoke(JavaMethodInvokerFactory.java:60) [jersey-server-1.19.3.jar:1.19.3]
at com.sun.jersey.server.impl.model.method.dispatch.AbstractResourceMethodDispatchProvider$ResponseOutInvoker._dispatch(AbstractResourceMethodDispatchProvider.java:205) [jersey-server-1.19.3.jar:1.19.3]
at com.sun.jersey.server.impl.model.method.dispatch.ResourceJavaMethodDispatcher.dispatch(ResourceJavaMethodDispatcher.java:75) [jersey-server-1.19.3.jar:1.19.3]
at com.sun.jersey.server.impl.uri.rules.HttpMethodRule.accept(HttpMethodRule.java:302) [jersey-server-1.19.3.jar:1.19.3]
at com.sun.jersey.server.impl.uri.rules.RightHandPathRule.accept(RightHandPathRule.java:147) [jersey-server-1.19.3.jar:1.19.3]
at com.sun.jersey.server.impl.uri.rules.ResourceClassRule.accept(ResourceClassRule.java:108) [jersey-server-1.19.3.jar:1.19.3]
at com.sun.jersey.server.impl.uri.rules.RightHandPathRule.accept(RightHandPathRule.java:147) [jersey-server-1.19.3.jar:1.19.3]
at com.sun.jersey.server.impl.uri.rules.RootResourceClassesRule.accept(RootResourceClassesRule.java:84) [jersey-server-1.19.3.jar:1.19.3]
at com.sun.jersey.server.impl.application.WebApplicationImpl._handleRequest(WebApplicationImpl.java:1542) [jersey-server-1.19.3.jar:1.19.3]
at com.sun.jersey.server.impl.application.WebApplicationImpl._handleRequest(WebApplicationImpl.java:1473) [jersey-server-1.19.3.jar:1.19.3]
at com.sun.jersey.server.impl.application.WebApplicationImpl.handleRequest(WebApplicationImpl.java:1419) [jersey-server-1.19.3.jar:1.19.3]
at com.sun.jersey.server.impl.application.WebApplicationImpl.handleRequest(WebApplicationImpl.java:1409) [jersey-server-1.19.3.jar:1.19.3]
at com.sun.jersey.spi.container.servlet.WebComponent.service(WebComponent.java:409) [jersey-servlet-1.19.3.jar:1.19.3]
at com.sun.jersey.spi.container.servlet.ServletContainer.service(ServletContainer.java:558) [jersey-servlet-1.19.3.jar:1.19.3]
at com.sun.jersey.spi.container.servlet.ServletContainer.service(ServletContainer.java:733) [jersey-servlet-1.19.3.jar:1.19.3]
at javax.servlet.http.HttpServlet.service(HttpServlet.java:790) [javax.servlet-api-3.1.0.jar:3.1.0]
at com.google.inject.servlet.ServletDefinition.doServiceImpl(ServletDefinition.java:286) [guice-servlet-4.1.0.jar:?]
at com.google.inject.servlet.ServletDefinition.doService(ServletDefinition.java:276) [guice-servlet-4.1.0.jar:?]
at com.google.inject.servlet.ServletDefinition.service(ServletDefinition.java:181) [guice-servlet-4.1.0.jar:?]
at com.google.inject.servlet.ManagedServletPipeline.service(ManagedServletPipeline.java:91) [guice-servlet-4.1.0.jar:?]
at com.google.inject.servlet.FilterChainInvocation.doFilter(FilterChainInvocation.java:85) [guice-servlet-4.1.0.jar:?]
at com.google.inject.servlet.ManagedFilterPipeline.dispatch(ManagedFilterPipeline.java:120) [guice-servlet-4.1.0.jar:?]
at com.google.inject.servlet.GuiceFilter.doFilter(GuiceFilter.java:135) [guice-servlet-4.1.0.jar:?]
at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1642) [jetty-servlet-9.4.10.v20180503.jar:9.4.10.v20180503]
at io.druid.server.http.RedirectFilter.doFilter(RedirectFilter.java:72) [druid-server-0.12.3-iap6.jar:0.12.3-iap6]
at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1642) [jetty-servlet-9.4.10.v20180503.jar:9.4.10.v20180503]
at io.druid.server.security.PreResponseAuthorizationCheckFilter.doFilter(PreResponseAuthorizationCheckFilter.java:84) [druid-server-0.12.3-iap6.jar:0.12.3-iap6]
at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1642) [jetty-servlet-9.4.10.v20180503.jar:9.4.10.v20180503]
at io.druid.server.security.AllowOptionsResourceFilter.doFilter(AllowOptionsResourceFilter.java:76) [druid-server-0.12.3-iap6.jar:0.12.3-iap6]
at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1642) [jetty-servlet-9.4.10.v20180503.jar:9.4.10.v20180503]
at io.druid.server.security.AllowAllAuthenticator$1.doFilter(AllowAllAuthenticator.java:85) [druid-server-0.12.3-iap6.jar:0.12.3-iap6]
at io.druid.server.security.AuthenticationWrappingFilter.doFilter(AuthenticationWrappingFilter.java:60) [druid-server-0.12.3-iap6.jar:0.12.3-iap6]
at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1642) [jetty-servlet-9.4.10.v20180503.jar:9.4.10.v20180503]
at io.druid.server.security.SecuritySanityCheckFilter.doFilter(SecuritySanityCheckFilter.java:88) [druid-server-0.12.3-iap6.jar:0.12.3-iap6]
at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1642) [jetty-servlet-9.4.10.v20180503.jar:9.4.10.v20180503]
at org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:533) [jetty-servlet-9.4.10.v20180503.jar:9.4.10.v20180503]
at org.eclipse.jetty.server.handler.ScopedHandler.nextHandle(ScopedHandler.java:255) [jetty-server-9.4.10.v20180503.jar:9.4.10.v20180503]
at org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:1595) [jetty-server-9.4.10.v20180503.jar:9.4.10.v20180503]
at org.eclipse.jetty.server.handler.ScopedHandler.nextHandle(ScopedHandler.java:255) [jetty-server-9.4.10.v20180503.jar:9.4.10.v20180503]
at org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1253) [jetty-server-9.4.10.v20180503.jar:9.4.10.v20180503]
at org.eclipse.jetty.server.handler.ScopedHandler.nextScope(ScopedHandler.java:203) [jetty-server-9.4.10.v20180503.jar:9.4.10.v20180503]
at org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:473) [jetty-servlet-9.4.10.v20180503.jar:9.4.10.v20180503]
at org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:1564) [jetty-server-9.4.10.v20180503.jar:9.4.10.v20180503]
at org.eclipse.jetty.server.handler.ScopedHandler.nextScope(ScopedHandler.java:201) [jetty-server-9.4.10.v20180503.jar:9.4.10.v20180503]
at org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1155) [jetty-server-9.4.10.v20180503.jar:9.4.10.v20180503]
at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:144) [jetty-server-9.4.10.v20180503.jar:9.4.10.v20180503]
at org.eclipse.jetty.server.handler.gzip.GzipHandler.handle(GzipHandler.java:724) [jetty-server-9.4.10.v20180503.jar:9.4.10.v20180503]
at org.eclipse.jetty.server.handler.HandlerList.handle(HandlerList.java:61) [jetty-server-9.4.10.v20180503.jar:9.4.10.v20180503]
at org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:132) [jetty-server-9.4.10.v20180503.jar:9.4.10.v20180503]
at org.eclipse.jetty.server.Server.handle(Server.java:531) [jetty-server-9.4.10.v20180503.jar:9.4.10.v20180503]
at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:352) [jetty-server-9.4.10.v20180503.jar:9.4.10.v20180503]
at org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:260) [jetty-server-9.4.10.v20180503.jar:9.4.10.v20180503]
at org.eclipse.jetty.io.AbstractConnection$ReadCallback.succeeded(AbstractConnection.java:281) [jetty-io-9.4.10.v20180503.jar:9.4.10.v20180503]
at org.eclipse.jetty.io.FillInterest.fillable(FillInterest.java:102) [jetty-io-9.4.10.v20180503.jar:9.4.10.v20180503]
at org.eclipse.jetty.io.ChannelEndPoint$2.run(ChannelEndPoint.java:118) [jetty-io-9.4.10.v20180503.jar:9.4.10.v20180503]
at org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.runTask(EatWhatYouKill.java:333) [jetty-util-9.4.10.v20180503.jar:9.4.10.v20180503]
at org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:310) [jetty-util-9.4.10.v20180503.jar:9.4.10.v20180503]
at org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:168) [jetty-util-9.4.10.v20180503.jar:9.4.10.v20180503]
at org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.run(EatWhatYouKill.java:126) [jetty-util-9.4.10.v20180503.jar:9.4.10.v20180503]
at org.eclipse.jetty.util.thread.ReservedThreadExecutor$ReservedThread.run(ReservedThreadExecutor.java:366) [jetty-util-9.4.10.v20180503.jar:9.4.10.v20180503]
at org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:760) [jetty-util-9.4.10.v20180503.jar:9.4.10.v20180503]
at org.eclipse.jetty.util.thread.QueuedThreadPool$2.run(QueuedThreadPool.java:678) [jetty-util-9.4.10.v20180503.jar:9.4.10.v20180503]
at java.lang.Thread.run(Thread.java:748) [?:1.8.0_171]
Caused by: org.skife.jdbi.v2.exceptions.UnableToExecuteStatementException: org.postgresql.util.PSQLException: ERROR: duplicate key value violates unique constraint "druid_pendingsegments_pkey"
  Detail: Key (id)=(dsp_traf_supervisor_2018-10-12T11:15:00.000Z_2018-10-12T11:30:00.000Z_2018-10-12T11:15:00.092Z_12) already exists. [statement:"INSERT INTO druid_pendingSegments (id, dataSource, created_date, start, \"end\", sequence_name, sequence_prev_id, sequence_name_prev_id_sha1, payload) VALUES (:id, :dataSource, :created_date, :start, :end, :sequence_name, :sequence_prev_id, :sequence_name_prev_id_sha1, :payload)", located:"INSERT INTO druid_pendingSegments (id, dataSource, created_date, start, \"end\", sequence_name, sequence_prev_id, sequence_name_prev_id_sha1, payload) VALUES (:id, :dataSource, :created_date, :start, :end, :sequence_name, :sequence_prev_id, :sequence_name_prev_id_sha1, :payload)", rewritten:"INSERT INTO druid_pendingSegments (id, dataSource, created_date, start, "end", sequence_name, sequence_prev_id, sequence_name_prev_id_sha1, payload) VALUES (?, ?, ?, ?, ?, ?, ?, ?, ?)", arguments:{ positional:{}, named:{sequence_prev_id:'',payload:[123, 34, 100, 97, 116, 97, 83, 111, 117, 114, 99, 101, 34, 58, 34, 100, 115, 112, 95, 116, 114, 97, 102, 95, 115, 117, 112, 101, 114, 118, 105, 115, 111, 114, 34, 44, 34, 105, 110, 116, 101, 114, 118, 97, 108, 34, 58, 34, 50, 48, 49, 56, 45, 49, 48, 45, 49, 50, 84, 49, 49, 58, 49, 53, 58, 48, 48, 46, 48, 48, 48, 90, 47, 50, 48, 49, 56, 45, 49, 48, 45, 49, 50, 84, 49, 49, 58, 51, 48, 58, 48, 48, 46, 48, 48, 48, 90, 34, 44, 34, 118, 101, 114, 115, 105, 111, 110, 34, 58, 34, 50, 48, 49, 56, 45, 49, 48, 45, 49, 50, 84, 49, 49, 58, 49, 53, 58, 48, 48, 46, 48, 57, 50, 90, 34, 44, 34, 115, 104, 97, 114, 100, 83, 112, 101, 99, 34, 58, 123, 34, 116, 121, 112, 101, 34, 58, 34, 110, 117, 109, 98, 101, 114, 101, 100, 34, 44, 34, 112, 97, 114, 116, 105, 116, 105, 111, 110, 78, 117, 109, 34, 58, 49, 50, 44, 34, 112, 97, 114, 116, 105, 116, 105, 111, 110, 115, 34, 58, 48, 125, 125],start:'2018-10-12T11:15:00.000Z',end:'2018-10-12T11:30:00.000Z',id:'dsp_traf_supervisor_2018-10-12T11:15:00.000Z_2018-10-12T11:30:00.000Z_2018-10-12T11:15:00.092Z_12',created_date:'2018-10-12T11:15:23.418Z',dataSource:'dsp_traf_supervisor',sequence_name:'index_kafka_dsp_traf_supervisor_827bb8282302a59_0',sequence_name_prev_id_sha1:'46CF081194E631F94FD6879CEF3C3A3AF4C5068F'}, finder:[]}]
at org.skife.jdbi.v2.SQLStatement.internalExecute(SQLStatement.java:1334) ~[jdbi-2.63.1.jar:2.63.1]
at org.skife.jdbi.v2.Update.execute(Update.java:56) ~[jdbi-2.63.1.jar:2.63.1]
at io.druid.metadata.IndexerSQLMetadataStorageCoordinator.insertToMetastore(IndexerSQLMetadataStorageCoordinator.java:662) ~[druid-server-0.12.3-iap6.jar:0.12.3-iap6]
at io.druid.metadata.IndexerSQLMetadataStorageCoordinator.allocatePendingSegment(IndexerSQLMetadataStorageCoordinator.java:547) ~[druid-server-0.12.3-iap6.jar:0.12.3-iap6]
at io.druid.metadata.IndexerSQLMetadataStorageCoordinator.access$200(IndexerSQLMetadataStorageCoordinator.java:80) ~[druid-server-0.12.3-iap6.jar:0.12.3-iap6]
at io.druid.metadata.IndexerSQLMetadataStorageCoordinator$3.inTransaction(IndexerSQLMetadataStorageCoordinator.java:404) ~[druid-server-0.12.3-iap6.jar:0.12.3-iap6]
at io.druid.metadata.IndexerSQLMetadataStorageCoordinator$3.inTransaction(IndexerSQLMetadataStorageCoordinator.java:399) ~[druid-server-0.12.3-iap6.jar:0.12.3-iap6]
at org.skife.jdbi.v2.tweak.transactions.LocalTransactionHandler.inTransaction(LocalTransactionHandler.java:184) ~[jdbi-2.63.1.jar:2.63.1]
at org.skife.jdbi.v2.BasicHandle.inTransaction(BasicHandle.java:327) ~[jdbi-2.63.1.jar:2.63.1]
at org.skife.jdbi.v2.DBI$5.withHandle(DBI.java:333) ~[jdbi-2.63.1.jar:2.63.1]
at org.skife.jdbi.v2.DBI.withHandle(DBI.java:281) ~[jdbi-2.63.1.jar:2.63.1]
... 80 more
Caused by: org.postgresql.util.PSQLException: ERROR: duplicate key value violates unique constraint "druid_pendingsegments_pkey"
  Detail: Key (id)=(dsp_traf_supervisor_2018-10-12T11:15:00.000Z_2018-10-12T11:30:00.000Z_2018-10-12T11:15:00.092Z_12) already exists.
at org.postgresql.core.v3.QueryExecutorImpl.receiveErrorResponse(QueryExecutorImpl.java:2284) ~[?:?]
at org.postgresql.core.v3.QueryExecutorImpl.processResults(QueryExecutorImpl.java:2003) ~[?:?]
at org.postgresql.core.v3.QueryExecutorImpl.execute(QueryExecutorImpl.java:200) ~[?:?]
at org.postgresql.jdbc.PgStatement.execute(PgStatement.java:424) ~[?:?]
at org.postgresql.jdbc.PgPreparedStatement.executeWithFlags(PgPreparedStatement.java:161) ~[?:?]
at org.postgresql.jdbc.PgPreparedStatement.execute(PgPreparedStatement.java:155) ~[?:?]
at org.apache.commons.dbcp2.DelegatingPreparedStatement.execute(DelegatingPreparedStatement.java:198) ~[commons-dbcp2-2.0.1.jar:2.0.1]
at org.apache.commons.dbcp2.DelegatingPreparedStatement.execute(DelegatingPreparedStatement.java:198) ~[commons-dbcp2-2.0.1.jar:2.0.1]
at org.skife.jdbi.v2.SQLStatement.internalExecute(SQLStatement.java:1328) ~[jdbi-2.63.1.jar:2.63.1]
at org.skife.jdbi.v2.Update.execute(Update.java:56) ~[jdbi-2.63.1.jar:2.63.1]
at io.druid.metadata.IndexerSQLMetadataStorageCoordinator.insertToMetastore(IndexerSQLMetadataStorageCoordinator.java:662) ~[druid-server-0.12.3-iap6.jar:0.12.3-iap6]
at io.druid.metadata.IndexerSQLMetadataStorageCoordinator.allocatePendingSegment(IndexerSQLMetadataStorageCoordinator.java:547) ~[druid-server-0.12.3-iap6.jar:0.12.3-iap6]
at io.druid.metadata.IndexerSQLMetadataStorageCoordinator.access$200(IndexerSQLMetadataStorageCoordinator.java:80) ~[druid-server-0.12.3-iap6.jar:0.12.3-iap6]
at io.druid.metadata.IndexerSQLMetadataStorageCoordinator$3.inTransaction(IndexerSQLMetadataStorageCoordinator.java:404) ~[druid-server-0.12.3-iap6.jar:0.12.3-iap6]
at io.druid.metadata.IndexerSQLMetadataStorageCoordinator$3.inTransaction(IndexerSQLMetadataStorageCoordinator.java:399) ~[druid-server-0.12.3-iap6.jar:0.12.3-iap6]
at org.skife.jdbi.v2.tweak.transactions.LocalTransactionHandler.inTransaction(LocalTransactionHandler.java:184) ~[jdbi-2.63.1.jar:2.63.1]
at org.skife.jdbi.v2.BasicHandle.inTransaction(BasicHandle.java:327) ~[jdbi-2.63.1.jar:2.63.1]
at org.skife.jdbi.v2.DBI$5.withHandle(DBI.java:333) ~[jdbi-2.63.1.jar:2.63.1]
at org.skife.jdbi.v2.DBI.withHandle(DBI.java:281) ~[jdbi-2.63.1.jar:2.63.1]
... 80 more
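
For reference, the conflicting row can be inspected directly in the metadata store. This is a sketch for PostgreSQL, using the table and column names visible in the failing INSERT above (adjust the id as needed):

-- Look up the pendingSegments row whose id collided with the primary key.
SELECT id, dataSource, sequence_name, sequence_prev_id, created_date
FROM druid_pendingsegments
WHERE id = 'dsp_traf_supervisor_2018-10-12T11:15:00.000Z_2018-10-12T11:30:00.000Z_2018-10-12T11:15:00.092Z_12';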


Gian Merlino

Oct 12, 2018, 11:25:34 AM
to druid...@googlegroups.com
Hey Павел,

These errors are usually transient and retryable (they happen when two threads are trying to allocate a segment at the same time). In fact, it's totally normal for threads to cross paths from time to time, and Druid actually won't log this error unless it's gone through at least a few retries.

If you are seeing this repeatedly, it might be because segment allocation is taking a long time, which increases the probability of conflicts. You can mitigate it by adding the indexes from this PR: https://github.com/apache/incubator-druid/pull/6356. They'll be added by default in 0.13+ and they help quite a bit. It can also help to enable pending segment cleanup (druid.coordinator.kill.pendingSegments.on; see http://druid.io/docs/latest/configuration/index.html#coordinator-operation). That will probably also be on by default in the future.
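
For anyone applying this by hand before upgrading, the exact index DDL is in the PR itself; purely as an illustration of the shape (index name and column choice here are a sketch, PostgreSQL syntax):

-- Illustration only; see the PR above for the actual definitions that ship in 0.13+.
-- The idea is to index the columns that the segment-allocation lookup filters on.
CREATE INDEX idx_druid_pendingsegments_datasource_end
  ON druid_pendingsegments (dataSource, "end");

Pending segment cleanup is just the one Coordinator property, e.g. druid.coordinator.kill.pendingSegments.on=true in the Coordinator's runtime.properties.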

Gian



Павел Кураев

Oct 16, 2018, 10:40:51 AM
to Druid User
Thanks, Gian. The error in the Overlord is gone. But the indexing tasks still fail as before :( We checked the Overlord logs but did not find any exception there that explains the failure while publishing segments.
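
In case it helps others, here is roughly how one can check whether stale pendingSegments rows are being cleaned up. This is a sketch for PostgreSQL; the cutoff timestamp is only an example, and string comparison works here because created_date is stored as an ISO-8601 string, which sorts chronologically:

-- List pending-segment rows for this datasource older than an example cutoff.
SELECT id, sequence_name, created_date
FROM druid_pendingsegments
WHERE dataSource = 'dsp_traf_supervisor'
  AND created_date < '2018-10-15T00:00:00.000Z'
ORDER BY created_date;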

On Friday, October 12, 2018, at 18:25:34 UTC+3, Gian Merlino wrote: