2018-06-18T15:28:31,806 INFO [task-runner-0-priority-0] io.druid.indexing.common.actions.RemoteTaskActionClient - Submitting action for task[<MASKED>] to overlord: [SegmentAllocateAction{dataSource='<MASKED>', timestamp=2018-01-31T03:27:49.104Z, queryGranularity=NoneGranularity, preferredSegmentGranularity={type=period, period=PT1H, timeZone=UTC, origin=null}, sequenceName='index_kafka_caliper-sample-atomic-avro-kafka-prod-eventTime-test6_31f44481cd36d94_0', previousSegmentId='caliper-sample-atomic-avro-kafka-prod-eventTime-test6_2018-01-29T22:00:00.000Z_2018-01-29T23:00:00.000Z_2018-06-18T15:28:29.288Z', skipSegmentLineageCheck='true'}].
...
"granularitySpec" : {
"type" : "uniform",
"segmentGranularity" : "HOUR",
"queryGranularity" : "NONE",
"rollup" : false
}
},
"ioConfig" : {
"topic" : "<Masked>
",
"consumerProperties": {
"bootstrap.servers": "<Masked>",
"group.id": "druid-indexer-consumer-avro-prod-atomic-6"
},
"replicas":"2",
"taskCount":"1",
"taskDuration": "PT10M",
"useEarliestOffset":true
},
"tuningConfig" : {
"type" : "kafka",
"resetOffsetAutomatically":true
}
...
}
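For reference, a supervisor spec like the (partially elided) one above gets submitted to the Overlord over HTTP. A minimal sketch, assuming the full spec is saved as supervisor-spec.json and the Overlord listens on its default port (host and file name here are placeholders, not my actual values):

curl -X POST -H 'Content-Type: application/json' \
  -d @supervisor-spec.json \
  http://overlord-host:8090/druid/indexer/v1/supervisor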
Overlord Logs:
==============
2018-06-19T11:50:32,603 INFO [qtp465869765-108] io.druid.indexing.common.actions.LocalTaskActionClient - Performing action for task[index_kafka_caliper-sample-atomic-avro-kafka-prod-eventTime-test6_31f44481cd36d94_nndbkfpn]: SegmentAllocateAction{dataSource='caliper-sample-atomic-avro-kafka-prod-eventTime-test6', timestamp=2018-01-31T03:27:49.104Z, queryGranularity=NoneGranularity, preferredSegmentGranularity={type=period, period=PT1H, timeZone=UTC, origin=null}, sequenceName='index_kafka_caliper-sample-atomic-avro-kafka-prod-eventTime-test6_31f44481cd36d94_0', previousSegmentId='caliper-sample-atomic-avro-kafka-prod-eventTime-test6_2018-01-29T22:00:00.000Z_2018-01-29T23:00:00.000Z_2018-06-18T15:28:29.288Z', skipSegmentLineageCheck='true'}
2018-06-19T11:50:32,605 DEBUG [qtp465869765-108] io.druid.indexing.common.actions.SegmentAllocateAction - Trying to allocate pending segment for rowInterval[2018-01-31T03:27:49.104Z/2018-01-31T03:27:49.105Z], segmentInterval[2018-01-31T03:00:00.000Z/2018-01-31T04:00:00.000Z].
2018-06-19T11:50:32,605 INFO [qtp465869765-108] io.druid.indexing.overlord.TaskLockbox - Added task[index_kafka_caliper-sample-atomic-avro-kafka-prod-eventTime-test6_31f44481cd36d94_nndbkfpn] to TaskLock[index_kafka_caliper-sample-atomic-avro-kafka-prod-eventTime-test6]
2018-06-19T11:50:32,605 INFO [qtp465869765-108] io.druid.indexing.overlord.MetadataTaskStorage - Adding lock on interval[2018-01-31T03:00:00.000Z/2018-01-31T04:00:00.000Z] version[2018-06-19T11:50:20.668Z] for task: index_kafka_caliper-sample-atomic-avro-kafka-prod-eventTime-test6_31f44481cd36d94_nndbkfpn
2018-06-19T11:50:32,610 INFO [qtp465869765-108] io.druid.metadata.IndexerSQLMetadataStorageCoordinator - Found existing pending segment [caliper-sample-atomic-avro-kafka-prod-eventTime-test6_2018-01-31T03:00:00.000Z_2018-01-31T04:00:00.000Z_2018-06-18T15:28:25.364Z] for sequence[index_kafka_caliper-sample-atomic-avro-kafka-prod-eventTime-test6_31f44481cd36d94_0] (previous = [caliper-sample-atomic-avro-kafka-prod-eventTime-test6_2018-01-29T22:00:00.000Z_2018-01-29T23:00:00.000Z_2018-06-18T15:28:29.288Z]) in DB
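The "Found existing pending segment" line above corresponds to a row in the metadata store's druid_pendingSegments table, so the allocation history can be inspected directly. A sketch, assuming a MySQL metadata store named druid with the default table prefix (the table and column names below are my assumption based on Druid's default schema and may differ if customized):

mysql -u druid -p druid -e "SELECT id, sequence_name, sequence_prev_id, created_date \
  FROM druid_pendingSegments \
  WHERE dataSource = 'caliper-sample-atomic-avro-kafka-prod-eventTime-test6' \
  ORDER BY created_date DESC LIMIT 10;"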
Some snippets from the task logs are below. The incremental-persist thread seems to create the smoosh files and all of the version/metadata files in temporary storage; however, the segments are never pushed, and I don't see the S3 segment pusher invoked at all.
It is worth reiterating that I'm trying to ingest several million events (dated Jan 2018) through the Kafka indexing task!
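Before the snippets, for context: for the S3 segment pusher to run at all, deep storage has to be configured for S3 in common.runtime.properties. A sketch of what I'd expect there (property names are from the druid-s3-extensions docs; the file path and values are placeholders for my masked setup):

grep -E 'druid\.storage|druid\.extensions\.loadList' conf/druid/_common/common.runtime.properties
# Expected output along these lines (placeholder values):
# druid.extensions.loadList=["druid-s3-extensions", ...]
# druid.storage.type=s3
# druid.storage.bucket=<your-bucket>
# druid.storage.baseKey=druid/segments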
2018-06-20T13:49:31,742 INFO [caliper-sample-sessions-avro-kafka-prod-timestamp-test1-incremental-persist] io.druid.segment.StringDimensionMergerV9 - Completed dim[timestamp] inverted with cardinality[7,461] in 140,921 millis.
2018-06-20T13:49:31,789 INFO [caliper-sample-sessions-avro-kafka-prod-timestamp-test1-incremental-persist] io.druid.segment.IndexMergerV9 - Completed index.drd in 10 millis.
2018-06-20T13:49:31,820 INFO [caliper-sample-sessions-avro-kafka-prod-timestamp-test1-incremental-persist] io.druid.java.util.common.io.smoosh.FileSmoosher - Created smoosh file [/mnt/<Masked>/task/index_kafka_caliper-sample-sessions-avro-kafka-prod-timestamp-test1_eacb41e33fd1473_flpgjfbf/work/persist/caliper-sample-sessions-avro-kafka-prod-timestamp-test1_2018-01-22T19:00:00.000Z_2018-01-22T20:00:00.000Z_2018-06-20T13:44:32.472Z/0/00000.smoosh] of size [410760] bytes.
2018-06-20T13:49:32,164 DEBUG [caliper-sample-sessions-avro-kafka-prod-timestamp-test1-incremental-persist] io.druid.segment.IndexIO - Mapping v9 index[/mnt/<Masked>/index_kafka_caliper-sample-sessions-avro-kafka-prod-timestamp-test1_eacb41e33fd1473_flpgjfbf/work/persist/caliper-sample-sessions-avro-kafka-prod-timestamp-test1_2018-01-22T19:00:00.000Z_2018-01-22T20:00:00.000Z_2018-06-20T13:44:32.472Z/0]
2018-06-20T13:49:32,198 DEBUG [caliper-sample-sessions-avro-kafka-prod-timestamp-test1-incremental-persist] io.druid.segment.IndexIO - Mapped v9 index[/mnt/<Masked>/index_kafka_caliper-sample-sessions-avro-kafka-prod-timestamp-test1_eacb41e33fd1473_flpgjfbf/work/persist/caliper-sample-sessions-avro-kafka-prod-timestamp-test1_2018-01-22T19:00:00.000Z_2018-01-22T20:00:00.000Z_2018-06-20T13:44:32.472Z/0] in 34 millis
2018-06-20T13:49:32,203 INFO [caliper-sample-sessions-avro-kafka-prod-timestamp-test1-incremental-persist] io.druid.segment.realtime.appenderator.AppenderatorImpl - Segment[caliper-sample-sessions-avro-kafka-prod-timestamp-test1_2018-01-22T12:00:00.000Z_2018-01-22T13:00:00.000Z_2018-06-20T13:44:26.802Z], persisting Hydrant[FireHydrant{, queryable=caliper-sample-sessions-avro-kafka-prod-timestamp-test1_2018-01-22T12:00:00.000Z_2018-01-22T13:00:00.000Z_2018-06-20T13:44:26.802Z, count=0}]
2018-06-20T13:49:32,241 INFO [caliper-sample-sessions-avro-kafka-prod-timestamp-test1-incremental-persist] io.druid.segment.IndexMergerV9 - Starting persist for interval[2018-01-22T12:00:00.000Z/2018-01-22T13:00:00.000Z], rows[1,106]
2018-06-20T13:49:32,263 INFO [caliper-sample-sessions-avro-kafka-prod-timestamp-test1-incremental-persist] io.druid.segment.IndexMergerV9 - Using SegmentWriteOutMediumFactory[TmpFileSegmentWriteOutMediumFactory]
2018-06-20T13:49:32,290 INFO [caliper-sample-sessions-avro-kafka-prod-timestamp-test1-incremental-persist] io.druid.segment.IndexMergerV9 - Completed version.bin in 18 millis.
2018-06-20T13:49:32,307 INFO [caliper-sample-sessions-avro-kafka-prod-timestamp-test1-incremental-persist] io.druid.segment.IndexMergerV9 - Completed factory.json in 16 millis
2018-06-20T13:49:32,387 INFO [caliper-sample-sessions-avro-kafka-prod-timestamp-test1-incremental-persist] io.druid.segment.StringDimensionMergerV9 - Completed dim[eventType] conversions with cardinality[2] in 80 millis.
2018-06-20T13:49:32,455 INFO [caliper-sample-sessions-avro-kafka-prod-timestamp-test1-incremental-persist] io.druid.segment.StringDimensionMergerV9 - Completed dim[userbrowser.name] conversions with cardinality[0] in 34 millis.
2018-06-20T13:49:32,682 DEBUG [main-SendThread(ip-10-35-35-239.ec2.internal:2181)] org.apache.zookeeper.ClientCnxn - Got ping response for sessionid: 0xf64134df7f2d07f after 1ms
2018-06-20T13:49:34,562 INFO [caliper-sample-sessions-avro-kafka-prod-timestamp-test1-incremental-persist] io.druid.segment.StringDimensionMergerV9 - Completed dim[actor.districtPid] conversions with cardinality[111] in 2,071 millis.
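Since the persists complete but nothing is handed off, the supervisor's view of the publishing phase can also be checked through the Overlord API. A sketch (the host is a placeholder; the supervisor id is the dataSource name from the logs above):

curl http://overlord-host:8090/druid/indexer/v1/supervisor/caliper-sample-sessions-avro-kafka-prod-timestamp-test1/status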