incremental persist failed - unable to delete file

Gowtham Sai

Jan 11, 2017, 7:23:51 AM
to Druid User
The indexing task is not moving to the completed state. These are the last few lines of the task log. The file does exist; its permissions are nobody:nogroup.

2017-01-11T11:15:03,756 INFO [vnk-clst-incremental-persist] io.druid.segment.IndexIO$DefaultIndexIOHandler - Skipped files[[index.drd, inverted.drd, metadata.drd, spatial.drd]]
2017-01-11T11:15:03,883 ERROR [vnk-clst-incremental-persist] io.druid.segment.realtime.plumber.RealtimePlumber - dataSource[vnk-clst] -- incremental persist failed: {class=io.druid.segment.realtime.plumber.RealtimePlumber, interval=2017-01-11T10:00:00.000Z/2017-01-11T11:00:00.000Z, count=3}
Exception in thread "plumber_persist_2" java.lang.RuntimeException: java.io.IOException: Unable to delete file: var/druid/task/index_realtime_vnk-clst_2017-01-11T10:00:00.000Z_0_0/work/persist/vnk-clst/2017-01-11T10:00:00.000Z_2017-01-11T11:00:00.000Z/3/v8-tmp/.nfs00000000037403c30000130e
	at com.google.common.base.Throwables.propagate(Throwables.java:160)
	at io.druid.segment.realtime.plumber.RealtimePlumber.persistHydrant(RealtimePlumber.java:950)
	at io.druid.segment.realtime.plumber.RealtimePlumber$1.doRun(RealtimePlumber.java:320)
	at io.druid.common.guava.ThreadRenamingRunnable.run(ThreadRenamingRunnable.java:42)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
	at java.lang.Thread.run(Thread.java:745)
Caused by: java.io.IOException: Unable to delete file: var/druid/task/index_realtime_vnk-clst_2017-01-11T10:00:00.000Z_0_0/work/persist/vnk-clst/2017-01-11T10:00:00.000Z_2017-01-11T11:00:00.000Z/3/v8-tmp/.nfs00000000037403c30000130e
	at org.apache.commons.io.FileUtils.forceDelete(FileUtils.java:2279)
	at org.apache.commons.io.FileUtils.cleanDirectory(FileUtils.java:1653)
	at org.apache.commons.io.FileUtils.deleteDirectory(FileUtils.java:1535)
	at io.druid.segment.IndexMerger$10.close(IndexMerger.java:637)
	at com.google.common.io.Closer.close(Closer.java:214)
	at io.druid.segment.IndexMerger.makeIndexFiles(IndexMerger.java:906)
	at io.druid.segment.IndexMerger.merge(IndexMerger.java:438)
	at io.druid.segment.IndexMerger.persist(IndexMerger.java:186)
	at io.druid.segment.IndexMerger.persist(IndexMerger.java:152)
	at io.druid.segment.realtime.plumber.RealtimePlumber.persistHydrant(RealtimePlumber.java:929)
	... 5 more
2017-01-11T12:10:00,002 INFO [vnk-clst-overseer-0] io.druid.segment.realtime.plumber.RealtimePlumber - Starting merge and push.
2017-01-11T12:10:00,002 INFO [vnk-clst-overseer-0] io.druid.segment.realtime.plumber.RealtimePlumber - Found [1] segments. Attempting to hand off segments that start before [1970-01-01T00:00:00.000Z].
2017-01-11T12:10:00,002 INFO [vnk-clst-overseer-0] io.druid.segment.realtime.plumber.RealtimePlumber - Skipping persist and merge for entry [1484128800000=Sink{interval=2017-01-11T10:00:00.000Z/2017-01-11T11:00:00.000Z, schema=DataSchema{dataSource='vnk-clst', parser={type=map, parseSpec={format=json, timestampSpec={column=timestamp, format=millis, missingValue=null}, dimensionsSpec={dimensionExclusions=[count, hyper_user_id, timestamp, session_count, huser_id, value], spatialDimensions=[]}}}, aggregators=[CountAggregatorFactory{name='count'}, CountAggregatorFactory{name='session_count'}, HyperUniquesAggregatorFactory{name='hyper_user_id', fieldName='huser_id'}], granularitySpec=io.druid.segment.indexing.granularity.UniformGranularitySpec@983fb11e}}] : Start time [2017-01-11T10:00:00.000Z] >= [1970-01-01T00:00:00.000Z] min timestamp required in this run. Segment will be picked up in a future run.
2017-01-11T12:10:00,002 INFO [vnk-clst-overseer-0] io.druid.segment.realtime.plumber.RealtimePlumber - Found [0] sinks to persist and merge
8216.954: [GC pause (G1 Evacuation Pause) (young), 0.0074057 secs]
   [Parallel Time: 6.2 ms, GC Workers: 2]
      [GC Worker Start (ms): Min: 8216954.1, Avg: 8216954.2, Max: 8216954.2, Diff: 0.1]
      [Ext Root Scanning (ms): Min: 1.3, Avg: 1.4, Max: 1.5, Diff: 0.3, Sum: 2.8]
      [Update RS (ms): Min: 2.9, Avg: 2.9, Max: 2.9, Diff: 0.0, Sum: 5.9]
         [Processed Buffers: Min: 25, Avg: 30.5, Max: 36, Diff: 11, Sum: 61]
      [Scan RS (ms): Min: 0.0, Avg: 0.0, Max: 0.0, Diff: 0.0, Sum: 0.1]
      [Code Root Scanning (ms): Min: 0.0, Avg: 0.0, Max: 0.1, Diff: 0.0, Sum: 0.1]
      [Object Copy (ms): Min: 1.6, Avg: 1.7, Max: 1.8, Diff: 0.2, Sum: 3.4]
      [Termination (ms): Min: 0.0, Avg: 0.0, Max: 0.0, Diff: 0.0, Sum: 0.0]
         [Termination Attempts: Min: 1, Avg: 1.0, Max: 1, Diff: 0, Sum: 2]
      [GC Worker Other (ms): Min: 0.0, Avg: 0.0, Max: 0.0, Diff: 0.0, Sum: 0.0]
      [GC Worker Total (ms): Min: 6.1, Avg: 6.1, Max: 6.2, Diff: 0.1, Sum: 12.2]
      [GC Worker End (ms): Min: 8216960.3, Avg: 8216960.3, Max: 8216960.3, Diff: 0.0]
   [Code Root Fixup: 0.1 ms]
   [Code Root Purge: 0.0 ms]
   [Clear CT: 0.1 ms]
   [Other: 1.0 ms]
      [Choose CSet: 0.0 ms]
      [Ref Proc: 0.6 ms]
      [Ref Enq: 0.0 ms]
      [Redirty Cards: 0.0 ms]
      [Humongous Register: 0.0 ms]
      [Humongous Reclaim: 0.0 ms]
      [Free CSet: 0.1 ms]
   [Eden: 142.0M(142.0M)->0.0B(141.0M) Survivors: 2048.0K->3072.0K Heap: 199.2M(240.0M)->57.6M(240.0M)]
 [Times: user=0.02 sys=0.00, real=0.00 secs] 

Gowtham Sai

Jan 11, 2017, 8:12:18 AM
to druid...@googlegroups.com
2017-01-11T10:15:04,755 INFO [vnk-clst-incremental-persist] io.druid.segment.IndexIO$DefaultIndexIOHandler - Processing file[met_hyper_user_id_LITTLE_ENDIAN.drd]
2017-01-11T10:15:04,758 INFO [vnk-clst-incremental-persist] io.druid.segment.IndexIO$DefaultIndexIOHandler - Processing file[met_session_count_LITTLE_ENDIAN.drd]
2017-01-11T10:15:04,758 INFO [vnk-clst-incremental-persist] io.druid.segment.IndexIO$DefaultIndexIOHandler - Processing file[metadata.drd]
2017-01-11T10:15:04,758 INFO [vnk-clst-incremental-persist] io.druid.segment.IndexIO$DefaultIndexIOHandler - Processing file[spatial.drd]
2017-01-11T10:15:04,758 INFO [vnk-clst-incremental-persist] io.druid.segment.IndexIO$DefaultIndexIOHandler - Processing file[time_LITTLE_ENDIAN.drd]
2017-01-11T10:15:04,764 INFO [vnk-clst-incremental-persist] io.druid.segment.IndexIO$DefaultIndexIOHandler - Skipped files[[index.drd, inverted.drd, metadata.drd, spatial.drd]]
2017-01-11T10:15:04,918 ERROR [vnk-clst-incremental-persist] io.druid.segment.realtime.plumber.RealtimePlumber - dataSource[vnk-clst] -- incremental persist failed: {class=io.druid.segment.realtime.plumber.RealtimePlumber, interval=2017-01-11T09:00:00.000Z/2017-01-11T10:00:00.000Z, count=0}
Exception in thread "plumber_persist_0" java.lang.RuntimeException: java.io.IOException: Unable to delete file: var/druid/task/index_realtime_vnk-clst_2017-01-11T09:00:00.000Z_0_0/work/persist/vnk-clst/2017-01-11T09:00:00.000Z_2017-01-11T10:00:00.000Z/0/v8-tmp/.nfs00000000037402c40000122f
	at com.google.common.base.Throwables.propagate(Throwables.java:160)
	at io.druid.segment.realtime.plumber.RealtimePlumber.persistHydrant(RealtimePlumber.java:950)
	at io.druid.segment.realtime.plumber.RealtimePlumber$1.doRun(RealtimePlumber.java:320)
	at io.druid.common.guava.ThreadRenamingRunnable.run(ThreadRenamingRunnable.java:42)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
	at java.lang.Thread.run(Thread.java:745)
Caused by: java.io.IOException: Unable to delete file: var/druid/task/index_realtime_vnk-clst_2017-01-11T09:00:00.000Z_0_0/work/persist/vnk-clst/2017-01-11T09:00:00.000Z_2017-01-11T10:00:00.000Z/0/v8-tmp/.nfs00000000037402c40000122f
	at org.apache.commons.io.FileUtils.forceDelete(FileUtils.java:2279)
	at org.apache.commons.io.FileUtils.cleanDirectory(FileUtils.java:1653)
	at org.apache.commons.io.FileUtils.deleteDirectory(FileUtils.java:1535)
	at io.druid.segment.IndexMerger$10.close(IndexMerger.java:637)
	at com.google.common.io.Closer.close(Closer.java:214)
	at io.druid.segment.IndexMerger.makeIndexFiles(IndexMerger.java:906)
	at io.druid.segment.IndexMerger.merge(IndexMerger.java:438)
	at io.druid.segment.IndexMerger.persist(IndexMerger.java:186)
	at io.druid.segment.IndexMerger.persist(IndexMerger.java:152)
	at io.druid.segment.realtime.plumber.RealtimePlumber.persistHydrant(RealtimePlumber.java:929)
	... 5 more
3645.751: [GC pause (G1 Evacuation Pause) (young), 0.0310372 secs]
   [Parallel Time: 27.7 ms, GC Workers: 2]
      [GC Worker Start (ms): Min: 3645750.7, Avg: 3645750.7, Max: 3645750.7, Diff: 0.0]
      [Ext Root Scanning (ms): Min: 2.7, Avg: 3.6, Max: 4.5, Diff: 1.8, Sum: 7.1]
      [Update RS (ms): Min: 0.3, Avg: 0.3, Max: 0.4, Diff: 0.1, Sum: 0.7]
         [Processed Buffers: Min: 1, Avg: 5.5, Max: 10, Diff: 9, Sum: 11]
      [Scan RS (ms): Min: 0.0, Avg: 0.0, Max: 0.1, Diff: 0.1, Sum: 0.1]
      [Code Root Scanning (ms): Min: 1.0, Avg: 1.1, Max: 1.1, Diff: 0.1, Sum: 2.2]
      [Object Copy (ms): Min: 21.6, Avg: 22.5, Max: 23.4, Diff: 1.8, Sum: 45.1]
      [Termination (ms): Min: 0.0, Avg: 0.0, Max: 0.0, Diff: 0.0, Sum: 0.0]
         [Termination Attempts: Min: 1, Avg: 1.0, Max: 1, Diff: 0, Sum: 2]
      [GC Worker Other (ms): Min: 0.0, Avg: 0.0, Max: 0.0, Diff: 0.0, Sum: 0.0]
      [GC Worker Total (ms): Min: 27.6, Avg: 27.6, Max: 27.6, Diff: 0.0, Sum: 55.2]
      [GC Worker End (ms): Min: 3645778.3, Avg: 3645778.3, Max: 3645778.3, Diff: 0.0]
   [Code Root Fixup: 0.4 ms]
   [Code Root Purge: 0.0 ms]
   [Clear CT: 0.1 ms]
   [Other: 2.9 ms]
      [Choose CSet: 0.0 ms]
      [Ref Proc: 2.5 ms]
      [Ref Enq: 0.0 ms]
      [Redirty Cards: 0.1 ms]
      [Humongous Register: 0.0 ms]
      [Humongous Reclaim: 0.0 ms]
      [Free CSet: 0.1 ms]
   [Eden: 129.0M(129.0M)->0.0B(127.0M) Survivors: 15.0M->17.0M Heap: 146.0M(240.0M)->19.9M(240.0M)]
 [Times: user=0.06 sys=0.00, real=0.03 secs] 
2017-01-11T11:10:00,006 INFO [vnk-clst-overseer-0] io.druid.segment.realtime.plumber.RealtimePlumber - Starting merge and push.
2017-01-11T11:10:00,006 INFO [vnk-clst-overseer-0] io.druid.segment.realtime.plumber.RealtimePlumber - Found [1] segments. Attempting to hand off segments that start before [1970-01-01T00:00:00.000Z].
2017-01-11T11:10:00,007 INFO [vnk-clst-overseer-0] io.druid.segment.realtime.plumber.RealtimePlumber - Skipping persist and merge for entry [1484125200000=Sink{interval=2017-01-11T09:00:00.000Z/2017-01-11T10:00:00.000Z, schema=DataSchema{dataSource='vnk-clst', parser={type=map, parseSpec={format=json, timestampSpec={column=timestamp, format=millis, missingValue=null}, dimensionsSpec={dimensionExclusions=[count, hyper_user_id, timestamp, session_count, huser_id, value], spatialDimensions=[]}}}, aggregators=[CountAggregatorFactory{name='count'}, CountAggregatorFactory{name='session_count'}, HyperUniquesAggregatorFactory{name='hyper_user_id', fieldName='huser_id'}], granularitySpec=io.druid.segment.indexing.granularity.UniformGranularitySpec@4db9e60f}}] : Start time [2017-01-11T09:00:00.000Z] >= [1970-01-01T00:00:00.000Z] min timestamp required in this run. Segment will be picked up in a future run.
2017-01-11T11:10:00,007 INFO [vnk-clst-overseer-0] io.druid.segment.realtime.plumber.RealtimePlumber - Found [0] sinks to persist and merge
2017-01-11T11:10:51,054 INFO [topN_vnk-clst_[2017-01-11T09:00:00.000Z/2017-01-11T10:00:00.000Z]] io.druid.offheap.OffheapBufferGenerator - Allocating new intermediate processing buffer[0] of size[100,000,000]
2017-01-11T11:10:51,056 INFO [topN_vnk-clst_[2017-01-11T09:00:00.000Z/2017-01-11T10:00:00.000Z]] io.druid.offheap.OffheapBufferGenerator - Allocating new intermediate processing buffer[1] of size[100,000,000]
2017-01-11T12:10:00,006 INFO [vnk-clst-overseer-0] io.druid.segment.realtime.plumber.RealtimePlumber - Starting merge and push.
2017-01-11T12:10:00,006 INFO [vnk-clst-overseer-0] io.druid.segment.realtime.plumber.RealtimePlumber - Found [1] segments. Attempting to hand off segments that start before [1970-01-01T00:00:00.000Z].
2017-01-11T12:10:00,007 INFO [vnk-clst-overseer-0] io.druid.segment.realtime.plumber.RealtimePlumber - Skipping persist and merge for entry [1484125200000=Sink{interval=2017-01-11T09:00:00.000Z/2017-01-11T10:00:00.000Z, schema=DataSchema{dataSource='vnk-clst', parser={type=map, parseSpec={format=json, timestampSpec={column=timestamp, format=millis, missingValue=null}, dimensionsSpec={dimensionExclusions=[count, hyper_user_id, timestamp, session_count, huser_id, value], spatialDimensions=[]}}}, aggregators=[CountAggregatorFactory{name='count'}, CountAggregatorFactory{name='session_count'}, HyperUniquesAggregatorFactory{name='hyper_user_id', fieldName='huser_id'}], granularitySpec=io.druid.segment.indexing.granularity.UniformGranularitySpec@4db9e60f}}] : Start time [2017-01-11T09:00:00.000Z] >= [1970-01-01T00:00:00.000Z] min timestamp required in this run. Segment will be picked up in a future run.
2017-01-11T12:10:00,007 INFO [vnk-clst-overseer-0] io.druid.segment.realtime.plumber.RealtimePlumber - Found [0] sinks to persist and merge
2017-01-11T13:10:00,006 INFO [vnk-clst-overseer-0] io.druid.segment.realtime.plumber.RealtimePlumber - Starting merge and push.
2017-01-11T13:10:00,007 INFO [vnk-clst-overseer-0] io.druid.segment.realtime.plumber.RealtimePlumber - Found [1] segments. Attempting to hand off segments that start before [1970-01-01T00:00:00.000Z].
2017-01-11T13:10:00,007 INFO [vnk-clst-overseer-0] io.druid.segment.realtime.plumber.RealtimePlumber - Skipping persist and merge for entry [1484125200000=Sink{interval=2017-01-11T09:00:00.000Z/2017-01-11T10:00:00.000Z, schema=DataSchema{dataSource='vnk-clst', parser={type=map, parseSpec={format=json, timestampSpec={column=timestamp, format=millis, missingValue=null}, dimensionsSpec={dimensionExclusions=[count, hyper_user_id, timestamp, session_count, huser_id, value], spatialDimensions=[]}}}, aggregators=[CountAggregatorFactory{name='count'}, CountAggregatorFactory{name='session_count'}, HyperUniquesAggregatorFactory{name='hyper_user_id', fieldName='huser_id'}], granularitySpec=io.druid.segment.indexing.granularity.UniformGranularitySpec@4db9e60f}}] : Start time [2017-01-11T09:00:00.000Z] >= [1970-01-01T00:00:00.000Z] min timestamp required in this run. Segment will be picked up in a future run.
2017-01-11T13:10:00,007 INFO [vnk-clst-overseer-0] io.druid.segment.realtime.plumber.RealtimePlumber - Found [0] sinks to persist and merge


Is there any issue with the task? It still shows as a running task in the Overlord console.

Gian Merlino

Jan 19, 2017, 11:26:49 AM
to druid...@googlegroups.com
Hey Gowtham,

It looks like your Imply distribution is stored on an NFS volume; that is an NFS "silly rename" file. If so, can you try installing the distribution on a local disk instead? You can still use NFS for deep storage if you want, but the distribution itself should be installed on local disk.
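A quick way to confirm this (just a sketch, using the task working directory from the log above):

df -T var/druid/task                  # a filesystem type of nfs/nfs4 means the working directory is on NFS
find var/druid/task -name '.nfs*'     # lists leftover silly-rename files created when still-open files are deleted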

Gian


Gowtham Sai

Jan 20, 2017, 5:57:45 AM
to Druid User
Hey Gian, thanks for the reply. But when I move the distribution to local disk, I get the following error.

2017-01-20 10:57:08,684 [main] INFO  k.c.ZookeeperConsumerConnector - [tranquility-kafka_DataServer-1484909828254-7385533c], Creating topic event watcher for topics (vnk-clst)

2017-01-20 10:57:08,694 [main] INFO  k.c.ZookeeperConsumerConnector - [tranquility-kafka_DataServer-1484909828254-7385533c], Topics to consume = List(vnk-clst)

2017-01-20 10:57:08,708 [tranquility-kafka_DataServer-1484909828254-7385533c-leader-finder-thread] INFO  kafka.utils.VerifiableProperties - Verifying properties

2017-01-20 10:57:08,708 [tranquility-kafka_DataServer-1484909828254-7385533c-leader-finder-thread] INFO  kafka.utils.VerifiableProperties - Property client.id is overridden to tranquility-kafka

2017-01-20 10:57:08,708 [tranquility-kafka_DataServer-1484909828254-7385533c-leader-finder-thread] INFO  kafka.utils.VerifiableProperties - Property metadata.broker.list is overridden to 10.2.1.157:9092,10.2.1.157:9093,10.2.1.157:9094

2017-01-20 10:57:08,709 [tranquility-kafka_DataServer-1484909828254-7385533c-leader-finder-thread] INFO  kafka.utils.VerifiableProperties - Property request.timeout.ms is overridden to 30000

2017-01-20 10:57:08,736 [tranquility-kafka_DataServer-1484909828254-7385533c-leader-finder-thread] INFO  kafka.client.ClientUtils$ - Fetching metadata from broker id:0,host:10.2.1.157,port:9092 with correlation id 0 for 1 topic(s) Set(vnk-clst)

2017-01-20 10:57:08,738 [tranquility-kafka_DataServer-1484909828254-7385533c-leader-finder-thread] INFO  kafka.producer.SyncProducer - Connected to 10.2.1.157:9092 for producing

2017-01-20 10:57:08,759 [tranquility-kafka_DataServer-1484909828254-7385533c-leader-finder-thread] INFO  kafka.producer.SyncProducer - Disconnecting from 10.2.1.157:9092

2017-01-20 10:57:08,803 [ConsumerFetcherThread-tranquility-kafka_DataServer-1484909828254-7385533c-0-0] INFO  kafka.consumer.ConsumerFetcherThread - [ConsumerFetcherThread-tranquility-kafka_DataServer-1484909828254-7385533c-0-0], Starting 

2017-01-20 10:57:08,806 [tranquility-kafka_DataServer-1484909828254-7385533c-leader-finder-thread] INFO  k.consumer.ConsumerFetcherManager - [ConsumerFetcherManager-1484909828357] Added fetcher for partitions ArrayBuffer([[vnk-clst,0], initOffset 27473 to broker id:0,host:10.2.1.157,port:9092] )

2017-01-20 10:57:08,857 [KafkaConsumer-1] INFO  c.m.t.kafka.writer.WriterController - Creating EventWriter for topic [vnk-clst] using dataSource [vnk-clst]

2017-01-20 10:57:08,912 [KafkaConsumer-1] INFO  o.a.c.f.imps.CuratorFrameworkImpl - Starting

2017-01-20 10:57:08,914 [KafkaConsumer-1] INFO  org.apache.zookeeper.ZooKeeper - Initiating client connection, connectString=10.2.1.239 sessionTimeout=60000 watcher=org.apache.curator.ConnectionState@5fe7e507

2017-01-20 10:57:08,915 [KafkaConsumer-1-SendThread(10.2.1.239:2181)] INFO  org.apache.zookeeper.ClientCnxn - Opening socket connection to server 10.2.1.239/10.2.1.239:2181. Will not attempt to authenticate using SASL (unknown error)

2017-01-20 10:57:08,915 [KafkaConsumer-1-SendThread(10.2.1.239:2181)] INFO  org.apache.zookeeper.ClientCnxn - Socket connection established to 10.2.1.239/10.2.1.239:2181, initiating session

2017-01-20 10:57:08,918 [KafkaConsumer-1-SendThread(10.2.1.239:2181)] INFO  org.apache.zookeeper.ClientCnxn - Session establishment complete on server 10.2.1.239/10.2.1.239:2181, sessionid = 0x159ba75f9a30200, negotiated timeout = 40000

2017-01-20 10:57:08,923 [KafkaConsumer-1-EventThread] INFO  o.a.c.f.state.ConnectionStateManager - State change: CONNECTED

2017-01-20 10:57:09,079 [KafkaConsumer-1] INFO  c.m.t.finagle.FinagleRegistry - Adding resolver for scheme[disco].

2017-01-20 10:57:10,686 [KafkaConsumer-1] INFO  o.h.validator.internal.util.Version - HV000001: Hibernate Validator 5.1.3.Final

2017-01-20 10:57:11,009 [KafkaConsumer-1] INFO  io.druid.guice.JsonConfigurator - Loaded class[class io.druid.guice.ExtensionsConfig] from props[druid.extensions.] as [ExtensionsConfig{searchCurrentClassloader=true, directory='extensions', hadoopDependenciesDir='hadoop-dependencies', hadoopContainerDruidClasspath='null', loadList=null}]

2017-01-20 10:57:11,340 [KafkaConsumer-1] ERROR c.m.tranquility.kafka.KafkaConsumer - Exception: 

java.lang.NullPointerException: at index 4

at com.google.common.collect.ObjectArrays.checkElementNotNull(ObjectArrays.java:240) ~[com.google.guava.guava-16.0.1.jar:na]

at com.google.common.collect.ObjectArrays.checkElementsNotNull(ObjectArrays.java:231) ~[com.google.guava.guava-16.0.1.jar:na]

at com.google.common.collect.ObjectArrays.checkElementsNotNull(ObjectArrays.java:226) ~[com.google.guava.guava-16.0.1.jar:na]

at com.google.common.collect.ImmutableList.construct(ImmutableList.java:303) ~[com.google.guava.guava-16.0.1.jar:na]

at com.google.common.collect.ImmutableList.copyOf(ImmutableList.java:258) ~[com.google.guava.guava-16.0.1.jar:na]

at io.druid.data.input.impl.DimensionsSpec.withDimensionExclusions(DimensionsSpec.java:170) ~[io.druid.druid-api-0.9.1.jar:0.9.1]

at io.druid.segment.indexing.DataSchema.getParser(DataSchema.java:138) ~[io.druid.druid-server-0.9.1.jar:0.9.1]

at com.metamx.tranquility.druid.DruidBeams$.fromConfigInternal(DruidBeams.scala:301) ~[io.druid.tranquility-core-0.8.2.jar:0.8.2]

at com.metamx.tranquility.druid.DruidBeams$.fromConfig(DruidBeams.scala:204) ~[io.druid.tranquility-core-0.8.2.jar:0.8.2]

at com.metamx.tranquility.kafka.KafkaBeamUtils$.createTranquilizer(KafkaBeamUtils.scala:40) ~[io.druid.tranquility-kafka-0.8.2.jar:0.8.2]

at com.metamx.tranquility.kafka.KafkaBeamUtils.createTranquilizer(KafkaBeamUtils.scala) ~[io.druid.tranquility-kafka-0.8.2.jar:0.8.2]

at com.metamx.tranquility.kafka.writer.TranquilityEventWriter.<init>(TranquilityEventWriter.java:64) ~[io.druid.tranquility-kafka-0.8.2.jar:0.8.2]

at com.metamx.tranquility.kafka.writer.WriterController.createWriter(WriterController.java:171) ~[io.druid.tranquility-kafka-0.8.2.jar:0.8.2]

at com.metamx.tranquility.kafka.writer.WriterController.getWriter(WriterController.java:98) ~[io.druid.tranquility-kafka-0.8.2.jar:0.8.2]

at com.metamx.tranquility.kafka.KafkaConsumer$2.run(KafkaConsumer.java:231) ~[io.druid.tranquility-kafka-0.8.2.jar:0.8.2]

at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) [na:1.8.0_111]

at java.util.concurrent.FutureTask.run(FutureTask.java:266) [na:1.8.0_111]

at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) [na:1.8.0_111]

at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) [na:1.8.0_111]

at java.lang.Thread.run(Thread.java:745) [na:1.8.0_111]

2017-01-20 10:57:11,341 [KafkaConsumer-1] INFO  c.m.tranquility.kafka.KafkaConsumer - Shutting down - attempting to flush buffers and commit final offsets

2017-01-20 10:57:11,342 [Curator-Framework-0] INFO  o.a.c.f.imps.CuratorFrameworkImpl - backgroundOperationsLoop exiting

2017-01-20 10:57:11,345 [KafkaConsumer-1] INFO  org.apache.zookeeper.ZooKeeper - Session: 0x159ba75f9a30200 closed

2017-01-20 10:57:11,345 [KafkaConsumer-1-EventThread] INFO  org.apache.zookeeper.ClientCnxn - EventThread shut down for session: 0x159ba75f9a30200

2017-01-20 10:57:11,350 [KafkaConsumer-1] INFO  k.c.ZookeeperConsumerConnector - [tranquility-kafka_DataServer-1484909828254-7385533c], ZKConsumerConnector shutting down

2017-01-20 10:57:11,358 [KafkaConsumer-1] INFO  k.c.ZookeeperTopicEventWatcher - Shutting down topic event watcher.

2017-01-20 10:57:11,358 [KafkaConsumer-1] INFO  k.consumer.ConsumerFetcherManager - [ConsumerFetcherManager-1484909828357] Stopping leader finder thread

2017-01-20 10:57:11,359 [KafkaConsumer-1] INFO  k.c.ConsumerFetcherManager$LeaderFinderThread - [tranquility-kafka_DataServer-1484909828254-7385533c-leader-finder-thread], Shutting down

2017-01-20 10:57:11,359 [tranquility-kafka_DataServer-1484909828254-7385533c-leader-finder-thread] INFO  k.c.ConsumerFetcherManager$LeaderFinderThread - [tranquility-kafka_DataServer-1484909828254-7385533c-leader-finder-thread], Stopped 



Gowtham Sai

Jan 21, 2017, 1:34:46 AM
to Druid User
Thanks, Gian. It is working great now.

Is there any way to keep the distribution on NFS and symlink it to /var/lib/? I don't want to deploy it to every server manually. I thought NFS would be the easy way to push changes to all the nodes, but I ended up here :(
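Or should I keep a master copy on the NFS share and copy it onto local disk on each node instead? Something like this (hypothetical hostnames and paths, just to illustrate what I mean):

for host in node1 node2 node3; do                  # hypothetical data server hostnames
  rsync -a /mnt/nfs/imply/ "$host:/opt/imply/"     # copy the distribution onto each node's local disk
done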

