Historical nodes throwing an exception though are able to load segments


SAURABH VERMA

Feb 14, 2015, 7:05:29 AM2/14/15
to druid-de...@googlegroups.com
Hello,


   The historical node is throwing the exception below (Druid version 0.6.171):

"Failed to load segment for dataSource: {class=io.druid.server.coordination.ZkCoordinator, exceptionType=class io.druid.segment.loading.SegmentLoadingException, exceptionMessage=Exception loading segment[viddata_2015-02-14T03:00:00.000-07:00_2015-02-14T04:00:00.000-07:00_2015-02-14T03:36:31.779-07:00], segment=DataSegment{size=129167987, shardSpec=LinearShardSpec{partitionNum=0}, metrics=[xCount], dimensions=[colo, pool, status, typename], version='2015-02-14T03:36:31.779-07:00', loadSpec={type=hdfs, path=hdfs://10.86.184.19:54310/druid/segments/viddata/20150214T030000.000-0700_20150214T040000.000-0700/2015-02-14T03_36_31.779-07_00/0/index.zip}, interval=2015-02-14T03:00:00.000-07:00/2015-02-14T04:00:00.000-07:00, dataSource='viddata', binaryVersion='9'}}
io.druid.segment.loading.SegmentLoadingException: Exception loading segment[viddata_2015-02-14T03:00:00.000-07:00_2015-02-14T04:00:00.000-07:00_2015-02-14T03:36:31.779-07:00]
        at io.druid.server.coordination.ZkCoordinator.loadSegment(ZkCoordinator.java:140)
        at io.druid.server.coordination.ZkCoordinator.addSegment(ZkCoordinator.java:165)
        at io.druid.server.coordination.SegmentChangeRequestLoad.go(SegmentChangeRequestLoad.java:44)
        at io.druid.server.coordination.BaseZkCoordinator$1.childEvent(BaseZkCoordinator.java:127)
        at org.apache.curator.framework.recipes.cache.PathChildrenCache$5.apply(PathChildrenCache.java:516)
        at org.apache.curator.framework.recipes.cache.PathChildrenCache$5.apply(PathChildrenCache.java:510)
        at org.apache.curator.framework.listen.ListenerContainer$1.run(ListenerContainer.java:92)
        at com.google.common.util.concurrent.MoreExecutors$SameThreadExecutorService.execute(MoreExecutors.java:297)
        at org.apache.curator.framework.listen.ListenerContainer.forEach(ListenerContainer.java:83)
        at org.apache.curator.framework.recipes.cache.PathChildrenCache.callListeners(PathChildrenCache.java:507)
        at org.apache.curator.framework.recipes.cache.EventOperation.invoke(EventOperation.java:35)
        at org.apache.curator.framework.recipes.cache.PathChildrenCache$9.run(PathChildrenCache.java:759)
        at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
        at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:334)
        at java.util.concurrent.FutureTask.run(FutureTask.java:166)
        at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
        at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:334)
        at java.util.concurrent.FutureTask.run(FutureTask.java:166)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
        at java.lang.Thread.run(Thread.java:780)
Caused by: io.druid.segment.loading.SegmentLoadingException: /import/home/mon/druid/persistent/zk_druid/viddata/2015-02-14T03:00:00.000-07:00_2015-02-14T04:00:00.000-07:00/2015-02-14T03:36:31.779-07:00/0/index.drd (No such file or directory)
        at io.druid.segment.loading.MMappedQueryableIndexFactory.factorize(MMappedQueryableIndexFactory.java:42)
        at io.druid.segment.loading.OmniSegmentLoader.getSegment(OmniSegmentLoader.java:96)
        at io.druid.server.coordination.ServerManager.loadSegment(ServerManager.java:145)
        at io.druid.server.coordination.ZkCoordinator.loadSegment(ZkCoordinator.java:136)
        ... 20 more
Caused by: java.io.FileNotFoundException: /import/home/mon/druid/persistent/zk_druid/viddata/2015-02-14T03:00:00.000-07:00_2015-02-14T04:00:00.000-07:00/2015-02-14T03:36:31.779-07:00/0/index.drd (No such file or directory)
        at java.io.FileInputStream.<init>(FileInputStream.java:138)
        at io.druid.segment.SegmentUtils.getVersionFromDir(SegmentUtils.java:24)
        at io.druid.segment.IndexIO.loadIndex(IndexIO.java:159)
        at io.druid.segment.loading.MMappedQueryableIndexFactory.factorize(MMappedQueryableIndexFactory.java:39)
"



The historical node's runtime properties are:

"druid.host=phxscal1100
druid.service=historical
druid.port=8081

druid.zk.service.host=phxscal1041:2181

druid.extensions.localRepository=/import/home/mon/druid171PullReps
druid.extensions.remoteRepositories=[]
druid.extensions.coordinates=["io.druid.extensions:druid-hdfs-storage:0.6.171"]
druid.storage.type=hdfs
druid.storage.storageDirectory=hdfs://10.86.184.19:54310/druid/segments
druid.server.tier=_default_tier


druid.server.maxSize=10000000000

# Change these to make Druid faster
druid.processing.buffer.sizeBytes=1000000000
druid.processing.numThreads=4

druid.segmentCache.locations=[{"path": "/import/home/mon/druid/persistent/zk_druid", "maxSize"\: 40000000000}]
"

The middleManager's runtime properties are:

"druid.host=phxscal1111
druid.port=8080
druid.service=middleManager


druid.extensions.remoteRepositories=[]
#druid.extensions.localRepository=/import/home/mon/druidRepository
druid.extensions.localRepository=/import/home/mon/druid171PullReps
druid.extensions.coordinates=["io.druid.extensions:druid-kafka-eight:0.6.171","io.druid.extensions:druid-hdfs-storage:0.6.171"]

druid.zk.service.host=phxscal1041:2181

druid.selectors.indexing.serviceName=overlord

# Dedicate more resources to peons
druid.indexer.runner.javaCommand=/home/mon/jdk1.6/bin/java
druid.indexer.runner.javaOpts=-server -Xmx4g -Xms4g -XX:+UseG1GC -XX:MaxGCPauseMillis=100 -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -XX:MaxDirectMemorySize=7g -Duser.timezone=MST -Dfile.encoding=UTF-8
druid.indexer.task.defaultHadoopCoordinates=["org.apache.hadoop:hadoop-client:2.4.0"]
druid.indexer.task.baseTaskDir=/home/mon/druid/persistent/task
druid.indexer.task.chathandler.type=announce
druid.indexer.runner.startPort=9080
#druid.indexer.fork.property.druid.storage.type=local
#druid.indexer.fork.property.druid.storage.storageDirectory=/import/home/mon/druid/localstorage
druid.indexer.fork.property.druid.storage.type=hdfs
druid.indexer.fork.property.druid.storage.storageDirectory=hdfs://10.86.184.19:54310/druid/segments
druid.indexer.fork.property.druid.computation.buffer.size=1000000000
druid.indexer.fork.property.druid.processing.numThreads=6
druid.indexer.fork.property.druid.request.logging.type=file
druid.indexer.fork.property.druid.request.logging.dir=request_logs/
druid.indexer.fork.property.druid.segmentCache.locations=[{"path": "/import/home/mon/druid/persistent/zk_druid", "maxSize": 0}]
druid.indexer.fork.property.druid.server.http.numThreads=50
druid.indexer.fork.property.druid.computation.buffer.size=268435456
druid.worker.capacity=4


#mysql
druid.db.connector.connectURI=jdbc:mysql://mymisc1.db.stratus.XYZ.com:3306/test
druid.db.connector.user=caljobs_app
druid.db.connector.password=caljobs_app
"

Please help resolve the issue

Thanks,
Saurabh

Gian Merlino

Feb 14, 2015, 10:39:50 AM2/14/15
to druid-de...@googlegroups.com
If you download and unpack the archive hdfs://10.86.184.19:54310/druid/segments/viddata/20150214T030000.000-0700_20150214T040000.000-0700/2015-02-14T03_36_31.779-07_00/0/index.zip, what files are in it and how big are they?

Is there anything in the /import/home/mon/druid/persistent/zk_druid/viddata/2015-02-14T03:00:00.000-07:00_2015-02-14T04:00:00.000-07:00/2015-02-14T03:36:31.779-07:00/ directory on your historical? Does that directory exist at all (even empty)?
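Gian's two checks can be scripted. Below is a minimal sketch, assuming the archive has already been copied out of HDFS (version.bin, meta.smoosh, and one or more NNNNN.smoosh files are what a v9 Druid segment archive normally contains; the helper name is hypothetical):

```python
import zipfile

# Entries a healthy v9 segment archive is expected to contain, besides the
# NNNNN.smoosh data files themselves.
EXPECTED = {"version.bin", "meta.smoosh"}

def inspect_segment_zip(path):
    """List an index.zip's entries with sizes, report which expected entries
    are missing, and whether any smoosh data file is present at all."""
    with zipfile.ZipFile(path) as zf:
        entries = {info.filename: info.file_size for info in zf.infolist()}
    missing = EXPECTED - entries.keys()
    has_smoosh_data = any(name.endswith(".smoosh") and name != "meta.smoosh"
                          for name in entries)
    return entries, missing, has_smoosh_data
```

After pulling the archive out of deep storage, `inspect_segment_zip("index.zip")` makes it obvious whether the upload itself was truncated or incomplete, or whether the problem is purely on the local-cache side.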

SAURABH VERMA

Feb 14, 2015, 3:09:35 PM2/14/15
to druid-de...@googlegroups.com
Gian,


1. The index.zip contains version.bin, meta.smoosh, and 00000.smoosh.
2. The path (/import/home/mon....) should be created by the middleManager (as of now that folder does not exist); I believe it is a path accessible from both the middleManager and the historical.

Gian Merlino

Feb 14, 2015, 3:19:17 PM2/14/15
to druid-de...@googlegroups.com
The "druid.segmentCache.locations" should be a directory that is local to each historical node. It's used to locally cache segments and each historical node assumes it can add and remove segments from the cache whenever it wants. Sharing the mount between nodes might cause problems, so can you try changing it to a local path?
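For reference, a minimal sketch of the change Gian suggests, in the same properties format as the configs above (the path and maxSize here are illustrative, not from the thread):

```
druid.segmentCache.locations=[{"path": "/var/druid/segment-cache", "maxSize"\: 40000000000}]
```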

SAURABH VERMA

Feb 16, 2015, 10:37:26 AM2/16/15
to druid-de...@googlegroups.com
Gian, the error is still appearing in the historical node's console.

I have now changed druid.segmentCache.locations to a local path; below is the error log:

"2015-02-16 08:31:34,317 INFO [ZkCoordinator-0] io.druid.server.coordination.ZkCoordinator - Completely removing [viddata_2015-02-16T07:00:00.000-07:00_2015-02-16T08:00:00.000-07:00_2015-02-16T07:03:00.531-07:00] in [30,000] millis
2015-02-16 08:31:34,318 INFO [ZkCoordinator-0] io.druid.server.coordination.ZkCoordinator - Completed request [LOAD: viddata_2015-02-16T07:00:00.000-07:00_2015-02-16T08:00:00.000-07:00_2015-02-16T07:03:00.531-07:00]
2015-02-16 08:31:34,318 ERROR [ZkCoordinator-0] io.druid.server.coordination.ZkCoordinator - Failed to load segment for dataSource: {class=io.druid.server.coordination.ZkCoordinator, exceptionType=class io.druid.segment.loading.SegmentLoadingException, exceptionMessage=Exception loading segment[viddata_2015-02-16T07:00:00.000-07:00_2015-02-16T08:00:00.000-07:00_2015-02-16T07:03:00.531-07:00], segment=DataSegment{size=510132161, shardSpec=LinearShardSpec{partitionNum=0}, metrics=[xCount], dimensions=[colo, duration, machine, pool, status, type, typename], version='2015-02-16T07:03:00.531-07:00', loadSpec={type=hdfs, path=hdfs://10.86.184.19:54310/druid/segments/viddata/20150216T070000.000-0700_20150216T080000.000-0700/2015-02-16T07_03_00.531-07_00/0/index.zip}, interval=2015-02-16T07:00:00.000-07:00/2015-02-16T08:00:00.000-07:00, dataSource='viddata', binaryVersion='9'}}
io.druid.segment.loading.SegmentLoadingException: Exception loading segment[viddata_2015-02-16T07:00:00.000-07:00_2015-02-16T08:00:00.000-07:00_2015-02-16T07:03:00.531-07:00]
        at io.druid.server.coordination.ZkCoordinator.loadSegment(ZkCoordinator.java:140)
        at io.druid.server.coordination.ZkCoordinator.addSegment(ZkCoordinator.java:165)
        at io.druid.server.coordination.SegmentChangeRequestLoad.go(SegmentChangeRequestLoad.java:44)
        at io.druid.server.coordination.BaseZkCoordinator$1.childEvent(BaseZkCoordinator.java:127)
        at org.apache.curator.framework.recipes.cache.PathChildrenCache$5.apply(PathChildrenCache.java:516)
        at org.apache.curator.framework.recipes.cache.PathChildrenCache$5.apply(PathChildrenCache.java:510)
        at org.apache.curator.framework.listen.ListenerContainer$1.run(ListenerContainer.java:92)
        at com.google.common.util.concurrent.MoreExecutors$SameThreadExecutorService.execute(MoreExecutors.java:297)
        at org.apache.curator.framework.listen.ListenerContainer.forEach(ListenerContainer.java:83)
        at org.apache.curator.framework.recipes.cache.PathChildrenCache.callListeners(PathChildrenCache.java:507)
        at org.apache.curator.framework.recipes.cache.EventOperation.invoke(EventOperation.java:35)
        at org.apache.curator.framework.recipes.cache.PathChildrenCache$9.run(PathChildrenCache.java:759)
        at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
        at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:334)
        at java.util.concurrent.FutureTask.run(FutureTask.java:166)
        at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
        at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:334)
        at java.util.concurrent.FutureTask.run(FutureTask.java:166)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
        at java.lang.Thread.run(Thread.java:780)
Caused by: io.druid.segment.loading.SegmentLoadingException: /home/appmon/druid/persistent/zk_druid/viddata/2015-02-16T07:00:00.000-07:00_2015-02-16T08:00:00.000-07:00/2015-02-16T07:03:00.531-07:00/0/index.drd (No such file or directory)
        at io.druid.segment.loading.MMappedQueryableIndexFactory.factorize(MMappedQueryableIndexFactory.java:42)
        at io.druid.segment.loading.OmniSegmentLoader.getSegment(OmniSegmentLoader.java:96)
        at io.druid.server.coordination.ServerManager.loadSegment(ServerManager.java:145)
        at io.druid.server.coordination.ZkCoordinator.loadSegment(ZkCoordinator.java:136)
        ... 20 more
Caused by: java.io.FileNotFoundException: /home/appmon/druid/persistent/zk_druid/viddata/2015-02-16T07:00:00.000-07:00_2015-02-16T08:00:00.000-07:00/2015-02-16T07:03:00.531-07:00/0/index.drd (No such file or directory)
        at java.io.FileInputStream.open(Native Method)
        at java.io.FileInputStream.<init>(FileInputStream.java:138)
        at io.druid.segment.SegmentUtils.getVersionFromDir(SegmentUtils.java:24)
        at io.druid.segment.IndexIO.loadIndex(IndexIO.java:159)
        at io.druid.segment.loading.MMappedQueryableIndexFactory.factorize(MMappedQueryableIndexFactory.java:39)
        ... 23 more

Gian Merlino

Feb 16, 2015, 11:07:28 AM2/16/15
to druid-de...@googlegroups.com
That sounds like the historical node thinks it has the segment cached locally but it actually doesn't.

When you switched the historical node's cache path to something local, did you start it off with a completely empty cache directory (i.e. /home/appmon/druid/persistent/zk_druid) or was there something already in that directory at the time?

Also, does the historical node have a directory at the path "/home/appmon/druid/persistent/zk_druid/viddata/2015-02-16T07:00:00.000-07:00_2015-02-16T08:00:00.000-07:00/2015-02-16T07:03:00.531-07:00/0"? If so, are there any files in it, or is the directory empty?

SAURABH VERMA

Feb 16, 2015, 11:47:31 AM2/16/15
to druid-de...@googlegroups.com
Gian,

The path exists and it is empty (though there is an empty info_dir folder at this path).
Why does the historical think that it has the segment locally, given that HDFS is the deep storage?

The historical doesn't have a path "/home/appmon/druid/persistent/zk_druid/viddata/2015-02-16T07:00:00.000-07:00_2015-02-16T08:00:00.000-07:00/2015-02-16T07:03:00.531-07:00/0"

Fangjin Yang

Feb 16, 2015, 1:56:38 PM2/16/15
to druid-de...@googlegroups.com
Saurabh, historicals must download segments locally before you can run any queries over them. Deep storage is only used as a backup for segments; it is completely NOT involved in the query path. Once a historical downloads a segment, it creates a cache entry on the local filesystem recording that the segment has been downloaded. This way, when a historical is restarted, it can immediately know which segments it has downloaded and serve them right away.

It sounds like your historical node created a local cache entry but failed to download the segment. This situation should cause the historical to try and redownload the segment. However, there appear to be some problems with your setup that prevent the segment from being downloaded.

Are you sure the java process has permissions to write to your configured directory?

Also, can you confirm that looking at the directory "/home/appmon/druid/persistent/zk_druid/viddata/2015-02-16T07:00:00.000-07:00_2015-02-16T08:00:00.000-07:00/2015-02-16T07:03:00.531-07:00/0", that entire directory path is created but is empty?
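The two conditions FJ asks about (write permission, and cache bookkeeping entries without a downloaded segment) can be checked with a short script. This is only a sketch: the layout (an info_dir of bookkeeping entries next to per-dataSource segment directories) is inferred from the paths quoted in this thread, and the helper name is hypothetical.

```python
import os

def check_segment_cache(cache_dir):
    """Report basic problems with a historical's segment cache directory."""
    if not os.path.isdir(cache_dir):
        return ["cache dir does not exist: " + cache_dir]
    problems = []
    if not os.access(cache_dir, os.W_OK):
        problems.append("cache dir is not writable by this process")
    info_dir = os.path.join(cache_dir, "info_dir")
    entries = os.listdir(info_dir) if os.path.isdir(info_dir) else []
    # Any file under cache_dir outside info_dir counts as segment data here.
    segment_files = [
        os.path.join(root, f)
        for root, _, files in os.walk(cache_dir)
        for f in files
        if not root.startswith(info_dir)
    ]
    if entries and not segment_files:
        # The "cache entry exists but segment was never downloaded" state
        # FJ describes above.
        problems.append("info_dir has %d entries but no segment files exist "
                        "(cache entry without a download)" % len(entries))
    return problems
```

Running this as the same user the historical runs as also answers the permissions question directly.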

SAURABH VERMA

Feb 16, 2015, 2:30:37 PM2/16/15
to druid-de...@googlegroups.com
Yes FJ, I understand that deep storage is only a backup and that historical nodes serve segments after downloading them from deep storage.
As of now, after I set druid.segmentCache.deleteOnRemove=false, the problem disappeared and the segment handoff happens smoothly.

Fangjin Yang

Feb 16, 2015, 4:52:30 PM2/16/15
to druid-de...@googlegroups.com
This config should not explain the behavior you saw. That config controls whether a segment is immediately memory-unmapped and cleaned up once it is dropped. I suspect that your nodes eventually downloaded the segment after the initial error, and that the config change has nothing to do with why segments were not being loaded. More logs would help us understand, though.

SAURABH VERMA

Feb 16, 2015, 10:19:28 PM2/16/15
to druid-de...@googlegroups.com
Hello FJ,

   PFA the complete logs of one of the historical nodes.

   If you search for "viddata_2015-02-16T11:00:00.000-07:00_2015-02-16T12:00:00.000-07:00_2015-02-16T11:00:06.262-07:00", it is not clear why, after failing so many times, it eventually succeeded.

Thanks,
Saurabh
log.txt

Fangjin Yang

Feb 17, 2015, 1:17:37 PM2/17/15
to druid-de...@googlegroups.com
This exception is interesting...

Caused by: java.lang.IllegalArgumentException
	at java.nio.Buffer.position(Buffer.java:236)
	at com.metamx.common.io.smoosh.SmooshedFileMapper.mapFile(SmooshedFileMapper.java:129)
	at io.druid.segment.IndexIO$V9IndexLoader.load(IndexIO.java:748)
	at io.druid.segment.IndexIO.loadIndex(IndexIO.java:164)
	at io.druid.segment.loading.MMappedQueryableIndexFactory.factorize(MMappedQueryableIndexFactory.java:39)
	at io.druid.segment.loading.OmniSegmentLoader.getSegment(OmniSegmentLoader.java:96)
	at io.druid.server.coordination.ServerManager.loadSegment(ServerManager.java:145)
	at io.druid.server.coordination.ZkCoordinator.loadSegment(ZkCoordinator.java:136)

Fangjin Yang

Feb 17, 2015, 3:59:37 PM2/17/15
to druid-de...@googlegroups.com
Hi Saurabh, I wonder, do you have any dimensions for which no values may exist in a particular segment?

SAURABH VERMA

Feb 18, 2015, 11:28:14 AM2/18/15
to druid-de...@googlegroups.com
FJ, as far as I observed, there is no such dimension.

Fangjin Yang

Feb 18, 2015, 1:45:01 PM2/18/15
to druid-de...@googlegroups.com
Hi Saurabh, are you still seeing this problem after updating the historical to cache segments on local disk? We think this problem arose because your historicals were pointing at NFS at one point.

SAURABH VERMA

Feb 18, 2015, 1:55:36 PM2/18/15
to druid-de...@googlegroups.com
FJ,


   Just got to know the reason for this issue: it seems to come up in the MST timezone.

   The setup & observations are as below:

1. The data source emits events in MST.
2. So I change events to UTC before pushing to the Overlord.
3. The peons and the historicals are configured in MST.
4. The historicals always search for the wrong segment (for example, if the actual segment is vidmessage_2015-02-18T08:00:00.000Z_2015-02-18T09:00:00.000Z, the historical tries vidmessage_2015-02-18T15:00:00.000Z_2015-02-18T16:00:00.000Z) and therefore fail.
5. After putting the peons in the UTC timezone, the historicals seem to fetch segments properly (some ambiguity here as to why it worked, but it works as of now).
6. After fetching the segment, I think it is because of this timezone difference that the peons are not getting any ACK for handoff on the expected segments.

Please check whether the observations and the deduction are correct.

thanks,
Saurabh
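The 7-hour shift Saurabh observes matches exactly the MST offset: the same wall-clock hour labelled with the wrong zone moves the segment identifier by seven hours. A minimal sketch (the timestamps are from his example; MST is treated as a fixed -07:00 offset, as in the logs):

```python
from datetime import datetime, timedelta, timezone

MST = timezone(timedelta(hours=-7))  # fixed -07:00 offset, matching the logs
UTC = timezone.utc

# A naive wall-clock reading, e.g. an event stamped "2015-02-18 08:00:00".
naive = datetime(2015, 2, 18, 8, 0)

# Interpreted as UTC, it falls in segment ..._08:00Z_09:00Z.
as_utc = naive.replace(tzinfo=UTC)
# Interpreted as MST and then converted to UTC, it falls in ..._15:00Z_16:00Z.
as_mst = naive.replace(tzinfo=MST).astimezone(UTC)

print(as_utc.isoformat())  # 2015-02-18T08:00:00+00:00
print(as_mst.isoformat())  # 2015-02-18T15:00:00+00:00
```

So if any node in the path stamps intervals with its local zone while the others use UTC, segment lookups are shifted by exactly the zone offset, which is consistent with the fetches succeeding once the peons were moved to UTC.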

Fangjin Yang

Feb 18, 2015, 7:00:23 PM2/18/15
to druid-de...@googlegroups.com
Hi Saurabh, we don't really have any experience whatsoever with running in different timezones. If you read http://druid.io/docs/0.6.171/Recommendations.html, we run entirely in UTC, but we've heard community feedback that other timezones should work if all nodes are set to the same timezone.

laxman Singh Rathore

Jan 6, 2017, 12:27:34 AM1/6/17
to Druid Development


On Tuesday, 3 January 2017 02:56:47 UTC+5:30, laxman Singh Rathore wrote:
Hi Fangjin,

I'm still getting the same error on druid-0.9.2, with all system timezones set to UTC. ZooKeeper, MySQL, and the other components are installed in a distributed environment. Now I'm getting the same error for all datasources.



2017-01-02T21:21:23,776 ERROR [ZkCoordinator-0] io.druid.server.coordination.ZkCoordinator - Failed to load segment for dataSource: {class=io.druid.server.coordination.ZkCoordinator, exceptionType=class io.druid.segment.loading.SegmentLoadingException, exceptionMessage=Exception loading segment[s3_cluster_2016-11-29T00:00:00.000Z_2016-11-30T00:00:00.000Z_2016-12-06T11:05:56.711Z_10], segment=DataSegment{size=378081901, shardSpec=HashBasedNumberedShardSpec{partitionNum=10, partitions=16, partitionDimensions=[]}, metrics=[eventCount], dimensions=[requestId, placementId, ip, country, deviceId, groupId, bidPrice, network, eventType, bundleId, adType, segmentId, dnt, lat, long, userAgent, latency], version='2016-12-06T11:05:56.711Z', loadSpec={type=s3_zip, bucket=druidstorage, key=prod/v1/s3_cluster/2016-11-29T00:00:00.000Z_2016-11-30T00:00:00.000Z/2016-12-06T11:05:56.711Z/10/index.zip}, interval=2016-11-29T00:00:00.000Z/2016-11-30T00:00:00.000Z, dataSource='s3_cluster', binaryVersion='9'}}


 ERROR [ZkCoordinator-0] io.druid.server.coordination.ZkCoordinator - Failed to load segment for dataSource: {class=io.druid.server.coordination.ZkCoordinator, exceptionType=class io.druid.segment.loading.SegmentLoadingException, exceptionMessage=Exception loading segment[s3_test-again-1_2016-11-22T00:00:00.000Z_2016-11-23T00:00:00.000Z_2016-12-23T07:45:31.246Z], segment=DataSegment{size=14744, shardSpec=NoneShardSpec, metrics=[eventCount], dimensions=[requestId, placementId, ip, country, deviceId, groupId, bidPrice, network, eventType, bundleId, adType, segmentId, dnt, lat, long, userAgent, latency], version='2016-12-23T07:45:31.246Z', loadSpec={type=s3_zip, bucket=druidstorage, key=prod/v1/s3_test-again-1/2016-11-22T00:00:00.000Z_2016-11-23T00:00:00.000Z/2016-12-23T07:45:31.246Z/0/index.zip}, interval=2016-11-22T00:00:00.000Z/2016-11-23T00:00:00.000Z, dataSource='s3_test-again-1', binaryVersion='9'}}
io.druid.segment.loading.SegmentLoadingException: Exception loading segment[s3_test-again-1_2016-11-22T00:00:00.000Z_2016-11-23T00:00:00.000Z_2016-12-23T07:45:31.246Z]
        at io.druid.server.coordination.ZkCoordinator.loadSegment(ZkCoordinator.java:310) ~[druid-server-0.9.2.jar:0.9.2]
        at io.druid.server.coordination.ZkCoordinator.addSegment(ZkCoordinator.java:351) [druid-server-0.9.2.jar:0.9.2]
        at io.druid.server.coordination.SegmentChangeRequestLoad.go(SegmentChangeRequestLoad.java:44) [druid-server-0.9.2.jar:0.9.2]
        at io.druid.server.coordination.ZkCoordinator$1.childEvent(ZkCoordinator.java:153) [druid-server-0.9.2.jar:0.9.2]
        at org.apache.curator.framework.recipes.cache.PathChildrenCache$5.apply(PathChildrenCache.java:522) [curator-recipes-2.11.0.jar:?]
        at org.apache.curator.framework.recipes.cache.PathChildrenCache$5.apply(PathChildrenCache.java:516) [curator-recipes-2.11.0.jar:?]
        at org.apache.curator.framework.listen.ListenerContainer$1.run(ListenerContainer.java:93) [curator-framework-2.11.0.jar:?]
        at com.google.common.util.concurrent.MoreExecutors$SameThreadExecutorService.execute(MoreExecutors.java:297) [guava-16.0.1.jar:?]
        at org.apache.curator.framework.listen.ListenerContainer.forEach(ListenerContainer.java:84) [curator-framework-2.11.0.jar:?]
        at org.apache.curator.framework.recipes.cache.PathChildrenCache.callListeners(PathChildrenCache.java:513) [curator-recipes-2.11.0.jar:?]
        at org.apache.curator.framework.recipes.cache.EventOperation.invoke(EventOperation.java:35) [curator-recipes-2.11.0.jar:?]
        at org.apache.curator.framework.recipes.cache.PathChildrenCache$9.run(PathChildrenCache.java:773) [curator-recipes-2.11.0.jar:?]
        at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) [?:1.8.0_111]
        at java.util.concurrent.FutureTask.run(FutureTask.java:266) [?:1.8.0_111]
        at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) [?:1.8.0_111]
        at java.util.concurrent.FutureTask.run(FutureTask.java:266) [?:1.8.0_111]
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) [?:1.8.0_111]
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) [?:1.8.0_111]
        at java.lang.Thread.run(Thread.java:745) [?:1.8.0_111]
Caused by: io.druid.segment.loading.SegmentLoadingException: No such file or directory
        at io.druid.storage.s3.S3DataSegmentPuller.getSegmentFiles(S3DataSegmentPuller.java:238) ~[?:?]
        at io.druid.storage.s3.S3LoadSpec.loadSegment(S3LoadSpec.java:62) ~[?:?]
        at io.druid.segment.loading.SegmentLoaderLocalCacheManager.getSegmentFiles(SegmentLoaderLocalCacheManager.java:143) ~[druid-server-0.9.2.jar:0.9.2]
        at io.druid.segment.loading.SegmentLoaderLocalCacheManager.getSegment(SegmentLoaderLocalCacheManager.java:95) ~[druid-server-0.9.2.jar:0.9.2]
        at io.druid.server.coordination.ServerManager.loadSegment(ServerManager.java:152) ~[druid-server-0.9.2.jar:0.9.2]
        at io.druid.server.coordination.ZkCoordinator.loadSegment(ZkCoordinator.java:306) ~[druid-server-0.9.2.jar:0.9.2]
        ... 18 more
Caused by: java.io.IOException: No such file or directory
        at java.io.UnixFileSystem.createFileExclusively(Native Method) ~[?:1.8.0_111]
        at java.io.File.createTempFile(File.java:2024) ~[?:1.8.0_111]
        at java.io.File.createTempFile(File.java:2070) ~[?:1.8.0_111]
        at com.metamx.common.CompressionUtils.unzip(CompressionUtils.java:149) ~[java-util-0.27.10.jar:?]
        at io.druid.storage.s3.S3DataSegmentPuller.getSegmentFiles(S3DataSegmentPuller.java:207) ~[?:?]
        at io.druid.storage.s3.S3LoadSpec.loadSegment(S3LoadSpec.java:62) ~[?:?]
        at io.druid.segment.loading.SegmentLoaderLocalCacheManager.getSegmentFiles(SegmentLoaderLocalCacheManager.java:143) ~[druid-server-0.9.2.jar:0.9.2]
        at io.druid.segment.loading.SegmentLoaderLocalCacheManager.getSegment(SegmentLoaderLocalCacheManager.java:95) ~[druid-server-0.9.2.jar:0.9.2]
        at io.druid.server.coordination.ServerManager.loadSegment(ServerManager.java:152) ~[druid-server-0.9.2.jar:0.9.2]
        at io.druid.server.coordination.ZkCoordinator.loadSegment(ZkCoordinator.java:306) ~[druid-server-0.9.2.jar:0.9.2]
        ... 18 more



runtime.properties


druid.service=druid/historical
druid.port=8083

# HTTP server threads
druid.server.http.numThreads=25

# Processing threads and buffers
druid.processing.buffer.sizeBytes=536870912
druid.processing.numThreads=7

# Segment storage
#druid.segmentCache.locations=[{"path":"var/druid/segment-cache","maxSize"\:130000000000}]
druid.server.maxSize=130000000000

# Indexing Service Discovery Module (All nodes)
druid.selectors.indexing.serviceName=druid:overlord
druid.segmentCache.deleteOnRemove=false
druid.segmentCache.locations=[{"path":"var/druid/segment-cache","maxSize"\:130000000000}]
druid.zk.service.host=ec2-xx.xxx.xxx.xxx.compute-1.amazonaws.com
druid.zk.paths.base=/druid

# Metrics Module (All nodes)

# Store task logs in deep storage
druid.indexer.logs.type=s3
druid.indexer.logs.s3Bucket=
druid.s3.accessKey=
druid.s3.secretKey=

# Emitter Module (All nodes)
druid.emitter=logging

# For PostgreSQL (make sure to additionally include the Postgres extension):
druid.metadata.storage.type=mysql
#druid.metadata.storage.connector.connectURI=jdbc:mysql://ec2-xx.xx.xx.xx.compute-1.amazonaws.com:3306/druid?characterEncoding=UTF-8
druid.metadata.storage.connector.connectURI=jdbc:mysql://ec2-xx.xx.xx.xx.compute-1.amazonaws.com:3306/druid?characterEncoding=UTF-8
druid.metadata.storage.connector.user=druid
druid.metadata.storage.connector.password=druid

#druid.segmentCache.locations=[{"path": "/mnt/persistent/zk_druid", "maxSize": 300000000000}]

druid.monitoring.monitors=["io.druid.server.metrics.HistoricalMetricsMonitor", "com.metamx.metrics.JvmMonitor"]


 
I'm trying to figure out the issue but still have no clue. Any help appreciated.

Nishant Bangarwa

Jan 6, 2017, 6:41:58 AM1/6/17
to Druid Development
Hi, 
It is failing while creating a temp file. Make sure you have set java.io.tmpdir to a writable directory.
Exception - 

Caused by: java.io.IOException: No such file or directory
        at java.io.UnixFileSystem.createFileExclusively(Native Method) ~[?:1.8.0_111]
        at java.io.File.createTempFile(File.java:2024) ~[?:1.8.0_111]
        at java.io.File.createTempFile(File.java:2070) ~[?:1.8.0_111]
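The condition Nishant points at (a missing or unwritable java.io.tmpdir) can be sanity-checked with a few lines; the helper below is hypothetical and only mimics what File.createTempFile attempts before the segment zip is unpacked:

```python
import tempfile

def tmpdir_writable(path):
    """Try to create (and immediately remove) a temp file under `path`.
    Returns True on success, False if the directory is missing or
    not writable, roughly the failure mode in the trace above."""
    try:
        with tempfile.NamedTemporaryFile(dir=path):
            return True
    except OSError:
        return False

print(tmpdir_writable(tempfile.gettempdir()))  # True on a healthy system
```

Run it against whatever directory the historical's JVM has as java.io.tmpdir (or set -Djava.io.tmpdir explicitly to a directory this check passes on).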
