Caused by: java.lang.IllegalArgumentException at java.nio.Buffer.position(Buffer.java:236) at com.metamx.common.io.smoosh.SmooshedFileMapper.mapFile(SmooshedFileMapper.java:129) at io.druid.segment.IndexIO$V9IndexLoader.load(IndexIO.java:748) at io.druid.segment.IndexIO.loadIndex(IndexIO.java:164) at io.druid.segment.loading.MMappedQueryableIndexFactory.factorize(MMappedQueryableIndexFactory.java:39) at io.druid.segment.loading.OmniSegmentLoader.getSegment(OmniSegmentLoader.java:96) at io.druid.server.coordination.ServerManager.loadSegment(ServerManager.java:145) at io.druid.server.coordination.ZkCoordinator.loadSegment(ZkCoordinator.java:136)
Hi Fangjin, I'm still getting the same error on druid-0.9.2. All system time zones are set to UTC, and ZooKeeper, MySQL, and the other components are installed in a distributed environment. I'm now getting the same error for every datasource.
2017-01-02T21:21:23,776 ERROR [ZkCoordinator-0] io.druid.server.coordination.ZkCoordinator - Failed to load segment for dataSource: {class=io.druid.server.coordination.ZkCoordinator, exceptionType=class io.druid.segment.loading.SegmentLoadingException, exceptionMessage=Exception loading segment[s3_cluster_2016-11-29T00:00:00.000Z_2016-11-30T00:00:00.000Z_2016-12-06T11:05:56.711Z_10], segment=DataSegment{size=378081901, shardSpec=HashBasedNumberedShardSpec{partitionNum=10, partitions=16, partitionDimensions=[]}, metrics=[eventCount], dimensions=[requestId, placementId, ip, country, deviceId, groupId, bidPrice, network, eventType, bundleId, adType, segmentId, dnt, lat, long, userAgent, latency], version='2016-12-06T11:05:56.711Z', loadSpec={type=s3_zip, bucket=druidstorage, key=prod/v1/s3_cluster/2016-11-29T00:00:00.000Z_2016-11-30T00:00:00.000Z/2016-12-06T11:05:56.711Z/10/index.zip}, interval=2016-11-29T00:00:00.000Z/2016-11-30T00:00:00.000Z, dataSource='s3_cluster', binaryVersion='9'}}
ERROR [ZkCoordinator-0] io.druid.server.coordination.ZkCoordinator - Failed to load segment for dataSource: {class=io.druid.server.coordination.ZkCoordinator, exceptionType=class io.druid.segment.loading.SegmentLoadingException, exceptionMessage=Exception loading segment[s3_test-again-1_2016-11-22T00:00:00.000Z_2016-11-23T00:00:00.000Z_2016-12-23T07:45:31.246Z], segment=DataSegment{size=14744, shardSpec=NoneShardSpec, metrics=[eventCount], dimensions=[requestId, placementId, ip, country, deviceId, groupId, bidPrice, network, eventType, bundleId, adType, segmentId, dnt, lat, long, userAgent, latency], version='2016-12-23T07:45:31.246Z', loadSpec={type=s3_zip, bucket=druidstorage, key=prod/v1/s3_test-again-1/2016-11-22T00:00:00.000Z_2016-11-23T00:00:00.000Z/2016-12-23T07:45:31.246Z/0/index.zip}, interval=2016-11-22T00:00:00.000Z/2016-11-23T00:00:00.000Z, dataSource='s3_test-again-1', binaryVersion='9'}}
io.druid.segment.loading.SegmentLoadingException: Exception loading segment[s3_test-again-1_2016-11-22T00:00:00.000Z_2016-11-23T00:00:00.000Z_2016-12-23T07:45:31.246Z]
at io.druid.server.coordination.ZkCoordinator.loadSegment(ZkCoordinator.java:310) ~[druid-server-0.9.2.jar:0.9.2]
at io.druid.server.coordination.ZkCoordinator.addSegment(ZkCoordinator.java:351) [druid-server-0.9.2.jar:0.9.2]
at io.druid.server.coordination.SegmentChangeRequestLoad.go(SegmentChangeRequestLoad.java:44) [druid-server-0.9.2.jar:0.9.2]
at io.druid.server.coordination.ZkCoordinator$1.childEvent(ZkCoordinator.java:153) [druid-server-0.9.2.jar:0.9.2]
at org.apache.curator.framework.recipes.cache.PathChildrenCache$5.apply(PathChildrenCache.java:522) [curator-recipes-2.11.0.jar:?]
at org.apache.curator.framework.recipes.cache.PathChildrenCache$5.apply(PathChildrenCache.java:516) [curator-recipes-2.11.0.jar:?]
at org.apache.curator.framework.listen.ListenerContainer$1.run(ListenerContainer.java:93) [curator-framework-2.11.0.jar:?]
at com.google.common.util.concurrent.MoreExecutors$SameThreadExecutorService.execute(MoreExecutors.java:297) [guava-16.0.1.jar:?]
at org.apache.curator.framework.listen.ListenerContainer.forEach(ListenerContainer.java:84) [curator-framework-2.11.0.jar:?]
at org.apache.curator.framework.recipes.cache.PathChildrenCache.callListeners(PathChildrenCache.java:513) [curator-recipes-2.11.0.jar:?]
at org.apache.curator.framework.recipes.cache.EventOperation.invoke(EventOperation.java:35) [curator-recipes-2.11.0.jar:?]
at org.apache.curator.framework.recipes.cache.PathChildrenCache$9.run(PathChildrenCache.java:773) [curator-recipes-2.11.0.jar:?]
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) [?:1.8.0_111]
at java.util.concurrent.FutureTask.run(FutureTask.java:266) [?:1.8.0_111]
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) [?:1.8.0_111]
at java.util.concurrent.FutureTask.run(FutureTask.java:266) [?:1.8.0_111]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) [?:1.8.0_111]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) [?:1.8.0_111]
at java.lang.Thread.run(Thread.java:745) [?:1.8.0_111]
Caused by: io.druid.segment.loading.SegmentLoadingException: No such file or directory
at io.druid.storage.s3.S3DataSegmentPuller.getSegmentFiles(S3DataSegmentPuller.java:238) ~[?:?]
at io.druid.storage.s3.S3LoadSpec.loadSegment(S3LoadSpec.java:62) ~[?:?]
at io.druid.segment.loading.SegmentLoaderLocalCacheManager.getSegmentFiles(SegmentLoaderLocalCacheManager.java:143) ~[druid-server-0.9.2.jar:0.9.2]
at io.druid.segment.loading.SegmentLoaderLocalCacheManager.getSegment(SegmentLoaderLocalCacheManager.java:95) ~[druid-server-0.9.2.jar:0.9.2]
at io.druid.server.coordination.ServerManager.loadSegment(ServerManager.java:152) ~[druid-server-0.9.2.jar:0.9.2]
at io.druid.server.coordination.ZkCoordinator.loadSegment(ZkCoordinator.java:306) ~[druid-server-0.9.2.jar:0.9.2]
... 18 more
Caused by: java.io.IOException: No such file or directory
at java.io.UnixFileSystem.createFileExclusively(Native Method) ~[?:1.8.0_111]
at java.io.File.createTempFile(File.java:2024) ~[?:1.8.0_111]
at java.io.File.createTempFile(File.java:2070) ~[?:1.8.0_111]
at com.metamx.common.CompressionUtils.unzip(CompressionUtils.java:149) ~[java-util-0.27.10.jar:?]
at io.druid.storage.s3.S3DataSegmentPuller.getSegmentFiles(S3DataSegmentPuller.java:207) ~[?:?]
at io.druid.storage.s3.S3LoadSpec.loadSegment(S3LoadSpec.java:62) ~[?:?]
at io.druid.segment.loading.SegmentLoaderLocalCacheManager.getSegmentFiles(SegmentLoaderLocalCacheManager.java:143) ~[druid-server-0.9.2.jar:0.9.2]
at io.druid.segment.loading.SegmentLoaderLocalCacheManager.getSegment(SegmentLoaderLocalCacheManager.java:95) ~[druid-server-0.9.2.jar:0.9.2]
at io.druid.server.coordination.ServerManager.loadSegment(ServerManager.java:152) ~[druid-server-0.9.2.jar:0.9.2]
at io.druid.server.coordination.ZkCoordinator.loadSegment(ZkCoordinator.java:306) ~[druid-server-0.9.2.jar:0.9.2]
... 18 more
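The root cause in the trace is `java.io.File.createTempFile` failing inside `CompressionUtils.unzip` with "No such file or directory". A minimal sketch of that failure mode (the path below is hypothetical, chosen only to not exist) shows that `createTempFile` throws exactly this `IOException` when the directory it is asked to write into is missing:

```java
import java.io.File;
import java.io.IOException;

public class TempFileCheck {
    public static void main(String[] args) throws Exception {
        // A directory that is guaranteed not to exist (hypothetical path).
        File missingDir = new File("/tmp/does-not-exist-" + System.nanoTime());
        try {
            // Same call pattern as CompressionUtils.unzip uses for its scratch file.
            File.createTempFile("compressed", ".zip", missingDir);
            System.out.println("created");
        } catch (IOException e) {
            // On Linux the native error surfaces as "No such file or directory",
            // matching the bottom of the stack trace above.
            System.out.println("IOException: " + e.getMessage());
        }
    }
}
```

This suggests the historical node is trying to unzip the S3 segment into (or stage it via) a directory that does not exist on disk, rather than an S3 or credentials problem.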
runtime.properties
druid.service=druid/historical
druid.port=8083
# HTTP server threads
druid.server.http.numThreads=25
# Processing threads and buffers
druid.processing.buffer.sizeBytes=536870912
druid.processing.numThreads=7
# Segment storage
#druid.segmentCache.locations=[{"path":"var/druid/segment-cache","maxSize"\:130000000000}]
druid.server.maxSize=130000000000
# Indexing Service Discovery Module (All nodes)
druid.selectors.indexing.serviceName=druid:overlord
druid.segmentCache.deleteOnRemove=false
druid.segmentCache.locations=[{"path":"var/druid/segment-cache","maxSize"\:130000000000}]
druid.zk.service.host=ec2-xx.xxx.xxx.xxx.compute-1.amazonaws.com
druid.zk.paths.base=/druid
# Metrics Module (All nodes)
# Store task logs in deep storage
druid.indexer.logs.type=s3
druid.indexer.logs.s3Bucket=
druid.s3.accessKey=
druid.s3.secretKey=
# Emitter Module (All nodes)
druid.emitter=logging
# For PostgreSQL (make sure to additionally include the Postgres extension):
druid.metadata.storage.type=mysql
#druid.metadata.storage.connector.connectURI=jdbc:mysql://ec2-xx.xx.xx.xx.compute-1.amazonaws.com:3306/druid?characterEncoding=UTF-8
druid.metadata.storage.connector.connectURI=jdbc:mysql://ec2-xx.xx.xx.xx.compute-1.amazonaws.com:3306/druid?characterEncoding=UTF-8
druid.metadata.storage.connector.user=druid
druid.metadata.storage.connector.password=druid
#druid.segmentCache.locations=[{"path": "/mnt/persistent/zk_druid", "maxSize": 300000000000}]
druid.monitoring.monitors=["io.druid.server.metrics.HistoricalMetricsMonitor", "com.metamx.metrics.JvmMonitor"]
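Note that `druid.segmentCache.locations` above uses the relative path `var/druid/segment-cache`, which is resolved against the historical process's working directory. Given the `createTempFile` failure, one possible cause (an assumption, not confirmed from the trace alone) is that this directory, or the JVM's `java.io.tmpdir`, does not exist on the historical node. A sketch of the check and fix, run from the directory the historical node is started in:

```shell
# Create the segment-cache path from runtime.properties, plus a local
# temp dir for the JVM (both relative to the Druid working directory).
mkdir -p var/druid/segment-cache var/tmp

# Verify both now exist before restarting the historical node.
ls -ld var/druid/segment-cache var/tmp
```

If the temp directory is the culprit, the historical node's `jvm.config` would also need `-Djava.io.tmpdir=var/tmp` (or any existing, writable directory) so that `File.createTempFile` has somewhere to stage the downloaded `index.zip`.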
I'm trying to figure out the issue but still have no clue. Any help is appreciated.