Hi all, I'm getting an error during indexing task segment hand-off.
I checked the historical node log and found a 'Failed to load segment' error message.
The log messages are as follows:
2016-07-27 06:35:39,398 INFO [io.druid.server.coordination.ZkCoordinator] Loading segment stb_2016-07-27T05:00:00.000Z_2016-07-27T06:00:00.000Z_2016-07-27T05:03:01.960Z
2016-07-27 06:35:39,398 WARN [io.druid.server.coordination.BatchDataSegmentAnnouncer] No path to unannounce segment[stb_2016-07-27T05:00:00.000Z_2016-07-27T06:00:00.000Z_2016-07-27T05:03:01.960Z]
2016-07-27 06:35:39,398 INFO [io.druid.server.coordination.ZkCoordinator] Completely removing [stb_2016-07-27T05:00:00.000Z_2016-07-27T06:00:00.000Z_2016-07-27T05:03:01.960Z] in [30,000] millis
2016-07-27 06:35:39,399 INFO [io.druid.server.coordination.ZkCoordinator] Completed request [LOAD: stb_2016-07-27T05:00:00.000Z_2016-07-27T06:00:00.000Z_2016-07-27T05:03:01.960Z]
2016-07-27 06:35:39,399 ERROR [io.druid.server.coordination.ZkCoordinator] Failed to load segment for dataSource: {class=io.druid.server.coordination.ZkCoordinator, exceptionType=class io.druid.segment.loading.SegmentLoadingException, exceptionMessage=Exception loading segment[stb_2016-07-27T05:00:00.000Z_2016-07-27T06:00:00.000Z_2016-07-27T05:03:01.960Z], segment=DataSegment{size=4033, shardSpec=LinearShardSpec{partitionNum=0}, metrics=[count], dimensions=[action, cause, dvc_type, scn, topology], version='2016-07-27T05:03:01.960Z', loadSpec={type=hdfs, path=hdfs://ndap09.ndap.com:8020/user/root/druid/segments/stb/20160727T050000.000Z_20160727T060000.000Z/2016-07-27T05_03_01.960Z/0/index.zip}, interval=2016-07-27T05:00:00.000Z/2016-07-27T06:00:00.000Z, dataSource='stb', binaryVersion='9'}}
io.druid.segment.loading.SegmentLoadingException: Exception loading segment[stb_2016-07-27T05:00:00.000Z_2016-07-27T06:00:00.000Z_2016-07-27T05:03:01.960Z]
Caused by: com.metamx.common.ISE: Segment[stb_2016-07-27T05:00:00.000Z_2016-07-27T06:00:00.000Z_2016-07-27T05:03:01.960Z:4,033] too large for storage[var/druid/segment-cache:-555,499,020].
What is really strange is that the segment-cache size in the exception cause message is shown as a negative number:
too large for storage[var/druid/segment-cache:-555,499,020]
Because the historical node can't load the segment, the indexing task can't complete hand-off.
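For what it's worth, here is a quick sanity check on the numbers. If (and this is an assumption about Druid's internals, not something confirmed by the log) the value printed after the path is "available bytes", computed as the location's maxSize minus the bytes already accounted for in the cache, then a negative value would simply mean the cache location is already tracked as over its maxSize:

```python
# Hedged sketch: back out the implied "used" bytes from the exception message,
# ASSUMING the printed value is available = maxSize - used. This is an
# interpretation of the message format, not confirmed Druid behavior.
max_size = 15_000_000_000           # maxSize from druid.segmentCache.locations
reported_available = -555_499_020   # value from the exception message
implied_used = max_size - reported_available
print(implied_used)  # bytes the node would believe are already used
```

Under that assumption, the node believes roughly 15.5 GB is already used against a 15 GB maxSize, so even a 4,033-byte segment is rejected as "too large".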
My test environment is as follows:
- CentOS 6.5
- Druid 0.9.0
- historical node config:
# HTTP server threads
druid.server.http.numThreads=50
druid.server.maxSize=300000000000
# Processing threads and buffers
druid.processing.buffer.sizeBytes=1073741824
druid.processing.numThreads=12
# Segment storage
druid.segmentCache.locations=[{"path":"var/druid/segment-cache","maxSize"\:15000000000}]
Any reply is appreciated. =)
Thank you.