Hi all,

I'm trying to load data onto a historical node, but I'm getting the error below.
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
2014-01-28 13:18:54,579 INFO [ZkCoordinator-0] com.metamx.emitter.core.LoggingEmitter - Event [{"feed":"alerts","timestamp":"2014-01-28T13:18:54.579Z","service":"historical","host":"localhost:29851","severity":"component-failure","desc$
2014-01-28 13:18:54,581 INFO [ZkCoordinator-0] io.druid.server.coordination.ZkCoordinator - Completed processing for node[/druid/loadQueue/localhost:29851/dpi_2013-11-26T00:00:00.000Z_2013-11-27T00:00:00.000Z_2014-01-28T11:55:50.890Z]
2014-01-28 13:18:54,581 INFO [ZkCoordinator-0] io.druid.server.coordination.ZkCoordinator - /druid/loadQueue/localhost:29851/dpi_2013-11-26T00:00:00.000Z_2013-11-27T00:00:00.000Z_2014-01-28T11:55:50.890Z was removed
2014-01-28 13:18:54,584 INFO [ZkCoordinator-0] io.druid.server.coordination.ZkCoordinator - New node[/druid/loadQueue/localhost:29851/dpi_2013-11-25T00:00:00.000Z_2013-11-26T00:00:00.000Z_2014-01-28T11:55:40.449Z] with segmentClass[cla$
2014-01-28 13:18:54,585 INFO [ZkCoordinator-0] io.druid.server.coordination.ZkCoordinator - Loading segment dpi_2013-11-25T00:00:00.000Z_2013-11-26T00:00:00.000Z_2014-01-28T11:55:40.449Z
2014-01-28 13:18:54,585 INFO [ZkCoordinator-0] io.druid.server.coordination.ServerManager - Told to delete a queryable on dataSource[dpi] for interval[2013-11-25T00:00:00.000Z/2013-11-26T00:00:00.000Z] and version [2014-01-28T11:55:40.$
2014-01-28 13:18:54,586 INFO [ZkCoordinator-0] io.druid.segment.loading.OmniSegmentLoader - Asked to cleanup something[DataSegment{size=52104639, shardSpec=NoneShardSpec, metrics=[count, INDI_NOTI_PRICE, LAND_AMT, PUB_PRSN_CNT, OWN_JIB$
2014-01-28 13:18:54,586 WARN [ZkCoordinator-0] io.druid.server.coordination.ZkCoordinator - Unable to delete segmentInfoCacheFile[/data/historical/segmentInfoCache/dpi_2013-11-25T00:00:00.000Z_2013-11-26T00:00:00.000Z_2014-01-28T11:55:$
2014-01-28 13:18:54,586 INFO [ZkCoordinator-0] io.druid.server.coordination.SingleDataSegmentAnnouncer - Unannouncing segment[dpi_2013-11-25T00:00:00.000Z_2013-11-26T00:00:00.000Z_2014-01-28T11:55:40.449Z] at path[/druid/servedSegments$
2014-01-28 13:18:54,586 INFO [ZkCoordinator-0] io.druid.curator.announcement.Announcer - unannouncing [/druid/servedSegments/localhost:29851/dpi_2013-11-25T00:00:00.000Z_2013-11-26T00:00:00.000Z_2014-01-28T11:55:40.449Z]
2014-01-28 13:18:54,586 ERROR [ZkCoordinator-0] io.druid.curator.announcement.Announcer - Path[/druid/servedSegments/localhost:29851/dpi_2013-11-25T00:00:00.000Z_2013-11-26T00:00:00.000Z_2014-01-28T11:55:40.449Z] not announced, cannot $
2014-01-28 13:18:54,587 ERROR [ZkCoordinator-0] io.druid.server.coordination.ZkCoordinator - Failed to load segment for dataSource: {class=io.druid.server.coordination.ZkCoordinator, exceptionType=class io.druid.segment.loading.Segment$
io.druid.segment.loading.SegmentLoadingException: Exception loading segment[dpi_2013-11-25T00:00:00.000Z_2013-11-26T00:00:00.000Z_2014-01-28T11:55:40.449Z]
at io.druid.server.coordination.ZkCoordinator.addSegment(ZkCoordinator.java:239)
at io.druid.server.coordination.SegmentChangeRequestLoad.go(SegmentChangeRequestLoad.java:44)
at io.druid.server.coordination.ZkCoordinator$1.childEvent(ZkCoordinator.java:131)
at org.apache.curator.framework.recipes.cache.PathChildrenCache$5.apply(PathChildrenCache.java:494)
at org.apache.curator.framework.recipes.cache.PathChildrenCache$5.apply(PathChildrenCache.java:488)
at org.apache.curator.framework.listen.ListenerContainer$1.run(ListenerContainer.java:92)
at com.google.common.util.concurrent.MoreExecutors$SameThreadExecutorService.execute(MoreExecutors.java:293)
at org.apache.curator.framework.listen.ListenerContainer.forEach(ListenerContainer.java:83)
at org.apache.curator.framework.recipes.cache.PathChildrenCache.callListeners(PathChildrenCache.java:485)
at org.apache.curator.framework.recipes.cache.EventOperation.invoke(EventOperation.java:35)
at org.apache.curator.framework.recipes.cache.PathChildrenCache$11.run(PathChildrenCache.java:755)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
at java.util.concurrent.FutureTask.run(FutureTask.java:262)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
at java.util.concurrent.FutureTask.run(FutureTask.java:262)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:744)
Caused by: com.metamx.common.ISE: Segment[dpi_2013-11-25T00:00:00.000Z_2013-11-26T00:00:00.000Z_2014-01-28T11:55:40.449Z:52,104,639] too large for storage[/data/historical/indexCache:10,535,986].
at io.druid.segment.loading.OmniSegmentLoader.getSegmentFiles(OmniSegmentLoader.java:114)
at io.druid.segment.loading.OmniSegmentLoader.getSegment(OmniSegmentLoader.java:93)
at io.druid.server.coordination.ServerManager.loadSegment(ServerManager.java:129)
at io.druid.server.coordination.ZkCoordinator.addSegment(ZkCoordinator.java:235)
--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
The disks are nowhere near full, so I don't think it's actual disk exhaustion:
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/vg_dpi-lv_root 50G 29G 19G 61% /
tmpfs 64G 0 64G 0% /dev/shm
/dev/sda2 485M 104M 356M 23% /boot
/dev/sda1 200M 260K 200M 1% /boot/efi
/dev/mapper/vg_dpi-lv_home 412G 111G 281G 29% /home
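If I'm reading the stack trace correctly, the root cause is the ISE: the segment is about 52 MB (`52,104,639`) but the storage location `/data/historical/indexCache` is capped at about 10 MB (`10,535,986`), so this looks like a configured capacity limit rather than a full filesystem. My guess (property names taken from the Druid docs; the values here are illustrative, not my actual config) is that the relevant historical-node settings look something like:

```properties
# runtime.properties on the historical node -- illustrative values, not my real ones.
# maxSize is in bytes; it must be larger than the biggest segment the node will load,
# and the sum of location maxSizes should match druid.server.maxSize.
druid.segmentCache.locations=[{"path": "/data/historical/indexCache", "maxSize": 300000000000}]
druid.server.maxSize=300000000000
```

Is raising `maxSize` (and `druid.server.maxSize`) the right fix here, or am I misreading the error?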
Could you let me know how to resolve this?
Many thanks,
Sue.