too large for storage


Sue

Jan 28, 2014, 9:18:17 AM
to druid-de...@googlegroups.com

Hello,


How are you guys?


I'm trying to index data on a historical node.


But I'm getting an error like this:

 

------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

 

2014-01-28 13:18:54,579 INFO [ZkCoordinator-0] com.metamx.emitter.core.LoggingEmitter - Event [{"feed":"alerts","timestamp":"2014-01-28T13:18:54.579Z","service":"historical","host":"localhost:29851","severity":"component-failure","desc$

2014-01-28 13:18:54,581 INFO [ZkCoordinator-0] io.druid.server.coordination.ZkCoordinator - Completed processing for node[/druid/loadQueue/localhost:29851/dpi_2013-11-26T00:00:00.000Z_2013-11-27T00:00:00.000Z_2014-01-28T11:55:50.890Z]

2014-01-28 13:18:54,581 INFO [ZkCoordinator-0] io.druid.server.coordination.ZkCoordinator - /druid/loadQueue/localhost:29851/dpi_2013-11-26T00:00:00.000Z_2013-11-27T00:00:00.000Z_2014-01-28T11:55:50.890Z was removed

2014-01-28 13:18:54,584 INFO [ZkCoordinator-0] io.druid.server.coordination.ZkCoordinator - New node[/druid/loadQueue/localhost:29851/dpi_2013-11-25T00:00:00.000Z_2013-11-26T00:00:00.000Z_2014-01-28T11:55:40.449Z] with segmentClass[cla$

2014-01-28 13:18:54,585 INFO [ZkCoordinator-0] io.druid.server.coordination.ZkCoordinator - Loading segment dpi_2013-11-25T00:00:00.000Z_2013-11-26T00:00:00.000Z_2014-01-28T11:55:40.449Z

2014-01-28 13:18:54,585 INFO [ZkCoordinator-0] io.druid.server.coordination.ServerManager - Told to delete a queryable on dataSource[dpi] for interval[2013-11-25T00:00:00.000Z/2013-11-26T00:00:00.000Z] and version [2014-01-28T11:55:40.$

2014-01-28 13:18:54,586 INFO [ZkCoordinator-0] io.druid.segment.loading.OmniSegmentLoader - Asked to cleanup something[DataSegment{size=52104639, shardSpec=NoneShardSpec, metrics=[count, INDI_NOTI_PRICE, LAND_AMT, PUB_PRSN_CNT, OWN_JIB$

2014-01-28 13:18:54,586 WARN [ZkCoordinator-0] io.druid.server.coordination.ZkCoordinator - Unable to delete segmentInfoCacheFile[/data/historical/segmentInfoCache/dpi_2013-11-25T00:00:00.000Z_2013-11-26T00:00:00.000Z_2014-01-28T11:55:$

2014-01-28 13:18:54,586 INFO [ZkCoordinator-0] io.druid.server.coordination.SingleDataSegmentAnnouncer - Unannouncing segment[dpi_2013-11-25T00:00:00.000Z_2013-11-26T00:00:00.000Z_2014-01-28T11:55:40.449Z] at path[/druid/servedSegments$

2014-01-28 13:18:54,586 INFO [ZkCoordinator-0] io.druid.curator.announcement.Announcer - unannouncing [/druid/servedSegments/localhost:29851/dpi_2013-11-25T00:00:00.000Z_2013-11-26T00:00:00.000Z_2014-01-28T11:55:40.449Z]

2014-01-28 13:18:54,586 ERROR [ZkCoordinator-0] io.druid.curator.announcement.Announcer - Path[/druid/servedSegments/localhost:29851/dpi_2013-11-25T00:00:00.000Z_2013-11-26T00:00:00.000Z_2014-01-28T11:55:40.449Z] not announced, cannot $

2014-01-28 13:18:54,587 ERROR [ZkCoordinator-0] io.druid.server.coordination.ZkCoordinator - Failed to load segment for dataSource: {class=io.druid.server.coordination.ZkCoordinator, exceptionType=class io.druid.segment.loading.Segment$

io.druid.segment.loading.SegmentLoadingException: Exception loading segment[dpi_2013-11-25T00:00:00.000Z_2013-11-26T00:00:00.000Z_2014-01-28T11:55:40.449Z]

        at io.druid.server.coordination.ZkCoordinator.addSegment(ZkCoordinator.java:239)

        at io.druid.server.coordination.SegmentChangeRequestLoad.go(SegmentChangeRequestLoad.java:44)

        at io.druid.server.coordination.ZkCoordinator$1.childEvent(ZkCoordinator.java:131)

        at org.apache.curator.framework.recipes.cache.PathChildrenCache$5.apply(PathChildrenCache.java:494)

        at org.apache.curator.framework.recipes.cache.PathChildrenCache$5.apply(PathChildrenCache.java:488)

        at org.apache.curator.framework.listen.ListenerContainer$1.run(ListenerContainer.java:92)

        at com.google.common.util.concurrent.MoreExecutors$SameThreadExecutorService.execute(MoreExecutors.java:293)

        at org.apache.curator.framework.listen.ListenerContainer.forEach(ListenerContainer.java:83)

        at org.apache.curator.framework.recipes.cache.PathChildrenCache.callListeners(PathChildrenCache.java:485)

        at org.apache.curator.framework.recipes.cache.EventOperation.invoke(EventOperation.java:35)

        at org.apache.curator.framework.recipes.cache.PathChildrenCache$11.run(PathChildrenCache.java:755)

        at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)

        at java.util.concurrent.FutureTask.run(FutureTask.java:262)

        at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)

        at java.util.concurrent.FutureTask.run(FutureTask.java:262)

        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)

        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)

        at java.lang.Thread.run(Thread.java:744)

Caused by: com.metamx.common.ISE: Segment[dpi_2013-11-25T00:00:00.000Z_2013-11-26T00:00:00.000Z_2014-01-28T11:55:40.449Z:52,104,639] too large for storage[/data/historical/indexCache:10,535,986].

        at io.druid.segment.loading.OmniSegmentLoader.getSegmentFiles(OmniSegmentLoader.java:114)

        at io.druid.segment.loading.OmniSegmentLoader.getSegment(OmniSegmentLoader.java:93)

        at io.druid.server.coordination.ServerManager.loadSegment(ServerManager.java:129)

        at io.druid.server.coordination.ZkCoordinator.addSegment(ZkCoordinator.java:235)

 

 

--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

 

I still have plenty of disk space left:

 

Filesystem                   Size  Used  Avail  Use%  Mounted on
/dev/mapper/vg_dpi-lv_root    50G   29G    19G   61%  /
tmpfs                         64G     0    64G    0%  /dev/shm
/dev/sda2                    485M  104M   356M   23%  /boot
/dev/sda1                    200M  260K   200M    1%  /boot/efi
/dev/mapper/vg_dpi-lv_home   412G  111G   281G   29%  /home

 

 

Could you let me know how to deal with this?

 

Many thanks,

 

Sue.

 

Fangjin Yang

Jan 28, 2014, 5:40:02 PM
to druid-de...@googlegroups.com
Hi Sue,

This problem is the default size limit on your historical node: the exception shows the segment (52,104,639 bytes) is larger than the remaining capacity of the segment cache (10,535,986 bytes), so the node refuses to download it.

Try setting these JVM args:

-Ddruid.segmentCache.locations=[{"path":"/data/historical/indexCache","maxSize":"500000000000"}]
-Ddruid.server.maxSize=500000000000

Historical nodes have a configurable upper bound on the total size of segments they can download. This lets historical nodes keep a flexible memory/disk ratio for performance tuning.

Let me know if that helps.

Thx,
FJ

Sue

Feb 4, 2014, 4:03:18 AM
to druid-de...@googlegroups.com
Hi Fangjin,

It's working properly now.
Thank you so much.

Best regards,
Sue