java.lang.OutOfMemoryError: Java heap space during Initialization


TK

Jun 15, 2016, 3:34:02 AM6/15/16
to Hippo Community
Hi,

I am having difficulty choosing the memory size for my Hippo production instance. At the moment it runs with 3GB for both Xmx and Xms, but I'd like to decrease that because I'd rather scale horizontally than vertically. I scaled it down to 1GB, but at Tomcat startup I get OutOfMemoryErrors. I guess the initialization phase (building the Lucene index) consumes too much heap space.

Should this be considered a bug, or is there no way around Hippo consuming so much heap? 1.5GB seems to be the threshold (building the index takes about 20 minutes).
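For reference, the heap is configured along these lines in Tomcat's setenv.sh (a sketch of a typical setup rather than an exact copy of our config; the heap-dump flags are just something added here to help inspect what actually fills the heap):

    # conf/setenv.sh -- the 1GB setting that triggers the OOM, plus a heap
    # dump on OutOfMemoryError so the heap contents can be inspected afterwards
    CATALINA_OPTS="$CATALINA_OPTS -Xms1024m -Xmx1024m"
    CATALINA_OPTS="$CATALINA_OPTS -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/tmp"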

The error is as follows

Exception in thread "AsyncFileHandlerWriter-225534817" java.lang.OutOfMemoryError: Java heap space
        at com.mysql.jdbc.MysqlIO.getResultSet(MysqlIO.java:410)
        at com.mysql.jdbc.MysqlIO.readResultsForQueryOrUpdate(MysqlIO.java:3009)
        at com.mysql.jdbc.MysqlIO.readAllResults(MysqlIO.java:2257)
        at com.mysql.jdbc.MysqlIO.sqlQueryDirect(MysqlIO.java:2650)
        at com.mysql.jdbc.ConnectionImpl.execSQL(ConnectionImpl.java:2545)
        at com.mysql.jdbc.PreparedStatement.executeInternal(PreparedStatement.java:1901)
        at com.mysql.jdbc.PreparedStatement.executeQuery(PreparedStatement.java:2002)
        at org.apache.tomcat.dbcp.dbcp2.PoolableConnection.validate(PoolableConnection.java:300)
        at org.apache.tomcat.dbcp.dbcp2.PoolableConnectionFactory.validateConnection(PoolableConnectionFactory.java:356)
        at org.apache.tomcat.dbcp.dbcp2.PoolableConnectionFactory.validateObject(PoolableConnectionFactory.java:341)
        at org.apache.tomcat.dbcp.pool2.impl.GenericObjectPool.evict(GenericObjectPool.java:805)
        at org.apache.tomcat.dbcp.pool2.impl.BaseGenericObjectPool$Evictor.run(BaseGenericObjectPool.java:1034)
        at java.util.TimerThread.mainLoop(Timer.java:555)
        at java.util.TimerThread.run(Timer.java:505)
java.lang.OutOfMemoryError: Java heap space
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.addConditionWaiter(AbstractQueuedSynchronizer.java:1855)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2068)
        at java.util.concurrent.LinkedBlockingDeque.pollFirst(LinkedBlockingDeque.java:522)
        at java.util.concurrent.LinkedBlockingDeque.poll(LinkedBlockingDeque.java:684)
        at org.apache.juli.AsyncFileHandler$LoggerThread.run(AsyncFileHandler.java:145)
Exception in thread "HstSiteConfigurationChangesChecker" java.lang.OutOfMemoryError: Java heap space
        at java.util.LinkedList.listIterator(LinkedList.java:868)
        at java.util.AbstractList.listIterator(AbstractList.java:299)
        at java.util.AbstractSequentialList.iterator(AbstractSequentialList.java:239)
        at org.apache.commons.configuration.CompositeConfiguration.getProperty(CompositeConfiguration.java:187)
        at org.apache.commons.configuration.AbstractConfiguration.resolveContainerStore(AbstractConfiguration.java:1178)
        at org.apache.commons.configuration.AbstractConfiguration.getString(AbstractConfiguration.java:1044)
        at org.apache.commons.configuration.AbstractConfiguration.getString(AbstractConfiguration.java:1027)
        at org.hippoecm.hst.site.container.HstSiteConfigServlet$HstSiteConfigurationChangesChecker.run(HstSiteConfigServlet.java:836)

14.06.2016 16:31:58 ERROR localhost-startStop-1 [RepositoryServlet.init:229] Error while setting up JCR repository: 
javax.jcr.RepositoryException: unchecked exception: java.lang.reflect.InvocationTargetException: null
        at org.hippoecm.repository.HippoRepositoryFactory.getHippoRepository(HippoRepositoryFactory.java:191)
        at org.hippoecm.repository.RepositoryServlet.init(RepositoryServlet.java:189)
        at org.apache.catalina.core.StandardWrapper.initServlet(StandardWrapper.java:1231)
        at org.apache.catalina.core.StandardWrapper.loadServlet(StandardWrapper.java:1144)
        at org.apache.catalina.core.StandardWrapper.load(StandardWrapper.java:1031)
        at org.apache.catalina.core.StandardContext.loadOnStartup(StandardContext.java:4978)
        at org.apache.catalina.core.StandardContext.startInternal(StandardContext.java:5270)
        at org.apache.catalina.util.LifecycleBase.start(LifecycleBase.java:150)
        at org.apache.catalina.core.ContainerBase.addChildInternal(ContainerBase.java:725)
        at org.apache.catalina.core.ContainerBase.addChild(ContainerBase.java:701)
        at org.apache.catalina.core.StandardHost.addChild(StandardHost.java:717)
        at org.apache.catalina.startup.HostConfig.deployWAR(HostConfig.java:945)
        at org.apache.catalina.startup.HostConfig$DeployWar.run(HostConfig.java:1795)
        at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
        at java.util.concurrent.FutureTask.run(FutureTask.java:266)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
        at java.lang.Thread.run(Thread.java:745)
Caused by: java.lang.reflect.InvocationTargetException
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:498)
        at org.hippoecm.repository.HippoRepositoryFactory.getHippoRepository(HippoRepositoryFactory.java:178)
        ... 17 more
Caused by: java.lang.OutOfMemoryError: Java heap space

ospk...@gmail.com

Jun 15, 2016, 3:57:38 AM6/15/16
to Hippo Community, ospk...@gmail.com
So it somehow seems to me that the heap size has to be scaled with the size of the content in the JCR that needs to be indexed (I hope that's not the case). If so, this might be arguable :)


Ard Schrijvers

Jun 15, 2016, 4:05:36 AM6/15/16
to hippo-c...@googlegroups.com, ospk...@gmail.com
Hey,
the heap size normally doesn't need to be that big, and it certainly
doesn't have to scale with the amount of content: there are just some
LRU caches that start to evict when they are full.

Starting up should be fairly lightweight unless:

1) the cluster node is way behind on the revision id (this consumes
quite some memory, I think, if you are far behind)
2) you have to rebuild the Lucene index (takes time but should not
result in OOM)
3) you have 'enableConsistencyCheck' and/or 'autoRepair' set to true:
set them to false in repository.xml and, for existing storage, in
workspace.xml (a snippet showing where these params live follows below)
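For reference, a sketch of where those two parameters go (inside the
existing SearchIndex element of repository.xml, and of workspace.xml for
workspaces that already exist; the rest of the element stays whatever it
already is in your setup):

    <SearchIndex ...>
      ...
      <param name="enableConsistencyCheck" value="false"/>
      <param name="autoRepair" value="false"/>
    </SearchIndex>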

This should help. If you still have problems, I advise you to contact
the Hippo helpdesk / sales, because for these kinds of production /
deployment issues we have official support.

Regards Ard

[1] https://www.onehippo.org/library/enterprise/installation-and-configuration/repository-maintenance.html


ospk...@gmail.com

Jun 15, 2016, 10:42:04 AM6/15/16
to Hippo Community, ospk...@gmail.com
Hi Ard,

thanks for your comments.

On Wednesday, June 15, 2016 at 10:05:36 AM UTC+2, a.schrijvers wrote:
> Hey,
>
> Starting up should be fairly lightweight unless:
>
> 1) the cluster node is way behind on the revision id (this consumes
> quite some memory, I think, if you are far behind)

We're using an auto-scaling approach where Ansible provisions new nodes and they start at zero, so this might be the issue. I don't really understand how to autoscale with Jackrabbit, because each new node gets a random cluster node id and therefore starts from scratch in the Jackrabbit journal table. I don't think Jackrabbit is designed for autoscaling. Reading between the lines of your comments, I guess you always reuse node ids so the journals don't start from the beginning again (a sketch of pinning a stable id per host follows at the end of this reply).

> 2) you have to rebuild the Lucene index (takes time but should not
> result in OOM)

You are right: if I reuse a node id but start without a Lucene index, the startup is still fast, so the node sync might be the bad guy.

> 3) you have 'enableConsistencyCheck' and/or 'autoRepair' set to true:
> set them to false in repository.xml and, for existing storage, in
> workspace.xml

Good point! I disabled them now; they wouldn't make sense for us at all anyway, because we're rebuilding the index every time we deploy.
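Here is the sketch I mentioned: pinning a stable cluster node id per host via Jackrabbit's standard system property, assuming repository.xml does not hard-code an id on the Cluster element (the value template is only an illustration):

    # conf/setenv.sh on each node -- give every host a stable, unique id
    # instead of letting a random one be generated on first start
    CATALINA_OPTS="$CATALINA_OPTS -Dorg.apache.jackrabbit.core.cluster.node_id=node-$(hostname)"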

Ard Schrijvers

Jun 15, 2016, 10:47:36 AM6/15/16
to hippo-c...@googlegroups.com, Kevin Klein
Hello,

Your best bet is to read the thread [1] where this topic was recently
discussed. We will provide horizontal autoscaling in the near future on
our OD2 (on demand) environment. Most likely, however, it will be (and
stay) an enterprise feature. If you want to know more about it, it is
best to contact sales.

HTH,

Regards Ard

[1] https://groups.google.com/d/msg/hippo-community/CzMXIcHk_VY/wISNMBUNAQAJ