Performance / Slowness issues in Production


Shyla Rajeev

Sep 16, 2015, 1:46:40 AM
to Hippo Community

As our data has grown, we have been seeing some performance issues in production.

We run Tomcat on Linux with the following JVM settings.

MAX_HEAP=6144
MIN_HEAP=256
JVM_OPTS="-server -Xmx${MAX_HEAP}m -Xms${MIN_HEAP}m -XX:MaxPermSize=128m -XX:NewRatio=2 -XX:SurvivorRatio=6 -XX:+UseConcMarkSweepGC -XX:+UseParNewGC -XX:+CMSClassUnloadingEnabled -XX:+CMSParallelRemarkEnabled -XX:CMSFullGCsBeforeCompaction=1 -XX:MaxGCPauseMillis=200"

In gc.log on the servers, I see Allocation Failure entries like the ones below, and they occur at frequent intervals:

2015-09-16T02:03:37.124+0000: 1.769: [GC (Allocation Failure) 1.769: [ParNew: 65600K->9196K(76480K), 0.0103478 secs] 65600K->9196K(251264K), 0.0104132 secs] [Times: user=0.02 sys=0.01, real=0.02 secs]
2015-09-16T02:03:48.538+0000: 13.184: [GC (Allocation Failure) 13.184: [ParNew: 73243K->8339K(76480K), 0.0203458 secs] 82934K->20774K(251264K), 0.0204103 secs] [Times: user=0.03 sys=0.00, real=0.02 secs]
2015-09-16T02:03:49.187+0000: 13.833: [GC (Allocation Failure) 13.833: [ParNew: 73939K->10880K(76480K), 0.0188553 secs] 86374K->27107K(251264K), 0.0189236 secs] [Times: user=0.04 sys=0.00, real=0.02 secs]
2015-09-16T02:03:49.568+0000: 14.214: [GC (Allocation Failure) 14.214: [ParNew: 76480K->10880K(76480K), 0.0210290 secs] 92707K->34156K(251264K), 0.0210920 secs] [Times: user=0.03 sys=0.01, real=0.02 secs]
2015-09-16T02:03:49.817+0000: 14.463: [GC (Allocation Failure) 14.463: [ParNew: 76480K->10880K(76480K), 0.0186485 secs] 99756K->36207K(251264K), 0.0187362 secs] [Times: user=0.04 sys=0.00, real=0.02 secs]
2015-09-16T02:03:50.072+0000: 14.718: [GC (Allocation Failure) 14.718: [ParNew: 76480K->10299K(76480K), 0.0105036 secs] 101807K->37152K(251264K), 0.0105578 secs] [Times: user=0.02 sys=0.00, real=0.01 secs]
2015-09-16T02:03:50.393+0000: 15.039: [GC (Allocation Failure) 15.039: [ParNew: 75899K->10880K(76480K), 0.0237515 secs] 102752K->39840K(251264K), 0.0238306 secs] [Times: user=0.05 sys=0.00, real=0.03 secs]
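
For context on how often these young-generation collections run and how much pause time they add up to, the log can be summarized with standard shell tools (a minimal sketch; it assumes the file is named gc.log and has the format shown above):

# Count the young-generation collections logged so far
grep -c "Allocation Failure" gc.log

# Sum and average the "real=" pause times across all entries
grep "Allocation Failure" gc.log \
  | sed 's/.*real=\([0-9.]*\) secs.*/\1/' \
  | awk '{ total += $1; n++ } END { if (n) printf "%d pauses, %.2fs total, %.3fs average\n", n, total, total/n }'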

As we have been struggling with this issue for some time, I wanted to see if anyone in this Hippo group has seen it before or has suggestions.

Thank you!
Shyla Rajeev

marijan milicevic

Sep 16, 2015, 4:48:56 AM
to hippo-c...@googlegroups.com
Hi,

These are normal GC messages, I believe for the eden space, and it looks like the GC pauses are not taking that long (around 0.02 seconds).
cheers
marijan


Ard Schrijvers

Sep 16, 2015, 4:58:23 AM
to hippo-c...@googlegroups.com
Hey Shyla,

There can be many different reasons for slow performance. Apart
from JVM settings it can, for example, also be expensive queries or a
wrong content model. Either way, I see you have -XX:MaxPermSize=128m.
IIRC, for Java 7 it is better to set it to -XX:MaxPermSize=256m, and
for Java 8 the option is gone, so it is pointless there.
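
For example (only a sketch; keep the rest of your existing options
unchanged and pick the variant that matches your Java version):

# Java 7: raise the PermGen cap
JVM_OPTS="-server -Xmx${MAX_HEAP}m -Xms${MIN_HEAP}m -XX:MaxPermSize=256m <rest of your existing options>"

# Java 8: PermGen no longer exists, so simply drop -XX:MaxPermSize
JVM_OPTS="-server -Xmx${MAX_HEAP}m -Xms${MIN_HEAP}m <rest of your existing options>"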

Furthermore, it really helps to know what is slow. The CMS, the site,
everything, or only certain pages? Etc. etc.

For the site, you can also diagnose performance via [1]

HTH,

Regards Ard

[1] http://www.onehippo.org/library/concepts/request-handling/hst-page-diagnostics.html





Shyla Rajeev

Sep 17, 2015, 2:48:19 PM
to Hippo Community
Thank you!
It is the CMS authoring app that is slow. There is overall slowness at certain times, and each click, such as Save & Close or Edit, takes too long.

-Shyla

Ard Schrijvers

Sep 18, 2015, 3:50:20 AM
to hippo-c...@googlegroups.com
On Thu, Sep 17, 2015 at 8:48 PM, Shyla Rajeev <shyl...@gmail.com> wrote:
> Thank you!
> It is the CMS Authoring app that is slow. There is an overall slowness at
> certain times, and waits too long for each click like Save&Close, Edit etc.

Do you have very large documents perhaps? Can you check in the browser
(network traffic) whether the response from the server is slow or
whether it is slow on your client (the browser)? Also, you still have
not mentioned which CMS version and which Java version you use. As said
before, on Java 7 you should increase -XX:MaxPermSize from 128m; on
Java 8 the flag no longer has any effect.
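
One quick way to separate server time from browser time is to replay a
slow request from the command line (a sketch; the URL and session
cookie are placeholders, copy the real ones from the browser's network
tab, e.g. via "Copy as cURL"):

# Times a single CMS request without any browser rendering involved
curl -s -o /dev/null \
  -H "Cookie: JSESSIONID=<your-session-id>" \
  -w "connect: %{time_connect}s  first byte: %{time_starttransfer}s  total: %{time_total}s\n" \
  "https://cms.example.com/cms/"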

That said, without more specific input it is close to impossible to
help you really well (other than asking you many questions)

Regards Ard

Shyla Rajeev

Sep 18, 2015, 7:51:51 AM
to Hippo Community
Yes, we do have large documents, and server response is slow. We have New Relic and can see the slowest transactions taking up to 2 minutes; the average slow response time is 40-50 seconds now.

Our application information:

Load balancers: Amazon ELBs - one for CMS server requests, one for Site server requests.

Authoring CMS server (embedded content repository):
Apache Tomcat
Java 8.0
CentOS 7

Delivery server (delivery tier (site), content repository):
Apache Tomcat
Java 8.0
CentOS 7

Database: Amazon RDS - PostgreSQL 9.4.1

Additional notes:
Sticky sessions enabled for CMS requests only.
HTTPS for authoring and delivery servers.
AWS AZ: multiple AZs in PROD.
ELB ports: 443
Tomcat ports: 8443 & 8080; 8080 redirects to 8443.
Network and clustering: our servers are in AWS; we have two EC2 instances for CMS and two for Site in production.
Hippo version: 7.9.8, with the following hotfixes:
<hippo.cms.version>2.26.21</hippo.cms.version>
<hippo.hst.version>2.28.12-HSTTWO-3367</hippo.hst.version>


I am trying different configurations in DEV, where we can reproduce the issue; DEV is loaded with PROD data. I removed the -XX:MaxPermSize=128m there.
I am attaching a set of New Relic graphs here. The ones without DEV in the name are from PROD, where I have the JVM settings as posted in my first message in this thread:
MAX_HEAP=6144
MIN_HEAP=256
JVM_OPTS="-server -Xmx${MAX_HEAP}m -Xms${MIN_HEAP}m -XX:MaxPermSize=128m -XX:NewRatio=2 -XX:SurvivorRatio=6 -XX:+UseConcMarkSweepGC -XX:+UseParNewGC -XX:+CMSClassUnloadingEnabled -XX:+CMSParallelRemarkEnabled -XX:CMSFullGCsBeforeCompaction=1 -XX:MaxGCPauseMillis=200"

In DEV I tried three different settings, and you can see them clearly separated by gaps in the attached DEV New Relic graphs:
First try (changed only MIN_HEAP):
MAX_HEAP=6144
MIN_HEAP=6144
JVM_OPTS="-server -Xmx${MAX_HEAP}m -Xms${MIN_HEAP}m -XX:NewRatio=2 -XX:SurvivorRatio=6 -XX:+UseConcMarkSweepGC -XX:+UseParNewGC -XX:+CMSClassUnloadingEnabled -XX:+CMSParallelRemarkEnabled -XX:CMSFullGCsBeforeCompaction=1 -XX:MaxGCPauseMillis=200"

Second try (changed NewRatio and SurvivorRatio to give more space to eden):
MAX_HEAP=6144
MIN_HEAP=6144
JVM_OPTS="-server -Xmx${MAX_HEAP}m -Xms${MIN_HEAP}m -XX:NewRatio=1 -XX:SurvivorRatio=10 -XX:+UseConcMarkSweepGC -XX:+UseParNewGC -XX:+CMSClassUnloadingEnabled -XX:+CMSParallelRemarkEnabled -XX:CMSFullGCsBeforeCompaction=1 -XX:MaxGCPauseMillis=200"
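
For reference, the approximate young-generation split these ratios imply on a 6144m heap (rough numbers; the collector may adjust the exact boundaries):

# NewRatio=2  -> young generation ~ 6144m / (2 + 1) = 2048m
# NewRatio=1  -> young generation ~ 6144m / (1 + 1) = 3072m
# SurvivorRatio=6  -> each survivor space ~ young / (6 + 2),  so eden ~ 6/8  of young
# SurvivorRatio=10 -> each survivor space ~ young / (10 + 2), so eden ~ 10/12 of young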

Third try, with the G1 collector (after reading an article about G1; we have 8 GB of memory):
MAX_HEAP=6144
MIN_HEAP=6144
JVM_OPTS="-server -Xmx${MAX_HEAP}m -Xms${MIN_HEAP}m -XX:+UseG1GC -XX:ParallelGCThreads=20 -XX:ConcGCThreads=5 -XX:InitiatingHeapOccupancyPercent=70 -XX:MaxGCPauseMillis=200"
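
To line each run up against the New Relic graphs more precisely, detailed GC logging can be appended to the configuration under test (a sketch; the log path is an assumption about our Tomcat layout, and these are the pre-JDK-9 flags that apply to Java 8):

# Adds per-collection detail, wall-clock timestamps and total stopped time to the GC log
JVM_OPTS="$JVM_OPTS -verbose:gc -XX:+PrintGCDetails -XX:+PrintGCDateStamps \
  -XX:+PrintGCApplicationStoppedTime -Xloggc:/var/log/tomcat/gc-test-run.log"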

Hippo index size is at 734MB now.

If you look at the attached New Relic graphs from PROD, the committed heap is a flat line, and the committed eden space is very low as well.
I am not sure why it is not growing toward the max as needed (which contradicts the documentation I have read). Also, ParNew GC runs very frequently; I assume that may be because of the small committed eden space?
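
On the flat committed line: with -Xms (256m) far below -Xmx (6144m), the JVM only grows the committed heap when collections stop keeping up, so a small, flat committed size is not unusual; setting -Xms equal to -Xmx, as in the DEV tries, commits the full heap up front. To watch the eden capacity and young-GC frequency directly on an instance, independent of New Relic, something like the following can be used (a sketch; the PID lookup assumes a single Tomcat started via the standard Bootstrap class):

# Samples heap space capacities/usage (KB) and GC counters every 5 seconds.
# EC/EU = eden capacity/used, OC/OU = old generation, YGC/YGCT = young GC count/total time.
PID=$(pgrep -f org.apache.catalina.startup.Bootstrap)
jstat -gc "$PID" 5s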

Please let me know if you have any other questions.

Thank you!
Shyla Rajeev
Attachments: DEV_mem_3_types1.png, DEV_mem_3_types2.png, DEV_mem_3_types3.png, DEV_mem_3_types4.png, mem1.png, mem2.png, mem3.png

Ard Schrijvers

Sep 18, 2015, 8:30:49 AM
to hippo-c...@googlegroups.com
Hey Shyla,

On Fri, Sep 18, 2015 at 1:51 PM, Shyla Rajeev <shyl...@gmail.com> wrote:
> Yes. We do have large documents. Server response is slow. We have newRelic,
> and can see slowest transactions up to 2mts. Average slow response time is
> 40-50secs now.

That is quite slow indeed! To get an idea of the order of magnitude we
are talking about with regard to 'large documents', can you tell me,
for example, how many JCR nodes a large document consists of? You can
count this in the /console for a document that is slow.

Regards Ard

Shyla Rajeev

Sep 18, 2015, 12:39:50 PM
to Hippo Community
A large document of ours can have about 25 fields, some of which are linked documents that can themselves have multiple entries.

Thanks
Shyla

Ard Schrijvers

Sep 18, 2015, 3:39:04 PM
to hippo-c...@googlegroups.com
Hey Shyla,


On Fri, Sep 18, 2015 at 6:39 PM, Shyla Rajeev <shyl...@gmail.com> wrote:
> A Large document we have can have about 25 fields, out of which some are
> linked documents which can have multiples.

I think you have a specific document-model performance issue that is
beyond the scope of what this public mailing list can support, at
least for me, and most likely for other Hippo developers as well. For
these more time-consuming issues that require a deeper investigation,
you are better off approaching formal Hippo support via sales; see
https://www.onehippo.com/en/about/contact

Regards Ard

Shyla Rajeev

Sep 18, 2015, 8:00:32 PM
to Hippo Community
Thank you!

We have an enterprise license and a Hippo Support account.
We have been affected by this issue for a long time now,
so I was trying my luck here as well to see if anyone else in the community has encountered or resolved this before.

Thanks again,
Shyla Rajeev

Shyla Rajeev

Jan 6, 2016, 2:55:48 PM
to Hippo Community
I recently got a request from a user here asking whether we got this resolved.
I am adding the response here, as it may benefit others too.

We got this resolved to some extent.
  • We had to add a lot of memory and disk space to our AWS servers, database, etc.
  • The bundle cache and version cache sizes were increased to keep the cache miss ratio at the recommended level (see the repository.xml sketch after this list).
  • Some custom caching was introduced to warm up the bundle cache.
  • We did GC tuning to get better performance.
  • repository.xml was tweaked according to Hippo's recommendations (mainly the search index section).
  • We performed Hippo's recommended repository maintenance, etc.
  • We also redesigned some of the documents that had a lot of custom compound elements, to keep them simple and within Hippo's limits (split them into multiple documents and linked those together).
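
For reference, the bundle cache mentioned above is configured per persistence manager in repository.xml; a minimal sketch (the class matches our PostgreSQL setup, the size is in MB and only illustrative, and the existing connection parameters are omitted):

<!-- Workspace persistence manager: enlarge the bundle cache (Jackrabbit's default is 8 MB) -->
<PersistenceManager class="org.apache.jackrabbit.core.persistence.pool.PostgreSQLPersistenceManager">
  <!-- ...existing connection parameters stay as they are... -->
  <param name="bundleCacheSize" value="256"/>
</PersistenceManager>

If I recall correctly, the PersistenceManager under the Versioning section accepts the same bundleCacheSize parameter for the version bundle cache.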

Thanks
Shyla

Ashbin Mathew

Jan 7, 2016, 12:40:59 AM
to Hippo Community
Thank you, Shyla. I hope this will help us.

Thanks
Ashbin