Java memory tuning in vert.x


Jorge L.

Jun 6, 2014, 6:21:17 AM
to ve...@googlegroups.com
Although this is not a vert.x-specific issue but a Java one, I would like to know whether there are any recommendations or best practices for tuning Java memory when using vert.x.
In our experience this is the hardest part. We run into two issues:
a) OOM due to exceeding the Java heap. This can happen when working with big documents in vert.x under stress load. It would be nice if vert.x could provide some form of protection (discarding requests that cannot be processed for lack of memory). Currently we set -Xmx and -Xms to the same, relatively high value (2GB).
b) Exhausting the host's RAM (the Java process captures all of it). We have found that the Java process does not return all of its memory to the OS. We have monitored memory with Java tools and can confirm that the heap never exceeds the -Xmx limit and that there is (apparently) no memory leak. However, when we stress vert.x, the Java process takes much more RAM than expected, and performance degrades for lack of free RAM for the OS. It looks like a well-known Java problem (not vert.x specific).
We have tested some JVM parameters but without any benefit (sometimes even with a penalty), so we keep the defaults except for -Xmx and -Xms.
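For reference, the kind of protection meant in (a) could be approximated in application code by checking heap headroom before buffering a large request. This is only a crude sketch; the MemoryGuard class and the 20% threshold are invented for illustration, not a recommendation:

```java
// Sketch: shed work when heap headroom is low, instead of risking an OOM
// while buffering a big document. The 20% threshold is an arbitrary example.
public class MemoryGuard {
    private static final double MIN_FREE_RATIO = 0.20;

    public static boolean hasHeadroom() {
        Runtime rt = Runtime.getRuntime();
        // Heap still obtainable = configured max - currently used.
        long used = rt.totalMemory() - rt.freeMemory();
        long available = rt.maxMemory() - used;
        return (double) available / rt.maxMemory() > MIN_FREE_RATIO;
    }
}
```

A request handler would call `hasHeadroom()` before accepting a large body and reply with an error (e.g. HTTP 503) when it returns false.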

Martijn Verburg

Jun 6, 2014, 7:47:34 AM
to ve...@googlegroups.com
Hi Jorge,

1.) Setting -Xms and -Xmx to be the same is a bit of tuning folklore.

2.) I'd recommend streaming your documents or splitting them into smaller batches.

3.) It's the native component of Java that is probably 'leaking' and eating up all of your RAM. By 'leaking' I mean that there's probably a memory leak in your Java/vert.x code that's calling a native component.

Our recommendation is to capture the GC logs and use a GC log analyser to see what's really going on.
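If you want a quick first look before wiring up full GC logging (e.g. -verbose:gc, -XX:+PrintGCDetails, -Xloggc:&lt;file&gt; on HotSpot), the standard MXBeans give a coarse in-process summary. A minimal sketch, not a substitute for real GC logs:

```java
import java.lang.management.GarbageCollectorMXBean;
import java.lang.management.ManagementFactory;

// Sketch: a coarse view of GC activity from inside the JVM - per-collector
// collection counts and accumulated pause time. Useful as a sanity check
// before collecting full GC logs for an analyser.
public class GcStats {
    public static String summary() {
        StringBuilder sb = new StringBuilder();
        for (GarbageCollectorMXBean gc : ManagementFactory.getGarbageCollectorMXBeans()) {
            sb.append(gc.getName())
              .append(": count=").append(gc.getCollectionCount())
              .append(", timeMs=").append(gc.getCollectionTime())
              .append('\n');
        }
        return sb.toString();
    }
}
```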

Disclaimer: We sell a GC Log Analyser (http://www.jclarity.com/censum) but it has a 14-day free trial.

Cheers,
Martijn



Jez P

Jun 6, 2014, 11:35:40 AM
to ve...@googlegroups.com
I watched someone use jClarity software on a recent project and it's a great tool (particularly because it confirmed exactly what I was telling them). I'd recommend it to anyone running into suspected GC/OOM problems. 

I would say that it's tricky to offer a one-size-fits-all "memory tuning best practice", because not all applications have the same characteristics in terms of object lifetimes. In that regard, best practice is to measure, find out what's misbehaving or sub-optimal, respond by changing your parameters for that environment, and measure again. Measuring doesn't mean just looking at performance; it means looking at heap growth, the growth of regions within the heap, and whether objects are being promoted to older generations when they should be disposable. jClarity's stuff does that, and even makes suggestions for the parameters you should change to modify the behaviour.

The other thing I learned is that very few places seem to have people on their teams who know much about GC, even though they have excellent coders :)

Jordan Halterman

Jun 6, 2014, 4:33:28 PM
to ve...@googlegroups.com


On Jun 6, 2014, at 4:47 AM, Martijn Verburg <martijn...@gmail.com> wrote:

Hi Jorge,

1.) Setting -Xms and -Xmx to be the same is a bit of tuning folklore.

2.) I'd recommend streaming your documents or splitting them into smaller batches
This.

I built a system that processes various files - some > 1GB - throughout the day and sends a lot of that information over the event bus. It simply requires a mixture of streaming and event bus flow control. I ensure only a few thousand rows are ever in memory at any given time and throttle the stream to prevent overloading the event bus. With proper tuning that makes for very efficient memory usage and good performance.
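Outside vert.x, the flow-control principle can be sketched with a plain bounded queue: a slow consumer applies back-pressure to a fast producer, so only a fixed number of rows is ever in memory. In vert.x 2.x the analogous tools are ReadStream.pause()/resume() and a Pump; the class name and numbers below are made up for illustration:

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.atomic.AtomicInteger;

// Sketch: a bounded queue caps how many rows are in flight at once.
// The producer blocks when the cap is reached, which is exactly the
// back-pressure that pause()/resume() provides in vert.x streams.
public class BoundedPipeline {
    public static int process(int totalRows, int maxInFlight) {
        BlockingQueue<Integer> inFlight = new ArrayBlockingQueue<>(maxInFlight);
        AtomicInteger consumed = new AtomicInteger();
        Thread consumer = new Thread(() -> {
            try {
                for (int i = 0; i < totalRows; i++) {
                    inFlight.take();          // drain one row
                    consumed.incrementAndGet();
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });
        consumer.start();
        try {
            for (int row = 0; row < totalRows; row++) {
                inFlight.put(row);            // blocks once maxInFlight rows are queued
            }
            consumer.join();
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        return consumed.get();
    }
}
```

However fast the producer, heap usage stays bounded by `maxInFlight` rows.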

Jorge L.

Jun 9, 2014, 9:59:58 AM
to ve...@googlegroups.com
Thanks Martijn,
Unfortunately, I have to buffer the whole document, because I need to invoke a web service before forwarding the message to its final destination. This web service tells us whether the application is authorized or not. Because access to the web service is asynchronous, I have to keep the whole document in memory (otherwise I would lose it). We can use chunked mode for these big documents, but memory consumption (under high load with 1MB documents) is going to be very high.
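If whole documents must stay in memory until the asynchronous check completes, one way to bound worst-case heap use is to cap how many are buffered concurrently and shed the rest. A sketch only; the BufferBudget class and the cap value are invented:

```java
import java.util.concurrent.Semaphore;

// Sketch: if each in-flight request buffers up to ~1MB, capping the number
// of concurrently buffered documents bounds worst-case heap use. A request
// that cannot get a permit is rejected (e.g. with HTTP 503) instead of
// buffered. A cap of 500 would mean ~500MB worst case at 1MB per document.
public class BufferBudget {
    private final Semaphore permits;

    public BufferBudget(int maxBufferedDocs) {
        this.permits = new Semaphore(maxBufferedDocs);
    }

    /** Call before buffering; false means "shed this request". */
    public boolean tryAdmit() {
        return permits.tryAcquire();
    }

    /** Call once the document has been forwarded (or dropped). */
    public void release() {
        permits.release();
    }
}
```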

I will give your GC analyser a try. We've used jmap and jvisualvm, but we could not see any memory leak in garbage collection.

Eduardo Alonso García

Jun 9, 2014, 10:15:35 AM
to ve...@googlegroups.com
I have been monitoring a vert.x application for several days.
On the first day I ran several big load tests looking for the limits of my process; we also set -Xmx and -Xms to the same value to avoid the impact of the JVM resizing the heap.

The thing is that, using jconsole (a similar tool to the ones you use), we never saw the heap exceed 900MB, and garbage collection was working well (we saw some peaks during the tests). The weird thing is that the virtual memory allocated to the Java process grew to 2GB...
We left the vert.x process idle for 2 days and still got the same results (just 900MB of heap occupied and 2GB of virtual memory).

Where is the 1.1GB of memory released by garbage collection but never returned to the OS?

After some investigation on the internet, it seems that Java (this is not about vert.x) never releases memory back to the OS (at least in most JVM implementations).

Nate McCall

Jun 9, 2014, 12:22:52 PM
to ve...@googlegroups.com
Virtual memory is not an indicator of RAM in use by a process. It
includes the address space, swap space and resident memory (RES column
in top - which is a better indicator for what you describe).

For some more info about what is going on, take a look at
/proc/meminfo (well described here:
http://www.redhat.com/advice/tips/meminfo.html). Also, you may want to
consider disabling swap entirely. For a single purpose server, it is
often not needed.
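To track this over time, note that /proc/meminfo and /proc/&lt;pid&gt;/status (where VmSize is virtual and VmRSS is resident) both use a simple "Name: value kB" line format that is easy to parse and log. A quick sketch (the class name is made up):

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Sketch: parse "Name:   12345 kB" lines from /proc/meminfo or
// /proc/<pid>/status into a map of kB values, e.g. to log MemFree
// or to compare VmSize (virtual) against VmRSS (resident) over time.
public class MeminfoParser {
    public static Map<String, Long> parseKb(String text) {
        Map<String, Long> values = new LinkedHashMap<>();
        for (String line : text.split("\n")) {
            String[] parts = line.trim().split("\\s+");
            if (parts.length >= 2 && parts[0].endsWith(":")) {
                try {
                    values.put(parts[0].substring(0, parts[0].length() - 1),
                               Long.parseLong(parts[1]));
                } catch (NumberFormatException ignored) {
                    // skip non-numeric lines
                }
            }
        }
        return values;
    }
}
```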

For gathering historical data over the course of several days, using a
monitoring system like Graphite, Munin, etc. is a really good idea. At
the least, install the sysstat package
(https://access.redhat.com/site/documentation/en-US/Red_Hat_Enterprise_Linux/4/html/Introduction_To_System_Administration/s2-resource-tools-sar.html)
and become familiar with the `sar` command.