// All sizes are in MB. 'jvmMemory' is the -Xmx value given to the JVM.
private static int getReservedCodeCacheSize(int jvmMemory) {
    return 100;
}

private static int getMaxMetaspaceSize(int jvmMemory) {
    return 256;
}

private static int getCompressedClassSpaceSize(int jvmMemory) {
    return 256;
}

private static int getExtraJvmOverhead(int jvmMemory) {
    if (jvmMemory <= 2048) {
        return 1024;
    } else if (jvmMemory <= (1024 * 16)) {
        return 2048;
    } else if (jvmMemory <= (1024 * 31)) {
        return 5120;
    } else {
        return 8192;
    }
}

public static int adjustJvmMemoryForYarn(int jvmMemory) {
    if (jvmMemory == 0) {
        return 0;
    }
    return jvmMemory + getReservedCodeCacheSize(jvmMemory)
            + getMaxMetaspaceSize(jvmMemory)
            + getCompressedClassSpaceSize(jvmMemory)
            + getExtraJvmOverhead(jvmMemory);
}
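As a quick sanity check on the heuristic above, here is a self-contained sketch of the same arithmetic with a couple of example heap sizes plugged in (the class name and the sample inputs are mine, not from the original message; all values in MB):

```java
// Standalone restatement of the YARN container-sizing heuristic above.
// 100 MB code cache + 256 MB metaspace + 256 MB compressed class space,
// plus a tiered "extra overhead" bucket that grows with heap size.
public class YarnSizing {
    static int extraOverhead(int jvmMemory) {
        if (jvmMemory <= 2048) return 1024;
        if (jvmMemory <= 1024 * 16) return 2048;
        if (jvmMemory <= 1024 * 31) return 5120;
        return 8192;
    }

    static int adjustForYarn(int jvmMemory) {
        if (jvmMemory == 0) return 0;
        return jvmMemory + 100 + 256 + 256 + extraOverhead(jvmMemory);
    }

    public static void main(String[] args) {
        System.out.println(adjustForYarn(8192));  // 8 GB heap  -> 10852 MB container
        System.out.println(adjustForYarn(31744)); // 31 GB heap -> 37476 MB container
    }
}
```

So an 8 GB heap ends up requesting roughly a 10.6 GB container, and a 31 GB heap roughly 36.6 GB.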
Hi Sebastian,

Our product runs within the JVM, within a (Hadoop) YARN container. Similar to your situation, YARN will kill the container if it goes over the amount of memory reserved for the container. Java heap sizes (-Xmx) for the apps we run within containers vary from about 6GB to about 31GB, so this may be completely inappropriate if you use much smaller heaps, but here is the heuristic we use on Java 8. 'jvmMemory' is the -Xmx setting given to the JVM and adjustJvmMemoryForYarn() gives the size of the container we request.
--
You received this message because you are subscribed to the Google Groups "mechanical-sympathy" group.
To unsubscribe from this group and stop receiving emails from it, send an email to mechanical-sympathy+unsub...@googlegroups.com.
For more options, visit https://groups.google.com/d/optout.
Note that -XX:+UnlockExperimentalVMOptions -XX:+UseCGroupMemoryLimitForHeap is simply an alternative to setting Xmx: "...When these two JVM command line options are used, and -Xmx is not specified, the JVM will look at the Linux cgroup configuration, which is what Docker containers use for setting memory limits, to transparently size a maximum Java heap size." However, the flags have no meaning when you set Xmx explicitly, and virtually all applications that do any sizing actually set their Xmx. Also, the flag has no effect on non-heap memory consumption.
The main benefit of -XX:+UseCGroupMemoryLimitForHeap in the context of containers tends to be not the "main" application JVMs, but all the (often short-lived) little utility things people run without specifying -Xmx (start/stop/admin commands that are Java-based, javac, monitoring things, etc.). Since HotSpot will default to an Xmx that is 1/4 of the (host) system memory size, this can create some surprises in container environments. The effect is somewhat dampened by the fact that most of these things start with an initial heap that is much smaller (1/64th of host system memory), so those surprises happen less often: the JVMs for short-running things often don't expand to use the full Xmx. But even 1/64th of system memory can become a problem as container environments run many tens and maybe hundreds of JVMs on commodity machines with hundreds of GB of memory. In addition to setting a default Xmx that depends on container limits, -XX:+UseCGroupMemoryLimitForHeap will [I think/hope] also set a default -Xms (at 1/64th of the container limit), which can help.
My view is that -XX:+UseCGroupMemoryLimitForHeap is *currently* a "somewhat meaningless" flag, and will remain so until it is on by default (at which point it will have a true beneficial effect, so let's hope it gets out of the experimental phase soon). The logic for this claim is simple: any application for which someone would have the forethought to explicitly add -XX:+UseCGroupMemoryLimitForHeap would likely already have an -Xmx setting. Or stated differently: chances are that any java command NOT specifying -Xmx will NOT have the -XX:+UseCGroupMemoryLimitForHeap flag set, unless it is the default.
Neat, didn't know about MaxRAM or native memory tracking.
RE:
The downside is that with MaxRAM parameter I lose control over Xms.
Oh, it doesn't work? Can't track down definitive info from a quick Google around, but this seems to imply it should: https://stackoverflow.com/questions/19712446/how-does-java-7-decide-on-the-max-value-of-heap-memory-allocated-xmx-on-osx ... It's a few years old, but this comment sticks out from the OpenJDK copy/paste in the StackOverflow answer:
// If the initial_heap_size has not been set with InitialHeapSize
// or -Xms, then set it as fraction of the size of physical memory,
// respecting the maximum and minimum sizes of the heap.
Seems to imply InitialHeapSize/Xms gets precedence. Perhaps that information is out of date / incorrect ... a look at more recent OpenJDK source code might offer some hints.
If Xms isn't an option for some reason, is InitialRAMFraction/MaxRAMFraction available? Maybe something else to look at. In any case, thanks for the info!
On Fri, Aug 4, 2017 at 12:18 AM, Sebastian Łaskawiec <sebastian...@gmail.com> wrote:
Thanks a lot for all the hints! They helped me a lot. I think I'm moving forward. The key thing was to calculate the amount of occupied memory seen by CGroups. It can be easily done using:
- /sys/fs/cgroup/memory/memory.usage_in_bytes
- /sys/fs/cgroup/memory/memory.limit_in_bytes
The calculated ratio, along with Native Memory Tracking [1], helped me find a good balance. I also found a shortcut which makes setting initial parameters much easier: -XX:MaxRAM [2] (set based on the CGroups limit). The downside is that with the MaxRAM parameter I lose control over Xms.
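A minimal sketch of reading those two cgroup (v1) files from inside the container and computing the occupancy ratio — the paths are the ones listed above; the class and method names are my own, and this assumes the files actually exist (i.e. you are inside a memory-limited cgroup):

```java
import java.nio.file.Files;
import java.nio.file.Paths;

// Sketch: read cgroup v1 memory usage/limit and compute occupancy.
// Java 8 compatible (readAllBytes rather than readString).
public class CgroupMemory {
    // cgroup v1 paths, as listed in the message above
    static final String USAGE = "/sys/fs/cgroup/memory/memory.usage_in_bytes";
    static final String LIMIT = "/sys/fs/cgroup/memory/memory.limit_in_bytes";

    static long readLong(String path) throws java.io.IOException {
        return Long.parseLong(new String(Files.readAllBytes(Paths.get(path))).trim());
    }

    static double ratio(long usage, long limit) {
        return (double) usage / limit;
    }

    public static void main(String[] args) throws Exception {
        long usage = readLong(USAGE);
        long limit = readLong(LIMIT);
        System.out.printf("usage=%dB limit=%dB occupancy=%.1f%%%n",
                usage, limit, 100.0 * ratio(usage, limit));
    }
}
```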
On Thursday, 3 August 2017 20:16:50 UTC+2, Tom Lee wrote:
Hey Sebastian,

Dealt with a similar issue on Docker a few years back -- safest way to do it is to use some sort of heuristic for your maximum JVM process size. Working from a very poor memory, and perhaps somebody here will tell me this is a bad idea for perfectly good reasons, but IIRC the ham-fisted heuristic we used at the time for max total JVM process size was something like:

<runtime value of -Xmx> + <runtime value of -XX:MaxDirectMemorySize> + slop

Easy enough to see these values via -XX:+PrintFlagsFinal if they're not explicitly defined by your apps. We typically had Xmx somewhere between 8-12GB, but MaxDirectMemorySize varied greatly from app to app: sometimes a few hundred MB, in some weird cases multiples of the JVM heap size.

The "slop" was for things we hadn't accounted for, but we really should have included things like the code cache size etc. as Meg's estimate above does. I think we used ~10% of the JVM heap size, which was probably slightly wasteful, but worked well enough for us. Suggest you take the above heuristic, mix it up with Meg's idea to include code cache size etc., and feel your way from there. I'd personally always leave at least a few hundred megs of additional overhead on top of my "hard" numbers because I don't trust myself with such things. :)

Let's see, what else.
At the time our JVM -- think this was an Oracle Java 8 JDK -- set MaxDirectMemorySize to the value of Xmx by default, implying the JVM process could (but not necessarily would) grow up to roughly double its configured size to accommodate heap + direct buffers, if you had an application that made heavy use of direct buffers and put enough pressure on the heap to grow it to the configured Xmx value (or, as we typically did, set Xmx == Xms).

Where possible we would constrain MaxDirectMemorySize to something "real" rather than leaving it at this default, preferring to have the JVM throw an OOME if we were allocating more direct memory than we expected, so we could get more info about the failure rather than worrying about the OOM killer hard-killing the entire process and not being able to understand why. YMMV.
One caveat: I can't quite remember if Unsafe.allocateMemory()/Unsafe.freeMemory() count toward your MaxDirectMemorySize ... perhaps somebody else here more familiar with the JVM internals could weigh in on that. Perhaps another thing to watch out for if you're doing "interesting" things with the JVM.
I found this sort of "informed guess" to be much more reliable than trying to figure things out empirically by monitoring processes over time etc. ... anyway, hope that helps, curious to know what you ultimately end up with.

Cheers,
Tom
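Tom's ham-fisted heuristic above might be sketched like this — the 10% slop and the 256 MB floor are illustrative guesses of mine, not his actual numbers:

```java
// Sketch of: max process size ~= Xmx + MaxDirectMemorySize + slop.
// Slop constants here are illustrative, not from the original message.
public class ProcessSizeEstimate {
    static long estimateMb(long xmxMb, long maxDirectMb) {
        long slopMb = Math.max(256, xmxMb / 10); // ~10% of heap, 256 MB floor
        return xmxMb + maxDirectMb + slopMb;
    }

    public static void main(String[] args) {
        // e.g. a 10 GB heap with 2 GB of direct buffers
        System.out.println(estimateMb(10240, 2048)); // -> 13312 MB
    }
}
```

In practice you would pull the two inputs from the runtime values reported by -XX:+PrintFlagsFinal, as the message suggests.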
On Thu, Aug 3, 2017 at 10:31 AM, Meg Figura <mfi...@alum.wpi.edu> wrote: