Is there a way to get the JVM to play nicely? Ideally, how about only
using a few MB to start with, and just allocating more if it needs it?
(I don't really like the game of guessing/estimating how much memory
the app might use and passing that to -Xmx)
Sorry, I realise this isn't really Scala-related, but it's come up
when trying to promote Scala to Perl developers... they're like "Hello
world needed 1GB of RAM to run!? This is madness!"
Cheers,
Toby
--
Turning and turning in the widening gyre
The falcon cannot hear the falconer
Things fall apart; the center cannot hold
Mere anarchy is loosed upon the world
AFAIK, the JVM is already meant to do what you want:
(a) Request blocks of memory from the OS as it needs them.
(b) But never request more than -Xmx. (Yes, you do need to play that
"game": what else could it reasonably do - grow without bound (like
Ruby) until the OS crashes?)
I have heard that sometimes (a) doesn't work right, and the whole of
-Xmx is requested eagerly at startup.
What OS/Jvm are you on? What settings are you using and what do you
observe happening?
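One quick way to answer that from inside the process is to ask the JVM itself what it thinks its limits are, via the standard java.lang.Runtime API. A minimal sketch (the object name is made up):

```scala
// Sketch: report the heap limits the running JVM believes it has,
// using the standard java.lang.Runtime API.
object HeapInfo {
  private def mb(bytes: Long): Long = bytes / (1024L * 1024L)

  def main(args: Array[String]): Unit = {
    val rt = Runtime.getRuntime
    // maxMemory is the -Xmx ceiling; totalMemory is what's currently
    // committed; freeMemory is the unused part of the committed heap.
    println(s"maxMemory   (-Xmx ceiling): ${mb(rt.maxMemory)} MB")
    println(s"totalMemory (committed):    ${mb(rt.totalMemory)} MB")
    println(s"freeMemory  (unused part):  ${mb(rt.freeMemory)} MB")
  }
}
```

Running it under different -Xmx settings should show maxMemory changing while totalMemory stays much smaller, if (a) is working as advertised.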
-Ben
Fair call, I don't mind that it has an upper limit to prevent runaway
code killing the machine :)
> I have heard that sometimes (a) doesn't work right, and the whole of
> -Xmx is requested eagerly at startup.
That sounds like the behaviour I have observed.
> What OS/Jvm are you on? What settings are you using and what do you
> observe happening?
I've mainly noticed this stuff happening on Debian and Ubuntu, 64bit, OpenJDK 6.
For instance, running a hello-world scala app, it grabs more than 600
MB of memory - yet only 28MB of that is considered resident.
I can reduce the -Xmx value as low as 2M and still manage to run
this simple program. (at 1M it complains about insufficient memory)
So yeah... the JVM isn't really using all that memory, as it's
considered non-resident and can be swapped out - but the system
won't overcommit memory, in case the JVM does decide it wants that RAM
one day. And thus the JVM *is* using all that memory, since it's
preventing anyone else from having it.
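You can see that gap directly by comparing the process's virtual size (VmSize) with its resident set (VmRSS). A sketch, Linux-only and assuming the usual /proc layout:

```scala
import scala.io.Source

// Sketch (Linux-only; assumes the usual /proc layout): print this
// process's VmSize and VmRSS lines, i.e. virtual vs resident memory.
object VmStats {
  def main(args: Array[String]): Unit = {
    val status = new java.io.File("/proc/self/status")
    if (status.exists) {
      val src = Source.fromFile(status)
      try {
        src.getLines()
          .filter(l => l.startsWith("VmSize") || l.startsWith("VmRSS"))
          .foreach(println)
      } finally src.close()
    } else {
      println("/proc/self/status not available; not on Linux")
    }
  }
}
```

On the hello-world case described above, VmSize would be the ~600 MB figure and VmRSS the ~28 MB one.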
Now in some senses, that's quite a safe approach for the JVM. It has
said that its maximum memory size is so much, and by preallocating
it, it's making sure no other program can steal it away later.
However for programs which will never use all that memory, it sucks..
You only need a few concurrent users running java apps locally (or one
user running several) and then all your virtual memory is consumed.
You *can* adjust the system so that the kernel will overcommit memory
- ie. let programs demand as much as they like as long as they don't
use it, even to the point where the system would crash if they all
demanded it at once - but you can see the flaws in that plan too.
Is this different behaviour to what you see?
cheers,
Toby
Yep, that's the crux of the problem I think. The JVM says "I /might/ need
2GB, even though right now I need 1% of that", so the OS says "OK, I'd
better reserve that for you"
> However for programs which will never use all that memory, it sucks..
> You only need a few concurrent users running java apps locally (or one
> user running several) and then all your virtual memory is consumed.
Yep. A problem.
> You *can* adjust the systems so that the kernel will overcommit memory
> - ie. let programs demand as much as they like as long as they don't
> use it, even to the point where the system would crash if they all
> demanded it at once - but you can see the flaws in that plan too.
Can't have it every way at once :) Might be a better tradeoff for dev usage, though.
-Ben
> I've mainly noticed this stuff happening on Debian and Ubuntu, 64bit, OpenJDK 6.
> For instance, running a hello-world scala app, it grabs more than 600
> MB of memory - yet only 28MB of that is considered resident.
>
> I can reduce the -Xmx values down as low as 2M and still manage to
> this simple program. (at 1M it complains about insufficient memory)
>
> So yeah.. the jvm isn't really using all that memory, as it's
> considered non-resident and is able to be swapped out - but the system
> won't overcommit memory, in case the jvm does decide it wants that RAM
> one day. And thus the jvm *is* using all that memory, since it's
> preventing anyone else from having it.
I believe the "600 MB" figure you're quoting is the Linux "virtual" number. This includes everything in the process's address space, including stuff that is never swapped and rarely made resident, like memory-mapped files. This StackOverflow question may provide some enlightenment:
I don't think a large value for Virtual is anything to be concerned about by itself. It does not necessarily imply that all that address space is backed by space reserved in ram/swap. In practice, I think you could run hundreds of these "Hello world" programs without starving out anyone else.
The value for Resident is much more important.
Lachlan.
Nope, not on the virtual machines we tend to work on as servers at
work! When that virtual size adds up to the total reported
free memory (RAM + swap), it's game over.
And at ~600MB a throw, it's not hard to do, especially when you throw
in something like Selenium or Jenkins which will swallow >1GB each if
you don't adjust the jvm parameters.
> The value for Resident is much more important.
RSS is more representative of the RAM actually being used, yes, but
that virtual memory is still getting reserved by the system somewhere.
Ok, interesting. I went and re-read the Linux docs on overcommit. They seem to be saying that read-only mmapped files count as 0 for overcommit calculations. Perhaps relatively little of the virtual space used by HotSpot is actually mmapped files. If it's mostly just pre-allocated memory, I can see the problem. Hard to see a reason for HotSpot to do that when -Xmx is so low. BTW, I can't get virtual below 1GB when I try this on my box. Weird.
What's the value of the "vm.overcommit_memory" sysctl on your servers? You could try setting it to 1 (always overcommit) and see what explodes.
Lachlan.
Debian and Ubuntu seem to default to 0, I've noticed before.
Toby