I've been saying this since I first ran into the dreaded OutOfMemoryError in 1999. -Xmx makes Java inherently unstable. You can't rely on your app to stay up until you find the magic -Xmx setting. WTF!?
Dick/Carl/Tor/Joe - when you talk to Mark Reinhold, can you bring this up?
Though I can understand why it is hard to avoid -Xmx for various GC algorithms, the pre-OS-X Mac OS really had no excuse.
See it as a performance optimisation. A GC implementation that had to deal with a fragmented, non-contiguous, resizing heap would have a hard time competing with the current VM. A JVM is an operating system for Java applications. You can't easily plug a new RAM stick into your system at runtime, for the same reasons.
I would go even further and say that writing Java applications with unknown memory requirements is not that professional :P
In fact this post really highlighted to me how restrictive the approach is! Forget writing an app that can adapt to different inputs dynamically: you have to pick a memory size first. No wonder Java is only taken seriously for server-side stuff.
Alan
Sent from my iPhone
> See it as a performance optimisation. A GC implementation that had to
> deal with a fragmented, non-contiguous, resizing heap would have a hard
> time competing with the current VM.
That's why I'm completely uninterested in language improvements such as Project Coin or closures, things where other languages can provide innovation, while some basic weaknesses of the VM (where others can't provide innovation) still haven't been dealt with after sixteen years.
--
Fabrizio Giudici - Java Architect, Project Manager
Tidalwave s.a.s. - "We make Java work. Everywhere."
java.net/blog/fabriziogiudici - www.tidalwave.it/people
Fabrizio...@tidalwave.it
The biggest problem, anyway, is that you have to force your users to understand a technicality of application tuning. When an end user launches the average desktop application, he doesn't have to tune it. So your application immediately goes into the category of "cumbersome stuff".
First point: the Java heap must be contiguous for the current GC implementations, which means you have to specify -Xmx right at startup. At start, the JVM reserves the full -Xmx amount of address space for the Java heap. The Java heap is then expanded into that reservation when HotSpot determines that it needs, or could benefit from, more memory. The Java heap will *never* grow bigger than the -Xmx setting. If you don't specify -Xmx, the default is 1/4 of physical RAM, with only a portion initially committed (depending on which HotSpot engine is invoked).
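You can watch this from inside a running JVM; here's a minimal sketch (values are in bytes; maxMemory() reflects the -Xmx ceiling, totalMemory() the currently committed heap):

    public class HeapWatch {
        public static void main(String[] args) {
            Runtime rt = Runtime.getRuntime();
            // The hard ceiling set by -Xmx; the heap will never grow past this.
            System.out.println("max (-Xmx): " + rt.maxMemory());
            // The committed portion, expanded on demand up to the ceiling.
            System.out.println("committed:  " + rt.totalMemory());
            // The unused part of the committed heap.
            System.out.println("free:       " + rt.freeMemory());
        }
    }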
The parallel collector is more ergonomic than the CMS collector. This means that while CMS generally won't resize to be smaller, the parallel collector will adapt to pause-time and throughput goals. I've got a GC log lying about somewhere that clearly demonstrates the spaces resizing based on load.
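For reference, this is roughly how you'd enable that behaviour on a 1.6 HotSpot VM; the pause and throughput targets below are illustrative, not recommendations, and MyApp stands in for your main class:

    java -XX:+UseParallelGC -XX:MaxGCPauseMillis=200 -XX:GCTimeRatio=19 \
         -verbose:gc -XX:+PrintGCDetails -Xloggc:gc.log MyApp

The collector treats MaxGCPauseMillis and GCTimeRatio as goals, not guarantees, and resizes the spaces trying to meet them.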
Memory is generally never given back to the OS, as it's a very, very tricky thing to do. To give memory back you need to guarantee 100% that you have no pointers to anything in the space you're about to release, or you risk a SEGV. In some cases, dealloc does nothing.
C applications will take as much memory as the OS will give them. Since the JVM is a C application, it will take as much memory as the OS will give it, and it sometimes does take more memory than you'd expect. However, the Java heap's -Xmx setting is *always* honored. If your JVM is 10 GB with a 2 GB -Xmx heap setting, it is the C heap that is consuming the other 8 GB. You've got NIO, JNI, or something else causing a native C heap leak.
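NIO direct buffers are the classic case: the sketch below holds on to memory that is charged to the process's native heap, not to -Xmx, so it won't show up in Java heap monitoring (the 64 MB size is arbitrary):

    import java.nio.ByteBuffer;

    public class NativeHeapDemo {
        public static void main(String[] args) throws Exception {
            // Allocated outside the Java heap; only the small ByteBuffer
            // wrapper object itself counts against -Xmx.
            ByteBuffer direct = ByteBuffer.allocateDirect(64 * 1024 * 1024);
            System.out.println("native bytes held: " + direct.capacity());
            Thread.sleep(60000); // time to inspect the process size externally
        }
    }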
Should one let a JVM take as much heap as possible? I think it's a pretty dangerous thing to do. If you let the JVM take *all* memory, what happens when it runs out? Currently, if the JVM runs out of memory (Java heap, not native C heap) there is still something the JVM can do to try to recover. Even if the native heap is seemingly exhausted, there are guard pages put into the process that allow the JVM to try to do something reasonable. Current implementations (including Azul's) are all encumbered by these restrictions. Azul just happens to be a bit smarter at the moment in how they manage it. But even they are constrained in what they can do by the underlying hardware/OS. In fact, they run in a hacked-up OS that is specifically tuned for their JVM. Nothing wrong with this, but....
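That "something the JVM can do" shows up at the application level too: a Java-heap OutOfMemoryError is at least catchable, unlike native exhaustion. A sketch of the idea, fragile in practice since it only helps if dropping references actually frees enough:

    import java.util.ArrayList;
    import java.util.List;

    public class OomeRecovery {
        public static void main(String[] args) {
            List<byte[]> cache = new ArrayList<byte[]>();
            try {
                while (true) {
                    cache.add(new byte[1024 * 1024]); // fill the Java heap
                }
            } catch (OutOfMemoryError e) {
                cache.clear(); // drop references so GC can reclaim the space
                System.err.println("Java heap exhausted; dropped cache, carrying on");
            }
        }
    }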
I could go on.
Regards,
Kirk
> Man, this thread contains a lot of FUD about how GC works (or doesn't
> work). First point: the Java heap must be contiguous for the current GC
> implementations.
> Memory is generally never given back to the OS, as it's a very, very
> tricky thing to do. To give memory back you need to guarantee 100% that
> you have no pointers to anything in the space you're about to release,
> or you risk a SEGV. In some cases, dealloc does nothing.
I would have thought that is something the GC needs to do anyway. If it's not 100% certain that a segment of memory is free of back references, doesn't it risk data corruption if the memory is reused for more managed objects in the same VM? I may be missing something, but it seems like both problems are equally hard (or essentially the same).
> Should one let a JVM take as much heap as possible? I think it's a
> pretty dangerous thing to do. If you let the JVM take *all* memory,
> what happens when it runs out? Currently, if the JVM runs out of memory
> (Java heap, not native C heap) there is still something the JVM can do
> to try to recover. Even if the native heap is seemingly exhausted,
> there are guard pages put into the process that allow the JVM to try to
> do something reasonable.
If an implementation was not constrained by requiring a contiguous heap and there were plenty of resources available, then a reasonable action could be to reserve more memory from the OS. It needn't take all the memory; it could be smart, look at available resources and system configuration (e.g. over-commit limits), and fall back to existing means of dealing with a rapidly exhausting heap before the OOM killer kicks in. This would allow the default configuration of a JVM to be much more conservative with its initial startup defaults. The 1/4-of-resources default always seemed overly aggressive to me.
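For what it's worth, the "look at available resources" part is already possible from Java today, albeit through a Sun-specific cast, so treat this as an illustration rather than portable API:

    import java.lang.management.ManagementFactory;

    public class FreeRam {
        public static void main(String[] args) {
            // com.sun.management.OperatingSystemMXBean extends the standard
            // java.lang.management interface with physical-memory queries.
            com.sun.management.OperatingSystemMXBean os =
                (com.sun.management.OperatingSystemMXBean)
                    ManagementFactory.getOperatingSystemMXBean();
            System.out.println("free physical:  " + os.getFreePhysicalMemorySize());
            System.out.println("total physical: " + os.getTotalPhysicalMemorySize());
        }
    }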
> Current implementations (including Azul's) are all encumbered by these
> restrictions. Azul just happens to be a bit smarter at the moment in
> how they manage it. But even they are constrained in what they can do
> by the underlying hardware/OS. In fact, they run in a hacked-up OS that
> is specifically tuned for their JVM. Nothing wrong with this, but....
The way Gil Tene explains it, their OS changes would be generally applicable to all managed VMs. That is, it's an API change to the interaction between the user application and the OS for managing memory. However, I guess it may only be applicable if the GC compacts memory in a pattern similar to Azul's.
On Aug 15, 2011, at 3:58 AM, Michael Neale wrote:
> Even for server apps it is at odds with, well, everything else. If you
> want to limit resource usage on a per-process (JVM, in this case)
> basis, there are plenty of tools that do that for you, give the system
> admin more control, etc. The JVM is a unique pain in the rear in this
> regard ;)
Well, it means that the JVM needs to be understood by those deploying it. But guess what: that is the same for just about every machine on the planet.
>
> There really is no good reason for it from a system management
> perspective - like all other "odd" things, most reasons given are
> speculation after the fact. I am sure there is a real reason, probably
> some ancient decision to do with running applets on constrained
> devices...
Correct, but there are good GC-ergonomics reasons in the current implementations for having a max memory, a min memory, and an incremental resizing option.
BTW, I think I saw someone mention setting -Xmx == -Xms in an earlier thread. For 1.6.0 I would not recommend it as a starting point when tuning GC, as it will turn off ergonomics, which means GC will not be adaptive.
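To make that concrete, the difference between these two launch lines (sizes purely illustrative, MyApp hypothetical) is whether the collector is free to resize the spaces as load changes:

    java -Xms512m -Xmx512m MyApp   # fixed heap; adaptive sizing has nothing to do
    java -Xms64m -Xmx512m MyApp    # heap grows and shrinks under ergonomics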
>
> The fact that it is a -X means that it is meant to be a non-core
> setting too?
The -X means that the setting is non-standard. FYI, you don't need to have a collector to be compliant with the JVM specification (I believe there is one implementation that in fact doesn't have a collector, and an -Xmx setting makes no sense in that case), which would make it hard to have a standard switch ;-)
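As a quick sanity check, you can ask your particular VM which non-standard options it accepts; the listing even carries its own disclaimer that the options are subject to change without notice:

    java -X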
>
>
>
> On Aug 13, 9:32 am, Reinier Zwitserloot <reini...@gmail.com> wrote:
>> On Friday, August 12, 2011 11:37:15 PM UTC+2, mbien wrote:
>>
>>> See it as a performance optimisation. A GC implementation that had to
>>> deal with a fragmented, non-contiguous, resizing heap would have a
>>> hard time competing with the current VM. A JVM is an operating system
>>> for Java applications. You can't easily plug a new RAM stick into
>>> your system at runtime, for the same reasons.
>>
>>> I would go even further and say that writing Java applications with
>>> unknown memory requirements is not that professional :P
>>
>> This is spot on. For large server apps. For client apps or quick
>> command-line tools, being forced to pre-pick total memory load (and
>> treating the JVM that's going to start up to host your app as an
>> OS-in-an-OS) is ridiculous.
>
Regards,
Kirk
Under Windows, I've even seen users unable to run Sweet Home 3D with -Xmx1024m! As I never found out why, I was obliged to set -Xmx to 512m on that system. :-(
> If an implementation was not constrained by requiring a contiguous
> heap and there were plenty of resources available, then a reasonable
> action could be to reserve more memory from the OS. It needn't take
> all the memory; it could be smart, look at available resources and
> system configuration (e.g. over-commit limits), and fall back to
> existing means of dealing with a rapidly exhausting heap before the
> OOM killer kicks in. This would allow the default configuration of a
> JVM to be much more conservative with its initial startup defaults.
> The 1/4-of-resources default always seemed overly aggressive to me.

That is the way it currently works: you take what you need and reserve the rest. If you have an OOME you've generally got a problem, a bug in your application, and it would behoove you to fix the bug. If you have a temporary high-water mark, use the parallel collector. It handles this situation very, very well. I've got evidence of that from a JVM supporting a website that I bet everyone on this list uses (sorry, can't name it) that is subject to spikes in load every time something happens in webosphereland.
>> Well, it means that the JVM needs to be understood by those deploying
>> it. But guess what: that is the same for just about every machine on
>> the planet.
>
> Except, not every IT org can devote a deployment engineer to
> understanding the Java memory model and intricate details about
> generations and collector strategies.
There are so many things wrong with this statement that I'm not even going to start.
Regards,
Kirk
Send them to a training - it's worth every penny...
Sven
> There are so many things wrong with this statement that I'm not even
> going to start.
>
> Regards,
> Kirk
--
Sven Reimers
* Senior System Engineer and Software Architect
* NetBeans Dream Team Member: http://dreamteam.netbeans.org
* NetBeans Governance Board Member: http://netbeans.org/about/os/governance.html
* Community Leader NetBeans: http://community.java.net/netbeans
* Duke's Choice Award Winner 2009
* Blog: http://nbguru.blogspot.com
* XING: https://www.xing.com/profile/Sven_Reimers8
* LinkedIn: http://www.linkedin.com/in/svenreimers
Join the NetBeans Groups:
* XING: http://www.xing.com/group-20148.82db20
* NUGM: http://haug-server.dyndns.org/display/NUGM/Home
* LinkedIn: http://www.linkedin.com/groups?gid=1860468
http://www.linkedin.com/groups?gid=107402
http://www.linkedin.com/groups?gid=1684717
* Oracle: https://mix.oracle.com/groups/18497
Any particular training in mind? ;-)
Speaking of which, I hear there is still room for a few more people at the upcoming open-spaces conference we'll both be at.
Since it's a free (as in beer) open-spaces conference, I'll spam the group with this shameless plug ;-)
<shameless plug>
http://www.javaspecialists.eu/wiki/index.php/JavaSpecialistsSymposium2011#Thursday_1st_of_September
</shameless plug>
Regards,
Kirk
With G1 came the move to use regions. Regions are maintained on a free list, so while it seems like it may be possible to return regions to the OS, that would only work if the right regions ended up on the free list. So it might be a future option, but we're not there just yet.
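For anyone who wants to poke at it, G1 ships as an experimental collector in recent 1.6 update releases and has to be unlocked explicitly; something like this (MyApp hypothetical):

    java -XX:+UnlockExperimentalVMOptions -XX:+UseG1GC MyApp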
BTW, it's very common to have data structures that size themselves at startup and then remain fixed in size for the duration of the run. A number of them are found in the kernel, such as the process heap and the bitmap for file descriptors.
> That is the way it currently works: you take what you need and reserve
> the rest. If you have an OOME you've generally got a problem, a bug in
> your application, and it would behoove you to fix the bug.
Gil has a dream and a dream team that can implement it. However, what he wants to do requires that he get a number of groups outside his sphere of influence to accept his vision. I am totally in Gil's camp w.r.t. the changes he'd like to see coming from Intel and the Linux camp. However, both groups seem to have other priorities at the moment. I predict that Gil will be successful with the Linux camp; I'm not so sure about Intel. That said, Azul is pushing the boundaries of what a collector can do, but they've not yet solved the problems that the Oracle HotSpot team and the IBM team haven't been able to solve either. I suspect once they do, the artificial OOM errors that you are all complaining about will go away.
> That is the way it currently works: you take what you need and reserve
> the rest. If you have an OOME you've generally got a problem, a bug in
> your application, and it would behoove you to fix the bug.

I guess what I find annoying is that the memory reservation is, from the user's perspective, only a constraint. It obviously simplifies the implementation, but as a user it doesn't buy me anything. Set the value too low and I can run out of memory with resources still available on the box; set it too high and either the app doesn't start or (as happened yesterday in one of our test environments) the guest OS over-commits virtual memory and the host OS's OOM killer takes out the guest. Not much that Java can do in that case.
It's always inspiring to speak to Gil. I like that Azul are not content to settle for what's available and are constantly reaching for what's possible. I think their proposed memory-interface changes will get through sooner rather than later, although the scheduler changes will be a tougher sell to the Linux crowd. As for the Intel side, who knows. If they could see the potential for a hardware load-value barrier outside of Java, perhaps. Then again, there are a huge number of features in Intel chips driven by the gaming industry (e.g. write-combining buffers), so perhaps the Java crowd could have some influence too :-). I think it's awesome that Gil and his team are even trying.
Mike.
> really?? every machine on the planet? that attitude is a worry, and a
> good example of what I feared is still the prevailing attitude.
I don't understand... Let me restate this. End users may do the final install of an application on their phone, tablet, desktop, whatever, but some expert somewhere has looked into the issues of how to get that done. The more complex the system, and the more one-off or customized the deployment, the more expertise is needed.
Regards,
Kirk