This FAQ answers common questions about Java HotSpot Technology and about performance in general. Unless otherwise noted, all information on this page applies to both the HotSpot Client VM and the HotSpot Server VM as of Java SE version 1.4.
First, make sure you are running with -agentlib:hprof, and try -agentlib:hprof=help to see the different kinds of profiling available. If you are still having problems, please see the Java Trouble-Shooting and Diagnostic Guide [PDF].
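For example, a CPU-sampling run might be started like this (the option values and class name are illustrative; see -agentlib:hprof=help for the full set of options):

    java -agentlib:hprof=cpu=samples,interval=10,depth=8 MyApplication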
Certain applications will use a lot of file descriptors. The only thing that you can do is to raise the number of file descriptors allowed on the system. The hard limit default is 1024 and the soft limit default is 64. To set this higher you need to modify /etc/system by adding the following two definitions:
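The entries typically look like the following; the limit of 4096 is illustrative rather than a recommendation, and the machine must be rebooted for /etc/system changes to take effect:

    set rlim_fd_max = 4096
    set rlim_fd_cur = 4096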
There are several things to try in this arena. First, give -Xincgc a try. This uses the incremental garbage collection algorithm, which attempts to collect a fraction of the heap instead of the entire thing at once. For most programs this results in shorter pauses, although throughput is usually worse.
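A minimal way to try it is to combine the flag with GC logging (the class name is a placeholder):

    java -Xincgc -verbose:gc MyApplication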
Next, you might try decreasing the amount of heap used. A larger heap will cause garbage collection pauses to increase because there is more heap to scan. Try -Xmx32m. If your application requires more memory than that, you can adjust the size of the eden (young generation space) with -XX:NewSize=... and -XX:MaxNewSize=... (for 1.3/1.4) or -Xmn in 1.4 and later. For some applications a very large eden helps, for others it will increase the times of minor collections. For most programs, collecting eden is much faster than other generations because most objects die young. If you currently invoke with something like:
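(The commands below are a hedged reconstruction; the 384 MB heap size and the class name MyApplication are illustrative.)

    java -Xmx384m MyApplication

then you can try

    java -Xmx384m -XX:NewSize=128m -XX:MaxNewSize=128m MyApplication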
which will dedicate 1/3rd of the memory to eden. For 1.3, MaxNewSize is set to 32mb on Sparc and 2.5mb on Intel-based machines. NewRatio (the ratio between the young and old generations) has a value of 2 on Sparc Server, 12 on client Intel, and 8 everywhere else. As you can quickly determine, these values are superseded by MaxNewSize's defaults, rendering NewRatio ineffective for even moderately sized heaps. In 1.4 and later, MaxNewSize has been effectively set to infinity, and NewRatio can be used instead to set the value of the new generation. Using the above as an example, you can do the following in 1.4 and later:
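(Again a hedged reconstruction using the same illustrative sizes as above.)

    java -Xmx384m -XX:NewRatio=2 MyApplication

A NewRatio of 2 makes the young generation one third of the total heap, which matches the NewSize/MaxNewSize example above.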
If you are worried about the number of garbage collections, but less worried about pause times, then increasing the heap should cause the number of full garbage collections to decrease. This is especially true if you increase the size of the eden space as well.
Many systems have less efficient memory management than HotSpot does. To work around this, some programs keep an "object pool", saving previously allocated objects in some freelist-like data structure and reusing them instead of allocating new ones. But don't use object pools! Using object pools will fool the collector into thinking objects are live when they really aren't. This may have worked before exact garbage collection became popular, but it is just not a good idea for any modern Java Virtual Machine.
The Java HotSpot VM cannot expand its heap size if memory is completely allocated and no swap space is available. This can occur, for example, when several applications are running simultaneously. When this happens, the VM will exit after printing a message similar to the following.
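The exact text varies by release and by which native allocation failed, but the message looks roughly like this:

    Exception java.lang.OutOfMemoryError: requested <N> bytes for <reason>. Out of swap space?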
The maximum theoretical heap limit for the 32-bit JVM is 4G. Due to various additional constraints such as available swap, kernel address space usage, memory fragmentation, and VM overhead, in practice the limit can be much lower. On most modern 32-bit Windows systems the maximum heap size will range from 1.4G to 1.6G. On 32-bit Solaris kernels the address space is limited to 2G. On 64-bit operating systems running the 32-bit VM, the max heap size can be higher, approaching 4G on many Solaris systems.
If your application requires a very large heap you should use a 64-bit VM on a version of the operating system that supports 64-bit applications. See Java SE Supported System Configurations for details.
Pooling objects will cause them to live longer than necessary. The garbage collector will be much more efficient if you let it do the memory management. We strongly advise taking out object pools.
Don't call System.gc(). HotSpot will determine when it is appropriate to collect and will generally do a much better job. If you are having problems with garbage collection pause times, or with collections taking too long, see the pause time question above.
Starting with 1.3.1, softly reachable objects will remain alive for some amount of time after the last time they were referenced. The default value is one second of lifetime per free megabyte in the heap. This value can be adjusted using the -XX:SoftRefLRUPolicyMSPerMB flag, which accepts integer values representing milliseconds. For example, to change the value from one second to 2.5 seconds, use this flag:
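    -XX:SoftRefLRUPolicyMSPerMB=2500

(That is, 2.5 seconds expressed as 2500 milliseconds per free megabyte.)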
The Server VM uses the maximum possible heap size (as set with -Xmx) to calculate the free space remaining, while the Client VM uses the current heap size. This means that the general tendency is for the Server VM to grow the heap rather than flush soft references, and -Xmx therefore has a significant effect on when soft references are garbage collected.
The behavior described above is true for 1.3.1 through Java SE 6 versions of the Java HotSpot VMs. This behavior is not part of the VM specification, however, and is subject to change in future releases. Likewise the -XX:SoftRefLRUPolicyMSPerMB flag is not guaranteed to be present in any given release.
If you're using RMI, then you could be running into distributed GC. Also, some applications add explicit calls to System.gc() in the belief that it will make them faster. Luckily, you can disable explicit collections with a command line option in 1.3 and later. Try -XX:+DisableExplicitGC along with -verbose:gc and see if this helps.
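Putting the two options together, the experiment looks like this (the class name is a placeholder):

    java -XX:+DisableExplicitGC -verbose:gc MyApplication

If the verbose output no longer shows full collections occurring at regular intervals, explicit System.gc() calls (including RMI's distributed GC) were the likely cause.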
These two systems are different binaries. They are essentially two different compilers (JITs) interfacing to the same runtime system. The client system is optimal for applications which need fast startup times or small footprints, while the server system is optimal for applications where the overall performance is most important. In general the client system is better suited for interactive applications such as GUIs. Some of the other differences include the compilation policy, heap defaults, and inlining policy.
Client and server systems are both included in the 32-bit Solaris and Linux downloads. For 32-bit Windows, if you download the JRE you get only the client system; you'll need to download the SDK to get both systems.
Since Java SE 5.0, with the exception of 32-bit Windows, the server VM will automatically be selected on server-class machines. The definition of a server-class machine may change from release to release, so please check the appropriate ergonomics document for the definition for your release. For 5.0, it's Ergonomics in the 5.0 Java[tm] Virtual Machine.
Warming up loops for HotSpot is not necessary. HotSpot contains On Stack Replacement technology, which will compile a running (interpreted) method and replace it while it is still running in a loop. There is no need to waste your application's time warming up seemingly infinite (or very long running) loops in order to get better application performance.
What it is: A 64-bit version of Java has been available to Solaris SPARC users since the 1.4.0 release of J2SE. A 64-bit capable J2SE is an implementation of the Java SDK (and the JRE along with it) that runs in the 64-bit environment of a 64-bit OS on a 64-bit processor. You can think of this environment as being just another platform to which we've ported the SDK. The primary advantage of running Java in a 64-bit environment is the larger address space. This allows for a much larger Java heap size and an increased maximum number of Java Threads, which is needed for certain kinds of large or long-running applications. The primary complication in doing such a port is that the sizes of some native data types are changed. Not surprisingly the size of pointers is increased to 64 bits. On Solaris and most Unix platforms, the size of the C language long is also increased to 64 bits. Any native code in the 32-bit SDK implementation that relied on the old sizes of these data types is likely to require updating.
Within the parts of the SDK written in Java things are simpler, since Java specifies the sizes of its primitive data types precisely. However even some Java code needs updating, such as when a Java int is used to store a value passed to it from a part of the implementation written in C.
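As a contrived illustration of the kind of problem involved (the values and class name here are invented, not taken from the SDK sources), storing a 64-bit native pointer in a Java int silently drops the high bits:

    public class HandleWidth {
        public static void main(String[] args) {
            // A plausible 64-bit native address; purely illustrative.
            long nativeHandle = 0x00007f3a12345678L;
            // Narrowing to int silently discards the high 32 bits.
            int truncated = (int) nativeHandle;
            System.out.printf("as long: 0x%016x%n", nativeHandle);
            System.out.printf("as int : 0x%08x%n", truncated & 0xFFFFFFFFL);
            // On a 64-bit VM the two values differ, so code that funnels
            // native pointers through int fields must be widened to long.
        }
    }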
What it is NOT: Many Java users and developers assume that a 64-bit implementation means that many of the built-in Java types are doubled in size from 32 to 64 bits. This is not true. We did not increase the size of Java integers from 32 to 64 bits, and since Java longs were already 64 bits wide, they didn't need updating. Array indexes, which are defined in the Java Virtual Machine Specification, are not widened from 32 to 64 bits. We were extremely careful during the creation of the first 64-bit Java port to ensure Java binary and API compatibility, so all existing 100% pure Java programs would continue running just as they do under a 32-bit VM.
In order to run a 64-bit version of Java you must have a processor and operating system that can support the execution of 64-bit applications. The tables below list the supported 64-bit operating systems and CPUs for J2SE 1.4.2 and Java SE 5.0.
For the Solaris 64-bit packages, you must first install the 32-bit SDK or JRE and then select and install the 64-bit package on top of the 32-bit version. For all other platforms, you need only install the package you are interested in.
How do I select between 32 and 64-bit operation? What's the default? The options -d32 and -d64 have been added to the Java launcher to specify whether the program is to be run in a 32 or 64-bit environment. On Solaris these correspond to the ILP32 and LP64 data models, respectively. Since Solaris has both a 32 and 64-bit J2SE implementation contained within the same installation of Java, you can specify either version. If neither -d32 nor -d64 is specified, the default is to run in a 32-bit environment.
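One quick way to check whether the 64-bit environment is available on a given installation is to combine the selector with -version (a hedged example; on systems without the 64-bit packages installed, the launcher reports an error instead of the version information):

    java -d64 -version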