It looks like this function is called for every direct memory allocation. Here's a stack trace from an allocation that failed because it hit the MAX_DIRECT_MEMORY limit I'd set:
at java.nio.Bits.reserveMemory(Bits.java:658)
at java.nio.DirectByteBuffer.<init>(DirectByteBuffer.java:123)
at java.nio.ByteBuffer.allocateDirect(ByteBuffer.java:306)
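Side note: you can watch the same counters that reserveMemory maintains from the outside via the platform's "direct" BufferPoolMXBean. A quick sketch (the class name is my own; the MXBean API is standard since Java 7):

```java
import java.lang.management.BufferPoolMXBean;
import java.lang.management.ManagementFactory;
import java.nio.ByteBuffer;

public class DirectPoolStats {
    public static void main(String[] args) {
        // Reserve 1 MiB so the counters are definitely non-zero.
        ByteBuffer.allocateDirect(1 << 20);

        for (BufferPoolMXBean pool :
                ManagementFactory.getPlatformMXBeans(BufferPoolMXBean.class)) {
            if ("direct".equals(pool.getName())) {
                // These mirror Bits.count / Bits.totalCapacity / Bits.reservedMemory.
                System.out.println("count=" + pool.getCount());
                System.out.println("totalCapacity=" + pool.getTotalCapacity());
                System.out.println("memoryUsed=" + pool.getMemoryUsed());
            }
        }
    }
}
```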
... there are a couple of issues with the code below that make it pretty ugly (and it's called OFTEN in direct-memory-heavy applications):
1. ALL direct allocations go through a single lock on Bits.class. That's evil if you're allocating lots of small chunks.
2. The code that updates the reserved-memory counters is duplicated. That's a great way to introduce bugs. (How did this even get approved? Do they not do code audits or require that commits be reviewed?)
3. If we appear to be out of memory, it calls System.gc()... EVIL. Reclaiming direct memory via GC is a horrible design in the first place.
4. After the GC it sleeps for 100ms. What's that about? Why 100ms? Why not 1ms?
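For what it's worth, issue 1 doesn't actually require a global lock: the counter bookkeeping can be done with a CAS loop. A lock-free sketch (class and method names are my own invention, not JDK code):

```java
import java.util.concurrent.atomic.AtomicLong;

// Sketch: lock-free reservation accounting, instead of synchronizing
// on Bits.class for every allocation. Assumes maxMemory is fixed.
class DirectMemoryAccounting {
    private final long maxMemory;
    private final AtomicLong totalCapacity = new AtomicLong();

    DirectMemoryAccounting(long maxMemory) {
        this.maxMemory = maxMemory;
    }

    /** Tries to reserve cap bytes; returns false if the limit would be exceeded. */
    boolean tryReserve(long cap) {
        for (;;) {
            long current = totalCapacity.get();
            if (cap > maxMemory - current)
                return false;                       // would blow the limit
            if (totalCapacity.compareAndSet(current, current + cap))
                return true;                        // won the race
            // else: another thread moved the counter; retry
        }
    }

    void release(long cap) {
        totalCapacity.addAndGet(-cap);
    }
}
```

Under contention the CAS loop just retries, so small allocations never block each other the way a monitor does.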
... I think I'm just going to dump this and use Unsafe to allocate my own memory directly, at LEAST for smaller allocations.
static void reserveMemory(long size, int cap) {
    synchronized (Bits.class) {
        if (!memoryLimitSet && VM.isBooted()) {
            maxMemory = VM.maxDirectMemory();
            memoryLimitSet = true;
        }
        // -XX:MaxDirectMemorySize limits the total capacity rather than the
        // actual memory usage, which will differ when buffers are page
        // aligned.
        if (cap <= maxMemory - totalCapacity) {
            reservedMemory += size;
            totalCapacity += cap;
            count++;
            return;
        }
    }

    System.gc();
    try {
        Thread.sleep(100);
    } catch (InterruptedException x) {
        // Restore interrupt status
        Thread.currentThread().interrupt();
    }

    synchronized (Bits.class) {
        if (totalCapacity + cap > maxMemory)
            throw new OutOfMemoryError("Direct buffer memory");
        reservedMemory += size;
        totalCapacity += cap;
        count++;
    }
}
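The Unsafe route mentioned above would look roughly like this. To be clear, this is a sketch of my plan, not JDK code: sun.misc.Unsafe is not a supported API, you get it via reflection, and you own the lifetime of every byte you allocate (no GC safety net):

```java
import sun.misc.Unsafe;
import java.lang.reflect.Field;

public class UnsafeAlloc {
    // Write a long into raw memory and read it back; returns the value read.
    static long roundTrip() throws Exception {
        // Grab the Unsafe singleton via reflection (unsupported API;
        // works on HotSpot, may warn or break on future JDKs).
        Field f = Unsafe.class.getDeclaredField("theUnsafe");
        f.setAccessible(true);
        Unsafe unsafe = (Unsafe) f.get(null);

        long size = 1024;
        long addr = unsafe.allocateMemory(size); // raw malloc: no Bits lock, no GC, no System.gc()
        try {
            unsafe.setMemory(addr, size, (byte) 0); // allocateMemory returns uninitialized memory
            unsafe.putLong(addr, 42L);
            return unsafe.getLong(addr);
        } finally {
            unsafe.freeMemory(addr);             // leak it and nothing will ever reclaim it
        }
    }

    public static void main(String[] args) throws Exception {
        System.out.println(roundTrip()); // prints 42
    }
}
```

This bypasses reserveMemory entirely, which is exactly the point: no global lock, no surprise GC, no 100ms nap. The trade-off is that -XX:MaxDirectMemorySize no longer protects you, so you have to do your own accounting.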