Hi, I'm trying to configure a LoadingCache with large byte buffers
(Netty ByteBufs) as the values. These byte buffers are loaded and
processed from parts of larger compressed files. There will be many such
large files, so there will be many cached buffers.
I configured the LoadingCache like this:
LoadingCache<Key, ByteBuf> cache = CacheBuilder.newBuilder()
    .expireAfterAccess(expireAfterAccessSeconds, SECONDS)
    .maximumWeight(maxTotalUncompressedBytes)
    .weigher((Weigher<Key, ByteBuf>) (key, byteBuf) -> byteBuf.readableBytes())
    // More config.
    .build(xxx); // xxx = my CacheLoader<Key, ByteBuf>
I used a custom Weigher to report each buffer's readable byte count as its
weight, and I set maximumWeight to 1GB because I wanted no more than 1GB of
cached buffers in total.
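
For reference, here is a minimal self-contained sketch of that setup; the
Key class, the loader body, and the constant values are placeholders standing
in for my real code:

import java.util.concurrent.TimeUnit;

import com.google.common.cache.CacheBuilder;
import com.google.common.cache.CacheLoader;
import com.google.common.cache.LoadingCache;
import com.google.common.cache.Weigher;

import io.netty.buffer.ByteBuf;
import io.netty.buffer.Unpooled;

public class BufferCacheSketch {

  // Placeholder key: identifies one chunk of one compressed file
  // (equals/hashCode omitted for brevity).
  static final class Key {
    final String file;
    final long offset;
    Key(String file, long offset) { this.file = file; this.offset = offset; }
  }

  static LoadingCache<Key, ByteBuf> buildCache() {
    long maxTotalUncompressedBytes = 1L << 30; // 1GB total budget
    long expireAfterAccessSeconds = 300;       // placeholder value

    return CacheBuilder.newBuilder()
        .expireAfterAccess(expireAfterAccessSeconds, TimeUnit.SECONDS)
        .maximumWeight(maxTotalUncompressedBytes)
        // Weight each entry by the bytes readable from its buffer.
        .weigher((Weigher<Key, ByteBuf>) (key, buf) -> buf.readableBytes())
        .build(new CacheLoader<Key, ByteBuf>() {
          @Override
          public ByteBuf load(Key key) {
            // Placeholder: really decompresses the chunk from disk.
            return Unpooled.buffer(0);
          }
        });
  }
}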
When I loaded a 400MB buffer, it got evicted the moment I tried to load a
second, similarly sized (350MB) buffer. The total accumulated size should've
been 400MB + 350MB = 750MB, which is still below 1GB.
I dug through the code and realized that there are some calculations in
LoadingCache that are not obvious to the user at all. The maximum weight gets
divided evenly across the internal cache segments, so the effective
per-segment threshold is lower than what the user configured. What happened
was that each segment was set up internally to allow only 256MB of weight
(1GB / 4 segments). Anything larger, like the 400MB buffer, would get marked
for eviction almost immediately.
These were obtained from a debug session:
concurrencyLevel 4
maxWeight 1073741824
maxSegmentWeight 268435456
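
If it helps anyone checking the math, this is how those numbers relate.
A quick back-of-the-envelope check, assuming (as I observed) that the segment
count equals the concurrency level of 4; Guava spreads any remainder across
segments, but with these numbers the division is exact:

public class SegmentWeightCheck {
  public static void main(String[] args) {
    long maxWeight = 1_073_741_824L; // 1GB, as passed to maximumWeight()
    int segmentCount = 4;            // matches concurrencyLevel 4

    // Each segment gets only an equal share of the total weight budget.
    long maxSegmentWeight = maxWeight / segmentCount;
    System.out.println(maxSegmentWeight); // 268435456, i.e. 256MB

    // A single entry heavier than its segment's share is evicted on the
    // next maintenance pass, however empty the other segments are.
    long bufferBytes = 400L * 1024 * 1024; // the 400MB buffer
    System.out.println(bufferBytes > maxSegmentWeight); // true
  }
}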
The relationship between concurrencyLevel, segmentCount, maxWeight, and
maxSegmentWeight is not apparent, and I'm wondering how to get around this
issue. I could inflate the maximum weight, or lower the concurrency level and
play with the numbers (as in the sketch below), but the former is unintuitive
and the latter will cause performance problems.
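
For completeness, the concurrency-level route would look like this (same
placeholders as in the sketch above); with concurrencyLevel(1) there is a
single segment, so maxSegmentWeight equals the full 1GB, but every write then
contends on one segment lock:

LoadingCache<Key, ByteBuf> cache = CacheBuilder.newBuilder()
    .concurrencyLevel(1) // one segment => maxSegmentWeight == maxWeight
    .expireAfterAccess(expireAfterAccessSeconds, SECONDS)
    .maximumWeight(maxTotalUncompressedBytes) // 1GB
    .weigher((Weigher<Key, ByteBuf>) (key, buf) -> buf.readableBytes())
    .build(loader); // loader = the CacheLoader from the sketch above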
Any suggestions or help would be appreciated.
Thanks,
Ashwin.