You should be happy to hear that the test is bunk and its
configuration is far too extreme. I must have left it at a poor choice
of settings when experimenting. A configuration of
concurrencyLevel=1,000 means that there must be 16k pending
reorderings before a drain is attempted. Due to the number of
threads, the draining thread gets descheduled and the arrival rate of
new reads exceeds its ability to catch up.
I rewrote the test so that it's clearer, more realistic, and uses a
reasonable configuration. The test shows that the draining can keep up
with concurrencyLevel=64 and numThreads=250. In a real-world
application the arrival rate of reads wouldn't be 250 concurrent
threads non-stop, so this should be safe.
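For reference, the shape of the updated test is roughly the following
(a sketch from memory: the capacity, iteration counts, and exact CLHM
builder methods may differ from the checked-in test depending on the
version you have):

  import java.util.Random;
  import java.util.concurrent.ExecutorService;
  import java.util.concurrent.Executors;

  import com.googlecode.concurrentlinkedhashmap.ConcurrentLinkedHashMap;

  public class DrainStressTest {
    public static void main(String[] args) {
      final int capacity = 50000;
      final ConcurrentLinkedHashMap<Integer, Integer> map =
          new ConcurrentLinkedHashMap.Builder<Integer, Integer>()
              .maximumWeightedCapacity(capacity)
              .concurrencyLevel(64)
              .build();
      for (int i = 0; i < capacity; i++) {
        map.put(i, i);
      }

      int numThreads = 250;
      ExecutorService pool = Executors.newFixedThreadPool(numThreads);
      for (int i = 0; i < numThreads; i++) {
        pool.execute(new Runnable() {
          public void run() {
            Random random = new Random();
            for (int j = 0; j < 1000000; j++) {
              // each read queues a reordering for the drain to replay
              map.get(random.nextInt(capacity));
            }
          }
        });
      }
      pool.shutdown();
    }
  }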
The draining is a slight bottleneck because it must be done by a
single thread to arrive at a strict LRU. It could be done concurrently
if the LRU were partitioned, e.g. one per segment. That is what we do
in MapMaker, but that was done for simplicity, with plans to enhance
it to a top-level LRU like CLHM uses. The sketch below shows the idea.
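To illustrate the single-drainer constraint, here is the general
pattern only, not CLHM's actual internals (ReorderBuffer, recordRead,
and moveToTail are made-up names): reads append to a buffer, and
whichever thread wins a tryLock replays the buffer onto the LRU list,
so the list is only ever mutated by one thread at a time.

  import java.util.Queue;
  import java.util.concurrent.ConcurrentLinkedQueue;
  import java.util.concurrent.locks.ReentrantLock;

  // Illustrative only: buffer reorderings, let a single thread replay them.
  class ReorderBuffer<K> {
    private final Queue<K> pendingReads = new ConcurrentLinkedQueue<K>();
    private final ReentrantLock drainLock = new ReentrantLock();

    void recordRead(K key) {
      pendingReads.add(key);
      tryDrain();
    }

    private void tryDrain() {
      // Only one thread drains at a time, so the LRU list sees a
      // strictly ordered stream of reorderings.
      if (drainLock.tryLock()) {
        try {
          for (K key; (key = pendingReads.poll()) != null; ) {
            moveToTail(key); // apply the reordering to the LRU list
          }
        } finally {
          drainLock.unlock();
        }
      }
    }

    private void moveToTail(K key) {
      // LRU list manipulation omitted; the point is single-threaded access.
    }
  }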
A pending enhancement is to optionally move draining off of a user
thread and have it performed by an Executor. This avoids exposing the
amortized penalty to a caller, and we've already added it to MapMaker
since it has much more clean-up work to do (this was so that we could
move soft/weak GC to user threads). A dedicated thread would probably
be able to drain CLHM even faster, so an even more extreme
configuration could be used.
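A rough sketch of what that enhancement might look like (the names
AsyncDrainer and onThresholdExceeded are hypothetical; nothing like
this exists in CLHM yet): instead of draining inline, the thread that
crosses the threshold just schedules a drain task and returns.

  import java.util.concurrent.Executor;
  import java.util.concurrent.atomic.AtomicBoolean;

  // Hypothetical: offload the drain to an Executor so callers never
  // pay the amortized penalty.
  class AsyncDrainer {
    private final Executor executor;
    private final AtomicBoolean drainScheduled = new AtomicBoolean();

    AsyncDrainer(Executor executor) {
      this.executor = executor;
    }

    void onThresholdExceeded(final Runnable drainTask) {
      // Schedule at most one outstanding drain; the caller returns
      // immediately instead of draining on its own thread.
      if (drainScheduled.compareAndSet(false, true)) {
        executor.execute(new Runnable() {
          public void run() {
            try {
              drainTask.run();
            } finally {
              drainScheduled.set(false);
            }
          }
        });
      }
    }
  }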
Sorry for the false positive. Can you experiment with the updated test
and see if it's satisfactory?
Thanks!
Ben