I found applying amortized cost analysis to locking to be a very elegant
approach to making an LRU cache concurrent (see ConcurrentLinkedHashMap).
Rather than contending on a lock to reorder the recency deque on every
access, accesses are recorded in a buffer and the reordering penalty is
amortized onto the thread chosen to apply the batched work. I'm now exploring
further ideas based on that structure, such as sharding into multiple recency
queues and recombining them via merge sort back into a strict LRU ordering.
I suspect the technique could be applied to other scenarios, but I haven't
seen any existing literature leveraging it.
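A minimal sketch of that structure, assuming a lock-free buffer feeding a
lock-guarded recency deque (the class and method names are illustrative, not
ConcurrentLinkedHashMap's actual API, and a real implementation would link
map entries directly rather than scan the deque):

    import java.util.ArrayDeque;
    import java.util.Deque;
    import java.util.Queue;
    import java.util.concurrent.ConcurrentLinkedQueue;
    import java.util.concurrent.locks.ReentrantLock;

    class AmortizedLruReordering<K> {
      // Pending accesses, appended lock-free on the hot path.
      private final Queue<K> buffer = new ConcurrentLinkedQueue<>();
      // Guards the recency deque; only ever taken opportunistically.
      private final ReentrantLock lock = new ReentrantLock();
      // Least-recently-used key at the head, most-recent at the tail.
      private final Deque<K> recencyDeque = new ArrayDeque<>();

      /** Called on every cache read; never blocks on the deque lock. */
      void recordAccess(K key) {
        buffer.add(key);
        // Only the thread that wins the try-lock pays the reordering
        // cost, amortizing the batched work across many accesses.
        if (lock.tryLock()) {
          try {
            drainBuffer();
          } finally {
            lock.unlock();
          }
        }
      }

      private void drainBuffer() {
        K key;
        while ((key = buffer.poll()) != null) {
          recencyDeque.remove(key);   // drop the stale position, if any
          recencyDeque.addLast(key);  // move to the most-recent position
        }
      }
    }

The point is that readers never block: a thread that loses the try-lock just
leaves its entry in the buffer for a later winner to drain, so the hot path
stays cheap while the deque still converges to LRU order.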
From: Maurice Herlihy <maurice.herl...@gmail.com>
Sent: Sat, July 10, 2010 6:51:41 AM
Subject: Next edition of AoMP?
What new topics would you like to see covered in the next edition?