Evict dead nodes first in LRU Eviction Policy

christia...@gmail.com

Jan 22, 2009, 8:59:43 AM
to ConcurrentLinkedHashMap
Hi!

First of all, thanks for sharing this. After experimenting, there is
one thought I would like to share concerning the LRU EvictionPolicy.
At the moment onGet() in EvictionPolicy.LRU() copies the node and
inserts the new node onto the tail of the list (offer). The old
node's value is set to null and it remains at its position. The dead
node therefore still consumes space until it is evicted. I know this
is by design, since you chose a lazy removal strategy.
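Roughly, in code (a sketch with illustrative names; the real onGet()
differs in its details):

Node onGet(Node node) {
    // Copy the entry and append it at the tail (most recently used).
    Node copy = new Node(node.getKey(), node.getValue());
    offer(copy);
    // Kill the old node in place; it keeps its slot until evicted.
    node.setValue(null);
    return copy;
}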

Now my idea is to additionally move the dead nodes to the beginning
of the list. They would then be evicted first, leaving the user-
defined capacity unaffected by them. Otherwise an active node might
be evicted because dead nodes are consuming space, which makes the
effective capacity wrong from the perspective of the user of the map.

If you like, I can come up with an implementation of this. In any case,
I would be happy to get your thoughts.

Thanks,
Christian

Ben Manes

Jan 23, 2009, 4:44:11 AM
to ConcurrentLinkedHashMap
Hi Christian,

Thanks for the suggestion / opening the issue. It's a good way to
motivate me to put some time back into this project. There are a
number of minor enhancements I've been meaning to get to. For an LRU,
the capacity should probably be twice the desired capacity, but it's
still quite flaky, as you pointed out.

I updated the issue with some more background regarding a
backtracking approach I had originally implemented. Feel free to run
with any ideas, use me as a sounding board, or coerce me to go do
something. You can get commit privileges if desired - Adam took them
and really helped out by putting together the build system.

Regarding usage, have you experimented with the Second Chance policy
to determine whether it's acceptable for your scenarios? I've found
that it's acceptable (LRU-equivalent) in the most common scenarios in
which Java caches are used, so an LRU may not offer much value. While
the benchmark graphs don't express it, conceptually the reduced
complexity in locking / dead nodes for reads should yield a
scalability/performance benefit. My data for the hit rates (see wiki)
shows they are equivalent with a fairly real-world distribution, and
neither algorithm offers many perks for less common usages (e.g.
being scan resistant). So while an LRU policy is the natural choice
for a cache, it might not be the best one to use.

Best,
Ben

christia...@gmail.com

Jan 23, 2009, 1:16:32 PM
to ConcurrentLinkedHashMap
Hi Ben,

Thanks for your answers.

Concerning the capacity when using LRU, I think the problem is that
the actual capacity is unpredictable, since multiple calls to get()
for the same key cause n dead copies of the corresponding node, which
all consume space. I understand that my suggestion of reordering the
dead nodes is not efficient. Instead, might an additional
AtomicInteger, used as a dead-node offset or counter, solve the
problem if it were taken into account when checking isOverflow()?
Then the user-defined capacity of the map could be ensured. Of
course, the capacity of the list would then have to be controlled as
well, otherwise it would grow indefinitely with each get(). At the
moment, I can't think of another way of having a predictable capacity
and using a lazy removal strategy without reordering.
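A rough sketch of this counter idea, assuming hypothetical capacity
and length fields (not the actual source):

import java.util.concurrent.atomic.AtomicInteger;

// Hypothetical bookkeeping: dead nodes are counted separately so that
// they no longer count against the user-defined capacity.
private final AtomicInteger length = new AtomicInteger();    // all nodes on the list
private final AtomicInteger deadCount = new AtomicInteger(); // lazily-removed nodes

private boolean isOverflow() {
    return (length.get() - deadCount.get()) > capacity; // live nodes only
}

// Called when an LRU get() supersedes a node with a fresh copy.
private void onNodeKilled() {
    deadCount.incrementAndGet();
}

// Called when a dead node finally reaches the head and is dropped.
private void onDeadNodeEvicted() {
    deadCount.decrementAndGet();
    length.decrementAndGet();
}

As noted above, the list itself would still need draining or it grows
without bound; the counter only fixes the accounting.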

Regarding the Second Chance policy, using the LRU was a somewhat
instinctive decision. I have now experimented with SECOND_CHANCE and
am surprised to see that it serves my purpose equally well! However, I
think it would still be a great feature to also have LRU.

cheers,
Christian

Ben Manes

Jan 23, 2009, 6:05:16 PM
to ConcurrentLinkedHashMap
Yes, any attempt one makes to coerce an LRU to work in a singly-
linked list fashion (a queue) will be unsatisfactory (lazy removal is
really just a queue). All of the complexity right now comes from
having a doubly-linked list, but without remove() it's useless. So
it's definitely a major deficiency that I need to fix. I'll hopefully
have some free time next week and give it a stab. Feel free to try to
beat me to it. :-)

I was really surprised at first, too, by how good the Second Chance
policy is. When switching from Ehcache (just an LHM) in LRU mode to
CLHM in 2C mode, we got equivalent hit rates in production without
the lock contention / performance problems. An LRU is so mutation-
heavy that even if we fix the CLHM, it may turn out that thrashing on
the same memory barriers (pseudo-locks) is worse than using a real
lock. A lot of concurrency algorithms sacrifice the ideal single-
threaded algorithm for a slightly less flexible but concurrent
algorithm. I think LRU vs. Second Chance falls into that category,
and I hope it will become more widespread as we go many-core. Note
that my adaptation is different from the traditional approach
described in hardware/OS texts, but it's the same idea implemented
with different constraints.
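For reference, the traditional clock-style second chance looks
roughly like the sketch below (all names are illustrative, and as
said, the CLHM adaptation differs in its constraints):

import java.util.Queue;

final class Node<K, V> {
    final K key;
    volatile V value;
    volatile boolean marked; // a read is just this volatile write, no lock

    Node(K key, V value) {
        this.key = key;
        this.value = value;
    }
}

// On a cache hit, simply mark the node as recently used.
void onGet(Node<K, V> node) {
    node.marked = true;
}

// On eviction, a marked node is unmarked and re-queued (its "second
// chance"); the first unmarked node found is the victim.
Node<K, V> evict(Queue<Node<K, V>> queue) {
    for (;;) {
        Node<K, V> head = queue.poll();
        if ((head == null) || !head.marked) {
            return head;
        }
        head.marked = false;
        queue.offer(head);
    }
}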

Please let me know how the CLHM works out for you if you decide to
deploy it.

Thanks!
Ben

Ben Manes

Jan 23, 2009, 7:21:42 PM
to ConcurrentLinkedHashMap
I need to think through this further, but perhaps using links for an
LRU isn't the best approach. Rather, an array could maintain
timestamps that are scanned on eviction to find the oldest entry.
This would work equally well to support an LFU policy. That scan
doesn't need to be O(n) or provide a perfect match, but rather one
close enough. It could instead be separated out into buckets, and one
of those buckets chosen at random to evict from/add to. Reads are
just a volatile write, like the Second Chance, and evicts are fairly
inexpensive. Alternatively, it could obtain a random set of nodes, a
sample size, and choose which is best to evict. The difference
between an LRU and an LFU is a timestamp versus a counter. As evicts
should be somewhat rare (caches are read-centric) and not made too
expensive, this approach might work out nicely.
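A sketch of that sampling idea (hypothetical names; the nodes are
assumed to sit in a plain array):

import java.util.Random;
import java.util.concurrent.atomic.AtomicLong;

final class Node<K, V> {
    final K key;
    volatile V value;
    volatile long stamp; // LRU: logical timestamp; an LFU would bump a counter instead

    Node(K key, V value) {
        this.key = key;
        this.value = value;
    }
}

final class SampledPolicy<K, V> {
    private final AtomicLong ticker = new AtomicLong();
    private final Random random = new Random();

    // A read is just a volatile write of the current tick, as with Second Chance.
    void onGet(Node<K, V> node) {
        node.stamp = ticker.incrementAndGet();
    }

    // Evict the coldest of a small random sample instead of scanning every node.
    Node<K, V> selectVictim(Node<K, V>[] nodes, int sampleSize) {
        Node<K, V> victim = nodes[random.nextInt(nodes.length)];
        for (int i = 1; i < sampleSize; i++) {
            Node<K, V> candidate = nodes[random.nextInt(nodes.length)];
            if (candidate.stamp < victim.stamp) {
                victim = candidate;
            }
        }
        return victim;
    }
}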

The above has the advantage of simplifying the code, dropping the
double links entirely, and still being fast. Of course it's worth
reinvestigating the backtracking removal, but there are other
approaches to having an LRU.

Ben Manes

Jan 27, 2009, 4:06:32 AM
to ConcurrentLinkedHashMap
Please review the issue for two potential solutions:
http://code.google.com/p/concurrentlinkedhashmap/issues/detail?id=1

christia...@gmail.com

Jan 27, 2009, 8:36:09 AM
to ConcurrentLinkedHashMap
Hi Ben,

From a technical perspective I'd vote for solution #2, because using
node->counter supports the introduction of additional policies. From
a user perspective, LRU will still give unexpected results when a
random bucket is chosen. What about having a counter for each bucket
as well, maybe in a separate table? Then instead of randomly choosing
a bucket, the one with the lowest counter could be picked. At least
with regard to my requirements, the performance implications should
be acceptable. If the performance of the map were critical I would
use a different policy (e.g. SECOND_CHANCE).

Thanks,
Christian

Ben Manes

Jan 28, 2009, 12:23:32 AM
to ConcurrentLinkedHashMap
Hi Christian,

Thanks for reviewing the proposals. I tend to favor the second
solution as well.

Can you explain the counter/table suggestion, and how it helps
improve predictability? I had thought that I would choose the
eviction bucket in sequential order by using "buckets[Math.abs
(counter++ % buckets.length)]". This would be equivalent to being
random, because entries would be spread across the buckets semi-
randomly, and it would help ensure that activity wasn't primarily on
a single bucket. It wouldn't help improve LRU predictability, though,
since we'd never knowingly get the cache's least recently used entry
but rather just that bucket's.

If true LRU is desired, then either the sorted order must be updated
on every access, or writes must sort a single bucket. The latter
would rarely be acceptable, even for read-centric usage. The former
is hard to do without locks, as you've seen me struggle with, and a
synchronized LinkedHashMap is generally good enough. This is what
Ehcache and others do (though Ehcache has coarser locks). If the miss
penalty isn't too severe this works well enough (e.g. a database
load), but it isn't always acceptable. Such cases are usually rare;
for example, we had major performance problems with JUEL's caching
for UI code (all XHTML / EL-Expression based) until I wrote this
cache. So it's admittedly a non-critical problem for most users.

Rather than maximizing hit rates at a single cache, I use multiple
layers to reduce the overall miss penalty. I back this cache with a
soft-reference "victim cache" (where evicted entries are placed and
recovered if accessed) to improve performance if there is free memory
available. I use layering to back in-memory caches with memcached,
which helps reduce the penalty further. So as long as the hit rate is
in the 90th percentile (ideally mid-90s), whatever the policy is
should be fine. Since I will never expose the iterator in a
predictable fashion (e.g. FIFO) as LHM does, there shouldn't be
unexpected results. The developer would have to be validating against
a true LRU in single-threaded mode, which seems helpful only for my
own validation purposes. So - I'm hoping! - it's a non-issue.
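A bare-bones sketch of such a soft-reference victim cache (structure
and names are illustrative, not the actual code):

import java.lang.ref.SoftReference;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

final class VictimCache<K, V> {
    private final ConcurrentMap<K, SoftReference<V>> victims =
        new ConcurrentHashMap<K, SoftReference<V>>();

    // Called when the primary cache evicts an entry.
    void onEvict(K key, V value) {
        victims.put(key, new SoftReference<V>(value)); // reclaimable under memory pressure
    }

    // Called on a primary-cache miss; a hit here "recovers" the entry.
    V recover(K key) {
        SoftReference<V> ref = victims.remove(key);
        return (ref == null) ? null : ref.get(); // null if the GC already cleared it
    }
}

On a miss, a lookup would fall through from the CLHM to recover(),
then to memcached, and finally to the source of record.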

Let me know, though! I still need to prototype proposal #2 and
evaluate its hit rate / performance...

Thanks,
Ben

christia...@gmail.com

Jan 28, 2009, 5:49:59 AM
to ConcurrentLinkedHashMap
Hi Ben,

In solution #2, reads set node->counter to the increment of a global
counter, so it's known which node is least recently used. As you
write, by choosing the bucket randomly we get the least recently used
node of the bucket instead of the whole cache. I hope I am not
missing something here, but my idea was to introduce a counter for
the bucket as well. Every read then also sets the corresponding
bucket's counter to the increment of the global counter. The bucket
with the lowest counter then contains the least recently used element
of the cache. So instead of picking a bucket randomly, we take the
bucket with the lowest counter. This should improve predictability
towards a true LRU.

The problem is that we would have to find the bucket with the lowest
counter, which in any case harms performance. If the bucket counters
are stored in a hashtable we will have to iterate through it; if we
use a sorted list we'll have the effort of constantly sorting it as
well as applying locks. So the question is whether that's worth it.

cheers,
Christian

Ben Manes

Feb 3, 2009, 12:50:36 AM
to ConcurrentLinkedHashMap
Hey Christian,

How would it know which is the least recently used? It would
definitely know the most recent, but we want the element with the
least activity. A bucket could contain both the hottest and coldest
elements, but your suggestion would lead us to not evict from it. I
definitely see the value in using something like it for an MRU, but I
don't see it for an LRU. If I'm missing something, could you list out
the algorithm step by step?

I think your idea of ordering buckets has a lot of merit, so while I
don't grok your proposal yet I can envision other implementations.
I'm sure that there are a few approaches we should explore to achieve
some form of pseudo-ordering. There's the obvious one - maintain an
ordered list in each bucket, allow reads to lock it for updates, and
then scan each bucket to determine which has the oldest element at
the head. Lock splitting is a fair approach and is probably fast
enough for most needs (sketched below). Another idea is to borrow the
generational approach from garbage collectors and promote hot
elements into a different set of buckets. During eviction, we'd sweep
the colder buckets more frequently than the hot ones. This would give
us a mixture of LRU and LFU - an LFRU policy. Of course, this means
that reads may require writes for promotion and more activity lands
on a smaller set of buckets, both of which would probably require
locks.
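As a toy illustration of the first idea, reusing a per-node stamp
like the sampling sketch earlier in the thread (again, all names
hypothetical):

import java.util.LinkedList;

// Each bucket keeps its own access-ordered list behind its own lock.
final class Bucket<K, V> {
    private final LinkedList<Node<K, V>> order = new LinkedList<Node<K, V>>();

    // Reads lock only this bucket, never the whole cache (lock splitting).
    synchronized void touch(Node<K, V> node) {
        order.remove(node);  // O(n), but bounded by the bucket's size
        order.addLast(node); // most recently used at the tail
    }

    synchronized Node<K, V> oldest() {
        return order.peekFirst(); // this bucket's least recently used
    }
}

final class BucketScanPolicy<K, V> {
    // Eviction scans only the bucket heads for the globally oldest entry.
    Node<K, V> selectVictim(Bucket<K, V>[] buckets) {
        Node<K, V> victim = null;
        for (Bucket<K, V> bucket : buckets) {
            Node<K, V> head = bucket.oldest();
            if (head != null && (victim == null || head.stamp < victim.stamp)) {
                victim = head;
            }
        }
        return victim;
    }
}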

Best,
Ben

christia...@gmail.com

Feb 3, 2009, 6:16:12 AM
to ConcurrentLinkedHashMap
Hi Ben,

No, you're not missing anything. I was far too optimistic about the
distribution of hashcodes :-). So while my approach could be used for
MRU, it doesn't help us improve LRU. Sorry for the confusion. It
seems like your idea of maintaining an ordered list of elements in
each bucket is the way to go.

cheers,
Christian

Ben Manes

Feb 26, 2009, 7:04:16 PM
to ConcurrentLinkedHashMap
Hey Christian,

I think that I've finally figured out the algorithm for eager removal
of elements for an LRU. This was a pretty tricky problem and stumped
me for a while. As I mentioned earlier, I didn't trust my original
backtracking algorithm and none of the ideas that I whiteboarded
seemed to pan out. I believe I've finally solved this last hurdle and
will try to implement it tonight.

Table:
K1 | E1
K2 | E2
K3 | E3

Current:
1) H<->E1<->E2<->E3<->T ; Initial
2) H<->E1<->D<->E3<->T ; Removed E2, marked as dead
3) H<->D<->E3<->E4<->T ; Evicted E1 when adding E4
4) H<->E3<->E4<->E5<->T ; Evicted D when adding E5

This model, while safe, is really just a queue with an unnecessary
back pointer. The dead-node approach provides safe removal, but
doesn't release a slot until the node reaches the head. This is still
fairly efficient, because an LRU's constant add/remove when shuffling
a value pushes the active nodes to the tail, causing mostly dead
nodes to be removed. It's not perfect, but 'good enough'. This
allowed me to solve the first hurdle - how to concurrently maintain a
doubly-linked list. That was a decent challenge! :-)

New:
1) H<->D<->E1<->D<->E2<->D<->E3<->D<->T ; Initial
2) H<->D<->E1<->D<->D<->E3<->D<->T ; Removed E2
3) H<->D<->E1<->D<->E3<->D<->T ; Collapse D,D into D

We can operate against the next/prev pointers safely by buffering the
active elements with a dead node between them. We do not face a race
condition where the adjacent element is also being removed, because
that removal operates against its own dead node. A removal leaves two
buffers chained, and concurrent removals could leave N buffers
chained. Assuming that each removal only collapses to the right, they
can operate safely in parallel. We can easily swing the pointers
around to restore a single dead-node spacing without ever facing a
race.
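Ignoring the CAS details, the collapse in step 3 amounts to something
like this single-threaded sketch (the node shape and names are
illustrative):

// After removing E2, two dead nodes sit adjacent: ... E1 <-> D <-> D <-> E3 ...
// Splice out the redundant dead nodes to the right, restoring one dead
// node between live neighbors. With every collapse going rightward,
// parallel removals never swing the same links in opposite directions.
void collapseRight(Node dead) {
    Node next = dead.next;
    while (next.isDead()) {       // sentinels are never dead, so next != null
        Node after = next.next;
        dead.next = after;        // forward pointer skips the extra dead node
        after.prev = dead;        // back pointer returns to the survivor
        next = after;
    }
}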

This is one of those 'my god, that was so simple' solutions to an
annoyingly difficult algorithm! :-)

Thoughts?

Best,
Ben

Ben Manes

Feb 27, 2009, 11:16:17 PM
to ConcurrentLinkedHashMap
Hi Christian,

If you're still out there, I would appreciate your thoughts after
looking at the following wiki page:
http://code.google.com/p/concurrentlinkedhashmap/wiki/Design

Best regards,
Ben

christia...@gmail.com

Mar 1, 2009, 8:20:57 AM
to ConcurrentLinkedHashMap
Hi Ben,

This is brilliant! Thanks also for the wiki page; it made things
clearer to me. Concerning removal, approach #2 seems alluring, but it
is probably better to start with #1 (it seems safe to me). Let me
know if you have something (even preliminary) ready to test. CLHM has
been deployed within my application for a couple of weeks now. It
works just perfectly (currently using SECOND_CHANCE).

Have you thought about writing an article about this? I think it will
be useful to a lot of people.

Thanks,
Christian

Ben Manes

Mar 3, 2009, 6:01:29 PM
to ConcurrentLinkedHashMap
Hey Christian,

I'm still working out the exact details, but I know that I am very
close. The buffer strategy brought me 95% of the way there, with just
the exact details needing to be hashed out. There's an implicit
assumption that's not shown: using per-node locks (volatile boolean
flags) to avoid race conditions with sequential elements. This is
needed since removals do need to touch other elements, but the main
advantage of the buffer is to allow two locks - not three. Using one
less lock makes the algorithm far simpler and less error-prone than
attempting a 3-node lock, since that opens a big can of worms...

Last night I began working on the exact details. I took a step back
and, instead of immediately going with the doubly-linked list, I
started with a singly-linked list using buffers. In this case, the
map points to the buffer, which points to the element. We can drop
the element, but are left with the extra buffer. However, we can't
remove that extra buffer cleanly, because another key points to it.
We can't drop the buffer for the element we removed, because we don't
know who is pointing to it. For the buffer we wish to remove, we
could potentially get its key (either on the buffer or its element,
whichever is easier) and perform a conditional replacement in the
map. That should work, but it requires another write operation to the
map. Since the JDK's ConcurrentHashMap is lock-on-write, that's not
ideal. If the entry was concurrently removed, we could probably just
drop that buffer, since it's no longer being pointed to. I think this
would work, but it's not perfect performance-wise.

To solve this, I augmented the above by putting a back pointer on the
element. This seems to work out perfectly, and I could easily write a
lock-free algorithm. The head/tail are replaced with a single
sentinel node with a buffer (making the initial state look quite
funky!). This gives the O(1) insertion/removal properties of a
doubly-linked list, but loses its reverse-iteration ability. Since
iterating isn't necessary (we return the ConcurrentMap's iterator, as
there is no value in ordering), this is acceptable. I am tentatively
calling this a "3/2 linked list".

Tonight or tomorrow I will try to work out the exact details for the
doubly-linked list described in the wiki. I think this is possible,
but it would require a few extra steps due to the additional pointer.
Since the 3/2 list solves our needs, this extra complexity only adds
performance overhead. Of course, if we wanted to offer ordered
traversal (as in FIFO/LRU, not sorted) then reverse ordering could be
desirable. I can't see the benefit, though.

I'll probably document all three approaches since it helps one grok
the details better. The 3/2-linked list looks to be a really unique
idea, which is pretty cool. Once I have the entire design fully
documented I'll begin writing it. I'm afraid to start too early
because there are enough race conditions that I can't just start
banging it out!

An article would be fun. I'll consider it once the implementation is
done and I get some feedback. :-)

Best,
Ben

Ben Manes

Mar 27, 2009, 8:11:29 PM
to ConcurrentLinkedHashMap
Hey Christian,

I haven't gotten too far into the implementation yet, but I think
I've worked out all of the details and I am almost ready to finalize
the design. An insertion (e.g. LRU reordering of an element on a
retrieval) should only block other insertions for two instructions.
In most cases, removals can be concurrent, but otherwise block for
four instructions. In very rare cases the removal will be at the
tail. I can probably add an optimization for LRU reordering to no-op
if the element is already at the tail. A trick I need to investigate
is whether I can avoid eagerly locking the old tail and instead do it
after CAS'ing the sentinel's link to insert the new tail. At the
moment, the lock is needed to handle concurrent operations like
removals, but we might be able to detect and correct for this. If so,
insertion is just like a FIFO queue and it would probably be possible
to further reduce contention on the tail by using the elimination
technique (see "Using Elimination to Implement Scalable and Lock-Free
FIFO Queues").

So there are definite optimizations to investigate, but I'm really
happy with the current design as is. I would really appreciate you
taking (yet another) look. Since it's really tough trying to find
people willing to give some feedback, if you know of anyone else who
might be interested please ask them on my behalf.

Thanks a bunch!
Ben

christia...@gmail.com

Mar 30, 2009, 12:34:13 PM
to ConcurrentLinkedHashMap
Hi Ben,

I will experiment with your latest version (ConcurrentLinkedHashMap2)
and see if I can provide some useful feedback. Need some time to do this,
though. I also distributed your project to everybody I know who could
be interested.

Thanks,
Christian

BTW, CLHM is still deployed w/o any problems.

Ben Manes

Mar 30, 2009, 1:44:46 PM
to ConcurrentLinkedHashMap
Thanks! But please don't experiment with ConcurrentLinkedHashMap2 -
it's ugly and just random hacks at the moment. It's just the design
doc that I've spent meaningful time on. :-)

christia...@gmail.com

Apr 9, 2009, 8:00:55 AM
to ConcurrentLinkedHashMap
Hi Ben,

I have worked through your design doc multiple times now and see no
reason not to start with the implementation. It's great work, IMHO.
Your suggested optimization using elimination seems extremely
challenging, at least to me. So I would probably save it for later.

Cheers,
Christian

Ben Manes

Apr 14, 2009, 4:31:41 AM
to ConcurrentLinkedHashMap
Hi Christian,

Thanks for working through it. If you feel comfortable with the
algorithm, that's a nice compliment!

I did begin working towards an implementation, but I kept having a
nagging feeling that the algorithm was too complex. Every week or
two I've been able to simplify it and remove a few operations. So
while I am confident that the design will work correctly, I am trying
to be patient and keep pulling back the layers of complexity until the
code looks trivial. I view my current implementation as a failure
- not because it doesn't work, it does - but because I rushed too
quickly into the coding and assumed I'd figure out details later. I
don't want to fight the code, which I think I would with the current
design. I want the code to look simple enough that it doesn't seem
impressive. That's usually when you know you have it right! :-)

Last week I found a way to simplify the algorithm further, and this
led to a big improvement. I realized that I could remove the need to
lock the tail on an insertion by using a CAS operation to swing the
Sentinel's reference to the new tail. This requires a simple change
to the removal so that "5. E_next->bwd = aux" changes from a set to a
CAS operation. This means insertions are non-blocking to other
insertions (but are blocking to removals) and that the auxiliary nodes
only serve to block adjacent concurrent removals. So my next question
was whether auxiliary nodes are required at all.

So I began whiteboarding it and, with a correction to my second
attempt, I believe I've got it. The code is much simpler, but the
concurrent flow is very hard to visualize and trust. I am hoping to
whiteboard it a few more times this week, just whenever I have some
free time after work, to look for any flaws. If it still seems
correct, I'll update the design document. I may fork it into a new
document, partially because I'm worried about how to depict the
concurrent flows so that they're understandable without me walking
the reader through it in person. The algorithm is below, but don't
expect to grok it unless you whiteboard it a few times, or unless I
can write the document in a readable manner.

Insertion:
------------
E_new->next = S;
do {
  E_new->prev = S->prev;
} while (!cas(S->prev, E_new->prev, E_new));
E_new->prev->next = E_new;

Removal:
------------
while (!cas(E_old->prev->next, E_old, E_old->next)) { /* spin */ }
while (!cas(E_old->next->prev, E_old, E_old->prev)) { /* spin */ }
if (E_old->prev->next != E_old->next) {
  E_old->prev->next = E_old->next;
}

Best,
Ben

Ben Manes

Apr 21, 2009, 3:36:54 AM
to ConcurrentLinkedHashMap
Hi Christian,

I've finished working out the details for a purely CAS-based doubly-
linked list. I will update the wiki page tomorrow. The implementation
should be straightforward enough that I can get started.

Element:
- key (final)
- value (volatile)
- next (volatile)
- prev (volatile)
- aux_next (volatile)

The aux_next is simply an auxiliary reference to the next node. It is
set to "null" when not used. When used, it is set to "next" while
"next" is "null". The structure is a simple doubly-linked list of
elements, prefixed by a sentinel element.

Steady-State
----------------
S <-> E_1 <-> E_2 <-> E_3 <-> S

Where 'S' is the same sentinel node (circular list).

Insertion:
---------------
// Initialize in the "locked" state
E_new->aux_next = S;
E_new->next = null;

// Link the tail to the new element
do {
  E_new->prev = S->prev;
} while ( !cas(E_new->prev->next, S, E_new) );

// Link the sentinel to the new tail
while ( !cas(S->prev, E_new->prev, E_new) );

// Unlock the new tail
E_new->next = S;

Removal:
-------------
// Shift the next ptr to aux_next. While next is null, the element
// is in a locked state.
do {
  aux_next = next;
} while ( !cas(next, aux_next, null) );

// Swing the previous element's next ptr to our next
while ( !cas(prev->next, this, aux_next) );

// Swing the next element's prev ptr to our prev
while ( !cas(aux_next->prev, this, prev) );

Iteration:
------------
The list is iterated only on an eviction. If the iterator arrives at
a node where "next" is null, it can safely move forward through the
"aux_next" pointer.

---
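For anyone mapping the pseudocode to Java, here is a hypothetical
rendering of the element and its insertion using
java.util.concurrent.atomic; the field names and layout are mine, not
the design doc's:

import java.util.concurrent.atomic.AtomicReference;

final class Element<K, V> {
    final K key;
    volatile V value;
    final AtomicReference<Element<K, V>> next = new AtomicReference<Element<K, V>>();
    final AtomicReference<Element<K, V>> prev = new AtomicReference<Element<K, V>>();
    volatile Element<K, V> auxNext; // non-null only while the element is "locked"

    Element(K key, V value) {
        this.key = key;
        this.value = value;
    }
}

// Append e before the sentinel s, mirroring the insertion pseudocode.
static <K, V> void insert(Element<K, V> s, Element<K, V> e) {
    e.auxNext = s;   // initialize in the "locked" state
    e.next.set(null);

    Element<K, V> tail;
    do {             // link the old tail to the new element
        tail = s.prev.get();
        e.prev.set(tail);
    } while (!tail.next.compareAndSet(s, e));

    // Link the sentinel to the new tail, then unlock it.
    while (!s.prev.compareAndSet(tail, e)) { /* spin */ }
    e.next.set(s);
}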

If you whiteboard it, it should be fairly easy to follow. So far I
haven't found a race on the removal. The insertion seems correct, but
I need to give it another pass, as it was not my main focus. This
algorithm is simple enough that I don't believe I have missed a race
condition, whereas all the other attempts were so complex that I
never felt comfortable enough to be assured that I hadn't. It's
pretty easy to miss a race, even an obvious one, when you're
overwhelmed with all the other details.

-Ben

Ben Manes

May 21, 2009, 7:30:44 AM
to ConcurrentLinkedHashMap
Hi Christian,

I have a working implementation based on the algorithm described in my
previous post. It needs to undergo load testing to improve
validation, but so far it seems to be working out quite nicely. I
will try to rewrite the design document this week, since it is grossly
out of date. If you could perform a code review, that would be great!

Best,
Ben

csa

Jun 16, 2009, 8:05:12 AM
to ConcurrentLinkedHashMap
Hi Ben,

I have tested the latest version within my application. It now
behaves as expected using LRU eviction. Your design doc and code are
both of great quality, which made understanding your changes a lot
easier for me. I have only one small issue. In lines 210 - 213 you
append a node to the tail in case the EvictionPolicy decided not to
evict it. If I understand this correctly, this is only used for the
SECOND_CHANCE policy. What do you think of moving this logic to the
EvictionPolicy, as it would also simplify the evict() method? Thanks
again for the great work!

Cheers,
Christian

Ben Manes

Jun 19, 2009, 11:53:50 PM
to ConcurrentLinkedHashMap
Hey Christian,

That's a great idea and I checked in your change. I really appreciate
all your help!

There does seem to be a race condition on a removal, though, and I
haven't had much time in the last few weeks to really dig into it.
There's a series of concurrent operations that arrives at an invalid
state and causes a livelock. When I looked into it previously, I
couldn't backtrack from the test failure to the state, and I couldn't
figure out the scenario by whiteboarding different flows. It doesn't
always occur and my tests often pass, but I know that an error
exists. It's a complex enough flow of multiple insertions and
removals at different states that it stumped me to find what I had
missed. Hopefully I'll find some time later to get back to this
project and fix the problem.

By the way, you might find my newish wiki post regarding my IndexMap
interesting. I've been using it for quite a while as a core feature
of a metadata-driven caching framework, and it works really nicely!
The map isn't that special and I certainly wouldn't use it directly,
but it allows for very elegant client code when exposed through a
framework. It also nicely demonstrates the striped locking technique.

Best,
Ben