Otherwise, it looks like, as stated in the javadoc, LongAdder performs better than AtomicLong when there are many writers (at the expense of the readers, though).
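To make the trade-off concrete, here is a minimal stdlib sketch (illustrative only, not the benchmark under discussion; the class name and thread/iteration counts are mine) driving both counter types from several writer threads:

```java
import java.util.concurrent.atomic.AtomicLong;
import java.util.concurrent.atomic.LongAdder;

public class CounterDemo {
    static final int THREADS = 8, INCREMENTS = 100_000;

    // Drives both counters from THREADS writer threads; returns {atomic total, adder total}.
    static long[] run() throws InterruptedException {
        AtomicLong atomic = new AtomicLong();  // single cell: every increment CASes the same word
        LongAdder adder = new LongAdder();     // striped cells: writers spread out, sum() walks the cells
        Runnable work = () -> {
            for (int i = 0; i < INCREMENTS; i++) {
                atomic.incrementAndGet();
                adder.increment();
            }
        };
        Thread[] ts = new Thread[THREADS];
        for (int i = 0; i < THREADS; i++) (ts[i] = new Thread(work)).start();
        for (Thread t : ts) t.join();
        return new long[] { atomic.get(), adder.sum() };
    }

    public static void main(String[] args) throws InterruptedException {
        long[] totals = run();
        // Both end up correct; they differ in write-side contention and read-side cost.
        System.out.println(totals[0] + " " + totals[1]);
    }
}
```

Both totals come out equal; the difference LongAdder's javadoc describes is in write throughput under contention, not in correctness.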
Hi
You can have more readers and writers by adding @Threads(x) (or @GroupThreads(x) for asymmetric benchmarks), where x is the number of concurrent threads involved. You can use different values for readers and writers.
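For reference, a hedged sketch of how those JMH annotations combine for an asymmetric reader/writer benchmark (class and method names are illustrative, not from Pierre's code; this is a fragment that needs JMH on the classpath):

```java
import java.util.concurrent.atomic.AtomicLong;
import org.openjdk.jmh.annotations.*;

@State(Scope.Group)
public class ReadWriteBench {
    AtomicLong counter;

    @Setup
    public void setup() { counter = new AtomicLong(); }

    @Benchmark
    @Group("rw")
    @GroupThreads(3)          // three reader threads in the group...
    public long read() { return counter.get(); }

    @Benchmark
    @Group("rw")
    @GroupThreads(1)          // ...and one writer thread
    public long write() { return counter.incrementAndGet(); }
}
```

Changing the two @GroupThreads values varies the reader/writer mix independently.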
I found my results very difficult to use. ReentrantLock, for example, was much better than AtomicLong when multiple writers were involved.
It makes sense in the benchmark because the counter is super-highly contended. CAS (AtomicLong) will loop and contend very hard without backing off, whereas ReentrantLock has a backoff strategy and behaves better.
Be careful how you use these results. Because this benchmark is highly contended, it gives different results than a real-life program, where contention is much more manageable by the CAS operation.
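A sketch of that difference (illustrative, not from the benchmark): a raw CAS loop retries immediately and hammers the same cache line, while a lock implementation parks contended threads. A hand-rolled backoff variant, assuming randomized parking after a failed CAS, looks roughly like:

```java
import java.util.concurrent.ThreadLocalRandom;
import java.util.concurrent.atomic.AtomicLong;
import java.util.concurrent.locks.LockSupport;

public class BackoffCas {
    // Plain CAS loop: retries immediately, contending hard on the same word.
    static long casIncrement(AtomicLong v) {
        long cur;
        do {
            cur = v.get();
        } while (!v.compareAndSet(cur, cur + 1));
        return cur + 1;
    }

    // Same operation with randomized backoff after a failed CAS,
    // roughly what lock implementations do when contended.
    static long backoffIncrement(AtomicLong v) {
        for (;;) {
            long cur = v.get();
            if (v.compareAndSet(cur, cur + 1)) return cur + 1;
            LockSupport.parkNanos(ThreadLocalRandom.current().nextLong(1, 1_000));
        }
    }

    public static void main(String[] args) throws InterruptedException {
        AtomicLong v = new AtomicLong();
        Thread[] ts = new Thread[4];
        for (int i = 0; i < ts.length; i++) {
            ts[i] = new Thread(() -> { for (int j = 0; j < 50_000; j++) backoffIncrement(v); });
            ts[i].start();
        }
        for (Thread t : ts) t.join();
        System.out.println(v.get()); // 200000: both variants are correct; only throughput differs
    }
}
```

Under light contention the backoff path is never taken, which is why real-life programs often don't see the pathological CAS behaviour this benchmark provokes.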
My 2 cents
Georges
--
You received this message because you are subscribed to the Google Groups "mechanical-sympathy" group.
On 05/02/2014 09:04 AM, Chris Vest wrote:
> Why is there a ReentrantReadWriteLock per Thread, though?
>
> https://github.com/pingtimeout/locks-benchmark/blob/master/src/main/java/fr/pingtimeout/locksbenchmark/LocksBenchmark.java#L65
>
> Cheers,
> Chris
to avoid contention :)
Hi Pierre,
You should make these improvements:
* Move initializers to @Setup, including the initializer for RWLock
* Avoid "final" in field declarations (moving to @Setup will implicitly
solve that)
* Include error bounds in your graphs. Contrary to what some people on
this list are saying, even if your means are drastically different, the
errors might indicate the difference is not significant.
* Do a variable backoff in both readers and writers: this will help to
model the "real life" scenario where user code in the protected region is
running without contention for some time, thus amortizing the cost of
synchronization/contention. Blackhole.consumeCPU with an @Param int tokens
will do, e.g.
http://hg.openjdk.java.net/code-tools/jmh/file/tip/jmh-samples/src/main/java/org/openjdk/jmh/samples/JMHSample_21_ConsumeCPU.java
(This will explode running times, but the data will tell much more
interesting things.)
Note that running past the number of CPUs may introduce weird results,
since you will effectively run only as many threads as your CPUs can
handle, and there would never be 100-thread contention on a 4-core CPU.
[...]
I would believe this graph if you actually had a machine with 32+
hardware threads, but you only have 24. Otherwise you are starving your
own threads before StampedLock even has a chance to starve :)
I *speculate* this might be a side effect of writers having no chance to
run (e.g. staying parked on writeLock), so the readers are always reading
their own unmodified copies, never spoiled by writers. That is a luxury
unprotected field accesses do not have.