> Thanks for your answer. Did you mean that JUnitBenchmark is not so clean
> regarding JVM conditions ?
Not everything can be done at the JUnit level -- for example, when you
run benchmarking tests and there is more than one test within the
class, the result may be "primed" depending on the order in which
the methods execute (which in general can be random on newer JDKs,
where reflection returns methods in unpredictable order).
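To make the ordering issue concrete: `Class.getDeclaredMethods()` makes no guarantee about the order it returns methods in, which is why JUnit 4 eventually added `@FixMethodOrder` with a name-ascending sorter. A minimal sketch (the class and method names are illustrative, not from this thread):

```java
import java.lang.reflect.Method;
import java.util.Arrays;
import java.util.Comparator;

public class MethodOrder {
    // Hypothetical benchmark-style class; JUnit discovers methods
    // like these via reflection.
    static class SampleBenchmarks {
        public void testB() {}
        public void testA() {}
        public void testC() {}
    }

    public static void main(String[] args) {
        // getDeclaredMethods() makes no ordering guarantee -- on newer
        // JDKs the order can differ between runs or JVM versions.
        Method[] methods = SampleBenchmarks.class.getDeclaredMethods();

        // Sorting by name (what JUnit 4's NAME_ASCENDING sorter does)
        // restores a deterministic execution order:
        Arrays.sort(methods, Comparator.comparing(Method::getName));
        for (Method m : methods) {
            System.out.println(m.getName());
        }
    }
}
```

With a deterministic order, any "priming" effect at least becomes reproducible from run to run, even if it is still present.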
> I will implement it and submit my pull requests to you.
Sure, you're welcome to do so. Fork the project on GitHub and hack to
your liking -- that's why it's there.
> reports, smart approach, maintainability, ...). Caliper doesn't seem to be
> well maintained and JMH is very abstract to me.
Fine-grained tests are very, very tricky. There are a lot of things
going on under the JVM's hood. I think JMH is the best fit for these at
the moment, although I've used Caliper in the past too.
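For a sense of what JMH looks like in practice, here is a minimal benchmark sketch. It requires the JMH harness (`org.openjdk.jmh`) on the classpath and is normally run through JMH's generated runner; the benchmark class and fields are illustrative, not from this thread:

```java
import org.openjdk.jmh.annotations.Benchmark;
import org.openjdk.jmh.annotations.BenchmarkMode;
import org.openjdk.jmh.annotations.Mode;
import org.openjdk.jmh.annotations.OutputTimeUnit;
import org.openjdk.jmh.annotations.Scope;
import org.openjdk.jmh.annotations.State;
import org.openjdk.jmh.infra.Blackhole;

import java.util.concurrent.TimeUnit;

@State(Scope.Thread)
@BenchmarkMode(Mode.AverageTime)
@OutputTimeUnit(TimeUnit.NANOSECONDS)
public class StringConcatBenchmark {
    private String a = "foo";
    private String b = "bar";

    @Benchmark
    public void concat(Blackhole bh) {
        // The Blackhole stops the JIT from eliminating the result as
        // dead code -- one of the subtle traps in nano-benchmarks that
        // the harness handles for you.
        bh.consume(a + b);
    }
}
```

JMH also takes care of warmup iterations, forked JVMs, and statistical reporting, which is exactly the plumbing that is hard to get right from plain JUnit.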
> Regarding the time precision, I run my tests on a machine on which I control
> the entire activity (minimize measurements noise) and run each tests 10000
> times in order to cover most of the situations (hardware warm up, ...).
I wasn't even getting into environment noise; I was only talking about
the "noise" introduced by various JVM settings, code execution
ordering resulting from how JUnit schedules tests, etc. For benchmarks
that last at least a few milliseconds I don't think these will play a
key role, but at the nano scale they very well may.
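As an illustration of the JVM-settings side of that noise, each of these HotSpot flags changes how and when code gets compiled, and can visibly shift nano-scale results (`MyBenchmark` is a hypothetical main class, just a placeholder here):

```shell
# Log JIT compilation events, to see when hot methods get compiled:
java -XX:+PrintCompilation -cp . MyBenchmark

# Interpreter only, no JIT at all -- a (slow) baseline:
java -Xint -cp . MyBenchmark

# Disable tiered compilation (the C1 -> C2 pipeline):
java -XX:-TieredCompilation -cp . MyBenchmark
```

Run the same benchmark under two of these and the numbers can differ by orders of magnitude, which is why a nano-scale result is only meaningful together with the JVM configuration it was measured under.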
Dawid