Active/Supported


Nick Cross

Apr 15, 2014, 11:20:32 AM
to junit-be...@googlegroups.com

Hi,

Is this project still active and being developed? I notice there are a number of outstanding unmerged pull requests.

Nick

Dawid Weiss

Apr 16, 2014, 2:43:41 AM
to junit-benchmarks
To be honest I didn't know about these -- GitHub for some reason
wasn't giving me any notifications about them. I will review these
patches (but I can't promise I'll merge them, because, for example, I
think cheating JUnit's architecture to avoid before/after is an error
of the user, not the library). If you need a before-test fixture, use
a RuleChain and wrap the benchmark and initialization correctly...

Thanks for the tip. That said, the project is in hiatus mode right
now. There is JMH, developed as part of OpenJDK, and there is Google
Caliper. These projects offer far more stable microbenchmarks than
JUB, which from the start was conceived as a "macro" benchmark library
-- to measure things on the order of milliseconds, not nanoseconds.

But I'm still here and listening, so if there's active interest, sure,
I will go on developing it.

Dawid
> --
> You received this message because you are subscribed to the Google Groups
> "JUnitBenchmarks: Performance Benchmarking for JUnit4" group.
> To unsubscribe from this group and stop receiving emails from it, send an
> email to junit-benchmar...@googlegroups.com.
> For more options, visit https://groups.google.com/d/optout.

Nick Cross

Apr 16, 2014, 5:52:59 PM
to junit-be...@googlegroups.com

Hi,

Thanks for the reply.

Obviously I have not looked at the patches in detail; I was just looking to see what the status of the project was.

Having had a skim through, and regarding your before/after comments: what exactly is measured?
i.e. the @BeforeClass method, the @Before method, or just the test method?

[ reply inline ]


On Wednesday, 16 April 2014 07:43:41 UTC+1, Dawid Weiss wrote:
To be honest I didn't know about these -- GitHub for some reason
wasn't giving me any notifications about them. I will review these
patches (but I can't promise I'll merge them, because, for example, I
think cheating JUnit's architecture to avoid before/after is an error
of the user, not the library). If you need a before-test fixture, use
a RuleChain and wrap the benchmark and initialization correctly...


Reviewing them would be great :-) Could you clarify what you mean regarding the RuleChain? (as opposed to using @Before/@BeforeClass)

Thanks for the tip. That said, the project is in hiatus mode right
now. There is JMH, developed as part of OpenJDK, and there is Google
Caliper. These projects offer far more stable microbenchmarks than
JUB, which from the start was conceived as a "macro" benchmark library
-- to measure things on the order of milliseconds, not nanoseconds.

True. But for macro benchmarking, i.e. when I have an application library I want to code a series of black-box performance tests against, persisting the results to compare
across versions, I have not seen anything like it ;-)
 
Thanks!

Nick

Sergio Bossa

Apr 28, 2014, 7:02:40 AM
to junit-be...@googlegroups.com
Agreed, I vote for keeping this project alive: JMH is intended more for micro-benchmarks and has no JUnit integration, whereas JUnit Benchmarks lets you reuse your existing JUnit-based test infrastructure.

Dawid Weiss

Apr 29, 2014, 8:44:05 AM
to junit-benchmarks
Took me a while... Sorry.

> Having had a skim through, and regarding your before/after comments: what
> exactly is measured?
> i.e. the @BeforeClass method, the @Before method, or just the test method?

The answer to this stems directly from how JUnit's default runner
works. JUB is a TestRule, and as such it is embedded in the chained
sequence of Statement calls. Any per-test rules sit above the
statement that runs @Before/@After, so their time would be included
(and they would be executed multiple times). The way to change this is
to set up your own RuleChain, with the setup/teardown code above the
benchmarking rule, as this example shows:

https://github.com/carrotsearch/junit-benchmarks/blob/02bb71e77e90485041c12d47fbad02852b38b956/src/test/java/com/carrotsearch/junitbenchmarks/examples/BeforeAfterChaining.java#L61
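The nesting idea can be shown in a self-contained sketch (plain Java, no JUnit or JUB dependency). `Statement`, `benchmark`, and `setup` here are simplified stand-ins for JUnit's `Statement` and the chained rules, not JUB's actual API: whatever sits outside the benchmarking rule runs once, while the benchmarking rule repeats only the statement nested inside it.

```java
import java.util.concurrent.atomic.AtomicInteger;

public class RuleChainSketch {
    // Stand-in for JUnit's Statement: one link in the chained execution.
    interface Statement { void evaluate() throws Exception; }

    // Mimics a benchmarking rule: repeats (and would time) only the
    // statement nested inside it.
    static Statement benchmark(Statement inner, int rounds) {
        return () -> {
            for (int i = 0; i < rounds; i++) {
                inner.evaluate();
            }
        };
    }

    // Mimics a setup rule: runs the fixture once, then the inner statement.
    static Statement setup(Runnable before, Statement inner) {
        return () -> {
            before.run();
            inner.evaluate();
        };
    }

    public static void main(String[] args) throws Exception {
        AtomicInteger setups = new AtomicInteger();
        AtomicInteger runs = new AtomicInteger();
        Statement testBody = runs::incrementAndGet;

        // Setup placed ABOVE (outside) the benchmark rule: it runs once,
        // while the test body is repeated, so setup time stays out of
        // the measurement.
        Statement chained = setup(setups::incrementAndGet,
                                  benchmark(testBody, 5));
        chained.evaluate();
        System.out.println("setups=" + setups + " runs=" + runs);
        // prints: setups=1 runs=5
    }
}
```

Flipping the order (benchmark rule outside, setup inside) would execute and time the fixture on every round, which is exactly the problem the linked example avoids.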

I did not come up with this architecture, so don't blame me for its
(perceived or real, I don't know) complexity.

> Reviewing them would be great :-) Could you clarify what you mean regarding
> the RuleChain? (as opposed to using @Before/@BeforeClass)

I did review those pull requests and I hope I will receive
notifications of any new ones in the future.

> True. But for macro benchmarking, i.e. when I have an application library I
> want to code a series of black-box performance tests against, persisting the
> results to compare across versions, I have not seen anything like it ;-)

Well... there are plugins for almost any continuous integration system
(like Jenkins or Atlassian's Bamboo) that will show you test times
across builds. It is the same kind of thing; JUB only magnifies the
times by repeating each test's execution multiple times (with some
warmup).

Dawid