Is it possible to write a benchmark that has a pass/fail condition? I would like to give users the option to run benchmarks as a pass/fail test in CI. If some calculated benchmark value exceeds a threshold, I would like the binary to exit with a non-zero exit code so that it can be identified by a CI test runner as a failure.

I realize I could run the binary and then parse the JSON output (or use compare.py), but it would be convenient to do this in a single C++ binary. Is there a way for user code to access the benchmark value, or otherwise specify a hard threshold for a benchmark?
On Monday, May 6, 2019 at 2:47:15 PM UTC-7, Dominic Hamon wrote:
> It's come up before (internally), but what we generally prefer to do (due to potential noise in benchmarks) is to monitor trends and statistically significant anomalies, which are hard to capture in the library.
>
> If you wanted to do something like this, I imagine you could define a custom reporter (an instance of BenchmarkReporter* can be passed at runtime, e.g. via benchmark::RunSpecifiedBenchmarks). That reporter receives information about each group of benchmark runs. You could couple this with an expectation for the given group, I suppose.
>
> Dominic Hamon | Google
> There are no bad ideas; only good ideas that go horribly wrong.
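
For anyone landing here later: below is a minimal sketch of the custom-reporter approach Dominic describes. The ThresholdReporter name, the 100 ns budget, and the choice to key off per-iteration CPU time are all illustrative assumptions, not library conventions; the real extension points used are ConsoleReporter, BenchmarkReporter::Run, and benchmark::RunSpecifiedBenchmarks(&reporter). It wraps ConsoleReporter so normal output is preserved, records whether any run blew its budget, and lets a custom main() turn that into a non-zero exit code for CI.

#include <benchmark/benchmark.h>

#include <vector>

// Hypothetical reporter: keeps the normal console output but remembers
// whether any benchmark's per-iteration CPU time exceeded a hard budget.
class ThresholdReporter : public benchmark::ConsoleReporter {
 public:
  explicit ThresholdReporter(double max_cpu_time)
      : max_cpu_time_(max_cpu_time) {}

  void ReportRuns(const std::vector<Run>& runs) override {
    for (const Run& run : runs) {
      // GetAdjustedCPUTime() is the per-iteration CPU time the reporters
      // print, expressed in the run's time unit (nanoseconds by default).
      // Real code would likely also skip aggregate runs (mean/median/stddev)
      // when repetitions are enabled.
      if (run.GetAdjustedCPUTime() > max_cpu_time_) failed_ = true;
    }
    ConsoleReporter::ReportRuns(runs);  // forward for normal console output
  }

  bool failed() const { return failed_; }

 private:
  double max_cpu_time_;  // budget, in the benchmark's time unit
  bool failed_ = false;
};

static void BM_Example(benchmark::State& state) {
  int x = 0;
  for (auto _ : state) {
    benchmark::DoNotOptimize(x += 1);
  }
}
BENCHMARK(BM_Example);

// Custom main() instead of BENCHMARK_MAIN(), so the exit code can reflect
// the pass/fail result.
int main(int argc, char** argv) {
  benchmark::Initialize(&argc, argv);
  if (benchmark::ReportUnrecognizedArguments(argc, argv)) return 1;
  ThresholdReporter reporter(/*max_cpu_time=*/100.0);  // 100 ns: made-up budget
  benchmark::RunSpecifiedBenchmarks(&reporter);
  return reporter.failed() ? 1 : 0;  // non-zero exit flags failure to the CI runner
}

A fixed budget like this will be noisy across CI machines, which is exactly the caveat above about preferring trend monitoring; a threshold with generous headroom is the pragmatic middle ground.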