Why can compare.py be used with different iteration counts?

Chan Lewis

Sep 24, 2019, 12:03:37 AM
to benchmark-discuss
Hi. What I often do is use compare.py to compare different benchmarks for the same task. But I wonder why the following command makes sense.

./compare.py filters ./a.out BM_memcpy BM_copy

The old benchmark and the new one often have different iteration counts, so does the calculation (new-old)/old still make sense? Shouldn't we compare the two cases under the same circumstances?

Roman Lebedev

Sep 24, 2019, 7:51:55 AM
to Chan Lewis, benchmark-discuss
You *are* comparing them under the same circumstances: they have both run for the total time they considered sufficient to get noiseless results, which is usually 0.5s by default.
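If you want to control that floor yourself, the per-benchmark minimum time can be raised; a minimal sketch, assuming a BM_memcpy written against Google Benchmark's usual registration API:

#include <benchmark/benchmark.h>
#include <cstring>
#include <vector>

static void BM_memcpy(benchmark::State& state) {
  std::vector<char> src(4096, 'x'), dst(4096);
  for (auto _ : state) {
    std::memcpy(dst.data(), src.data(), src.size());
    benchmark::DoNotOptimize(dst.data());
  }
}
// Keep iterating until at least 2 seconds of measurement time has
// accumulated instead of the ~0.5s default; the iteration count is
// chosen automatically to satisfy that floor.
BENCHMARK(BM_memcpy)->MinTime(2.0);

BENCHMARK_MAIN();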

Roman.

Chan Lewis

Sep 27, 2019, 1:15:56 AM
to benchmark-discuss
Thanks for replying. But I want to compare the time taken under the same iteration counts, not the other way around. Time is our concern, isn't it?

On Tuesday, September 24, 2019 at 7:51:55 PM UTC+8, Roman Lebedev wrote:

Dominic Hamon

Sep 27, 2019, 4:23:06 AM
to Chan Lewis, benchmark-discuss
Time per iteration is the main concern, and it becomes statistically significant once enough iterations have been run, i.e., once a minimum time has passed.
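If you really do want both benchmarks to execute the same fixed number of iterations, you can pin the count explicitly; a minimal sketch, reusing the benchmark names from your earlier command (the count of 1000000 is just an arbitrary example):

// Pin both benchmarks to the same fixed iteration count. This bypasses
// the usual "keep iterating until a minimum time has passed" logic, so
// choose a count large enough that the result is not noise-dominated.
BENCHMARK(BM_memcpy)->Iterations(1000000);
BENCHMARK(BM_copy)->Iterations(1000000);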

Dominic Hamon | Google
There are no bad ideas; only good ideas that go horribly wrong.


Chan Lewis

Sep 27, 2019, 10:47:36 PM
to benchmark-discuss
I see. So when comparing different benchmarks of one program, do I need to manually do the time/iteration calculation myself, or can I just compare their times regardless of their different iteration counts?

BTW, I want to compare a templated benchmark parameterized with different types, like the following code:

template <typename Set>
void bm_set(benchmark::State& state) {
  for (auto _ : state) {
    // ... exercise a Set instance here (body elided) ...
  }
}

BENCHMARK_TEMPLATE(bm_set, std_set);
BENCHMARK_TEMPLATE(bm_set, non_std_set);

I want to use compare.py's filters function to compare the performance of std_set and non_std_set, but I don't know how to write the command. There's no relevant information in the docs.
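My best guess, mirroring the filters form from my first command, is the following (with ./a.out standing in for my benchmark binary, and assuming the registered names come out as bm_set<std_set> and bm_set<non_std_set>), but I'm not sure it's right:

./compare.py filters ./a.out 'bm_set<std_set>' 'bm_set<non_std_set>'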



On Friday, September 27, 2019 at 4:23:06 PM UTC+8, Dominic Hamon wrote: