Scaling


Martin Grajcar

Aug 25, 2013, 4:28:05 PM8/25/13
to cal...@googlegroups.com
Is it possible to somehow tell Caliper that the current experiment does more work? For example, when benchmarking queries against a data structure with its size as a parameter, you typically get something like

size  query_time  fill_time
   1           1          1
  10           2         20
 100           4        300
1000           7       4000

which gives numbers that are hard to compare and graphs that are unusable. If there were a way to tell Caliper that filling N elements does N times as much work (so Caliper should scale the measurement down by N), the results would be much nicer. It can be done manually by fiddling with the reps or by doing more work in all experiments, but that is error-prone and may break when the figures get big.
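To illustrate the normalization I have in mind, here is a plain-Java sketch (no Caliper API involved; the numbers are the fill_time column from the table above): dividing each raw time by the size turns it into a per-element cost that is directly comparable across rows.

```java
public class ScaleDemo {
    public static void main(String[] args) {
        int[] sizes = {1, 10, 100, 1000};
        double[] fillTimes = {1, 20, 300, 4000}; // raw fill_time values from the table above

        for (int i = 0; i < sizes.length; i++) {
            // Filling N elements is N units of work, so scale the raw time down by N.
            double perElement = fillTimes[i] / sizes[i];
            System.out.printf("size=%d  fill_time/size=%.1f%n", sizes[i], perElement);
        }
    }
}
```

The per-element numbers (1.0, 2.0, 3.0, 4.0) grow slowly instead of by three orders of magnitude, which is exactly what makes the scaled graphs readable.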

This scaling could also be useful for the concurrent benchmarks mentioned in other threads.

Martin Grajcar

Sep 9, 2013, 10:41:12 AM9/9/13
to cal...@googlegroups.com
To show what I mean: Compare the original and scaled results for com.google.common.collect.SetCreationBenchmark.

The former clearly shows that creating bigger collections takes longer. Nice, but pretty old news. You can't see anything else, can you?

The latter is much more usable; for example, it shows that creating an ImmutableSet slows down when it gets really big.

I did it by adding a public static RuntimeWorker.setWeightPerRep(double) method, which feels pretty hacky, so I'd like to see a proper solution.
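For concreteness, a per-rep weight hook might look something like the following. This is a hypothetical sketch of the idea only; WeightedTimer, normalizedNanos, and everything else here are my own names, not actual Caliper internals.

```java
// Hypothetical sketch of a per-rep weight hook; none of these names are real Caliper API.
public class WeightedTimer {
    private double weightPerRep = 1.0;

    // The benchmark would set this to the amount of work per rep,
    // e.g. the collection size N for a fill benchmark.
    public void setWeightPerRep(double weight) {
        this.weightPerRep = weight;
    }

    // The reported figure is the raw elapsed time scaled down by
    // both the rep count and the work done per rep.
    public double normalizedNanos(double elapsedNanos, int reps) {
        return elapsedNanos / (reps * weightPerRep);
    }
}
```

With a hook like this, a fill benchmark over 1000 elements that takes 4000 ns per rep would report 4.0 ns per element, directly comparable to the size-1 case.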