Benchmarks vs. dead code elimination

Caleb Spare

Apr 10, 2017, 5:50:03 PM
to golang-dev
Most Go benchmark functions do

for i := 0; i < b.N; i++ {
    f()
}

The testing documentation has this example:

func BenchmarkBigLen(b *testing.B) {
    big := NewBig()
    b.ResetTimer()
    for i := 0; i < b.N; i++ {
        big.Len()
    }
}

and in the standard library there are many similar benchmarks such as
this one from encoding/base64:

func BenchmarkEncodeToString(b *testing.B) {
    data := make([]byte, 8192)
    b.SetBytes(int64(len(data)))
    for i := 0; i < b.N; i++ {
        StdEncoding.EncodeToString(data)
    }
}

In all of these, the result of the function being benchmarked is
unused, so a sufficiently smart compiler could turn the function call
into a no-op. (It seems that the compiler doesn't do this today,
though.)

To protect against this, it is common to introduce a package-scoped
sink variable to which the result is assigned. Here's an example from
math/big:

var sink string

func BenchmarkDecimalConversion(b *testing.B) {
    for i := 0; i < b.N; i++ {
        for shift := -100; shift <= +100; shift++ {
            var d decimal
            d.init(natOne, shift)
            sink = d.String()
        }
    }
}

Assuming that a package-scoped sink var is the best way to write
benchmarks, I have two questions:

- Should we document this recommendation? Existing benchmark examples
(as well as the https://blog.golang.org/subtests blog post) don't use
a sink, and as far as I know no official Go documentation points out
this pitfall of writing benchmarks.
- Should go vet report benchmarks that don't assign the result of the
benchmarked function in some blessed way? If such function calls can
be eliminated by the compiler, I don't think the benchmark code can be
considered "correct".

But is a package-scoped sink var even the way to go? As a non-expert
in compilers, I don't see why the compiler can't notice that the sink
var is write-only and replace the assignments with '_ =', at which
point it could eliminate any side-effect-free function calls on the
RHS of those assignments. So it seems to me that by using the
package-scoped sink vars, the benchmark author is implicitly saying "I
think the compiler is likely to become smart enough to do A, but never
smart enough to do B."

Here are two straw man alternatives to package-scoped sink vars:

- Document that the testing package ensures that functions called
inside the benchmark loop are not eliminated. Today we wouldn't have
to make any changes, and we could figure it out later once we
introduce the optimization. (One way would be to have `go test`
rewrite benchmark code to use sinks or other mechanisms.) This idea is
from dominikh.
- Similar to runtime.KeepAlive, add a function like runtime.Use or
testing.Use which ensures that a result will always be used (sketched
below). Today it can be a no-op. Document that benchmark results
should always be marked as used with this function. (Or can
runtime.KeepAlive be co-opted for this purpose, even?)
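
For concreteness, the runtime.KeepAlive variant might look something
like this sketch (with "runtime" imported; today KeepAlive is only
documented as a liveness hint for finalizers, so whether it actually
prevents elimination is exactly the open question):

func BenchmarkEncodeToString(b *testing.B) {
    data := make([]byte, 8192)
    b.SetBytes(int64(len(data)))
    for i := 0; i < b.N; i++ {
        s := StdEncoding.EncodeToString(data)
        // Hypothetically marks s (and the call that produced it) as used.
        runtime.KeepAlive(s)
    }
}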

If we do introduce an optimization that can eliminate benchmarked
functions, it will break many benchmarks in the standard library and
in the wild, so I think it's worth deciding on an official
recommendation sooner rather than later. If we decide that people need
to use package-scoped sink vars or runtime.Use or something else to
write correct benchmarks, it would be good to give everyone a few
cycles to fix their code before the optimization goes in.

-Caleb

Robert Griesemer

Apr 10, 2017, 5:57:57 PM
to Caleb Spare, golang-dev
Some of the newer benchmarks do use a package-scoped sink variable, and as long as it is exported, a compiler won't be able to eliminate the assignment (at least not without inter-package analysis combined with linker optimizations, which, while perhaps possible, is very unlikely).

The reason the existing examples and code don't use sinks is that until recently we didn't have to worry about it; with the new optimizing backend, we now have cases where a benchmark fails to measure the correct value if we don't use the result.

We could have a function testing.Use(x interface{}) for that purpose (though that may cause allocation depending on the value "used" and affect the measurement). Perhaps documenting the use of an exported sink as good practice is enough.
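
For illustration, an exported-sink version of the math/big benchmark
above might look like this (a sketch only; the name Sink is arbitrary,
and whether export is truly required depends on how much cross-package
analysis the compiler does):

var Sink string // exported, so the compiler can't easily prove the store is dead

func BenchmarkDecimalConversion(b *testing.B) {
    for i := 0; i < b.N; i++ {
        for shift := -100; shift <= +100; shift++ {
            var d decimal
            d.init(natOne, shift)
            Sink = d.String()
        }
    }
}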

- gri


Russ Cox

Apr 10, 2017, 7:16:59 PM
to Robert Griesemer, Caleb Spare, golang-dev
Personally I'd rather not see sinks appear everywhere until they are needed. In many cases they won't be, and the code is clearer without them. My advice is to wait until the benchmark is clearly optimized away and only then put them in. The details of the sink can depend on the context. If you have a function returning an int, it's fine to sum them up and then assign the result to a global, for example. A general func(interface{}) won't be appropriate for non-pointer values without compiler support, and that seems like overkill.
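
For example, the int-returning case might be handled like this sketch
(reusing the BenchmarkBigLen example from the testing docs; total is an
arbitrary name):

var total int

func BenchmarkBigLen(b *testing.B) {
    big := NewBig()
    b.ResetTimer()
    n := 0
    for i := 0; i < b.N; i++ {
        n += big.Len()
    }
    // Assign the sum to a package-scoped variable so the calls can't be
    // treated as dead code.
    total = n
}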

Russ

Ian Lance Taylor

Apr 10, 2017, 7:20:44 PM
to Russ Cox, Robert Griesemer, Caleb Spare, golang-dev
On Mon, Apr 10, 2017 at 4:16 PM, Russ Cox <r...@golang.org> wrote:
>
> Personally I'd rather not see sinks appear everywhere until they are needed.
> In many cases they won't be, and the code is clearer without them. My advice
> is to wait until the benchmark is clearly optimized away and only then put
> them in. The details of the sink can depend on the context. If you have a
> function returning an int, it's fine to sum them up and then assign the
> result to a global, for example. A general func(interface{}) won't be
> appropriate for non-pointer values without compiler support, and that seems
> like overkill.

Note that gccgo will already discard assignments to an unexported
variable that is never read, among other optimizations. (I suppose
that's OK with me, since it means that gccgo benchmarks will sometimes
look a lot better.) I agree that there is no single approach that
works for micro-benchmarks; you just have to be aware of the various
pitfalls.

Ian