Keith, thank you very much for your feedback, it is highly appreciated!
With this in mind, it's time for lies, damned lies, and statistics: benchmarking the three different implementations below.
func (r *Reader) Uint32() uint32 {
	if r.err != nil {
		return 0
	}
	var s struct {
		_ [0]uint32
		b [4]byte
	}
	_, r.err = r.buff.Read(s.b[:])
	if r.err != nil {
		return 0
	}
	return *(*uint32)(unsafe.Pointer(&s.b[0]))
}
func (r *Reader) Uint32X() uint32 {
	if r.err != nil {
		return 0
	}
	var v uint32
	_, r.err = r.buff.Read((*[4]byte)(unsafe.Pointer(&v))[:])
	if r.err != nil {
		return 0
	}
	return v
}
func (r *Reader) Uint32N() uint32 {
	if r.err != nil {
		return 0
	}
	// This 4-byte slice accounts for the allocation in the benchmark below.
	b := make([]byte, 4)
	_, r.err = r.buff.Read(b)
	if r.err != nil {
		return 0
	}
	return hostnative.Uint32(b)
}
The benchmarking results using "go test -bench=. -benchtime=60s -benchmem .":
cpu: Intel(R) Core(TM) i7-4770 CPU @ 3.40GHz
BenchmarkReadUint32
BenchmarkReadUint32-8 1000000000 5.974 ns/op 0 B/op 0 allocs/op
BenchmarkReadUint32X
BenchmarkReadUint32X-8 1000000000 5.977 ns/op 0 B/op 0 allocs/op
BenchmarkReadUint32N
BenchmarkReadUint32N-8 1000000000 20.81 ns/op 4 B/op 1 allocs/op
The two "unsafe" contenders are absolutely neck and neck, so in terms of readability and maintainability your proposed variant wins for me. And as I suspected, encoding/binary takes almost four times as long as the first two implementations, throwing a needless heap allocation into the bargain.