You can do this easily with the unsafe package if you really need to:

func struct2bytes(x *myStruct) []byte {
	return (*[unsafe.Sizeof(*x)]byte)(unsafe.Pointer(x))[:]
}

The struct pointer first becomes an arbitrary pointer, then a pointer
to a fixed-size array of the correct size, then a slice.
The conversions incur no runtime overhead.
If you look at the implementation, it uses a lot of instructions to do
bit-shifting and such. It makes sense that it would be an order of
magnitude slower than just copying the data out of memory.
Andrew
The slowness comes from using reflect, which
allocates a lot of garbage as it walks the structures.
The bit shifting is essentially free and not the bottleneck.
Encoding/binary is for convenience, not for speed.
Russ
David
On Mon, Jul 12, 2010 at 6:52 PM, Russ Cox <r...@golang.org> wrote:
> The slowness comes from using reflect, which
> allocates a lot of garbage as it walks the structures.
> The bit shifting is essentially free and not the bottleneck.
> Encoding/binary is for convenience, not for speed.
>
> Russ
>
--
David Roundy
Yes. But I don't know exactly what that would mean.
The current reflect package is version 2 or version 3
depending on how you count, and we've used it enough
to know that there needs to be another try at some point,
and that the new version would ideally allocate less
and be easier to use. But we don't really know what it
would look like yet and are mostly letting the idea
percolate while we do other things.
Russ