On 2012/8/29 Dustin <dsal...@gmail.com> wrote:
> Oh great, so this closes in quite a bit. Oddly, I tried initializing in
> the bench (before resetting the timer) and it had less of an effect than
> doing it in init(). I get this:
>
> BenchmarkJSONParser    1000    2419239 ns/op    9.37 MB/s
> BenchmarkGOBOrig        500    5265846 ns/op    4.64 MB/s
> BenchmarkGOBDave        500    3067748 ns/op    7.96 MB/s
> BenchmarkGOBDaveRdr     500    3073553 ns/op    7.94 MB/s
>
> Updated the gist: https://gist.github.com/3505647
>
> At this point, I think we can downgrade from "significantly", but unless
> it's faster, it's not a helpful path. Thanks for the pointers.
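
For reference, a minimal sketch of the "initialize in the bench before resetting the timer" variant described above (the benchmark name and sample document here are placeholders, not the actual code in the gist): everything built before b.ResetTimer() is excluded from the measurement, so only the per-iteration decode is timed.

    package bench

    import (
        "bytes"
        "encoding/gob"
        "testing"
    )

    // BenchmarkGOBSketch is a hypothetical benchmark showing the setup shape:
    // all work done before b.ResetTimer() is excluded from the timing.
    func BenchmarkGOBSketch(b *testing.B) {
        // sampleDoc stands in for whatever document the real benchmark decodes.
        sampleDoc := map[string]interface{}{"name": "x", "count": 1}
        var buf bytes.Buffer
        if err := gob.NewEncoder(&buf).Encode(sampleDoc); err != nil {
            b.Fatal(err)
        }
        data := buf.Bytes()
        b.SetBytes(int64(len(data)))

        b.ResetTimer() // exclude all of the setup above from the measured time
        for i := 0; i < b.N; i++ {
            var out map[string]interface{}
            if err := gob.NewDecoder(bytes.NewReader(data)).Decode(&out); err != nil {
                b.Fatal(err)
            }
        }
    }
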
Calling gob.NewDecoder is a very costly operation; gob is best used
for streaming a lot of data. Maybe it could be made faster. The last
time I profiled it, it spent a lot of time initializing type
information for basic types that seemed unrelated to the actual data I
was decoding, but that time was more like 100-200µs, not milliseconds.
It also works much faster on structs than on maps. Also, using
interface{} as the value type is unfortunate, since neither strings
nor slices fit inside an interface value; they need an extra memory
allocation. I think you are mostly benchmarking the hashmap
implementation and the conversion to interface{}.
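
A minimal sketch of both points, with a hypothetical Record type standing in for the real document: one encoder/decoder pair is reused across a whole stream of values, so the cost of gob.NewDecoder and the type description is paid once rather than per message, and decoding targets a concrete struct instead of map[string]interface{}.

    package main

    import (
        "bytes"
        "encoding/gob"
        "fmt"
        "log"
    )

    // Record is a stand-in for the real document shape.
    type Record struct {
        Name  string
        Count int
    }

    func main() {
        // Encode a stream of records with a single encoder; the type
        // description is transmitted once, up front, not once per value.
        var buf bytes.Buffer
        enc := gob.NewEncoder(&buf)
        for i := 0; i < 3; i++ {
            if err := enc.Encode(Record{Name: "x", Count: i}); err != nil {
                log.Fatal(err)
            }
        }

        // Decode the whole stream with a single decoder; gob.NewDecoder's
        // setup cost is paid once instead of on every message.
        dec := gob.NewDecoder(&buf)
        for {
            var r Record
            if err := dec.Decode(&r); err != nil {
                break // io.EOF once the stream is exhausted
            }
            fmt.Println(r.Name, r.Count)
        }
    }
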
Rémy.