You're comparing the encoded values, nothing more. Gob transmits type information as well as the data. Create a new encoder for the second encode (so it no longer assumes that the "receiver" has the type info and will re-send it) and I think you'll find that it works.
If you compare the decoded value with the value you encoded, you should find them the same regardless of the wire encoding, which shouldn't be of much concern as long as it works.
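For illustration, here is a minimal sketch of both points, using a made-up Point type: re-encoding on the same encoder yields different (shorter) bytes because the type descriptor is only sent once, while a round trip shows the values themselves survive intact.

```go
package main

import (
	"bytes"
	"encoding/gob"
	"fmt"
	"log"
)

// Point is a made-up example type.
type Point struct{ X, Y int }

func main() {
	p := Point{1, 2}

	var buf bytes.Buffer
	enc := gob.NewEncoder(&buf)

	// First Encode: the stream begins with a descriptor for Point.
	if err := enc.Encode(p); err != nil {
		log.Fatal(err)
	}
	n1 := buf.Len()

	// Second Encode on the SAME encoder: the descriptor is not re-sent,
	// so these bytes are shorter and cannot equal the first encoding.
	if err := enc.Encode(p); err != nil {
		log.Fatal(err)
	}
	n2 := buf.Len() - n1
	fmt.Println(n1, n2) // n2 < n1

	// The meaningful check is a round trip: decode and compare values.
	dec := gob.NewDecoder(&buf)
	var q Point
	if err := dec.Decode(&q); err != nil {
		log.Fatal(err)
	}
	fmt.Println(p == q) // true
}
```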
Even if I create another encoder I still don't get equal encoded strings.
I presumed that the best way to verify the encoding process was to encode a value once, decode it, and re-encode the decoded value, so the first and second encodings should be equal. Since nobody else seems to be having problems with the gob encoder, I presume it must be something on my side.
That's very good to know. So for validation I will only compare the decoded value. I was avoiding this because the values are slices of structs that contain slices of elements that have slices inside, so the comparison will involve some for loops :)
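For what it's worth, the standard library's reflect.DeepEqual walks nested slices and structs recursively, so the comparison needn't be hand-written loops. A minimal sketch, with made-up nested types:

```go
package main

import (
	"bytes"
	"encoding/gob"
	"fmt"
	"log"
	"reflect"
)

// Made-up types mirroring "slices of structs that contain slices".
type Item struct{ Tags []string }
type Record struct{ Items []Item }

func main() {
	in := []Record{
		{Items: []Item{{Tags: []string{"a", "b"}}, {Tags: []string{"c"}}}},
	}

	var buf bytes.Buffer
	if err := gob.NewEncoder(&buf).Encode(in); err != nil {
		log.Fatal(err)
	}

	var out []Record
	if err := gob.NewDecoder(&buf).Decode(&out); err != nil {
		log.Fatal(err)
	}

	// DeepEqual recurses through slices and struct fields.
	fmt.Println(reflect.DeepEqual(in, out)) // true
}
```

(One caveat I'm aware of: gob does not preserve the distinction between nil and empty slices, so DeepEqual can report a mismatch for values that contain empty ones.)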
s/store/encode/
> - a single encoder instance should not be used to encode multiple types
That's incorrect. Multiple types work just fine.
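A quick sketch of one encoder/decoder pair carrying three different types over a single stream (Point is a made-up type):

```go
package main

import (
	"bytes"
	"encoding/gob"
	"fmt"
	"log"
)

type Point struct{ X, Y int }

func main() {
	var buf bytes.Buffer
	enc := gob.NewEncoder(&buf)

	// One encoder, three different types on the same stream.
	for _, v := range []interface{}{Point{1, 2}, "hello", 42} {
		if err := enc.Encode(v); err != nil {
			log.Fatal(err)
		}
	}

	// One decoder reads them back in the same order.
	dec := gob.NewDecoder(&buf)
	var p Point
	var s string
	var n int
	for _, v := range []interface{}{&p, &s, &n} {
		if err := dec.Decode(v); err != nil {
			log.Fatal(err)
		}
	}
	fmt.Println(p, s, n) // {1 2} hello 42
}
```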
> - if it is used for storage the first encoded and decoded value has extra
> data so it will not be in the same format as the rest of the stored objects.
Yes, the type numbers used in the encoding are essentially
arbitrary, so you shouldn't depend on the precise contents of the
representation anyway.
> A work around this was to train the encoder and decoder before starting to
> use it for serialization of real objects (just encode and decode an instance
> of the data to be stored using that encoder).
That's misusing the package. If you need this property, and you almost
certainly don't, gobs are not for you.
> - even if they have somewhat similar interfaces, the gob and json encoders
> have very different ways of working and of course very different formats of
> data.
True.
> - gob encoding is much faster than json encoding
True.
> - gob creation of encoder/decoder + encoding/decoding process is slower than
> the same process using json. So if you create an encoder/decoder every time
> you encode, gob is slower.
Perhaps, but the cost of creation amortizes to near zero if you encode
a reasonable amount of data.
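A minimal benchmark sketch of that amortization, with a made-up Msg payload: the first function pays the NewEncoder and type-descriptor cost on every value, the second pays it once.

```go
package gobbench

import (
	"bytes"
	"encoding/gob"
	"testing"
)

// Msg is a made-up payload type.
type Msg struct {
	A, B int
	S    string
}

// A fresh encoder per value pays the NewEncoder cost and re-sends the
// type descriptor on every Encode.
func BenchmarkEncoderPerValue(b *testing.B) {
	m := Msg{1, 2, "hello"}
	var buf bytes.Buffer
	for i := 0; i < b.N; i++ {
		buf.Reset()
		if err := gob.NewEncoder(&buf).Encode(m); err != nil {
			b.Fatal(err)
		}
	}
}

// A single long-lived encoder sends the descriptor once; creation and
// setup costs amortize to near zero over the stream.
func BenchmarkSharedEncoder(b *testing.B) {
	m := Msg{1, 2, "hello"}
	var buf bytes.Buffer
	enc := gob.NewEncoder(&buf)
	for i := 0; i < b.N; i++ {
		buf.Reset() // the encoder's type state lives in enc, not the buffer
		if err := enc.Encode(m); err != nil {
			b.Fatal(err)
		}
	}
}
```

Run with `go test -bench .` to compare the two.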
> - there is some extra information about gob encoding
> here: http://blog.golang.org/2011/03/gobs-of-data.html
>
> Hope my conclusions are correct.
They are not. Did you read this?
http://blog.golang.org/2011/03/gobs-of-data.html
-rob
Thank you for taking the time to correct my conclusions. I am using gob to store objects in a key-value store (Redis or Kyoto Cabinet). The writing (encoding) speed is not very important, but the reading must be very fast. Because json is slow and gob did not work (my mistake), I used my own encoding/decoding of objects.

I studied the gob encoding some more to get rid of my serialization/de-serialization code. Before preparing (training) the gob encoder/decoder, I kept getting "extra data in buffer" from the gob decoder. I am now using a separate (single-instance) encoder/decoder for every type that I serialize, and it seems to produce consistent content, so the decoding is always correct.

Now my code finally works using the gob encoder, but reading your answers I understand that I am wrong again. I will read the http://blog.golang.org/2011/03/gobs-of-data.html document again to see what I am missing.
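One way to reconcile gob with per-key storage, if I understand the use case: make every stored blob a complete, self-contained gob stream by using a fresh encoder/decoder per value. Each blob then carries its own type descriptor (some size overhead), but any blob can be decoded in isolation, with no leftover stream state to trip over. A sketch (marshal/unmarshal and User are made-up names):

```go
package main

import (
	"bytes"
	"encoding/gob"
	"fmt"
	"log"
)

// marshal produces a self-contained gob blob: the fresh encoder writes
// the type descriptor into every blob, so each one decodes on its own.
func marshal(v interface{}) ([]byte, error) {
	var buf bytes.Buffer
	if err := gob.NewEncoder(&buf).Encode(v); err != nil {
		return nil, err
	}
	return buf.Bytes(), nil
}

// unmarshal decodes one blob with a fresh decoder, so no state from a
// previous decode can leak into this one.
func unmarshal(data []byte, v interface{}) error {
	return gob.NewDecoder(bytes.NewReader(data)).Decode(v)
}

type User struct{ Name string } // made-up stored type

func main() {
	blob, err := marshal(User{Name: "gopher"})
	if err != nil {
		log.Fatal(err)
	}
	// blob would go into the key-value store; here we just decode it back.
	var u User
	if err := unmarshal(blob, &u); err != nil {
		log.Fatal(err)
	}
	fmt.Println(u.Name) // gopher
}
```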
Apparently I'm crazy on the first point... but yes on the second: http://code.google.com/p/appengine-go/
I guess it's just repeated invocations of gob.NewEncoder(w).Encode(v)