Hello --
So this might just be my ignorance about gzip and compression in general, but I'm confused by what I'm seeing:
Looking at a JSON API, we've generated an equivalent FlatBuffers schema. Using flatc, we generate the FlatBuffers binary from a sample JSON payload. The FlatBuffers binary is (not surprisingly) far smaller than the JSON it was generated from.
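Roughly, the conversion step looks like this (a minimal sketch; schema.fbs and sample.json are placeholder names standing in for our actual files, and flatc is assumed to be on the PATH):

```python
import subprocess

# Placeholder names: schema.fbs is our schema, sample.json the payload.
# flatc's --binary flag packs the JSON into a FlatBuffers binary,
# writing sample.bin next to the input file.
subprocess.run(["flatc", "--binary", "schema.fbs", "sample.json"], check=True)
```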
However, if we gzip the binary and the JSON, the JSON ends up being significantly smaller.
I understand that, as a percentage of the original size, the JSON is expected to compress much more, since it has so much redundancy relative to the FlatBuffers binary representation. But I wouldn't have guessed that gzip could make the JSON smaller than the FlatBuffers binary.
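For concreteness, this is roughly how we're measuring it (a minimal sketch; file names are the placeholders from the step above, using Python's gzip module at its default compression level):

```python
import gzip
from pathlib import Path

def sizes(path: str) -> tuple[int, int]:
    """Return (raw size, gzipped size) in bytes for one file."""
    data = Path(path).read_bytes()
    return len(data), len(gzip.compress(data))

# Placeholder names from the conversion step above.
for name in ("sample.json", "sample.bin"):
    raw, packed = sizes(name)
    print(f"{name}: {raw:,} B raw -> {packed:,} B gzipped")
```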
So is this surprising? Is something about the FlatBuffers format challenging for gzip? Are there best practices for organizing schemas that make the binary representation more compressible?
(I'll post my actual schema, JSON, and exact steps to this thread later today.)
Thanks!
Justin