On Tue, Apr 27, 2010 at 2:04 PM, Kenton Varda <ken...@google.com> wrote:
> Note that protobufs only encode structure. They do not do any compression.
> You should apply compression separately on top of your data if you need it.
> Note that this will add considerable CPU cost, so you must decide if it's a
> trade-off you want to make.
As it turns out, I've been collecting metrics on compression latency
and compression ratios for our messages to decide whether it's worth
it.
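For what it's worth, the measurement boils down to something like the sketch below. It's a minimal version: the byte[] is the already-serialized protobuf (e.g. the result of message.toByteArray()), and the class and method names are just placeholders.

    import java.io.ByteArrayOutputStream;
    import java.util.zip.GZIPOutputStream;

    public class CompressionProbe {
        // Gzip one serialized protobuf payload, reporting latency and ratio.
        static double measure(byte[] serialized) throws Exception {
            long start = System.nanoTime();
            ByteArrayOutputStream buf = new ByteArrayOutputStream();
            GZIPOutputStream gzip = new GZIPOutputStream(buf);
            gzip.write(serialized);
            gzip.close();  // finishes the gzip stream so buf holds the full output
            long elapsedMs = (System.nanoTime() - start) / 1000000L;
            double ratio = (double) buf.size() / serialized.length;
            System.out.println("latency=" + elapsedMs + " ms, ratio=" + ratio);
            return ratio;
        }
    }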
In Tomcat, I've set compressionMinSize to around 100K. With gzip I'm seeing a
mean latency of around 140 ms and a compression ratio of about 0.18, though
the variance is pretty big. I wasn't expecting a good compression ratio for
protobuf messages since they are fairly densely packed already, so I was
happy to see that result.
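For context, the connector settings in server.xml look roughly like this; the compressableMimeType value here is an assumption, since it depends on the content type you actually serve the protobuf payloads under:

    <!-- Enable gzip for responses larger than ~100 KB.
         application/x-protobuf is a placeholder for whatever MIME type is used. -->
    <Connector port="8080" protocol="HTTP/1.1"
               compression="on"
               compressionMinSize="102400"
               compressableMimeType="application/x-protobuf"/>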
Anyway, I don't like the added latency; on the other hand, for sufficiently
large payloads the smaller transfer seems to make up for the latency of the
network hops. I'm still tweaking variables and gathering data. If I discover
anything useful that would help anyone else here, I'll follow up.
I realize things could be improved through better API and model design rather
than by tweaking compression and so on, but I don't have the resources to
redesign everything all at once (nor would I want to).
Are there protobuf user groups, and if so, is there one in Chicago?