smaller messages approach. I've implemented this strategy, which is to break
the data set into messages that are roughly 110 bytes each. Then I pipe them
to a C++ program that reads the messages and crunches them. But I still have
a problem
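For reference, my reader loop looks roughly like this (a sketch: `Record` stands in for my actual generated type, `record.pb.h` for its header, and each message is assumed to carry a varint length prefix):

    #include <cstdint>
    #include <iostream>
    #include <google/protobuf/io/coded_stream.h>
    #include <google/protobuf/io/zero_copy_stream_impl.h>
    #include "record.pb.h"  // hypothetical generated header

    int main() {
      google::protobuf::io::IstreamInputStream raw(&std::cin);
      google::protobuf::io::CodedInputStream in(&raw);
      Record rec;  // hypothetical message type
      uint32_t size;
      while (in.ReadVarint32(&size)) {  // ReadVarint32 fails cleanly at EOF
        // Confine the parse to exactly one length-prefixed record.
        google::protobuf::io::CodedInputStream::Limit limit = in.PushLimit(size);
        rec.Clear();
        if (!rec.ParseFromCodedStream(&in) || !in.ConsumedEntireMessage())
          return 1;  // malformed or truncated record
        in.PopLimit(limit);
        // ... crunch rec here ...
      }
      return 0;
    }

(Note that CodedInputStream applies its 64MB total-bytes limit across everything read through it, not per message, so a long-running pipe may need to raise the limit or construct a fresh CodedInputStream periodically.)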
On May 17, 7:00 pm, Jason Hsueh <jas...@google.com> wrote:
> There is a default byte size limit of 64MB when parsing protocol buffers -
> if a message is larger than that, it will fail to parse. This can be
> configured if you really need to parse larger messages, but it is generally
> not recommended. Additionally, ByteSize() returns a 32-bit integer, so
> there's an implicit limit of about 2 GB on the size of data that can be
> serialized.
>
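For reference, raising that parse limit in C++ looks roughly like this (a sketch: `BigMessage` and `big_message.pb.h` are placeholders for a generated type and its header):

    #include <istream>
    #include <google/protobuf/io/coded_stream.h>
    #include <google/protobuf/io/zero_copy_stream_impl.h>
    #include "big_message.pb.h"  // hypothetical generated header

    bool ParseLarge(std::istream* input, BigMessage* msg) {
      google::protobuf::io::IstreamInputStream raw(input);
      google::protobuf::io::CodedInputStream coded(&raw);
      // Raise the default 64MB cap to 512MB; the second argument is
      // the threshold at which a warning is logged.
      coded.SetTotalBytesLimit(512 << 20, 64 << 20);
      return msg->ParseFromCodedStream(&coded) && coded.ConsumedEntireMessage();
    }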
> You can certainly use protocol buffers in large data sets, but it's not
> recommended to have your entire data set be represented by a single message.
> Instead, see if you can break it up into smaller messages.
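For instance, writing a data set as a stream of small, length-delimited messages might look like this (a sketch: `Record`, `record.pb.h`, and the `id` field are hypothetical):

    #include <cstdint>
    #include <iostream>
    #include <google/protobuf/io/coded_stream.h>
    #include <google/protobuf/io/zero_copy_stream_impl.h>
    #include "record.pb.h"  // hypothetical generated header

    // Write one record with a varint length prefix so the reader
    // knows where each message ends.
    void WriteDelimited(const Record& rec,
                        google::protobuf::io::CodedOutputStream* out) {
      out->WriteVarint32(static_cast<uint32_t>(rec.ByteSize()));
      rec.SerializeToCodedStream(out);
    }

    int main() {
      google::protobuf::io::OstreamOutputStream raw(&std::cout);
      google::protobuf::io::CodedOutputStream out(&raw);
      for (int i = 0; i < 3; ++i) {
        Record rec;
        rec.set_id(i);  // hypothetical field
        WriteDelimited(rec, &out);
      }
      return 0;  // CodedOutputStream flushes on destruction
    }

(In Java, writeDelimitedTo()/parseDelimitedFrom() provide this same framing out of the box.)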
>
> On Mon, May 17, 2010 at 1:05 PM, sanikumbh <saniku...@gmail.com> wrote:
> > I wanted to get some opinions on large data sets and protocol buffers.
> > The Protocol Buffers project page by Google says that for data larger
> > than 1 megabyte, one should consider something different, but it doesn't
> > mention what would happen if one crosses this limit. Are there any known
> > failure modes when it comes to large data sets?
> > What are your observations and recommendations from your experience on
> > this front?