Performance of large 'byte' messages


Mohamed Koubaa

Jun 30, 2016, 2:35:13 PM
to prot...@googlegroups.com
Hello,

I'm using the official proto3 cpp project.

My organization is interested in using protocol buffers to exchange messages between services.  We solve physics simulation problems and deal with a mix of structured metadata and large amounts of numerical data (on the order of 1-10GB).

I ran some quick tests to investigate the feasibility of doing this with protobuf.

message ByteContainer {
  string name = 1;
  bytes payload = 2;
  string other_data = 3;
}

What I found was surprising.  Here are the relative serialization speeds of a bytes payload of 800 million bytes:
  • resizing a std::vector<uint8_t> to 800,000,000: 416 ms
  • memcpy of an initialized char* (named buffer) of the same size into that vector: 190 ms
  • byte_container.set_payload(buffer, length): 1004 ms
  • serializing the protobuf: 2000 ms
  • deserializing the protobuf: 1800 ms
I understand that protobufs are not intended for messages of this scale (the documentation warns to use messages under 1MB), and that protobufs must use some custom memory allocation that is optimized in a different direction.
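For reference, a minimal sketch of the kind of benchmark described above (not the exact code behind those numbers; the generated header name and the timing harness are assumptions, and the setter follows the .proto shown):

#include <cstdint>
#include <cstring>
#include <string>
#include <vector>

#include "byte_container.pb.h"  // assumed: generated from the message above

void RunBenchmarkSteps(const char* buffer, size_t length) {
  // 1. Resize a plain vector (baseline allocation cost).
  std::vector<uint8_t> vec;
  vec.resize(length);

  // 2. memcpy the pre-initialized buffer into it (baseline copy cost).
  std::memcpy(vec.data(), buffer, length);

  // 3. Copy the same buffer into the bytes field of the message.
  ByteContainer byte_container;
  byte_container.set_payload(buffer, length);

  // 4. Serialize the whole message.
  std::string wire;
  byte_container.SerializeToString(&wire);

  // 5. Parse it back into a fresh message.
  ByteContainer parsed;
  parsed.ParseFromString(wire);
}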

I think that for bytes fields, it is reasonable to expect performance on the same order of magnitude as memcpy.  This is the case with Avro (although we really, really don't like the Avro C++ API).

Is it possible to fix this in the proto library?  If not for the general 'bytes' type, what if we add a tag like:

bytes payload = 2 [huge];


Thanks!

Mohamed Koubaa
Software Developer
ANSYS Inc

Feng Xiao

Jun 30, 2016, 2:57:21 PM
to Mohamed Koubaa, Protocol Buffers
On Thu, Jun 30, 2016 at 9:00 AM, Mohamed Koubaa <mohamed...@ansys.com> wrote:
Hello,

I'm using the official proto3 cpp project.

My organization is interested in using protocol buffers to exchange messages between services.  We solve physics simulation problems and deal with a mix of structured metadata and large amounts of numerical data (on the order of 1-10GB).

I ran some quick tests to investigate the feasibility of doing this with protobuf.

message ByteContainer {
  string name = 1;
  bytes payload = 2;
  string other_data = 3;
}

What I found was surprising.  Here are the relative serialization speeds of a bytes payload of 800 million bytes:
  • resizing a std::vector<uint8_t> to 800,000,000: 416 ms
  • memcpy of an initialized char* (named buffer) of the same size into that vector: 190 ms
  • byte_container.set_payload(buffer, length): 1004 ms
  • serializing the protobuf: 2000 ms
  • deserializing the protobuf: 1800 ms
How did you serialize and deserialize the protobuf message? There are different APIs for different input/output types. For your case, I think ParseFromArray() and SerializeToArray() should have performance comparable to memcpy.
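A minimal sketch of that round trip might look like this (the buffer handling here is an assumption; older releases expose ByteSize() instead of ByteSizeLong()):

#include <vector>
#include "byte_container.pb.h"

bool RoundTrip(const ByteContainer& msg, ByteContainer* out) {
  const size_t size = msg.ByteSizeLong();  // exact serialized size
  std::vector<char> buf(size);
  if (!msg.SerializeToArray(buf.data(), static_cast<int>(size))) return false;
  return out->ParseFromArray(buf.data(), static_cast<int>(size));
}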
 
I understand that protobufs are not intended for messages of this scale (the documentation warns to use messages under 1MB), and that protobufs must use some custom memory allocation that is optimized in a different direction.

I think that for bytes fields, it is reasonable to expect performance on the same order of magnitude as memcpy.  This is the case with Avro (although we really, really don't like the Avro C++ API).

Is it possible to fix this in the proto library?  If not for the general 'bytes' type, what if we add a tag like:

bytes payload = 2 [huge];


There is already an option like this: ctype. It allows you to declare the actual C++ type used for a string/bytes field. For example:

bytes payload = 2 [ctype = STRING_PIECE];  // StringPiece is basically pair<char*, size_t>.

And internally we have a ParseFromArrayWithAliasing() method that will just make the StringPiece field point to the input buffer without copying anything.

The other option, ctype = CORD, uses a Cord class that shares memory when you copy a Cord object and does copy-on-write.

Unfortunately we haven't had time to include this ctype support in open-source protobuf, but it is on our list (probably after 3.0.0 is released).
 


Thanks!

Mohamed Koubaa
Software Developer
ANSYS Inc


Mohamed Koubaa

Jun 30, 2016, 4:59:48 PM
to Feng Xiao, Protocol Buffers
Hi Feng,

I was using SerializeToOstream and ParseFromCodedStream.  I had to use SetTotalBytesLimit, which is not required by ParseFromArray.  Does this mean that I can use a bytes field larger than INT_MAX with ParseFromArray?
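For context, the stream-based parse looked roughly like this (the file handling and the exact limit are illustrative; depending on the protobuf version, SetTotalBytesLimit also takes a warning-threshold argument):

#include <climits>
#include <fstream>
#include <google/protobuf/io/coded_stream.h>
#include <google/protobuf/io/zero_copy_stream_impl.h>
#include "byte_container.pb.h"

bool ParseLargeMessage(const char* path, ByteContainer* msg) {
  std::ifstream in(path, std::ios::binary);
  google::protobuf::io::IstreamInputStream raw(&in);
  google::protobuf::io::CodedInputStream coded(&raw);
  // The default total-bytes limit is 64MB; raise it toward the 2GB ceiling.
  coded.SetTotalBytesLimit(INT_MAX, INT_MAX);
  return msg->ParseFromCodedStream(&coded);
}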

Using SerializeToArray and ParseFromArray, the performance has improved:
serializing is now at 700 ms, and deserializing went down to 561 ms.  That is the same order of magnitude, which is a lot better.

I tried with a ~2GB byte array to quickly estimate the scaling.  Fortunately it looks to be linear!  I wonder if the assignment step (set_payload) can also be made closer to memcpy.
  • resize vector: 1050 ms
  • memcpy: 560 ms
  • assigning the protobuf: ~2500 ms
  • serializing the protobuf: ~1500 ms
  • deserializing the protobuf: ~1800 ms

It would be great to have C++11 move semantics for this in a future version of the library.  I think this is better than the aliasing option you mention, because aliasing would require careful management of the lifetime of the memory being aliased.

Thanks!
Mohamed

Feng Xiao

Jun 30, 2016, 5:15:40 PM
to Mohamed Koubaa, Protocol Buffers
On Thu, Jun 30, 2016 at 1:59 PM, Mohamed Koubaa <mohamed...@ansys.com> wrote:
Hi Feng,

I was using SerializeToOstream and ParseFromCodedStream.  I had to use SetTotalBytesLimit, which is not required by ParseFromArray.  Does this mean that I can use a bytes field larger than INT_MAX with ParseFromArray?
Not really. You can't have a message larger than 2GB.
 

Using SerializeToArray and ParseFromArray, the performance has improved:
serializing is now at 700 ms, and deserializing went down to 561 ms.  That is the same order of magnitude, which is a lot better.

I tried with a ~2GB byte array to quickly estimate the scaling.  Fortunately it looks to be linear!  I wonder if the assignment step (set_payload) can also be made closer to memcpy.
  • resize vector: 1050 ms
  • memcpy: 560 ms
  • assigning the protobuf: ~2500 ms
  • serializing the protobuf: ~1500 ms
  • deserializing the protobuf: ~1800 ms

It would be great to have C++11 move semantics for this in a future version of the library.
It doesn't require C++11 move semantics to be efficient. For example, you can do myproto.mutable_string_field()->swap(data). We will eventually support move semantics, of course, though we don't have an ETA for that.
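A small sketch of the swap approach, using the payload field from the original message (the surrounding setup is assumed):

#include <string>
#include "byte_container.pb.h"

void MovePayloadIntoMessage(std::string* data, ByteContainer* msg) {
  // O(1) handoff: the message takes over the existing buffer; *data is left
  // holding whatever the field previously contained (typically empty).
  msg->mutable_payload()->swap(*data);
}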
 
I think this is better than the aliasing option you mention, because aliasing would require careful management of the lifetime of the memory being aliased.
Right. That's a cost some people are willing to pay for better performance, but not everyone.

Mohamed Koubaa

Jun 30, 2016, 5:29:12 PM
to Feng Xiao, Protocol Buffers
Hi Feng,

Thanks for the quick reply.  Using swap, assigning to the protobuf went down to 1300 ms, which is great.

Thanks!
Mohamed