gRPC call time overhead


Meir_Ve...@amat.com

Jul 27, 2016, 10:11:12 AM
to grp...@googlegroups.com

Hello gRPC group,

 

I’m working in the Applied Materials SW team, and we’re evaluating the possibility of replacing our CORBA-based RPC with gRPC.

My benchmark compares gRPC vs. CORBA performance while sending messages of various sizes from client to server.

The server replies immediately to each call.

Both client and server are C++ based, and I’m using Windows.

I noticed that for small messages a single RPC takes about ~20 ms between the two computers (from the client call until it gets the response).

But above a certain message size the call time jumps to ~220 ms.

I see this ~200 ms overhead for messages above ~9,500 bytes on an established connection, and for messages above ~1,070 bytes right after the connection is opened.

When performing many gRPC calls this 200 ms overhead becomes a problem.

Please see the following graph, which shows call time vs. message size in bytes.

 

 

When using CORBA (which is also TCP based) I don’t see such overhead; a call takes ~15 ms.

 

Are you familiar with such behavior?

Can you explain it?

   

Thanks

Meir Vengrover,

SW engineer, Applied Materials.

 

 

Louis Ryan

Jul 27, 2016, 8:31:16 PM
to Meir_Ve...@amat.com, grpc-io
A step function like that is pretty odd. What kind of payload are you using? Just to isolate that out could you benchmark your message serialization for the same range of sizes?

As a general rule I would expect to see a benchmark of sequential request/responses with 'typical' 1k protobufs between two normal servers on the same LAN to be sub-ms.
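
Something along those lines would do: a standalone loop, completely outside gRPC, that serializes a message with a single bytes field over the same size range. A rough sketch (the message and file names here are made up for illustration; adjust them to whatever your test proto actually defines):

// Standalone serialization benchmark sketch, no gRPC involved.
// Assumes a generated proto3 message "BytesPayload" with a single
// "bytes data = 1;" field compiled into "payload.pb.h"; both names are illustrative.
#include <chrono>
#include <iostream>
#include <string>

#include "payload.pb.h"

int main() {
  for (int i = 1; i < 100; i++) {
    BytesPayload msg;
    msg.set_data(std::string(i * 500, 'x'));

    auto begin = std::chrono::steady_clock::now();
    std::string wire;
    msg.SerializeToString(&wire);
    auto end = std::chrono::steady_clock::now();

    std::cout << "Payload " << i * 500 << " bytes, serialize "
              << std::chrono::duration<double, std::milli>(end - begin).count()
              << " ms" << std::endl;
  }
  return 0;
}

If that curve stays flat across the range, the step is coming from the transport rather than from protobuf.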


Meir_Ve...@amat.com

Jul 28, 2016, 9:06:30 AM
to lr...@google.com, grp...@googlegroups.com, Assaf_...@amat.com, Vadim_G...@amat.com, Lena_S...@amat.com

Hi Louis,

To simplify the case I tested a very simple message.

The proto file looks like:

 

rpc SimpleMessage(SimpleByteArray) returns (Empty) {}

 

message Empty {

}

 

message SimpleByteArray {

  bytes byte_array = 1;

}

 

Again I ran iterations with the message size growing from 500 bytes to 35 KB, and again around 9 KB the call time started jumping to around 220 ms.

I also noticed that even for the large messages the call time sometimes drops back to around 20 ms.

The graph looks like:

 

 

The serialization and deserialization time seems negligible; even for a large message (1 MB) the serialization takes 2 ms and the deserialization less than 1 ms.

I measured the serialization by timing the call

SerializationTraits<M>::Serialize(message, &send_buf_, &own_buf_); in call.h,

and the deserialization by timing the call

SerializationTraits<RequestType>::Deserialize(param.request, &req, param.max_message_size); in method_handler_impl.h.

 

I would appreciate it if you could point me to more sampling places in the code so I can profile gRPC and understand this issue better.

 

Thanks,

Meir

Christian Svensson

Jul 28, 2016, 9:20:46 AM
to Meir_Ve...@amat.com, lr...@google.com, grpc.io, Assaf_...@amat.com, Vadim_G...@amat.com, Lena_S...@amat.com
Hi,

Would you mind sharing the full code for server / client you're using as well? That way people can easily try to reproduce the behavior you are seeing.

Meir_Ve...@amat.com

Jul 28, 2016, 11:07:28 AM
to chri...@cmd.nu, lr...@google.com, grp...@googlegroups.com, Assaf_...@amat.com, Vadim_G...@amat.com, Lena_S...@amat.com

Sure,

 

You can use the gRPC helloworld example for reproduction.

In the greeter_client.cc replace the following lines:

    ClientContext context;

    // The actual RPC.
    Status status = stub_->SayHello(&context, request, &reply);

With:

         

    for (int i = 1; i < 100; i++)
    {
        ClientContext context;

        request.Clear();

        // Create a string with growing length for the request name
        int stringLen = i * 500;
        std::string str;
        str.resize(stringLen);
        request.set_name(str);

        // SayHello RPC with time measurement (I use <ctime>; you can use a different
        // time measurement method, see the std::chrono sketch below)
        std::clock_t begin = std::clock();
        Status status = stub_->SayHello(&context, request, &reply);
        std::clock_t end = std::clock();

        // Print time result
        std::cout << "Byte size " << request.ByteSize() << " , Call duration " << double(end - begin) / CLOCKS_PER_SEC << std::endl;
    }
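
If the <ctime> clock resolution turns out to be a concern, a wall-clock variant of the two timing lines (just a sketch, not the code I actually ran) would be:

        // std::chrono alternative to the std::clock() lines above; needs #include <chrono>
        auto begin = std::chrono::steady_clock::now();
        Status status = stub_->SayHello(&context, request, &reply);
        auto end = std::chrono::steady_clock::now();

        // duration<double> gives seconds, the same unit as the CLOCKS_PER_SEC print
        std::cout << "Byte size " << request.ByteSize() << " , Call duration "
                  << std::chrono::duration<double>(end - begin).count() << std::endl;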

 

In the greeter_server.cc you basically don’t need to change anything, but I noticed that if the server replies with the same message length as the request, this issue doesn’t occur.

Maybe that’s another clue for this issue.

So, replace the

reply->set_message(prefix + request->name());

with a simple

reply->set_message("hello");

 

Run the client and server on two different computers and check the printed time measurements.

It would be great to see if someone else sees the same phenomenon.

 

Thanks

Meir

Nicolas Noble

Jul 28, 2016, 11:45:58 AM
to Meir_Ve...@amat.com, chri...@cmd.nu, Assaf_...@amat.com, Lena_S...@amat.com, Vadim_G...@amat.com, grp...@googlegroups.com, lr...@google.com
To be fair, we haven't spent a lot of time optimizing and benchmarking the Windows codepath. Our first priority is Linux for GA, then other platforms after GA. We run continuous benchmarks for Linux, but not for Windows, for instance. But we'll get to it. Furthermore, the Windows platform and API have a few flaws that won't allow us to make optimizations as extensive as on Linux.

That being said, such a gap isn't healthy, and we should expect at least a somewhat constant function. The fact that the step starts around 8 KB, however, gives me a few ideas of where the problem might be. We'll investigate. Thanks for the reproduction steps and your report.

[Attachments: image002.png, image003.png]

Meir_Ve...@amat.com

Jul 28, 2016, 2:05:44 PM
to nno...@google.com, Assaf_...@amat.com, Lena_S...@amat.com, Vadim_G...@amat.com, grp...@googlegroups.com, lr...@google.com, Adi_...@amat.com, Yaron_...@amat.com, chri...@cmd.nu

Thank you Nicolas for your honest answer.

Anyway, if you reach any conclusions about this issue or come up with a fix, we will be happy to hear about it.


