I performed benchmarks by modifying the Python examples. I got 260/sec synchronous calls (request/response RPC) and 1,900/sec streaming responses. How can I increase the throughput without multithreading or increasing batch sizes? Thanks.
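For reference, a timing loop along these lines is roughly what such per-second figures come from; the generated modules, the GreeterStub, and the SayHelloStream method below are placeholders modeled on the gRPC Python hello-world example, not the exact code used in this benchmark.

import time

import grpc

# NOTE: these generated modules and the streaming method are placeholders in
# the style of the gRPC Python examples; swap in whatever the benchmark uses.
import helloworld_pb2
import helloworld_pb2_grpc


def bench_unary(stub, n=1000):
    # Time n blocking unary calls and report calls per second.
    request = helloworld_pb2.HelloRequest(name="bench")
    start = time.time()
    for _ in range(n):
        stub.SayHello(request)  # one synchronous round trip per iteration
    return n / (time.time() - start)


def bench_server_streaming(stub, n=1000):
    # Time reading n messages from a server-streaming call.
    request = helloworld_pb2.HelloRequest(name="bench")
    responses = stub.SayHelloStream(request)  # hypothetical server-streaming method
    start = time.time()
    count = 0
    for _ in responses:
        count += 1
        if count >= n:
            responses.cancel()  # stop the RPC once enough samples are in
            break
    return count / (time.time() - start)


if __name__ == "__main__":
    channel = grpc.insecure_channel("localhost:50051")
    stub = helloworld_pb2_grpc.GreeterStub(channel)
    print("unary calls/sec: %.0f" % bench_unary(stub))
    print("streaming msgs/sec: %.0f" % bench_server_streaming(stub))

The unary loop pays one full round trip per message in a single thread, while the streaming loop amortizes the call overhead across many messages, which is consistent with the gap between the two numbers above.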
Python sync call: 260 times/sec
Python streaming response: 1,900 times/sec
C++ sync call: 9k times/sec
C++ streaming response: 360k times/sec
+grpc group

On Fri, May 13, 2016 at 8:46 PM, Yan Yan <lamh...@gmail.com> wrote:
> I performed benchmarks by modifying the Python examples. I got 260/sec synchronous calls (request/response RPC) and 1,900/sec streaming responses. How can I increase the throughput without multithreading or increasing batch sizes?
I am getting 7k/sec streaming responses on the Python client. Is this expected?
7k/sec streaming responses is still a little slow for my use cases. Can I do the gRPC and protobuf work in C++ and integrate it with a Python client?
It is a user requirement that the client be in Python.
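One way to keep the user-facing client in Python while doing the gRPC and protobuf work in C++ is to compile the C++ client into a shared library behind a small C API and load it from Python with ctypes. The library name libgrpc_batch_client.so and the fetch_batch function below are hypothetical, just to sketch the shape of the integration:

import ctypes

# Hypothetical shared library built from a C++ gRPC client that exposes a
# plain C entry point such as:
#     extern "C" int fetch_batch(const char* target, int max_messages);
# Both the library name and the function are illustrative only.
lib = ctypes.CDLL("./libgrpc_batch_client.so")
lib.fetch_batch.argtypes = [ctypes.c_char_p, ctypes.c_int]
lib.fetch_batch.restype = ctypes.c_int

# Python remains the client the user runs; the C++ code inside the library
# opens the channel, drives the streaming RPC, and returns how many messages
# it received.
received = lib.fetch_batch(b"localhost:50051", 100000)
print("messages received by the C++ layer:", received)

A Cython or pybind11 wrapper would follow the same pattern; the point is that only the thin entry point has to be Python.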