[Python] gRPC server performance bottleneck


mark.es...@gmail.com

Feb 12, 2020, 12:58:48 AM
to grpc.io

I have asked this question on SO:
https://stackoverflow.com/questions/60181972/python-grpc-server-performance-bottleneck

but I would like to try my luck here as well.


I have written a gRPC server that contains multiple RPC services. Some are unary and some are server-side streaming.

The server talks to a Kubernetes cluster, so I am using the Python Kubernetes client to query it.

Currently I am having performance problems: I think that when multiple requests come in at once, the server waits for a worker to finish before it can serve the next incoming request.

import grpc
from concurrent import futures

def startServer():
    global server
    # At most max_workers RPCs are handled concurrently; additional
    # requests wait in the executor's queue until a worker frees up.
    server = grpc.server(futures.ThreadPoolExecutor(max_workers=10))
    servicer_grpc.add_Servicer_to_server(Servicer(), server)
    server.add_insecure_port('[::]:' + str(port))
    server.start()

My questions are:

  1. How can I improve performance? Will increasing max_workers in the ThreadPoolExecutor help?

  2. How can I diagnose the problem and isolate what is causing the slowdown?

  3. I wonder whether the size of the response matters here, as I am streaming bytestrings to the client. Is there a way to measure the size of a response, and does it matter in Python gRPC?
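On the response-size question: for a protobuf message, `len(msg.SerializeToString())` gives the serialized payload size. As a self-contained illustration (plain bytes instead of protobuf messages; `stream_chunks` is a hypothetical helper, not part of any library), a streaming handler can track what it sends:

```python
def stream_chunks(data, chunk_size=64 * 1024):
    """Yield data in fixed-size chunks, tracking the bytes sent.

    In a real servicer each yielded item would be a protobuf message,
    and len(msg.SerializeToString()) would approximate its wire size
    (before gRPC framing and any compression).
    """
    total = 0
    for start in range(0, len(data), chunk_size):
        chunk = data[start:start + chunk_size]
        total += len(chunk)
        yield chunk
    print(f"streamed {total} bytes")

payload = b"x" * 200_000
chunks = list(stream_chunks(payload))  # prints "streamed 200000 bytes"
```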

How do you diagnose your Python gRPC servers so that you know where to improve?
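On question 1: max_workers caps how many RPCs run at once, and extra requests queue, which matches the buffering behavior described above. A stdlib-only sketch (no gRPC involved; the sleep stands in for real handler work) that demonstrates the cap:

```python
import threading
import time
from concurrent import futures

peak = 0      # highest number of handlers observed running at once
running = 0
lock = threading.Lock()

def handler(_):
    global peak, running
    with lock:
        running += 1
        peak = max(peak, running)
    time.sleep(0.05)  # stand-in for real RPC work
    with lock:
        running -= 1

# 25 "requests" against a 10-worker pool: at most 10 run concurrently,
# the other 15 wait in the executor's queue.
with futures.ThreadPoolExecutor(max_workers=10) as pool:
    list(pool.map(handler, range(25)))

print(peak)  # never exceeds 10
```

So raising max_workers does raise the concurrency ceiling, but only helps if the handlers are I/O-bound; CPU-bound handlers still contend for the GIL.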

Lidi Zheng

Feb 12, 2020, 1:22:53 PM
to mark.es...@gmail.com, grpc.io
I answered in SO. Let's continue the discussion in SO.
