Hi gRPC group members.
I was looking at how RPCs are distributed on the server side.
When the client sends 100K unary RPCs per channel over 10 channels to a specific server, I can see that the RPCs are processed unevenly on the server side: some threads (each with its own completion queue) get more, some get less. Since every thread does exactly the same processing, I expected that by the end each thread would have handled roughly the same number of events.
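The unevenness I'm seeing can be reproduced with a toy model (pure Python, no gRPC; thread and queue names are just illustrative): several worker threads race to drain a shared pool of events, and each thread simply takes whatever it manages to grab next. The per-thread totals come out uneven because they depend on OS scheduling, not on any round-robin assignment of events to threads.

```python
import queue
import threading
from collections import Counter

NUM_THREADS = 10
NUM_EVENTS = 100_000

# Shared pool of pending events, standing in for incoming RPCs.
events = queue.Queue()
for i in range(NUM_EVENTS):
    events.put(i)

counts = Counter()
counts_lock = threading.Lock()

def worker(tid):
    # Each thread grabs the next available event; whichever thread
    # gets scheduled first wins the race for that event.
    while True:
        try:
            events.get_nowait()
        except queue.Empty:
            return
        with counts_lock:
            counts[tid] += 1

threads = [threading.Thread(target=worker, args=(t,)) for t in range(NUM_THREADS)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(sum(counts.values()))     # every event is processed exactly once
print(sorted(counts.values()))  # per-thread totals are typically skewed
```

Every run processes all 100K events, but the split across threads varies from run to run, which matches what I observe across completion queues.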
Is this expected, and if so, why?
Another question concerns overall performance. The client can send data faster than the server can process it, so at some point the client has to wait before it can continue sending. This happens even when the client has a single channel sending 100K unary requests and the server has 10 threads with a completion queue per thread. In fact, multiple server threads sometimes performed worse than a single thread.
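The waiting/backpressure behavior I mean can be sketched like this (pure-Python toy; `MAX_IN_FLIGHT`, the semaphore, and the thread pool are illustrative stand-ins for the channel and the server, not gRPC API): the sender blocks once a bounded window of RPCs is outstanding, and each completion frees a slot.

```python
import threading
import time
from concurrent.futures import ThreadPoolExecutor

MAX_IN_FLIGHT = 1000  # caps how many requests the client may buffer at once
TOTAL_REQUESTS = 10_000

in_flight = threading.Semaphore(MAX_IN_FLIGHT)
completed = 0
completed_lock = threading.Lock()

# Stand-in for the server: a pool that "processes" each request slowly.
server = ThreadPoolExecutor(max_workers=10)

def on_done(_future):
    global completed
    with completed_lock:
        completed += 1
    in_flight.release()  # free a window slot so the sender can continue

def send(payload):
    in_flight.acquire()  # blocks the sender once the window is full
    server.submit(lambda: time.sleep(0.0005)).add_done_callback(on_done)

for i in range(TOTAL_REQUESTS):
    send(i)
server.shutdown(wait=True)
print(completed)
```

With the window bounded, client-side buffering (and thus memory) stays flat no matter how far the sender gets ahead of the server.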
This causes client-side RAM consumption to grow very large. There are no memory leaks (direct or indirect), but even after all RPCs are processed and all channels are destroyed, the memory footprint doesn't go down.
Do you have any suggestions in this regard?
I'm using the generic API because I have my own serialization, and unary calls because I eventually plan to run multiple servers as pods and load-balance RPCs across them in a round-robin manner.
Thank you in advance!