Background:
Machine: ~3.0 GHz, 8 cores (4 logical), 32.0 GB RAM
I was looking into gRPC performance on large amounts of data to see if it is viable for our use case; the data size could be over 10 GB, and the basic payload is just an array of floats. Using a synchronous server/client with streaming on Linux, I was able to get around 1.3 GB/s throughput for a message by streaming the data in ~200-300 KB chunks. When the chunks go above 1 MB, throughput starts to drop, and chunks below ~100 KB slow things down considerably as well, so somewhere under 1 MB seemed to be the sweet spot. Sending large non-streamed messages was much slower (< 500 MB/s), so streaming seemed the way to go.
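For reference, the server side of what I'm doing looks roughly like this (a simplified sketch, not the real code; the DataService/Download/FloatChunk names, the generated header, and GetBigFloatArray are placeholders):

```cpp
// Sketch only. The proto is roughly:
//
//   service DataService {
//     rpc Download(DownloadRequest) returns (stream FloatChunk);
//   }
//   message FloatChunk { repeated float values = 1; }  // packed by default in proto3
//
#include <algorithm>
#include <vector>
#include <grpcpp/grpcpp.h>
#include "data_service.grpc.pb.h"  // hypothetical generated header

// Placeholder for however the ~10 GB float payload is produced.
const std::vector<float>& GetBigFloatArray();

class DataServiceImpl final : public DataService::Service {
  grpc::Status Download(grpc::ServerContext* /*ctx*/,
                        const DownloadRequest* /*req*/,
                        grpc::ServerWriter<FloatChunk>* writer) override {
    const std::vector<float>& data = GetBigFloatArray();
    const size_t kChunkFloats = (256 * 1024) / sizeof(float);  // ~256 KB per streamed message

    for (size_t i = 0; i < data.size(); i += kChunkFloats) {
      const size_t end = std::min(i + kChunkFloats, data.size());
      FloatChunk chunk;
      chunk.mutable_values()->Reserve(static_cast<int>(end - i));
      for (size_t j = i; j < end; ++j) {
        chunk.add_values(data[j]);
      }
      if (!writer->Write(chunk)) {
        break;  // client cancelled or the stream was closed
      }
    }
    return grpc::Status::OK;
  }
};
```

The client side is just the matching synchronous ClientReader Read() loop that copies the values back out.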
I ran the same tests that yielded ~1.3 GB/s on Linux on Windows (Win10); there they only achieved ~300 MB/s.
Question:
Is there a good way to increase performance for large streamed messages on Windows (or just in general)? We like some of the benefits of gRPC/protobufs, especially the ability to just send a client a proto file so they can write their own client in their language of choice. I was expecting a decrease in performance on Windows, but not of that magnitude. We aren't looking at changing the underpinnings of gRPC for this project; I'm mainly looking for good ways to increase the performance of streams on Windows (particularly on the server side).
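For context, the only tuning surface I've found so far is ServerBuilder / channel arguments, along these lines (a sketch; the values are guesses, and I don't know which of these, if any, actually matter on Windows):

```cpp
#include <memory>
#include <string>
#include <grpcpp/grpcpp.h>

void ConfigureServer(grpc::ServerBuilder& builder) {
  // Lifting the default 4 MB receive cap only matters for the big
  // non-streamed messages; the < 1 MB chunks fit under the default anyway.
  builder.SetMaxSendMessageSize(-1);
  builder.SetMaxReceiveMessageSize(-1);

  // HTTP/2-level knobs exposed as channel arguments (values are guesses).
  builder.AddChannelArgument(GRPC_ARG_HTTP2_WRITE_BUFFER_SIZE, 1 * 1024 * 1024);
  builder.AddChannelArgument(GRPC_ARG_HTTP2_MAX_FRAME_SIZE, 1 * 1024 * 1024);
}

std::shared_ptr<grpc::Channel> MakeClientChannel(const std::string& target) {
  grpc::ChannelArguments args;
  args.SetMaxReceiveMessageSize(-1);
  args.SetInt(GRPC_ARG_HTTP2_WRITE_BUFFER_SIZE, 1 * 1024 * 1024);
  return grpc::CreateCustomChannel(target, grpc::InsecureChannelCredentials(), args);
}
```

I've also seen grpc::WriteOptions::set_buffer_hint() mentioned as a way to let gRPC coalesce writes on the server side, but I haven't confirmed whether it helps here.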
We have plenty of other options for getting optimal data transfer rates, but we were hoping we could use gRPC out of the box so we could hand a client a proto file and they could handle the "rest".
I'm very new to gRPC/protobufs, so I could be missing something crucial.
Thanks!