Thanks a lot for responding Nathaniel.
In honesty, the use case is a very slight simplification of a utility generator function. The difference is only a couple of lines, and arguably it would be clearer to explicitly create a new request each time anyway. As you don't guarantee this behaviour, even if a change is very unlikely, I'll just go for the safe option.
Sorry to piggyback on something else: I often send binary data and associated metadata together over gRPC. The utility function in question chunks the binary data into a request stream while including the metadata in the first request of the stream. It's a pity that gRPC doesn't offer something like this built-in. I appreciate it's hard given the separation between gRPC and protobuf, but I think a slightly leaky API that takes both a protobuf and a binary payload (or a named list of binary payloads) would be a reasonable compromise. In Python this could even use the buffer protocol for near-C++ efficiency (imagine: here's my protobuf and some numpy arrays). I notice that TensorFlow solves this problem with a load of custom gRPC code (which I don't understand, to be honest!), which I think shows a gap in gRPC's API. But I'm not paying for gRPC, so who am I to complain :)
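For what it's worth, here's a minimal sketch of the kind of generator I mean. `UploadRequest` is a hypothetical stand-in for a generated protobuf message with `metadata` and `chunk` fields; a real service would use its own generated class, and the stub call at the end is illustrative only.

```python
from dataclasses import dataclass


@dataclass
class UploadRequest:
    # Hypothetical stand-in for a generated protobuf message.
    chunk: bytes
    metadata: object = None


CHUNK_SIZE = 64 * 1024  # 64 KiB per request; tune for your transport


def chunked_requests(metadata, payload, chunk_size=CHUNK_SIZE):
    """Yield a request stream, riding the metadata along with the first chunk."""
    view = memoryview(payload)  # buffer protocol: works for bytes, numpy, etc.
    first = True
    for start in range(0, len(view), chunk_size):
        chunk = bytes(view[start:start + chunk_size])
        if first:
            yield UploadRequest(chunk=chunk, metadata=metadata)
            first = False
        else:
            yield UploadRequest(chunk=chunk)
    if first:
        # Empty payload: still send one request carrying the metadata.
        yield UploadRequest(chunk=b"", metadata=metadata)


# Usage with a request-streaming RPC would look something like:
#     response = stub.Upload(chunked_requests(my_metadata, my_bytes))
```

The point is just that only the first yielded request carries the metadata, so the server can read it before any further chunks arrive.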
On Thursday, March 29, 2018 at 2:24:38 PM UTC+1, Nathaniel Manista wrote:
This happens to be the case today...