Thanks Eric for the reply. A couple of things. I am not able to follow when you say that gRPC Java is not optimized for this type of large file transfer: my individual message size is only 1024 bytes. The program reads that many bytes from the file at a time and passes them to the stream observer's onNext() call, so each message sent on the wire per onNext() should be only a little over 1 KB.
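A quick way to sanity-check the per-message payload size, assuming the same generated VideoResponse type as in the snippet below (gRPC adds a 5-byte message frame plus HTTP/2 framing on top of this):

VideoResponse msg = VideoResponse.newBuilder()
        .setVideoBytes(ByteString.copyFrom(new byte[1024]))
        .build();
// Field tag + length varint + 1024-byte payload = 1027 bytes here.
System.out.println(msg.getSerializedSize());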
Also, I just changed the gRPC version to 1.29.0 and saw that the native memory usage is almost zero with this version. The high usage that I saw was on version 1.21.0.
Regards,
On Thursday, May 14, 2020 at 3:26:12 AM UTC+5:30, Eric Gribkoff wrote:

gRPC Java may not be optimized for this type of large file transfer, and is likely copying the provided bytes more than once, which could explain why you see additional direct memory allocation. These copies should all be released once the data goes out over the wire. It looks from your code snippet that the entire file is buffered immediately; we would expect to see the memory consumption decrease as bytes are sent out to the client.

Further investigation of this would probably be best carried out on the grpc-java repository at https://github.com/grpc/grpc-java/ if you'd care to file an issue/reproduction case there.

Thanks,
Eric

On Monday, May 11, 2020 at 3:22:35 AM UTC-7 ravi....@gmail.com wrote:

I have been testing gRPC behaviour for larger message sizes on my machine. I have a single client to which I am streaming a 603 MB video file via streaming gRPC. I ran into an OOM while testing and found that, with a slow client, response messages were getting queued up and I was getting the error below:

io.netty.util.internal.OutOfDirectMemoryError: failed to allocate 16777216 byte(s) of direct memory (used: 1873805512, max: 1890582528)

Now this is fixable via flow control (using the onReady() handler; a sketch appears after the snippet below), but out of curiosity I increased my direct memory to 4 GB via the -XX:MaxDirectMemorySize=4g JVM flag to force queuing of all response messages so that the client could consume them at its own pace. The transfer completed successfully, but I ended up using 2.4 GB of direct memory. I checked this via usedDirectMemory() exposed by Netty's ByteBufAllocatorMetric. Isn't this too much for a 603 MB file, being three times more than the total file size? Below is the code snippet that I am using:
FileInputStream stream = new FileInputStream(file);
byte[] buffer = new byte[1024];
ByteBufAllocator byteBufAllocator = ByteBufAllocator.DEFAULT;
int length;
while ((length = stream.read(buffer)) != -1) {
    // Copy only the bytes actually read; copyFrom(buffer) would send a full
    // 1024-byte chunk even on a short final read.
    response.onNext(VideoResponse.newBuilder()
            .setVideoBytes(ByteString.copyFrom(buffer, 0, length))
            .build());
    if (byteBufAllocator instanceof ByteBufAllocatorMetricProvider) {
        ByteBufAllocatorMetric metric = ((ByteBufAllocatorMetricProvider) byteBufAllocator).metric();
        // Direct memory currently held by Netty's default allocator, in MB.
        System.out.println(metric.usedDirectMemory() / (1024 * 1024));
    }
}
stream.close();
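For reference, here is a minimal sketch of the onReady()-based flow control mentioned above, using grpc-java's ServerCallStreamObserver. The method name sendVideo and the VideoRequest type are illustrative, not from the thread; VideoResponse and file are as in the snippet above:

// Write only while the transport is ready, so responses never queue up
// in direct memory for a slow client.
public void sendVideo(VideoRequest request, StreamObserver<VideoResponse> responseObserver) {
    ServerCallStreamObserver<VideoResponse> serverObserver =
            (ServerCallStreamObserver<VideoResponse>) responseObserver;
    final FileInputStream stream;
    try {
        stream = new FileInputStream(file);
    } catch (IOException e) {
        serverObserver.onError(e);
        return;
    }
    byte[] buffer = new byte[1024];
    // The handler runs once the call becomes writable and again every time
    // the client drains enough data to allow more writes.
    serverObserver.setOnReadyHandler(() -> {
        try {
            while (serverObserver.isReady()) {
                int length = stream.read(buffer);
                if (length == -1) { // end of file: finish the stream
                    stream.close();
                    serverObserver.onCompleted();
                    return;
                }
                serverObserver.onNext(VideoResponse.newBuilder()
                        .setVideoBytes(ByteString.copyFrom(buffer, 0, length))
                        .build());
            }
            // Not ready: stop writing and wait for the next onReady callback.
        } catch (IOException e) {
            serverObserver.onError(e);
        }
    });
}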