How does grpc guarantee the request (and response) order?

you lun

Jan 5, 2016, 9:42:46 PM
to grpc.io
For example: if I send two consecutive calls to a server to set a value to 1 and then to 2, in a very short time, on a single channel, is it possible that the final value on the server is 1 (the value from the first call)?

If not, how does grpc achieve this?

Thanks,
Alun

Michael Lumish

Jan 5, 2016, 10:24:27 PM
to you lun, grpc.io
As far as I understand, we give no guarantees about the order in which messages will be delivered for separate calls, even on the same channel. Even if the channel did preserve order, the calls could be handled by different threads on the server, which could interleave arbitrarily and execute their handling code in any order.
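[Editor's note: a minimal sketch of the race described above. This is not gRPC code; it simulates two unary "set value" calls being handled by separate server threads, with the first call artificially delayed so the interleaving is deterministic.]

```python
import threading

value = None
release_first = threading.Event()

def handler(v, wait_for=None):
    """Simulated server-side handler for a unary 'set value' RPC."""
    global value
    if wait_for is not None:
        wait_for.wait()  # this handler is artificially delayed
    value = v

# The client sends set(1) then set(2); the server handles them on
# separate threads, and nothing forces them to finish in send order.
t1 = threading.Thread(target=handler, args=(1, release_first))
t2 = threading.Thread(target=handler, args=(2,))
t1.start()
t2.start()
t2.join()            # set(2) completes first...
release_first.set()
t1.join()            # ...then the delayed set(1) overwrites it
print(value)         # -> 1, even though set(2) was sent second
```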


you lun

Jan 5, 2016, 10:43:30 PM
to grpc.io, high...@gmail.com
If that's the case, we could not use gRPC as a messaging system, because messaging systems usually require message-ordering guarantees.
If we want a strict sequence, we have to implement it in application code, e.g. wait for the first response before sending the second request.

I think race conditions need to be addressed by a good RPC system.
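[Editor's note: a sketch of the application-level workaround mentioned above. `rpc_set_value` is a hypothetical stand-in for a blocking gRPC stub call, not a real API; the point is only that blocking for each response before sending the next request enforces ordering in the application.]

```python
store = {}

def rpc_set_value(key, v):
    """Stand-in for a blocking stub call: returns only after the
    server has applied the value and replied."""
    store[key] = v
    return v  # the "response"

# Because each call blocks for its response, ordering is guaranteed
# by the application, not by the transport.
resp1 = rpc_set_value("x", 1)  # wait for the first response...
resp2 = rpc_set_value("x", 2)  # ...then send the second request
print(store["x"])              # -> 2: the later call always wins
```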

On Wednesday, January 6, 2016 at 11:24:27 AM UTC+8, Michael Lumish wrote:

Craig Tiller

Jan 6, 2016, 12:17:33 AM
to you lun, grpc.io

If you need ordering, you could use a gRPC stream, which guarantees in-order delivery of each message sent on the stream.
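[Editor's note: a toy illustration of why a single stream fixes the original scenario. It models one stream as a FIFO queue drained by one server-side loop, which is the ordering property gRPC guarantees per stream; it is not gRPC streaming code itself.]

```python
import queue

# Messages on one gRPC stream arrive in the order they were sent.
# Model the stream as a FIFO queue drained by a single handler loop.
stream = queue.Queue()
stream.put(("set", 1))
stream.put(("set", 2))

value = None
while not stream.empty():
    op, v = stream.get()  # messages come off in send order
    if op == "set":
        value = v

print(value)  # -> 2: the later message wins, as the sender intended
```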


david vennik

Apr 29, 2022, 1:45:31 AM
to grpc.io
A simple hypothetical case, like the one I am currently working on: say we are sending messages to a gRPC microservice that encodes data of variable lengths. We want the responses to come back matched to their requests, and not to receive the answer to one message as the answer to another, which is likely to happen when a very long message is interspersed with short ones.

As far as I can tell, gRPC streaming does not guarantee that responses come back matched to their requests. And if it did make that guarantee, then as far as I can tell from the generated code, completed responses would pile up whenever one long message caused a longer processing delay.

I am encountering this question in my current work, and my feeling is that the correct way to handle it is to add identifiers to messages, similar to what the JSON-RPC 2.0 standard requires for all requests and responses. The client has to be written the same way as the server: it must run a loop, hold a table of pending requests, and dispatch each one when a result comes back with the matching message ID. The client needs to put IDs on all its requests so this table stays consistent, and client-side code should process responses as they return rather than assume that messages come out in the same order they went in.
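[Editor's note: a sketch of the request-ID correlation scheme described above. The names `PendingClient`, `request`, and `on_response` are illustrative, not from any gRPC or JSON-RPC library. The client tags each request with an ID, keeps a table of pending callbacks, and matches responses by ID, so replies may arrive in any order.]

```python
import itertools

class PendingClient:
    def __init__(self):
        self._ids = itertools.count(1)
        self._pending = {}  # request id -> callback awaiting the reply

    def request(self, payload, callback):
        """Build a wire message with a fresh ID and remember who is
        waiting for its reply."""
        req_id = next(self._ids)
        self._pending[req_id] = callback
        return {"id": req_id, "payload": payload}

    def on_response(self, msg):
        """Dispatch by ID, not by arrival order."""
        self._pending.pop(msg["id"])(msg["result"])

results = {}
client = PendingClient()
r1 = client.request("long job", lambda res: results.update(first=res))
r2 = client.request("short job", lambda res: results.update(second=res))

# Responses arrive out of order (the short job finishes first), but
# each one still reaches the callback that was waiting for it.
client.on_response({"id": r2["id"], "result": "short done"})
client.on_response({"id": r1["id"], "result": "long done"})
print(results)  # -> {'second': 'short done', 'first': 'long done'}
```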