Bidirectional synchronous events: using client & server at the same time?


Dave Rabelink

Jan 31, 2018, 2:57:04 AM
to grpc.io

Hello,

I'm trying to use gRPC as a bridge between a Java application (A) and a C# (.NET) server application (B), both always running on the same machine.

1) Client A should be able to call synchronous service methods on server B (request->reply).
2) Server B should be able to send synchronous "events" back to client A and wait for the response (request->reply).


The synchronous calls from the client were easy to implement.
The server is running on port X; the client connects to it and calls the methods defined in the service.
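
For reference, this direction looks roughly like the following on the Java side (CommandServiceGrpc, WorkRequest and WorkReply are placeholders, not my real generated classes):

import io.grpc.ManagedChannel;
import io.grpc.ManagedChannelBuilder;

// Client A calling server B on port X through a plain blocking stub.
// CommandServiceGrpc / WorkRequest / WorkReply stand in for the real generated classes.
public class ForwardCallSketch {
  public static void main(String[] args) {
    ManagedChannel channel = ManagedChannelBuilder
        .forAddress("localhost", 50051)   // port X
        .usePlaintext()                   // same machine, no TLS
        .build();

    CommandServiceGrpc.CommandServiceBlockingStub stub =
        CommandServiceGrpc.newBlockingStub(channel);

    // Blocks until server B replies.
    WorkReply reply = stub.doWork(WorkRequest.newBuilder().build());

    channel.shutdown();
  }
}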

But I'm having trouble finding the best way to implement (2), where the server calls the client and waits for the response.

It seems that bidirectional streams are not the way to do this, as the messages are asynchronous and the server will not block while waiting for the receiver to respond.
It should really be a blocking call.


Let me explain with an example.

At a certain moment the server has to notify the client to ask for its status. So the server has to send a message to the client and wait for it to respond with the status as reply data.

I could not find a clear sample anywhere (maybe I'm not using the correct search terms), so I'm not sure what the best implementation would be.

Personally, I came up with this solution: client A is at the same time also a server, and server B is at the same time also a client.

So, client A will connect to server B using port X.
The client will then call the server, indicating that it subscribes to events from the server, and will start a server object of its own on port Y.
The server will then connect to the client on port Y.

When the server needs to notify the client, it calls a method on the client over port Y, which is synchronous and blocking.
The client calls methods on the server over port X (see the sketch below).
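
To make the idea concrete, a rough sketch in Java (side B would actually be C# in my case; all service and message names below are placeholders, not real generated code):

import io.grpc.ManagedChannel;
import io.grpc.ManagedChannelBuilder;
import io.grpc.Server;
import io.grpc.ServerBuilder;

// Wiring for the dual client/server idea. CommandServiceGrpc, EventServiceGrpc,
// EventServiceImpl and the message types are placeholders for hypothetical generated code.
public class DualSideWiring {

  static final int PORT_X = 50051;  // B's server port
  static final int PORT_Y = 50052;  // A's callback server port

  // --- Side A: connects to B on port X and runs its own server on port Y ---
  static void sideA() throws Exception {
    Server callbackServer = ServerBuilder.forPort(PORT_Y)
        .addService(new EventServiceImpl())       // implements the "events" B will call
        .build()
        .start();                                 // keep this server running for callbacks

    ManagedChannel toB = ManagedChannelBuilder.forAddress("localhost", PORT_X)
        .usePlaintext()
        .build();
    CommandServiceGrpc.CommandServiceBlockingStub commands =
        CommandServiceGrpc.newBlockingStub(toB);

    // Subscribe: tell B on which port it can reach us for callbacks.
    commands.subscribe(SubscribeRequest.newBuilder().setCallbackPort(PORT_Y).build());
  }

  // --- Side B: when it needs the client's status, it makes a blocking call to A ---
  static StatusReply askClientForStatus() {
    ManagedChannel toA = ManagedChannelBuilder.forAddress("localhost", PORT_Y)
        .usePlaintext()
        .build();
    EventServiceGrpc.EventServiceBlockingStub events =
        EventServiceGrpc.newBlockingStub(toA);
    return events.getStatus(StatusRequest.getDefaultInstance());  // blocks until A replies
  }
}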

Would this be a valid way to implement these requirements, or is there a better way to do this?
Any help is appreciated; I'd like to get the best conceptual view of such an architecture.


Can someone confirm whether this would be a valid solution for the requirement?

falco...@gmail.com

Jan 31, 2018, 6:50:54 AM
to grpc.io
You could either do this using the client/server model you described (both sides implement a server and use the other as a client), or you can use a bidirectional stream and implement an observer pattern. One example can be found in the GRPCLB proto: https://github.com/grpc/grpc/blob/master/src/proto/grpc/lb/v1/load_balancer.proto

Client A calls the BalanceLoad method on server B and sends the InitialLoadBalanceRequest directly after connecting. Server B knows that and expects the first message in the stream to be that message. After it has received it, it responds by sending an InitialLoadBalanceResponse back to A, which is waiting (a)synchronously for it. After that initialization, A can send ClientStats messages to B as async events, and B can send ServerList messages to A as async events. But you are always able to just wait synchronously for a response if you know it will come directly, as shown with the InitialLoadBalanceRequest/Response pair.
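
As a rough Java sketch of the "wait synchronously for the first response" part (placeholder types, not the actual grpclb classes, and assuming a bidi method rpc Talk(stream Req) returns (stream Rsp)):

import io.grpc.stub.StreamObserver;
import java.util.concurrent.CompletableFuture;

// Sketch of waiting synchronously for the first reply on a bidi stream.
// ChatGrpc, Req and Rsp are placeholder generated types.
public class HandshakeSketch {

  static void handshake(ChatGrpc.ChatStub asyncStub) throws Exception {
    CompletableFuture<Rsp> initialReply = new CompletableFuture<>();

    StreamObserver<Req> requests = asyncStub.talk(new StreamObserver<Rsp>() {
      @Override public void onNext(Rsp rsp) {
        // The first message completes the handshake; everything after is an async event.
        if (!initialReply.complete(rsp)) {
          handleEvent(rsp);
        }
      }
      @Override public void onError(Throwable t) { initialReply.completeExceptionally(t); }
      @Override public void onCompleted() { }
    });

    requests.onNext(Req.newBuilder().build());  // send the initial/handshake request
    Rsp initial = initialReply.get();           // block here until the server answers
  }

  static void handleEvent(Rsp rsp) { /* observer-pattern handling of later messages */ }
}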

Dave Rabelink

Feb 2, 2018, 9:01:14 AM
to grpc.io
Thanks, Benjamin, for this info.
I decided to implement the client/server model on both sides.
This is working as expected.



On Wednesday, January 31, 2018 at 12:50:54 PM UTC+1, Benjamin Krämer wrote:

Carl Mastrangelo

Feb 5, 2018, 6:30:06 PM
to grpc.io
This seems more complicated than it needs to be.  My suggestion would be the following:  

Have a single service, running on a single port, with a single streaming method. The requests and replies are a union of the kinds of commands the client and server want to send to each other. These messages aren't synchronous, but they are in order. The "synchronous" APIs in the client aren't really synchronous at all; it's just the stub that makes them appear that way. In gRPC, every RPC is async.

What does this mean for you?  You can fake a sync RPC by just waiting until you get the message type you want.   As an example:


syntax = "proto3";

// Placeholder payloads (not in the original post); fill in the real fields for each direction.
message ClientSyncReq {}
message ClientSyncResp {}
message ServerSyncReq {}
message ServerSyncResp {}

service Foo {
  // One long-lived bidirectional stream carries traffic in both directions.
  rpc Bar(stream BarRequest) returns (stream BarResponse);
}

// Everything the client sends: its own requests plus its replies to server queries.
message BarRequest {
  oneof req {
    ClientSyncReq client_sync_req = 1;
    ServerSyncResp server_sync_resp = 2;
  }
}

// Everything the server sends: its replies plus its own queries to the client.
message BarResponse {
  oneof resp {
    ClientSyncResp client_sync_resp = 1;
    ServerSyncReq server_sync_req = 2;
  }
}

When your client wants to make a call, it issues a BarRequest with the client_sync_req field set and has a switch statement on the responses. When a ClientSyncResp message comes back, it returns the result to whoever made the original call. When the server wants to query the client, it sends a ServerSyncReq (note that this is in the BarResponse!). The client responds by sending a "request" containing a ServerSyncResp. Each side can track its own state, which lets it tell whether calls have been made in a legal order.
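
Roughly, the client side could look like this in Java (assuming the usual grpc-java codegen for the proto above; the queue-based waiting and the buildStatus helper are just one way to do it):

import io.grpc.stub.StreamObserver;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

// Client-side handling of the single bidi stream of the Foo service above.
// Assumes one outstanding client-initiated call at a time; a real implementation
// would add request IDs and synchronize access to the request observer.
public class FooClient {
  private final BlockingQueue<ClientSyncResp> pendingReplies = new LinkedBlockingQueue<>();
  private StreamObserver<BarRequest> requests;

  public void start(FooGrpc.FooStub asyncStub) {
    requests = asyncStub.bar(new StreamObserver<BarResponse>() {
      @Override public void onNext(BarResponse rsp) {
        switch (rsp.getRespCase()) {
          case CLIENT_SYNC_RESP:
            pendingReplies.add(rsp.getClientSyncResp());   // wake up the blocked caller
            break;
          case SERVER_SYNC_REQ:
            // The server is querying us: answer on the same stream.
            requests.onNext(BarRequest.newBuilder()
                .setServerSyncResp(buildStatus(rsp.getServerSyncReq()))
                .build());
            break;
          default:
            break;
        }
      }
      @Override public void onError(Throwable t) { /* tear down / reconnect */ }
      @Override public void onCompleted() { /* server closed the stream */ }
    });
  }

  // Fakes a synchronous RPC: send the request, then block until the matching reply arrives.
  public ClientSyncResp callSync(ClientSyncReq req) throws InterruptedException {
    requests.onNext(BarRequest.newBuilder().setClientSyncReq(req).build());
    return pendingReplies.take();
  }

  private ServerSyncResp buildStatus(ServerSyncReq req) {
    return ServerSyncResp.getDefaultInstance();   // fill in the real status here
  }
}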

Be aware that the client must initiate this interaction.   The server cannot start the RPC, but once it is bootstrapped you can proceed.