Suggested approach to chain gRPC Calls


Jens Troeger

Jan 27, 2023, 12:12:38 PM
to grpc.io
Hello,

Suppose I have a Servicer A whose request handler needs to call another Servicer B. Creating a channel and a stub every time A’s handler runs doesn’t seem efficient or sensible.

What’s the suggested/proper way to chain gRPC calls? (In Python, if that matters.)

Can I hook into the Servicer’s boot, open a channel and create a stub, and then pass down that stub through the context? Is there some interceptor support that’s useful?

Which leads me to the next question: can I control how many threads and processes a Servicer can spawn for request handling? I assume each of those needs its own channel and stub to call another Servicer?

Many thanks!
Jens

Xuan Wang

Feb 1, 2023, 8:05:51 PM
to grpc.io
Hi, you're correct that the channel should be created either during server start or during initialization; you can then pass the stub reference to the server handlers. As for the threads, you can use the thread_pool argument to control the executor used by the server: https://github.com/grpc/grpc/blob/master/src/python/grpcio/grpc/__init__.py#L2030
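
For example, a minimal sketch (the module, service, and address names here are just placeholders for your generated code):

    from concurrent import futures

    import grpc
    import b_pb2_grpc  # generated from your .proto; name illustrative

    # Create the channel and stub once, at initialization, and reuse them.
    channel = grpc.insecure_channel("servicer-b:50051")
    stub = b_pb2_grpc.BStub(channel)

    # The thread_pool argument controls the executor the server uses for handlers.
    server = grpc.server(futures.ThreadPoolExecutor(max_workers=10))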

Best,
Xuan

Jens Troeger

Feb 1, 2023, 8:24:28 PM
to grpc.io
Thank you, Xuan!

Hi, you're correct that the channel should be created either during server start or during initialization; you can then pass the stub reference to the server handlers.

Is there a recommended approach to hook into the server start/initialization? And supposing I can create a stub, how do I then pass it down to the handlers? Is there state I can share through interceptors, or how do I best create and access such a “global” resource?

Cheers,
Jens 

Richard Belleville

Feb 1, 2023, 8:26:09 PM
to grpc.io
Jens,

In general, the best way to do this is to simply create the channel in your Servicer constructor and then use self._channel or self._stub from your server handlers.
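
Roughly like this (a sketch only; the a_pb2/b_pb2 modules and the method names are placeholders for your generated code):

    import grpc
    import a_pb2, a_pb2_grpc  # generated modules for service A; names illustrative
    import b_pb2, b_pb2_grpc  # generated modules for service B; names illustrative

    class ServicerA(a_pb2_grpc.AServicer):
        def __init__(self, b_address):
            # One channel and one stub for the lifetime of the servicer;
            # channels are thread-safe, so all handlers can share them.
            self._channel = grpc.insecure_channel(b_address)
            self._stub = b_pb2_grpc.BStub(self._channel)

        def HandleRequest(self, request, context):
            # Reuse the long-lived stub instead of creating one per call.
            reply = self._stub.DoSomething(b_pb2.DoSomethingRequest(payload=request.payload))
            return a_pb2.HandleRequestResponse(result=reply.result)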

Jens Troeger

Feb 2, 2023, 9:35:19 AM
to grpc.io
Thanks, that seems to work!

But it leads me to the next questions:
  1. If for whatever reason the channel is closed, do I need to reopen it or does that channel instance manage disconnects itself? Or: how do I keep the stub alive?
  2. I presume that I should close the channel in the __del__() method of my Servicer?
Cheers,
Jens

Richard Belleville

Feb 2, 2023, 3:40:23 PM
to Jens Troeger, grpc.io
1. In general, the channel will remain open indefinitely regardless of the state of the underlying TCP connection.
2. In most cases, __del__ will probably be fine, but __del__ is not reliably called when an object goes out of scope; it may happen an arbitrary amount of time after that. If you're really worried about deterministically closing the connection, you'll want to add an explicit close method that closes the channel.
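
For example (a sketch, continuing the placeholder names from earlier in the thread):

    class ServicerA(a_pb2_grpc.AServicer):
        def __init__(self, b_address):
            self._channel = grpc.insecure_channel(b_address)
            self._stub = b_pb2_grpc.BStub(self._channel)

        def close(self):
            # Deterministic teardown instead of relying on __del__;
            # guarded so it is safe to call more than once.
            if self._channel is not None:
                self._channel.close()
                self._channel = None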


Jens Troeger

Feb 2, 2023, 4:07:09 PM
to grpc.io
Thank you, Richard!

1. In general, the channel will remain open indefinitely regardless of the state of the underlying TCP connection.

That’s good to know 👍
 
2. In most cases, __del__ will probably be fine, but __del__ is not reliably called when an object goes out of scope; it may happen an arbitrary amount of time after that. If you're really worried about deterministically closing the connection, you'll want to add an explicit close method that closes the channel.

Well, I’m opening the Channel and creating the Stub in the Servicer’s __init__() initializer, so the only sensible place to close the Channel is upon finalizing/destroying the instance (hence __del__()). I understand that’s not a reliable way of freeing the resource, but considering the Channel is tied to the lifetime of the Servicer instance, it’s probably as good as it gets… 🤔
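
So, something like this (sketch, continuing the placeholder names from above): keep the explicit close(), call it deterministically when the server shuts down, and leave __del__() as a best-effort fallback:

        def __del__(self):
            # Best-effort fallback; __del__ may run late or not at all.
            self.close()

and at server shutdown:

    servicer = ServicerA("servicer-b:50051")
    a_pb2_grpc.add_AServicer_to_server(servicer, server)
    server.start()
    try:
        server.wait_for_termination()
    finally:
        servicer.close()  # deterministic cleanup tied to server shutdown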

Cheers,
Jens