golang: stubs backed by an interface instead of a concrete object


Josh Humphries

Dec 16, 2016, 12:09:59 PM
to grpc.io, zel...@squareup.com
I've seen the idea proposed more than once that the generated stubs be backed by an interface -- something along the lines of a channel. Most recently, it was during discussion of client interceptors. It's also come up as a way of doing in-process dispatch without having to go through the network stack and (much more importantly) serialization and deserialization.

There have been objections to the idea, and I just wanted to understand the rationale behind them. I have a few ideas as to what the arguments for the current approach might be. Is it one or more of these? Or are there other arguments that I am overlooking, or nuance/detail I missed in the bullets below?
  1. Surface Area. The main argument I can think of is that the API isn't yet sufficiently mature to lock down the interface now. So exporting only a single concrete type to be used by stubs makes the API surface area smaller, allowing more flexibility in changes later. To me, this implies that introduction of such an interface is an option in the future. (I don't particularly agree with this argument since the interface surface area could have been exposed instead of the existing grpc.Invoke and grpc.NewClientStream methods.)
  2. Overhead. It could be argued that the level of indirection introduced by the use of an interface would be too much overhead. I'd really like to see a benchmark that shows this, if this is the case. It seems hard to imagine that a single interface vtable-dispatch would be measurable overhead considering what else happens in the course of a call. (Perhaps my imagination is broken...)
  3. Complexity. I suppose it might be argued that introducing another type, such as a channel interface, complicates the library and the existing flow. I happen to strongly disagree with such an argument. I think the interface could be added in a fairly painless way that would still support older generated code. This was described in this document. But were this part of the objection, I'd like to hear more.

For context: I have some ideas I want to build for other kinds of stubs -- like providing special stubs that make batch streaming calls look like just issuing a bunch of unary RPCs, or for making a single bidi-stream conversation resemble a sequence of normal service calls (for some other service) that happen to be pinned to the same stream.

All of these currently require non-trivial code generation -- either specialized to the use, or I just provide my own interface-based dispatch and build all of these things on top of that. But it feels like a fundamental hole in the existing APIs that I cannot do this already.

The Java implementation has a layered architecture with Stubs on top, Transports on the bottom, and Channel in-between. The Go implementation exposes nothing along the lines of channel, instead combining it with the transport into a single clientConn. This is incredibly limiting.

----
Josh Humphries
Software Engineer

Carl Mastrangelo

Jan 3, 2017, 7:07:04 PM
to grpc.io, zel...@squareup.com, j...@fullstory.com, Qi Zhao, Menghan Li
Hi Josh,

I cc'd a few people who might be able to help.

Josh Humphries

Jan 4, 2017, 1:44:41 PM
to Carl Mastrangelo, grpc.io, zel...@squareup.com, Qi Zhao, Menghan Li
I've gotten reasonably far with just doing this myself, using my own protoc plugin that augments what's already produced by the grpc plugin.

Making the channel an interface does tease out a few other things that could give more weight to the "API surface area" argument for not exposing such an interface:
  1. The mechanism for server-side unary RPC handlers to send back custom headers and trailers would also have to change. The current mechanism, e.g. grpc.SetHeader(...), assumes that there will be a transport.Stream in context. For custom dispatch (like might be done from an in-process channel), this should instead be a grpc.ServerStream, which could be intercepted/wrapped (currently, unary RPC interceptors cannot intercept metadata; only stream interceptors can).
  2. The CallOption type needs to be a little less opaque for custom channels to interpret/use them. This could be handled by exporting an EffectiveCallOptions struct, which is what CallOption instances actually operate on (e.g. they mutate fields in the struct).
The above point (about intercepting metadata for unary calls) raises, IMO, an argument that unary and streaming RPCs should be unified a little further. Keeping the generated stubs/handlers for unary calls simple is great. But the fact that the internal client and server handling also get forked early on to handle these two cases is a little strange. Not only does it mean that parts of dispatch must be written twice (once to handle unary methods and again to handle streaming methods), but the same is true for user code written by those who implement interceptors. Ideally, everything would use the stream abstraction all the way up to the point of dispatching to the user's code (server side) / immediately after being invoked by the user's code (client side). FWIW, this is how the Java implementation works, and I think it is successful.


----
Josh Humphries
Software Engineer

Zellyn Hunter

Jan 4, 2017, 1:48:26 PM
to Josh Humphries, Carl Mastrangelo, grpc.io, Qi Zhao, Menghan Li
For whatever weight my opinion holds, I would urge folks to proceed with these kinds of simplifications/unifications now, rather than trying to preserve compatibility at this early stage: there are many, many more years of GRPC ahead than there are behind!

Zellyn
--
Zellyn Hunter
Payments
Atlanta, GA  |  678-612-5126