grpc for database driver


ren...@earthlink.net

Oct 17, 2018, 11:58:32 AM
to grpc.io
Hello,

I wrote a database driver using Go and grpc. I ended up using a "streaming" rpc because it seemed to be the only way to have a "stable connection".

When reviewing the API docs, it was the only way I could implement a "connection", otherwise for each rpc call, a new connection (tcp) is made for each request - at least as far as I could tell - and there are no "lifecycle" events on the server side to determine when a connection is dead, etc.

Am I reading the documentation correctly and implementing this correctly?

Is it also a possibility that if I use a TLS connection, then the grpc connection will be stable across rpc calls ?

Looking for direction here (it works now, but something doesn't feel right).

You can see the code at github.com/robaho/keydbr

Thanks.
Robert

Eric Anderson

Oct 17, 2018, 1:01:44 PM
to ren...@earthlink.net, grpc-io
On Wed, Oct 17, 2018 at 8:58 AM <ren...@earthlink.net> wrote:
When reviewing the API docs, it was the only way I could implement a "connection", otherwise for each rpc call, a new connection (tcp) is made for each request

No. A stream is the only way to guarantee multiple requests go to the same server, but normal RPCs can/do reuse connections. Since gRPC is using HTTP/2, as long as you re-use the ClientConn gRPC is able to issue lots of RPCs on the same connection, even concurrently.

ClientConn may create more than one connection, but this is for things like load balancing, where you need to distribute traffic across multiple backends. By default, though, ClientConn will have a single connection it uses for all RPCs, and it will automatically replace that connection if something is wrong with it.

and there are no "lifecycle" events on the server side to determine when a connection is dead, etc.

Yes, this is on purpose. The vast majority of use cases should have no need for this level of API. We do provide mechanisms to detect broken connections and release idle connections, however; those mechanisms are implemented in Go.

Is it also a possibility that if I use a TLS connection, then the grpc connection will be stable across rpc calls ?

Yes. If a call fails, that shouldn't impact the TLS connection. Simply continue using the same ClientConn and gRPC should manage the connection for you.

You can see the code at github.com/robaho/keydbr

I would not suggest closing the ClientConn on error. Even if there is a failure with the connection and RPCs fail, the ClientConn will properly reconnect. It will also do things like exponential backoff on repeated connection failures to avoid cascading failures.

I suggest sharing a single ClientConn per `addr` (in your app) as much as possible.
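A minimal sketch of that sharing pattern (not from this thread): one connection per address, created lazily and reused for every RPC. A stub struct stands in for `*grpc.ClientConn` so the sketch runs without the grpc package; real code would call `grpc.Dial(addr, opts...)` where noted.

```go
package main

import (
	"fmt"
	"sync"
)

// conn stands in for *grpc.ClientConn; in real code you would create
// it once with grpc.Dial(addr, opts...) and keep it open.
type conn struct{ addr string }

var (
	mu    sync.Mutex
	conns = map[string]*conn{}
)

// getConn returns one shared connection per address, creating it
// lazily. gRPC multiplexes all RPCs issued through the same
// ClientConn over a single HTTP/2 transport, even concurrently.
func getConn(addr string) *conn {
	mu.Lock()
	defer mu.Unlock()
	if c, ok := conns[addr]; ok {
		return c
	}
	c := &conn{addr: addr} // real code: grpc.Dial(addr, ...)
	conns[addr] = c
	return c
}

func main() {
	a := getConn("db1:50051")
	b := getConn("db1:50051")
	fmt.Println(a == b) // the same shared connection is returned
}
```

The point is that callers never dial per request; they fetch the shared handle and let gRPC manage reconnects and backoff behind it.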

robert engels

Oct 17, 2018, 2:54:42 PM
to Eric Anderson, grpc-io
Thanks for your help.

Yes, I see how I can re-use the ClientConn across db open requests, the problem I have is that even if I set a ‘MAX-IDLE’ (and add client code to ‘ping’ within this interval), I don’t see any method on the server side to detect when the connection was dropped - there does not seem to be a callback (or channel) in the Go api ?


Eric Anderson

Oct 17, 2018, 3:01:29 PM
to ren...@earthlink.net, grpc-io
On Wed, Oct 17, 2018 at 11:54 AM robert engels <ren...@earthlink.net> wrote:
Yes, I see how I can re-use the ClientConn across db open requests, the problem I have is that even if I set a ‘MAX-IDLE’ (and add client code to ‘ping’ within this interval), I don’t see any method on the server side to detect when the connection was dropped - there does not seem to be a callback (or channel) in the Go api ?

There's no API to know when the connection is dropped. All the semantics are per-RPC: RPCs can be "cancelled." Normally the connection is only dropped when there are no RPCs on it. But if there is an I/O error or similar, the in-flight RPCs will be treated as cancelled, and you can be notified of that cancellation via Context.Done().

robert engels

Oct 17, 2018, 3:06:45 PM
to Eric Anderson, grpc-io
Ok, so my original statement about being forced to use the ‘streaming rpc’ was correct. I thought you said that was the case, but then you offered up what seemed like other solutions that would allow me to use individual rpcs…

But then why have the ‘idle time’, if the connections are terminated when there are no more rpcs? I am not sure why the server process can’t be notified when a connection is “terminated” due to idle timeout (or any other reason, like IO errors) so it can clean up cached resources - it doesn’t make a lot of sense to me.

robert engels

Oct 17, 2018, 3:07:41 PM
to Eric Anderson, grpc-io
Sorry - I mean that I need to add my own ‘idle timeout’ at the server application level to accomplish basically the same thing the rpc connection is already doing…

Eric Anderson

Oct 17, 2018, 3:37:13 PM
to ren...@earthlink.net, grpc-io
On Oct 17, 2018, at 2:06 PM, robert engels <ren...@earthlink.net> wrote:
Ok, so my original statement about being forced to use the ‘streaming rpc’ was correct. I thought you said that was the case, but then you offered up what seemed like other solutions that would allow me to use individual rpcs…

You said you were forced to use streaming for some specific reasons. I tried to correct some specific misunderstandings, namely that "a new connection (tcp) is made for each request" is incorrect. You also asked about connection lifecycle notifications, but it is unclear what for. I think part of the problem is that it's not really clear to me what semantics you're looking for in a "stable connection." It's in quotes and can be interpreted a few different ways.

It sounds like you may be wanting to use streaming since it provides a lifetime. For example, if you were wanting to implement a transaction, a stream could be a quite nice approach. The stream provides a "context" for all the messages while also providing a lifetime for the transaction; if the RPC is cancelled the transaction could be aborted.

I'm sorry, but I wasn't trying to say you should or shouldn't use streaming. I was only trying to explain the constraints that might lead you to choose streaming.

But then why have the ‘idle time’, if the connections are terminated when there are no more rpcs? I am not sure why the server process can’t be notified when a connection is “terminated” due to idle timeout (or any other reason, like IO errors) so it can clean up cached resources - it doesn’t make a lot of sense to me.

On Wed, Oct 17, 2018 at 12:07 PM robert engels <ren...@earthlink.net> wrote:
Sorry - I mean that I need to add my own ‘idle timeout’ at the server application level to accomplish basically the same thing the rpc connection is already doing…

If you will be using a single stream for all the data, then yes, you may want to close it after an idle period. You may even want to close it after a certain lifetime (this is useful for re-distributing load across multiple backends, for instance).

I don't know enough about what you are trying to do to understand what you want from a connection notification. I will say that for gRPC, we make no association between a "client" and a "connection." A single client can have more than one connection to the same backend (this can happen normally due to GOAWAY, but also possibly with client-side load balancing), and a single connection to a backend can carry multiple clients (in the case of an L7/HTTP proxy).