When reviewing the API docs, this was the only way I could implement a persistent "connection"; otherwise a new TCP connection is made for each RPC call,
and there are no "lifecycle" events on the server side to determine when a connection is dead, etc.
Is it also possible that if I use a TLS connection, the gRPC connection will be stable across RPC calls?
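For what it's worth, a minimal sketch of dialing once and reusing the same `grpc.ClientConn` for every RPC; the `ClientConn` multiplexes calls over one HTTP/2 connection regardless of TLS. The address, the generated `pb` package, and the `KVClient` stub are all placeholders, not a real service:

```go
package main

import (
	"context"
	"log"
	"time"

	"google.golang.org/grpc"

	pb "example.com/myapp/proto" // hypothetical generated package
)

func main() {
	// Dial once; the ClientConn multiplexes all RPCs over one
	// HTTP/2 (TCP) connection and reconnects transparently.
	conn, err := grpc.Dial("localhost:50051", grpc.WithInsecure())
	if err != nil {
		log.Fatalf("dial: %v", err)
	}
	defer conn.Close()

	client := pb.NewKVClient(conn) // share this one client everywhere

	for i := 0; i < 3; i++ {
		ctx, cancel := context.WithTimeout(context.Background(), time.Second)
		// Each call reuses the existing connection; no new TCP handshake.
		_, err := client.Get(ctx, &pb.GetRequest{Key: "k"})
		cancel()
		if err != nil {
			log.Printf("rpc: %v", err)
		}
	}
}
```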
--
You received this message because you are subscribed to the Google Groups "grpc.io" group.
Yes, I see how I can reuse the ClientConn across db-open requests. The problem I have is that even if I set a 'max idle' (and add client code to 'ping' within that interval), I don't see any method on the server side to detect when the connection was dropped — there does not seem to be a callback (or channel) for it in the Go API?
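For the record, grpc-go does surface transport lifecycle events through the `stats.Handler` interface: the server invokes `HandleConn` with `*stats.ConnBegin` / `*stats.ConnEnd` as connections come and go. A sketch, with the logging and the `connKey` correlation purely illustrative:

```go
package main

import (
	"context"
	"log"
	"net"

	"google.golang.org/grpc"
	"google.golang.org/grpc/stats"
)

// connTracker implements grpc/stats.Handler to observe connection
// begin/end events on the server side.
type connTracker struct{}

type connKey struct{}

func (connTracker) TagRPC(ctx context.Context, _ *stats.RPCTagInfo) context.Context {
	return ctx
}
func (connTracker) HandleRPC(context.Context, stats.RPCStats) {}

func (connTracker) TagConn(ctx context.Context, info *stats.ConnTagInfo) context.Context {
	// Tag the context so ConnEnd can be correlated with its ConnBegin.
	return context.WithValue(ctx, connKey{}, info.RemoteAddr.String())
}

func (connTracker) HandleConn(ctx context.Context, s stats.ConnStats) {
	addr, _ := ctx.Value(connKey{}).(string)
	switch s.(type) {
	case *stats.ConnBegin:
		log.Printf("connection from %s opened", addr)
	case *stats.ConnEnd:
		// Fires when the transport dies for any reason (idle
		// timeout, I/O error, client close): clean up cached
		// resources for this connection here.
		log.Printf("connection from %s closed", addr)
	}
}

func main() {
	lis, err := net.Listen("tcp", ":50051")
	if err != nil {
		log.Fatal(err)
	}
	srv := grpc.NewServer(grpc.StatsHandler(connTracker{}))
	// ...register services...
	log.Fatal(srv.Serve(lis))
}
```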
Ok, so my original statement — that I am forced to use a streaming RPC — was correct after all. I thought you said that was the case, but then you offered what seemed like other solutions that would allow me to use individual RPCs…
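If the resource lifetime is tied to a long-lived stream, the server side does learn about the disconnect through the stream's context. A sketch of the handler shape, assuming a hypothetical server-streaming `Watch` method and made-up `openHandle`/`closeHandle` helpers:

```go
// Server-side handler for a hypothetical server-streaming RPC
// `rpc Watch(WatchRequest) returns (stream Event)`. The stream's
// context is cancelled when the client goes away, so cleanup can
// hang off <-ctx.Done().
func (s *server) Watch(req *pb.WatchRequest, stream pb.KV_WatchServer) error {
	h := s.openHandle(req.Db) // hypothetical cached resource
	defer s.closeHandle(h)    // runs on disconnect as well

	ctx := stream.Context()
	for {
		select {
		case <-ctx.Done():
			// Client disconnected (or deadline hit): release resources.
			return ctx.Err()
		case ev := <-h.Events():
			if err := stream.Send(ev); err != nil {
				return err // a send failure also signals a dead client
			}
		}
	}
}
```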
But then why have the 'idle time' at all, if connections are terminated when there are no more RPCs? I don't understand why the server process can't be notified when a connection is "terminated" — due to idle timeout or any other reason, like I/O errors — so it can clean up cached resources. It doesn't make a lot of sense to me.
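For reference, the server-side idle termination is configured through the keepalive server parameters; `MaxConnectionIdle` closes a transport that has had no active RPCs for the given duration (the value below is illustrative). The resulting transport close is exactly what a `stats.Handler` or a stream's context would then observe:

```go
import (
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/keepalive"
)

// Close connections that have carried no RPCs for 5 minutes.
var srv = grpc.NewServer(grpc.KeepaliveParams(keepalive.ServerParameters{
	MaxConnectionIdle: 5 * time.Minute,
}))
```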
Sorry — meaning that I need to add my own 'idle timeout' at the server application level, to accomplish essentially the same thing the RPC connection is already doing…