--
You received this message because you are subscribed to the Google Groups "grpc.io" group.
To unsubscribe from this group and stop receiving emails from it, send an email to grpc-io+unsubscribe@googlegroups.com.
To post to this group, send email to grp...@googlegroups.com.
Visit this group at https://groups.google.com/group/grpc-io.
To view this discussion on the web visit https://groups.google.com/d/msgid/grpc-io/79d883f9-cbc3-4281-9c16-3f5b7edaff3e%40googlegroups.com.
For more options, visit https://groups.google.com/d/optout.
In my setup, I use the async API. My main thread processes events/tags while a separate thread pumps the completion queue. Now, suppose I want to shut down my server instance. Currently, I can’t call grpc::Server::Shutdown without first making sure that all my in-progress events are finished and that I have destroyed every reference to the library objects (say ServerContext, which in theory should be independent of the server’s lifetime). However, without calling grpc::Server::Shutdown, my server may keep getting new events from new clients requesting new rpcs. It would be much easier if I had an API that let me cancel pending rpcs (the ones that have not been started yet). At the moment, the only way for me to do this is to keep a list of rpcs that have not started and manually call serverContext->TryCancel on them.
Are you suggesting that with the new flow, I can simply call grpc::Server::Shutdown, have it cancel all the pending rpcs/events (some of which can cause freeing of additional resources like ServerContext), and the things that currently attach their lifetime to the server would instead hang on to the GrpcLibrary?
Thanks.
Users reported bugs related to this issue. Some of the issues can be avoided or worked around by strengthening the requirement or with minor tweaks of the code. Some are not so easy to fix without potential performance overhead. The internal doc contains a couple more links to related issues people encountered.
Hi there, I'd very much like to discuss this issue. Switching to explicit initialization increases friction for users, but keeping it the existing way just increases friction for the library writers (unless the code ends up being so failure-prone that it affects users through a loss of stability). Has there been a user feature request for explicit initialization?
On Tuesday, December 12, 2017 at 9:40:21 AM UTC-8, Yang Gao wrote:
Hi, I have created a gRFC in https://github.com/grpc/proposal/pull/48 which will add a new C++ class to make the gRPC C++ library lifetime explicit. If you have comments and suggestions, please use this thread to discuss. Thanks.
I create an encapsulation for an Rpc object that has its methods executed in response to the events. The object contains a ServerContext, which is naturally destroyed when the Rpc object goes out of scope. The Rpc object is destroyed when the async ‘done’ event from grpc is received. All the rpcs also queue up their ‘request’ event in the completion queue.
When my application wants to shut down, I call the server shutdown routine followed by a completion queue shutdown. During this processing, the ‘done’ tag is received for queued-up rpcs and rpcs in progress. There is no other way of receiving the ‘done’ tag for these rpcs. So my Rpc objects are destroyed in parallel.
It should also be noted that grpc can send a “done” event for an rpc while an async op for the same rpc is waiting. So when I receive the done tag, I wait for the other async ops to finish for the object. Once I know that everything is cleaned up, I go ahead and destroy.