gRPC A6: Retries


ncte...@google.com

Feb 10, 2017, 7:31:01 PM
to grpc.io
I've created a gRFC describing the design and implementation plan for gRPC Retries.

Take a look at the gRFC on GitHub.

Michael Rose

Feb 11, 2017, 4:57:57 PM
to grpc.io
A few questions:

1) Under this design, is it possible to add load balancing constraints for retried/hedged requests? Especially during hedging, I'd like to be able to try a different server, since the original server might be garbage collecting or have otherwise accumulated a queue of requests such that a retry/hedge to this server will not be very useful. Or perhaps the key I'm looking up lives on a specific subset of storage servers and therefore should be balanced to that specific subset. While that's the domain of an LB policy, what information will hedging/retries provide to the LB policy?

2) "Clients cannot override retry policy set by the service config." -- is this intended for inside Google? How about gRPC users outside of Google who don't use the DNS mechanism to push configuration? It seems like having a client override for retry/hedging policy is pragmatic.

3) Retry backoff time -- if I'm reading it right, it will always retry in random(0, current_backoff) milliseconds. What's your feeling on this vs. a retry with a configurable jitter parameter (e.g., a linear 1000ms increase with 10% jitter)? Is it OK if there's no minimum backoff?

Regards,
Michael

Josh Humphries

Feb 12, 2017, 9:26:45 AM
to Michael Rose, grpc.io
On Sat, Feb 11, 2017 at 4:57 PM, 'Michael Rose' via grpc.io <grp...@googlegroups.com> wrote:

3) Retry backoff time -- if I'm reading it right, it will always retry in random(0, current_backoff) milliseconds. What's your feeling on this vs. a retry with a configurable jitter parameter (e.g., a linear 1000ms increase with 10% jitter)? Is it OK if there's no minimum backoff?


I was about to ask the same thing :)

The current text:
The failed RPCs will be retried after x seconds, where x is defined as random(0, current_backoff).

What I would instead expect:
The failed RPCs will be retried after x seconds, where x is defined as random(current_backoff*(1-jitter), current_backoff*(1+jitter)) where jitter would be 0.25, for example, to indicate 25% jitter.
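For concreteness, the two schemes can be sketched as follows (an illustrative sketch, not from the gRFC; the function names are my own):

```python
import random

def full_jitter_delay(current_backoff):
    # Scheme in the gRFC draft: uniform over [0, current_backoff].
    # Mean delay is current_backoff / 2.
    return random.uniform(0, current_backoff)

def percentage_jitter_delay(current_backoff, jitter=0.25):
    # Scheme proposed above: uniform over current_backoff * [1 - jitter, 1 + jitter].
    # Mean delay is current_backoff itself; jitter=0.25 means 25% jitter.
    return random.uniform(current_backoff * (1 - jitter),
                          current_backoff * (1 + jitter))
```

Note that the means of the two distributions differ by a factor of two, which matters when documenting what a configured backoff value actually means.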




CONFIDENTIALITY NOTICE: This email message, and any documents, files or previous e-mail messages attached to it is for the sole use of the intended recipient(s) and may contain confidential and privileged information. Any unauthorized review, use, disclosure or distribution is prohibited. If you are not the intended recipient, please contact the sender by reply email and destroy all copies of the original message.

--
You received this message because you are subscribed to the Google Groups "grpc.io" group.
To unsubscribe from this group and stop receiving emails from it, send an email to grpc-io+unsubscribe@googlegroups.com.
To post to this group, send email to grp...@googlegroups.com.
Visit this group at https://groups.google.com/group/grpc-io.
To view this discussion on the web visit https://groups.google.com/d/msgid/grpc-io/62809dba-3349-4a60-9aa9-ccc044d27f53%40googlegroups.com.

For more options, visit https://groups.google.com/d/optout.

Eric Gribkoff

Feb 12, 2017, 9:24:59 PM
to Michael Rose, grpc.io
Hi Michael,

Thanks for the feedback. Responses to your questions (and Josh's follow-up question on retry backoff times) are inline below.

On Sat, Feb 11, 2017 at 1:57 PM, 'Michael Rose' via grpc.io <grp...@googlegroups.com> wrote:
A few questions:

1) Under this design, is it possible to add load balancing constraints for retried/hedged requests? Especially during hedging, I'd like to be able to try a different server, since the original server might be garbage collecting or have otherwise accumulated a queue of requests such that a retry/hedge to this server will not be very useful. Or perhaps the key I'm looking up lives on a specific subset of storage servers and therefore should be balanced to that specific subset. While that's the domain of an LB policy, what information will hedging/retries provide to the LB policy?


We are not supporting explicit load balancing constraints for retries. The retry attempt or hedged RPC will be re-resolved through the load balancer, so it's up to the service owner to ensure that this has a low likelihood of issuing the request to the same backend. This is part of a decision to keep the retry design as simple as possible while satisfying the majority of use cases. If your load-balancing policy has a high likelihood of sending requests to the same server each time, hedging (and to some extent retries) will be less useful regardless. There will be metadata attached to the call indicating that it's a retry, but it won't include information about which servers the previous requests went to.

 
2) "Clients cannot override retry policy set by the service config." -- is this intended for inside Google? How about gRPC users outside of Google who don't use the DNS mechanism to push configuration? It seems like having a client override for retry/hedging policy is pragmatic.


In general, we don't want to support client specification of retry policies. The necessary information about what methods are safe to retry or hedge, the potential for increased load, etc., reflects decisions that should be left to the service owner. The retry policy will definitely be a part of the service config. While there are still some security-related discussions about the exact delivery mechanism for the service config and retry policies, I think your concern here should be part of the service config design discussion rather than something specific to retry support.
 
3) Retry backoff time -- if I'm reading it right, it will always retry in random(0, current_backoff) milliseconds. What's your feeling on this vs. a retry with a configurable jitter parameter (e.g., a linear 1000ms increase with 10% jitter)? Is it OK if there's no minimum backoff?


You are reading the backoff time correctly. There are a number of ways of doing this (see https://www.awsarchitectureblog.com/2015/03/backoff.html), but choosing random(0, current_backoff) is intentional and should generally give the best results. We do not want a configurable "jitter" parameter. Empirically, retries should have more varied backoff times, and we also do not want to let service owners specify very low values for jitter (e.g., 1% or even 0), as this would cluster all retries tightly together and further contribute to server overloading.
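As a sketch of how this composes with exponential backoff (parameter names here are illustrative; the gRFC defines the exact configuration fields):

```python
import random

def retry_delays(initial_backoff, max_backoff, multiplier, max_attempts):
    """Yield the delay before each retry: random(0, current_backoff),
    where current_backoff starts at initial_backoff and grows by
    multiplier per attempt, capped at max_backoff."""
    current = initial_backoff
    for _ in range(max_attempts - 1):  # the first attempt is sent immediately
        yield random.uniform(0, current)
        current = min(current * multiplier, max_backoff)
```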

Best,

Eric Gribkoff
 


elemen...@gmail.com

Feb 12, 2017, 10:26:23 PM
to grpc.io, mic...@fullcontact.com
> We are not supporting explicit load balancing constraints for retries. The retry attempt or hedged RPC will be re-resolved through the load-balancer, so it's up to the service owner to ensure that this has a low-likelihood of issuing the request to the same backend.

That seems fairly difficult for any service with request-dependent routing semantics. Let's use a DFS as an example: many DFSes maintain N replicas of a given file block. When you send a hedged request for a block, there is a 1/N chance of re-querying the same DFS node, which might well have a slow disk. At least for us using HDFS, N=3 most of the time, and therefore a 33% chance of re-querying the same node. Even assuming a smart load-balancing service which intelligently removes poorly performing storage nodes from service, it still seems desirable to ensure hedged requests go to a different node. Not having a story for more informed load balancing seems like it makes a lot of use cases more difficult than they need to be.
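The 1/N figure is easy to check with a quick simulation (purely illustrative; it assumes the balancer picks uniformly at random among replicas):

```python
import random

def same_replica_fraction(n_replicas, trials=100_000, seed=42):
    """Fraction of hedged requests that land on the same replica as the
    original request, under uniform random selection."""
    rng = random.Random(seed)
    hits = sum(rng.randrange(n_replicas) == rng.randrange(n_replicas)
               for _ in range(trials))
    return hits / trials
```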

Regards,
Michael

Josh Humphries

Feb 13, 2017, 10:02:39 AM
to Eric Gribkoff, Michael Rose, grpc.io
On Sun, Feb 12, 2017 at 9:24 PM, 'Eric Gribkoff' via grpc.io <grp...@googlegroups.com> wrote:
Hi Michael,

Thanks for the feedback. Responses to your questions (and Josh's follow-up question on retry backoff times) are inline below.

 
3) Retry backoff time -- if I'm reading it right, it will always retry in random(0, current_backoff) milliseconds. What's your feeling on this vs. a retry with a configurable jitter parameter (e.g., a linear 1000ms increase with 10% jitter)? Is it OK if there's no minimum backoff?


You are reading the backoff time correctly. There are a number of ways of doing this (see https://www.awsarchitectureblog.com/2015/03/backoff.html), but choosing random(0, current_backoff) is intentional and should generally give the best results. We do not want a configurable "jitter" parameter. Empirically, retries should have more varied backoff times, and we also do not want to let service owners specify very low values for jitter (e.g., 1% or even 0), as this would cluster all retries tightly together and further contribute to server overloading.

In that case, perhaps it should be random(0, 2*current_backoff) so that the mean is the targeted backoff (with effectively 100% jitter). Otherwise, documentation will need to be very clear that the actual expected backoff is half of any configured value.
 


Mark D. Roth

Feb 13, 2017, 10:04:15 AM
to elemen...@gmail.com, grpc.io, mic...@fullcontact.com, David Garcia Quintas
(+dgq)

I think this is actually a question to be addressed in the load-balancing affinity design, which David is working on.  I suspect that the main thing we need to do is to expose the request metadata that indicates that a request is a retry to the LB policy, so that it can use that information to make its decision.  Then it's up to the LB policy to notice that a request is a retry and apply any necessary logic for that case.





--
Mark D. Roth <ro...@google.com>
Software Engineer
Google, Inc.

Mark D. Roth

Feb 27, 2017, 11:53:20 AM
to Noah Eisen, grpc.io, Craig Tiller, Penn Zhang, Eric Anderson, Saila Talagadadeevi, Menghan Li
While talking with Craig on Friday, we realized that we need to make the wire protocol a bit stricter in order to implement retries.

Currently, the spec allows status to be sent either as part of initial metadata or trailing metadata.  However, as per the When Retries are Valid section of the gRFC, an RPC becomes committed when "the client receives a non-error response (either an explicit OK status or any response message) from the server".  This means that in a case where the server sends a retryable status, if the status is not included in the initial metadata, the client will consider the RPC committed as soon as it receives the initial metadata, even if the only thing sent after that is the trailing metadata that includes the status.  Thus, we need to require that whenever the server sends a status without sending any messages, it should include the status in the initial metadata (and then close the stream without sending trailing metadata), instead of sending both initial metadata and then trailing metadata.
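A sketch of the client-side rule this implies (hypothetical illustration, not from any gRPC implementation; status codes are shown by name rather than by wire value, and the retryable set is just an example policy):

```python
RETRYABLE_CODES = {"UNAVAILABLE", "RESOURCE_EXHAUSTED"}  # example policy

def classify_initial_metadata(metadata):
    """Decide what receiving initial metadata means for retries.

    If the server folded grpc-status into the initial metadata (a
    trailers-only style response), the RPC is not yet committed, so a
    retryable status may still be retried. Plain initial metadata with
    no status commits the RPC and rules out any later retry.
    """
    status = metadata.get("grpc-status")
    if status is None:
        return {"committed": True, "retryable": False}
    return {"committed": False,
            "retryable": status in RETRYABLE_CODES}
```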

Noah, can you please add a note about this to the gRFC?

Based on a previously encountered interop problem (see https://github.com/markdroth/grpc/pull/3, which was included in https://github.com/grpc/grpc/pull/7201), I believe that grpc-go already does the right thing here (although Saila and Menghan should confirm that).  However, since that previously encountered problem did not show up with Java or C++, I suspect that those stacks do not do the right thing here.

Craig has confirmed that C-core needs to be fixed in this regard, and I've filed https://github.com/grpc/grpc/issues/9883 for that change.

Eric and Penn, can you confirm that Java will need to be changed?  I'm hoping that this isn't too invasive of a change, but please let us know if you foresee any problems.

Please let me know if anyone has any questions or problems with any of this.  Thanks!


mic...@improbable.io

Feb 28, 2017, 10:37:21 AM
to grpc.io, ncte...@google.com, cti...@google.com, zda...@google.com, ej...@google.com, sai...@google.com, meng...@google.com
Right, let me chip in and continue the discussion from: https://github.com/grpc/proposal/pull/12#issuecomment-283063869 here. My comments are based on experience building a gRPC-Go interceptor for retries and using it in production at Improbable. It's important to note that we're pretty heavy users of gRPC (using it across 3 languages: Go, Java and C++), as we have quite a few people around (myself included) who are familiar with gRPC/Stubby from their prior jobs.

Now, to recap the points:


Thank you for the comments. We are trying to keep high-level discussion on the email thread (see here) but my responses to your points (b) and (c) are below.

> > b) retry logic would benefit a lot from knowing whether the method is idempotent or not. I understand that this is supposed to be handled by "service configs", but realistically they're really hard to use. Few people would put their retry logic in DNS TXT entries, and even fewer people operate the gRPC LB protocol. Can we consider adding a .proto option (annotation) to Method definitions?

> There are two concerns here. One is that saying a method is idempotent really just means "retry on status codes x, y, and z". If we pick a canonical set of idempotent status codes, we are forcing every service owner to obey these semantics if they want to use retries. However, the gRPC status code mechanism is very flexible (even allowing services to return UNAVAILABLE arbitrarily), so we'd prefer to force service owners to consider the semantics of their application and pick a concrete set of status codes rather than just flipping an "idempotent" switch. The second concern is around the ease of use of the service config. The intent is for the service config to be a universally useful mechanism, and we want to avoid just baking everything into the proto. Concerns about the delivery mechanism for service config shouldn't invalidate its use for encoding retry policy, and may be something we have to tackle separately.

I appreciate the push for the service config, and given my past SRE experience, I understand the motivation. However, in the open-source world simple solutions seem to get the most traction. I would be hesitant to tie a very important feature (such as retries) to service config adoption.

I understand your concerns around the flexibility of status codes, and I wasn't advocating being prescriptive about it. However, I do think that having an option inside the .proto is a very valid approach. As a user of gRPC for internal and external purposes, there are three ways I use .protos as interfaces:
 * internal teams use each other's services, coding against the code-generated interfaces that come out of .proto files
 * we provide our end users a set of "published" .proto files and guides on how to generate and use them in the language of their choice through gRPC code generation (the true power of gRPC)
 * we provide our end users rich client APIs for any language, in which case all bets are off and anything can be implemented

As such, for both external and internal services, the .proto is the canonical contract, controlled by the team building the service. Thus, something like the following satisfies the most common use case:

rpc RemoveTag(TagRemoveRequest) returns (google.protobuf.Empty) {
  option (grpc.extensions) = {
    retriable_codes: ["UNAVAILABLE", "RESOURCE_EXHAUSTED"]
  };
}


> > c) One thing I found out useful "in the wild" is the ability to limit the Deadline of the retriable call. For example, the "parent" RPC call (user invoked) has a deadline of 5s, but each retriable call only 1s. This allows you to skip a "deadlining" server and retry against one that works.
> This is covered by our hedging policy. There doesn't seem to be any reason to cancel the first RPC in your scenario, as it may be just about to complete on the server and cancellation implies the work done so far is wasted. Instead, hedging allows you to send the first request, wait one second, send a second request, and accept the results of whichever completes first.

OK, this makes sense. However, if the second hedged request completes before the first one, can we make sure that the spec expects the first call to be CANCELLED, so we can potentially free up resources? One unfortunate thing about working outside an environment where everything is Stubby is that a lot of the time request handling holds up resources. For example, you establish another HTTP/1.1 connection to a backend as part of serving your RPC. I'll augment my gRPC interceptors for Go to use this.


Mark D. Roth

Feb 28, 2017, 1:09:48 PM
to mic...@improbable.io, grpc.io, Noah Eisen, Craig Tiller, Penn Zhang, Eric Anderson, Saila Talagadadeevi, Menghan Li
On Tue, Feb 28, 2017 at 7:37 AM, <mic...@improbable.io> wrote:
Right, let me chip in and continue the discussion from: https://github.com/grpc/proposal/pull/12#issuecomment-283063869 here. My comments are based on experience building a gRPC-Go interceptor for retries and using it in production at Improbable. It's important to note that we're pretty heavy users of gRPC (using it across 3 languages: Go, Java and C++), as we have quite a few people around (myself included) who are familiar with gRPC/Stubby from their prior jobs.

Now, to recap the points:


Thank you for the comments. We are trying to keep high-level discussion on the email thread (see here) but my responses to your points (b) and (c) are below.

> > b) retry logic would benefit a lot from knowing whether the method is idempotent or not. I understand that this is supposed to be handled by "service configs", but realistically they're really hard to use. Few people would put their retry logic in DNS TXT entries, and even fewer people operate the gRPC LB protocol. Can we consider adding a .proto option (annotation) to Method definitions?

> There are two concerns here. One is that saying a method is idempotent really just means "retry on status codes x, y, and z". If we pick a canonical set of idempotent status codes, we are forcing every service owner to obey these semantics if they want to use retries. However, the gRPC status code mechanism is very flexible (even allowing services to return UNAVAILABLE arbitrarily), so we'd prefer to force service owners to consider the semantics of their application and pick a concrete set of status codes rather than just flipping an "idempotent" switch. The second concern is around the ease of use of the service config. The intent is for the service config to be a universally useful mechanism, and we want to avoid just baking everything into the proto. Concerns about the delivery mechanism for service config shouldn't invalidate its use for encoding retry policy, and may be something we have to tackle separately.

I appreciate the push for the service config, and given my past SRE experience, I totally appreciate it. However, in the open source world simple solutions seem to get most traction. I would be hesitant to tie a very very important feature (such as retries) to the service config adoption. 

I understand your concerns around the flexibility of service codes, and I wasn't advocating being prescriptive about it. However, I do think that having an option inside the .proto is a very valid approach. As a user of gRPC for internal and external purposes there are three cases how I use .protos as interfaces:
 * have internal teams use each-other's services, in which case they code against code-generated interfaces that come out of .proto files
 * to our end users provide a set of "published" .proto files and guides of how to generate and use them in the language of their choice through gRPC code generation (the true power of gRPC).
 * to our end users provide rich client APIs for any language, in which case all bets are off and any thing can be implemented

As such, for both external and internal services, the .proto is the canonical contract, controlled by the team building the service. Thus having something like the following is satisfying the most common use case:

rpc RemoveTag(TagRemoveRequest) returns (google.protobuf.Empty) {
  option (grpc.extensions) = {
    retriable_codes: ["UNAVAILABLE", "RESOURCE_EXHAUSTED"]
  };
}

There are a couple of reasons why we don't want to support configuring this via the .proto file.

First, we don't want to require the use of protobuf in order to use gRPC.  We want people to be able to use gRPC with whatever serialization mechanism they want (e.g., thrift).  We could potentially extend the serializer abstraction that we currently use to provide a way to feed in this kind of configuration information, but that would make the interface to the serializer much more complex, which is undesirable.  And in the case of the C-core gRPC implementation, it would require reimplementing the serializer part in each wrapped language, rather than doing it just once in C-core.

Also, changes to the .proto file would require recompilation, which is undesirable.  We want things like retry parameters to be something that service owners can change at run-time without requiring action (such as recompiling) from all of their clients.

(In a few cases, we have considered providing a way to configure something through both the service config and the .proto file, but we've always wound up deciding against it, because it would add a lot of complexity.  Not only would we have to support two different code-paths for this, but we would also need to define complex conflict resolution logic to handle the case where we get different configuration from the two code-paths.)

I do recognize that there are problems with the TXT record approach to service configs (although I would like more user input on that proposal, so I would encourage you to weigh in on this in https://groups.google.com/d/topic/grpc-io/DkweyrWEXxU/discussion).  At the moment, my thinking is that we do want to make the TXT record mechanism available as a lowest-common-denominator approach, but there will definitely need to be alternatives.  For example, people can write their own third-party resolvers to use things like Zookeeper instead of DNS.  You could even write a resolver that just grabs the service config out of a local file and figure out your own mechanism for distributing that file to clients.
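A file-based approach could be as small as this (hypothetical sketch; the JSON shape follows the retry policy fields in the gRFC draft, and the name matching is deliberately simplified):

```python
import json

SERVICE_CONFIG = json.loads("""
{
  "methodConfig": [{
    "name": [{"service": "foo.Bar", "method": "RemoveTag"}],
    "retryPolicy": {
      "maxAttempts": 3,
      "initialBackoff": "0.1s",
      "maxBackoff": "1s",
      "backoffMultiplier": 2,
      "retryableStatusCodes": ["UNAVAILABLE"]
    }
  }]
}
""")

def retry_policy_for(config, service, method):
    """Return the retryPolicy for service/method, or None if no entry
    matches. An entry without a "method" applies to the whole service."""
    for mc in config.get("methodConfig", []):
        for name in mc.get("name", []):
            if (name.get("service") == service
                    and name.get("method") in (method, None)):
                return mc.get("retryPolicy")
    return None
```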

Anyway, I think this question is really more about the service config design than about the retry design.  But as I said above, I would encourage you to comment on the service config design thread.
 

Eric Gribkoff

unread,
Feb 28, 2017, 3:24:34 PM2/28/17
to mic...@improbable.io, grpc.io, Noah Eisen, Craig Tiller, Penn (Dapeng) Zhang, Eric Anderson, sai...@google.com, Menghan Li
On Tue, Feb 28, 2017 at 7:37 AM, <mic...@improbable.io> wrote:
Right, let me chip in and continue the discussion from: https://github.com/grpc/proposal/pull/12#issuecomment-283063869 here. My comments are based on experience building a gRPC-Go interceptor for retries and using it in production at Improbable. It's important to note that we're pretty heavy users of gRPC (using it across 3 languages: Go, Java and C++), as we have quite a few people around (myself included) who are familiar with gRPC/Stubby from their prior jobs.

Now, to recap the points:


Thank you for the comments. We are trying to keep high-level discussion on the email thread (see here) but my responses to your points (b) and (c) are below.

> > b) retry logic would benefit a lot from knowing whether the method is idempotent or not. I understand that this is supposed to be handled by "service configs", but realistically they're really hard to use. Few people would put their retry logic in DNS TXT entries, and even fewer people operate the gRPC LB protocol. Can we consider adding a .proto option (annotation) to Method definitions?

> There are two concerns here. One is that saying a method is idempotent really just means "retry on status codes x, y, and z". If we pick a canonical set of idempotent status codes, we are forcing every service owner to obey these semantics if they want to use retries. However, the gRPC status code mechanism is very flexible (even allowing services to return UNAVAILABLE arbitrarily), so we'd prefer that service owners consider the semantics of their application and pick a concrete set of status codes rather than just flipping an "idempotent" switch. The second concern is around the ease of use of the service config. The intent is for the service config to be a universally useful mechanism, and we want to avoid just baking everything into the proto. Concerns about the delivery mechanism for service config shouldn't invalidate its use for encoding retry policy, and may be something we have to tackle separately.

I appreciate the push for the service config, and given my past SRE experience, I totally appreciate it. However, in the open source world simple solutions seem to get most traction. I would be hesitant to tie a very very important feature (such as retries) to the service config adoption. 

I understand your concerns around the flexibility of status codes, and I wasn't advocating being prescriptive about it. However, I do think that having an option inside the .proto is a very valid approach. As a user of gRPC for internal and external purposes, there are three ways I use .protos as interfaces:
 * have internal teams use each other's services, in which case they code against code-generated interfaces that come out of .proto files
 * provide our end users with a set of "published" .proto files and guides on how to generate and use them in the language of their choice through gRPC code generation (the true power of gRPC)
 * provide our end users with rich client APIs for any language, in which case all bets are off and anything can be implemented

As such, for both external and internal services, the .proto is the canonical contract, controlled by the team building the service. Thus having something like the following is satisfying the most common use case:

rpc RemoveTag(TagRemoveRequest) returns (google.protobuf.Empty) {
  option (grpc.extensions) = {
    retriable_codes: ["UNAVAILABLE", "RESOURCE_EXHAUSTED"]
  };
}


> > c) One thing I found out useful "in the wild" is the ability to limit the Deadline of the retriable call. For example, the "parent" RPC call (user invoked) has a deadline of 5s, but each retriable call only 1s. This allows you to skip a "deadlining" server and retry against one that works.
> This is covered by our hedging policy. There doesn't seem to be any reason to cancel the first RPC in your scenario, as it may be just about to complete on the server and cancellation implies the work done so far is wasted. Instead, hedging allows you to send the first request, wait one second, send a second request, and accept the results of whichever completes first.

OK, this makes sense. However, can we make sure that if the second hedged request completes before the first one, the spec expects to CANCEL the first call, so we can potentially free up resources? One unfortunate thing about working outside an environment where everything is Stubby is that a lot of the time request handling holds up resources. For example, you establish another HTTP/1.1 connection to a backend as part of serving your RPC. I'll augment my gRPC interceptors for Go to use this.


Definitely. As soon as one request succeeds, or fails with a fatal status code (e.g., FAILED_PRECONDITION), the spec requires that remaining requests be cancelled and the result sent to the client application layer. The Summary of Retry and Hedging Logic section outlines this behavior. 
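As a rough model of that behavior (a sketch only, not any gRPC library's actual API), hedging can be expressed with asyncio: attempts start one hedging delay apart, and as soon as any attempt completes, the remaining ones are cancelled to free their resources:

```python
import asyncio

async def hedged(attempt, hedging_delay, max_attempts):
    """Run up to max_attempts copies of `attempt`, started hedging_delay
    seconds apart; return the first result and cancel the losing attempts.
    Sketch only: retryable-status handling and server pushback are omitted."""
    tasks = []
    try:
        for i in range(max_attempts):
            tasks.append(asyncio.ensure_future(attempt(i)))
            # Wait up to one hedging delay for any outstanding attempt.
            done, _ = await asyncio.wait(
                tasks, timeout=hedging_delay,
                return_when=asyncio.FIRST_COMPLETED)
            if done:
                return done.pop().result()
        # All attempts are in flight; take whichever finishes first.
        done, _ = await asyncio.wait(tasks, return_when=asyncio.FIRST_COMPLETED)
        return done.pop().result()
    finally:
        for t in tasks:
            t.cancel()  # free resources held by any still-running attempts
```

The `finally` block is the part this discussion is about: once one attempt wins, every other outstanding attempt gets cancelled rather than left to run to completion.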

Thanks,

Eric
 
On Monday, 27 February 2017 16:53:20 UTC, Mark D. Roth wrote:
While talking with Craig on Friday, we realized that we need to make the wire protocol a bit stricter in order to implement retries.

Currently, the spec allows status to be sent either as part of initial metadata or trailing metadata.  However, as per the When Retries are Valid section of the gRFC, an RPC becomes committed when "the client receives a non-error response (either an explicit OK status or any response message) from the server".  This means that in a case where the server sends a retryable status, if the status is not included in the initial metadata, the client will consider the RPC committed as soon as it receives the initial metadata, even if the only thing sent after that is the trailing metadata that includes the status.  Thus, we need to require that whenever the server sends status without sending any messages, the server should include the status in the initial metadata (and then close the stream without bothering to send trailing metadata) instead of sending both initial metadata and then trailing metadata.

Noah, can you please add a note about this to the gRFC?

Based on a previously encountered interop problem (see https://github.com/markdroth/grpc/pull/3, which was included in https://github.com/grpc/grpc/pull/7201), I believe that grpc-go already does the right thing here (although Saila and Menghan should confirm that).  However, since that previously encountered problem did not show up with Java or C++, I suspect that those stacks do not do the right thing here.

Craig has confirmed that C-core needs to be fixed in this regard, and I've filed https://github.com/grpc/grpc/issues/9883 for that change.

Eric and Penn, can you confirm that Java will need to be changed?  I'm hoping that this isn't too invasive of a change, but please let us know if you foresee any problems.

Please let me know if anyone has any questions or problems with any of this.  Thanks!

On Fri, Feb 10, 2017 at 4:31 PM, ncteisen via grpc.io <grp...@googlegroups.com> wrote:
I've created a gRFC describing the design and implementation plan for gRPC Retries.

Take a look at the gRFC on GitHub.

To view this discussion on the web visit https://groups.google.com/d/msgid/grpc-io/30e29cbc-439c-46c4-b54f-6e97637a0735%40googlegroups.com.



--
Mark D. Roth <ro...@google.com>
Software Engineer
Google, Inc.


Saila Talagadadeevi

unread,
Feb 28, 2017, 4:12:56 PM2/28/17
to Eric Gribkoff, mic...@improbable.io, grpc.io, Noah Eisen, Craig Tiller, Penn (Dapeng) Zhang, Eric Anderson, Menghan Li
I chatted with Mark this morning about overriding retries at the call level. In addition to config-driven retries based on RPC return error codes, it would be good if we could enable call-by-call overrides for retry. This would make sure the gRPC-provided retry is always used, and users would not have to write their own implementation when the global config does not work for application-specific scenarios.


Eric Anderson

unread,
Mar 1, 2017, 1:21:09 PM3/1/17
to Mark D. Roth, Noah Eisen, grpc.io, Craig Tiller, Penn Zhang, Saila Talagadadeevi, Menghan Li
On Mon, Feb 27, 2017 at 8:53 AM, 'Mark D. Roth' via grpc.io <grp...@googlegroups.com> wrote:
While talking with Craig on Friday, we realized that we need to make the wire protocol a bit stricter in order to implement retries.

Currently, the spec allows status to be sent either as part of initial metadata or trailing metadata.

Currently the spec doesn't say when it is appropriate. This is because the spec is only on the HTTP/2 level and doesn't actually define gRPC semantics.

I think you mean HTTP headers and trailers instead of using the term "metadata." gRPC always has trailing metadata, but may not have initial metadata. Status must come on the trailing metadata. In HTTP parlance, it may come in the initial headers only when those initial headers are the end of the response.

However, as per the When Retries are Valid section of the gRFC, an RPC becomes committed when "the client receives a non-error response (either an explicit OK status or any response message) from the server".

Just to be clear, the only time "an explicit OK status" would matter is with a streaming call. In a unary call the OK status will always be after the response message.
 
This means that in a case where the server sends a retryable status, if the status is not included in the initial metadata, the client will consider the RPC committed as soon as it receives the initial metadata, even if the only thing sent after that is the trailing metadata that includes the status.

What? That does not seem to be a proper understanding of the text, or the text is wrongly worded. Why would the RPC be "committed as soon as it receives the initial metadata"? That isn't in the text... In your example it seems it would be committed at "the trailing metadata that includes a status" as long as that status was OK, as per the "an explicit OK status" in the text.

Thus, we need to require that whenever the server sends status without sending any messages, the server should include the status in the initial metadata (and then close the stream without bothering to send trailing metadata) instead of sending both initial metadata and then trailing metadata.

This is generally good practice assuming you mean "headers" instead of "metadata". But I don't see any argument here for requiring it and I don't see any impact to retry.

Since an application can force initial headers to be sent (at least in Java), this can't really be a strong requirement. Java does do this generally though, as was required for our Auth support and similar conversion of gRPC status codes to HTTP status codes.

Based on a previously encountered interop problem (see https://github.com/markdroth/grpc/pull/3, which was included in https://github.com/grpc/grpc/pull/7201), I believe that grpc-go already does the right thing here (although Saila and Menghan should confirm that).  However, since that previously encountered problem did not show up with Java or C++, I suspect that those stacks do not do the right thing here.

If my correction of the nomenclature is correct, then Java already does this for the most part. This isn't something that can be enforced in Java. But the normal stub delays sending the initial metadata until the first response message. If the call is completed without any message, then only trailing metadata is sent.

Eric Anderson

unread,
Mar 1, 2017, 1:24:53 PM3/1/17
to Mark D. Roth, Noah Eisen, grpc.io, Craig Tiller, Penn Zhang, Saila Talagadadeevi, Menghan Li
On Wed, Mar 1, 2017 at 10:20 AM, Eric Anderson <ej...@google.com> wrote:
On Mon, Feb 27, 2017 at 8:53 AM, 'Mark D. Roth' via grpc.io <grp...@googlegroups.com> wrote:
However, as per the When Retries are Valid section of the gRFC, an RPC becomes committed when "the client receives a non-error response (either an explicit OK status or any response message) from the server".

Just to be clear, the only time "an explicit OK status" would matter is with a streaming call. In a unary call the OK status will always be after the response message.

Actually, that's still a bit misleading. Just to be more clear, the only time it matters is with a successful streaming response that has zero-messages. All other cases are either failure or have at least one message. And the status always comes after a message, if there is one.

Mark D. Roth

unread,
Mar 1, 2017, 1:44:17 PM3/1/17
to Saila Talagadadeevi, Eric Gribkoff, Michal Witkowski, grpc.io, Noah Eisen, Craig Tiller, Penn (Dapeng) Zhang, Eric Anderson, Menghan Li
As we've discussed, I don't think it makes sense to support an additional API for retries without a compelling use-case that can't be addressed some other way.  I think that the use-case you described where an application wants to choose whether or not to retry based on the content of the request would be better served by splitting the RPC API into two methods, one of which supports retries and one of which doesn't.



Mark D. Roth

unread,
Mar 1, 2017, 1:51:46 PM3/1/17
to Eric Anderson, Noah Eisen, grpc.io, Craig Tiller, Penn Zhang, Saila Talagadadeevi, Menghan Li
On Wed, Mar 1, 2017 at 10:20 AM, 'Eric Anderson' via grpc.io <grp...@googlegroups.com> wrote:
On Mon, Feb 27, 2017 at 8:53 AM, 'Mark D. Roth' via grpc.io <grp...@googlegroups.com> wrote:
While talking with Craig on Friday, we realized that we need to make the wire protocol a bit stricter in order to implement retries.

Currently, the spec allows status to be sent either as part of initial metadata or trailing metadata.

Currently the spec doesn't say when it is appropriate. This is because the spec is only on the HTTP/2 level and doesn't actually define gRPC semantics.

I think you mean HTTP headers and trailers instead of using the term "metadata." gRPC always has trailing metadata, but may not have initial metadata. Status must come on the trailing metadata. In HTTP parlance, it may come in the initial headers only when those initial headers are the end of the response.

However, as per the When Retries are Valid section of the gRFC, an RPC becomes committed when "the client receives a non-error response (either an explicit OK status or any response message) from the server".

Just to be clear, the only time "an explicit OK status" would matter is with a streaming call. In a unary call the OK status will always be after the response message.
 
This means that in a case where the server sends a retryable status, if the status is not included in the initial metadata, the client will consider the RPC committed as soon as it receives the initial metadata, even if the only thing sent after that is the trailing metadata that includes the status.

What? That does not seem to be a proper understanding of the text, or the text is wrongly worded. Why would the RPC be "committed as soon as it receives the initial metadata"? That isn't in the text... In your example it seems it would be committed at "the trailing metadata that includes a status" as long as that status was OK, as per the "an explicit OK status" in the text.

The language in the above quote is probably not as specific as it should be, at least with respect to the wire protocol.  The intent here is that the RPC should be considered committed when it receives either initial metadata or a payload message.

It is necessary that receiving initial metadata commits the RPC, because we need to report the initial metadata to the caller when it arrives.  If we retry after that and get a different set of metadata, then we are giving the application an inconsistent view of the result.

Noah, we should probably clarify the wording here. 
 

Thus, we need to require that whenever the server sends status without sending any messages, the server should include the status in the initial metadata (and then close the stream without bothering to send trailing metadata) instead of sending both initial metadata and then trailing metadata.

This is generally good practice assuming you mean "headers" instead of "metadata". But I don't see any argument here for requiring it and I don't see any impact to retry.

Since an application can force initial headers to be sent (at least in Java), this can't really be a strong requirement. Java does do this generally though, as was required for our Auth support and similar conversion of gRPC status codes to HTTP status codes.

I think it's fine for the server application to force initial metadata to be sent.  It's just that once the client sees that metadata, it will consider the RPC to be committed, and no retry will be attempted even if the request subsequently fails.
 

Based on a previously encountered interop problem (see https://github.com/markdroth/grpc/pull/3, which was included in https://github.com/grpc/grpc/pull/7201), I believe that grpc-go already does the right thing here (although Saila and Menghan should confirm that).  However, since that previously encountered problem did not show up with Java or C++, I suspect that those stacks do not do the right thing here.

If my correction of the nomenclature is correct, then Java already does this for the most part. This isn't something that can be enforced in Java. But the normal stub delays sending the initial metadata until the first response message. If the call is completed without any message, then only trailing metadata is sent.

Interesting.  If that's the case, then why did that interop test only fail with Go, not with Java?
 


Eric Anderson

unread,
Mar 1, 2017, 2:32:37 PM3/1/17
to Mark D. Roth, Noah Eisen, grpc.io, Craig Tiller, Penn Zhang, Saila Talagadadeevi, Menghan Li
On Wed, Mar 1, 2017 at 10:51 AM, 'Mark D. Roth' via grpc.io <grp...@googlegroups.com> wrote:
On Wed, Mar 1, 2017 at 10:20 AM, 'Eric Anderson' via grpc.io <grp...@googlegroups.com> wrote:
What? That does not seem to be a proper understanding of the text, or the text is wrongly worded. Why would the RPC be "committed as soon as it receives the initial metadata"? That isn't in the text... In your example it seems it would be committed at "the trailing metadata that includes a status" as long as that status was OK, as per the "an explicit OK status" in the text.

The language in the above quote is probably not as specific as it should be, at least with respect to the wire protocol.  The intent here is that the RPC should be considered committed when it receives either initial metadata or a payload message.

If initial metadata causes a commit, then the "any response message" will never apply, as initial metadata always comes first. So even the corrected intent you propose is questionable since one of the two conditions of "either initial metadata or a payload message" will never occur. Now, maybe the document is wrong or based on false assumptions and needs to be fixed, but the plain reading of text seems the only coherent interpretation at this point.

It is necessary that receiving initial metadata commits the RPC, because we need to report the initial metadata to the caller when it arrives.

That's not strictly true. It could be buffered until it was decided it is a "good" response. Yes, we may not want to do that, but it doesn't seem "necessary" unless it was discussed earlier in the thread.

If my correction of the nomenclature is correct, then Java already does this for the most part. This isn't something that can be enforced in Java. But the normal stub delays sending the initial metadata until the first response message. If the call is completed without any message, then only trailing metadata is sent.

Interesting.  If that's the case, then why did that interop test only fail with Go, not with Java?

Very good question. I don't know. I can't read that code well enough to figure out what was actually happening. My naïve reading of the change makes it look like PHP is now processing the initial metadata when previously it wasn't.

I don't see anything strange in Java's server that would change the behavior. I had previously thought that Go was the only implementation that always sent initial metadata on server-side. So I'm quite surprised to hear it being the only one that doesn't send initial metadata when unnecessary.

Eric Gribkoff

unread,
Mar 1, 2017, 5:47:42 PM3/1/17
to Eric Anderson, Mark D. Roth, Noah Eisen, grpc.io, Craig Tiller, Penn Zhang, Saila Talagadadeevi, Menghan Li
I think the terminology here gets confusing between initial/trailing metadata, gRPC rule names, and HTTP/2 frame types. Our retry design doc was indeed underspecified with regard to dealing with initial metadata, and will be updated. I go over all of the considerations in detail below.

For clarity, I will use all caps for the names of HTTP/2 frame types, e.g., HEADERS frame, and use the capitalized gRPC rule names from the specification.

The gRPC specification ensures that a status (containing a gRPC status code) is only sent in Trailers, which is contained in an HTTP/2 HEADERS frame. The only way that the gRPC status code can be contained in the first HTTP/2 frame received is if the server sends a Trailers-Only response.

Otherwise, the gRPC spec mandates that the first frame sent be the Response-Headers (again, sent in an HTTP/2 HEADERS frame). Response-Headers includes (optional) Custom-Metadata, which is usually what we are talking about when we say "initial metadata".

Regardless of whether the Response-Headers includes anything in its Custom-Metadata, if the gRPC client library notifies the client application layer of what metadata is (or is not) included, we now have to view the RPC as committed, aka no longer retryable. This is the only option, as a later retry attempt could receive different Custom-Metadata, contradicting what we've already told the client application layer.

We cannot include gRPC status codes in the Response-Headers along with "initial metadata". It's perfectly valid according to the spec for a server to send metadata along a stream in its Response-Headers, wait for one hour, then (without having sent any messages), close the stream with a retryable error.

However, the proposal that a server include the gRPC status code (if known) in the initial response is still sound. Concretely, this means: if a gRPC server has not yet sent Response-Headers and receives an error response, it should send a Trailers-Only response containing the gRPC status code. This would allow retry attempts on the client-side to proceed, if applicable. This is going to be superior to sending Response-Headers immediately followed by Trailers, which would cause the RPC to become committed on the client side (if the Response-Header metadata is made available to the client application layer) and stop retry attempts.

We still can encounter the case where a server intentionally sends Response-Headers to open a stream, then eventually closes the stream with an error without ever sending any messages. Such cases would not be retryable, but I think it's fair to argue that if the server *has* to send metadata in advance of sending any responses, that metadata is actually a response, and should be treated as such (i.e., their metadata just ensured the RPC will be committed on the client-side). 

Rather than either explicitly disallowing such behavior by modifying some specification (this behavior is currently entirely unspecified, so while specification is worthwhile, it should be separate from the retry policy design currently under discussion), we can just change the default server behavior of C++, and Go if necessary, to match Java. In Java servers, the Response-Headers are delayed until some response message is sent. If the server application returns an error status before sending a message, then Trailers-Only is sent instead of Response-Headers.

We can also leave it up to the gRPC client library implementation to decide when an RPC is committed based on received Response-Headers. If and while the client library can guarantee that the presence (or absence) of initial metadata is not visible to the client application layer, the RPC can be considered uncommitted. This is an implementation detail that should very rarely be necessary if the above change is made to default server behavior, but it would not violate anything in the retry spec or semantics.
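Restating those commitment rules as a toy client-side check (the status names and boolean inputs are illustrative, not a real client library API):

```python
# Status codes a retry policy has marked retryable (illustrative set).
RETRYABLE_CODES = {"UNAVAILABLE", "RESOURCE_EXHAUSTED"}

def attempt_is_retryable(headers_delivered_to_app, messages_received, status):
    """An attempt may be retried only if the RPC is not yet committed:
    the application layer has observed neither initial metadata nor any
    response message. Once committed, the result must be surfaced to the
    application regardless of status."""
    committed = headers_delivered_to_app or messages_received > 0
    if committed:
        return False
    return status in RETRYABLE_CODES
```

Note that the first condition is exactly the implementation-detail latitude described above: a client library that buffers Response-Headers away from the application layer can keep `headers_delivered_to_app` false and remain free to retry.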

Eric


Noah Eisen

unread,
Mar 1, 2017, 6:51:27 PM3/1/17
to elemen...@gmail.com, grpc.io, mic...@fullcontact.com
Hi Michael,

To address your comments, we will be making a small change to the load balancing policy with respect to hedging RPCs. The change will support passing the local lb_policy a list of previously used addresses. The list will essentially be, "if possible, don't choose one of these addresses." For most cases this will solve your concern about the relation between affinity routing and hedging.

These changes will only occur in the local lb_policy. We do not want to send any extra data over the wire due to performance concerns.

gRPC support for affinity routing is ongoing, but this change to the existing policy will make it easier to have hedging and affinity routing work together in the future.
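For illustration, the pick logic could look roughly like the following sketch (the function name and signature are hypothetical, not the actual lb_policy interface):

```python
import random

def pick_address(addresses, previously_used):
    """Prefer an address this RPC has not yet been sent to; if every
    address has already been tried, fall back to the full list rather
    than failing the pick ("if possible, don't choose one of these")."""
    used = set(previously_used)
    fresh = [a for a in addresses if a not in used]
    return random.choice(fresh or addresses)
```

The soft preference matters: a hedged attempt should still be sent somewhere even when the subchannel list is smaller than the number of attempts.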

On Sun, Feb 12, 2017 at 7:26 PM, <elemen...@gmail.com> wrote:

Michael Rose

unread,
Mar 1, 2017, 6:54:33 PM3/1/17
to Noah Eisen, Michael Rose, grpc.io
To address your comments, we will be making a small change to the load balancing policy with respect to hedging RPCs. The change will support passing the local lb_policy a list of previously used addresses. The list will essentially be, "if possible, don't choose one of these addresses." For most cases this will solve your concern about the relation between affinity routing and hedging.

It does! Thank you for your consideration, I definitely look forward to testing it out.

These changes will only occur in the local lb_policy. We do not want to send any extra data over the wire due to performance concerns.

Seems reasonable to me. Out of curiosity, are there any use cases for doing so (other than perhaps server-aided hedge canceling)?

Michael Rose
Team Lead, Identity Resolution
FullContact fullcontact.com
m: +1.720.837.1357 | t: @xorlev

Noah Eisen

unread,
Mar 1, 2017, 7:01:06 PM3/1/17
to Michael Rose, Michael Rose, grpc.io
The only use case we can think of so far would be an alternative solution to this routing affinity and hedging interaction. We initially discussed putting the previously tried addresses in the metadata of an RPC, and then the actual load balancing service would have access to it. But as mentioned, this was written off because of the extra overhead.

Mark D. Roth

unread,
Mar 2, 2017, 10:20:16 AM3/2/17
to Eric Gribkoff, Eric Anderson, Noah Eisen, grpc.io, Craig Tiller, Penn Zhang, Saila Talagadadeevi, Menghan Li
On Wed, Mar 1, 2017 at 2:47 PM, 'Eric Gribkoff' via grpc.io <grp...@googlegroups.com> wrote:
Rather than either explicitly disallowing such behavior by modifying some specification (this behavior is currently entirely unspecified, so while specification is worthwhile, it should be separate from the retry policy design currently under discussion), we can just change the default server behavior of C++, and Go if necessary, to match Java. In Java servers, the Response-Headers are delayed until some response message is sent. If the server application returns an error status before sending a message, then Trailers-Only is sent instead of Response-Headers.

We can also leave it up to the gRPC client library implementation to decide when an RPC is committed based on received Response-Headers. If and while the client library can guarantee that the presence (or absence) of initial metadata is not visible to the client application layer, the RPC can be considered uncommitted. This is an implementation detail that should very rarely be necessary if the above change is made to default server behavior, but it would not violate anything in the retry spec or semantics.

I think that leaving this unspecified will lead to interoperability problems in the future.  I would rather have the spec be explicit about this, so that all future client and server implementations can interoperate cleanly.
 


Eric Gribkoff

unread,
Mar 2, 2017, 11:09:36 AM3/2/17
to Mark D. Roth, Eric Anderson, Noah Eisen, grpc.io, Craig Tiller, Penn Zhang, Saila Talagadadeevi, Menghan Li
I've updated the gRFC document to include the latest discussions here.

On Thu, Mar 2, 2017 at 7:20 AM, Mark D. Roth <ro...@google.com> wrote:
On Wed, Mar 1, 2017 at 2:47 PM, 'Eric Gribkoff' via grpc.io <grp...@googlegroups.com> wrote:
I think the terminology here gets confusing between initial/trailing metadata, gRPC rule names, and HTTP/2 frame types. Our retry design doc was indeed underspecified in regards to dealing with initial metadata, and will be updated. I go over all of the considerations in detail below. 

For clarity, I will use all caps for the names of HTTP/2 frame types, e.g., HEADERS frame, and use the capitalized gRPC rule names from the specification.

The gRPC specification ensures that a status (containing a gRPC status code) is only sent in Trailers, which is contained in an HTTP/2 HEADERS frame. The only way that the gRPC status code can be contained in the first HTTP/2 frame received is if the server sends a Trailers-Only response.

Otherwise, the gRPC spec mandates that the first frame sent be the Response-Headers (again, sent in an HTTP/2 HEADERS frame). Response-Headers includes (optional) Custom-Metadata, which is usually what we are talking about when we say "initial metadata".


I think that leaving this unspecified will lead to interoperability problems in the future.  I would rather have the spec be explicit about this, so that all future client and server implementations can interoperate cleanly.
 

It's fair to say in the retry design that we must count an RPC as committed as soon as the Response-Headers arrive, and the doc now states this explicitly.

If you mean that we also need to change the gRPC spec to say *when* the server sends Response-Headers, I disagree. This is outside of the scope of a retry design. Retries will work fine whenever servers choose to send Response-Headers: since Response-Headers include initial metadata, which can contain arbitrary information, this is exactly the same from a retry perspective as the server sending any other response, and it commits the RPC. We can go so far as saying servers *should* delay sending Response-Headers until a message is sent by the server application layer, and the doc now states this explicitly.

Changing the gRPC spec to say that servers *must* delay sending Response-Headers until a message is sent may be a good idea, but it is not a requirement for retries and, in my opinion, should be left to a separate discussion. The semantics and operations of a retry policy are already clear, regardless of when servers choose to send Response-Headers, and the existing spec already allows the desirable behavior for retries with the Trailers-Only frame.
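The commitment rule being stated here can be sketched as a tiny client-side decision function. Everything below is hypothetical illustration, not a gRPC client API; in practice the set of retryable status codes would come from the retry policy in the service config:

```python
# Hypothetical sketch of the client-side commitment rule discussed above:
# an RPC is committed (no further attempts) the moment Response-Headers
# arrive, while a Trailers-Only failure with a retryable status code may
# still be retried.

RETRYABLE_CODES = {14}  # e.g. UNAVAILABLE, per the configured retry policy

def may_retry(first_frame, status_code=None):
    """Given the first frame of a call's response, decide whether another
    attempt is permitted. `status_code` accompanies Trailers-Only."""
    if first_frame == "Response-Headers":
        # Initial metadata may already be visible to the application,
        # so the RPC is committed regardless of what happens later.
        return False
    if first_frame == "Trailers-Only":
        return status_code in RETRYABLE_CODES
    raise ValueError(f"unexpected first frame: {first_frame!r}")

print(may_retry("Trailers-Only", 14))   # True
print(may_retry("Response-Headers"))    # False
```

Note that the decision depends only on which frame arrives first, which is why Trailers-Only for immediate errors matters so much to the design.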

Eric

Mark D. Roth

unread,
Mar 2, 2017, 11:15:35 AM3/2/17
to Eric Gribkoff, Eric Anderson, Noah Eisen, grpc.io, Craig Tiller, Penn Zhang, Saila Talagadadeevi, Menghan Li
On Thu, Mar 2, 2017 at 8:09 AM, Eric Gribkoff <ericgr...@google.com> wrote:

Changing the gRPC spec to say that servers *must* delay sending Response-Headers until a message is sent may be a good idea, but it is not a requirement for retries and, in my opinion, should be left to a separate discussion. The semantics and operations of a retry policy are already clear, regardless of when servers choose to send Response-Headers, and the existing spec already allows the desirable behavior for retries with the Trailers-Only frame.

I agree that we don't need to say anything about whether or not the server delays sending Response-Headers until a message is sent.  However, I think we should say that if the server is going to immediately signal failure without sending any messages, it should send Trailers-Only instead of Response-Headers followed by Trailers.

Eric Gribkoff

unread,
Mar 2, 2017, 11:24:40 AM3/2/17
to Mark D. Roth, Eric Anderson, Noah Eisen, grpc.io, Craig Tiller, Penn Zhang, Saila Talagadadeevi, Menghan Li
On Thu, Mar 2, 2017 at 8:15 AM, Mark D. Roth <ro...@google.com> wrote:

I agree that we don't need to say anything about whether or not the server delays sending Response-Headers until a message is sent.  However, I think we should say that if the server is going to immediately signal failure without sending any messages, it should send Trailers-Only instead of Response-Headers followed by Trailers.
 

This is in the retry gRFC doc now (https://github.com/ncteisen/proposal/blob/ad060be281c45c262e71a56e5777d26616dad69f/A6.md#when-retries-are-valid). The wire spec almost says it: "Trailers-Only is permitted for calls that produce an immediate error" (https://github.com/grpc/grpc/blob/master/doc/PROTOCOL-HTTP2.md). Do you want this changed in the wire spec itself or is the inclusion in the gRFC for retries sufficient?

Thanks,

Eric

Mark D. Roth

unread,
Mar 2, 2017, 11:38:06 AM3/2/17
to Eric Gribkoff, Eric Anderson, Noah Eisen, grpc.io, Craig Tiller, Penn Zhang, Saila Talagadadeevi, Menghan Li
On Thu, Mar 2, 2017 at 8:24 AM, Eric Gribkoff <ericgr...@google.com> wrote:



This is in the retry gRFC doc now (https://github.com/ncteisen/proposal/blob/ad060be281c45c262e71a56e5777d26616dad69f/A6.md#when-retries-are-valid). The wire spec almost says it: "Trailers-Only is permitted for calls that produce an immediate error" (https://github.com/grpc/grpc/blob/master/doc/PROTOCOL-HTTP2.md). Do you want this changed in the wire spec itself or is the inclusion in the gRFC for retries sufficient?

I think it would be good to also change the wire spec doc.  We should do something like changing "is permitted" to "SHOULD be used".  We may even want to specifically mention that this is important for retry functionality to work right.

Eric Anderson

unread,
Mar 2, 2017, 12:03:31 PM3/2/17
to Mark D. Roth, Eric Gribkoff, Noah Eisen, grpc.io, Craig Tiller, Penn Zhang, Saila Talagadadeevi, Menghan Li
 


The language is still confusing:
The client receives a non-error response from the server. Because of the gRPC wire specification, this will always be a Response-Headers frame containing the initial metadata.

What does "non-error response" mean there? I would have expected that means receiving a Status in some way (which is part of Response), as otherwise how is "error" decided. But the next part shows that isn't the case since Status isn't in Response-Headers.

The wire spec almost says it: "Trailers-Only is permitted for calls that produce an immediate error" (https://github.com/grpc/grpc/blob/master/doc/PROTOCOL-HTTP2.md). Do you want this changed in the wire spec itself or is the inclusion in the gRFC for retries sufficient?

I think it would be good to also change the wire spec doc.  We should do something like changing "is permitted" to "SHOULD be used".  We may even want to specifically mention that this is important for retry functionality to work right.

Changing to 'should' sounds fine. Although maybe there should be a note that clients can't decide if something is an 'immediate error' so there must not be any validation for it client-side.

Eric Gribkoff

unread,
Mar 2, 2017, 12:13:28 PM3/2/17
to Eric Anderson, Mark D. Roth, Noah Eisen, grpc.io, Craig Tiller, Penn Zhang, Saila Talagadadeevi, Menghan Li
On Thu, Mar 2, 2017 at 9:03 AM, 'Eric Anderson' via grpc.io <grp...@googlegroups.com> wrote:
The language is still confusing:
The client receives a non-error response from the server. Because of the gRPC wire specification, this will always be a Response-Headers frame containing the initial metadata.

What does "non-error response" mean there? I would have expected that means receiving a Status in some way (which is part of Response), as otherwise how is "error" decided. But the next part shows that isn't the case since Status isn't in Response-Headers.


The second sentence is defining what non-error response means: a Response-Headers frame. The only alternative (an "error" response) is Trailers-Only. I can choose a name other than "non-error response" to make this clear.
 


Mark D. Roth

unread,
Mar 2, 2017, 12:24:55 PM3/2/17
to Eric Gribkoff, Eric Anderson, Noah Eisen, grpc.io, Craig Tiller, Penn Zhang, Saila Talagadadeevi, Menghan Li
On Thu, Mar 2, 2017 at 9:13 AM, Eric Gribkoff <ericgr...@google.com> wrote:




The second sentence is defining what non-error response means: a Response-Headers frame. The only alternative (an "error" response) is Trailers-Only. I can choose a name other than "non-error response" to make this clear.

It would probably be simpler to simply say "The RPC is committed when the client receives Response-Headers."
 
 
The wire spec almost says it: "Trailers-Only is permitted for calls that produce an immediate error" (https://github.com/grpc/grpc/blob/master/doc/PROTOCOL-HTTP2.md). Do you want this changed in the wire spec itself or is the inclusion in the gRFC for retries sufficient?

I think it would be good to also change the wire spec doc.  We should do something like changing "is permitted" to "SHOULD be used".  We may even want to specifically mention that this is important for retry functionality to work right.

Changing to 'should' sounds fine. Although maybe there should be a note that clients can't decide if something is an 'immediate error' so there must not be any validation for it client-side.


Eric Anderson

unread,
Mar 2, 2017, 6:20:34 PM3/2/17
to Mark D. Roth, Eric Gribkoff, Noah Eisen, grpc.io, Craig Tiller, Penn Zhang, Saila Talagadadeevi, Menghan Li
On Thu, Mar 2, 2017 at 9:24 AM, Mark D. Roth <ro...@google.com> wrote:
On Thu, Mar 2, 2017 at 9:13 AM, Eric Gribkoff <ericgr...@google.com> wrote:
On Thu, Mar 2, 2017 at 9:03 AM, 'Eric Anderson' via grpc.io <grp...@googlegroups.com> wrote:
The language is still confusing:
The client receives a non-error response from the server. Because of the gRPC wire specification, this will always be a Response-Headers frame containing the initial metadata.

What does "non-error response" mean there? I would have expected that means receiving a Status in some way (which is part of Response), as otherwise how is "error" decided. But the next part shows that isn't the case since Status isn't in Response-Headers.


The second sentence is defining what non-error response means: a Response-Headers frame. The only alternative (an "error" response) is Trailers-Only. I can choose a name other than "non-error response" to make this clear.

It would probably be simpler to simply say "The RPC is committed when the client receives Response-Headers."

It is possible to receive Trailers-Only in the non-error case, assuming streaming is supported.

Eric Gribkoff

unread,
Mar 2, 2017, 7:04:23 PM3/2/17
to Eric Anderson, Saila Talagadadeevi, Noah Eisen, Mark D. Roth, Craig Tiller, Menghan Li, Penn Zhang, grpc.io
This is fine. Only RPCs that receive a non-OK status code may be retried; we don't need to cover this case in defining committed.

Eric Anderson

unread,
Mar 2, 2017, 7:29:41 PM3/2/17
to Mark D. Roth, Eric Gribkoff, Noah Eisen, grpc.io, Craig Tiller, Penn Zhang, Saila Talagadadeevi, Menghan Li
The spec was updated today to say:
gRPC servers must delay sending Response-Headers until the server's first response (a Length-Prefixed-Message) is to be sent on the stream.

Why is this must? It was changed from should, so this seems intentional. Java can't support must.

Eric Gribkoff

unread,
Mar 2, 2017, 7:46:11 PM3/2/17
to Eric Anderson, Mark D. Roth, Noah Eisen, grpc.io, Craig Tiller, Penn Zhang, Saila Talagadadeevi, Menghan Li
Your quote is missing the first part of the sentence.

To avoid unnecessarily committing an RPC on the client, gRPC servers must delay sending Response-Headers until the server's first response (a Length-Prefixed-Message) is to be sent on the stream.

The intent was "in order to achieve A, you must do B," not "you must always do B." If Response-Headers are always immediately sent, retries will never be possible. Hence, gRPC servers "must" delay the Response-Header to avoid unnecessarily committing an RPC.

Using should here instead would almost convey the same message, but needs further qualification. How about:

If Response-Headers are always immediately sent, retries will never be possible. Hence, gRPC servers should delay the Response-Header to avoid unnecessarily committing an RPC. Once Response-Headers are sent, retries will not be possible.

My intent was not to say we are changing the wire spec to must.  But Response-Headers constitute a response to the client and, if your gRPC server sends them eagerly, retries will never occur. This is allowable by the wire spec but should somehow be noted - somewhat strongly - in the retry specification.

Michael Lumish

unread,
Mar 2, 2017, 7:52:58 PM3/2/17
to Eric Gribkoff, Eric Anderson, Mark D. Roth, Noah Eisen, grpc.io, Craig Tiller, Penn Zhang, Saila Talagadadeevi, Menghan Li
Considering that the goal seems to be avoiding committing the RPC until the server-side application has started processing the call, perhaps we could say something like "gRPC servers should delay the Response-Header until the first response message or until the application code chooses to send headers".


Eric Anderson

unread,
Mar 2, 2017, 9:15:00 PM3/2/17
to Eric Gribkoff, Mark D. Roth, Noah Eisen, grpc.io, Craig Tiller, Penn Zhang, Saila Talagadadeevi, Menghan Li
On Thu, Mar 2, 2017 at 4:46 PM, Eric Gribkoff <ericgr...@google.com> wrote:
Your quote is missing the first part of the sentence.

To avoid unnecessarily committing an RPC on the client, gRPC servers must delay sending Response-Headers until the server's first response (a Length-Prefixed-Message) is to be sent on the stream.

The intent was "in order to achieve A, you must do B," not "you must always do B." If Response-Headers are always immediately sent, retries will never be possible. Hence, gRPC servers "must" delay the Response-Header to avoid unnecessarily committing an RPC.

That is quite ambiguous. To me it reads, "In order for the spec to maintain property A, you must do B." To be as you say, I would have expected an "If" or similar conditional at the beginning. Without a conditional, it always applies (even if only to support a niche use case). To me there is no ambiguity with should, since it is clear it is highly encouraged but may be optional in some cases, and you know what you would be losing if you chose not to.

Using should here instead would almost convey the same message, but needs further qualification.

Hmm... I guess I just don't see that.

How about:

If Response-Headers are always immediately sent, retries will never be possible.

That is not entirely true, due at least to network failures (I'd have to think about it more to weed out other possibilities).

 Hence, gRPC servers should delay the Response-Header to avoid unnecessarily committing an RPC.

"delay" is not specific enough. A server should not sleep(1000) before sending Response-Headers :-). Michael's proposed language seemed fine in this regard, mostly because it more closely matches the existing language.

Once Response-Headers are sent, retries will not be possible.

Also not entirely true. It matters when they are received, not sent. Sort of nit, but our specs are so complex, being precise reduces confusion and effort during reading.

Eric Gribkoff

unread,
Mar 2, 2017, 10:44:20 PM3/2/17
to Eric Anderson, Mark D. Roth, Noah Eisen, grpc.io, Craig Tiller, Penn Zhang, Saila Talagadadeevi, Menghan Li
Michael's suggestion sounds good to me.

Let me try again. I propose we change it to:

gRPC servers should delay the Response-Headers until the first response message or until the application code chooses to send headers. If the application code closes the stream with an error before sending headers or any response messages, gRPC servers should send the error in Trailers-Only.

I think these two sentences are now clear. If there is still any ambiguity, suggestions for better phrasing would be appreciated.

Thanks,

Eric

Eric Anderson

unread,
Mar 3, 2017, 11:52:53 AM3/3/17
to Eric Gribkoff, Mark D. Roth, Noah Eisen, grpc.io, Craig Tiller, Penn Zhang, Saila Talagadadeevi, Menghan Li
On Thu, Mar 2, 2017 at 7:44 PM, 'Eric Gribkoff' via grpc.io <grp...@googlegroups.com> wrote:

gRPC servers should delay the Response-Headers until the first response message or until the application code chooses to send headers. If the application code closes the stream with an error before sending headers or any response messages, gRPC servers should send the error in Trailers-Only.

SGTM 

Eric Anderson

unread,
Mar 10, 2017, 2:19:10 AM3/10/17
to Noah Eisen, grpc.io
I see that retries add the x-grpc-retry-pushback-ms and x-grpc-retry-attempts metadata keys. Is there a reason to prefix them with the x-, even though the rest of the grpc keys just use the grpc- prefix? I didn't see any discussion on that.

I also saw this in the spec:
The value for this field will be a human-readable integer.

I'm not sure that really contributes anything. "three", "0x3", "03", and "三" are all human-readable. I'd assume something akin to "base 10-encoded, positive integer, without unnecessary leading zeros" is probably intended. I'm also assuming not to send the metadata on the initial request, but that would be another assumption.

I'd also note x-grpc-retry-pushback-ms doesn't define the format, although the mention of the special-case -1 does imply some about it.
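
For illustration, the "base 10-encoded, positive integer, without unnecessary leading zeros" format suggested above can be captured in a one-line check. This is a hypothetical helper, not part of any gRPC library, and the metadata key names are still under discussion in this thread:

```java
// Hypothetical helper, not part of any gRPC library: validates the
// "base 10-encoded, positive integer, without unnecessary leading zeros"
// format suggested above for the retry-attempts metadata value.
public class RetryAttemptsFormat {
    static boolean isValidAttemptsValue(String v) {
        // One nonzero digit followed by any digits: rejects "03", "0x3",
        // "three", and the empty string.
        return v.matches("[1-9][0-9]*");
    }

    public static void main(String[] args) {
        System.out.println(isValidAttemptsValue("3"));   // true
        System.out.println(isValidAttemptsValue("03"));  // false
        System.out.println(isValidAttemptsValue("0x3")); // false
    }
}
```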

On Fri, Feb 10, 2017 at 4:31 PM, ncteisen via grpc.io <grp...@googlegroups.com> wrote:
I've created a gRFC describing the design and implementation plan for gRPC Retries.

Take a look at the gRFC on GitHub.


Mark D. Roth

unread,
Mar 13, 2017, 4:41:39 PM3/13/17
to Noah Eisen, grpc.io
After much discussion with the DNS and security folks, we've decided on a way to address the potential security issue of allowing an attacker to inject a service config with a large number of retries or hedged requests.  We will do this by imposing an upper bound on the max number of retries or hedged requests that are configurable via the service config.  That upper bound will be 5 by default, but applications will be able to explicitly override it if needed via a channel argument.

This approach not only limits the damage that can be caused by a malicious attacker but also damage that can be caused by a simple typo.
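
In effect, the bound amounts to a clamp like the following. This is only a sketch: the default of 5 and the channel-argument override come from the message above, while the method and parameter names are made up for illustration, not gRPC API:

```java
// Sketch of the proposed safety cap on service-config retry attempts.
// DEFAULT_CAP = 5 and the channel-argument override are from the design
// discussion above; everything else is an illustrative guess.
public class RetryCap {
    static final int DEFAULT_CAP = 5;

    // channelOverrideCap is null unless the application explicitly raised
    // (or lowered) the bound via a channel argument.
    static int effectiveMaxAttempts(int configuredMaxAttempts, Integer channelOverrideCap) {
        int cap = (channelOverrideCap != null) ? channelOverrideCap : DEFAULT_CAP;
        return Math.min(configuredMaxAttempts, cap);
    }

    public static void main(String[] args) {
        // A typo'd (or malicious) service config asking for 1000 attempts
        // is clamped to the default bound of 5.
        System.out.println(effectiveMaxAttempts(1000, null)); // 5
        System.out.println(effectiveMaxAttempts(3, null));    // 3
        System.out.println(effectiveMaxAttempts(10, 8));      // 8
    }
}
```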

Noah, can you please add a section about this to the design doc?  Thanks!

On Fri, Feb 10, 2017 at 4:31 PM, ncteisen via grpc.io <grp...@googlegroups.com> wrote:
I've created a gRFC describing the design and implementation plan for gRPC Retries.

Take a look at the gRFC on GitHub.




ol...@coda.io

unread,
Feb 26, 2019, 6:42:39 PM2/26/19
to grpc.io
Hi, I just wanted to bump this up - any updates on shipping retry support for public use? Thanks.


On Monday, March 13, 2017 at 1:41:39 PM UTC-7, Mark D. Roth wrote:
After much discussion with the DNS and security folks, we've decided on a way to address the potential security issue of allowing an attacker to inject a service config with a large number of retries or hedged requests.  We will do this by imposing an upper bound on the max number of retries or hedged requests that are configurable via the service config.  That upper bound will be 5 by default, but applications will be able to explicitly override it if needed via a channel argument.

This approach not only limits the damage that can be caused by a malicious attacker but also damage that can be caused by a simple typo.

Noah, can you please add a section about this to the design doc?  Thanks!
On Fri, Feb 10, 2017 at 4:31 PM, ncteisen via grpc.io <grp...@googlegroups.com> wrote:
I've created a gRFC describing the design and implementation plan for gRPC Retries.

Take a look at the gRFC on GitHub.


Mark D. Roth

unread,
Feb 27, 2019, 10:20:10 AM2/27/19
to ol...@coda.io, grpc.io
Unfortunately, we still can't provide any ETA on this.  We will definitely get back to this at some point, but we've got other higher priority work that will be occupying our attention for the next few quarters, so the soonest we might be able to get back to this would be toward the end of the year.

FWIW, I am personally committed to getting this done at some point, because I've already sunk about a year's worth of time into it, and I don't want that to have been for nothing. :)



liuwenz...@163.com

unread,
Mar 5, 2019, 10:44:12 PM3/5/19
to grpc.io
Before you provide the retry feature, I guess we can use an interceptor to do the same thing, like https://github.com/grpc-ecosystem/go-grpc-middleware/blob/master/retry/retry.go does?

On Wednesday, February 27, 2019 at 11:20:10 PM UTC+8, Mark D. Roth wrote:

Mark D. Roth

unread,
Mar 6, 2019, 10:03:49 AM3/6/19
to liuwenz...@163.com, grpc.io
Yes, you can write your own interceptors to perform retries, although they won't have quite the same functionality as the built-in implementation will.  For example, there's no way to guarantee from an interceptor that each attempt is routed to a different server when doing client-side load balancing.



al...@uber.com

unread,
Mar 19, 2019, 10:05:18 PM3/19/19
to grpc.io
I have a query about:

When gRPC receives a non-OK response status from a server, this status is checked against the set of retryable status codes in retryableStatusCodes to determine if a retry attempt should be made.

I was wondering why it wasn't chosen to have a set of fatalStatusCodes, to determine if a retry attempt should not be made ?

- Especially with respect to Postel's law.

thanks,

A.

Mark D. Roth

unread,
Mar 20, 2019, 10:06:07 AM3/20/19
to al...@uber.com, grpc.io
In general, unless an application is explicitly designed to allow an RPC to be retried, it's not safe to do so.  As a result, we wanted service owners to make an explicit choice about which ones they deem safe to retry, rather than accidentally configuring retries in a case where it's not safe.
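
The whitelist semantics being described can be sketched in a few lines. This is a toy illustration with status codes as plain strings, not the actual implementation:

```java
import java.util.Set;

// Illustrative only: a non-OK status is retried only if the service owner
// explicitly listed it in retryableStatusCodes; everything else is final.
public class RetryDecision {
    static boolean shouldRetry(String statusCode, Set<String> retryableStatusCodes) {
        if ("OK".equals(statusCode)) {
            return false; // success: nothing to retry
        }
        return retryableStatusCodes.contains(statusCode);
    }
}
```

The inverse design (a fatalStatusCodes blacklist) would retry any code the owner forgot to list, which is exactly the accidental-retry risk described above.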


asat...@gmail.com

unread,
Mar 20, 2019, 11:10:02 AM3/20/19
to grpc.io
On Saturday, February 11, 2017 at 3:31:01 AM UTC+3, ncte...@google.com wrote:

Michael Rose

unread,
Mar 20, 2019, 11:40:20 AM3/20/19
to Mark D. Roth, al...@uber.com, grpc.io
For some more color, we (internally) have made outages worse by retrying on status codes we shouldn't, sometimes through multiple layers of services, resulting in essentially DDoSing our own services. For instance, if you retry 3 times at each client and your service passes through N layers, then you have 3^N retries. A service I worked on ended up 4 layers deep with misconfigured retry behavior that resulted in 81 retries per top-level request. That was fun, attempting to slough off ~82x our normal traffic. :)

Also, as Mark said, it may not always be correct to retry: not all RPCs are idempotent, and some may have state implications, so this really should be a case-by-case (and code-by-code) decision. There's no sense in retrying something that isn't transient.
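
The amplification arithmetic above works out like this (a throwaway illustration, not gRPC code):

```java
// With r retries per client and N layers of services each retrying,
// one top-level request can fan out into r^N bottom-layer attempts.
public class RetryAmplification {
    static long bottomLayerAttempts(int retriesPerClient, int layers) {
        long attempts = 1;
        for (int i = 0; i < layers; i++) {
            attempts *= retriesPerClient;
        }
        return attempts;
    }

    public static void main(String[] args) {
        // 3 retries at each of 4 layers: 81 attempts per top-level request,
        // matching the anecdote above.
        System.out.println(bottomLayerAttempts(3, 4)); // 81
    }
}
```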


Alun Evans

unread,
Mar 20, 2019, 1:29:40 PM3/20/19
to Michael Rose, Mark D. Roth, grpc.io
Michael, Mark,

thanks for the feedback, sounds fair.

We've sometimes had the opposite experience where we have a long tail of
clients using older versions, which makes it hard to upgrade the server
side to emit a new error.

All praise the mono-repo I guess.


thanks,


A.
>>>> <https://github.com/grpc/proposal/pull/12>.

--
Alun Evans

Pau Freixes

unread,
Jun 26, 2019, 11:48:55 AM6/26/19
to grpc.io
Hi, 

Can I ask what's the status of the implementation? From what I can see in the code [1] this might be already implemented. I'm wondering if the retry feature is still experimental and if it implements everything that is stated in the gRFC [2] document.



On Saturday, February 11, 2017 at 1:31:01 AM UTC+1, ncte...@google.com wrote:
I've created a gRFC describing the design and implementation plan for gRPC Retries.

Take a look at the gRFC on GitHub.

Mark D. Roth

unread,
Jun 26, 2019, 12:18:59 PM6/26/19
to Pau Freixes, grpc.io
Unfortunately, the status of this hasn't changed recently.  We will definitely get back to this at some point, but we've got other higher priority work that will be occupying our attention for the next few quarters, so the soonest we might be able to get back to this would be toward the end of the year.

The current implementation in C-core is only partially complete.  The basic retry code is there, but there is still an outstanding design question of how we handle stats for retries, and there is not yet any support for transparent retries nor for hedging.  And even the basic retry code is extremely invasive and has not yet received any production testing, so there are probably numerous bugs waiting to be found.

I would not recommend using this code in its current state.


Pau Freixes

unread,
Jun 27, 2019, 9:36:19 AM6/27/19
to Mark D. Roth, grpc.io
Hi Mark,

Thanks for the update

> The current implementation in C-core is only partially complete. The basic retry code is there, but there is still an outstanding design question of how we handle stats for retries, and there is not yet any support for transparent retries nor for hedging. And even the basic retry code is extremely invasive and has not yet received any production testing, so there are probably numerous bugs waiting to be found.

What do you mean by "there is not yet any support for transparent
retries"? I thought that retries were done under the hood transparently
when some conditions are met - like using the response codes.

Regarding the stats thing: is it related to having the capacity to retry
when the number of "errors" is lower than a specific threshold?

Thanks for the advice on not using this in production. Does it mean that
the gRPC community still believes that the way to cover all of the needs
for calling external dependencies (retrying, circuit breakers, etc.) is
by implementing their own wrappers on top of the gRPC clients? What is
being done within Google right now?


Thanks!

--
--pau

Mark D. Roth

unread,
Jun 27, 2019, 10:06:00 AM6/27/19
to Pau Freixes, grpc.io
On Thu, Jun 27, 2019 at 6:36 AM Pau Freixes <pfre...@gmail.com> wrote:
Hi Mark,

Thanks for the update

> The current implementation in C-core is only partially complete.  The basic retry code is there, but there is still an outstanding design question of how we handle stats for retries, and there is not yet any support for transparent retries nor for hedging.  And even the basic retry code is extremely invasive and has not yet received any production testing, so there are probably numerous bugs waiting to be found.

What do you mean by "there is not yet any support for transparent
retries"? I thought that retries were done under the hood transparently
when some conditions are met - like using the response codes.

Transparent retries are the ones described in this section of the spec:


We have not yet implemented that functionality in C-core.
 

Regarding the stats thing: is it related to having the capacity to retry
when the number of "errors" is lower than a specific threshold?

No, it's a more basic problem than that.  Whenever there are multiple attempts on a given RPC, the additional attempts don't show up at all in stats recorded via systems like census.
 

Thanks for the advice on not using this in production. Does it mean that
the gRPC community still believes that the way to cover all of the needs
for calling external dependencies (retrying, circuit breakers, etc.) is
by implementing their own wrappers on top of the gRPC clients? What is
being done within Google right now?

At the moment, the best way to do this is probably to write an interceptor.
 


Thanks!

--
--pau

nathani...@gmail.com

unread,
Jul 24, 2020, 10:54:35 AM7/24/20
to grpc.io
Mark,

Are there any updates to this or does the latest post still stand?

Thanks,
Nathan

Mark D. Roth

unread,
Jul 24, 2020, 11:02:03 AM7/24/20
to nathani...@gmail.com, grpc.io
Unfortunately, nothing has changed here.  At this point, the soonest we could get back to this would probably be sometime in Q2 next year.


Guillermo Romero

unread,
Sep 30, 2020, 9:01:08 AM9/30/20
to grpc.io
Hi:
    I'm using JBoss Netty as a gRPC client, and my doubts are related to the Retry Policy. My understanding is that the Retry Policy is related to the internal message transport between the client and the server using the gRPC protocol.
 But my problem is related to TCP breaks; is there a way to write a TCP retry policy?

Mark D. Roth

unread,
Sep 30, 2020, 11:09:25 AM9/30/20
to Guillermo Romero, grpc.io
gRPC client channels will automatically reconnect to the server when the TCP connection fails.  That has nothing to do with the retry feature, and it's not something you need to configure -- it will happen automatically.

Now, if an individual request is already in-flight when the TCP connection fails, that will cause the request to fail.  And in that case, retrying the request would be what you want.


Guillermo Romero

unread,
Sep 30, 2020, 11:35:19 AM9/30/20
to grpc.io
Thanks Mark:

    So, at what level does this retry policy work?


final Map<String, Object> retryPolicy = new HashMap<>();
retryPolicy.put("maxAttempts", 10D);
retryPolicy.put("initialBackoff", "10s");
retryPolicy.put("maxBackoff", "30s");
retryPolicy.put("backoffMultiplier", 2D);
retryPolicy.put("retryableStatusCodes", Arrays.<Object>asList("UNAVAILABLE" , "RESOURCE_EXHAUSTED" , "INTERNAL"));
final Map<String, Object> methodConfig = new HashMap<>();
methodConfig.put("retryPolicy", retryPolicy);

final Map<String, Object> serviceConfig = new HashMap<>();
serviceConfig.put("methodConfig", Collections.<Object>singletonList(methodConfig));

I'm having a problem with the Netty client: it throws an exception when TCP breaks and does not retry N times (maxAttempts).


Mark D. Roth

unread,
Sep 30, 2020, 11:39:51 AM9/30/20
to Guillermo Romero, grpc.io, Penn (Dapeng) Zhang
As per discussion earlier in this thread, we haven't yet finished implementing the retry functionality, so it's not yet enabled by default.  I believe that in Java, you may be able to use it, albeit with some caveats.  Penn (CC'ed) can tell you what the current status is in Java.

Nathan Roberson

unread,
Sep 30, 2020, 12:12:29 PM9/30/20
to grpc.io
I would advocate for finishing this implementation and releasing for C++ as a high priority item. :)

Mark D. Roth

unread,
Sep 30, 2020, 12:37:20 PM9/30/20
to Nathan Roberson, grpc.io
It's definitely something that we want to finish.  I personally spent almost a year working on the C-core implementation, and it's mostly complete, but not quite enough to actually use yet -- there's still a bit of missing functionality to implement, and there are some design issues related to stats that we need to resolve.

Unfortunately, we've had other higher priority items come up that have required us to set this aside.  I hope to be able to get back to finishing this up in Q2 next year.

Eric Anderson

unread,
Sep 30, 2020, 12:37:24 PM9/30/20
to Mark D. Roth, Guillermo Romero, grpc.io, Penn (Dapeng) Zhang
You need to call `enableRetry()` on the channel builder. See the retry example and example config.

I think your methodConfig may not be selected because there is no 'name' list for methods to match. Now that we support wildcard service names, you could probably use methodConfig.put("name", Arrays.asList(Collections.emptyMap())).

I'll note that reconnect attempts are completely separate from RPC retries. gRPC always has reconnect behavior enabled.
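
Putting that suggestion together with the earlier snippet, the config would look something like the sketch below. Only the map construction is plain Java that runs anywhere; whether the empty-map wildcard actually matches depends on your grpc-java version:

```java
import java.util.*;

// Guillermo's retry config plus the missing "name" list Eric points out.
// Without a "name" entry the methodConfig matches no methods at all.
public class RetryServiceConfig {
    static Map<String, Object> build() {
        Map<String, Object> retryPolicy = new HashMap<>();
        retryPolicy.put("maxAttempts", 10D);
        retryPolicy.put("initialBackoff", "10s");
        retryPolicy.put("maxBackoff", "30s");
        retryPolicy.put("backoffMultiplier", 2D);
        retryPolicy.put("retryableStatusCodes",
                Arrays.<Object>asList("UNAVAILABLE", "RESOURCE_EXHAUSTED", "INTERNAL"));

        Map<String, Object> methodConfig = new HashMap<>();
        // An empty map in the "name" list acts as a wildcard (all services
        // and methods) in grpc-java versions that support wildcard names.
        methodConfig.put("name", Arrays.<Object>asList(Collections.emptyMap()));
        methodConfig.put("retryPolicy", retryPolicy);

        Map<String, Object> serviceConfig = new HashMap<>();
        serviceConfig.put("methodConfig", Collections.<Object>singletonList(methodConfig));
        return serviceConfig;
    }
}
```

You would then pass this via something like `ManagedChannelBuilder.defaultServiceConfig(serviceConfig)` together with `enableRetry()`, as in the retry example linked above.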

zda...@google.com

unread,
Sep 30, 2020, 1:20:58 PM9/30/20
to grpc.io
Agree with Eric. I'll also note that if the connection is broken in the middle of an RPC after the client has received partial data from the server (say, only the response headers), then although the channel will reconnect automatically, that individual RPC is not retried automatically by the library; see the definition of "committed" in the retry design for details.

Rishabh Mor

unread,
May 31, 2021, 1:34:24 AM5/31/21
to grpc.io
How can we know the status of retries in go-grpc? I see hedging is not implemented, but apart from that pretty much everything else is there. Are the go-grpc retries being used internally within Google?

Thanks,
Rishabh