On 6 May 2016, at 18:25, Eric Anderson <ej...@google.com> wrote:

On Fri, May 6, 2016 at 1:49 AM, Erik Gorset <erik....@cxense.com> wrote:
> It looks like grpc-java is calling netty's incrementAndGetNextStreamId [0], which returns an int. Does this really mean that gRPC only supports 2^31 requests per channel?

As Josh said, the limit is per transport, not per channel. There is already code intended to swap to a new transport, but maybe it is buggy/suboptimal.

> I'm happy to create a GitHub issue if this can be seen as a bug and not a known limitation.

Please make an issue. This is a bug.

> UNAVAILABLE: Stream IDs have been exhausted

That status appears to be coming from here. The behavior then seems to be that that particular RPC will fail, but future RPCs should start going to a new transport. That alone is suboptimal but not too bad; a transient failure of 1 out of 2^30 RPCs should be recoverable by applications, otherwise they are probably going to have a bad time from other failures. However, it won't necessarily be only one RPC that fails, since it will take a small amount of time to divert traffic to a new transport, and all RPCs during that window would fail. It would be good to address that.

However, I think the larger problem is that calling close doesn't trigger things quickly enough, especially if you have long-lived streams, since it delays until all the RPCs on that transport are complete. There is no upper bound on how long a stream could live, so a Channel could be broken for quite some time.

> The background for my question is that we had an outage caused by the limitation.

If your RPCs are short-lived and my analysis is correct, I wouldn't expect an outage but rather a temporary failure. Is the lifetime of some of your RPCs long? If so, then I think that would help confirm my theory.
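For context on the numbers above: HTTP/2 stream identifiers are 31-bit values and client-initiated streams use only the odd ones, so a single connection (one transport) can carry roughly 2^30 client-initiated streams before the IDs run out, and each unary RPC consumes one stream. Until the transport swap is fully seamless, the "recoverable by applications" part of the analysis can be made concrete with a small retry wrapper like the sketch below. This is only an illustration under that reading of the thread, not grpc-java's own recovery logic; the class name, callWithRetry, and the attempt/back-off values are invented for the example.

import io.grpc.Status;
import io.grpc.StatusRuntimeException;
import java.util.function.Supplier;

public final class UnavailableRetry {

  // Hypothetical helper: retry a unary call a few times when it fails with
  // UNAVAILABLE, e.g. while the channel replaces a transport whose stream IDs
  // have been exhausted.
  public static <T> T callWithRetry(Supplier<T> call, int maxAttempts) {
    if (maxAttempts < 1) {
      throw new IllegalArgumentException("maxAttempts must be >= 1");
    }
    StatusRuntimeException last = null;
    for (int attempt = 1; attempt <= maxAttempts; attempt++) {
      try {
        return call.get();
      } catch (StatusRuntimeException e) {
        // Only UNAVAILABLE is treated as transient here; anything else is rethrown.
        if (e.getStatus().getCode() != Status.Code.UNAVAILABLE) {
          throw e;
        }
        last = e;
        try {
          // Small back-off while traffic is diverted to a new transport.
          Thread.sleep(50L * attempt);
        } catch (InterruptedException ie) {
          Thread.currentThread().interrupt();
          throw e;
        }
      }
    }
    throw last;
  }

  private UnavailableRetry() {}
}

A blocking stub call could then be wrapped as UnavailableRetry.callWithRetry(() -> stub.someMethod(request), 3). Long-lived streaming RPCs would need a different strategy, since a stream pins its transport for its whole lifetime, which is exactly the "no upper bound" problem described above.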
2017-07-27 08:30:19,794 [DEBUG] [r-ELG-49-2] verification of certificate failed
java.lang.RuntimeException: Unexpected error: java.security.InvalidAlgorithmParameterException: the trustAnchors parameter must be non-empty
at sun.security.validator.PKIXValidator.<init>(Unknown Source)
at sun.security.validator.Validator.getInstance(Unknown Source)
at sun.security.ssl.X509TrustManagerImpl.getValidator(Unknown Source)
at sun.security.ssl.X509TrustManagerImpl.checkTrustedInit(Unknown Source)
at sun.security.ssl.X509TrustManagerImpl.checkTrusted(Unknown Source)
at sun.security.ssl.X509TrustManagerImpl.checkServerTrusted(Unknown Source)
at io.netty.handler.ssl.ReferenceCountedOpenSslClientContext$ExtendedTrustManagerVerifyCallback.verify(ReferenceCountedOpenSslClientContext.java:223)
at io.netty.handler.ssl.ReferenceCountedOpenSslContext$AbstractCertificateVerifier.verify(ReferenceCountedOpenSslContext.java:606)
at org.apache.tomcat.jni.SSL.readFromSSL(Native Method)
at io.netty.handler.ssl.ReferenceCountedOpenSslEngine.readPlaintextData(ReferenceCountedOpenSslEngine.java:470)
at io.netty.handler.ssl.ReferenceCountedOpenSslEngine.unwrap(ReferenceCountedOpenSslEngine.java:927)
at io.netty.handler.ssl.ReferenceCountedOpenSslEngine.unwrap(ReferenceCountedOpenSslEngine.java:1033)
at io.netty.handler.ssl.SslHandler$SslEngineType$1.unwrap(SslHandler.java:200)
at io.netty.handler.ssl.SslHandler.unwrap(SslHandler.java:1117)
at io.netty.handler.ssl.SslHandler.decode(SslHandler.java:1039)
at io.netty.handler.codec.ByteToMessageDecoder.callDecode(ByteToMessageDecoder.java:411)
at io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:248)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:363)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:349)
at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:341)
at io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1334)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:363)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:349)
at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:926)
at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:129)
at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:642)
at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:565)
at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:479)
at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:441)
at io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858)
at io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:144)
at java.lang.Thread.run(Unknown Source)
Caused by: java.security.InvalidAlgorithmParameterException: the trustAnchors parameter must be non-empty
at java.security.cert.PKIXParameters.setTrustAnchors(Unknown Source)
at java.security.cert.PKIXParameters.<init>(Unknown Source)
at java.security.cert.PKIXBuilderParameters.<init>(Unknown Source)
... 32 more
java.util.concurrent.ExecutionException: io.grpc.StatusRuntimeException: UNAVAILABLE: Channel closed while performing protocol negotiation
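For what it's worth, "the trustAnchors parameter must be non-empty" is the JDK's way of saying that the PKIX trust manager was initialized with zero trusted CA certificates (typically an empty, missing, or unreadable truststore), and the failed verification is what then surfaces as "Channel closed while performing protocol negotiation" on the client. Assuming that is the root cause here, one way to rule it out is to hand the Netty-based client an explicit CA certificate instead of relying on the JVM default truststore. The sketch below is illustrative only; the class name, host/port parameters, and the ca.pem file are not from the original report.

import io.grpc.ManagedChannel;
import io.grpc.netty.GrpcSslContexts;
import io.grpc.netty.NettyChannelBuilder;
import io.netty.handler.ssl.SslContext;
import java.io.File;
import javax.net.ssl.SSLException;

public final class TlsClientChannel {

  // Builds a channel whose TLS trust is anchored on an explicit CA certificate
  // (PEM), so server certificate verification does not depend on the default
  // JVM truststore being present and non-empty.
  public static ManagedChannel create(String host, int port, File caCertPem)
      throws SSLException {
    SslContext sslContext = GrpcSslContexts.forClient()
        .trustManager(caCertPem)   // e.g. new File("ca.pem")
        .build();
    return NettyChannelBuilder.forAddress(host, port)
        .sslContext(sslContext)
        .build();
  }

  private TlsClientChannel() {}
}

If the default truststore is the intended source of trust anchors instead, checking that -Djavax.net.ssl.trustStore (when set) points at an existing, non-empty keystore with the correct password addresses the same failure.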