RedisCommandExecutionException: ERR max number of clients reached


abhay....@gmail.com

Dec 10, 2018, 2:51:36 AM
to lettuce-redis-client-users


How can I get these exceptions to surface at the application level, so that the current connections can be closed and new ones opened?

2018-12-09 23:05:59,527 [pool-2-thread-1] WARN  c.l.r.protocol.ReconnectionHandler - Reconnection attempt without a RedisChannelInitializer in the channel pipeline
2018-12-09 23:05:59,528 [pool-2-thread-1] WARN  c.l.r.protocol.ReconnectionHandler - Reconnection attempt without a RedisChannelInitializer in the channel pipeline
2018-12-09 23:05:59,527 [lettuce-eventExecutorLoop-10-1] ERROR c.l.r.protocol.ReconnectionHandler - Cannot initialize channel.
java.util.concurrent.ExecutionException: com.lambdaworks.redis.RedisCommandExecutionException: ERR max number of clients reached
        at java.util.concurrent.CompletableFuture.reportGet(CompletableFuture.java:357)
        at java.util.concurrent.CompletableFuture.get(CompletableFuture.java:1915)
        at com.lambdaworks.redis.protocol.ReconnectionHandler.reconnect(ReconnectionHandler.java:87)
        at com.lambdaworks.redis.protocol.ConnectionWatchdog.reconnect(ConnectionWatchdog.java:236)
        at com.lambdaworks.redis.protocol.ConnectionWatchdog.run(ConnectionWatchdog.java:221)
        at com.lambdaworks.redis.protocol.ConnectionWatchdog$2.lambda$run$323(ConnectionWatchdog.java:181)
        at io.netty.util.concurrent.PromiseTask.run(PromiseTask.java:73)
        at io.netty.util.concurrent.DefaultEventExecutor.run(DefaultEventExecutor.java:36)
        at io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:112)
        at io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:137)
        at java.lang.Thread.run(Thread.java:748)
Caused by: com.lambdaworks.redis.RedisCommandExecutionException: ERR max number of clients reached
        at com.lambdaworks.redis.protocol.AsyncCommand.completeResult(AsyncCommand.java:85)
        at com.lambdaworks.redis.protocol.AsyncCommand.complete(AsyncCommand.java:75)
        at com.lambdaworks.redis.pubsub.PubSubCommandHandler.decode(PubSubCommandHandler.java:53)
        at com.lambdaworks.redis.protocol.CommandHandler.channelRead(CommandHandler.java:146)
        at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:292)
        at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:278)
        at io.netty.channel.ChannelInboundHandlerAdapter.channelRead(ChannelInboundHandlerAdapter.java:86)
        at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:292)
        at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:278)
        at io.netty.channel.ChannelInboundHandlerAdapter.channelRead(ChannelInboundHandlerAdapter.java:86)
        at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:292)
        at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:278)
        at io.netty.channel.ChannelInboundHandlerAdapter.channelRead(ChannelInboundHandlerAdapter.java:86)
        at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:292)
        at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:278)
        at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:962)
        at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:131)
        at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:528)
        at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:485)
        at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:399)
        at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:371)
        ... 3 common frames omitted

mpa...@pivotal.io

Dec 10, 2018, 4:47:33 AM
to lettuce-redis-client-users
Reconnect exceptions are typically handled in ConnectionWatchdog and not propagated to commands. Lettuce 5.3 will emit reconnect failures as an event, see wiki [0] for further details.
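
For Lettuce 5.3 and later, a minimal sketch of reacting to those reconnect failures at the application level via the client event bus. The ReconnectFailedEvent type name is assumed from the 5.x io.lettuce.core event API described in the wiki; verify it against the version actually in use.

import io.lettuce.core.RedisClient;
import io.lettuce.core.event.connection.ReconnectFailedEvent;

public class ReconnectFailureListener {

    public static void main(String[] args) {
        RedisClient client = RedisClient.create("redis://localhost:6379");

        // EventBus.get() exposes client events as a Reactor Flux in Lettuce 5.x.
        client.getResources().eventBus().get()
                .filter(ReconnectFailedEvent.class::isInstance)
                .cast(ReconnectFailedEvent.class)
                .subscribe(event -> {
                    // Application-level reaction, e.g. close the affected
                    // connection and open a fresh one.
                    System.err.println("Reconnect failed: " + event);
                });

        // ... connect and use the client as usual ...
    }
}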

abhay....@gmail.com

Jan 21, 2019, 12:53:31 AM
to lettuce-redis-client-users
How can such issues be avoided? I have a thread that tries to connect to the Redis server, and it dies after 3 attempts.
The original exercise was to close the existing connections to Redis, as the Redis server was restarted with the password option.
The client thread tries to reconnect 3 times and then just dies.

After a while, the exceptions below are seen and the process eventually runs out of memory.
The existing client options are:

ClientOptions.builder()
        .disconnectedBehavior(DisconnectedBehavior.REJECT_COMMANDS)
        .requestQueueSize(500)
        .pingBeforeActivateConnection(true)
        .build();
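
For context, a minimal sketch of how these options are typically applied to the client (Lettuce 4.x API as used in this thread; host and port are placeholders):

import com.lambdaworks.redis.ClientOptions;
import com.lambdaworks.redis.ClientOptions.DisconnectedBehavior;
import com.lambdaworks.redis.RedisClient;
import com.lambdaworks.redis.RedisURI;

public class OptionsSetup {

    public static void main(String[] args) {
        RedisClient redisClient = RedisClient.create(RedisURI.create("localhost", 6379));

        // REJECT_COMMANDS fails commands immediately while disconnected instead of
        // buffering them; the request queue is capped at 500 entries.
        redisClient.setOptions(ClientOptions.builder()
                .disconnectedBehavior(DisconnectedBehavior.REJECT_COMMANDS)
                .requestQueueSize(500)
                .pingBeforeActivateConnection(true)
                .build());
    }
}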


Any help will be greatly appreciated.

Regards,
Abhay

Mark Paluch

Jan 21, 2019, 2:41:23 AM
to lettuce-redis-client-users
You cannot really fix the issue as limiting the number of clients is a Redis server configuration. 

The only thing that you can do on the client side is to use a single connection for your application if that works. You can share a single connection across multiple threads if you don't use transactions and if you do not use blocking commands such as BLPOP.
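
For illustration, a minimal sketch of that single-connection pattern with the 4.x API used in this thread (host, port, and key names are placeholders):

import com.lambdaworks.redis.RedisClient;
import com.lambdaworks.redis.RedisURI;
import com.lambdaworks.redis.api.StatefulRedisConnection;
import com.lambdaworks.redis.api.sync.RedisCommands;

public class SharedConnectionSketch {

    public static void main(String[] args) throws InterruptedException {
        RedisClient client = RedisClient.create(RedisURI.create("localhost", 6379));

        // One connection, created once and shared by all worker threads.
        StatefulRedisConnection<String, String> connection = client.connect();

        Runnable worker = () -> {
            // sync() hands out a command interface backed by the shared connection.
            RedisCommands<String, String> commands = connection.sync();
            commands.set("greeting", "hello");
            commands.get("greeting");
        };

        Thread t1 = new Thread(worker);
        Thread t2 = new Thread(worker);
        t1.start();
        t2.start();
        t1.join();
        t2.join();

        connection.close();
        client.shutdown();
    }
}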

Cheers, 
Mark

abhay....@gmail.com

Jan 21, 2019, 4:51:01 AM
to lettuce-redis-client-users
Thanks for your input.
However, on the client side, connection creation is limited to 3 attempts. Code excerpt below:

redisClient.connectPubSub(RedisURI.create(host, port));
connectionPublish = redisClient.connectPubSub(RedisURI.create(host, port));
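
If each reconnect attempt creates a new pub/sub connection like the excerpt above, the previous connection should be closed first so that stale connections (each carrying its own ConnectionWatchdog) do not pile up. A minimal sketch, assuming the Lettuce 4.x API; the surrounding class and field names are illustrative:

import com.lambdaworks.redis.RedisClient;
import com.lambdaworks.redis.RedisURI;
import com.lambdaworks.redis.pubsub.StatefulRedisPubSubConnection;

public class PubSubReconnector {

    private final RedisClient redisClient;
    private final String host;
    private final int port;

    // Kept as a field so the old connection can be closed before a new one is opened.
    private StatefulRedisPubSubConnection<String, String> connectionPublish;

    public PubSubReconnector(RedisClient redisClient, String host, int port) {
        this.redisClient = redisClient;
        this.host = host;
        this.port = port;
    }

    public synchronized void reconnectPublish() {
        if (connectionPublish != null) {
            connectionPublish.close();   // releases the old channel and its watchdog
        }
        connectionPublish = redisClient.connectPubSub(RedisURI.create(host, port));
    }
}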


The number of client TCP connections went up to 10,000.
From the heap dump, the number of ConnectionWatchdog instances shoots up to 1 million!
I am not able to attach an image here, but that is the case.

(attachment: heap.jpg)



Regards,

Abhay

abhay....@gmail.com

Jan 21, 2019, 4:54:43 AM
to lettuce-redis-client-users
Also, the versions used are Lettuce 4.2.2 with Netty 4.1.32.

Thanks,
Abhay

abhay....@gmail.com

Jan 22, 2019, 7:30:27 AM
to lettuce-redis-client-users
I would like to add that, along with this, another exception is seen, like the one below:

diameter-endpoint-s105.weave.local  2019-01-22 12:04:12,525 [lettuce-eventExecutorLoop-10-11] ERROR c.l.r.protocol.ReconnectionHandler - Cannot initialize channel.

java.util.concurrent.ExecutionException: java.lang.NullPointerException
        at java.util.concurrent.CompletableFuture.reportGet(CompletableFuture.java:357)
        at java.util.concurrent.CompletableFuture.get(CompletableFuture.java:1915)
        at com.lambdaworks.redis.protocol.ReconnectionHandler.reconnect(ReconnectionHandler.java:87)
        at com.lambdaworks.redis.protocol.ConnectionWatchdog.reconnect(ConnectionWatchdog.java:258)
        at com.lambdaworks.redis.protocol.ConnectionWatchdog.run(ConnectionWatchdog.java:243)
        at com.lambdaworks.redis.protocol.ConnectionWatchdog$2.lambda$run$393(ConnectionWatchdog.java:198)
        at io.netty.util.concurrent.PromiseTask.run(PromiseTask.java:73)
        at io.netty.util.concurrent.DefaultEventExecutor.run(DefaultEventExecutor.java:66)
        at io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:909)
        at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
        at java.lang.Thread.run(Thread.java:748)
Caused by: java.lang.NullPointerException: null
        at com.lambdaworks.redis.protocol.RedisStateMachine.decode(RedisStateMachine.java:122)
        at com.lambdaworks.redis.protocol.RedisStateMachine.decode(RedisStateMachine.java:90)
        at com.lambdaworks.redis.pubsub.PubSubCommandHandler.decode(PubSubCommandHandler.java:50)
        at com.lambdaworks.redis.protocol.CommandHandler.channelRead(CommandHandler.java:185)
        at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
        at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
        at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340)
        at io.netty.channel.ChannelInboundHandlerAdapter.channelRead(ChannelInboundHandlerAdapter.java:86)
        at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
        at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
        at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340)
        at io.netty.channel.ChannelInboundHandlerAdapter.channelRead(ChannelInboundHandlerAdapter.java:86)
        at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
        at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
        at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340)
        at io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1434)
        at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
        at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
        at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:965)
        at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:163)
        at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:656)
        at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:591)
        at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:508)
        at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:470)
        ... 3 common frames omitted

Is this a side effect of the max number of clients being reached, or some other issue?


Regards,

Abhay

Mark Paluch

Jan 22, 2019, 7:50:01 AM
to lettuce-redis-client-users
That's a different one, related to https://github.com/lettuce-io/lettuce-core/issues/576.

abhay....@gmail.com

Jan 22, 2019, 7:58:08 AM
to lettuce-redis-client-users
Thanks - for the NPE, I will upgrade the Lettuce version!
As an aside, for the ERR max clients issue, the code that is in place is something like this:

 DefaultClientResources.builder().reconnectDelay(Delay.constant(1000, TimeUnit.MILLISECONDS)).build();
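
For reference, a sketch of how that reconnect delay is wired into the client via ClientResources, with an exponential back-off shown as an alternative (Lettuce 4.x API; values and names are illustrative):

import com.lambdaworks.redis.RedisClient;
import com.lambdaworks.redis.RedisURI;
import com.lambdaworks.redis.resource.ClientResources;
import com.lambdaworks.redis.resource.DefaultClientResources;
import com.lambdaworks.redis.resource.Delay;

public class ResourcesSetup {

    public static void main(String[] args) {
        // Delay.exponential() backs off further with each failed attempt instead of
        // retrying every second; the delay applies per connection, so many open
        // connections can still log failures within the same second.
        ClientResources resources = DefaultClientResources.builder()
                .reconnectDelay(Delay.exponential())
                .build();

        // The resources only take effect when passed to the client.
        RedisClient client = RedisClient.create(resources, RedisURI.create("localhost", 6379));
    }
}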


However, I observe the same exception (ERR max clients) far more often. If I am guessing correctly, with the reconnect delay configured above the exception should appear only once every second?

Should I explain more?

Regards,

Abhay

abhay....@gmail.com

Jan 22, 2019, 8:07:53 AM
to lettuce-redis-client-users

Here is what I mean. Look at the timestamps of the log statements below:

2019-01-22 12:04:14,694 [lettuce-eventExecutorLoop-10-10] ERROR c.l.r.protocol.ReconnectionHandler - Cannot initialize channel.
java.util.concurrent.ExecutionException: com.lambdaworks.redis.RedisCommandExecutionException: ERR max number of clients reached

2019-01-22 12:43:18,938 [lettuce-eventExecutorLoop-10-3] ERROR c.l.r.protocol.ReconnectionHandler - Cannot initialize channel.
java.util.concurrent.ExecutionException: com.lambdaworks.redis.RedisCommandExecutionException: ERR max number of clients reached

2019-01-22 12:43:18,938 [lettuce-eventExecutorLoop-10-4] ERROR c.l.r.protocol.ReconnectionHandler - Cannot initialize channel.
java.util.concurrent.ExecutionException: com.lambdaworks.redis.RedisCommandExecutionException: ERR max number of clients reached

It is continuous for more than 4 seconds.
Does the reconnect delay not play any role here? If not, is there a way to better handle the reconnect delay?

Regards,
Abhay