Hello all -
During a planned or actual failover in ElastiCache Redis, we're seeing the following behavior.
Jedis switches to the new primary node, but performance degrades badly: it creates thousands of new connections and emits large numbers of errors, e.g.
- redis.clients.jedis.exceptions.JedisClusterOperationException: Cluster retry deadline exceeded.
- redis.clients.jedis.exceptions.JedisConnectionException: java.net.SocketTimeoutException: Read timed out
- redis.clients.jedis.JedisFactory: Error while close
Even after the original primary node recovers, Jedis keeps creating thousands of connections to the failover secondary and never regains its original performance.
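For context, the errors above map onto specific JedisCluster settings: the socket timeout produces "Read timed out", the per-command retry budget produces "Cluster retry deadline exceeded", and the pool limits bound how many connections can be opened per node. A minimal sketch of such a setup (Jedis 4.x API; the endpoint and all values here are illustrative assumptions, not our exact configuration):

```java
import java.time.Duration;
import java.util.Set;

import redis.clients.jedis.ConnectionPoolConfig;
import redis.clients.jedis.DefaultJedisClientConfig;
import redis.clients.jedis.HostAndPort;
import redis.clients.jedis.JedisCluster;

public class ClusterClientSketch {
    public static void main(String[] args) {
        // Cap the pool so a failover cannot spawn unbounded connections per node.
        ConnectionPoolConfig poolConfig = new ConnectionPoolConfig();
        poolConfig.setMaxTotal(32);
        poolConfig.setMaxIdle(16);
        poolConfig.setMinIdle(4);

        // socketTimeoutMillis governs the "Read timed out" errors; the total
        // retry duration governs "Cluster retry deadline exceeded".
        JedisCluster cluster = new JedisCluster(
                Set.of(new HostAndPort("example-cluster-endpoint", 6379)), // hypothetical endpoint
                DefaultJedisClientConfig.builder()
                        .connectionTimeoutMillis(2000)
                        .socketTimeoutMillis(2000)
                        .build(),
                5,                      // maxAttempts per command
                Duration.ofSeconds(10), // total retry deadline across attempts
                poolConfig);

        cluster.set("key", "value");
        cluster.close();
    }
}
```

If others have hit this, it would help to know which of these knobs (pool caps, socket timeout, retry deadline) you tuned to recover cleanly after failover.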