Jedis memory problems


Igor Shaldev

Nov 10, 2014, 11:09:56 AM
to jedis...@googlegroups.com
Hello,

I'm working on a Java servlet that uses Redis as its data source. I initialize the pool in the servlet's init() and use try-with-resources to obtain the Jedis resource. I'm using nginx in front of 4 Tomcat processes to split the load. When traffic peaks, the Tomcat processes start consuming memory heavily until each one reaches its maximum RAM. All Tomcat processes connect to the same Redis instance (I tried a master-slave setup and the same thing happened). The memory is never released: if I disable the traffic, the Tomcat processes still hold on to it.
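
To illustrate, here is a simplified sketch of the setup (class name, host, port and key are placeholders, not my real code):

import java.io.IOException;

import javax.servlet.ServletException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

import redis.clients.jedis.Jedis;
import redis.clients.jedis.JedisPool;
import redis.clients.jedis.JedisPoolConfig;

// Simplified stand-in for the real servlet.
public class RequestServlet extends HttpServlet {

    private JedisPool pool;

    @Override
    public void init() throws ServletException {
        // The pool is created once when the servlet is initialized.
        pool = new JedisPool(new JedisPoolConfig(), "localhost", 6379);
    }

    @Override
    protected void doPost(HttpServletRequest req, HttpServletResponse resp)
            throws ServletException, IOException {
        // try-with-resources: close() returns the Jedis instance to the pool.
        try (Jedis jedis = pool.getResource()) {
            jedis.get("some-key"); // placeholder operation
        }
    }

    @Override
    public void destroy() {
        pool.destroy();
    }
}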

I tried changing the JedisPoolConfig settings, but it did not help:
JedisPoolConfig config = new JedisPoolConfig();
config.setMaxTotal(5000);
config.setMaxWaitMillis(200L);
config.setBlockWhenExhausted(false);


The only thing that happened is that it started generating lots of log entries that had not appeared before:

Nov 08, 2014 12:46:39 PM org.apache.catalina.core.StandardWrapperValve invoke
SEVERE: Servlet.service() for servlet [Request] in context with path [/foobared] threw exception
redis.clients.jedis.exceptions.JedisConnectionException: Could not get a resource from the pool
    at redis.clients.util.Pool.getResource(Pool.java:42)
    at redis.clients.jedis.JedisPool.getResource(JedisPool.java:84)
    at org.foo.bar.Request.doPost(Request.java:99)
    at javax.servlet.http.HttpServlet.service(HttpServlet.java:646)
    at javax.servlet.http.HttpServlet.service(HttpServlet.java:727)
    at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:303)
    at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:208)
    at org.apache.tomcat.websocket.server.WsFilter.doFilter(WsFilter.java:52)
    at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:241)
    at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:208)
    at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:220)
    at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:122)
    at org.apache.catalina.authenticator.AuthenticatorBase.invoke(AuthenticatorBase.java:503)
    at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:170)
    at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:103)
    at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:116)
    at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:421)
    at org.apache.coyote.http11.AbstractHttp11Processor.process(AbstractHttp11Processor.java:1070)
    at org.apache.coyote.AbstractProtocol$AbstractConnectionHandler.process(AbstractProtocol.java:611)
    at org.apache.tomcat.util.net.NioEndpoint$SocketProcessor.doRun(NioEndpoint.java:1736)
    at org.apache.tomcat.util.net.NioEndpoint$SocketProcessor.run(NioEndpoint.java:1695)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    at org.apache.tomcat.util.threads.TaskThread$WrappingRunnable.run(TaskThread.java:61)
    at java.lang.Thread.run(Thread.java:745)
Caused by: redis.clients.jedis.exceptions.JedisConnectionException: java.net.SocketTimeoutException: connect timed out
    at redis.clients.jedis.Connection.connect(Connection.java:150)
    at redis.clients.jedis.BinaryClient.connect(BinaryClient.java:71)
    at redis.clients.jedis.BinaryJedis.connect(BinaryJedis.java:1783)
    at redis.clients.jedis.JedisFactory.makeObject(JedisFactory.java:65)
    at org.apache.commons.pool2.impl.GenericObjectPool.create(GenericObjectPool.java:819)
    at org.apache.commons.pool2.impl.GenericObjectPool.borrowObject(GenericObjectPool.java:452)
    at org.apache.commons.pool2.impl.GenericObjectPool.borrowObject(GenericObjectPool.java:360)
    at redis.clients.util.Pool.getResource(Pool.java:40)
    ... 26 more
Caused by: java.net.SocketTimeoutException: connect timed out
    at java.net.PlainSocketImpl.socketConnect(Native Method)
    at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:339)
    at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:200)
    at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:182)
    at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392)
    at java.net.Socket.connect(Socket.java:579)
    at redis.clients.jedis.Connection.connect(Connection.java:144)
    ... 33 more

It has probably reached the Redis connection limit and cannot connect anymore. I tried raising maxclients to 100000 and also the nofile limit for the redis user in /etc/security/limits.conf, but it did not help.
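
To verify whether the limit is actually being hit, I suppose one can compare connected_clients with maxclients from the application side, roughly like this (a simplified sketch assuming the Jedis 2.x info(String) and configGet(String) methods; host and port are placeholders, and the output is just printed, not parsed):

import redis.clients.jedis.Jedis;

// Rough check of how close the server is to its connection limit.
public class ClientLimitCheck {
    public static void main(String[] args) {
        try (Jedis jedis = new Jedis("localhost", 6379)) {
            // INFO clients includes the connected_clients counter.
            System.out.println(jedis.info("clients"));
            // CONFIG GET maxclients returns ["maxclients", "<value>"].
            System.out.println(jedis.configGet("maxclients"));
        }
    }
}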

I tried Jedis 2.6.0 and 2.5.1; the same thing happened with both.

Can someone help me solve this problem?

Thanks

Best,
Igor

Robert DiFalco

Nov 10, 2014, 12:05:51 PM
to jedis...@googlegroups.com
There are a lot of strange things here. Why are you creating more than 5000 Redis clients? Each of those has an 8192-byte buffer for the reader and another 8192 bytes for the writer. I've never created a pool with more than 12 clients. Also, why aren't you blocking when the pool is exhausted?

Just from your information I would say that you are not properly returning Jedis objects to the pool after you are done with them. I would trace through your code to make sure you return objects to the pool, re-enable blocking when exhausted, and crank your max pooled objects WAY down.
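
Something along these lines (just a sketch; the numbers and host are placeholders, and maxTotal should roughly match the number of worker threads that actually hit Redis):

JedisPoolConfig config = new JedisPoolConfig();
config.setMaxTotal(16);              // roughly the number of request threads using Redis
config.setBlockWhenExhausted(true);  // wait for a free connection instead of failing
config.setMaxWaitMillis(2000);       // give up after 2s so a stuck pool surfaces as an error
JedisPool pool = new JedisPool(config, "localhost", 6379);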

Igor Shaldev

Nov 10, 2014, 12:25:16 PM
to jedis...@googlegroups.com
Hi Robert,

Thanks for your reply.

I've tried with fewer clients (as low as 8 and as high as 128), but the same thing happens. I will remove the blockWhenExhausted option. As for returning objects to the pool properly, I use try-with-resources. Here is a small sample:

try (Jedis jedis = pool.getResource()) {
    Pipeline pipe = jedis.pipelined();
    // do some operations
    pipe.sync();
}



I think this is sufficient to return the object to the pool; if I am mistaken, please correct me.

Also, my application never uses more than 70 clients, so a maxTotal of 5000 was indeed a misconfiguration.

Thanks again for replying.

Robert DiFalco

Nov 10, 2014, 12:36:50 PM
to jedis...@googlegroups.com
If you have the same problem with setMaxTotal(8) and setBlockWhenExhausted(true), then Jedis is probably not the cause of your memory problems. Can you try again with these settings and tell me what the result is? Really, there is no reason for maxTotal to be greater than the number of threads your web server uses to service client requests.

Igor Shaldev

Nov 10, 2014, 12:53:17 PM
to jedis...@googlegroups.com
At the moment it is running this way: the JedisPoolConfig is left unchanged, so maxTotal is 8 and blockWhenExhausted is true. But when traffic reaches high volume, this still happens.

I forgot to mention why I tried changing blockWhenExhausted. I attached a debugging tool to my app, and it showed that all 6000 threads were busy, some for more than 30 seconds, and they were all waiting in pool.getResource(). So I started changing the settings to allow more requests to be processed. But if Redis is not the problem, then I have probably reached the limit of my app under this traffic.
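
For reference, a similar check can be done from inside the JVM without an external tool, something like this rough sketch (it uses the standard ThreadMXBean API and counts threads whose stack is currently inside redis.clients.util.Pool.getResource(), the class shown in the stack trace above):

import java.lang.management.ManagementFactory;
import java.lang.management.ThreadInfo;
import java.lang.management.ThreadMXBean;

// Counts live threads that are currently inside Pool.getResource().
public class PoolWaitCheck {

    public static int countThreadsWaitingOnPool() {
        ThreadMXBean threads = ManagementFactory.getThreadMXBean();
        int waiting = 0;
        for (ThreadInfo info : threads.dumpAllThreads(false, false)) {
            for (StackTraceElement frame : info.getStackTrace()) {
                if ("redis.clients.util.Pool".equals(frame.getClassName())
                        && "getResource".equals(frame.getMethodName())) {
                    waiting++;
                    break;
                }
            }
        }
        return waiting;
    }
}

It only sees threads of the JVM it runs in, so it has to be called from inside the webapp (for example from a diagnostic servlet).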

Robert DiFalco

Nov 10, 2014, 1:09:24 PM
to jedis...@googlegroups.com
How are you getting to 6000 threads? Don't you have an upper bound around 20 or so? In any event, it's doubtful that Jedis is the cause of your memory problems. Even just the stack space for 6000 threads could cause a memory issue. Your requests are probably just not completing fast enough for your load.

You should probably measure the throughput of your server. You should also look at using NIO in your server to keep thread usage down.

Robert DiFalco

Nov 10, 2014, 1:10:37 PM
to jedis...@googlegroups.com
Oh, and you should attach a profiler to your server so you can find out where your memory problem really is. But if your maximum number of Jedis objects at any one time is 8, then it is unlikely that Jedis is the issue.