pablochacin
Feb 18, 2013, 1:52:17 PM
to jedis...@googlegroups.com
Hi all
I've been experiencing a problem where my application hangs after two or three days of running.
I've taken a core dump of Tomcat (6.0.36) and, after analyzing it, I found some inconsistencies. Of the 200 threads defined in Tomcat, I found that:
- 93 were waiting in the borrowObject method of Apache Commons' GenericObjectPool, called from the redis.clients.util.Pool.getResource() method.
- 52 were waiting in java.net.SocketInputStream.read, called from the redis.clients.util.RedisInputStream.fill() method.
The rest of the threads were idle, waiting for work.
Now, I have the pool configured with 500 connections and Redis has a connection limit (maxclients) of 10000, so how is it possible that 93 threads were waiting for a connection?
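For what it's worth, the blocking I'm seeing is consistent with the pool's own cap: borrowObject blocks on the client-side limit regardless of the server's maxclients. A minimal, self-contained sketch of that behavior (plain java.util.concurrent; the names here are illustrative, not the actual commons-pool code):

```java
import java.util.concurrent.Semaphore;
import java.util.concurrent.TimeUnit;

// Toy model of a client-side pool cap: borrowers block on the pool's own
// limit even when the server (maxclients=10000) would accept more connections.
public class PoolCapDemo {
    public static void main(String[] args) throws Exception {
        int maxActive = 2;                 // stands in for the pool's 500
        Semaphore permits = new Semaphore(maxActive);

        permits.acquire();                 // first connection borrowed
        permits.acquire();                 // second connection borrowed

        // A third borrower times out waiting on the client-side cap.
        boolean got = permits.tryAcquire(100, TimeUnit.MILLISECONDS);
        System.out.println("third borrow succeeded: " + got);  // false

        permits.release();                 // one connection returned to the pool
        got = permits.tryAcquire(100, TimeUnit.MILLISECONDS);
        System.out.println("borrow after return: " + got);     // true
    }
}
```

So if the active count ever reaches the cap and stays there, every new borrower waits, exactly as the 93 threads do.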
The answer came after analyzing the heap dump. The GenericObjectPool shows 500 active connections (the _numActive field), but there are only 38 instances of the Jedis client class. That is, somehow the pool has a wrong counter for active connections. Moreover, if there are only 38 connections, how can 52 threads be waiting on a Jedis connection socket?
From this analysis, it seems to me that the JedisPool (or, more likely, the Apache pool) is leaking connections.
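To illustrate what I suspect is happening: if a borrower's error path skips the return, the pool's active counter keeps counting a connection that no longer exists, which would explain 500 "active" entries but only 38 live Jedis instances. A toy model of that drift (illustrative only, not the real commons-pool internals):

```java
import java.util.concurrent.atomic.AtomicInteger;

// Toy model of how a pool's active counter can drift from the number of
// live objects when a caller discards a borrowed object without returning it.
public class LeakCounterDemo {
    static final AtomicInteger numActive = new AtomicInteger(); // like _numActive

    static Object borrow() {
        numActive.incrementAndGet();
        return new Object();               // stands in for a Jedis connection
    }

    static void giveBack(Object conn) {
        numActive.decrementAndGet();
    }

    public static void main(String[] args) {
        Object ok = borrow();
        giveBack(ok);                      // well-behaved caller: counter balanced

        try {
            Object leaked = borrow();
            throw new RuntimeException("redis timeout"); // error path skips the return
        } catch (RuntimeException ignored) {
            // the connection object is now unreachable and will be collected,
            // but the counter still records it as active
        }

        System.out.println("numActive = " + numActive.get()); // 1, with 0 live connections
    }
}
```

Repeat that error path a few hundred times over two or three days and the counter pins at maxActive while the real connections are long gone.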
Interestingly, I've found some old reports of similar problems with Apache's JDBC connection pool, which is based on this same object pool.
Is this a known issue? If so, what would you suggest I do, given that I can't change the Jedis implementation?
Regards
Pablo