|Troubleshooting long Redis call (zrange)||Alvin Tran||8/15/12 9:17 PM|
I'm making a web service (in Python, using CherryPy) that calls a Redis instance (via redis-py) to retrieve data with ZRANGE. However, when benchmarking with ab (10,000 requests at 10 concurrent connections), I occasionally see my call to Redis take more than 1 second (sometimes up to 22 seconds) to complete. Checking the slowlog on this Redis instance, I find no record of any command taking more than 10 ms ("slowlog get" yields an empty list). This makes me think the round trip from my service to Redis (and back) is what's taking so long. What could be causing this, is it normal behavior, and is there anything I can do to prevent it?
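One way to confirm that the delay is in the round trip rather than inside Redis is to time each call client-side, since the slowlog only records server-side execution time. A minimal sketch (the client variable `r`, the key name, and the 1-second threshold are assumptions for illustration):

```python
import time

def timed(call, *args, **kwargs):
    # Wrap any client method to measure wall-clock round-trip time;
    # the Redis slowlog only records server-side execution time, so
    # network and queueing delays never show up there.
    start = time.time()
    result = call(*args, **kwargs)
    return result, time.time() - start

# Usage against a redis-py client (assumed: r = redis.Redis()):
#   members, elapsed = timed(r.zrange, "scores", 0, 9)
#   if elapsed > 1.0:
#       print("slow round trip: %.3fs" % elapsed)
```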
|Re: Troubleshooting long Redis call (zrange)||Josiah Carlson||8/16/12 10:17 AM|
Cherrypy is running behind some server. What is that server? Does it
kill threads/processes after a given number of requests? Are you using
the standard redis-py connection pooling? Are you doing standard sync
requests, or are you using the async baruvka package?
|Re: Troubleshooting long Redis call (zrange)||Alvin Tran||8/16/12 10:54 AM|
CherryPy is running behind gunicorn (with a gevent worker class). Gunicorn kills and restarts workers after a certain amount of time passes without a response (30 seconds by default), and I'm using the default. I'm also using the standard redis-py and its connection pool.
Just out of curiosity, what is the async baruvka package? I was also under the impression that gunicorn (with async workers) would allow for asynchronous calls with redis-py. Is this true, or is my assumption off?
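For reference, the setup described above corresponds to a gunicorn config file along these lines (a sketch using gunicorn's standard setting names; the worker count is an assumption):

```python
# gunicorn.conf.py -- sketch of the setup described above
worker_class = "gevent"  # async workers backed by gevent
workers = 4              # assumed; tune to the host
timeout = 30             # gunicorn's default: a silent worker is killed and restarted after 30s
```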
|Re: Troubleshooting long Redis call (zrange)||Josiah Carlson||8/16/12 11:32 AM|
Turns out my spelling is off. It's Brukva:
https://github.com/kmerenkov/brukva But that's also pretty old...
probably not what you're looking for (the Python async modules listing
in my head is old).
It also seems that if you make the proper monkey-patching call to
gevent, redis-py will still work correctly in a gunicorn+gevent
situation. Are you doing the expected "monkey.patch_all()" call?
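For completeness, when not relying on the worker to patch for you, the call in question looks like this (a sketch; it has to run before anything else in the app creates sockets):

```python
# Sketch: gevent's monkey-patching, which must run before any other
# module creates sockets (e.g. at the very top of the app module).
from gevent import monkey
monkey.patch_all()  # swaps in cooperative socket, threading, time, etc.

import socket  # from here on, this is gevent's cooperative socket
```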
While the ab run is going on, have you checked Redis' info output to
verify that you only have a limited number of concurrent connections?
If instead of hitting Redis, you make a trivial select call to a
database, do you get the same performance issue?
|Re: Troubleshooting long Redis call (zrange)||Alvin Tran||8/16/12 2:19 PM|
I am not really using any monkey-patching code; I just set the worker class to gevent in gunicorn's config file. Looking at my Redis info output, though, I see that I can get up to 100 connections when running ab with 10 concurrent connections. I'm also not sure whether it's normal for the connections to stay open like that after the ab test finishes (if it's not normal, am I supposed to have redis-py call a disconnect command, or something, after I'm done with it?)
|Re: Troubleshooting long Redis call (zrange)||Josiah Carlson||8/17/12 9:48 AM|
If you are seeing 100 connections after an ab run with 10 concurrent
requests, it's possible that your Redis client is not using pooling.
Create a single connection object (redis.Redis()), then just call
methods on it. That will automatically create and reuse connections
via its internal connection pool.
If it wasn't reusing connections, then it's possible that your OS was
having problems allocating and/or freeing sockets, which is a common
source of latency during early development and later scaling.