Connection pool in the Redis Python client (redis-py)


sarang shravagi

Jul 2, 2013, 5:49:30 AM
to redi...@googlegroups.com
Hey,
I'm using Redis for caching, which is working fine, and I'm going to use it with very high throughput.
The read and write rate on a single db is going to be around 200 per sec.
 
I've been using the Redis class for a long time, so I'm continuing with it:
redis_handle = Redis(host="", db="", port="")

I have a couple of questions:

1. Does Redis provide connection pooling automatically?
2. Would creating redis_handle at the global level be a good idea?
3. What is the best practice to support this throughput rate?

Thanks a lot in advance :)

Andy McCurdy

Jul 2, 2013, 11:14:20 AM
to redi...@googlegroups.com
On Jul 2, 2013, at 2:49 AM, sarang shravagi <sarang....@gmail.com> wrote:

1. Does Redis provide connection pooling automatically?

redis-py does, yes. Behind the scenes, if you don't explicitly pass a connection_pool to a Redis instance, one will be created for you based on the host, db, port, etc. parameters you do pass. Note that in this case the connection pool is only used for that specific client instance, meaning if you're creating Redis instances in various places throughout your code, each will create a separate connection pool. There are two ways to solve that (see next item):
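To make that per-instance behaviour concrete, a minimal sketch (assuming a local Redis on the default port):

import redis

# Neither client is given an explicit connection_pool, so each one
# builds its own ConnectionPool from the host/port/db arguments.
r1 = redis.Redis(host="localhost", port=6379, db=0)
r2 = redis.Redis(host="localhost", port=6379, db=0)

# The pools are distinct objects; the two clients share no connections.
print(r1.connection_pool is r2.connection_pool)  # False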

2. Would creating redis_handle at the global level be a good idea?

In general, I suggest you either:

a. create a global redis client instance and have your code use that.
b. create a global connection pool and pass that to various redis instances throughout your code.

Both of these accomplish the same thing. Both are threadsafe.
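A rough sketch of both options (the host/port/db values here are placeholders):

import redis

# Option (a): one global client shared by the whole application.
redis_client = redis.Redis(host="localhost", port=6379, db=0)

# Option (b): one global pool, handed to each client that needs it.
pool = redis.ConnectionPool(host="localhost", port=6379, db=0)

def get_client():
    # Every client built this way draws from the same pool of connections.
    return redis.Redis(connection_pool=pool)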

Note however that pipeline instances and pubsub instances are *not* threadsafe. Don't create those at a global/module level. Instead, create new pubsub/pipeline instances in each thread.

3. What is the best practice to support this throughput rate?


Connection pooling helps. Pipelines help a lot too. You should also install hiredis; redis-py will automatically use it if it's available.
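For example, a pipeline queues several commands and sends them to the server in a single round trip (a sketch, reusing the hypothetical redis_client from the example above):

# Queue 100 writes and send them in one network round trip.
pipe = redis_client.pipeline()
for i in range(100):
    pipe.set("key:%d" % i, i)
results = pipe.execute()  # one reply per queued command

The hiredis speedup needs no code at all: once the hiredis package is installed, redis-py switches to its faster reply parser on its own.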



sarang

Jul 2, 2013, 11:27:05 AM
to redi...@googlegroups.com
Can you elaborate in detail on pipeline instances and pubsub instances? I did not get it.
 





--
Regards,
Sarang

Andy McCurdy

Jul 2, 2013, 12:39:54 PM
to redi...@googlegroups.com
Pipeline and pubsub objects both maintain internal state. Creating a global instance that multiple threads access isn't safe. Likewise, don't pass pipeline or pubsub objects between threads.

Instead, each thread that needs to use a pipeline or pubsub instance should create its own from the global redis object.
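A sketch of that, again assuming the global redis_client from the earlier example:

import threading

def worker():
    # Build the pipeline inside the thread; never share one across threads.
    pipe = redis_client.pipeline()
    pipe.incr("jobs:done")
    pipe.lpush("events", "worker finished")
    pipe.execute()

    # Same rule for pubsub: one instance per thread.
    pubsub = redis_client.pubsub()
    pubsub.subscribe("notifications")
    pubsub.get_message()  # non-blocking poll for a message
    pubsub.close()

threads = [threading.Thread(target=worker) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()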

Josiah Carlson

Jul 2, 2013, 2:44:20 PM
to redi...@googlegroups.com
On Tue, Jul 2, 2013 at 2:49 AM, sarang shravagi <sarang....@gmail.com> wrote:
Hey,
I'm using Redis for caching, which is working fine, and I'm going to use it with very high throughput.
The read and write rate on a single db is going to be around 200 per sec.

This honestly made me laugh out loud. Hearing "very high throughput" and "200 per sec" being used together without irony in relation to Redis is what did it. I mean no offense, it's just not the scale I'm used to ;)

Trust me when I say that you'll have some head-room for a while with those QPS numbers.

 - Josiah
 

sarang shravagi

Jul 3, 2013, 7:36:39 AM
to redi...@googlegroups.com
Ok :)

gradetwo

Jul 9, 2013, 10:34:15 PM
to redi...@googlegroups.com
My solution: https://gist.github.com/gradetwo/5962997
It uses a @classmethod to set up the connection.
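The gist isn't reproduced here, but the general shape of that pattern could look roughly like this (class and method names are purely illustrative):

import redis

class RedisStore(object):
    _pool = None  # shared by every instance of the class

    @classmethod
    def setup(cls, host="localhost", port=6379, db=0):
        # Call once at startup to build the shared connection pool.
        cls._pool = redis.ConnectionPool(host=host, port=port, db=db)

    @property
    def conn(self):
        # Each instance hands out clients backed by the same pool.
        return redis.Redis(connection_pool=self._pool)

# Illustrative usage:
RedisStore.setup(host="localhost")
RedisStore().conn.set("greeting", "hello")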



nangunoori shivani

Jan 20, 2021, 11:45:04 AM
to Redis DB
Will reconnection be taken care of by the connection pool?
Thanks