how many keys Redis can hold

1,068 views

Oleg Ruchovets

Sep 1, 2013, 2:06:53 PM9/1/13
to redi...@googlegroups.com
Hi.
   I have ~1,000,000,000 keys, each ~100 bytes in size = ~100 GB of data.
Question:
   Is it feasible to work with this many keys in Redis?
Of course, I am going to deploy Redis across a couple of machines.

It would be great if the community could share its experience with large key counts.

Thanks
Oleg. 
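For what it's worth, the sizing above can be sanity-checked with back-of-the-envelope arithmetic. The 60-byte per-key overhead below is an assumption for illustration; actual Redis overhead varies by version and encoding.

```python
# Rough sizing for the workload described in the question.
NUM_KEYS = 1_000_000_000
VALUE_BYTES = 100        # payload per key, as stated in the question
OVERHEAD_BYTES = 60      # assumed per-key overhead (key string, dict entry, object header)

payload_gb = NUM_KEYS * VALUE_BYTES / 1e9
total_gb = NUM_KEYS * (VALUE_BYTES + OVERHEAD_BYTES) / 1e9

print(f"payload only: ~{payload_gb:.0f} GB")
print(f"with assumed per-key overhead: ~{total_gb:.0f} GB")
```

So the raw payload is ~100 GB, but the real memory footprint will be noticeably larger, which is why a single large instance or sharding comes up in the replies below.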


Yiftach Shoolman

Sep 1, 2013, 3:46:29 PM9/1/13
to redi...@googlegroups.com
That should work using a large instance, or sharding (preferred).


--
You received this message because you are subscribed to the Google Groups "Redis DB" group.
To unsubscribe from this group and stop receiving emails from it, send an email to redis-db+u...@googlegroups.com.
To post to this group, send email to redi...@googlegroups.com.
Visit this group at http://groups.google.com/group/redis-db.
For more options, visit https://groups.google.com/groups/opt_out.




Oleg Ruchovets

Sep 1, 2013, 4:41:27 PM9/1/13
to redi...@googlegroups.com
Great, thank you for the link.
  Another question I wanted to ask: does it make sense to expect 5,000-10,000 requests per second? I've read a lot of material with different benchmarks, but the results vary widely.
Thanks
Oleg.



Yiftach Shoolman

Sep 1, 2013, 4:50:56 PM9/1/13
to redi...@googlegroups.com
If you have simple GET/SET requests, then Redis can easily do that on almost any modern HW/VM.

If you are talking about complex commands like ZUNIONSTORE or ZINTERSTORE, then it really depends on the complexity parameters (N, M, K, ...).

IMO - test your use cases and decide on your architecture based on the results of those tests.

Josiah Carlson

Sep 2, 2013, 2:23:57 AM9/2/13
to redi...@googlegroups.com
Simple gets and sets at that volume wouldn't be an issue as long as you have a fast enough network.

 - Josiah

Oleg Ruchovets

Sep 2, 2013, 2:57:19 AM9/2/13
to redi...@googlegroups.com
Are we talking about a single instance of Redis, or does it require sharding to spread the request load?

Thanks
Oleg.

Josiah Carlson

Sep 2, 2013, 1:10:41 PM9/2/13
to redi...@googlegroups.com
If you had a fast enough network connection, you'd likely be able to do 10-20x that without issue on one machine with one process without sharding. Given 8-16 cores, and a fast enough network, you may be able to shard it on the one machine and push up to 500k-1M requests/second.

 - Josiah
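One way to sketch the single-machine sharding Josiah describes: each client hashes the key and routes the command to one of N local Redis processes. The CRC32 hash and the key format here are arbitrary illustrative choices, not something prescribed in the thread.

```python
import zlib

NUM_SHARDS = 8  # e.g. one Redis process per core, per the suggestion above

def shard_for(key: str) -> int:
    """Pick a shard index by hashing the key.

    CRC32 is an arbitrary choice; any stable hash that spreads keys
    evenly works. The client then sends the command to the Redis
    process listening for that shard.
    """
    return zlib.crc32(key.encode()) % NUM_SHARDS
```

The important property is determinism: the same key must always map to the same shard, or reads will miss data written to a different process.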

Jon

Sep 2, 2013, 1:27:20 PM9/2/13
to redi...@googlegroups.com
You probably want to shard it onto separate machines AND shard it into hashes -- hashes can dramatically reduce the amount of memory used. Your use case sounds very similar to this article: http://instagram-engineering.tumblr.com/post/12202313862/storing-hundreds-of-millions-of-simple-key-value-pairs
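The bucketing scheme from that article can be sketched as a simple mapping: instead of one top-level key per ID, each ID lands in a hash of ~1000 fields, which keeps the hashes small enough for Redis's compact encoding. The key names below are hypothetical.

```python
def bucket(key_id: int, bucket_size: int = 1000) -> tuple[str, str]:
    """Map a numeric ID to a (hash key, field) pair.

    With ~1000 fields per hash, each hash stays under the
    hash-max-ziplist-entries limit and is stored in the compact
    ziplist encoding, which is where the memory savings come from.
    """
    return f"bucket:{key_id // bucket_size}", str(key_id % bucket_size)

# So instead of  SET 1234567 <value>
# you would do   HSET bucket:1234 567 <value>
```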

Oleg Ruchovets

Sep 2, 2013, 2:21:07 PM9/2/13
to redi...@googlegroups.com
Not exactly my use case :-) , but a very useful article.
   Are sorted sets optimized in the same way?

Thanks
Oleg.




Tanguy Le Barzic

Sep 2, 2013, 2:56:00 PM9/2/13
to redis-db
Yes, sorted sets can also be more efficiently encoded (see zset-max-ziplist-entries and zset-max-ziplist-value in the redis conf).
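For reference, these are the relevant redis.conf directives (the values shown are the long-standing defaults; tune them to your entry counts and member sizes):

```
# A sorted set stays in the compact ziplist encoding only while
# BOTH limits hold; exceeding either converts it to a skiplist.
zset-max-ziplist-entries 128
zset-max-ziplist-value 64
```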

