What is the ideal value size range for Redis? Is 100KB too large?


S Ahmed

Jan 7, 2016, 11:18:38 AM
to redi...@googlegroups.com
Hello,

Is there an upper limit to the suggested size of the value stored for a particular key in Redis?

Is 100KB too large?

If 100KB is suitable, is there any restrictions as to what type of "type" this is stored in? e.g. hash versus list etc.

Thanks!

Itamar Haber

Jan 7, 2016, 11:36:10 AM
to redi...@googlegroups.com
Hi

On Thu, Jan 7, 2016 at 6:18 PM, S Ahmed <sahme...@gmail.com> wrote:
Hello,

Is there an upper limit to the suggested size of the value stored for a particular key in Redis?


512MB is the current limit for Strings.
 
Is 100KB too large?


No.
 
If 100KB is suitable, is there any restrictions as to what type of "type" this is stored in? e.g. hash versus list etc.


Every member (out of the possible 2^32) of a List/Hash/Set/Sorted Set is an up-to-0.5 GB String, so with these you can actually store much more under each key (2^32 * 0.5 GB = 2 EiB). The choice of data structure, however, should be driven by the nature of your queries and the underlying data.
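To make that theoretical per-key capacity concrete, here's a quick back-of-the-envelope check in Python, using the documented 512 MB string limit and the 2^32 member limit:

```python
MAX_STRING_BYTES = 512 * 1024**2   # 512 MB cap on a single String value
MAX_MEMBERS = 2**32                # max members of a List/Hash/Set/Sorted Set

theoretical_bytes = MAX_MEMBERS * MAX_STRING_BYTES
print(theoretical_bytes == 2**61)   # True: 2^32 * 2^29 bytes = 2^61 bytes
print(theoretical_bytes / 1024**6)  # 2.0 -> 2 EiB per key, in theory
```

Of course, as the rest of the thread points out, real deployments hit memory and network limits long before these theoretical ones.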
 
Thanks!

--
You received this message because you are subscribed to the Google Groups "Redis DB" group.
To unsubscribe from this group and stop receiving emails from it, send an email to redis-db+u...@googlegroups.com.
To post to this group, send email to redi...@googlegroups.com.
Visit this group at https://groups.google.com/group/redis-db.
For more options, visit https://groups.google.com/d/optout.



--

Itamar Haber | Chief Developer Advocate
Redis Watch Newsletter | Curator and Janitor
Redis Labs | ~ of Redis

Mobile: +1 (415) 688 2443
Office: +1 (650) 461 4652
Mobile (IL): +972 (54) 567 9692
Office (IL): +972 (3) 720 8515 Ext. 123
Email: ita...@redislabs.com
Twitter: @itamarhaber
Skype: itamar.haber

Blog  |  Twitter  |  LinkedIn


Didier Spezia

Jan 7, 2016, 11:59:54 AM
to Redis DB
Redis is not really designed to store very large objects.

On top of the theoretical limits mentioned by Itamar, you also need to consider
the eventual constraints on the communication buffers.

When a GET command applies to a large object, the object is first serialized
into the communication buffer, and then written to the client socket. This copy
has a cost: the bigger the object, the higher the cost. A 500 MB object would
require a 500 MB communication buffer, and if you have multiple connections
dealing with this object, each connection will require its own 500 MB buffer.
You can imagine this does not scale very well.

Please read http://redis.io/topics/clients to understand the constraints
associated with communication buffers.

100 KB is fine, and probably a few MB would be fine as well. But please do not
try to get close to the 512 MB theoretical limit.

Best regards,
Didier.

Greg Andrews

Jan 7, 2016, 12:42:41 PM
to redi...@googlegroups.com
Just remember that writing large values into Redis and reading them back out involves passing them through the server's network interface.  If your client applications are constantly writing/reading large chunks of data, they could exceed the capacity of the network interface.  If you're not writing/reading the large chunks of data often, then it usually won't be a problem.

That's one benefit of sharding your data and spreading it across multiple servers. Each server adds network capacity as well as RAM capacity.
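Client-side sharding can be as simple as hashing each key to one of several endpoints. A toy illustration with made-up hostnames (Redis Cluster uses CRC16 hash slots instead; this just shows the basic idea of spreading keys, and therefore bandwidth, across servers):

```python
import hashlib

# Hypothetical pool of Redis endpoints; sharding spreads network
# and RAM load across all of them.
SERVERS = ["redis-a:6379", "redis-b:6379", "redis-c:6379"]

def server_for(key):
    """Pick a server for a key with a stable hash of the key name."""
    digest = hashlib.md5(key.encode()).digest()
    return SERVERS[int.from_bytes(digest[:4], "big") % len(SERVERS)]

# The mapping is deterministic, so every client agrees on placement.
print(server_for("user:1") == server_for("user:1"))  # True
print(server_for("user:1") in SERVERS)               # True
```

Plain modulo hashing like this reshuffles most keys when a server is added or removed; consistent hashing (or Redis Cluster's fixed slot table) avoids that, which is why real clients use those instead.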

  -Greg


Gitted

Jan 7, 2016, 10:10:25 PM
to Redis DB
Interesting.  Does memcached behave differently in this regard?  Or it has the same issue?

Greg Andrews

Jan 8, 2016, 5:24:40 AM
to redi...@googlegroups.com
If I were on a memcached mailing list and someone asked about the feasibility of keeping large values in keys, I would call their attention to the potential for clogging the network interface's bandwidth in the very same way. The same goes for a MySQL or Postgres mailing list where someone proposed tables kept in RAM for speed and asked about columns with large blobs of data. It's not a Redis thing; it's a "writing/reading large chunks of data constantly" thing.


S Ahmed

Jan 8, 2016, 12:08:34 PM
to redi...@googlegroups.com
Hi Greg,

I understand. I know memcached is used a lot for entire-webpage caching, so that would probably be between 5 and 50 KB of HTML content.

ddorian43

Jan 8, 2016, 12:54:23 PM
to Redis DB
Varnish is usually used for entire-HTML-page caching.
Memcached is usually used for object caching.

Stefano Fratini

Jan 9, 2016, 5:32:18 AM
to Redis DB
Memcached recommends values no bigger than 1 MB.

I would suggest the same for Redis. Even more so, not going over 100 KB is most likely a good idea.

Especially if you are hosting on AWS, bigger objects lead to networking bottlenecks (the Amazon guys are very stingy with bandwidth, and smartly so).

A list or set can grow to any size (within the limits of Redis) as long as you extract < 100 KB per call :)
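For lists, keeping each read under such a cap usually means paging with LRANGE rather than fetching everything at once. A small sketch of the index arithmetic (LRANGE's stop index is inclusive, unlike Python slices; the page size here is arbitrary):

```python
PAGE_SIZE = 1000  # arbitrary; pick it so each reply stays comfortably small

def lrange_pages(total_items, page_size=PAGE_SIZE):
    """Yield (start, stop) pairs for LRANGE, with stop inclusive,
    covering a list of total_items elements in bounded slices."""
    for start in range(0, total_items, page_size):
        yield start, min(start + page_size, total_items) - 1

# With a hypothetical client r, each page would be fetched as:
#   r.lrange("mylist", start, stop)
print(list(lrange_pages(2500)))  # [(0, 999), (1000, 1999), (2000, 2499)]
```

The same idea applies to the other structures: HSCAN/SSCAN/ZSCAN with a COUNT hint let you walk a big hash, set, or sorted set in bounded batches instead of one huge HGETALL or SMEMBERS reply.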