memory optimization

prateek gupta

Jul 2, 2014, 5:29:26 AM
to redi...@googlegroups.com
Hi, I currently have maxmemory set to 500MB with the volatile-lru eviction policy. My payload for the SET command is 3KB, and I am only able to insert 157824 keys.

Even if I use HSET, I can only store about the same maximum number of keys.

I have set the following in the advanced config:


hash-max-ziplist-entries 1000000
hash-max-ziplist-value 10000

list-max-ziplist-entries 512
list-max-ziplist-value 64

set-max-intset-entries 1000000

zset-max-ziplist-entries 1000000
zset-max-ziplist-value 10000



So I want to insert more keys while staying within the same 500MB of RAM.

How do I do that?

Jan-Erik Rediger

Jul 2, 2014, 5:43:44 AM
to redi...@googlegroups.com
Compress your data.
157824 x 3 kB = 473.5 MB.

Redis is not a magic tool that will shrink your data on its own.
If you apply a compression algorithm before sending the data to Redis,
you might save some space.
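
For example, a rough sketch of that idea, assuming Python with the
redis-py client and zlib (the key name and payload here are just for
illustration):

import zlib
import redis

r = redis.Redis(host='localhost', port=6379)

payload = b'x' * 3072                  # a 3 KB value like the one described
compressed = zlib.compress(payload)    # often much smaller for text/JSON-like data

# Store the compressed bytes; the reader has to remember to decompress.
r.set('item:1', compressed)
value = zlib.decompress(r.get('item:1'))
assert value == payload

How much you save depends entirely on how compressible the payload is; a
3 KB blob of random or already-compressed data will not shrink much.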

prateek gupta

Jul 3, 2014, 12:53:00 AM
to redi...@googlegroups.com
What is the threshold beyond which Redis stops using the compact encoding? In other words, at what payload size do the parameters
hash-max-ziplist-entries
hash-max-ziplist-value
stop applying, or does the ziplist encoding simply not happen for a payload that large?

I ask because when I keep my payload down to a few bytes, HSET uses almost 8-9 times less memory than SET.

Josiah Carlson

Jul 3, 2014, 2:09:00 AM
to redi...@googlegroups.com
After you have a few hundred entries in a single ziplist-encoded hash, the overhead of the key and the structures involved is a fraction of a percent of the encoded size, so all you are doing beyond that point is wasting computational resources in the encoding/decoding steps of using a hash that large. If you want to minimize the number of keys to keep things like "KEYS *" useful (though I would still discourage such things), I'd cap each hash at a few thousand elements at most, if only to minimize memory scanning and churn during insertion/deletion (if those matter to you).
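
For illustration, a rough sketch (assuming Python with redis-py; the
bucket size and key scheme are just made up) of spreading many logical
keys across small hashes so each hash stays within the ziplist limits:

import redis

r = redis.Redis()
BUCKET_SIZE = 1000  # keep each hash well under hash-max-ziplist-entries

def bucketed_hset(key_id, value):
    # All ids in the same block of 1000 share a single hash.
    bucket = 'item:%d' % (key_id // BUCKET_SIZE)
    r.hset(bucket, key_id, value)

def bucketed_hget(key_id):
    bucket = 'item:%d' % (key_id // BUCKET_SIZE)
    return r.hget(bucket, key_id)

bucketed_hset(157823, 'payload bytes here')
print(bucketed_hget(157823))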

With respect to plain SET (top-level string keys), yes, you do see an 8-9x reduction in space when the data is stored in a ziplist-encoded hash instead, but that comes at the expense of performance during insertion, removal, member checking, etc. As in many things with computers, there is a time/space tradeoff. Redis chose (generally) to be fast, not small. It has made some concessions with the ziplist encodings, which address the issue for most people (though they are not perfect).
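
If it helps to see where the encoding flips, OBJECT ENCODING reports the
representation a key currently uses. A quick sketch, again assuming Python
with redis-py (key names and value sizes are just for demonstration):

import redis

r = redis.Redis()
r.delete('enc:test')

r.hset('enc:test', 'small', 'x' * 10)
print(r.execute_command('OBJECT', 'ENCODING', 'enc:test'))  # expect ziplist

# A value longer than hash-max-ziplist-value forces the hashtable encoding.
r.hset('enc:test', 'big', 'x' * 100000)
print(r.execute_command('OBJECT', 'ENCODING', 'enc:test'))  # expect hashtable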

Rather than asking about the limits of what Redis can do for you, if you were to tell us what you actually want to do with Redis, we would likely be able to give you better advice. I say this after having answered several hundred questions like yours over the last 4+ years.

 - Josiah
