hash-level LRU eviction policy


mef

Mar 19, 2011, 5:39:06 PM
to Redis DB
I've been using redis as an LRU cache (maxmemory with allkeys-lru
policy), and it's been working great. However, for the benefits of the
hash-zipmap compression, and for ease of organization and clarity, I
store multiple hashes in redis, each one acting as a key-value store
for a different part of the cache. So my redis root looks something
like:

"cache1" => hash(k=>v, k2=>v2, etc),
"cache2" => hash(k=>v, k2=>v2, etc),
"cache3" => hash(k=>v, k2=>v2, etc),
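To make the tradeoff concrete, here is a minimal pure-Python simulation (no Redis involved; the `keyspace` dict and `evict_one` helper are illustrative stand-ins) of what allkeys-lru does with this layout: eviction operates on top-level keys, so a single eviction drops an entire nested hash at once.

```python
import collections

# Stand-in for the Redis keyspace: top-level keys -> values.
# With the nested-hash layout, each value is itself a dict (the hash).
keyspace = collections.OrderedDict([
    ("cache1", {"k": "v", "k2": "v2"}),
    ("cache2", {"k": "v", "k2": "v2"}),
    ("cache3", {"k": "v", "k2": "v2"}),
])

def evict_one(ks):
    """allkeys-lru-style eviction, sketched: drop the least recently
    used *top-level* key (here, oldest-inserted stands in for LRU).
    The whole nested hash goes in one shot."""
    lru_key = next(iter(ks))
    del ks[lru_key]
    return lru_key

evicted = evict_one(keyspace)
# One eviction removed an entire hash (both k and k2), not a single field.
```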

All hashes stored in redis can be evicted without any problem, but
rather than evict an entire key-value hash when maxmemory is hit, it'd
be better if it could just sample and evict some of the keys in my
hashes. As it stands now, I have to either accept that one big chunk
of my cache will get evicted when maxmemory is hit, or I have to put
all my cache entries directly in the root. Neither is the end of the
world, but the former is a bit wasteful, and the latter is a bit
cluttered.

Any plans for a hash-level LRU eviction policy? Would it be hard to
implement? Otherwise, can anyone suggest a better way for what I'm
doing?

Love redis; thanks to everyone who works on it and supports it.
Especially Salvatore and VMware!

Josiah Carlson

Mar 20, 2011, 3:55:41 AM
to redi...@googlegroups.com, mef

This has been discussed before. The general feeling is that it would
add too much complexity to the hash type, and basically make hashes
just as inefficient as the root hash.

A good solution is to keep this information yourself in a zset and do
the clearing yourself. Alternatively, if your data has any sort of
time component, use it as your "sharding" key into the different
hashes and clean out the old ones.
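The zset approach can be sketched without a live server: keep a score-ordered index of (field, last-access time) alongside each hash, bump the score on every access, and evict the lowest-scored fields yourself. In this pure-Python sketch (the class name `LruHash` and its methods are illustrative), a plain dict of scores stands in for the zset; against a real server you would use HSET/HGET for the hash and ZADD, ZRANGE, and ZREM plus HDEL for the index and eviction.

```python
import time

class LruHash:
    """Sketch of the suggestion above: a hash plus a zset-like score
    index (field -> last-access timestamp) used for application-level
    LRU eviction of individual fields."""

    def __init__(self):
        self.hash = {}    # stands in for the Redis hash (HSET/HGET)
        self.scores = {}  # stands in for the zset (ZADD, time as score)

    def set(self, field, value, now=None):
        self.hash[field] = value
        self.scores[field] = now if now is not None else time.time()

    def get(self, field, now=None):
        # A read is an access, so it refreshes the field's score.
        if field in self.hash:
            self.scores[field] = now if now is not None else time.time()
            return self.hash[field]
        return None

    def evict(self, n=1):
        """Remove the n least recently used fields (with a real server:
        ZRANGE for the n lowest scores, then HDEL + ZREM on each)."""
        victims = sorted(self.scores, key=self.scores.get)[:n]
        for field in victims:
            del self.hash[field]
            del self.scores[field]
        return victims

cache = LruHash()
cache.set("a", 1, now=1.0)
cache.set("b", 2, now=2.0)
cache.get("a", now=3.0)   # touch "a", so "b" is now the LRU field
victims = cache.evict(1)  # evicts "b", not the whole hash
```

The `now` parameter is only there to make the example deterministic; in practice you would let the calls default to `time.time()`.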

Regards,
- Josiah
