Is it possible to use Redis like a MongoDB capped collection? I would like to store only the last 100,000 key/value pairs: when a new key/value arrives, the oldest should be deleted and the new one set. I don't know whether something like that is possible with Redis.
Itamar Haber | Chief Developers Advocate
Redis Watch Newsletter - Curator and Janitor
Redis Labs - Enterprise-Class Redis for Developers
You can implement this with a server-side Lua script that keeps a Redis list as an insertion-order index of the stored keys:
-- Capped-collection helper: stores KEYS[1] -> ARGV[1] and keeps at most
-- ARGV[2] keys, tracked in insertion order in the list at KEYS[2].
local k = KEYS[1]
local collection = KEYS[2]
local v = ARGV[1]
local limit = tonumber(ARGV[2])

redis.call("LPUSH", collection, k)  -- newest key goes to the head of the index
redis.call("SET", k, v)

local size = redis.call("LLEN", collection)
if size > limit then
  -- Over the cap: pop the oldest key from the tail and delete its value.
  local val = redis.call("RPOP", collection)
  if val then
    redis.call("DEL", val)
  end
end

return redis.call("LLEN", collection)
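The script's eviction logic can be sketched in plain Python for illustration (the `CappedStore` class and its names are hypothetical, not a Redis API; note that, unlike the Lua script, this sketch also handles re-setting an existing key by moving it to the newest position rather than adding a duplicate index entry):

```python
from collections import OrderedDict

class CappedStore:
    """Keeps at most `limit` key/value pairs, evicting the oldest first."""

    def __init__(self, limit):
        self.limit = limit
        self.data = OrderedDict()  # insertion-ordered, like the Redis list index

    def set(self, key, value):
        # Mirror LPUSH + SET: record the key in insertion order, store the value.
        if key in self.data:
            self.data.move_to_end(key)  # re-set: treat the key as newest
        self.data[key] = value
        # Mirror the LLEN/RPOP/DEL trim step: evict oldest entries over the limit.
        while len(self.data) > self.limit:
            self.data.popitem(last=False)
        return len(self.data)

store = CappedStore(limit=3)
for i in range(5):
    store.set(f"k{i}", i)
print(sorted(store.data))  # only the three newest keys remain
```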
Mongo capped collections work primarily by total size in bytes (optionally also limited by the number of documents). IMHO, setting maxmemory, giving every key a TTL (a generous one, to prevent premature expiration) and setting maxmemory-policy to volatile-ttl should do the job at the instance level. The Redis instance then behaves like one capped collection, and you cannot use that instance for longer-lived data.
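That instance-level approach might look like the following configuration sketch (the memory limit and TTL values are placeholders, not recommendations):

```
# redis.conf (values are illustrative)
maxmemory 100mb
maxmemory-policy volatile-ttl

# Every write must carry a TTL, or volatile-ttl has no eviction candidates.
# From a client, e.g.:
#   SET mykey myvalue EX 31536000    (expire in one year)
```

The TTL here is only an eviction marker: under memory pressure, volatile-ttl evicts the keys with the shortest remaining TTL among those that have one, so keys written earliest (with the least time left) go first.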