Re: Redis expiration rate


Josiah Carlson

May 7, 2013, 1:50:25 PM5/7/13
to redi...@googlegroups.com
This is normal behavior.

By default, Redis only looks for keys to expire (along with some other operations) 10 times/second. You can increase this value (the configuration option is called 'hz'), but it will result in Redis using more CPU while idle. You could also perform your own RANDOMKEY calls, which have much the same effect, except that they only do the key-expiration part of the normal operations that run 10 times/second. While repeated calls to RANDOMKEY are a total hack for incrementally expiring old data, the approach is easily tuned depending on your current load and how much "old" data you are willing to have.
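A minimal sketch of that RANDOMKEY hack (Python, assuming a redis-py-style client object; the probe/round/interval numbers are illustrative, to be tuned to your load):

```python
import time

def drain_expired(client, probes=100, rounds=10, interval=0.1):
    """Repeatedly issue RANDOMKEY from a side client. Each call gives
    Redis a chance to land on an already-expired key and delete it,
    mimicking the expiration part of the work the server itself does
    10 times/second."""
    for _ in range(rounds):
        for _ in range(probes):
            client.randomkey()  # the returned key itself is irrelevant
        time.sleep(interval)
```

Raising `probes` trades CPU on the Redis server for less stale data sitting in memory.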

You seem to accumulate roughly 20-35% extra unused space daily. How many keys of persistent and volatile data do you have? That will tell you how many RANDOMKEY calls you could/should send per time period to reduce that to something you would find more reasonable.

Regards,
 - Josiah

On Tue, May 7, 2013 at 3:55 AM, Dan C <dco...@gmail.com> wrote:
Hi,

I am not really sure if I am doing something wrong or if what is happening is normal.
I am attaching an image of the memory use of one Redis instance.
In the image there are 2 memory drops. Those drops coincide in time with a "keys *". After some tests, I think I can conclude that those drops are caused by the "keys" command forcing Redis to free already-expired keys. If I am right, it means a lot of keys are already expired but still using memory. Is this normal? I think that on a very busy Redis instance with big TTLs this could be a problem. Is there maybe an option to modify this behavior?

The image shows the memory of a Redis 2.2, but I tested the same thing on 2.6.13 and the same happens.

Thanks,
Dan.


Dan C

May 8, 2013, 4:48:18 AM5/8/13
to redi...@googlegroups.com
Hi Josiah,

In this Redis instance I have around 5M keys, almost all of them with a 10h TTL.
Let me see if I understood you properly. By default, 10 times/second (hz), Redis will look for keys to expire. Does this mean it will remove (and free memory for) ALL the already-expired keys? Or will it just remove some of them?

I did a test yesterday. I had a single instance (2.6.13) and I SET thousands of keys, half of them with a TTL of 30 and the other half with a TTL of 60.
The first time I SET 12k keys everything seemed fine. It worked perfectly.
Then I SET 18k keys with a TTL of 5000, and started SETting thousands more of the 30- and 60-TTL keys.
After the last SET with TTL 60 finished, I started watching the memory and the dbsize. As I understand it, 60 seconds after the last SET all keys should have expired (except for the 18k with TTL 5000), so dbsize should be 18k. That is not what happens. After 60 seconds (and even after 300 seconds) dbsize is not yet 18k. It does keep decreasing, so keys are being expired and deleted and memory is being freed, but much more slowly than I expected given the TTL of 60 (which should cover all of them after 60s). Stranger still, if at any moment after the 60s I run a "keys *" on the instance, memory is freed and dbsize becomes 18k (in fact there are 18k keys).
So, from my point of view, this means that even if the Redis process that expires keys (hz) is really deleting them, it is not deleting ALL the keys that have already expired. And somehow, when I use "keys *", it forces ALL expired keys to be deleted and the memory freed.

Sorry for the extended explanation, but I think my first post was too vague and I am not sure whether this behavior is supposed to be normal.


Thanks a lot!

Jan-Erik Rediger

May 8, 2013, 10:44:14 AM5/8/13
to redi...@googlegroups.com
On every interval Redis expires just a few keys, not all of them. "KEYS *" reads ALL keys in the database, which is why all expired keys get removed when you run it (and also why KEYS should only be used in development, never in production: it's slow).

Josiah Carlson

May 8, 2013, 10:44:28 AM5/8/13
to redi...@googlegroups.com
Replies inline.

On Wed, May 8, 2013 at 1:48 AM, Dan C <dco...@gmail.com> wrote:
Hi Josiah,

In this Redis instance I have around 5M keys, almost all of them with a 10h TTL.
Let me see if I understood you properly. By default, 10 times/second (hz), Redis will look for keys to expire. Does this mean it will remove (and free memory for) ALL the already-expired keys? Or will it just remove some of them?

No. It picks random keys, checking to see whether they have expired.

I did a test yesterday. I had a single instance (2.6.13) and I SET thousands of keys, half of them with a TTL of 30 and the other half with a TTL of 60.
The first time I SET 12k keys everything seemed fine. It worked perfectly.
Then I SET 18k keys with a TTL of 5000, and started SETting thousands more of the 30- and 60-TTL keys.
After the last SET with TTL 60 finished, I started watching the memory and the dbsize. As I understand it, 60 seconds after the last SET all keys should have expired (except for the 18k with TTL 5000), so dbsize should be 18k. That is not what happens. After 60 seconds (and even after 300 seconds) dbsize is not yet 18k. It does keep decreasing, so keys are being expired and deleted and memory is being freed, but much more slowly than I expected given the TTL of 60 (which should cover all of them after 60s). Stranger still, if at any moment after the 60s I run a "keys *" on the instance, memory is freed and dbsize becomes 18k (in fact there are 18k keys).

Here's the critical fact that you are missing: Redis does not keep a list of the keys that should expire; it keeps a counter. So when it goes to expire keys, it doesn't have a list to iterate over - it performs some random probes into the hash table. Keys that need to be expired are expired; keys that haven't expired yet are skipped.

Think of it like this. You've got some keys. Some of them have already expired, some haven't. If you do a random probe into the space looking for keys to delete due to expiration, whether you find some will depend on the number of probes you do, and your likelihood of finding a key to expire. As an example, if you have 100k keys, 75k of which are past their expiration times, 25k of which are not... then when Redis randomly probes for keys to delete, it's generally going to find a key to delete 3/4 of the time. But that quickly dwindles (depending on how often you are writing more keys with TTLs to Redis), as those old keys are deleted pretty reliably. Once you get to about 20-25% of your keys being ready for deletion, then only about 15-20% of the random checks will return keys that can be deleted.

Then there's the other factor: if Redis finds a lot of keys to expire during its random probing, it takes extra time to look for more keys to clear out.
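Josiah's numbers are easy to check with a toy simulation (pure Python; the 20-key sample size and 25% stop threshold mirror the behavior described in this thread, not exact Redis internals):

```python
import random

def active_expire_pass(expired, live, sample=20, threshold=0.25):
    """Emulate one active-expire cycle: probe `sample` random volatile
    keys, delete the expired ones found, and keep probing while more
    than `threshold` of the sample turned out to be expired."""
    while expired > 0:
        # chance a random probe hits an already-expired key
        found = sum(random.random() < expired / (expired + live)
                    for _ in range(sample))
        expired = max(0, expired - found)
        if found <= sample * threshold:
            break  # few hits: stop and wait for the next cycle
    return expired

random.seed(1)  # deterministic for illustration
left = active_expire_pass(75_000, 25_000)
print("expired keys still resident after the pass:", left)
```

With 75k of 100k keys expired, a pass like this stops with thousands of expired keys still resident - the memory that, in Dan's test, only a "KEYS *" finally freed.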
 
So, from my point of view, this means that even if the Redis process that expires keys (hz) is really deleting them, it is not deleting ALL the keys that have already expired. And somehow, when I use "keys *", it forces ALL expired keys to be deleted and the memory freed.

It doesn't delete all of them with random probing. Statistically speaking, it is very improbable that it would actually find them all. When performing a KEYS call, Redis visits all of the keys, notices which ones have expired, and deletes them. Visiting every key is part of why KEYS is so slow (compared to other commands).
 

Sorry for the extended explanation, but I think my first post was too vague and I am not sure whether this behavior is supposed to be normal.

Perfectly normal.

 - Josiah

Dan C

May 8, 2013, 11:28:00 AM5/8/13
to redi...@googlegroups.com
Ok! Thanks Josiah and Jan-Erik,

I get it now.
So, the only way to "expire more" is to increase the probes, and as I understand it the only way to do that is with the "hz" parameter.
In my case, though, I will have to change something, probably the TTL and/or the "hz" value. Of my 1.7GB database, more than 500MB is already-expired data. Using 500MB of RAM on expired keys is a big waste of memory!
Anyway, is this proportion normal? It seems to me that 1/3 of the database already expired (as I can see after the "keys *") is quite a lot.

Thanks a lot guys!

Dan.

Josiah Carlson

May 8, 2013, 12:26:01 PM5/8/13
to redi...@googlegroups.com
Actually, the option is 'hz', as in 'hertz'.

For a more in-depth look at what goes on, visit http://redis.io/commands/expire and go to the "How Redis expires keys" section. If you want fewer expired keys hanging around, you could certainly increase the hz configuration, but that probably won't get you what you want, as each second it will find fewer than 25% of 100*hz keys to expire (so if you increase your checks to 100, which is not recommended, you'd expire fewer than 2500 keys every second). You could also set the maxmemory configuration option, which will result in Redis being more aggressive with its expiration. Or you could just have a client that repeatedly calls RANDOMKEY (as I mentioned before).
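For reference, the two knobs mentioned above live in redis.conf. A sketch (the values and the eviction policy shown are illustrative assumptions, not recommendations):

```
# redis.conf (sketch)
hz 10                           # active-expire cycles per second (default 10)
maxmemory 2gb                   # once set, Redis reclaims memory more aggressively
maxmemory-policy volatile-ttl   # evict keys with the nearest TTL first
```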

Really though, there are four things that will actually solve your perceived problem:
1. Keep your own list of keys that expire, and expire them manually (this used to be pretty common; you can use a ZSET to hold keys and their expiration times).
2. Call KEYS every hour or so to expire everything (if you provide a pattern that doesn't match any keys, you reduce the data that Redis needs to send you while still getting the expiration).
3. Get a bigger machine and accept that Redis will tend to keep up to about 25% of your keys around as expired.
4. Write less data to Redis.
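Option 1 could look something like the sketch below (Python, assuming a redis-py-style client; the index key name 'ttl:index' and the function names are made up for the example):

```python
import time

TTL_INDEX = 'ttl:index'  # hypothetical ZSET mapping key -> expiration time

def set_with_manual_ttl(client, key, value, ttl):
    """Store the value and record its expiration time in our own ZSET."""
    client.set(key, value)
    client.zadd(TTL_INDEX, {key: time.time() + ttl})

def reap_expired(client, now=None):
    """Delete every key whose recorded expiration time has passed,
    then trim the index. Call this from a cron job or background loop."""
    now = time.time() if now is None else now
    stale = client.zrangebyscore(TTL_INDEX, 0, now)
    if stale:
        client.delete(*stale)
        client.zremrangebyscore(TTL_INDEX, 0, now)
    return len(stale)
```

Unlike native TTLs, nothing here is reclaimed lazily: one reap_expired() pass removes everything that is due.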

 - Josiah

Dan C

May 9, 2013, 3:26:14 AM5/9/13
to redi...@googlegroups.com
Thank you very much Josiah!

It couldn't be clearer. Now it's time to decide which way to go.

Again: Thanks!

Salvatore Sanfilippo

May 9, 2013, 9:26:52 AM5/9/13
to Redis DB
Hello Dan,

You may try hacking the following defines and recompiling if you want Redis to expire more aggressively:

#define REDIS_EXPIRELOOKUPS_PER_CRON 10 /* lookup 10 expires per loop */
#define REDIS_EXPIRELOOKUPS_TIME_PERC 25 /* CPU max % for keys collection */

By default it will never use more than 25% of CPU time for lazy expiring of keys, but you may want to raise this, as well as the number of lookups performed per loop.

I would try with 50 lookups and 50% max CPU to see how this changes
the behavior.
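Concretely, that experiment would amount to a change like this in the Redis 2.x source (the exact file holding these defines may vary by version; treat the numbers as an experiment, not tuned values):

```c
/* Salvatore's suggested experiment: raise both limits */
#define REDIS_EXPIRELOOKUPS_PER_CRON 50  /* was 10: probe more keys per loop */
#define REDIS_EXPIRELOOKUPS_TIME_PERC 50 /* was 25: allow up to 50% CPU time */
```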

Modifying the hz parameter alone is unlikely to result in an improvement, as Redis will split that 25% max CPU time into smaller parts because the function that expires keys is called more often.

Thanks,
Salvatore
Salvatore 'antirez' Sanfilippo
open source developer - VMware
http://invece.org

Beauty is more important in computing than anywhere else in technology
because software is so complicated. Beauty is the ultimate defence
against complexity.
— David Gelernter

Salvatore Sanfilippo

May 9, 2013, 9:28:24 AM5/9/13
to Redis DB
P.S. However, raising "hz" will make the expiring more incremental.

Dan C

May 10, 2013, 5:31:49 AM5/10/13
to redi...@googlegroups.com
Thank you Salvatore,

I'll try to find time to "play" with those defines.