Allkeys-lru


Glauco Schlembach

Aug 5, 2024, 4:56:17 AM
to kingdicucor
I am using Redis as a datastore rather than a cache, but there is a maxmemory limit set. In my understanding, maxmemory specifies the RAM that Redis can use; should it not swap the data back to disk once the memory limit is reached? I have a mixture of keys: some have their expiry set and others don't. I have tried both volatile-lru and allkeys-lru; as specified in the documentation, both remove old keys based on that property. What configuration should I use to avoid data loss? Should I set an expiry on all keys and use volatile-lru? What am I missing?

You control what Redis does when memory is exhausted with maxmemory and maxmemory-policy. Both are settings in redis.conf. Take a look. Swapping memory out to disk is not an option in recent Redis versions.
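For example, a minimal redis.conf for a datastore-style setup (the 2gb value here is just illustrative) could look like this:

maxmemory 2gb
maxmemory-policy noeviction

With noeviction, Redis never drops data on its own: once the limit is hit, writes fail with an error instead, which is usually what you want when Redis is the primary datastore.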


If Redis can't remove keys according to the policy, or if the policy is set to 'noeviction', Redis will start to reply with errors to commands that would use more memory, like SET, LPUSH, and so on, and will continue to reply to read-only commands like GET.
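In practice the rejection looks roughly like this in redis-cli (the key name is illustrative):

127.0.0.1:6379> SET mykey "value"
(error) OOM command not allowed when used memory > 'maxmemory'.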


If maxmemory is reached, you lose data only if the eviction policy set in maxmemory-policy tells Redis to evict keys; the policy also determines how those keys are selected (volatile keys or all keys, combined with lfu/lru/ttl/random). Otherwise, Redis starts rejecting write commands to preserve the data already in memory. Read commands continue to be served.


If your operating system has virtual memory enabled, and the maxmemory setting allows Redis to go over the physical memory available, then your OS (not Redis) starts to swap out memory to disk. You can expect a performance drop then.


Once your memory is full, the configured LRU algorithm kicks in, evicting the least recently used keys. With allkeys-lru it doesn't matter whether a key has an expiry set or what its TTL is: the least recently used items will be evicted. With volatile-lru, only keys with an expiry set will be evicted using this algorithm.


When Redis is used as a cache, it is often convenient to let it automatically evict old data as you add new data. This behavior is well known in the developer community, since it is the default behavior for the popular memcached system.


This page covers the more general topic of the Redis maxmemory directive used to limit the memory usage to a fixed amount. It also extensively covers the LRU eviction algorithm used by Redis, which is actually an approximation of the exact LRU.


The maxmemory configuration directive configures Redis to use a specified amount of memory for the data set. You can set the configuration directive using the redis.conf file, or later using the CONFIG SET command at runtime.
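For example, to set or inspect the limit at runtime (the 100mb value is illustrative):

127.0.0.1:6379> CONFIG SET maxmemory 100mb
OK
127.0.0.1:6379> CONFIG GET maxmemory
1) "maxmemory"
2) "104857600"

Note that CONFIG GET reports the value in bytes. A maxmemory of 0 means no limit, which is the default on 64-bit systems.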


When the specified amount of memory is reached, the configured eviction policy determines the behavior. Redis can return errors for commands that could result in more memory being used, or it can evict some old data to return to the specified limit every time new data is added.


Picking the right eviction policy is important and depends on the access pattern of your application; however, you can reconfigure the policy at runtime while the application is running, and monitor the number of cache misses and hits using the Redis INFO output to tune your setup.
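A runtime tuning session might look like this (the counter values shown are illustrative):

127.0.0.1:6379> CONFIG SET maxmemory-policy allkeys-lru
OK
127.0.0.1:6379> INFO stats
...
keyspace_hits:10423
keyspace_misses:912
...

The ratio of keyspace_hits to keyspace_misses tells you how well the current policy matches your access pattern.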


Use the allkeys-lru policy when you expect a power-law distribution in the popularity of your requests. That is, you expect a subset of elements will be accessed far more often than the rest. This is a good pick if you are unsure.


The volatile-lru and volatile-random policies are mainly useful when you want to use a single instance for both caching and for a set of persistent keys. However, it is usually a better idea to run two Redis instances to solve such a problem.


It is also worth noting that setting an expire value on a key costs memory, so using a policy like allkeys-lru is more memory efficient, since there is no need to set an expire on a key for it to be evicted under memory pressure.


The Redis LRU algorithm is not an exact implementation. This means that Redis is not able to pick the best candidate for eviction, that is, the key that was accessed the furthest in the past. Instead it will try to run an approximation of the LRU algorithm, by sampling a small number of keys and evicting the one that is the best (with the oldest access time) among the sampled keys.


However, since Redis 3.0 the algorithm was improved to also maintain a pool of good candidates for eviction. This improved the performance of the algorithm, making it able to approximate more closely the behavior of a real LRU algorithm.


What is important about the Redis LRU algorithm is that you are able to tune the precision of the algorithm by changing the number of samples to check for every eviction. This parameter is controlled by the following configuration directive:
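The directive is maxmemory-samples, which defaults to 5:

maxmemory-samples 5

Raising it (e.g. to 10) makes the approximation closer to true LRU at the cost of some extra CPU; lowering it is faster but less accurate.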


The reason Redis does not use a true LRU implementation is that it costs more memory. However, the approximation is virtually equivalent for an application using Redis. A figure in the Redis documentation compares the LRU approximation used by Redis with true LRU.


The test that generated those graphs filled a Redis server with a given number of keys. The keys were accessed from the first to the last, so the first keys are the best candidates for eviction under an LRU algorithm. Then 50% more keys were added, in order to force half of the old keys to be evicted.


As you can see, Redis 3.0 does a better job with 5 samples compared to Redis 2.8; however, most objects among the most recently accessed are still retained by Redis 2.8. Using a sample size of 10 in Redis 3.0, the approximation comes very close to the theoretical performance of a true LRU.


Note that LRU is just a model to predict how likely a given key is to be accessed in the future. Moreover, if your data access pattern closely resembles the power law, most of the accesses will be in the set of keys that the approximated LRU algorithm can handle well.


Starting with Redis 4.0, the Least Frequently Used (LFU) eviction mode is available. This mode may work better (provide a better hits/misses ratio) in certain cases. In LFU mode, Redis will try to track the frequency of access of items, so the ones used rarely are evicted. This means the keys used often have a higher chance of remaining in memory.
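Enabling it is just a matter of picking one of the LFU policies in redis.conf (the maxmemory value is illustrative):

maxmemory 2gb
maxmemory-policy allkeys-lfu

A volatile-lfu variant also exists, restricted to keys with an expire set.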


LFU is approximated like LRU: it uses a probabilistic counter, called a Morris counter, to estimate the object access frequency using just a few bits per object, combined with a decay period so that the counter is reduced over time. At some point we no longer want to consider keys as frequently accessed, even if they were in the past, so that the algorithm can adapt to a shift in the access pattern.


However, unlike LRU, LFU has certain tunable parameters: for example, how fast should a frequent item drop in rank if it is no longer accessed? It is also possible to tune the Morris counter range to better adapt the algorithm to specific use cases.


The decay time is the obvious one: it is the number of minutes after which a counter should be decayed, when a key is sampled and found to be older than that value. A special value of 0 means the counter is decayed every time it is scanned, which is rarely useful.
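In redis.conf this is the lfu-decay-time directive; the shipped default is 1 minute:

lfu-decay-time 1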


The counter logarithm factor changes how many hits are needed to saturate the frequency counter, which is just in the range 0-255. The higher the factor, the more accesses are needed to reach the maximum; the lower the factor, the better the resolution of the counter for low access counts, according to the following table:
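The directive is lfu-log-factor, with a default of 10:

lfu-log-factor 10

The table below is reproduced from the comments in the stock redis.conf (check your version's file for the authoritative numbers); it shows the counter value reached after a given number of hits, for different factors:

factor | 100 hits | 1000 hits | 100K hits | 1M hits | 10M hits
     0 |      104 |       255 |       255 |     255 |      255
     1 |       18 |        49 |       255 |     255 |      255
    10 |       10 |        18 |       142 |     255 |      255
   100 |        8 |        11 |        49 |     143 |      255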


Redis stores its data (keys and their values) in memory only, and uses eviction policies to free memory to write new data. Eviction policies fall into two main categories: general policies that apply to all keys, and policies that consider only keys with a Time to Live (TTL) expiration value. General policies consume less memory but require more CPU processing when Redis samples to choose which key to evict. TTL policies require you to set the TTL from your application; the extra TTL data consumes a bit more memory, but these policies require less CPU processing when Redis is determining which keys to evict.
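Setting the TTL from the application is a one-liner; for example, with redis-cli (the key name and lifetime are illustrative):

127.0.0.1:6379> SET session:42 "payload" EX 3600
OK
127.0.0.1:6379> TTL session:42
(integer) 3600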


With the noeviction policy set, Redis rejects writes with an error once it runs out of memory, but no data is ever evicted. This policy is generally appropriate only when your application removes keys itself. It is the Redis default setting, and it poses the least chance of data loss.


allkeys-lru helps keep Redis from becoming unresponsive due to insufficient memory and operates on the assumption that you no longer need the least recently used keys. When Redis begins to run out of memory, it samples a small set of keys using an algorithm, then evicts the least recently used key from that set. Because of the sampling algorithm, the key may not be the least recently used of all keys in memory.


allkeys-lfu helps keep Redis from becoming unresponsive due to insufficient memory and operates on the assumption that you no longer need the least frequently used keys. When Redis begins to run out of memory, it samples a small set of keys using an algorithm, then evicts the least frequently used key from that set. Because of the sampling algorithm, the key may not be the least frequently used of all keys in memory.


The volatile-lru policy is similar to allkeys-lru. Redis evicts the least recently used keys first, but only samples keys that have an expiration (TTL) set. This policy operates on the assumption that keys with a TTL that are also least recently used are no longer required by your application.


The volatile-lfu policy evicts the least frequently used keys first, but only samples keys that have an expiration (TTL) set. This policy operates on the assumption that keys with a TTL that are also least frequently used are no longer required by your application.


The volatile-ttl policy frees memory by evicting the keys with the shortest remaining time to live first, regardless of when the key was last used. This policy allows you to tell Redis which keys are most important by explicitly setting longer expiration values on them.
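For example, under volatile-ttl a short-lived key is evicted before a long-lived one when memory runs out (key names and TTLs are illustrative):

127.0.0.1:6379> SET cache:report "..." EX 60
OK
127.0.0.1:6379> SET session:42 "..." EX 86400
OK

Here cache:report, with only 60 seconds to live, is a more likely eviction candidate than session:42.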

