vm.overcommit_memory=1 vs. LRU. Fight!


Stuart Reynolds

May 30, 2018, 2:31:44 PM5/30/18
to Redis DB
I'd like to configure redis as an LRU cache.
Looking at:
   https://redis.io/topics/lru-cache
it recommends setting a specific maxmemory amount. However, I'd really prefer to use all the available memory and apply the eviction policy when that runs out (since the total host memory is a volatile thing in VM worlds and I don't want to reconfigure redis each time I reconfigure host hardware).

However, this seems to be in conflict with vm.overcommit_memory=1,
which is recommended here: https://redis.io/topics/admin

Why is vm.overcommit_memory=1 recommended? It doesn't seem safe and would seem to prevent applying the eviction policy when out of system memory.

- Stuart

Stuart Reynolds

May 30, 2018, 2:37:02 PM5/30/18
to redi...@googlegroups.com
I guess -- more to the point, is there a way to configure maxmemory
to, say, 90% of the available system memory?
> --
> You received this message because you are subscribed to the Google Groups
> "Redis DB" group.
> To unsubscribe from this group and stop receiving emails from it, send an
> email to redis-db+u...@googlegroups.com.
> To post to this group, send email to redi...@googlegroups.com.
> Visit this group at https://groups.google.com/group/redis-db.
> For more options, visit https://groups.google.com/d/optout.
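There's no percentage syntax for maxmemory (it takes an absolute byte count, optionally with a unit suffix like 1gb), but the figure can be computed outside Redis. A minimal sketch, assuming a Linux host (/proc/meminfo) and a locally reachable redis-cli:

```shell
# Sketch, assuming Linux: derive 90% of total RAM as an absolute byte count.
total_kb=$(awk '/^MemTotal:/ {print $2}' /proc/meminfo)   # MemTotal is in kB
maxmem=$(( total_kb * 1024 * 90 / 100 ))
echo "maxmemory ${maxmem}"
# Apply it to a running instance (no restart needed) with:
#   redis-cli CONFIG SET maxmemory "$maxmem"
#   redis-cli CONFIG SET maxmemory-policy allkeys-lru
```

Note that CONFIG SET changes are not persisted to redis.conf, so a restart reverts them unless the file is updated as well.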

Chris Stanaway

May 30, 2018, 3:16:35 PM5/30/18
to redi...@googlegroups.com

I don't think you can use Redis the way you described.  Specifically, you said you want to use it as an LRU cache (presumably by setting maxmemory-policy to either volatile-lru or allkeys-lru) without setting a maxmemory.  You stated:

> Looking at:
>    https://redis.io/topics/lru-cache
> recommends setting a specific amount max-memory.

However, there is no such "recommendation".  It is a requirement.

Setting maxmemory to zero results into no memory limits. This is the default behavior for 64 bit systems, while 32 bit systems use an implicit memory limit of 3GB.
 
When the specified amount of memory is reached, it is possible to select among different behaviors, called policies. Redis can just return errors for commands that could result in more memory being used, or it can evict some old data in order to return back to the specified limit every time new data is added.

The exact behavior Redis follows when the maxmemory limit is reached is configured using the maxmemory-policy configuration directive.

If there is no specified amount of memory (maxmemory), then it is not reached.  If it is not reached, then none of the different behaviors (policies) are applied.

How the eviction process works
It is important to understand that the eviction process works like this:
  • A client runs a new command, resulting in more data added.
  • Redis checks the memory usage, and if it is greater than the maxmemory limit, it evicts keys according to the policy.

Also, per the extensive documentation in redis.conf:

# Set a memory usage limit to the specified amount of bytes.
# When the memory limit is reached Redis will try to remove keys
# according to the eviction policy selected (see maxmemory-policy).
[...snip...]
# This option is usually useful when using Redis as an LRU or LFU cache
[...snip...]
# MAXMEMORY POLICY: how Redis will select what to remove when maxmemory
# is reached.

If you don't set a maxmemory, then the maxmemory-policy never kicks in; Redis will continue to consume all available memory, and you risk either Redis crashing with an out-of-memory error or the Linux kernel OOM killer killing the Redis process.
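Putting those pieces together, the cache setup this thread is circling around is a two-line redis.conf fragment (the 1gb cap here is purely illustrative):

```
# Illustrative values: cap Redis at 1 GB and evict least-recently-used
# keys across the whole keyspace once the cap is reached.
maxmemory 1gb
maxmemory-policy allkeys-lru
```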

On Wed, May 30, 2018 at 1:36 PM, Stuart Reynolds <stuart....@gmail.com> wrote:
I guess -- more to the point, is there a way to configure maxmemory
to, say 90% of the available system memory?

On Wed, May 30, 2018 at 11:26 AM, Stuart Reynolds
<stuart....@gmail.com> wrote:
> I'd like to configure redis as an LRU cache.
>
> Looking at:

> recommends setting a specific amount max-memory. However, really I'd prefer
> to to use all the available memory and apply the eviction policy when this
> fails (since the total host memory is a volatile thing in VM worlds and I
> don't want to configure redis each time I reconfigure host hardware).
>
> However, this seems to be in conflict with vm.overcommit_memory=1.
>
> Why is vm.overcommit_memory=1 recommended? It doesn't seem safe and would
> seem to prevent applying the eviction policy when out of system memory.
>
> - Stuart
>

Stuart Reynolds

May 30, 2018, 3:56:32 PM5/30/18
to redi...@googlegroups.com
Thanks.

Hmm... hard-coding an amount of memory isn't very devops-friendly --
it needs to be reset whenever the amount of physical RAM for the VM goes
up or (worse) goes down.

It seems like setting:
maxmemory = 90percent
might be a more friendly default whenever maxmemory-policy is defined.

The config's default is to simply run out of system memory. For me
(I think) this manifested as redis rejecting new sockets, and there
were no helpful messages in the log (at least, applying a maxmemory
limit fixed this).
:-/

hva...@gmail.com

May 30, 2018, 11:26:55 PM5/30/18
to Redis DB
I see two options:
  1. You have a configuration management system that installs Redis and creates the redis.conf config file.  Have your config management system measure the machine memory and set maxmemory according to your desires.  When you change machine types, re-deploy Redis.  (good config mgmt systems won't actually re-install an app that's already present, only update the config file and restart the app)
  2. Change the start-up script(s) to measure the memory, set maxmemory in the config file, and start Redis.  As above, the start-up script(s) can be whatever you want, since they are created by your config management system as it installs Redis.  Every time Redis is (re-)started, its maxmemory is adjusted to the desired amount for the machine it's on.
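Option 2 might look something like the wrapper below. The conf path, the MemTotal parsing, and the percentage are all assumptions to adjust, not fixed recommendations:

```shell
#!/bin/sh
# Sketch of option 2: size maxmemory from machine RAM at every start-up.
# Assumptions: Linux (/proc/meminfo), conf at /etc/redis/redis.conf, and a
# percentage you tune for your workload and persistence settings.
pct=80
total_kb=$(awk '/^MemTotal:/ {print $2}' /proc/meminfo)   # MemTotal is in kB
maxmem=$(( total_kb * 1024 * pct / 100 ))                 # bytes
# Command-line arguments override the values in redis.conf, so the file
# itself never needs rewriting:
cmd="redis-server /etc/redis/redis.conf --maxmemory ${maxmem} --maxmemory-policy allkeys-lru"
echo "$cmd"
# A real wrapper would end with:  exec $cmd
```

Because the value is recomputed on every start, resizing the VM only requires a Redis restart, not a config change.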
IMO 90% is not a good threshold.  The OS uses memory for the disk read/write cache, and starving the kernel of this cache can slow the machine down.  If your Redis config saves data to disk, it also makes use of the disk cache, and a too-small cache will slow down Redis's writes.  My advice is to leave 80% free for the kernel to use as cache.  If your Redis is replicating to a slave as well as saving snapshots to disk, then you will want to leave more free memory.  This post from a few years ago does a good job explaining why.

hva...@gmail.com

Jun 3, 2018, 3:55:29 PM6/3/18
to Redis DB
I meant to say I recommend configuring Redis to use 80% of the memory rather than 90%, leaving 20% free for the kernel to use for the disk cache (and various small OS processes).