Hi,
I’m trying to understand a (seemingly) peculiar issue I’m having where our Redis used memory seems to be about 2x the size of our actual key space.
We are running Redis 3.2.9 in a master/slave configuration. Our Redis memory usage varies between ~20GB and ~28GB depending on the day of the week and time of day. The strange part is that restarting a Redis instance reduces this to about half of whatever the used memory was at the time of the restart, and inspecting the .rdb dump from a BGSAVE indicates that the actual key space is again only about 50% of our total Redis used memory (which makes sense, since a restart is essentially a SYNC of a BGSAVE from the master).
Typically, within 12-24 hours the restarted instance is back at the same used-memory level as its peers.
Our fragmentation ratio remains low (1.03 at the time of this email).
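For reference, those numbers come from INFO memory. Here is roughly how we pull them out; the INFO output below is a hardcoded illustrative sample, not a capture from our actual servers:

```python
# Sketch: parse the "key:value" lines that redis-cli INFO memory returns.
# SAMPLE_INFO is illustrative only (28GB used, 1.03 fragmentation ratio).
SAMPLE_INFO = """\
used_memory:30064771072
used_memory_human:28.00G
used_memory_rss:30965473280
mem_fragmentation_ratio:1.03
"""

def parse_info(text):
    """Parse 'key:value' lines as returned by redis-cli INFO."""
    fields = {}
    for line in text.splitlines():
        if ":" in line:
            key, _, value = line.partition(":")
            fields[key] = value
    return fields

info = parse_info(SAMPLE_INFO)
ratio = float(info["mem_fragmentation_ratio"])
used_gb = int(info["used_memory"]) / 2**30
print(f"used_memory = {used_gb:.1f} GB, fragmentation ratio {ratio}")
# → used_memory = 28.0 GB, fragmentation ratio 1.03
```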
We do write temporary keys to slaves as part of a pipeline process, but we take great care to issue deletes at the end. I have also observed this behavior on a slave that has no clients using it and is only replicating from the master.
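To make the temp-key pattern concrete, here is a sketch of the write-then-delete discipline we follow, run against an in-memory dict standing in for a real Redis client (the class, function, and key prefix are all hypothetical, not our actual code):

```python
# Sketch of our temp-key lifecycle; FakeRedis is a minimal stand-in
# for a real client so the example is self-contained.
class FakeRedis:
    """Minimal stand-in for a Redis client: set, delete, key listing."""
    def __init__(self):
        self.store = {}

    def set(self, key, value):
        self.store[key] = value

    def delete(self, *keys):
        for key in keys:
            self.store.pop(key, None)

    def keys(self, prefix=""):
        return [k for k in self.store if k.startswith(prefix)]

def run_pipeline_step(client, items):
    """Write temporary keys, do intermediate work, then always delete them."""
    temp_keys = [f"temp:{i}" for i in range(len(items))]
    try:
        for key, value in zip(temp_keys, items):
            client.set(key, value)
        # ... intermediate processing would happen here ...
    finally:
        # Deletes are issued even if processing raises, so no temp
        # keys should be left behind on the slave.
        client.delete(*temp_keys)

client = FakeRedis()
run_pipeline_step(client, ["a", "b", "c"])
print(client.keys("temp:"))  # → []
```

The try/finally is the point: even a failed pipeline run should leave zero temporary keys behind, which is why I don't believe leaked temp keys explain the memory growth.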
I understand the mechanics of Redis re-using memory the OS has already allocated to it, and that an .rdb dump does not exactly reflect Redis used memory (it should be within 10% though, right?), but it is strange to me that Redis should be using such a large amount of memory (again, seemingly 2x the actual size of the "live" key space).
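Stated as a quick sanity check (the 10% tolerance is my understanding, not something I've seen documented, and the byte counts below are illustrative):

```python
# Sketch: is the .rdb dump size within some tolerance of used_memory?
# My expectation is ~10%; what we actually see is closer to 50% off.
def rdb_within_expected(used_memory_bytes, rdb_bytes, tolerance=0.10):
    """True if the rdb size is within `tolerance` of used memory."""
    return abs(used_memory_bytes - rdb_bytes) / used_memory_bytes <= tolerance

GB = 2**30
used = 28 * GB  # illustrative: used_memory before a restart
rdb = 14 * GB   # illustrative: the .rdb dump is about half that
print(rdb_within_expected(used, rdb))  # → False
```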
I do not think this is a problem with Redis itself; it is probably something we are doing wrong. I am wondering if anyone has suggestions or information to help me determine whether this is a real problem or expected behavior, and how I might go about figuring it out.
Tyler Sullens
DevOps Engineer
NPR Digital Media