Fragmentation is a measure of how much of the allocated memory is "waste", i.e. not actually in use. With a small database (small allocated memory), would a relatively large fragmentation ratio be a problem? Not really. It does indicate a larger percentage of waste, but if the total is small, the waste is small too. Generally speaking, a high fragmentation ratio on a small database is not a big issue.
The redis.io documentation page on memory optimization has a section discussing fragmentation, which includes (in the last bullet point) this description:
- Because of all this, the fragmentation ratio is not reliable when you
have a memory usage that at peak is much larger than the currently used memory.
The fragmentation is calculated as the physical memory actually used
(the RSS value) divided by the amount of memory currently in use
(as the sum of all the allocations performed by Redis).
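To make the arithmetic concrete, here is a minimal sketch of the ratio as Redis reports it in INFO memory (mem_fragmentation_ratio is RSS divided by used_memory); the byte counts below are invented for illustration:

```python
# mem_fragmentation_ratio = used_memory_rss / used_memory.
# Sample values are illustrative, not from a real server.
used_memory = 1_200_000      # sum of allocations Redis itself performed, bytes
used_memory_rss = 1_500_000  # resident set size as seen by the OS, bytes

ratio = used_memory_rss / used_memory
print(round(ratio, 2))  # 1.25
```

A ratio modestly above 1.0 is normal; well above 1 suggests fragmentation (or a past peak, as the documentation warns), while below 1 suggests swapping.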
Can a slave have a higher peak memory usage than a master? Yes, it's possible: for example, if commands sent to the slaves read large amounts of data from the database (large lists, hashes, or long string values) that must be buffered for transmission to the clients, and no such reads are performed on the master. The first thing to do is check the peak usage and RSS of the master and the slaves. If the slaves' peak consumption is higher, as the documentation page mentions, that would explain the different ratios, and you can then decide whether the peak usage is a symptom of a problem or just a normal consequence of the workload the slaves perform.
But without any other information beyond the fact that these are small databases, I don't think this is actually a problem.
-Greg