1:C 16 Feb 2021 19:21:19.896 # oO0OoO0OoO0Oo Redis is starting oO0OoO0OoO0Oo
1:M 16 Feb 2021 19:17:39.128 * Background saving terminated with success
49974:C 16 Feb 2021 19:17:39.033 * RDB: 0 MB of memory used by copy-on-write
49974:C 16 Feb 2021 19:17:39.032 * DB saved on disk
1:M 16 Feb 2021 19:17:39.028 * Background saving started by pid 49974
1:M 16 Feb 2021 19:17:39.027 * 1 changes in 3600 seconds. Saving...
1:M 16 Feb 2021 18:17:38.123 * Background saving terminated with success
47896:C 16 Feb 2021 18:17:38.027 * RDB: 0 MB of memory used by copy-on-write
47896:C 16 Feb 2021 18:17:38.027 * DB saved on disk
1:M 16 Feb 2021 18:17:38.023 * Background saving started by pid 47896
1:M 16 Feb 2021 18:17:38.022 * 1 changes in 3600 seconds. Saving...
1:M 16 Feb 2021 17:17:37.105 * Background saving terminated with success
Basically the "background saving" logs keeps repeating thought time with "Redis is starting" logs in between at what appears to be random intervals (coinciding in time with the connection error logs).
After checking the pod metrics, we ruled out OOM, since the pod's memory use is very low (it's a test environment with little activity). Here are the pod memory metrics since Redis was added to the Kubernetes deployment:
The pod has a limit of 256 MiB, but memory use is a constant 2-3 MiB. We also checked the Kubernetes logs and pod status, but they show 0 restarts and no state change since the pod was deployed.
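For reference, this is roughly how we checked the memory use and restart count (the pod name and namespace below are placeholders):

# memory/CPU use reported by metrics-server
kubectl top pod redis-xxxxx -n test

# restart count and last state of the Redis container
kubectl get pod redis-xxxxx -n test -o jsonpath='{.status.containerStatuses[0].restartCount}'
kubectl describe pod redis-xxxxx -n test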
The image we're using is a vanilla Alpine Redis 6.0.10 image without any config changes.
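In case it matters, I believe the hourly "1 changes in 3600 seconds. Saving..." lines simply match the built-in RDB snapshot defaults when no config file is supplied. This is a sketch of how the effective setting could be confirmed from inside the pod (pod name is a placeholder):

kubectl exec redis-xxxxx -n test -- redis-cli config get save
# with the built-in defaults this should return something like "3600 1 300 100 60 10000"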
Does anyone have an idea what the issue could be?