Helm Chart for redis cluster deployment?


Balaji J

Jan 7, 2020, 7:00:21 AM
to Redis DB
Hi,

Is there a Helm chart in any public repository to deploy a Redis cluster?
Please let me know if anyone has tried this or has charts for Redis cluster.

Thanks,
...Balaji

Akash Kumar Dutta

Jan 7, 2020, 12:07:21 PM
to Redis DB
I've worked on the same, and setting up a Redis cluster in Kubernetes is in general not a good idea, to be honest.

Redis is not made for NATted environments, but with the announce-ip configuration, Redis master-slave does work. For cluster mode, the cluster meet and cluster merge operations don't support hostnames as arguments over NAT, so you can't automate your deployments. Also, clients then need to use Redis Sentinel libraries, which is again a challenge for NATted environments.

There are a lot of challenges in making it work. The master-slave model is enough in general, and the redis-ha chart in Helm is likely all one needs.
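
For reference, the announce-style configuration mentioned above looks roughly like this. It is a minimal sketch with placeholder addresses; the directive names are the standard Redis ones (4.x/5.x era) and are not taken from any particular chart:

    # Sketch only: 203.0.113.10 is a placeholder for the address that is
    # actually reachable from outside the NAT.
    cat >> /etc/redis/redis.conf <<'EOF'
    # master/replica setups: advertise where this replica can be reached
    replica-announce-ip 203.0.113.10
    replica-announce-port 6379
    # cluster mode: advertise reachable data and cluster-bus ports to other nodes
    cluster-announce-ip 203.0.113.10
    cluster-announce-port 6379
    cluster-announce-bus-port 16379
    EOF

In Kubernetes the announced address has to be something stable and reachable per pod, which is exactly where the automation becomes awkward.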

Balaji J

Jan 10, 2020, 2:25:15 AM
to Redis DB
Thanks, Akash.

Does it mean that the Redis community itself is not planning to support Kubernetes deployment of redis-cluster?
Our requirement is to scale horizontally with redis-cluster depending on the load, both read and write, so redis-cluster is what we have in mind for deployment. But you are saying that using Kubernetes for orchestration is not possible if a redis-cluster deployment is needed, right?

I thought a Redis cluster should be easily deployable in a cloud-native setup, since it is heavily used by many cloud applications.
Please clarify.

Thanks,
...Balaji

Akash Kumar Dutta

Jan 10, 2020, 4:18:56 AM
to Redis DB
Clarification: if you only want cloud applications to access it, it is fine. In our use case, we wanted to use Redis from outside the cloud as well.

AFAIK, there is no Helm chart for that. And the challenges I mentioned will still be there.

I also went ahead and automated the deployments myself, and I was successful: I was able to create a cluster with my own config. See here: https://stackoverflow.com/questions/59173074/redis-cluster-with-custom-cluster-config. This should help you.
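
The cluster-creation step that has to be automated is essentially the following. This is a rough sketch with an assumed pod label and replica count, not the exact commands from the linked answer:

    # Collect the pod IPs of the Redis nodes (the app=redis-cluster label is an
    # assumption) and let redis-cli form the cluster from them.
    NODES=$(kubectl get pods -l app=redis-cluster \
      -o jsonpath='{range .items[*]}{.status.podIP}:6379 {end}')
    redis-cli --cluster create $NODES --cluster-replicas 1 --cluster-yes

Because the command takes IP addresses rather than hostnames, any restart that changes pod IPs brings back the NAT/announce problem discussed earlier in the thread.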

Tuco

Jan 11, 2020, 1:19:10 AM
to Redis DB
I think

1) Reading from slaves is a bad idea: Redis is very fast, and reading and writing from the master is good enough. Slaves are there for failover, so that if a master fails the cluster can still be in a working state. Some Redis clients provide the ability to read from slaves, and applications do use slaves for reading. But if the connection breaks between a master and a slave (which can happen often), or in case of heavy writes or any network fluctuation, the slave will try to resync from the master, and the application will not work because it is reading from the slave. If the application is written well, doesn't use bad commands like KEYS, and doesn't use Lua that can hog the CPU and make Redis slow, then writing and reading from the master is good enough. Redis should also be monitored well to find out when the cluster should be expanded (see the memory check sketched after these points).

2) Redis should not be auto-scaled up and down based on traffic: Redis should be scaled up or down based on data, not based on traffic, and that should happen manually. Autoscaling was meant for stateless applications like web or application servers, not for databases. If anything, auto scale up/down should apply to slaves that the application reads from, like with MySQL, where you can set the application to read from slaves and write to the master, and then auto-scale the slaves. In multi-master databases like Redis Cluster, scaling up or down involves slot and data migration, which can sometimes fail, leaving slots stuck in migrating/importing state and the cluster in an inconsistent state (a manual scale-up is sketched after these points). Further, autoscaling in Kubernetes has to be driven by some metric, most of the time CPU. When I checked a long time ago, scaling based on memory was not offered, simply because the memory reported by the OS != the memory used by Redis. And if you are trying to use CPU to scale Redis up or down, you are already on the wrong path: if your Redis is using a lot of CPU, you are either using it the wrong way (maybe you are using Lua) or you need to expand your cluster manually.

3) Redis Cluster on Kubernetes: We are using Redis Cluster on Kubernetes in our on-premise cloud. We used this wonderful link as a starting point, with slight modifications. It is not a Helm chart, but it does work well.
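
The memory check mentioned in point 1 can be as simple as the following. The host variable is a placeholder; this is just one hand-rolled way to watch for the point where the cluster needs expanding:

    # Print current vs. configured memory for one master (placeholder host).
    redis-cli -h "$MASTER_HOST" -p 6379 INFO memory \
      | grep -E 'used_memory_human|maxmemory_human'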
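
And the manual scale-up referred to in point 2 looks roughly like this, with placeholder addresses. The reshard step is the one that can be interrupted and leave slots stuck in importing/migrating state:

    # Add a new, empty master to an existing cluster, then move a share of the
    # hash slots onto it.
    redis-cli --cluster add-node 10.0.0.9:6379 10.0.0.5:6379
    redis-cli --cluster reshard 10.0.0.5:6379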





Balaji J

Jan 27, 2020, 9:04:52 AM
to Redis DB
Thanks for the clarifications, Tuco.

Did you implement scale-up as well using that open-source setup you mentioned? If so, how is the memory threshold monitored for the scale-up? Is it done in some Kubernetes-native way, or with your own scripts that trigger the scale-up?

Also, how is the single-point-of-failure issue solved in Kubernetes deployments of Redis Cluster? For example, in a 3-master redis-cluster, if two of the masters get scheduled on a single worker node, then the failure of that worker node will bring the Redis cluster down due to quorum failure. Can this be solved?

Please clarify.