Clustering in Redis

Amit Khosla

Jan 3, 2019, 2:29:10 AM
to Redis DB
Hi,

We are currently running a three-node Redis setup with failover managed by Sentinel. In this setup the master carries all of the load, and the slaves are used only for replication. I have also read a bit about Redis Cluster, where the data can be sharded, but to get HA there we would again need slaves for each individual node.

Is there a way for the same three nodes to provide sharding as well as HA? That is, data on node 1 is replicated on nodes 2 and 3, data on node 2 is replicated on nodes 1 and 3, and so on. If a node fails, one of the remaining nodes would act as master for its data until a replacement joins; and when the new node joins, we could either repartition or return to the original state.

Thanks & Regards
Amit

hva...@gmail.com

Jan 3, 2019, 5:25:33 AM
to Redis DB
The features you're asking for amount to three Redis server instances configured so that each instance is a master that replicates to two slaves, and is also a slave of two masters.

This is not a design that the open source Redis server supports. A Redis server instance can replicate to two (or more) slaves, but a slave can receive replication from only one master.

Further, I believe you don't actually want that because it doesn't actually reduce the load on the Redis server processes.  To explain:  You propose sharding keys among three master Redis server processes, call them A, B, and C.  Each would receive 1/3rd of the writes from Redis clients.  But process B would receive its own writes and, being a slave of A and C, would also receive the writes replicated from A, and from C.  In the same way, C receives its own writes, and also the writes replicated from A and B.  And, of course, A receives its own writes and also the writes from B and C.  On top of that, all three Redis server instances must hold in RAM the keys from all three servers.  This design doesn't save on writes and doesn't save on RAM consumption.  At first it sounds good, but it doesn't hold up under close inspection.
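
To put hypothetical numbers on that (the write rate here is made up purely for illustration): suppose clients issue 30,000 writes/sec in total, spread evenly across the three masters. Then each instance, B for example, handles

      10,000 writes/sec (its own shard)
    + 10,000 writes/sec (replicated from A)
    + 10,000 writes/sec (replicated from C)
    = 30,000 writes/sec

which is exactly the write load of a single unsharded master, and likewise it holds the full keyspace in RAM.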

There is an architecture that will bring you very close to what you're asking for on three servers.  That's a Redis Cluster where each server has two Redis server processes instead of just one.  One of the processes is a master, and the other is a slave.  The slave process takes replication from the master on a different server rather than the same server.  You need to have enough CPU and RAM for the two processes, but this is a configuration that gets you sharding and redundancy on only three servers, and it's supported by Redis Cluster.
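
As a sketch only (the addresses and ports below are made up): with Redis 5 this layout can be created in a single command (older releases shipped the same functionality as the redis-trib.rb script), and the create step tries to avoid placing a replica on the same host as its master:

    # one master (port 7000) and one slave (port 7001) process per server
    redis-cli --cluster create \
      10.0.0.1:7000 10.0.0.2:7000 10.0.0.3:7000 \
      10.0.0.1:7001 10.0.0.2:7001 10.0.0.3:7001 \
      --cluster-replicas 1

Each of the six redis.conf files needs cluster-enabled yes plus its own port and data directory.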

Laurent MINOST

Jan 30, 2019, 4:45:18 PM
to Redis DB
Hi,

I found this thread through a Google search, since I'm currently testing a Redis Cluster with the very architecture you described: 3 masters / 3 slaves, with one master and one slave on each physical server:

Server1 : MasterA / SlaveC
Server2 : MasterB / SlaveA
Server3 : MasterC / SlaveB

Everything works well for the failover part: if I stop MasterA on Server1, then SlaveA, which is hosted on Server2, becomes a master and the cluster keeps working as expected... BUT the problem, IMO, is when the old MasterA on Server1 comes back, because in that case the old MasterA rejoins as a slave, and the whole architecture is defeated by this behavior, since I now have:

Server 1 : SlaveA / SlaveC
Server 2 : MasterB / MasterA
Server 3 : MasterC / SlaveB

The cluster is now vulnerable to any problem on Server2: if that server goes down, we will have only MasterC available, and the Redis Cluster will no longer work properly, will it?

Is it possible to define either of the following?
- an option (like auto_failback in Heartbeat, for those of you who know it), so that when the old MasterA rejoins the cluster it becomes MasterA again, and the current MasterA goes back to being SlaveA;
- a parameter in redis.conf saying that this Redis instance should always act as a master or as a slave (something like weights), so that it is taken into consideration after a switchover, when the old master joins back.

Thanks for your answers !

Regards,

Laurent MINOST

Amit Khosla

Jan 30, 2019, 9:11:50 PM
to redi...@googlegroups.com
Thanks for the answers!

I am more focused on reads, and reading from a slave is not guaranteed to return up-to-date data.
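
(As an aside: in Redis Cluster, replica reads have to be enabled per connection with the READONLY command, and they are indeed only eventually consistent. A sketch, with a made-up replica address and key:

    $ redis-cli -h 10.0.0.2 -p 7001   # connect to a replica
    10.0.0.2:7001> READONLY           # let this connection serve reads for the replica's slots
    OK
    10.0.0.2:7001> GET user:42        # may return slightly stale data during replication lag

Without READONLY, the replica redirects the client to the master.)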

About the problem mentioned above, you are right: it may end up with a single node acting as master for everything.

Let me rethink this some more.

--
Thanks & Regards
Amit Khosla
Ph: 9911797132

Laurent MINOST

Jan 31, 2019, 2:20:15 AM
to Redis DB
Hi,

Thanks for your answer, but will the cluster still be available?

Regards,

Laurent

hva...@gmail.com

Jan 31, 2019, 10:40:27 AM
to Redis DB
Yes. Redis Cluster's automation was not written to induce a failover when a master instance is working properly, only when a master instance fails. This means it won't fail back to the original configuration when the failed instance returns from the dead and rejoins the cluster. That kind of event (failing over when there is no error) is a task that human beings must decide to perform, at a properly scheduled time during a maintenance window, with appropriate notice to the impacted users, and so on.

Speaking for myself, I agree with the Redis developers that triggering a failover is not a decision software should make while the cluster is functioning properly, i.e. when all master and slave instances are healthy and serving production traffic. The system administrators/operators should make that decision.
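
If you do want to fail back by hand during that maintenance window, Redis Cluster supports a manual, coordinated failover: run CLUSTER FAILOVER against the replica you want promoted. The address below is made up:

    # old MasterA, now running as a slave again on Server1 (hypothetical address)
    $ redis-cli -h 10.0.0.1 -p 7000 CLUSTER FAILOVER
    OK
    # the replica catches up with its master, then the two swap roles without
    # losing writes; the ex-master on Server2 becomes SlaveA again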

So I see two available solutions here:
  • Don't install Redis Cluster on the minimum possible number of machines (3); instead, install it with one machine per instance (in this case 6). Then a single machine failure cannot take down multiple masters.
  • Set up monitoring to detect the risky situation where a single machine hosts more than one master instance, and raise an alert when it happens (a rough sketch of such a check follows this list). The admins are alerted right away and can take action to prevent trouble.
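
A minimal sketch of that check, assuming standard shell tools and a cluster node reachable at the made-up address 10.0.0.1:7000; it counts masters per host in the output of CLUSTER NODES:

    #!/bin/sh
    # Alert if any host is running more than one master instance.
    redis-cli -h 10.0.0.1 -p 7000 CLUSTER NODES \
      | awk '$3 ~ /master/ { split($2, a, ":"); print a[1] }' \
      | sort | uniq -c \
      | awk '$1 > 1 { print "ALERT: " $1 " masters on host " $2 }'

Run it from cron and wire the output into whatever alerting you already use.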
