Redis layout - Pls Help


sherin...@shipwire.com

Jan 4, 2016, 4:04:07 PM
to Redis DB
I am planning to have 2 Redis servers in one data center and the other 2 in another datacenter. One should be master. How many sentinels do u suggest for this structure? Also, is it gud to have one master and one slave in one datacenter and the remaining 2 slaves in the other datacenter, or do we need to have separate masters in the 2 datacenters? I need the master from one datacenter to replicate to the slaves in the other datacenter.

sherin...@shipwire.com

Jan 5, 2016, 3:27:46 AM
to Redis DB
Can anyone please help!!

Jason Sia

Jan 5, 2016, 10:03:34 PM
to redi...@googlegroups.com
Hi,

You can check out the documentation at http://redis.io/topics/sentinel under the topic Example Sentinel Deployments.  Normally, it is advised to also have Sentinels on servers other than your Redis servers.  You can see different scenarios in the documentation, with the pros and the cons.  As a personal opinion, you can have 2 slaves: one slave in the same datacenter and the other slave in the other datacenter for geographic redundancy.

Thanks,
Jason


Sherin Sunny

Jan 6, 2016, 12:18:10 AM
to redi...@googlegroups.com
Suppose datacenter1 has c1 (master) and c2 (slave), and datacenter2 has c3 and c4 as slaves.

One solution I am thinking about is changing the slave-priority of c3 and c4 to 0 so that they won't become master even if a split brain happens. Is that a workable solution, or do you have a separate solution that can be implemented?
Another way is to have c1 as master with c2 as its slave, and c3 as a slave of c1. One Redis Sentinel monitors c1 and c2, the other Sentinel monitors c3 and c4. But in this case the master needs to be rotated between c3 and c4 depending on the master change between c1 and c2, because the master from c1/c2 needs to write only to the master in c3/c4. Of these 2 solutions, which one do you find easier to implement?


--
Regards
Sherin Sunny
Devops

The Baldguy

Jan 6, 2016, 11:13:03 AM
to Redis DB


On Monday, January 4, 2016 at 3:04:07 PM UTC-6, sherin...@shipwire.com wrote:
I am planning to have 2 Redis servers in one data center and the other 2 in another datacenter. One should be master. How many sentinels do u suggest for this structure? Also, is it gud to have one master and one slave in one datacenter and the remaining 2 slaves in the other datacenter, or do we need to have separate masters in the 2 datacenters? I need the master from one datacenter to replicate to the slaves in the other datacenter.


Sherin, there are a few things to note here.

1. The number of Sentinels is not dependent on slave count, but rather on your requirements for failover-trigger sensitivity. As the documentation linked earlier will tell you, you should run at least three Sentinels, and always an odd number. Quorum for the managed pod should then be half the number of Sentinels plus one (see the example after this list).

2. You need to analyze your business requirements for slaves across datacenters. Replication over WAN will be significantly slower than replication within the same DC due to latency/speed-of-light issues. What problem is running a pair of slaves in a second DC intended to solve? Is it for DR? Is it for local read speed? Is it for something else? That will inform your decision as to how to lay out the slaves.

3. Sentinel across DCs is also a tricky thing. Ideally you'd want each DC to have its own Sentinels to manage the pod, but in this case you are looking at an even number. As such you will run into quorum issues. For example, with 6 Sentinels split evenly across the DCs and a quorum of four, you cannot have automatic failover if you lose the master DC, as you will not be able to reach quorum.

4. Failover across DCs is not a simple matter of master promotion. It is unlikely your Sentinel IPs will be preserved in a different datacenter. Thus if you lose the master DC the clients will need to be reconfigured to use the failover-DC's sentinels.

5. There is a nasty intersection of numbers three and four.

6. "good", not "gud"

7. Spell out "you"

8. You cannot have multiple masters for the same data.
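
To make point 1 concrete, here is a rough sketch of just the quorum setting; the pod name and master IP below are placeholders, not a recommendation of actual values:

# with 3 Sentinels, quorum = 2
sentinel monitor masterone 192.0.2.10 6379 2
# with 5 Sentinels, quorum = 3
sentinel monitor masterone 192.0.2.10 6379 3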

With the above in mind, and not knowing your actual business requirements and such, here is my suggestion.

DataCenter A:
  1 master
  1 slave
  3 individual Sentinels

DataCenter B:
  2 slaves with slave-priority set to 0
  3 individual Sentinels - without pods connected

Basically DC-B is a spinning spare. It has data replication, but cannot be promoted. The Sentinels in DC-B do NOT have anything to do and are not configured with the pod until you lose DC-A. When you lose DC-A you:

 1. Manually pick a slave in DC-B to promote
 2. Promote the selected slave to master
 3. Add the master into the DC-B Sentinels 
 4. Update the clients 
      a) Remove the old IP Sentinel address(es)
      b) add the IP address(es) of the "new" Sentinels

Note: all of these are manual steps; a rough sketch of what they might look like follows.
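
For illustration only, here is what that standing configuration and those manual steps might look like. Every IP, port, and the pod name are placeholders, so adjust for your environment:

# DC-A slave, redis.conf (a normal, promotable slave of the DC-A master 10.1.0.10):
slaveof 10.1.0.10 6379

# DC-B slaves, redis.conf (replicating, but never promotable):
slaveof 10.1.0.10 6379
slave-priority 0

# DC-A Sentinels, sentinel.conf (quorum 2 of the 3 DC-A Sentinels):
sentinel monitor masterone 10.1.0.10 6379 2

# DC-B Sentinels: no "sentinel monitor" line at all until DC-A is lost.

# Manual failover after losing DC-A, assuming you picked 10.2.0.10 as the new master:
redis-cli -h 10.2.0.10 -p 6379 config set slave-priority 100
redis-cli -h 10.2.0.10 -p 6379 slaveof no one
redis-cli -h 10.2.0.11 -p 6379 slaveof 10.2.0.10 6379
# then, on each DC-B Sentinel:
redis-cli -h 10.2.0.21 -p 26379 sentinel monitor masterone 10.2.0.10 6379 2
# and finally repoint the clients at the DC-B Sentinels.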

How to handle DC-A coming back online will be determined entirely by your business requirements for multiple DCs.


Cheers,
Bill


sherin...@shipwire.com

Jan 6, 2016, 12:03:32 PM
to redi...@googlegroups.com
Thanks Bill.

Here is my plan.

Let's say DC1 has 3 Redis Sentinels, C1 as master, and C2 as slave. DC2 has 3 Redis Sentinels, with C3 and C4 as slaves. The app always connects to the C1 master, so C3 and C4 need to replicate from C1 as a backup in DC2.

I will set the slave-priority of C3 and C4 to 0, so they never become master while the app is connecting from DC1. When we need to switch to DC2, during maintenance or when DC1 goes completely down, a manual failover will be triggered and it will invoke the reconfig script in Sentinel to change the slave-priority of C3 and C4 and make one of them master.

So I am planning to use 6 Sentinels and a quorum value of 3. So even if one datacenter completely goes down, there will still be a quorum of 3 to manually promote a DC2 server to master. Is this fine? The app in DC1 will be connecting to HAProxy. HAProxy will be in front of the Redis servers to make sure it writes only to the master. Please tell me if this is a good method. We use C3 and C4 in datacenter 2 as a DR to replicate from the datacenter 1 master. We need a master in DC2 only when the entire DC1 goes down or we do maintenance.


sherin...@shipwire.com

Jan 6, 2016, 12:40:39 PM
to redi...@googlegroups.com
Also could you please explain your 4th step:


>> Update the clients 
    >>  a) Remove the old IP Sentinel address(es)
    >>  b) add the IP address(es) of the "new" Sentinels


Why do I need to update the Sentinel IPs, since initially all Sentinels are monitoring the primary datacenter master C1? Sentinel will take care of this automatically, right?

sherin...@shipwire.com

Jan 6, 2016, 4:36:21 PM
to Redis DB
Also, please explain what the quorum should be for datacenter 1, and if the Sentinels in datacenter 2 are not doing anything, what do they need to monitor in sentinel.conf?
Why can't I use 6 Sentinels and a quorum of 3, so that even if all the servers in datacenter 1 go down there is still a quorum majority to elect a master in datacenter 2?

Please explain your thoughts, as I am not an experienced Redis guy.

sherin...@shipwire.com

Jan 6, 2016, 9:00:18 PM
to Redis DB
Also I have noticed one more thing.

When the Redis master is halted or shut down and then comes back, it comes up as a master, due to which all the other servers get reconfigured to this master and data gets lost. How can I prevent this situation, so that once the server comes back after a reboot it becomes a slave and not a master? Please help.



The Baldguy

Jan 7, 2016, 10:50:53 AM
to Redis DB


On Wednesday, January 6, 2016 at 11:03:32 AM UTC-6, sherin...@shipwire.com wrote:
Thanks Bill.

Here is my plan.

Let's say DC1 has 3 Redis Sentinels, C1 as master, and C2 as slave. DC2 has 3 Redis Sentinels, with C3 and C4 as slaves. The app always connects to the C1 master, so C3 and C4 need to replicate from C1 as a backup in DC2.

I will set the slave-priority of C3 and C4 to 0, so they never become master while the app is connecting from DC1. When we need to switch to DC2, during maintenance or when DC1 goes completely down, a manual failover will be triggered and it will invoke the reconfig script in Sentinel to change the slave-priority of C3 and C4 and make one of them master.

Stop here. Detail this out. What Sentinel will trigger this? Your configured/active sentinels are in DC 1 and they just went away. How will you select a new master? As you will have to manually select one you need to document how to decide it now (not for us of course, that is entirely your call but get it done *before* the fecal matter hits the rotating air mover). It will also demonstrate for you (and your team) that this is not something Sentinel/Redis can automate for you.
 

So I am planning to use 6 Sentinels and a quorum value of 3. So even if one datacenter completely goes down, there will still be a quorum of 3 to manually promote a DC2 server to master. Is this fine?

No, this is bad. With this quorum, if (I mean when) there is a network issue between the Sentinels in DC2 and the master in DC1, they can promote Slave2 *while the original master is still a master*. This is known as split-brain operations (SBO) and is quite undesirable. Your DC2 Sentinels should *not* be configured at all until you fully lose DC1.
 
The app in DC1 will be connecting to HAProxy. HAProxy will be in front of the Redis servers to make sure it writes only to the master.

Ah, new information. Before going that route I'd recommend checking to see if your client library properly supports Sentinel discovery. If it does, use that. It is quicker and less complex. Note that if you do need the HAProxy you will need 4 for HA and DR. You will need two in each DC.
 
Please tell me if this is a good method. We use C3 and C4 in datacenter 2 as a DR to replicate from the datacenter 1 master. We need a master in DC2 only when the entire DC1 goes down or we do maintenance.

You can do it if you do *not* use the Sentinels in DC2 until DC1 fails.

Cheers,
Bill 

The Baldguy

Jan 7, 2016, 10:58:57 AM
to Redis DB


On Wednesday, January 6, 2016 at 11:40:39 AM UTC-6, sherin...@shipwire.com wrote:
Also could you please explain your 4th step:


>> Update the clients 
    >>  a) Remove the old IP Sentinel address(es)
    >>  b) add the IP address(es) of the "new" Sentinels


Why do I need to update the Sentinel IPs, since initially all Sentinels are monitoring the primary datacenter master C1? Sentinel will take care of this automatically, right?

  Nope. First, you will not be monitoring anything with the secondary Sentinels until you lose DC1. So they can't change anything. Further consider what happens under your proposed configuration when DC1 goes down. What will the DC2 sentinels do? They will not have a valid slave to promote because the only slaves they can talk to are non-promotable. It only gets worse from there.

  This item was assuming you were using Sentinel support, and still holds if you do. With a Sentinel supporting client library you don't configure the clients with the Master IP, but the pod name and the Sentinels to talk to. Since you are running two different sets of Sentinels you will need two different configurations: DC1 is up and DC 1 is down.

However, if running HAProxy in front, you will still need to reconfigure the clients (or DNS) to point to the HAProxies in DC2 instead of DC1. Unless you've got some wicked smart network admins (likely using BGP) you are unlikely to have trans-DC portable IP addresses for your HAProxy nodes. Thus a full DC failover will need the app to be configured for operations in DC2 instead of DC1.

Cheers,
Bill

The Baldguy

Jan 7, 2016, 11:11:14 AM
to Redis DB


On Wednesday, January 6, 2016 at 3:36:21 PM UTC-6, sherin...@shipwire.com wrote:
Also, please explain what the quorum should be for datacenter 1, and if the Sentinels in datacenter 2 are not doing anything, what do they need to monitor in sentinel.conf?
Why can't I use 6 Sentinels and a quorum of 3, so that even if all the servers in datacenter 1 go down there is still a quorum majority to elect a master in datacenter 2?

This is why:
Also I have noticed one more thing.
When the Redis master is halted or shut down and then comes back, it comes up as a master, due to which all the other servers get reconfigured to this master and data gets lost. How can I prevent this situation, so that once the server comes back after a reboot it becomes a slave and not a master? Please help.

  The master *will* come back up as a master and Sentinel will then reconfigure it as a slave to the newly promoted master. The only way to avoid this is to have the init script check Sentinel and reconfigure if it isn't the master *before* starting, or to always reconfigure it as a slave on startup. This is not a good idea because it can mean not having any masters, and waiting on Sentinel before starting. You are only seeing this behavior because you have SBO going on: by having a quorum of 50% of an even number you can get three Sentinels thinking A is master while 3 other Sentinels think B is master.
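
  A very rough sketch of the first option, assuming a Sentinel reachable at 10.1.0.21 and a pod named masterone (both placeholders; this is untested, just the idea):

# ask a Sentinel who the current master is (the reply is the IP, then the port)
MASTER_IP=$(redis-cli -h 10.1.0.21 -p 26379 sentinel get-master-addr-by-name masterone | head -1)
MY_IP=10.1.0.10   # this node's own address
if [ -n "$MASTER_IP" ] && [ "$MASTER_IP" != "$MY_IP" ]; then
    # someone else is master now, so come back as a slave of it
    exec redis-server /etc/redis/redis.conf --slaveof "$MASTER_IP" 6379
else
    exec redis-server /etc/redis/redis.conf
fi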
 
  Sentinel, when configured with reasonable quorums, normally prevents this by the configuration epoch. But when you get split-brain you can have the same configuration epoch for two different configurations.

  Sentinel was designed and written taking into account the various variables and failure modes. It knows what is needed to Do The Right Thing the vast majority of the time. Basically you need to stop fighting it and trying to do things the way you *think* they should be done, and let Sentinel do its job the way it was designed to - or write your own version which will do things your way (which will still break your setup). The Sentinel docs are quite clear, so follow them.

  There is no automated common "right way" to do cross-DC failovers of an entire DC so Sentinel doesn't have that (plus Redis isn't really designed for multi-DC operations). As a result you have to figure out your specific requirements, tolerances, and procedures and implement them. The simplest route is the one I listed where the second DC is receiving replication *only*, but the loss of DC1 is handled manually.

Cheers,
Bill


sherin...@shipwire.com

Jan 7, 2016, 11:59:56 AM
to redi...@googlegroups.com

>>The simplest route is the one I listed where the second DC is receiving replication *only*, but the loss of DC1 is handled manually.

What should the quorum value be in this setup for the Sentinels in DC1?
What should the Sentinel conf in DC2 look like?
Do you mean DC1 has one master and one slave, with 3 Sentinels monitoring only the DC1 master?

Could you please send me the settings for this kind of setup?






sherin...@shipwire.com

Jan 7, 2016, 12:51:21 PM
to Redis DB
>>Stop here. Detail this out. What Sentinel will trigger this? Your configured/active sentinels are in DC 1 and they just went away. How will you select a new master? As you will have to manually select one you need to document how to decide it now (not for us of course, that is entirely your call but get it done *before* the fecal matter hits the rotating air mover). It will also demonstrate for you (and your team) that this is not something Sentinel/Redis can automate for you.


During maintenance, we first switch DNS to DC2 and trigger a failover on the current master in DC1, so that the reconfig script can check the DNS and reconfigure the new master in DC2. After that we shut down all the servers in DC1.


>>No, this is bad. With this quorum, if (I mean when) there is a network issue between the Sentinels in DC2 and the master in DC1, they can promote Slave2 *while the original master is still a master*. This is known as split-brain operations (SBO) and is quite undesirable. Your DC2 Sentinels should *not* be configured at all until you fully lose DC1.

At any given time, the slave-priority of the Redis servers in one datacenter is always 0. So how can one of them get promoted to master during a split brain?

>> Do not use the Sentinels in DC2. So what should the Sentinel conf in DC2 look like? What should be in the sentinel.conf for DC2 if it is not monitoring anything?

sentinel monitor masterone datacenter1master 6379 3
sentinel down-after-milliseconds masterone 5000

The above is my current configuration in the Sentinel conf in DC2, as all Sentinels are monitoring the master in datacenter 1. If it is not monitoring anything, then it's not possible for me to switch the master to the Redis servers in the second datacenter when we manually do a DNS switch to the second datacenter.

So please help.

thanks
sunny

sherin...@shipwire.com

Jan 7, 2016, 1:51:00 PM
to Redis DB
>>With a Sentinel supporting client library you don't configure the clients with the Master IP, but the pod name and the Sentinels to talk to. Since you are running two different sets of Sentinels you will need two different configurations: DC1 is up and DC 1 is down.

Does pod name mean the master name? My sentinel conf looks like the one below:

sentinel monitor masterone <datacenter1_masterip> 6379 3
sentinel down-after-milliseconds masterone 5000
daemonize yes
logfile "/etc/redis-sentinel/sentinel.log"
sentinel failover-timeout masterone 10000
sentinel client-reconfig-script masterone /tmp/master-redis1.sh

Is the above conf correct? You mentioned adding 2 different confs. So the conf for when DC1 is down should be on the DC2 Sentinels? What should it look like? Which master should it monitor?


Thanks
Sunny

sherin...@shipwire.com

Jan 7, 2016, 2:36:09 PM
to Redis DB
>>>However, if running HAProxy in front, you will still need to reconfigure the clients (or DNS) to point to the HAProxies in DC2 instead of DC1. Unless you've got some wicked smart network admins (likely using BGP) you are unlikely to have trans-DC portable IP addresses for your HAProxy nodes. Thus a full DC failover will need the app to be configured for operations in DC2 instead of DC1.

I am using HAProxy and the app will be connecting to this HAProxy. So what should the sentinel.conf in DC2 be? Does it look the same as below:

sentinel monitor masterone <dc1serverip> 6379 3
sentinel down-after-milliseconds masterone 5000

This is something that confuses me. How can I add 2 different configurations in one Sentinel instance?

Thanks
Sunny



sherin...@shipwire.com

Jan 7, 2016, 2:46:21 PM
to Redis DB
Hi Bill,

See the below link. HAProxy 1.5+ comes with a new built-in TCP health check feature for Redis to perform an automatic failover. To avoid having to change the Redis IP/port in the front-end client application after each failover, set up HAProxy with the TCP health check to test whether a Redis instance is a master or a slave.
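
The check being described is roughly the following; this is only a sketch, the backend name, server names, and IPs are placeholders, and the exact directives may vary between HAProxy versions:

backend redis_master
    mode tcp
    option tcp-check
    tcp-check send PING\r\n
    tcp-check expect string +PONG
    tcp-check send info\ replication\r\n
    tcp-check expect string role:master
    tcp-check send QUIT\r\n
    tcp-check expect string +OK
    server c1 10.1.0.10:6379 check inter 1s
    server c2 10.1.0.11:6379 check inter 1s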


So it will work fine, right?


My concern now is how to configure Sentinel in DC2. I normally configure the sentinel conf to monitor mastername <ipofcurrentmaster> port.

I can manually point the app to connect to HAProxy 2 if DC1 goes down.

Is there any other way to do that in the Sentinel conf of DC2?


regards
sunny

Greg Andrews

Jan 7, 2016, 2:52:14 PM
to redi...@googlegroups.com
I don't want to distract you from the help Bill is giving, but I think I can clarify something:

Sentinel provides 3 services:
  1. Monitor one or more groups of Redis servers for failure.  (a "group" is a master and the slaves that replicate from that master)
  2. Perform 'failover' to a new master when the original master fails.  I.e., configure a slave to become master, configure other slaves to replicate from the new master.
  3. Tell the client applications which server in the group is the master, and which are the slaves.
That 3rd service works like this: The client application connects to Sentinel and sends a query to find out the master of the group, and sends another query to find out the slaves.  Automatic failover to the new master is not done through a load balancer, but through asking Sentinel for the new master and then connecting to it.

When Bill said "pod" he meant the same thing as "group" above - a list of Redis servers that are related to each other because one is the master and the others are slaves of that master.

A set of Sentinel servers can monitor, fail over, and report on more than one group of Redis servers.

Your HAProxy configuration is trying to replace Sentinel's management of which Redis server is the master and how the client applications discover which machine to use as the master.  I believe you're going to have a hard time trying to make HAProxy and Sentinel work together in this way when they weren't designed to.

  -Greg


Greg Andrews

Jan 7, 2016, 2:58:14 PM
to redi...@googlegroups.com
A detail I omitted by mistake:


That 3rd service works like this: The client application connects to Sentinel and sends a query to find out the master of the group, and sends another query to find out the slaves.  Automatic failover to the new master is not done through a load balancer, but through asking Sentinel for the new master and then connecting to it.

In Sentinel each group of Redis servers is given a name.  When the client asks Sentinel for information, it sends the group name to Sentinel, and Sentinel replies with the name of the master server (or the list of slave servers).
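
For example, you can see what a client would receive by asking a Sentinel directly with redis-cli; the group name and the returned address here are just illustrative:

redis-cli -h <sentinel-ip> -p 26379 sentinel get-master-addr-by-name masterone
1) "10.1.0.10"
2) "6379"
redis-cli -h <sentinel-ip> -p 26379 sentinel slaves masterone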

  -Greg

sherin...@shipwire.com

Jan 7, 2016, 3:00:23 PM
to Redis DB
Sentinel can fail over to a new master when the current master goes down, but HAProxy queries each Redis server to see which one is the master and connects to it. If Sentinel is not there, then when the current master goes down the entire application goes down. This is the reason why I use both HAProxy and Sentinel. I am using HAProxy for the app to connect to, instead of Sentinel telling the app which server is the current master.

If I don't use HAProxy, then how will the app find the new master when the current master goes down? Do I need to write a script, or which Sentinel server IP do I need to configure in the app conf file?

Thanks

Sherin Sunny

Jan 7, 2016, 3:07:06 PM
to redi...@googlegroups.com
In Sentinel each group is given a name. Yes, but then how does it work with the settings that Bill suggested:

1) C1 is currently the master and C2, C3, C4 are slaves of master C1. C1 and C2 are in datacenter 1 and C3, C4 are in datacenter 2. Bill suggested setting the slave-priority of C3 and C4 to 0, so they never get promoted to master.

I also need to install Sentinel on C1, C2, C3, C4. So the Sentinels on C1 and C2 monitor the master group for C1, but what about the group in the conf of the Sentinels on C3 and C4?


Sherin Sunny

Jan 7, 2016, 3:10:23 PM
to redi...@googlegroups.com
Also, one more issue comes up in this setup: when datacenter 1 comes back up, C1 in datacenter 1 first needs to replicate from the new master (C3 or C4) in datacenter 2. How will that be possible with this setup, since the Sentinels on C1 and C2 already have C1 configured as the master? All of this becomes a manual process then, right?

Greg Andrews

Jan 7, 2016, 3:12:15 PM
to redi...@googlegroups.com
 <sherin...@shipwire.com> wrote:
If I don't use HAProxy, then how will the app find the new master when the current master goes down?

As I said, the app connects to Sentinel and asks Sentinel for the name of the Redis master server.  Sentinel gives the name of the new master to the application.

  -Greg

sherin...@shipwire.com

Jan 7, 2016, 3:18:52 PM
to Redis DB
Could you please help me with what the Sentinel conf should look like on the DC2 Redis servers? I normally use the format below, but you guys are telling me there is a way to use it without giving the IP address of the master. How is that done?

my sentinel conf looks like:


sentinel monitor mymaster 192.x.x.x  6379 3
sentinel down-after-milliseconds mymaster 5000
sentinel failover-timeout mymaster 10000


where 192.x.x.x is the current master IP and mymaster is the name of the master group.

sunny

The Real Bill

Jan 7, 2016, 4:36:48 PM
to Redis DB


On Thursday, January 7, 2016 at 1:46:21 PM UTC-6, sherin...@shipwire.com wrote:
Hi Bill,

See the below link. HAProxy 1.5+ comes with a new built-in TCP health check feature for Redis to perform an automatic failover. To avoid having to change the Redis IP/port in the front-end client application after each failover, set up HAProxy with the TCP health check to test whether a Redis instance is a master or a slave.


So it will work fine, right?

I'm well aware of HAProxy+Sentinel, and that check exposes you to a split-brain scenario. Ask yourself what happens when each Redis believes itself to be master. How will HAProxy determine which one is correct?



My concern now is how to configure Sentinel in DC2. I normally configure the sentinel conf to monitor mastername <ipofcurrentmaster> port.


When DC1 fails, you configure it the same way as you do in DC1, but with the IP of the master you chose to promote in DC2.
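
In other words, on the DC2 Sentinels at that point, something roughly like this (placeholder IP, quorum 2 of the 3 DC2 Sentinels):

sentinel monitor masterone <ip-of-promoted-dc2-master> 6379 2
sentinel down-after-milliseconds masterone 5000
sentinel failover-timeout masterone 10000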
 
I can manually point the app to connect to HAProxy 2 if DC1 goes down.

Is there any other way to do that in the Sentinel conf of DC2?

No.

sherin...@shipwire.com

Jan 7, 2016, 4:37:05 PM
to Redis DB
>>You can do it if you do *not* use the Sentinels in DC2 until DC1 fails.

Bill,

When DC1 comes back up, how will I be able to tell the DC1 Redis servers to replicate from DC2, since the Sentinels in DC1 are monitoring only the DC1 Redis servers and the DC2 Sentinels have nothing to do with the DC1 Redis servers?

sherin...@shipwire.com

Jan 7, 2016, 4:41:20 PM
to Redis DB
>>When DC1 fails, you configure it the same way as you do in DC1, but with the IP of the master you chose to promote in DC2.

So when DC1 comes back, what will the IP be in the Sentinels in DC1 and DC2? Because the servers in DC1 need to replicate from DC2 first.

>>I'm well aware of HAProxy+Sentinel, and that check exposes you to a split-brain scenario. Ask yourself what happens when each Redis believes itself to be master. How will HAProxy determine which one is correct?

How can a split brain happen if the slave-priority is 0 for all servers in one datacenter while the other is active?

sherin...@shipwire.com

Jan 7, 2016, 4:49:01 PM
to Redis DB
Here in this case the same scenario happens:

I can configure C3 or C4 as master in DC2 when DC1 goes down. So when DC1 goes down, DC2 has 3 Sentinels, one master, and one slave.

But when DC1 comes back, C1 can come back as a master and it won't replicate from DC2. What should I do in this case?

Also, can the Sentinel reconfiguration be done by the reconfig script inside Sentinel, or does it need to be done manually, command by command, on the server?