both Sentinel and Redis Cluster will do a best-effort attempt to promote the best slave among the set of available slaves. However, this is only a best-effort attempt, so it is still possible to lose a write that was synchronously replicated to multiple slaves.
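For context, the command under discussion here is WAIT, which blocks the calling client until all previous writes on that connection are acknowledged by at least the given number of slaves, or until the timeout (in milliseconds) expires, and returns how many slaves actually acknowledged:

```
WAIT <numreplicas> <timeout-in-milliseconds>
```

Note that WAIT only reports replication progress; it does not turn Redis replication into a consensus protocol.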
Yes, but your example doesn't address the rule you want Sentinel to apply.
Your example is: You have one master and three slaves. Now the master goes down, and Sentinel is allowed to fail over (transfer the master role to another server) because there are three remaining nodes. One can become the master, and there will still be two slaves to satisfy your WAIT command requirement.
My example is different: You used to have one master and three slaves, but for some reason one of the slaves is not available. Now the master goes down. You are saying Sentinel should not fail over because there are only two remaining nodes - if one becomes master there aren't enough slaves to satisfy your WAIT command requirement.
I'm pointing out that in my example, if Sentinel is not allowed to fail over (transfer the master role to another server), you will have a major outage. Writes will be lost because the client cannot connect to the master to send them. Also, if the event that caused the master to be unavailable also makes the master lose data (a corrupted persistence file, or the most recent writes not yet written to disk), then the resurrected master could have less data than the slaves that stayed alive. When those slaves perform a full resync with the resurrected master, they would drop that data as well.
I'm not sure if Sentinel can be configured to do as you ask. I'm just asking if, in the scenario where you're asking Sentinel to refrain from failing over, the potential loss of data is okay.
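For what it's worth, the closest knob I'm aware of is the master-side min-slaves-to-write setting, which makes the master itself refuse writes when too few slaves are connected; it does not influence Sentinel's failover decision, which is made independently via Sentinel's own quorum. A sketch, assuming Redis 2.8 or later:

```
# redis.conf on the master: refuse writes unless at least 2 slaves
# are connected and have replied within the last 10 seconds.
# This only gates writes on the master; it does NOT stop or allow
# a Sentinel failover.
min-slaves-to-write 2
min-slaves-max-lag 10
```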
On Wednesday, December 28, 2016 at 02:03:11 UTC+1, hva...@gmail.com wrote:
> Yes, but your example doesn't address the rule you want Sentinel to apply.
> Your example is: You have one master and three slaves. Now the master goes down, and Sentinel is allowed to fail over (transfer the master role to another server) because there are three remaining nodes. One can become the master, and there will still be two slaves to satisfy your WAIT command requirement.

No, once again: 3 nodes in total, one is the master. If the master goes down there are 2 remaining slaves; one can become the master, but only if both slaves are up and available.
If a slave fails, you can't accept writes because you have only one slave instead of two. If the master fails sentinel promotes one of your two slaves. At that point you have one master and one slave, and therefore can't accept the writes when requiring two slaves.
Thus with a WAIT specifying two slaves, and three total Redis nodes (1 master, 2 slaves), you have zero ability to tolerate a failure of any Redis node in the system as far as writes are concerned.
For such a write condition you need a minimum slave count of three to handle only a single failure. After that it wouldn't matter from a write perspective what sentinel does because you can't tolerate a second failure anyway.
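Bill's counting argument above can be sketched in a few lines of Python (the helper name is mine, and this is just the arithmetic of the argument, not anything from the Redis API):

```python
def tolerable_failures(total_nodes: int, wait_slaves: int) -> int:
    """How many node failures the system can absorb while still
    accepting writes, given one master and a WAIT requirement of
    `wait_slaves` slave acknowledgements.

    After f failures you still need 1 surviving master plus
    `wait_slaves` surviving slaves, i.e.
    total_nodes - f >= 1 + wait_slaves.
    """
    return max(total_nodes - 1 - wait_slaves, 0)

# 3 nodes (1 master, 2 slaves), WAIT requires 2 slaves:
# any single failure blocks writes.
print(tolerable_failures(3, 2))  # 0

# 4 nodes (1 master, 3 slaves), WAIT requires 2 slaves:
# exactly one failure can be tolerated.
print(tolerable_failures(4, 2))  # 1
```

In other words, to require two slave acknowledgements and still survive one failure, you need at least three slaves, which is exactly the "minimum slave count of three" above.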
This is a case where adding more conditions decreases the integrity of the system instead of increasing it.
Cheers,
Bill
"i have a 3 node sentinel setup and i'm using WAIT to guarantee that my write reaches at least 2 nodes before returning success."Sorry if you misunderstood, but the master is a node as well (2 nodes = master node + 1 slave).
i was talking about WAIT requiring 1 slave to get the update, so i can lose 1 machine.