absolute minimum setup to achieve HA with persistence


Raghavan Chockalingam

May 14, 2015, 4:19:12 AM
to redi...@googlegroups.com
I have read most of the Redis HA-related docs, and I would like a recommendation from the community.

Assuming a single master can handle all the ops and sharding is not needed, what is the minimum/simplest recommended setup that supports the following?
1. No loss of data (sets)
2. A slave should become master if the master is unavailable
3. Possibly, the original master should be made master again when it comes back
4. No false promotion of a slave to master
5. No manual intervention in case of failures

A note: a load balancer is available in our environment, in case that might simplify the setup. The nodes will be in a managed datacenter. Here I refer to a node as a VM and an instance as a Redis process.

If Sentinel is suggested:
1. How many instances are needed?
2. What node configuration (vCPU, memory) is required per instance?
3. How are the instances recommended to be striped across blades in the datacenter?

Thanks

Salvatore Sanfilippo

May 14, 2015, 4:42:43 AM
to Redis DB
For your setup I suggest three Sentinels running in the same VMs as
three Redis instances. Pick the right durability settings (AOF with
fsync every second should fit) and make sure to enable the options to
refuse writes when not enough slaves are connected (see the Sentinel
docs, where this is clearly covered). Note that "no data loss" under
all scenarios is not possible with Redis in HA. The setup I outlined is
pretty solid in the real world, but there is a set of failures /
partitions where data will be lost; however, you can pretty much bound
the window of lost writes during a failure event. Note: make sure to
use a good Sentinel-capable client.
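
The durability and write-refusal settings mentioned above might look roughly like the following (option names as in Redis 2.8/3.0; the addresses, quorum, and thresholds are illustrative, not a recommendation):

```conf
# redis.conf (on the master)
appendonly yes           # enable AOF persistence
appendfsync everysec     # fsync once per second: bounds loss to ~1s of writes
min-slaves-to-write 2    # refuse writes unless 2 slaves are connected...
min-slaves-max-lag 10    # ...and replying within 10 seconds

# sentinel.conf (one Sentinel per VM, three VMs)
sentinel monitor mymaster 10.0.0.1 6379 2   # quorum of 2 out of 3 Sentinels
sentinel down-after-milliseconds mymaster 5000
sentinel failover-timeout mymaster 60000
```

Note that min-slaves-to-write does not make lost writes impossible; it only narrows the window in which an isolated master keeps accepting writes that a promoted slave will never see.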

Another thing you could do, if you use sets, is to run N unrelated
Redis masters instead and implement read repair client-side. This way
you end up with an AP system with eventually consistent data
structures that have the property of eventually always remembering
added elements, even if during failures certain elements may disappear
for some time (otherwise you need a CP system). An example of such a
system is SoundCloud's Roshi, based on Redis. This solution would not
lose writes as long as at most W-1 instances fail (where W is the
number of instances you write to).
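
A toy sketch of the read-repair idea described above, with plain Python sets standing in for the N independent Redis masters (in production each replica would be a separate Redis instance reached via SADD/SMEMBERS through a client library; the function names here are made up for illustration):

```python
def write(replicas, element, w):
    """Best-effort write of one element to w replicas."""
    written = 0
    for r in replicas:
        if written == w:
            break
        r.add(element)
        written += 1
    return written

def read_with_repair(replicas):
    """Union the replies from all replicas, then write back any
    elements a replica missed (the read-repair step)."""
    merged = set()
    for r in replicas:
        merged |= r
    for r in replicas:
        r |= merged - r  # repair: re-add elements this replica missed
    return merged

# Three "masters"; each write reaches only W=2 of them.
replicas = [set(), set(), set()]
write(replicas[:2], "a", w=2)       # "a" never reaches the third replica
write(replicas[1:], "b", w=2)       # "b" never reaches the first replica
value = read_with_repair(replicas)  # union of all replicas, repaired
```

This sketch handles only additions; a real system like Roshi also needs timestamped add/remove pairs so that deletes converge, which is omitted here.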

Salvatore



--
Salvatore 'antirez' Sanfilippo
open source developer - Pivotal http://pivotal.io

"If a system is to have conceptual integrity, someone must control the
concepts."
— Fred Brooks, "The Mythical Man-Month", 1975.

Raghavan Chockalingam

May 16, 2015, 10:00:31 AM
to redi...@googlegroups.com
Thanks for your answer.

Would it be better to run the Sentinel instances on their own nodes (VMs) on different blades?
Also, for the Cluster (3.0) solution, how many masters does it require as a minimum?