Redis High Availability


Vladimir Petter

May 31, 2017, 5:54:46 PM
to Redis DB
Hello.

My understanding is that today, for Redis HA, I can use Redis Cluster or Redis Sentinel. In both cases I would configure primaries and secondaries; the secondaries sit in hot standby, ready to take over as soon as Redis Cluster or Sentinel promotes them to primaries.

I am wondering: what if I put the Redis persistent data on shared storage that can fail over between several nodes in a storage cluster? To be clear, the storage cluster is NOT a Redis Cluster; it is a completely different cluster product. I could then make my Redis instance HA using the same cluster that makes the storage HA. In other words, the Redis instance would fail over together with the volume that holds its data. That would let me avoid running secondaries and simply rely on the primary being able to fail over from one node to another along with the shared disk. Avoiding secondaries saves a non-trivial amount of RAM, which I could then use to run more primaries. A downside seems to be that on failover the primary would have to rebuild its state from the shared [cold] storage.

Is there any value in doing Redis clustering that way? Am I missing something? I would greatly appreciate your thoughts, or pointers to a discussion where this topic might already have been covered.
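For illustration, the single-primary, shared-storage setup described above might be sketched with real redis.conf directives like these (the mount point is hypothetical):

```
# redis.conf sketch for the shared-storage idea (hypothetical paths)
dir /mnt/shared-volume/redis   # persistence files live on the volume that fails over
appendonly yes                 # AOF so a restarted primary can rebuild its state
appendfsync everysec           # durability/latency trade-off
# No replicaof/slaveof line: this design runs no secondaries.
```

On failover, the storage cluster would remount the volume on another node and a fresh Redis process would replay the AOF from it.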

Thanks,
Vladimir.

Mani Gandham

Jun 2, 2017, 4:10:08 AM
to Redis DB
That's not really HA then; it's just durability and replication at the storage layer. HA typically means high availability during failure: running multiple instances that are able to take over with no (or very minimal) downtime.

You can certainly get very durable storage using your storage cluster, with Redis just being an application that reads and writes from that storage, but there's still a risk of data not making it to disk before the Redis process fails. Also, yes, you could see significant downtime: if the Redis server fails, you have to start a new server, point it at the existing disk, and then wait for it to load all the data back into RAM.
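The "data not making it to disk" window mentioned above is governed by Redis's AOF fsync policy. These are real redis.conf options; the comments summarize the trade-off:

```
# How much data can be lost if Redis dies before a write reaches the shared disk:
appendfsync always     # fsync on every write: safest, slowest
appendfsync everysec   # fsync once per second: up to ~1 second of writes at risk
appendfsync no         # let the OS flush when it wants: fastest, largest loss window
```

Even with `always`, an acknowledged write can be lost if the process dies between replying to the client and the fsync completing, so shared storage narrows but does not eliminate the window.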

Replicating data to a secondary also gives you read scalability, since you can read from the replica, and it's probably safer: the network is usually faster than disk, so your data will likely be on the replica before it's written to disk on the master.
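For comparison, a minimal replica setup, plus the `WAIT` command a client can use to confirm a write has reached a given number of replicas (the host/port values are hypothetical; in 2017-era Redis the directive was `slaveof`, later renamed `replicaof`):

```
# on the replica's redis.conf (hypothetical master address)
slaveof 10.0.0.1 6379

# from redis-cli against the master:
SET key value
WAIT 1 100    # block until 1 replica acknowledges, or 100 ms elapse
```

`WAIT` returns the number of replicas that acknowledged, which lets the application decide how to handle writes that did not propagate in time.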