Redis Sentinel and Redis Cluster


Salvatore Sanfilippo

Jul 11, 2012, 5:06:25 AM7/11/12
to Redis DB
Hello dear users,

Often I see questions on Twitter and here about the release date of
Redis Sentinel and Redis Cluster.
This email will try to clarify what the status of the two projects is
and what their goals are.

Redis Cluster is one of our major *long term plans*. I'm not going to
abandon the project: the alpha implementation of a subset of Redis
Cluster that we have in the unstable branch, and the tests I had the
opportunity to perform in the previous months, showed that the
general design is good.
However it is a *big project*, and it is not something I'm going to
ship unless it works in a perfect way, so it's a Very Long Term goal.
This means that it will be developed in successive bursts: for
example, after Sentinel is complete, I'll work on it exclusively for a
few weeks to push it forward, and so forth. At some point it will be
complete, but I don't know when.

The reason for this is simple: prioritizing what is needed by the
community. After pausing Redis Cluster we had the opportunity to
release Redis 2.6, which is a major step forward for the majority of
our users. It adds Lua scripting, but it also improved Redis in a lot
of areas, making it better for what Redis users are doing right now.
Similarly, we want to do this again with a Redis 2.8 release. So you
can see this as two parallel rails: one provides incremental
evolutions, and the other is focused on the larger Redis Cluster.

Redis Sentinel is one of the milestones of the incremental-evolutions
rail: a lot of people are running Redis as a single instance, or as a
cluster of instances with client-side sharding.
These people have a need much more imminent than Redis Cluster, and
one that can be met with a fraction of the effort: a way to run their
Redis instances with High Availability.
Sentinel is a project that tries to address this problem in the same
spirit as Redis itself: easy to configure and use, small, reliable.

Because Sentinel is small enough, but still a distributed system with
its own complexities, I adopted the development technique of "stop
the world till it is complete", pausing Sentinel development only if
a critical bug is discovered in Redis. Otherwise I'm focused 100% on
it, and this is the status and roadmap:

1) Redis Sentinel is 70% complete and working! We have the monitoring
capabilities in place, and part of the failover is now working great.
2) The Redis Sentinel repository will be opened to the public *before*
the end of July.
3) By the end of July we expect to have a product that is not
completely finished, but already testable.
4) After a few more weeks of experimenting with it, it should be
usable in production environments.

In 10 days Redis Sentinel will probably already be functionally
better than any other way currently available to make Redis highly
available; we just need some time for the community to test it,
provide feedback, and find the issues that can only be discovered
with the help of users. I say this because the part that is already
finished, the monitoring, is already better at monitoring Redis than
any other system around.

Cheers,
Salvatore

--
Salvatore 'antirez' Sanfilippo
open source developer - VMware
http://invece.org

Beauty is more important in computing than anywhere else in technology
because software is so complicated. Beauty is the ultimate defence
against complexity.
— David Gelernter

M. Edward (Ed) Borasky

Jul 11, 2012, 2:14:56 PM7/11/12
to redi...@googlegroups.com

So - is this right?

2.6 (Lua scripting) - currently at RC5, "formal release" imminent?
2.8 (Redis Sentinel) - soon to be released for beta testing as 2.7 something?
3.0 (Redis Cluster) - date unknown?

--
Twitter: http://twitter.com/znmeb Computational Journalism Server
http://j.mp/compjournoserver

Data is the new coal - abundant, dirty and difficult to mine.

Salvatore Sanfilippo

Jul 12, 2012, 4:51:18 AM7/12/12
to redi...@googlegroups.com
On Wed, Jul 11, 2012 at 8:14 PM, M. Edward (Ed) Borasky <zn...@znmeb.net> wrote:

> So - is this right?
>
> 2.6 (Lua scripting) - currently at RC5, "formal release" imminent?
> 2.8 (Redis Sentinel) - soon to be released for beta testing as 2.7 something?
> 3.0 (Redis Cluster) - date unknown?

It is mostly right, but actually Sentinel is currently implemented in
the unstable branch, and it will be possible to use it to monitor and
fail over Redis 2.4 instances, so basically Sentinel will be available
for production ASAP; you'll just have to compile the unstable branch
to get the redis-sentinel binary.
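To give an idea, a Sentinel configuration is meant to stay very small; here is a sketch of what one could look like (the exact directive names and defaults should be checked against the example file shipped in the unstable branch):

```
# sentinel.conf - watch one master, locally known as "mymaster";
# the trailing 2 is the quorum: how many Sentinels must agree
# that the master is down before a failover is started
sentinel monitor mymaster 127.0.0.1 6379 2

# consider the master down after 30 seconds without a valid reply
sentinel down-after-milliseconds mymaster 30000
```

You would then start it with the binary built from the unstable branch, e.g. `redis-sentinel /path/to/sentinel.conf`.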

Salvatore


Dvir Volk

Jul 12, 2012, 7:47:46 AM7/12/12
to redi...@googlegroups.com
How about offering it as a separate downloadable package once it's done, until it gets merged into the stable version?
Maybe even create a makefile that only builds the sentinel executable.

Salvatore Sanfilippo

Jul 12, 2012, 8:16:02 AM7/12/12
to redi...@googlegroups.com
On Thu, Jul 12, 2012 at 1:47 PM, Dvir Volk <dvi...@gmail.com> wrote:
> how about offering it as a separate downloadable package once it's done,
> until it gets merged to the stable version?
> maybe even create a makefile that only builds the sentinel executable.

I'm optimistic that this will not be needed: since Sentinel's
interaction with Redis itself is minimal, when it's ready,
back-porting it into 2.6 without any bad impact on the rest of the
code base will be trivial :)

Salvatore

Daniel Mezzatto

Jul 12, 2012, 5:41:10 PM7/12/12
to redi...@googlegroups.com
Will there be a special INFO output for Sentinels? I would love to have access to the table that holds the information about which instances are Masters and which are Slaves.

Daniel Mezzatto

Jul 12, 2012, 5:45:04 PM7/12/12
to redi...@googlegroups.com
Dumb question. Just remembered that this is possible via SENTINEL subcommands.

Dvir Volk

Jul 12, 2012, 5:45:07 PM7/12/12
to redi...@googlegroups.com
Yes, it works exactly like that.
In fact, by connecting to a sentinel, a client can auto-configure itself to have instant knowledge of the entire cluster.
You get a list of the slaves, the master, and the other sentinels, and you have a pub/sub channel of changes in the state of nodes and failovers.
Plus, since the sentinels use the masters to discover each other, you can discover all the sentinels by connecting to the master.

This means that a client that connects to ANY node in a sentinel-monitored cluster - be it a slave, a master or a sentinel - can easily get a full view of its topology.
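As a small sketch of the client side of this: the SENTINEL subcommands report each master or slave as a flat array of alternating field names and values, so turning an entry into a map takes only a couple of lines (the reply below is a made-up fragment, not captured from a real instance):

```python
def parse_sentinel_fields(flat):
    """Turn a flat [field, value, field, value, ...] reply, as used by
    the SENTINEL masters / SENTINEL slaves subcommands, into a dict."""
    if len(flat) % 2 != 0:
        raise ValueError("field/value reply must have an even length")
    return dict(zip(flat[0::2], flat[1::2]))

# Hypothetical fragment of one "SENTINEL masters" entry:
reply = ["name", "mymaster", "ip", "127.0.0.1", "port", "6379", "flags", "master"]
master = parse_sentinel_fields(reply)
print(master["ip"], master["port"])  # → 127.0.0.1 6379
```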



Daniel Mezzatto

Jul 12, 2012, 6:16:08 PM7/12/12
to redi...@googlegroups.com
We did a simpler version of Redis Sentinel back in 2010 to elect a new Master after a failing one was detected. We made a simple binary protocol over UDP for the "cloudd" processes (as we called them) to communicate with each other. We relied a lot on the "role" field of the INFO command output.

The difference was that a Master was always a Master. If a Master fails, a Slave becomes the new Master; when the failed Master comes back to life, it becomes a Master again and the new Master goes back to being a Slave. We did that because of the load balance of the cluster. Our update process tends to send a lot of commands to the Master that are not propagated to the Slave (such as TTLs, HMGETs and so on). If a machine ended up having only Master instances, it would have a much higher load during the update process than a machine that had only Slave instances.
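That fail-back policy boils down to a small decision function; here is a sketch under the assumption that the preferred master comes from a static config file (all the names and the tuple shape are made up for illustration):

```python
def failback_action(preferred_master, current_master, alive_instances):
    """Decide the promotion/demotion needed so that the configured
    ("preferred") master takes its role back once it rejoins, or None
    if the topology already matches or the preferred master is down."""
    if preferred_master == current_master:
        return None  # already running the configured layout
    if preferred_master in alive_instances:
        # fail back: re-promote the configured master, demote the stand-in
        return ("promote", preferred_master), ("demote", current_master)
    return None  # keep the stand-in while the preferred master is down

# The stand-in keeps the role until the old master comes back:
print(failback_action("redis-a", "redis-b", {"redis-b"}))  # → None
print(failback_action("redis-a", "redis-b", {"redis-a", "redis-b"}))
# → (('promote', 'redis-a'), ('demote', 'redis-b'))
```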

Pedro Melo

Jul 13, 2012, 3:27:55 AM7/13/12
to redi...@googlegroups.com
Hi,

On Thu, Jul 12, 2012 at 11:16 PM, Daniel Mezzatto
<daniel....@gmail.com> wrote:
> The difference was that a Master was always a Master. If a Master fails, a
> Slave becomes the new Master. When the failed Master comes back to life, it
> will be a Master again and the new Master will become a Slave again. We did
> that because of the load balance of the cluster. Our update process tends
> to send a lot of commands to the Master that is not propagated to the Slave
> (such as TTLs, HMGETs and so on). If a machine ended having only Master
> instances, it would have a load much higher than the machine that had only
> Slave instances during the update process.

Good point.

I think these sorts of extra layers of functionality/smarts/behavior
are site- or deployment-specific, although some general use cases
will probably arise in time.

So it makes more sense to keep them outside of Sentinel itself. After
Sentinel is released you can re-write your 'cloudd' as a process that:

1. connects to any of the Redis instances (be it master, slave or
sentinel) running on your cluster;
2. gets a list of Sentinel instances (via INFO);
3. connects to a couple of them (for redundancy);
4. starts monitoring for changes of topology (via Pub/Sub);
5. creates the initial topology map;
6. if one of your master servers is online and is not the actual
master for its sub-cluster (according to your own configuration file
for the desired topology), asks one of the Sentinels to promote it to
master.
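Step 6 above is essentially a comparison between the desired topology and the observed one; a minimal sketch, assuming a config that maps each sub-cluster (shard) name to its preferred master (all shard and instance names here are hypothetical):

```python
def promotions_needed(desired, observed, online):
    """List (shard, instance) pairs where the preferred master is online
    but is not the master currently reported for that shard, i.e. the
    shards where we would ask a Sentinel for a promotion."""
    return [
        (shard, master)
        for shard, master in desired.items()
        if master in online and observed.get(shard) != master
    ]

desired  = {"shard1": "10.0.0.1:6379", "shard2": "10.0.0.2:6379"}
observed = {"shard1": "10.0.0.1:6379", "shard2": "10.0.0.9:6379"}
online   = {"10.0.0.1:6379", "10.0.0.2:6379", "10.0.0.9:6379"}
print(promotions_needed(desired, observed, online))
# → [('shard2', '10.0.0.2:6379')]
```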

Steps 1 through 5 are common to all solutions for these behavioral
Sentinel plugins, and I guess that as soon as a public alpha is
available you'll see packages for your language that do most of them.

Step 6 is where your business logic will reside. I haven't read the
Sentinel spec recently and I don't remember whether you can actually
ask Sentinel to change the topology for you, but I'm sure that if
it's not there, Salvatore will add the feature eventually; it makes a
lot of sense *wink, wink* ;).

Bye,
--
Pedro Melo
@pedromelo
http://www.simplicidade.org/
http://about.me/melo
xmpp:me...@simplicidade.org
mailto:me...@simplicidade.org