Using a hostname for "slaveof"

Omri Bahumi

Mar 18, 2015, 11:03:32 AM
to redi...@googlegroups.com
Hi all,

We're currently reviewing a master-slave architecture that uses Consul's DNS server for Redis master configuration (in our cloud environment a master may be replaced, resulting in a new IP address).
Looking at the code (and #2352 on GitHub), the master hostname lookup is apparently synchronous.
Since we don't want to block the main thread, we've been considering a few options which I wanted to discuss:
  1. Using an async resolver
  2. Using consul-template to render the Redis configuration file and reloading the config (requiring Redis to support graceful config reload)
  3. Using consul-template to render /etc/hosts so master DNS lookups will never block
  4. Using a TCP load balancer that would perform the DNS lookup (running on localhost)
IMHO, options #1 and #2 are preferable, but require a code change in Redis.
As for option #2, I think it would be beneficial for many more use cases, even if it only supported reloading the configuration options that "CONFIG SET" already supports.
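To make option #1 concrete, here is a minimal sketch of the idea, resolving the master hostname on a background thread so the main loop never blocks on DNS. The class and callback names are illustrative, not actual Redis internals:

```python
# Sketch of option #1: do the master-hostname lookup on a background
# thread so the main event loop never blocks on DNS.
# Class and callback names are illustrative, not actual Redis internals.
import socket
import threading

class MasterResolver:
    def __init__(self, hostname, on_resolved):
        self.hostname = hostname
        self.on_resolved = on_resolved  # called with the resolved IP

    def resolve_async(self):
        t = threading.Thread(target=self._resolve, daemon=True)
        t.start()
        return t

    def _resolve(self):
        try:
            # gethostbyname may block, but only on this worker thread
            ip = socket.gethostbyname(self.hostname)
        except socket.gaierror:
            return  # a real implementation would retry with backoff
        self.on_resolved(ip)

# Demo: the caller keeps running while the lookup is in flight.
results = []
MasterResolver("localhost", results.append).resolve_async().join()
print(results)
```

In Redis itself this would presumably hook into the replication cron rather than a callback, but the thread-offload shape is the same.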

The reason we're not considering sentinel is we're running a WAN replication.

Cheers,
Omri.

Josiah Carlson

Mar 18, 2015, 11:51:28 AM
to redi...@googlegroups.com
Everyone's got a different perspective on what is most important. I generally try to go with whatever has the fewest moving parts, to minimize potential mistakes and failures. Of your four example solutions, #3 has the fewest moving parts (rewrite /etc/hosts, no modifying existing daemons) and looks simplest to implement (especially if you haven't been doing anything in hosts before), so I would probably try that solution first.
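For concreteness, option #3 could look something like the consul-template fragment below; the service name "redis-master" and the template path are assumptions for illustration:

```text
# hosts.ctmpl - rendered into /etc/hosts by consul-template
# (service name "redis-master" is an assumption for illustration)
127.0.0.1   localhost
{{range service "redis-master"}}{{.Address}}   redis-master{{end}}
```

Running `consul-template -template "hosts.ctmpl:/etc/hosts"` would then keep the entry current, and `slaveof redis-master 6379` in redis.conf would resolve through /etc/hosts without blocking on the network.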

That said, having a graceful config reload would be useful for some use cases, yours included. The only question is: how long are you willing to maintain your own Redis fork unless/until your patch is accepted and released? You could get a better idea of a timeline for this by checking the bug tracker for pull requests and how quickly they are accepted.

 - Josiah


--
You received this message because you are subscribed to the Google Groups "Redis DB" group.
To unsubscribe from this group and stop receiving emails from it, send an email to redis-db+u...@googlegroups.com.
To post to this group, send email to redi...@googlegroups.com.
Visit this group at http://groups.google.com/group/redis-db.
For more options, visit https://groups.google.com/d/optout.

Dvir Volk

Mar 18, 2015, 12:06:03 PM
to redi...@googlegroups.com
On Wed, Mar 18, 2015 at 5:51 PM Josiah Carlson <josiah....@gmail.com> wrote:
That said, having a graceful config reload would be useful for some use cases, yours included. The only question is: how long are you willing to maintain your own Redis fork unless/until your patch is accepted and released? You could get a better idea of a timeline for this by checking the bug tracker for pull requests and how quickly they are accepted.

(For context: I'm working with Omri, the OP, and this came up in a discussion between us.) I think what Omri is asking here is what the community and Salvatore think of the options that require code changes.
We really don't want to keep a fork, obviously, and by the way, we do have other use cases for CONFIG RELOAD.

Josiah Carlson

Mar 18, 2015, 2:48:19 PM
to redi...@googlegroups.com
Config reload is a useful feature by itself; it would be worth getting a patch in at some point, regardless.

But if your needs are time-sensitive, the /etc/hosts rewrite has the fewest hard third-party dependencies and blockers.

You can do both if you have the time and energy: one addresses the short-term need, the other the long-term plan.

 - Josiah

Omri Bahumi

Mar 18, 2015, 4:50:35 PM
to redi...@googlegroups.com
Rewriting /etc/hosts is an ugly hack; I only suggested it because it's a feasible solution.

Josiah Carlson

Mar 18, 2015, 6:22:50 PM
to redi...@googlegroups.com
Both #3 and #4 are workarounds for existing limitations of otherwise functional software. Whether you call #3 an "ugly hack" or a "practical solution to a temporary problem" is a matter of opinion.

You can also set up a caching DNS resolver that you pre-fill as your failover occurs and run on your slave machines, as an alternative to #3 and #4.
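One concrete shape of that caching resolver, assuming Consul's DNS agent is listening on its default port 8600, is a couple of dnsmasq lines (illustrative, not a complete setup):

```text
# /etc/dnsmasq.d/10-consul.conf - illustrative sketch
# Forward *.consul queries to the local Consul agent (default DNS port 8600)
server=/consul/127.0.0.1#8600
# Keep a small cache so repeated lookups are served locally
cache-size=150
```

With this on each slave, the blocking lookup normally hits a local cache rather than the network.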

Can you do #2 and fulfill your requirements on a satisfactory timeline? If yes, then do it and make Redis better. If you don't know, then it sounds like you need a contingency plan while you wait: running your own fork, doing #3 or #4, or even the caching DNS resolver I offered above as a sort-of strawman.

 - Josiah

The Baldguy

Mar 19, 2015, 12:27:16 AM
to redi...@googlegroups.com
What about using Consul's watch and handler feature to have a small agent which, upon a change in Consul, uses the Redis API on the slave to point it to the new master and then does a config save? No restart, no hosts rewrite, no proxy, and no mucking in Redis code.
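A sketch of that agent, assuming the JSON shape of Consul's health-service watch output; the service name, script name, and the final redis-cli step are assumptions for illustration:

```python
# Hypothetical Consul watch handler. It would be wired up roughly as:
#   consul watch -type=service -service=redis-master python3 repoint_slave.py
# (service name and script name are assumptions for illustration)
import json
import sys

def pick_master(payload):
    """Extract (address, port) from the first entry of a service watch payload."""
    service = payload[0]["Service"]
    return service["Address"], service["Port"]

# Demo with a payload shaped like Consul's health-service output:
sample = [{"Service": {"Address": "10.0.0.5", "Port": 6379}}]
addr, port = pick_master(sample)
# A real handler would read json.load(sys.stdin) instead of `sample`,
# then run: redis-cli slaveof <addr> <port>
print("slaveof", addr, port)
```

The point is that the only thing the agent touches is the slave's running replication target; nothing else on the box changes.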

Omri Bahumi

Mar 19, 2015, 3:07:33 AM
to redi...@googlegroups.com
I don't like this for two reasons:
  1. I'm using other methods for configuration management, using "config save" is bad for me
  2. Watching for changes on Consul and running "slaveof" _and_ updating the config file feels fragile to me, though it is another possible solution.
Again, I feel like a good solution for this problem must involve a code change. I wanted to start a discussion about options #1 and #2.
Having Redis support SIGHUP for config reload would probably help a whole lot of other use cases, such as changing "save", "maxmemory", and so on.
When configuring Redis with a configuration management tool, using "config save" is not a viable option, and implementing both config file rewrite and "CONFIG SET" feels like (a) a duplication of work and (b) fragile.

On Thu, Mar 19, 2015 at 6:27 AM, The Baldguy <ucn...@gmail.com> wrote:
What about using Consul's watch and handler feature to have a small agent which, upon a change in Consul, uses the Redis API on the slave to point it to the new master and then does a config save? No restart, no hosts rewrite, no proxy, and no mucking in Redis code.




--
Omri Bahumi
System Architect, EverythingMe
om...@everything.me | (+972) 52-4655544 | @omribahumi

Bill Anderson

Mar 19, 2015, 11:50:18 AM
to redi...@googlegroups.com
On Mar 19, 2015, at 02:07, Omri Bahumi <om...@everything.me> wrote:

I don't like this for two reasons:
  1. I'm using other methods for configuration management, using "config save" is bad for me


So don't do the "config save" part.

  2. Watching for changes on Consul and running "slaveof" _and_ updating the config file feels fragile to me, though it is another possible solution.
Watching Consul for changes would be no less fragile than using Consul to manage the DNS entries in the first place. Indeed, it would be less fragile than setting up additional load balancers, local caches, or especially modifying Redis code to alter lookups and/or reload its config. It is less fragile, by virtue of fewer moving parts, than having a config management tool update it. It is the least amount of effort needed to get a solution, and it is the correct solution for the scenario you describe.

The key difficulty here is managing Redis via config file changes with a tool not made for it. Given that Redis can already be managed using the API and config save, I always recommend against going the other route the moment you introduce master/slave replication and use anything to dynamically fail over or alter slaves based on the availability of their master.

Historically, we in the *NIX world have used the file-and-reload method, and it shows in how we develop and what we use to manage systems. Redis offers us a more robust method, but coming from a different paradigm it is a bit alien and can, as you say, feel fragile. Yet reloading a config file can be quite fragile too. We generally don't think about it that way because "it is the way we do things" - it is in our comfort zone.

If you put the same question to someone who comes from, for example, the world of Cisco routers, where the config is managed via an API and the router itself saves any changes, what you are planning would sound fragile to them.

I was building SaltStack modules for managing Redis when the relative complexity of file-first really hit me. I wound up having the module talk to Redis to tell it what to do, and having Redis manage its own config file instead. My Redis config management life became much easier at that point. Not to mention that changes become orders of magnitude faster, and that speed likely matters: if your slaves are under such load that restarting the daemon is a problem, I'd suspect waiting around for config management, then a reload, then a SYNC, is likely to be problematic as well.

Again, I feel like a good solution for this problem must involve a code change. I wanted to start a discussion about options #1 and #2.
Having Redis support SIGHUP for config reload would probably help a whole lot of other use cases, such as changing "save", "maxmemory", and so on.

While I'm not fundamentally opposed to a reload, both of those can already be changed at runtime. There are very few settings which cannot be, and some of those should not (or cannot) be changed by a mere reload; where to store your PID file comes to mind. So does activerehashing, but Salvatore said he would change that, as it should be configurable via the API.


When configuring Redis with a configuration management tool, using "config save" is not a viable option, and implementing both config file rewrite and "CONFIG SET" feels like (a) a duplication of work and (b) fragile.

CONFIG SET and CONFIG REWRITE are rock solid on their own, and in use daily on thousands of instances. There is nothing fragile about them. Trying to manage the config file from two different tools is asking for trouble and partially reimplements what Redis already does. This is why I always recommend you don't manage Redis "file first" but rather the same way you manage, for example, a router: make changes to the running config and, when certain of them, save them.

This discussion is actually about two different things. Redis being able to reload its config is one item; managing slaves during a master failover is a different discussion with different requirements.

Ultimately in the situation you described the proper place for updating a slave's master when the master changes IPs is in whatever tool you are using, homegrown or not. That tool is by definition the source of authority. When the master changes you want only that change to be made. By talking directly to the slave and changing only that setting you can have that. By running it through a "third party" such as Puppet, Saltstack, etc. you introduce the possibility for other changes to corrupt, or even break, the failover process. 

All that said, if you really want a simple route to a solution which keeps file-first config, you could use inotify to watch the config file, pull out the settings, and do a CONFIG SET for each of them to get the same effect, without waiting on config reload support or worrying about a reload causing other issues. Then you retain file-first semantics.
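That watcher idea can be sketched as follows; the parsing here is a deliberate simplification of the real redis.conf grammar (no quoting, no include handling), and the apply step is left as a comment rather than asserted as the exact tooling:

```python
# Sketch of the inotify + CONFIG SET idea: parse the settings out of a
# redis.conf-style file and apply each one over the API. A real watcher
# would trigger this from an inotify event (e.g. inotify-tools or a
# Python inotify binding); the parsing is a simplification.
def parse_redis_conf(text):
    """Return {keyword: value} for 'keyword value...' lines."""
    settings = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue  # skip blanks and comments
        keyword, _, value = line.partition(" ")
        settings[keyword.lower()] = value.strip()
    return settings

sample = """\
# illustrative fragment
maxmemory 2gb
save 900 1
"""
for keyword, value in sorted(parse_redis_conf(sample).items()):
    # a real agent would run: redis-cli config set <keyword> <value>
    print("config set", keyword, value)
```

The file stays the source of truth, while the running instance is updated without a restart or a reload feature.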