consul-replicate: global scope values in consul K/V and full mesh replication across DCs


Petr Kriz

Oct 13, 2016, 9:00:41 AM
to Consul
Hi.

I'm currently investigating the possibility of using Consul's K/V store as the source for our application-related configuration data. The problem I'm facing stems from our specific use case in a multi-datacenter environment. We'd like to treat some of the data as globally scoped: writable to the store from any of the datacenters and replicated to every other one automatically. I was able to achieve this with consul-replicate, but only by including the source consul datacenter as one of the keys in our subtree structure. This was crucial to prevent deletion of keys already present in the destination (whether stored locally or replicated from a different datacenter).

The solution of having the replication source datacenter as a key in the tree works nicely, but processing the data on clients (e.g. consul-template) is now quite complicated. It forces me to start up a client for every available datacenter and to create configuration files separately for each of the datacenters as well.

Is there another way to achieve full-mesh replication of a specific K/V store subtree across multiple DCs without overwriting or deleting data already present in the destination?

I might have missed something or I might be trying to use consul for something it's not intended for, so please point that out if that's the case. ;)

Thanks for the help.

Petr Kriz

David Adams

Oct 13, 2016, 9:46:04 AM
to consu...@googlegroups.com
Consul is not intended to be a multi-master database, so from that POV what you are trying to do is not possible. My own solution to this need is to only perform writes to a single datacenter, and replicate from that datacenter to all the rest. For config-management purposes this works just fine.
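Concretely, the single-writer pattern can be sketched as a consul-replicate prefix configuration running in every non-primary datacenter (a minimal sketch; the DC name "dc1" and the "global" prefix are placeholders, not anyone's actual config):

```hcl
# Sketch: runs in each non-primary DC; all writes go to dc1 only.
# consul-replicate then mirrors dc1's subtree into the local DC.
prefix {
  source      = "global@dc1"
  destination = "global"
}
```

Because only one DC is ever written to, the replicated subtree in every other DC can safely be overwritten wholesale, and no per-DC key partitioning is needed.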

I don't really understand your issues with consul-template. Can you share your config, kv layout, and what you are trying to achieve?

--
This mailing list is governed under the HashiCorp Community Guidelines - https://www.hashicorp.com/community-guidelines.html. Behavior in violation of those guidelines may result in your removal from this mailing list.
 
GitHub Issues: https://github.com/hashicorp/consul/issues
IRC: #consul on Freenode
---
You received this message because you are subscribed to the Google Groups "Consul" group.
To unsubscribe from this group and stop receiving emails from it, send an email to consul-tool+unsubscribe@googlegroups.com.
To view this discussion on the web visit https://groups.google.com/d/msgid/consul-tool/4fe45e2b-2731-4a17-be01-33fc8e3921b5%40googlegroups.com.
For more options, visit https://groups.google.com/d/optout.

Petr Kriz

Oct 19, 2016, 3:52:35 AM
to Consul
On Thursday, October 13, 2016 at 3:46:04 PM UTC+2, David Adams wrote:
Consul is not intended to be a multi-master database, so from that POV what you are trying to do is not possible. My own solution to this need is to only perform writes to a single datacenter, and replicate from that datacenter to all the rest. For config-management purposes this works just fine.

I was thinking about this as well, but that would make that DC a single point of failure when it goes down. This is unacceptable for us; we have many DCs all around the world and we treat them as equals in our service and infrastructure design.


I don't really understand your issues with consul-template. Can you share your config, kv layout, and what you are trying to achieve?

Well, let's take the Icinga (version 1) monitoring tool as the example. We currently depend heavily on Puppet exported resources and their built-in support for nagios_* configuration objects. But it's starting to take ages to reconfigure monitoring this way on each of the Icinga cores, with more and more resources to collect and realize there... It's therefore my priority to find a different, much faster solution for our monitoring configuration management.

Please also note that we've got many Icinga core instances around the world: most DCs share one, but there are also several instances within a single DC. We need to treat the configuration as global. Most of the objects target the Icinga core within the same DC, but some target another instance (or instances) elsewhere in the world.

I've ended up with the following K/V structure for the Icinga configuration exports to consul. It was crucial to include the source consul DC as one of the keys, to prevent subtree overwrites by consul-replicate. I've also included a tag which I use to realize the configuration in the right place, i.e. to have it loaded by the correct Icinga core instance. (I'm including only two DCs and only the nagios_host object in the example, for simplicity.)

K/V structure for Icinga 1:
- global/CONSUL_DC/icinga/ICINGA_INSTANCE_TAG/host/NODE_FQDN/data/...

consul-replicate sources:
- dc1: global/dc2@dc2
- dc2: global/dc1@dc1
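In consul-replicate HCL terms, the dc1 instance's configuration looks roughly like this (a sketch based on the source list above; only the dc2 prefix is shown, and further DCs would each add another prefix block):

```hcl
# Sketch: consul-replicate running in dc1 mirrors only dc2's subtree.
# dc1's own "global/dc1/..." keys are never touched by the replicator,
# so its delete/overwrite pass cannot remove locally written data.
prefix {
  source      = "global/dc2@dc2"
  destination = "global/dc2"
}
```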


consul-template runs on each of the Icinga cores, creating all the configuration files necessary for that instance. This includes specific prefixes and configuration object files for each of the nagios objects and for each of the consul DCs too:

template {
  source = "/etc/consul-template/templates/nagios_host_dc1.ctmpl"
  destination = "/etc/nagios/nagios_host_consul_dc1.cfg"
  perms = 0600
}

template {
  source = "/etc/consul-template/templates/nagios_host_dc2.ctmpl"
  destination = "/etc/nagios/nagios_host_consul_dc2.cfg"
  perms = 0600
}

Each template itself is then hooked to a specific subtree within the same consul DC and its own ICINGA_INSTANCE_TAG via the tree function:

/etc/consul-template/templates/nagios_host_dc1.ctmpl:
 {{- range $node_fqdn, $data := tree "global/dc1/icinga/ICINGA_INSTANCE_TAG/host/@dc1" | byKey -}}
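Filled out, one such template might look roughly like the following sketch. The object attributes and the keys under each node's data/ subtree are placeholders for whatever a node actually exports; the range over byKey groups the flat key list by the top-level directory, i.e. by NODE_FQDN:

```
{{- range $node_fqdn, $pairs := tree "global/dc1/icinga/ICINGA_INSTANCE_TAG/host/@dc1" | byKey -}}
define host {
  host_name  {{ $node_fqdn }}
{{- range $pairs }}
  {{ .Key }}  {{ .Value }}
{{- end }}
}
{{ end -}}
```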

Now, there are many of these nagios objects being managed for each of the dozens of DCs... it adds up to a lot of templates and config files. But Puppet is in charge here, creating the consul-template configuration dynamically, so it's not a real concern for us right now.

