So I just realised that Consul and Nomad use the word 'datacenter' to mean subtly different things, so I'm updating my architecture to reflect (my understanding of) the difference. I just want to check what I'm doing actually makes sense.
Everything is running on AWS. I've got multiple 'Clouds' (Develop, Test, Production), each of which contains one or more 'Clusters', each running in an AWS Region (us-east-1, ap-southeast-2, etc). I use UserData to dynamically build the Consul and Nomad configuration on first boot of each EC2 instance.
My plan is that each Cluster will have three Control servers (running both a Consul server and a Nomad server) and one or more Agent servers (running both a Consul agent and a Nomad client agent). Consul will be configured on each Cluster as a single 'datacenter', named "&lt;cloud&gt;-&lt;region&gt;" - so "test-us-east-1" or "prod-ap-southeast-2". Nomad will be configured on each Cluster as a single 'region' with the same name.
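For concreteness, here's roughly what I'm planning to have UserData template out on a Control server in the Test cluster in us-east-1 (the names and values are just my examples):

```hcl
# consul.hcl - Consul server config
datacenter       = "test-us-east-1"   # <cloud>-<region>
server           = true
bootstrap_expect = 3                  # three Control servers per Cluster

# nomad.hcl - Nomad server config
region     = "test-us-east-1"         # same name as the Consul datacenter
datacenter = "us-east-1a"             # per-instance, from the availability zone
server {
  enabled          = true
  bootstrap_expect = 3
}
```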
On boot, each EC2 instance will look for other EC2s tagged as Control servers, writing their private IPs into the Consul configuration as retry_join targets and their public IPs into retry_join_wan. Each instance will also look up its own availability zone and set that as its Nomad 'datacenter'.
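The fragments that boot-time lookup would generate look something like this (all IPs below are placeholder example values):

```hcl
# consul.hcl fragment - generated by UserData at boot
retry_join     = ["10.0.1.10", "10.0.2.11", "10.0.3.12"]            # Control servers' private IPs
retry_join_wan = ["203.0.113.10", "203.0.113.11", "203.0.113.12"]   # Control servers' public IPs

# nomad.hcl fragment - generated by UserData at boot
datacenter = "us-east-1a"   # this instance's availability zone
```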
Once the 'Clusters' (datacenters in Consul-speak, regions in Nomad-speak) are all running, I'll use the Consul API '/v1/agent/join/&lt;public IP of one server from a different cluster&gt;?wan=1' to hook up the Clusters within a 'Cloud' so that they can see each other. We're considering hooking up all of the Clusters from all the Clouds into a single WAN - hence the cloud name being part of the datacenter/region name (which also disambiguates the cluster names).
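As a one-off command against the local agent on a server in one cluster, that join step would look like this (203.0.113.50 is a placeholder for a Control server's public IP in the other cluster):

```shell
# Ask the local Consul agent to join the WAN pool of another datacenter
curl -X PUT "http://127.0.0.1:8500/v1/agent/join/203.0.113.50?wan=1"
```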
Does that make sense? Is it a sensible and efficient way to configure things? Am I using the words right, or is my understanding still flawed?