I'm running in an AWS VPC and my settings are like this:
I'm advertising the public IP of each Consul master.
I have port 8301 TCP/UDP open to 0.0.0.0/0 on all servers.
I have ports 8300-8302 TCP/UDP open on just the Consul masters, again to the world.
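For context, "advertising the public IP" is just the advertise flag; here's a minimal sketch of a master's launch command, assuming you pull the public IP from the EC2 metadata service (the data dir and datacenter name are placeholders):

    # Fetch this instance's public IP from the EC2 metadata service.
    PUBLIC_IP=$(curl -s http://169.254.169.254/latest/meta-data/public-ipv4)

    # Hypothetical master launch: gossip binds locally, but the public
    # IP is what gets advertised to the rest of the cluster.
    consul agent -server \
      -advertise="$PUBLIC_IP" \
      -bind=0.0.0.0 \
      -data-dir=/var/consul \
      -dc=us-east-1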
An alternative way to do this, which I'm going to move to shortly now that 0.5.0 is out:
Port 8301 TCP/UDP open to 10.0.0.0/16 (assuming that's your VPC's CIDR block) on all servers.
Port 8301 TCP/UDP open on all servers to the array of public IPs of your Consul servers (give them EIPs and use some method like https://github.com/skymill/aws-ec2-assign-elastic-ip to lock them to a set of EIPs).
Port 8300 TCP open on Consul masters to 10.0.0.0/16 + [array of all Consul servers in all your environments] (RPC calls over 8300 are forwarded, with the master nodes in the current datacenter acting as a proxy; see http://www.consul.io/docs/guides/datacenters.html).
Port 8301 TCP/UDP open to 10.0.0.0/16 on all Consul masters.
Port 8302 TCP/UDP open on the Consul masters to [array of all Consul servers in all your environments]. Again, EIPs are your friend here.
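If you script those rules, the AWS CLI version looks something like this (the security group ID and EIPs are placeholders; the 8300/8302 rules on the masters follow the same pattern):

    SG=sg-12345678   # hypothetical security group attached to all nodes

    # 8301 (Serf LAN gossip) from inside the VPC, TCP and UDP.
    aws ec2 authorize-security-group-ingress --group-id "$SG" \
      --protocol tcp --port 8301 --cidr 10.0.0.0/16
    aws ec2 authorize-security-group-ingress --group-id "$SG" \
      --protocol udp --port 8301 --cidr 10.0.0.0/16

    # 8301 from each Consul server's EIP.
    for ip in 203.0.113.10 203.0.113.11 203.0.113.12; do
      aws ec2 authorize-security-group-ingress --group-id "$SG" \
        --protocol tcp --port 8301 --cidr "$ip/32"
      aws ec2 authorize-security-group-ingress --group-id "$SG" \
        --protocol udp --port 8301 --cidr "$ip/32"
    done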
Encryption should pretty much always be on, in my opinion. All Consul masters advertise their public IP; all non-masters advertise their internal IP.
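Turning gossip encryption on is one key and one flag; a sketch, assuming you generate the key once and ship it to every agent out of band (the key and IPs below are placeholders):

    # Generate a shared gossip key once.
    consul keygen   # prints something like cg8StVXbQJ0gPvMd9o7yrg==

    # On a master: encrypt gossip and advertise the public EIP.
    consul agent -server -data-dir=/var/consul \
      -encrypt='cg8StVXbQJ0gPvMd9o7yrg==' \
      -advertise=203.0.113.10

    # On a non-master: same key, advertise the internal VPC address.
    consul agent -data-dir=/var/consul \
      -encrypt='cg8StVXbQJ0gPvMd9o7yrg==' \
      -advertise=10.0.1.25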
Two things that make this work:
1. RPC calls are proxied between Consul servers in different datacenters, so each Consul master only needs to be able to talk to [all Consul masters in all datacenters] + [all servers in the local datacenter]. See the join sketch after this list.
2. As of v0.5.0 you can define a service-specific IP for each service, so you no longer need to advertise the node's public IP just to expose one service (see the service definition sketch below): http://www.consul.io/docs/agent/services.html
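For point 1, the cross-datacenter path only exists once the masters have joined each other's WAN pool over 8302; a sketch with placeholder EIPs and datacenter names:

    # On a Consul master: join the masters in the other datacenters.
    consul join -wan 203.0.113.10 198.51.100.7

    # RPC for a remote datacenter is then proxied through the local
    # masters over 8300, e.g. querying another DC's catalog:
    curl http://localhost:8500/v1/catalog/nodes?dc=us-west-1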
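And for point 2, the per-service IP is just a field in the service definition; a hypothetical "web" service exposed on the node's EIP while the node itself advertises its internal IP (the name, port, and address are made up):

    # Drop a service definition into the agent's config directory.
    cat > /etc/consul.d/web.json <<'EOF'
    {
      "service": {
        "name": "web",
        "port": 80,
        "address": "203.0.113.10"
      }
    }
    EOF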