Brian,
This is my humble opinion, mostly with the intention of making your life extremely simple when using Consul. Here it is:
Something like consul-haproxy is typically a fit for externally exposed, load-balanced services with a single entry point that cannot change very often: usually DNS records with a not-so-small TTL. These are usually websites and externally exposed APIs where a single domain should point to all the nodes offering that service, and you cannot be certain that the consumer's resolver respects TTLs, or that there isn't caching along the way that could take up to 72 hours to clear. HAProxy is also a must when a well-known port has to be used, since SRV records aren't very widely supported.
However, inside your cluster you can use shorter TTLs, you can know that DNS TTLs are properly respected and that there are no DNS caches in the middle, and you can use SRV records in many cases. When a service goes down, many consumer implementations (e.g. the redis gem for Ruby) will handle the reconnection, and since you'd be using an FQDN (e.g. redis.service.cluster), the language runtime or sometimes the OS would respect the DNS TTL and resolution would happen again. Consul randomly gives you a record for any of the redis providers, effectively giving you a sort of internal load balancing that eliminates the need for HAProxy or similar. But do note that the port isn't standard and has to be read from the SRV record.
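To make that concrete, here is a minimal Ruby sketch of what a consumer does with Consul's DNS interface: query the SRV records for a service and pick one at random, reading the port from the record rather than assuming a well-known one. It assumes a local Consul agent answering DNS on 127.0.0.1:8600 (the default) and a service registered as "redis"; both are assumptions for illustration.

```ruby
require 'resolv'

# Pick one (host, port) pair at random from a list of SRV-style records.
# Each record carries its own (possibly non-standard) port, so the
# consumer must read it from the record instead of assuming 6379.
def pick_endpoint(records)
  r = records.sample
  [r.target.to_s, r.port]
end

# Querying Consul's DNS interface directly (assumed setup: a Consul agent
# serving DNS on 127.0.0.1:8600 and a service registered as "redis"):
#
# resolver = Resolv::DNS.new(nameserver_port: [['127.0.0.1', 8600]])
# records  = resolver.getresources('redis.service.consul',
#                                  Resolv::DNS::Resource::IN::SRV)
# host, port = pick_endpoint(records)
```

Doing the pick on every (re)connection is what gives you the cheap internal load balancing without any extra moving parts.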
To simplify your question: I think you need consul-haproxy mostly for publicly exposed services where the consumer cannot access the cluster's DNS (.consul domains), or where consumers cannot adapt to a change in the destination port. In all other cases you should use SRV records or a tool that queries Consul's registry, and ask yourself these questions before you use Consul DNS:
- Will the service consumer reconnect if something fails?
- Will my implementation/language automatically resolve the domain again, respecting the TTL?
- How could I enable the previous two behaviors if they're not 'natively' available?
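When the answer to the first two is "no", the third can often be answered with a small wrapper in the consumer itself. This is a sketch, not a library API: the retry loop re-resolves the name on every attempt, so a fresh DNS answer can point you at a healthy node. The service name and attempt count are illustrative.

```ruby
require 'resolv'

# Retry wrapper: on every attempt the name is resolved again, so a new
# DNS answer (respecting the TTL) can steer us to a different node.
def with_reconnect(name, attempts: 3, resolver: Resolv::DNS.new)
  tries = 0
  begin
    tries += 1
    address = resolver.getaddress(name).to_s
    yield address
  rescue StandardError
    retry if tries < attempts
    raise
  end
end

# Usage sketch (hypothetical connect step):
#
# with_reconnect('redis.service.cluster') do |addr|
#   # connect to addr here; raising triggers re-resolution + retry
# end
```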
It is true that there are cases where pointing to internal services without something like HAProxy can get complicated, but that usually has more to do with limitations in the service consumer. If you can work around those limitations in the consumers, it's much simpler for you: fewer services to deploy and manage, and a much better use of your resources. With Consul and adapted consumers you get simple and reliable load balancing very easily; if you now add HAProxy, you are in some way replicating part of that functionality and introducing more failure points. If things are still unclear, check an example (Ruby in this case, but similar solutions exist for other languages):
https://github.com/WeAreFarmGeek/diplomat — a gem that queries Consul's registry to get the relevant connection information, e.g. for a PostgreSQL DB.
I understand that modifying consumers can be difficult in several cases. In that scenario you could consider something simpler than many HAProxy instances and run routing containers (essentially very similar, but it looks much easier to me). Check this to get your head spinning a bit:
https://github.com/progrium/ambassadord — the tool in that link could actually run outside Docker as a compiled Go binary, but Docker lets you run it very easily. Since ambassadord can understand SRV records directly, it takes a lot of the HAProxy configuration off your hands, and I think it's just simpler to run a binary with a parameter or two than HAProxy with configuration files.
Hope this helped in some way.
Cheers