So "there is no one way" ... "all opinions are my own" ... "blah blah blah".
OK ... so there are a number of ways to handle service discovery. As you correctly mentioned, you can leverage Consul's DNS capabilities as an easy way to resolve the server name/IP, which also happens to be the easiest method IMO, so let's run with that.
In order to do this, we first need to make sure your DNS resolves "service.consul". As you're likely already aware, in a typical Docker configuration, Docker uses its own internal name resolution for discovery and falls back to the DNS servers configured on the host when that fails. This means that as long as your Docker node can resolve "service.consul", so can your containers (unless they're explicitly configured to look somewhere else).
I personally borrowed from HashiCorp's "Million Container Challenge" in my own design:
https://github.com/hashicorp/c1m, but you don't have to do it that way. In that design, each Docker node has Consul installed locally, plus dnsmasq configured with conditional forwarding to it. So basically anything ending in ".consul" forwards to the local Consul agent at
127.0.0.1:8600, and everything else follows the normal DNS routes. Again, that's just me following HashiCorp's design: if you'd prefer setting up conditional forwarders on your DNS servers instead, that will work perfectly fine too.
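For reference, the dnsmasq side of that conditional forwarding is basically a one-liner. This is just a sketch, assuming the Consul agent's DNS interface is listening on its default port 8600 on the same host (the file path is only an example):

```
# /etc/dnsmasq.d/10-consul  (example path)
# Forward any query ending in .consul to the local Consul agent's DNS interface;
# everything else follows the normal resolvers.
server=/consul/127.0.0.1#8600
```

After restarting dnsmasq you can sanity-check it from the Docker host with something like `dig wiki.service.consul`.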
Once we know that containers can resolve "service.consul", the next step is to make sure your container publishes itself to Consul so it can be discovered. Here Nomad can help, as you can define not only the service but also its health checks right from the Nomad job using the service stanza:
https://www.nomadproject.io/docs/job-specification/service.html. I highly recommend configuring health checks as well, as they allow Consul to enable/disable service instances based on their responses, which will help with the final part of your question coming up.
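As a rough illustration, a service stanza with a health check inside a Nomad task might look like this (the service name, port label, and health endpoint are made up for the example):

```hcl
service {
  name = "wiki"
  port = "http"          # port label defined in the task's network/resources block
  tags = ["proxy"]       # tag we can key off of later for load-balancer discovery

  check {
    type     = "http"
    path     = "/health" # hypothetical health endpoint exposed by the container
    interval = "10s"
    timeout  = "2s"
  }
}
```

With that in place, only instances passing the check are returned for "wiki.service.consul".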
So now we have a resolvable Consul system that will respond with the IP of a known-healthy container when queried: "wiki.service.consul" will work as long as my "wiki" service is healthy somewhere. Cool, right? But we can also take it a step further. You asked about load balancers, and this really gets into the next level of Consul integration: the API. There are a couple of routes you can play with here. Systems like Traefik and Fabio are load balancers that can talk natively to Consul and dynamically update as containers come online/offline. If you have a need/desire to stay with more traditional load balancers like HAProxy, you can use tools like consul-template to dynamically generate and maintain their configuration. As an example of this, I actually have this little guy stored:
https://github.com/Justin-DynamicD/haproxy-consultemplate. This is a Consul template I wrote that automatically updates HAProxy with any service in Consul that has been tagged "proxy". It also has a few extra K/V lookup tricks that let me override the defaults. I really should update the readme there ... it was kinda written for me.
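To give a flavor of the consul-template approach, here's a simplified sketch of an HAProxy backend template (this is not the template from that repo, and "wiki" is just an example service name):

```
# haproxy.cfg.ctmpl -- rendered by consul-template, which re-renders the file
# and can reload HAProxy whenever the Consul catalog changes
backend wiki_backend
    balance roundrobin{{ range service "wiki" }}
    server {{ .Node }} {{ .Address }}:{{ .Port }} check{{ end }}
```

You'd then run it with something along the lines of `consul-template -template "haproxy.cfg.ctmpl:haproxy.cfg:systemctl reload haproxy"`, so every healthy "wiki" instance registered in Consul gets a server line and HAProxy is reloaded when the list changes.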
I hope this gives you some ideas on how to get things working.