For context, we run Consul in AWS and package the consul binary into a .deb on our local APT repo. To upgrade Consul on the servers, I've written an Ansible playbook that prompts for the datacenter to upgrade, matches hosts based on AWS tags, and then cycles through them one at a time: stop Consul, update the APT package, start Consul, wait a few seconds, run some sanity checks that the cluster is alive and that some expected minimum number of WAN and LAN members are present, then move on to the next server in the cluster. Obviously you'd do this slightly differently if you're using Docker, immutable cloud servers, or some other architecture, but the main point is that it works well to stop a server, upgrade/replace it, start Consul back up, check that everything's working, then move on to the next. I've gone through this process twice since we polished it, and it has gone very smoothly both times.
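The server playbook looks roughly like this. This is a minimal sketch, not our actual playbook: the tag names, package name, pause length, and member-count thresholds are all illustrative placeholders, and it assumes an EC2 dynamic inventory that exposes `tag_*` groups.

```yaml
# Sketch of the rolling Consul server upgrade; tag names, package
# name, and member thresholds below are illustrative placeholders.
- hosts: tag_role_consul_server
  serial: 1            # one server at a time, to preserve quorum
  tasks:
    - name: Stop Consul
      service: name=consul state=stopped

    - name: Upgrade the consul package from our APT repo
      apt: name=consul state=latest update_cache=yes

    - name: Start Consul
      service: name=consul state=started

    - name: Give the agent a few seconds to rejoin
      pause: seconds=10

    - name: Sanity-check LAN membership
      shell: consul members | grep -c alive
      register: lan_members
      failed_when: lan_members.stdout|int < 5   # expected minimum (placeholder)

    - name: Sanity-check WAN membership
      shell: consul members -wan | grep -c alive
      register: wan_members
      failed_when: wan_members.stdout|int < 2   # expected minimum (placeholder)
```

With `serial: 1`, a failed sanity check stops the play before the next server is touched, which is exactly the behavior you want mid-upgrade.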
As for upgrading the agents, the process is basically the same, but I do some more sanity checking and allow more than one machine to upgrade at a time. We don't yet have any hard production dependencies on Consul, so taking the local agent down for a few seconds can be done without much thought about the apps; if your apps rely on the agent being 100% available, you'll want to take more care with agent upgrades.
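In Ansible terms, the only structural change for the agent playbook is loosening the batch size. Again a sketch: the group name and the 25% batch size are arbitrary examples, not our actual values.

```yaml
# Agent playbook sketch: upgrade a batch of machines at once
# instead of one at a time; the percentage is an example value.
- hosts: tag_role_consul_agent
  serial: "25%"
  tasks:
    # ...same stop / upgrade / start / sanity-check steps as the
    # server playbook, plus the extra agent-level checks...
```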
Our plan for the future is to launch every server type we can using Amazon's Auto Scaling Groups (without the load-scaling features turned on--just the maintain-cluster pieces) and immutable AMIs that we bake when updates are needed and cycle into place. In that pattern, I expect we would add an extra server running the new version and cycle out the old ones one at a time, as described by Chris Stevens. Going the plus-one route is safer, of course.