We schedule the kube master components with fleet, and then register the kube master with an ELB in AWS so there's always a stable DNS name for it. The kubelet and kube-proxy components can then point at the ELB to reach the apiserver. If you don't have an API-driven load balancer at your disposal, you could do some tricks to bind a roaming/floating IP to your kube master, or something similar.
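To make that concrete, here's a rough sketch of what a fleet unit for the apiserver might look like, plus how a kubelet could be pointed at the ELB. The unit name, paths, metadata, and the ELB DNS name are all illustrative, not our exact config, and the flag spellings reflect the Kubernetes release you're running:

```ini
# kube-apiserver.service — minimal fleet unit sketch (hypothetical paths/flags)
[Unit]
Description=Kubernetes API Server
After=etcd.service

[Service]
ExecStart=/opt/bin/kube-apiserver \
  --etcd_servers=http://127.0.0.1:4001 \
  --address=0.0.0.0 \
  --port=8080
Restart=always

[X-Fleet]
# pin to machines tagged as masters; the metadata key is up to you
MachineMetadata=role=master
```

The ELB fronts whatever machine fleet lands this on, so the kubelet/kube-proxy units on the worker nodes just reference the ELB's DNS name (again, hypothetical), e.g. `--api_servers=http://internal-kube-master-123.us-east-1.elb.amazonaws.com:8080`, and never need to know which machine is actually the master.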
Another related issue: Kubernetes doesn't yet have the concept of taking a machine out for maintenance (gracefully moving all of its pods to another node), which some might consider a prerequisite for using locksmithd to reboot-and-update your CoreOS nodes. It'll happen eventually :)