Unless things have changed recently, the default kube-dns result for a standard k8s service is a single IP: the virtual IP (ClusterIP) of the service. The gRPC client therefore opens just one socket connection and routes all requests over it. Since kube-proxy balances that virtual IP at layer 4 (per connection, not per request), every RPC on that single long-lived HTTP/2 connection lands on the same pod. This can result in very poor load balancing, especially if you have unbalanced clients (e.g. a small number of clients that generate the majority of RPC traffic).
If, on the other hand, you use "headless" services in k8s (clusterIP: None), the DNS query returns the individual pod IPs. The gRPC client can then maintain multiple connections, one to each destination pod, provided you also configure a round_robin load-balancing policy (with the default pick_first, it would still use only one address). However, I am not sure how responsive this will be to topology changes: as pod instances are auto-scaled, or killed and rescheduled, they move around and change IP address. It would require disabling DNS caching and making sure the service host name is re-resolved/polled regularly.
Another solution is to use a custom gRPC resolver that talks to the k8s API to watch the service topology and convey the results to the gRPC client. For Go, this is implemented in an open-source package:
github.com/sercand/kuberesolver
(Most of my experience is with Go, so your mileage may vary with other runtimes. That said, I believe the various implementations behave largely the same way.)