I've spent a fair amount of time trying to do this.
I also wanted to include password authentication and encryption using spiped as redis recommends.
I got it all running after also adding predixy as a proxy, because redis-cluster can't be reached through the NAT layer used by K8S networking (I'm using flannel): even with a cluster-aware redis client out on the network, any call that doesn't hit the right redis instance/pod on the first try gets a MOVED redirect telling the client to retry at an IP address that's internal to the K8S cluster. For it to work natively, you'd need an outside-routable IP space within the cluster. Gah.
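To make that concrete, here's a minimal sketch of the failure mode with a cluster-aware client (redis-py >= 4.1 assumed; the hostname and port are hypothetical stand-ins for one redis pod exposed outside the cluster):

    # Bootstrapping works, because the exposed endpoint is reachable...
    from redis.cluster import RedisCluster

    rc = RedisCluster(host="node1.example.com", port=31000)

    # ...but the slot map and any MOVED redirects the client receives point at
    # internal pod IPs that a workstation can't route to, so commands that land
    # on a different node just hang or time out.
    rc.set("foo", "bar")
    print(rc.get("foo"))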
There are a handful of redis proxy solutions, but the one I got working was predixy.
So I ultimately ended up with a statefulset of six pods: three redis masters and three slaves, with all six also running spiped and predixy. A nodeport service presented the spiped front end to clients outside the cluster, and a DNS round-robin load-balanced across the worker nodes (this is all on-prem, BTW, so no option of an AWS load balancer or the like).
It all appeared to work. I could connect to redis with redis-cli on a workstation, through an spiped tunnel to the round-robin name. But as soon as I tried to set/get values or subscribe/publish messages, I got intermittent delays, outright timeouts, and dropped messages.
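For reference, this is roughly how I was exercising it from the workstation, just in Python rather than redis-cli (a sketch only: the hostname, NodePort, keyfile path, and password are hypothetical; spiped's -e/-s/-t/-k flags set up the encrypting client end of the tunnel):

    import subprocess
    import time
    import redis

    # Client-side spiped: listen locally, encrypt, and forward to the NodePort
    # fronting the in-cluster spiped/predixy pods. The keyfile has to be the
    # same pre-shared key used by the spiped containers in the statefulset.
    subprocess.Popen([
        "spiped", "-e",
        "-s", "[127.0.0.1]:6379",           # local end of the tunnel
        "-t", "node1.example.com:31000",    # NodePort / round-robin DNS name
        "-k", "keyfile",
    ])
    time.sleep(1)  # give the tunnel a moment to come up

    # Plain (non-cluster-aware) client: predixy hides the cluster topology,
    # so this looks like talking to a single redis server.
    r = redis.Redis(host="127.0.0.1", port=6379, password="changeme")
    r.set("foo", "bar")          # these set/get and publish calls are where
    print(r.get("foo"))          # the intermittent delays and timeouts showed up
    r.publish("chan", "hello")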
For now I've fallen back to a regular single VM running redis and spiped.
I'd be delighted to hear from anyone else who's made this work, especially if they've also included any kind of authentication and encryption.
Randy in Seattle