[etcd node] x 3 <-----> [k8s node] x N
Where:
* [k8s node] is a machine that contains all "master" components + kubelet, ingress, proxy, and pods
* All k8s nodes have access to a k8s-supported NAS
* N is any number greater than 1
* Each etcd node uses DAS and has an offline backup
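For concreteness, a 3-node external etcd cluster along these lines might be bootstrapped with flags like the following (names and IPs are illustrative; TLS flags omitted for brevity — see the security discussion further down):

```
etcd --name etcd-1 \
  --data-dir /var/lib/etcd \
  --listen-peer-urls https://10.0.0.1:2380 \
  --listen-client-urls https://10.0.0.1:2379 \
  --advertise-client-urls https://10.0.0.1:2379 \
  --initial-advertise-peer-urls https://10.0.0.1:2380 \
  --initial-cluster etcd-1=https://10.0.0.1:2380,etcd-2=https://10.0.0.2:2380,etcd-3=https://10.0.0.3:2380 \
  --initial-cluster-state new \
  --initial-cluster-token my-etcd-cluster
```

Each of the three nodes gets the same --initial-cluster value, with its own --name and URLs.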
What is your experience with this setup? I'm particularly interested in its stability and performance.
I'd also be interested to hear about experiences with different deployments.
Sharing my initial thoughts on HA k8s outside the cloud:
https://www.relaxdiego.com/2017/08/hakube.html
--
You received this message because you are subscribed to the Google Groups "Kubernetes user discussion and Q&A" group.
I'm curious now about the observed performance/stability differences between consistent reads on/off. If anyone else has some insights on that matter, please do share. Thanks!
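For context on what "on/off" means here: if I remember right, the kube-apiserver flag in question is --etcd-quorum-read — with it off, reads can be served from a follower's possibly-stale local state; with it on, every read goes through the Raft leader. You can get a rough feel for the latency difference directly with etcdctl's (v3 API) --consistency flag; endpoint and key below are illustrative:

```
# Linearizable (quorum) read: routed through the leader, never stale.
ETCDCTL_API=3 etcdctl --endpoints=https://10.0.0.1:2379 \
  get /registry/foo --consistency=l

# Serializable read: answered locally, lower latency, may be stale.
ETCDCTL_API=3 etcdctl --endpoints=https://10.0.0.1:2379 \
  get /registry/foo --consistency=s
```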
Regards,
Mark
> One thing that I did not realise initially is that it is absolutely vital to be diligent about securing the etcd peer and client communication. In a single-node setup you can get away with binding to localhost, but if you put etcd on the network and do not require authentication anyone who can reach it can subvert any and all Kubernetes authorization. You probably also don't want to use the same CA as for Kubernetes here. Only the kube-apiserver needs etcd client access. For the same reason, you should not ever use this etcd cluster for anything else. Run a new cluster inside of Kubernetes instead.
+1! We're using an internal PKI setup for all our intra-cluster communication.
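In case it helps anyone, here's a minimal sketch of cutting a dedicated etcd CA plus a kube-apiserver client cert with plain openssl — all file names and CNs are illustrative, and a production setup would more likely use cfssl or similar:

```shell
# Dedicated CA for etcd only -- deliberately separate from the Kubernetes CA,
# per the advice above.
openssl genrsa -out etcd-ca.key 4096
openssl req -x509 -new -key etcd-ca.key -sha256 -days 3650 \
  -subj "/CN=etcd-ca" -out etcd-ca.crt

# Client cert for kube-apiserver, the only component that should talk to etcd.
openssl genrsa -out apiserver-etcd-client.key 2048
openssl req -new -key apiserver-etcd-client.key \
  -subj "/CN=kube-apiserver-etcd-client" -out apiserver-etcd-client.csr
openssl x509 -req -in apiserver-etcd-client.csr \
  -CA etcd-ca.crt -CAkey etcd-ca.key -CAcreateserial \
  -days 365 -sha256 -out apiserver-etcd-client.crt
```

etcd would then run with --client-cert-auth and --trusted-ca-file=etcd-ca.crt, so only certs signed by this CA — not the Kubernetes CA — are accepted on the client port.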