On 4/19/21 9:27 AM, jay vyas wrote:
> Hi folks: This might be a naive question... as I'm not privy to all the
> history around how kube-proxy deals with hostNetwork services and
> headless services and so on, but...
>
> *I guess my high level question is... how do people normally start
> kube-proxy with an in-cluster configuration? Isn't there a fundamental
> chicken-or-egg problem?*
Yes. You can't run kube-proxy with the (default) in-cluster
configuration. You have to either pass --master or --kubeconfig to give
it an actual apiserver name/IP to use, or else override the value of
KUBERNETES_SERVICE_HOST set by kubelet in its environment.
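A minimal sketch of what the --kubeconfig approach looks like in
practice, assuming an apiserver directly reachable at 192.0.2.10:6443
(that address, the file paths, and the cluster/user names here are all
hypothetical examples, not values from a real cluster):

```yaml
# kube-proxy kubeconfig pointing directly at the apiserver's real
# address, bypassing the in-cluster service VIP (values are examples).
apiVersion: v1
kind: Config
clusters:
- name: local
  cluster:
    server: https://192.0.2.10:6443
    certificate-authority: /etc/kubernetes/pki/ca.crt
users:
- name: kube-proxy
  user:
    tokenFile: /var/run/secrets/kubernetes.io/serviceaccount/token
contexts:
- name: default
  context:
    cluster: local
    user: kube-proxy
current-context: default
```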
> OK Great, now pods can access the apiserver through 10.0.96.1....
> however..... should kube-proxy *generally* be smart enough to write
> this rule *before* it actually connects to the apiserver?
But before it connects to the apiserver, it doesn't know what the
endpoints are...
More generally, if something goes wrong (e.g., an iptables-restore
failure), it seems like it's better to have kube-proxy talking to the
apiserver directly rather than depending on its own output.
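To make the circularity concrete: the service VIP only works because of
rules kube-proxy itself programs. A simplified sketch of the kind of
iptables rules involved (real chain names contain hashes, and the
addresses here are illustrative stand-ins, not from a real cluster):

```
# Illustrative only: traffic to the service VIP (10.0.96.1) is DNAT'ed
# to a real apiserver endpoint by rules kube-proxy itself maintains.
-A KUBE-SERVICES -d 10.0.96.1/32 -p tcp --dport 443 -j KUBE-SVC-APISERVER
-A KUBE-SVC-APISERVER -j KUBE-SEP-APISERVER
-A KUBE-SEP-APISERVER -p tcp -j DNAT --to-destination 192.0.2.10:6443
```

If kube-proxy itself depended on the VIP, it could never recover from a
state where these rules are missing or wrong.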
> - in the context of load balancing, you could have kube proxy itself
> access the apiserver through a loadbalanced endpoint, i.e.
> --api-servers=1.2.3.4, 2.3.4.5, ...
Someone brought up this idea recently... in the SIG meeting or in a KEP?
In theory, kube-proxy could be extended to allow you to pass multiple
apiserver IPs to it so it could try to loadbalance on its own.
However, kube-proxy isn't the only component that needs to do this.
E.g., if you are running kube-proxy in a pod, then kubelet also needs
to know the non-service-network IP(s) of the apiservers. So it is
better to have an external load balancer or a round-robin DNS name.
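For what the "loadbalance on its own" option would amount to, here is a
minimal sketch of client-side round-robin over a static list of
apiserver addresses (the type and addresses are hypothetical, purely
for illustration; this is not code from kube-proxy):

```go
package main

import "fmt"

// endpointPicker cycles through a fixed list of apiserver addresses,
// so a client could retry against the next one after a failure.
// This is an illustrative sketch, not an actual kube-proxy type.
type endpointPicker struct {
	addrs []string
	next  int
}

// pick returns the next address in round-robin order.
func (p *endpointPicker) pick() string {
	a := p.addrs[p.next%len(p.addrs)]
	p.next++
	return a
}

func main() {
	p := &endpointPicker{addrs: []string{"1.2.3.4:6443", "2.3.4.5:6443"}}
	for i := 0; i < 3; i++ {
		fmt.Println(p.pick())
	}
}
```

Even this toy version shows the problem: the address list has to come
from somewhere outside the cluster, which is why it's really a job for
the install/deployment layer.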
However, doing that entirely within Kubernetes is tricky, because of the
chicken-and-egg issues... This is more of an install/deployment tool
(kubeadm/openshift-install/etc.) sort of thing than a
kubelet/kube-proxy/kube-apiserver sort of thing.
-- Dan