I've been working on adding network policies to an existing application and have run into a few issues. I'm currently using the network policy capabilities within Google Kubernetes Engine.

My initial attempt was the following network policy, intended to allow communication between the pods in the cluster but nowhere else:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-internal
spec:
  podSelector: {}
  policyTypes:
  - Ingress
  - Egress
  egress:
  - to:
    - podSelector: {}
  ingress:
  - from:
    - podSelector: {}
```

The first issue I ran into is with liveness/readiness probes on pods. My initial policy doesn't seem to allow traffic from the kubelet; I'm guessing that's because it runs on the underlying host rather than as a pod. Adding an allowed CIDR range of 10.0.0.0/8 to the ingress rules (roughly the first sketch below) fixed the issue, but it is more permissive than I would ideally like. Is there a way to specifically whitelist traffic from the kubelet?

The other issue I ran into is that I wasn't able to find a way to allow traffic specifically to the Kubernetes master. This came up while trying to use kube-state-metrics with Prometheus. Is there a way to whitelist traffic specifically for the Kubernetes master? Running within GKE, whitelisting 10.0.0.0/8 didn't work, since the master nodes are managed separately and are not in the local network (though that makes the error message from kube-state-metrics a bit confusing; perhaps the `kubernetes` Service, which in GKE is just an endpoint for the master, is also subject to the network policy, and that's what is failing?). The second sketch below shows the shape of the egress rule I was hoping would work.
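For reference, the workaround I added looks roughly like this; the `ipBlock` entry is the broad 10.0.0.0/8 range mentioned above:

```yaml
ingress:
- from:
  - podSelector: {}
  - ipBlock:
      cidr: 10.0.0.0/8   # broad range that happens to cover the node/kubelet IPs
```

Narrowing the CIDR down to the cluster's node subnet would be tighter, but it still wouldn't be kubelet-specific.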
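And for the master question, this is the shape of egress rule I was hoping for; the CIDR here is purely hypothetical, since on GKE I don't know what range the managed master actually lives in:

```yaml
egress:
- to:
  - podSelector: {}
  - ipBlock:
      cidr: 203.0.113.10/32   # hypothetical master endpoint IP; unknown for a managed GKE master
```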
This doesn't sound like expected behavior. Can you share the command you used to create your cluster?
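In particular, I'd want to confirm that network policy enforcement was enabled on the cluster at creation time; something along these lines (the cluster name is a placeholder, and any other flags are incidental):

```sh
gcloud container clusters create my-cluster --enable-network-policy
```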
I tried this myself and found liveness/readiness probes to work as expected even with NetworkPolicy in place, so it might be an environmental difference.
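For what it's worth, my test was along these lines: the default-internal policy from the question, plus a pod like the sketch below (the name and image are placeholders; any pod with an HTTP liveness probe should do), and the probe kept passing.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: probe-test            # placeholder name
spec:
  containers:
  - name: web
    image: nginx              # any image serving HTTP on port 80
    ports:
    - containerPort: 80
    livenessProbe:
      httpGet:
        path: /
        port: 80
      initialDelaySeconds: 5
      periodSeconds: 10
```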