Issues configuring network policies

Aaron Taylor

Dec 6, 2017, 10:11:59 AM
to Kubernetes user discussion and Q&A
I've been working on adding network policies to an existing application and have run into a few issues. I'm currently using the network policy capabilities within Google Kubernetes Engine.

My initial attempt was the following network policy, intended to allow communication between the pods in the cluster but nowhere else:

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-internal
spec:
  podSelector: {}
  policyTypes:
  - Ingress
  - Egress
  egress:
  - to:
    - podSelector: {}
  ingress:
  - from:
    - podSelector: {}

The first issue I ran into is with liveness/readiness probes on pods. My initial policy doesn't seem to allow traffic from the kubelet, presumably because the kubelet runs on the underlying host rather than as a pod. Adding an allowed CIDR range of 10.0.0.0/8 to the ingress rules fixed the issue, but that is more permissive than I would ideally like. Is there a way to whitelist traffic specifically from the kubelet?
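
For reference, this is roughly what the policy looks like with that workaround in place; it's just the original policy plus an ipBlock entry, and the 10.0.0.0/8 range is simply what worked for me:

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-internal
spec:
  podSelector: {}
  policyTypes:
  - Ingress
  - Egress
  egress:
  - to:
    - podSelector: {}
  ingress:
  - from:
    - podSelector: {}
    # workaround: also allow the node network so kubelet probes succeed,
    # though this is broader than I'd like
    - ipBlock:
        cidr: 10.0.0.0/8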

The other issue I ran into is that I wasn't able to find a way to allow traffic specifically to the Kubernetes master. This came up while trying to use kube-state-metrics with Prometheus. Is there a way to whitelist traffic specifically for the Kubernetes master? Running within GKE, whitelisting 10.0.0.0/8 didn't work, since the masters are managed separately and are not on the local network. That makes the following error message from kube-state-metrics a bit confusing, though; perhaps the kubernetes service, which is just an endpoint for the master in GKE, is also subject to the network policy, and that's what is failing?

F1206 14:46:57.274207       1 main.go:187] Failed to create client: ERROR communicating with apiserver: Get https://10.123.123.1:443/version: dial tcp 10.123.123.1:443: i/o timeout
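
For completeness, this is the kind of egress rule I was hoping would work. The pod selector label here is hypothetical, and 10.123.123.1 is just the kubernetes service IP taken from the error above; since the GKE masters actually sit outside the local network, I suspect this alone may not be enough:

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-apiserver-egress
spec:
  # hypothetical label for the kube-state-metrics pods
  podSelector:
    matchLabels:
      app: kube-state-metrics
  policyTypes:
  - Egress
  egress:
  - to:
    # the kubernetes service IP from the error message above
    - ipBlock:
        cidr: 10.123.123.1/32
    ports:
    - protocol: TCP
      port: 443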

Daniel Nardo

Dec 11, 2017, 2:37:07 PM
to Kubernetes user discussion and Q&A


On Wednesday, December 6, 2017 at 7:11:59 AM UTC-8, Aaron Taylor wrote:

> The other issue I ran into is that I wasn't able to find a way to allow traffic specifically to the Kubernetes master. This came up while trying to use kube-state-metrics with Prometheus. Is there a way to whitelist traffic specifically for the Kubernetes master? Running within GKE, whitelisting 10.0.0.0/8 didn't work, since the masters are managed separately and are not on the local network.

This part I can try to answer. In GKE, the kubelets communicate with the master over the external IP addresses of the master/nodes. Network policy won't apply to the master: you can block the traffic on the outbound side, from the nodes to the master, but you cannot currently apply policy on the inbound side, at the master itself. I think you already alluded to it, but to restrict ingress to the master itself you need master authorized networks: https://cloud.google.com/kubernetes-engine/docs/how-to/authorized-networks
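
If it helps, that can be enabled from gcloud with something like the following, where the cluster name and CIDR are placeholders (depending on your gcloud version the command may still be under the beta track):

gcloud container clusters update my-cluster \
    --enable-master-authorized-networks \
    --master-authorized-networks 203.0.113.0/24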

ca...@tigera.io

Dec 14, 2017, 5:52:58 PM
to Kubernetes user discussion and Q&A

> The first issue I ran into is with liveness/readiness probes on pods. My initial policy doesn't seem to allow traffic from the kubelet, presumably because the kubelet runs on the underlying host rather than as a pod. Adding an allowed CIDR range of 10.0.0.0/8 to the ingress rules fixed the issue, but that is more permissive than I would ideally like. Is there a way to whitelist traffic specifically from the kubelet?

This doesn't sound like expected behavior. Can you share the command you used to create your cluster?

I tried this myself and found liveness/readiness probes to work as expected even with NetworkPolicy in place, so it might be an environmental difference.
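
For reference, I created my test cluster with network policy enabled via something like this (the name and zone are placeholders):

gcloud container clusters create np-test --zone us-central1-a --enable-network-policy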
