externalTrafficPolicy=local and hairpin allowed on iptables


Amim Knabben

May 30, 2021, 5:04:42 PM5/30/21
to kubernetes-sig-network
Hello,

I'm running a few tests with 2 pods and 1 service selecting pod-1, with the following setup:

externalTrafficPolicy=local
NodePort=30720

kind-control-plane    172.18.0.3
kind-worker              172.18.0.4

x-40043              pod-1    10.244.0.11   kind-control-plane
x-40043              pod-2    10.244.2.9    kind-worker

When pod-1 (where the server is listening) connects to 172.18.0.4:30720, the packet DOES NOT get forwarded to 172.18.0.3 (as expected, since the service has no local endpoint on kind-worker) and is marked for drop. This is the iptables rule:

 4   240 KUBE-MARK-DROP  all  --  any    any     anywhere             anywhere             /* x-40043/s-x-40043-pod-1:service-port-tcp-80 has no local endpoints */ 
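For anyone trying to reproduce this, the drop path can be inspected on the node directly. This is just a sketch: the KUBE-XLB-* chain name below reuses the hash from the KUBE-SVC rule pasted further down and will differ in other clusters.

```shell
# On the node with no local endpoint (kind-worker, 172.18.0.4).
# kube-proxy in iptables mode sends NodePort traffic for a service
# with externalTrafficPolicy=Local through a KUBE-XLB-* chain; when
# the node has no local endpoint, that chain jumps to KUBE-MARK-DROP.
docker exec kind-worker iptables -t nat -L KUBE-NODEPORTS -n -v

# Chain hash (CHGKFLJR7MW3FVQV here) is cluster-specific:
docker exec kind-worker iptables -t nat -L KUBE-XLB-CHGKFLJR7MW3FVQV -n -v
```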

The issue comes when the hairpin connection hits the node port: in this case pod-2 accessing 172.18.0.4:30720 ends up matching on its pod source IP and being redirected to the cluster IP, so it reaches the service anyway.

   1    60 KUBE-SVC-CHGKFLJR7MW3FVQV  all  --  any    any     10.244.0.0/16        anywhere             /* Redirect pods trying to reach external loadbalancer VIP to clusterIP */

This does not happen in IPVS mode (everything is blocked), though. Should the network model allow this traffic for internal cases?
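For comparing with IPVS mode, here is roughly what I'd check (a sketch; the ipset name is the one kube-proxy's IPVS mode uses for Local-policy node ports, and may vary by version):

```shell
# IPVS virtual servers and their real servers (endpoints):
docker exec kind-worker ipvsadm -Ln

# Node ports with externalTrafficPolicy=Local are tracked in an
# ipset that the iptables glue rules match against:
docker exec kind-worker ipset list KUBE-NODE-PORT-LOCAL-TCP
```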

--

AMIM KNABBEN

Antonio Ojea

May 31, 2021, 1:25:31 AM5/31/21
to Amim Knabben, kubernetes-sig-network
This is super tricky, Services and traffic locality :) Can you paste a manifest so we can reproduce it?

A quick search in kubernetes/kubernetes shows a lot of related issues.
Do we know why it doesn't happen with IPVS? Is it that IPVS doesn't consider the traffic internal,
or is there some implementation detail of IPVS at play?


Amim Knabben

May 31, 2021, 7:20:24 AM5/31/21
to kubernetes-sig-network
Antonio,

These are the specs I'm using for both pods (the same spec as the network policy tests):

apiVersion: v1
kind: Pod
metadata:
  labels:
    pod: pod-1 | 2 
  name: pod-1 | 2
  namespace: x-40043
spec:
  containers:
  - command:
    - /agnhost
    - serve-hostname
    - --tcp
    - --http=false
    - --port
    - "80"
    image: k8s.gcr.io/e2e-test-images/agnhost:2.31
    name: cont-80-tcp
    ports:
    - containerPort: 80
      name: serve-80-tcp
      protocol: TCP

---

apiVersion: v1
kind: Service
metadata:
  name: s-x-40043-pod-1
  namespace: x-40043
spec:
  externalTrafficPolicy: Local
  ports:
  - name: service-port-tcp-80
    nodePort: 30720
    port: 80
    protocol: TCP
    targetPort: 80
  selector:
    pod: pod-1
  type: NodePort

---

Connect command:

kubectl exec pod-1 -c cont-80-tcp -n x-40043 -- /agnhost connect 172.18.0.4:30720 --timeout=1s --protocol=tcp

I haven't dug deep into IPVS mode, so I'm not sure how to answer those questions, but I found the related issues.
I do have the service validator running in IPVS mode, and the result differed between iptables and IPVS (which blocks the hairpin) on TestNodePortLocal/NodePort_Traffic_Local/ExternalTrafficPolicy=local.

