Can't reach Kubernetes service from outside the node when kube-proxy is in iptables mode


Peter Price

Feb 6, 2017, 6:20:55 PM
to Kubernetes user discussion and Q&A
[I've also posted this on StackOverflow, as I'm not sure about which is the best forum to ask.]

I have a single-node (master+node) Kubernetes deployment running on CoreOS, with kube-proxy in iptables mode and flannel for container networking (no Calico).

kube-proxy.yaml

    apiVersion: v1
    kind: Pod
    metadata:
      name: kube-proxy
      namespace: kube-system
    spec:
      hostNetwork: true
      containers:
      - name: kube-proxy
        image: quay.io/coreos/hyperkube:v1.5.2_coreos.0
        command:
        - /hyperkube
        - proxy
        - --master=http://127.0.0.1:8080
        - --hostname-override=10.0.0.144
        - --proxy-mode=iptables
        - --bind-address=0.0.0.0
        - --cluster-cidr=10.1.0.0/16
        - --masquerade-all=true
        securityContext:
          privileged: true
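
(For anyone reproducing this: as a sanity check that the flag actually took effect, kube-proxy logs which proxier it starts. The static-pod name below is a guess based on how the kubelet names mirror pods; adjust to whatever `kubectl --namespace=kube-system get pods` actually shows.)

    user@node ~ $ kubectl --namespace=kube-system get pods | grep kube-proxy
    user@node ~ $ kubectl --namespace=kube-system logs kube-proxy-10.0.0.144 | grep -i proxier
    # expect a line like "Using iptables Proxier." rather than the userspace one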


I've created a deployment, then exposed that deployment using a Service of type NodePort.

    user@node ~ $ kubectl run hostnames --image=gcr.io/google_containers/serve_hostname \
        --labels=app=hostnames \
        --port=9376 \
        --replicas=3

    user@node ~ $ kubectl expose deployment hostnames \
        --port=80 \
        --target-port=9376 \
        --type=NodePort

    user@node ~ $ kubectl get svc hostnames
    NAME        CLUSTER-IP   EXTERNAL-IP   PORT(S)        AGE
    hostnames   10.1.50.64   <nodes>       80:30177/TCP   6m

I can curl successfully from the node (loopback and eth0 IP):

    user@node ~ $ curl localhost:30177
    hostnames-3799501552-xfq08

    user@node ~ $ curl 10.0.0.144:30177
    hostnames-3799501552-xfq08


However, I cannot curl the NodePort from outside the node. I've tried from a client machine outside the node's network (with the relevant firewall rules in place) and from a machine inside the node's private network with the network firewall completely open between the two machines, with no luck either way.

I'm fairly confident it's an iptables/kube-proxy issue: if I change the kube-proxy config from --proxy-mode=iptables to --proxy-mode=userspace, I can reach the service from both external machines. Likewise, if I bypass Kubernetes entirely and run a Docker container directly, I have no problems with external access.
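
One way to narrow down where the packets are dying (a sketch of standard diagnostics; the interface name and NodePort are taken from the setup above) is to watch packet counters and capture on the node while repeating the curl from the external machine:

    # Does the SYN reach the node at all?
    user@node ~ $ sudo tcpdump -ni eth0 tcp port 30177

    # Is the NodePort DNAT rule in the nat table being hit?
    user@node ~ $ sudo iptables -t nat -L KUBE-NODEPORTS -v -n

    # Are the forwarded packets then dying on the filter table's FORWARD chain?
    user@node ~ $ sudo iptables -L FORWARD -v -n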

Here are the current iptables rules:

    user@node ~ $ iptables-save
    # Generated by iptables-save v1.4.21 on Mon Feb  6 04:46:02 2017
    *nat
    :PREROUTING ACCEPT [0:0]
    :INPUT ACCEPT [0:0]
    :OUTPUT ACCEPT [0:0]
    :POSTROUTING ACCEPT [0:0]
    :DOCKER - [0:0]
    :KUBE-MARK-DROP - [0:0]
    :KUBE-MARK-MASQ - [0:0]
    :KUBE-NODEPORTS - [0:0]
    :KUBE-POSTROUTING - [0:0]
    :KUBE-SEP-4IIYBTTZSUAZV53G - [0:0]
    :KUBE-SEP-4TMFMGA4TTORJ5E4 - [0:0]
    :KUBE-SEP-DUUUKFKBBSQSAJB2 - [0:0]
    :KUBE-SEP-XONOXX2F6J6VHAVB - [0:0]
    :KUBE-SERVICES - [0:0]
    :KUBE-SVC-NPX46M4PTMTKRN6Y - [0:0]
    :KUBE-SVC-NWV5X2332I4OT4T3 - [0:0]
    -A PREROUTING -m comment --comment "kubernetes service portals" -j KUBE-SERVICES
    -A PREROUTING -m addrtype --dst-type LOCAL -j DOCKER
    -A OUTPUT -m comment --comment "kubernetes service portals" -j KUBE-SERVICES
    -A OUTPUT ! -d 127.0.0.0/8 -m addrtype --dst-type LOCAL -j DOCKER
    -A POSTROUTING -m comment --comment "kubernetes postrouting rules" -j KUBE-POSTROUTING
    -A POSTROUTING -s 10.1.0.0/16 -d 10.1.0.0/16 -j RETURN
    -A POSTROUTING -s 10.1.0.0/16 ! -d 224.0.0.0/4 -j MASQUERADE
    -A POSTROUTING ! -s 10.1.0.0/16 -d 10.1.0.0/16 -j MASQUERADE
    -A KUBE-MARK-DROP -j MARK --set-xmark 0x8000/0x8000
    -A KUBE-MARK-MASQ -j MARK --set-xmark 0x4000/0x4000
    -A KUBE-NODEPORTS -p tcp -m comment --comment "default/hostnames:" -m tcp --dport 30177 -j KUBE-MARK-MASQ
    -A KUBE-NODEPORTS -p tcp -m comment --comment "default/hostnames:" -m tcp --dport 30177 -j KUBE-SVC-NWV5X2332I4OT4T3
    -A KUBE-POSTROUTING -m comment --comment "kubernetes service traffic requiring SNAT" -m mark --mark 0x4000/0x4000 -j MASQUERADE
    -A KUBE-SEP-4IIYBTTZSUAZV53G -s 10.0.0.144/32 -m comment --comment "default/kubernetes:https" -j KUBE-MARK-MASQ
    -A KUBE-SEP-4IIYBTTZSUAZV53G -p tcp -m comment --comment "default/kubernetes:https" -m recent --set --name KUBE-SEP-4IIYBTTZSUAZV53G --mask 255.255.255.255 --rsource -m tcp -j DNAT --to-destination 10.0.0.144:6443
    -A KUBE-SEP-4TMFMGA4TTORJ5E4 -s 10.1.34.2/32 -m comment --comment "default/hostnames:" -j KUBE-MARK-MASQ
    -A KUBE-SEP-4TMFMGA4TTORJ5E4 -p tcp -m comment --comment "default/hostnames:" -m tcp -j DNAT --to-destination 10.1.34.2:9376
    -A KUBE-SEP-DUUUKFKBBSQSAJB2 -s 10.1.34.3/32 -m comment --comment "default/hostnames:" -j KUBE-MARK-MASQ
    -A KUBE-SEP-DUUUKFKBBSQSAJB2 -p tcp -m comment --comment "default/hostnames:" -m tcp -j DNAT --to-destination 10.1.34.3:9376
    -A KUBE-SEP-XONOXX2F6J6VHAVB -s 10.1.34.4/32 -m comment --comment "default/hostnames:" -j KUBE-MARK-MASQ
    -A KUBE-SEP-XONOXX2F6J6VHAVB -p tcp -m comment --comment "default/hostnames:" -m tcp -j DNAT --to-destination 10.1.34.4:9376
    -A KUBE-SERVICES -d 10.1.50.64/32 -p tcp -m comment --comment "default/hostnames: cluster IP" -m tcp --dport 80 -j KUBE-MARK-MASQ
    -A KUBE-SERVICES ! -s 10.1.0.0/16 -d 10.1.50.64/32 -p tcp -m comment --comment "default/hostnames: cluster IP" -m tcp --dport 80 -j KUBE-MARK-MASQ
    -A KUBE-SERVICES -d 10.1.50.64/32 -p tcp -m comment --comment "default/hostnames: cluster IP" -m tcp --dport 80 -j KUBE-SVC-NWV5X2332I4OT4T3
    -A KUBE-SERVICES -d 10.1.50.1/32 -p tcp -m comment --comment "default/kubernetes:https cluster IP" -m tcp --dport 443 -j KUBE-MARK-MASQ
    -A KUBE-SERVICES ! -s 10.1.0.0/16 -d 10.1.50.1/32 -p tcp -m comment --comment "default/kubernetes:https cluster IP" -m tcp --dport 443 -j KUBE-MARK-MASQ
    -A KUBE-SERVICES -d 10.1.50.1/32 -p tcp -m comment --comment "default/kubernetes:https cluster IP" -m tcp --dport 443 -j KUBE-SVC-NPX46M4PTMTKRN6Y
    -A KUBE-SERVICES -m comment --comment "kubernetes service nodeports; NOTE: this must be the last rule in this chain" -m addrtype --dst-type LOCAL -j KUBE-NODEPORTS
    -A KUBE-SVC-NPX46M4PTMTKRN6Y -m comment --comment "default/kubernetes:https" -m recent --rcheck --seconds 10800 --reap --name KUBE-SEP-4IIYBTTZSUAZV53G --mask 255.255.255.255 --rsource -j KUBE-SEP-4IIYBTTZSUAZV53G
    -A KUBE-SVC-NPX46M4PTMTKRN6Y -m comment --comment "default/kubernetes:https" -j KUBE-SEP-4IIYBTTZSUAZV53G
    -A KUBE-SVC-NWV5X2332I4OT4T3 -m comment --comment "default/hostnames:" -m statistic --mode random --probability 0.33332999982 -j KUBE-SEP-4TMFMGA4TTORJ5E4
    -A KUBE-SVC-NWV5X2332I4OT4T3 -m comment --comment "default/hostnames:" -m statistic --mode random --probability 0.50000000000 -j KUBE-SEP-DUUUKFKBBSQSAJB2
    -A KUBE-SVC-NWV5X2332I4OT4T3 -m comment --comment "default/hostnames:" -j KUBE-SEP-XONOXX2F6J6VHAVB
    COMMIT
    # Completed on Mon Feb  6 04:46:02 2017
    # Generated by iptables-save v1.4.21 on Mon Feb  6 04:46:02 2017
    *filter
    :INPUT DROP [0:0]
    :FORWARD DROP [0:0]
    :OUTPUT ACCEPT [67:14455]
    :DOCKER - [0:0]
    :DOCKER-ISOLATION - [0:0]
    :KUBE-FIREWALL - [0:0]
    :KUBE-SERVICES - [0:0]
    -A INPUT -j KUBE-FIREWALL
    -A INPUT -i lo -j ACCEPT
    -A INPUT -i eth0 -j ACCEPT
    -A INPUT -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
    -A INPUT -p tcp -m tcp --dport 22 -j ACCEPT
    -A INPUT -p tcp -m tcp --dport 80 -j ACCEPT
    -A INPUT -p tcp -m tcp --dport 443 -j ACCEPT
    -A INPUT -p icmp -m icmp --icmp-type 0 -j ACCEPT
    -A INPUT -p icmp -m icmp --icmp-type 3 -j ACCEPT
    -A INPUT -p icmp -m icmp --icmp-type 11 -j ACCEPT
    -A FORWARD -j DOCKER-ISOLATION
    -A FORWARD -o docker0 -j DOCKER
    -A FORWARD -o docker0 -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
    -A FORWARD -i docker0 ! -o docker0 -j ACCEPT
    -A FORWARD -i docker0 -o docker0 -j ACCEPT
    -A OUTPUT -m comment --comment "kubernetes service portals" -j KUBE-SERVICES
    -A OUTPUT -j KUBE-FIREWALL
    -A DOCKER-ISOLATION -j RETURN
    -A KUBE-FIREWALL -m comment --comment "kubernetes firewall for dropping marked packets" -m mark --mark 0x8000/0x8000 -j DROP
    COMMIT
    # Completed on Mon Feb  6 04:46:02 2017


I'm not sure what to look for in these rules. Can someone with more experience suggest how to troubleshoot this?

Peter Price

Feb 13, 2017, 4:21:34 PM
to kubernet...@googlegroups.com
This was caused by the initial configuration of iptables on the node, put in place before kube-proxy took control.
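
In more detail (my reading of the dump above, so treat it as hedged): in iptables mode, the inbound NodePort packet is DNAT'ed in PREROUTING to a pod IP (10.1.34.x) and must then traverse the filter table's FORWARD chain, whose policy is DROP and whose only ACCEPT rules cover docker0's outbound and established traffic, so externally-originated connections were silently dropped. In userspace mode, kube-proxy terminates the connection on the host itself, so only INPUT applies, and the -A INPUT -i eth0 -j ACCEPT rule lets it through. Rules along these lines let the DNAT'ed traffic pass (a sketch assuming the 10.1.0.0/16 cluster CIDR from the config above; persist them however your distro manages iptables):

    # Accept forwarded traffic headed for pod IPs in the cluster CIDR,
    # plus return traffic for established connections:
    user@node ~ $ sudo iptables -A FORWARD -d 10.1.0.0/16 -j ACCEPT
    user@node ~ $ sudo iptables -A FORWARD -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT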
