Need help to Disable IPv6 (kubernetes node/master/minions)


satheesh pandiaraj

Jun 20, 2018, 3:08:27 PM
to Kubernetes developer/contributor discussion


Is there any Kubernetes network configuration with which I can disable IPv6?

I set up Kubernetes on OpenStack (master and minions are established).

I deployed a sample service YAML via the Kubernetes dashboard. The service (Docker container) runs successfully on the node/minion.

When I checked the exposed node ports with netstat -an | grep udp, they showed up as udp6 instead of udp.

I tried disabling IPv6 at the system/node level by adding net.ipv6.conf.default.disable_ipv6 = 1 and net.ipv6.conf.all.disable_ipv6 = 1 to /etc/sysctl.conf. The change took effect after a network service restart; ifconfig confirms there is no inet6 address.

ifconfig output (after the sysctl.conf changes and service network restart - no inet6 address):
eth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 192.168.10.11  netmask 255.255.255.0  broadcast 192.168.10.255
        ether fa:16:3e:b4:3d:66  txqueuelen 1000  (Ethernet)
        RX packets 531569  bytes 242477370 (231.2 MiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 464606  bytes 79916440 (76.2 MiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0


I redeployed the service YAML via the Kubernetes dashboard. The service (Docker container) runs successfully on the node/minion, but the ports still listen on the IPv6 stack (udp6).

netstat -an | grep udp
udp6       0      0 :::10410                :::*
udp6       0      0 :::10411                :::*
udp6       0      0 :::10412                :::*
udp6       0      0 :::10413                :::*
udp6       0      0 :::10414                :::*
udp6       0      0 :::10415                :::*
udp6       0      0 :::10416                :::*
udp6       0      0 :::10417                :::*


Note/Additional info:
When I run the same Docker container directly on the minion, without a Kubernetes deployment, it runs and all of my application's exposed ports are bound to the IPv4 stack (udp):
udp        0      0 0.0.0.0:10410           0.0.0.0:*
udp        0      0 0.0.0.0:10411           0.0.0.0:*


satheesh pandiaraj

Jun 20, 2018, 3:12:58 PM
to Kubernetes developer/contributor discussion
Any help is greatly appreciated.

Tim Hockin

Jun 20, 2018, 3:56:05 PM
to pgsat...@gmail.com, Kubernetes developer/contributor discussion
Is there an actual problem?  Netstat shows IPv6 addresses even when there's IPv4 in play. I know it's weird; it's just the way it is. Are you experiencing actual trouble?
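[Editor's note: Tim's point can be demonstrated outside Kubernetes. On Linux, a socket bound to the IPv6 wildcard with dual-stack enabled (the kernel default, IPV6_V6ONLY=0) still receives plain IPv4 traffic, which is why netstat reports udp6 for sockets that are perfectly reachable over IPv4. A minimal sketch in Python; the port is chosen by the OS and is not one of the OP's ports:]

```python
import socket

# Bind a UDP socket to the IPv6 wildcard ("::") with dual-stack on.
# netstat would list this socket as "udp6 :::<port>", exactly like
# the OP's udp6 entries.
srv = socket.socket(socket.AF_INET6, socket.SOCK_DGRAM)
srv.setsockopt(socket.IPPROTO_IPV6, socket.IPV6_V6ONLY, 0)
srv.bind(("::", 0))
port = srv.getsockname()[1]

# Send a datagram over plain IPv4 to the same port.
cli = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
cli.sendto(b"hello", ("127.0.0.1", port))

# The "udp6" socket receives the IPv4 datagram; the sender appears
# as an IPv4-mapped IPv6 address (::ffff:127.0.0.1).
data, addr = srv.recvfrom(1024)
print(data, addr)
```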


satheesh pandiaraj

Jun 21, 2018, 3:14:28 AM
to Kubernetes developer/contributor discussion

Yes. I'm running a client/server publish-subscribe application that talks on certain ports for data exchange.

1) When I ran the data receiver (Docker container) outside of Kubernetes, directly on the node VM, data exchange works (netstat lists the UDP ports on the IPv4 stack, udp).
2) When I ran the data receiver (Docker container) inside Kubernetes on the node VM, data exchange doesn't work (netstat lists the UDP ports on the IPv6 stack, udp6).

From the experiments/observations above, IPv6 may be the issue (not 100% sure). I thought I'd try disabling IPv6 entirely at the Kubernetes level and see whether it works. So basically it's a try, and I need help disabling IPv6 in Kubernetes itself (is there any network configuration available in Kubernetes to disable IPv6?).

----

Matthias Bertschy

Jun 21, 2018, 3:19:57 AM
to Kubernetes developer/contributor discussion
Are you using the service name to address your pods?
If you rely on individual container addresses, they might not work if the pods are scheduled on different nodes... you really have to use the service abstraction for communication between pods.

satheesh pandiaraj

Jun 21, 2018, 3:44:42 AM
to Kubernetes developer/contributor discussion
I'm running the data sender/producer (Docker container) outside of Kubernetes, just like any other VM (here I have control, so the IP is always static).
The data receiver (Docker container) runs inside Kubernetes on a node VM (I expect its IP address to change depending on which node/pod is available at deployment time) - data exchange doesn't work (netstat lists the UDP ports on the IPv6 stack, udp6).

Data sender/producer (Docker container) running outside of Kubernetes, just like any other VM (IPv4 address 192.168.10.11, "eth0" interface).
Data receiver (Docker container) running inside a Kubernetes node VM (IPv4 address 192.168.10.13, "eth0" interface; the Kubernetes dashboard shows the service running on one of the minions, ClusterIP 10.233.45.57, endpoint host 10.233.103.79).

192.168.10.11 can ping 192.168.10.13 (and vice versa) - this is the IP address pair (sender/receiver) for which I expect actual data exchange to work via the Kubernetes networking layer.

Additional info: I'm very new to Kubernetes (still learning), so please guide me in the right direction (is my expectation right?).

Matthias Bertschy

Jun 21, 2018, 4:21:21 AM
to Kubernetes developer/contributor discussion
If your data is sent from outside the cluster to pods that are inside, you need an ingress:

Bryan Boreham

Jun 21, 2018, 7:28:30 AM
to Kubernetes developer/contributor discussion
Strongly doubt that OP will "need an ingress" - Ingress is for HTTP only and they say they are using UDP.

They do need _some_ means of getting into the pod network, which could be NodePort, or it could be that the host they're coming from can route directly.
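[Editor's note: as a rough sketch of what "getting into the pod network" via a NodePort looks like from the sender's side, here is a minimal UDP probe in Python. The node IP and port in the example comment are the OP's values from earlier in the thread; since UDP is connectionless, a successful send proves nothing by itself - confirm arrival with tcpdump or netstat on the node:]

```python
import socket

def send_udp(node_ip: str, node_port: int, payload: bytes = b"ping") -> int:
    """Fire one UDP datagram at a NodePort from outside the cluster.

    Returns the number of bytes handed to the kernel. UDP gives no
    delivery guarantee, so verify receipt on the node itself
    (e.g. with tcpdump on the node's eth0 interface).
    """
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    try:
        return s.sendto(payload, (node_ip, node_port))
    finally:
        s.close()

# Example with the OP's node IP and UDP nodePort:
# send_udp("192.168.10.13", 10410)
```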

Sadly there are so many variations it is difficult to offer advice that is concise, correct and useful.

I gave this talk, a broad overview aimed at the Kubernetes newcomer: https://youtu.be/7OFw3lgSb1Q

Bryan

satheesh pandiaraj

Jun 21, 2018, 8:32:35 AM
to Kubernetes developer/contributor discussion


Yes, I'm using NodePort and trying to specify which NodePort to use for UDP communication in the data-receiver service's YAML file; the YAML is pasted below for reference.

Note: one more hint - I changed the default service node port range from 30000-32767 to 10000-32767 (the kube-apiserver --service-node-port-range setting) and restarted kube-apiserver, and it worked (just for experiments). I'm confident the port range change worked (I deployed another sample web app and https://masternodeIP:10001 works), so no worries there. The exposed IP there is the master node IP, which is 10.xxx.xxx.xxx.

Currently I'm trying to establish data connectivity between 192.168.10.11 and 192.168.10.13 (a VM outside the Kubernetes cluster connecting to a pod on a Kubernetes node).
Data sender/producer (Docker container) running outside of the Kubernetes cluster, just like any other VM (IPv4 address 192.168.10.11, "eth0" interface).
Data receiver (Docker container) running inside a Kubernetes node VM (IPv4 address 192.168.10.13, "eth0" interface).

YAML file for reference:
**********************************

apiVersion: v1
kind: Service
metadata:
  name: data-receiver
  namespace: test
spec:
  type: NodePort
  selector:
    app: data-receiver
  ports:
  - name: control-channel
    protocol: UDP
    port: 10410
    targetPort: 10410
    nodePort: 10410
  - name: data-channel
    protocol: UDP
    port: 10411
    targetPort: 10411
    nodePort: 10411

---

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: data-receiver
  namespace: test
spec:
  template:
    metadata:
      labels:
        app: data-receiver
    spec:
      containers:
      - name: data-receiver
        image: artifcatory.url.data-receiver:latest
        ports:
        - containerPort: 10410
          protocol: UDP
        - containerPort: 10411
          protocol: UDP

**********************************   

satheesh pandiaraj

Jun 21, 2018, 8:40:23 AM
to Kubernetes developer/contributor discussion
I may have been wrong to start by asking "how to disable IPv6" in this forum - I may change the question to "how to establish connectivity from outside the Kubernetes cluster to a pod service running on a Kubernetes node".

Tim Hockin

Jun 21, 2018, 7:41:17 PM
to satheesh pandiaraj, Kubernetes developer/contributor discussion
Have you said what environment, OS, and network stack this is?

satheesh pandiaraj

Jun 22, 2018, 1:41:59 AM
to Kubernetes developer/contributor discussion
It's CentOS 7.

Tim Hockin

Jun 22, 2018, 11:44:41 AM
to satheesh pandiaraj, Kubernetes developer/contributor discussion
What networking?  Can any pod ping any other pod on a different machine?

Jay Vyas

Jun 22, 2018, 2:44:04 PM
to Tim Hockin, satheesh pandiaraj, Kubernetes developer/contributor discussion
Hi Satheesh. So, long thread, but since you're a beginner, I'll just relay an example to level set. Hope this isn't too rudimentary, but sometimes it's easiest to make some crude assumptions and move forward with an example.

- If your OpenStack VMs' IP addresses are 10.100.200.[1-255]
- and the randomly allocated NodePort is 51274,

then you will want to access your service on something such as 10.100.200.123:51274.

1) If this connection hangs, the NodePort is allocated and the kubelet is likely OK and receiving traffic on the port, but maybe there is **no pod** attached to that service. Use kubectl get endpoints to debug.

2) If this connection is **refused**, the NodePort mechanism isn't working at all, and maybe it is an IPv6 issue.
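[Editor's note: Jay's two failure modes can be probed programmatically. For a TCP port the distinction is crisp; below is a hedged Python sketch. For the OP's UDP ports, "refused" surfaces only as an ICMP port-unreachable, if at all, so tcpdump on the node is the better tool there. The example host/port in the comment are Jay's hypothetical values:]

```python
import socket

def probe_nodeport(host: str, port: int, timeout: float = 3.0) -> str:
    """Classify a TCP NodePort the way Jay describes:
    'hang'    -> timed out: port allocated but likely no backing pod
                 (debug with: kubectl get endpoints)
    'refused' -> the NodePort mechanism isn't working at all
    'open'    -> something answered the connection."""
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.settimeout(timeout)
    try:
        s.connect((host, port))
        return "open"
    except socket.timeout:
        return "hang"
    except ConnectionRefusedError:
        return "refused"
    finally:
        s.close()

# e.g. probe_nodeport("10.100.200.123", 51274)
```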

Cédric de Saint Martin

Jun 26, 2018, 10:50:35 AM
to Kubernetes developer/contributor discussion
For reference only, for people coming to this post from a Google search, here is how one can completely disable IPv6 on a machine (be it a Kubernetes node or not):

echo "
net.ipv6.conf.all.disable_ipv6=1
net.ipv6.conf.default.disable_ipv6=1
net.ipv6.conf.lo.disable_ipv6=1
" | sudo tee -a /etc/sysctl.conf
reboot