Hello Everyone!
I am checking the Service networking in Kubernetes 1.29.0 and I see a significant difference compared with earlier versions (I checked against 1.27 and 1.28).
If I list the nat/KUBE-SERVICES chain with iptables, I cannot see the destination ports (the service ports) being matched there:
root@worker1:~# iptables -t nat -L KUBE-SERVICES -n
Chain KUBE-SERVICES (2 references)
target     prot opt source       destination
KUBE-SVC-Z6GDYMWE5TV2NNJN  tcp  --  0.0.0.0/0   10.108.188.27   /* kubernetes-dashboard/dashboard-metrics-scraper cluster IP */
KUBE-SVC-NPX46M4PTMTKRN6Y  tcp  --  0.0.0.0/0   10.96.0.1       /* default/kubernetes:https cluster IP */
KUBE-SVC-TCOU7JCQXEZGVUNU  udp  --  0.0.0.0/0   10.96.0.10      /* kube-system/kube-dns:dns cluster IP */
KUBE-SVC-ERIFXISQEP7F7OF4  tcp  --  0.0.0.0/0   10.96.0.10      /* kube-system/kube-dns:dns-tcp cluster IP */
KUBE-SVC-JD5MR3NA4I4DYORP  tcp  --  0.0.0.0/0   10.96.0.10      /* kube-system/kube-dns:metrics cluster IP */
KUBE-SVC-CEZPIJSAUFW5MYPQ  tcp  --  0.0.0.0/0   10.96.200.153   /* kubernetes-dashboard/kubernetes-dashboard cluster IP */
KUBE-NODEPORTS             all  --  0.0.0.0/0   0.0.0.0/0       /* kubernetes service nodeports; NOTE: this must be the last rule in this chain */ ADDRTYPE match dst-type LOCAL
But if I list the same (similar?) chain with nft, then I do see the service ports:
root@worker1:~# nft list chain nat KUBE-SERVICES
table ip nat {
    chain KUBE-SERVICES {
        meta l4proto tcp ip daddr 10.108.188.27 tcp dport 8000 counter packets 0 bytes 0 jump KUBE-SVC-Z6GDYMWE5TV2NNJN
        meta l4proto tcp ip daddr 10.96.0.1 tcp dport 443 counter packets 0 bytes 0 jump KUBE-SVC-NPX46M4PTMTKRN6Y
        meta l4proto udp ip daddr 10.96.0.10 udp dport 53 counter packets 0 bytes 0 jump KUBE-SVC-TCOU7JCQXEZGVUNU
        meta l4proto tcp ip daddr 10.96.0.10 tcp dport 53 counter packets 0 bytes 0 jump KUBE-SVC-ERIFXISQEP7F7OF4
        meta l4proto tcp ip daddr 10.96.0.10 tcp dport 9153 counter packets 0 bytes 0 jump KUBE-SVC-JD5MR3NA4I4DYORP
        meta l4proto tcp ip daddr 10.96.200.153 tcp dport 443 counter packets 0 bytes 0 jump KUBE-SVC-CEZPIJSAUFW5MYPQ
        fib daddr type local counter packets 233 bytes 14427 jump KUBE-NODEPORTS
    }
}
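(Side note, in case it helps with debugging: as far as I understand, nft can also dump the netlink-level encoding behind each rule, so one could compare exactly how the dport match is represented in the kernel under the two kube-proxy versions. A sketch of what I mean:

# prints the low-level expressions (payload / cmp / counter / ...) that the
# kernel stores for each rule, in addition to the normal listing
root@worker1:~# nft --debug=netlink list chain nat KUBE-SERVICES
)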
I know nftables is an alpha feature of kube-proxy, but in my case it is not enabled, and I was expecting kube-proxy to work in iptables mode. The kube-proxy image is 1.29.0 and the command line for kube-proxy is:
- command:
- /usr/local/bin/kube-proxy
- --config=/var/lib/kube-proxy/config.conf
- --hostname-override=$(NODE_NAME)
My kube-proxy ConfigMap has mode: "", and according to the documentation this should mean iptables.
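For completeness, the relevant part of /var/lib/kube-proxy/config.conf looks roughly like this (a trimmed excerpt; only the mode field matters here, and the apiVersion/kind lines are the standard KubeProxyConfiguration header):

apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
# ... other fields omitted ...
mode: ""    # empty string = default proxy mode, i.e. iptables on Linux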
My node runs the kernel that comes with Ubuntu 22.04.3:
root@worker1:~# uname -a
Linux worker1 5.15.0-91-generic #101-Ubuntu SMP Tue Nov 14 13:30:08 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux
Can you please give me some pointers as to why this is happening, and where I can read more about it?
Thank you in advance for any help.
Kind regards,
Laszlo
> (If you join the list your messages won't get moderated)
I did join the list yesterday, so I guess it takes some time until my messages are accepted automatically (I can see the list policy says that new members' messages are moderated; that is normal).
I did a test by adding a rule to the nat/KUBE-SERVICES chain using iptables, and when I check the rules I can see the destination port there:

root@worker1:~# iptables -t nat -N MY-CHAIN
root@worker1:~# iptables -t nat -A KUBE-SERVICES -p tcp -d 192.168.233.1/32 --dport 8008 -j MY-CHAIN
root@worker1:~# iptables -t nat -nL KUBE-SERVICES

> "iptables -L" is not a very good command... it tries to guess what
> information you care about and present it in table form, while preserving
> backward compatibility for people trying to parse the output based on what
> old versions did, etc. If you do "iptables -S" instead you'll get the full
> rule. (I don't know why the output would be different with kube 1.29 than
> before... you didn't change iptables versions at the same time?)
I tested with "iptables -S" and also with "iptables-save"; neither of them shows the destination port in the KUBE-SERVICES chain:
root@worker1:~# iptables -t nat -S KUBE-SERVICES
-N KUBE-SERVICES
-A KUBE-SERVICES -d 10.96.0.10/32 -p tcp -m comment --comment "kube-system/kube-dns:metrics cluster IP" -j KUBE-SVC-JD5MR3NA4I4DYORP
-A KUBE-SERVICES -d 10.96.0.10/32 -p udp -m comment --comment "kube-system/kube-dns:dns cluster IP" -j KUBE-SVC-TCOU7JCQXEZGVUNU
-A KUBE-SERVICES -d 10.96.0.10/32 -p tcp -m comment --comment "kube-system/kube-dns:dns-tcp cluster IP" -j KUBE-SVC-ERIFXISQEP7F7OF4
-A KUBE-SERVICES -d 10.96.0.1/32 -p tcp -m comment --comment "default/kubernetes:https cluster IP" -j KUBE-SVC-NPX46M4PTMTKRN6Y
-A KUBE-SERVICES -d 10.100.241.240/32 -p tcp -m comment --comment "default/mydep cluster IP" -j KUBE-SVC-5K4KPF3R3ZZJPT44
-A KUBE-SERVICES -m comment --comment "kubernetes service nodeports; NOTE: this must be the last rule in this chain" -m addrtype --dst-type LOCAL -j KUBE-NODEPORTS
root@worker1:~# iptables --version
iptables v1.8.7 (nf_tables)
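One more thing I want to compare is the iptables version inside the kube-proxy image itself, since kube-proxy writes these rules from within its own container, and that iptables may be newer than the 1.8.7 on the host. Something like this should show it (the pod name below is a placeholder):

# list the kube-proxy pods (names are cluster-specific)
kubectl -n kube-system get pods -o name | grep kube-proxy
# print the iptables version shipped in the kube-proxy image
# (replace <kube-proxy-pod> with one of the names from above)
kubectl -n kube-system exec <kube-proxy-pod> -- iptables --version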
I did another test: on the _same_ cluster I just downgraded the kube-proxy image to 1.28.0, and once the new kube-proxy pods had started, the destination port was there in the iptables output. Then I set it back to the original 1.29.0 and the ports were gone again. So there is definitely something different in kube-proxy 1.29, and that is what I am trying to figure out.
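For reference, the downgrade and the revert were just image swaps on the DaemonSet; assuming the standard kubeadm layout (DaemonSet and container both named kube-proxy), it was roughly:

# switch kube-proxy to the 1.28.0 image
kubectl -n kube-system set image daemonset/kube-proxy kube-proxy=registry.k8s.io/kube-proxy:v1.28.0
# and back to the original 1.29.0 afterwards
kubectl -n kube-system set image daemonset/kube-proxy kube-proxy=registry.k8s.io/kube-proxy:v1.29.0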
Kind regards,
Laszlo