I am playing with runsc on bare metal, no docker (let alone Kubernetes). I followed the CNI configuration steps available at
https://gvisor.dev/docs/tutorials/cni/ and it works.
But only once.
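For context, the setup I run before each attempt is essentially what the tutorial describes (sketched from memory; the container-ID generation and the config file names `10-bridge.conf` / `99-loopback.conf` follow the tutorial and may differ in your environment):

```shell
# Export the variables the CNI plugins read from the environment.
export CNI_PATH=/opt/cni/bin
export CNI_CONTAINERID=$(printf '%x%x%x%x' $RANDOM $RANDOM $RANDOM $RANDOM)
export CNI_COMMAND=ADD
export CNI_NETNS=/var/run/netns/${CNI_CONTAINERID}
export CNI_IFNAME=eth0

# Create the network namespace, then let each plugin configure it.
# CNI plugins take the network config on stdin and the CNI_* variables
# from the environment.
sudo ip netns add ${CNI_CONTAINERID}
sudo -E /opt/cni/bin/bridge < /etc/cni/net.d/10-bridge.conf
sudo -E /opt/cni/bin/loopback < /etc/cni/net.d/99-loopback.conf
```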
My container runs `nc -lvn4 12345`, so it exits gracefully after the first connection succeeds. On the host, I use `date | nc -v4n -q 1 $POD_IP 12345` as the client. Running `ifconfig` inside the container (before I launch `nc`) shows this:
```
$ ifconfig
eth0: flags=65<UP,RUNNING>  mtu 1500
        inet 10.22.0.10  netmask 255.255.0.0
        inet6 fe80::90b7:12ff:fed0:97  prefixlen 64  scopeid 0x0<global>
        ether 92:b7:12:d0:00:97  txqueuelen 1500  (Ethernet)
        RX packets 39  bytes 2069 (2.0 KiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 19  bytes 1222 (1.1 KiB)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0
        device memory 0x9700d012000005dc-0

lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
        inet 127.0.0.1  netmask 255.0.0.0
        inet6 ::1  prefixlen 128  scopeid 0x0<global>
        loop  txqueuelen 65536  (Local Loopback)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0
        device memory 0x10000-0
```
But if I start the container again right after it exits, I only see the loopback interface inside it. On the host side, the IP is gone too:
```
$ sudo ip netns exec ${CNI_CONTAINERID} ip addr show eth0
3: eth0@if13: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
    link/ether 92:b7:12:d0:00:97 brd ff:ff:ff:ff:ff:ff link-netnsid 0
```
But the IP must still be recorded somewhere, because if I clean everything up as instructed and configure the network again, I get the previous IP plus one. You can see in the `ifconfig` output above that POD_IP=10.22.0.10; after a clean-up/reconfigure cycle the container gets 10.22.0.11.
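For completeness, this is the clean-up-and-reconfigure cycle I run between attempts (adapted from the tutorial; the config file names are the tutorial's and may differ in your setup):

```shell
# Tear down: run the plugins with CNI_COMMAND=DEL (reverse order of ADD),
# then remove the network namespace.
export CNI_COMMAND=DEL
sudo -E /opt/cni/bin/loopback < /etc/cni/net.d/99-loopback.conf
sudo -E /opt/cni/bin/bridge < /etc/cni/net.d/10-bridge.conf
sudo ip netns delete ${CNI_CONTAINERID}

# Reconfigure: recreate the namespace and ADD again. After this, the
# container comes up with the next IP (10.22.0.11 instead of 10.22.0.10).
export CNI_COMMAND=ADD
sudo ip netns add ${CNI_CONTAINERID}
sudo -E /opt/cni/bin/bridge < /etc/cni/net.d/10-bridge.conf
sudo -E /opt/cni/bin/loopback < /etc/cni/net.d/99-loopback.conf
```

I assume the "IP+1" behavior comes from the host-local IPAM plugin, which records its leases on disk under `/var/lib/cni/networks/<network-name>/`, but I have not confirmed that this is what is happening here.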
I tried both version 0.8.3 of the CNI plugins (the one listed in the tutorial) and the latest, 1.6.2; both behave the same way.
Is it expected that I have to reconfigure the IP address after each run of the container? If not, should I be looking at CNI or at gVisor to debug this?