Possible IP conflict for VMs when using whereabouts CNI


Yash Patil

Aug 20, 2021, 3:30:30 AM
to kubevirt-dev
Hi everyone,

I am trying to deploy a CentOS VM that is connected to the pod network in masquerade mode (the primary interface) and to an OVS bridge via Multus CNI (the secondary interface). For the secondary interface, I am using the whereabouts plugin for IPAM.
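
For reference, a rough sketch of this kind of configuration (the resource names, bridge name, and subnet below are placeholders, not the actual values from my setup):

apiVersion: k8s.cni.cncf.io/v1
kind: NetworkAttachmentDefinition
metadata:
  name: ovs-net
spec:
  config: |
    {
      "cniVersion": "0.3.1",
      "type": "ovs",
      "bridge": "br1",
      "ipam": {
        "type": "whereabouts",
        "range": "192.168.10.0/24"
      }
    }

And the corresponding VMI networking sections, with masquerade on the pod network (primary) and bridge binding on the Multus-attached OVS network (secondary); unrelated VMI fields such as disks and memory are omitted:

apiVersion: kubevirt.io/v1
kind: VirtualMachineInstance
metadata:
  name: centos-vm
spec:
  domain:
    devices:
      interfaces:
        # Primary interface: pod network in masquerade mode.
        - name: default
          masquerade: {}
        # Secondary interface: bridge binding to the OVS network.
        - name: ovs-net
          bridge: {}
  networks:
    - name: default
      pod: {}
    - name: ovs-net
      multus:
        networkName: ovs-net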

Initially, when I deploy a single VM, the first IP in the whereabouts range is allotted to the VM's virt-launcher pod. Since the interface uses bridge binding, the same IP is allotted to the VMI. Now, if I migrate the VMI, the new virt-launcher pod is allotted the second IP in the whereabouts range, while the VMI still holds the first IP of the range.

After this, if I deploy a second VM with the same configuration, whereabouts again assigns the first IP of the range to the new virt-launcher pod (presumably because, from whereabouts' point of view, the first IP is free). The new virt-launcher pod then tries to allocate this IP to its VMI, but since the old VMI already holds the IP, address allocation fails. I verified the IP addresses with kubectl describe vmi and kubectl describe pod virt-launcher-XXXX, and also inside the VMs using ifconfig and ip a.

Is this the expected behavior? If yes, is there any way to avoid the IP conflict?

Thanks, 
Yash

Roman Mohr

Aug 20, 2021, 4:45:51 AM
to Yash Patil, kubevirt-dev
On Fri, Aug 20, 2021 at 9:30 AM Yash Patil <ya...@platform9.com> wrote:
Hi everyone,

I am trying to deploy a CentOS VM that is connected to the pod network in masquerade mode (the primary interface) and to an OVS bridge via Multus CNI (the secondary interface). For the secondary interface, I am using the whereabouts plugin for IPAM.

Initially, when I deploy a single VM, the first IP in the whereabouts range is allotted to the VM's virt-launcher pod. Since the interface uses bridge binding, the same IP is allotted to the VMI. Now, if I migrate the VMI, the new virt-launcher pod is allotted the second IP in the whereabouts range, while the VMI still holds the first IP of the range.

Just to clarify: you use masquerade on the first interface and bridge mode on the second? If that is the case, and you want to use bridge mode on the second interface together with migrations, you will have to use, for example, an external DHCP server on the secondary Multus network, or static IP address assignment inside the VMIs. You basically can't use the IP address assignment mechanisms from CNI in this case.
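
For illustration, with the static option the secondary address never comes from CNI at all; a rough sketch using a cloudInitNoCloud volume (the interface name and address are placeholders, assuming the secondary NIC shows up as eth1 in the guest):

volumes:
  - name: cloudinitdisk
    cloudInitNoCloud:
      networkData: |
        version: 2
        ethernets:
          # eth1 is assumed to be the secondary (bridge-bound) interface.
          eth1:
            addresses:
              - 192.168.10.10/24

Since the address is configured inside the guest, migrations no longer depend on any CNI IPAM state for that interface.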

Best regards,
Roman
 



Yash Patil

Aug 20, 2021, 5:18:11 AM
to kubevirt-dev
On Friday, August 20, 2021 at 2:15:51 PM UTC+5:30 Roman Mohr wrote:

Just to clarify: you use masquerade on the first interface and bridge mode on the second? If that is the case, and you want to use bridge mode on the second interface together with migrations, you will have to use, for example, an external DHCP server on the secondary Multus network, or static IP address assignment inside the VMIs. You basically can't use the IP address assignment mechanisms from CNI in this case.

Yes, I am using masquerade for the first interface and bridge mode for the second interface.

Thanks for the clarification.

Thanks,
Yash
 

Pooja Ghumre

Oct 1, 2021, 2:57:59 PM
to kubevirt-dev
Hi Roman, 

Is using "masquerade" mode for secondary interfaces an option when using OVS CNI? 

My understanding based on the documentation is that the default pod network can be used in either "bridge" or "masquerade" mode, but live migration won't work with the former, and for secondary Multus interfaces the VM has to use "bridge" mode.

Secondly, is anything extra needed for external connectivity from inside the VM using the secondary OVS interfaces in bridge mode?  

Thanks,
Pooja


Miguel Duarte de Mora Barroso

Oct 4, 2021, 7:59:19 AM
to Pooja Ghumre, kubevirt-dev
On Fri, Oct 1, 2021 at 8:58 PM Pooja Ghumre <po...@platform9.com> wrote:
Hi Roman, 

Is using "masquerade" mode for secondary interfaces an option when using OVS CNI? 

Masquerade cannot be used for secondary interfaces, only for the primary.
 

My understanding based on the documentation is that the default pod network can be used in either "bridge" or "masquerade" mode, but live migration won't work with the former, and for secondary Multus interfaces the VM has to use "bridge" mode.

Correct.
 

Secondly, is anything extra needed for external connectivity from inside the VM using the secondary OVS interfaces in bridge mode?  

I'm afraid I do not have any experience with this CNI, and as such cannot help you with it. I see that ovs-cni has IPAM support; you probably just have to configure the gateway for that network.
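
If it is only the gateway that is missing, whereabouts does accept a "gateway" entry in its IPAM block, so something along these lines might be worth a try (all names and addresses are placeholders, and per the discussion above the guest will only pick this up if the CNI-assigned address actually reaches the VM):

apiVersion: k8s.cni.cncf.io/v1
kind: NetworkAttachmentDefinition
metadata:
  name: ovs-net
spec:
  config: |
    {
      "cniVersion": "0.3.1",
      "type": "ovs",
      "bridge": "br1",
      "ipam": {
        "type": "whereabouts",
        "range": "192.168.10.0/24",
        "gateway": "192.168.10.1"
      }
    }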

I think you could explicitly request this by filing an issue to improve the provided demo - [0] - with a setup that provides external connectivity.
 

