How is KubeVirt's networking configured?


Jihoon O

Mar 13, 2017, 7:04:21 AM
to kubevirt-dev
Hello, I'm Jihoon O.
I wonder about the networking architecture of KubeVirt.

The above page has no explanation about networking.

Is KubeVirt using libvirt's networking (like https://wiki.libvirt.org/page/Networking)?
If not, please explain the networking model or give me some links for understanding it.

Thanks in advance.

Fabian Deutsch

Mar 13, 2017, 7:17:25 AM
to Jihoon O, kubevirt-dev
Hey Jihoon,

Good that you raise it; networking is still quite an open area.

In theory you can already use libvirt's capabilities for VM network management.
I.e. you can use a local bridge to connect VMs to your LAN.

But we aim a little higher and would like to connect VMs to Kubernetes
networks - or, said differently: to inherit the same network
connectivity as pods have.
Kubernetes networks? Multiple? Yes, today there is just the default NIC.
But there are proposals to enable multiple networks for pods, and in
addition there is at least one CNI plugin which allows connecting
multiple networks to a pod (multus).

But having multiple NICs does not solve networking right away, as the
current networking also involves IPAM, which is a little problematic
with pet VMs.
In our case, we would like to give the VMs the power to configure
their IP addresses and ranges. To allow this, we will need L2
connectivity between them - which is currently not in scope of
Kubernetes networking.
However, part of the multiple-NIC proposal is also to define networks
which don't have IPAM, but cover L2 connectivity only. And that is
what we strive for, and what will help us to connect VMs.

There are now a few ways this could be implemented on the KubeVirt side,
and this discussion has not seen too much attention lately.
But I see two attractive ways:

The first is to let VMs inherit pod connectivity, and bind VMs to the NICs of a pod.
This would allow us to offload the complete NIC wiring to the kubelet
and thus CNI. The gap would be to allow KubeVirt to connect a VM to
the NICs of a pod.

The other way is to bypass kubelet's/CNI's capabilities to attach
networks, and use libvirt to connect to the right ones. This would
mean we could specify Kube networks in VM specifications, and KubeVirt
would take care of connecting a VM to the correct network.
But the obvious and big problem here is that KubeVirt would need to
reimplement parts of kubelet's network logic to wire up VMs.

Both areas need a little research to make some progress.

A third area is to allow the VMs to "inherit IPAM" - this is yet
another area of research.

Despite all these thoughts - what are your requirements on networking, Jihoon?

- fabian
> --
> You received this message because you are subscribed to the Google Groups
> "kubevirt-dev" group.
> To unsubscribe from this group and stop receiving emails from it, send an
> email to kubevirt-dev...@googlegroups.com.
> To post to this group, send email to kubevi...@googlegroups.com.
> To view this discussion on the web visit
> https://groups.google.com/d/msgid/kubevirt-dev/5dabbb9b-764d-4063-b0df-78270ca0ff90%40googlegroups.com.
> For more options, visit https://groups.google.com/d/optout.

Sungwon Han

Mar 14, 2017, 10:17:07 PM
to kubevirt-dev
Hello,

I am trying to connect guest VMs directly to a Kubernetes network configured to use the Calico network plugin. I just tested this approach manually through the steps below.

1. create a tap device (e.g., tap0)
2. specify the tap device in the interface element of the VM specification as below to launch a guest VM using the tap device

      interfaces:
      - target:
          dev: tap0
        type: ethernet
        model:
          type: virtio

3. create a VM (i.e., kubectl create -f vm.json)
4. set up the eth0 interface of the VM using an IP address from the IP pool of the Kubernetes network.
5. add a host route to the VM using the IP address and the tap0 interface.
6. enable proxy-arp on the tap0 interface.
7. create a Kubernetes Service to expose a web service running on the VM on an external IP address.
8. create a Kubernetes Endpoints object to map the Service to the web service in the VM.
9. access the web service using [IP address (of the Kubernetes node) + NodePort]
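The host-side commands for steps 1, 5, and 6 can be sketched as below. This is just a sketch, not KubeVirt code; `tap0` and `192.168.0.3` are example values, and the commands need root, so with `DRY_RUN=1` (the default here) they are only printed:

```shell
#!/bin/sh
# Sketch of the host-side setup for steps 1, 5 and 6 above.
# DRY_RUN=1 (the default) only prints the commands; running them needs root.
run() { if [ "${DRY_RUN:-1}" = 1 ]; then echo "$@"; else "$@"; fi; }

# Step 1: create a tap device for the VM and bring it up
run ip tuntap add dev tap0 mode tap
run ip link set tap0 up

# Step 5: route the VM's Kubernetes-pool IP to the tap device
run ip route add 192.168.0.3/32 dev tap0

# Step 6: answer ARP requests for the VM's IP on its behalf
run sysctl -w net.ipv4.conf.tap0.proxy_arp=1
```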

I think steps 1, 2, 5, and 6 can be implemented in virt-handler using libcalico (1). To implement step 4, a DHCP agent running on each Kubernetes node is needed. I think the Calico DHCP agent (2) can be utilized for step 4.

What do you think about this approach?


- Sungwon

On Monday, March 13, 2017 at 8:17:25 PM UTC+9, Fabian Deutsch wrote:

Roman Mohr

Mar 15, 2017, 5:38:36 AM
to kubevirt-dev
Hi Sungwon,


On Wednesday, March 15, 2017 at 3:17:07 AM UTC+1, Sungwon Han wrote:
> Hello,
>
> I am trying to connect guest VMs directly to a Kubernetes network configured to use the Calico network plugin. I just tested this approach manually through the steps below.

Note that at the moment we are using weave. 

> 1. create a tap device (e.g., tap0)
> 2. specify the tap device in the interface element of the VM specification as below to launch a guest VM using the tap device
>
>       interfaces:
>       - target:
>           dev: tap0
>         type: ethernet
>         model:
>           type: virtio
>
> 3. create a VM (i.e., kubectl create -f vm.json)
> 4. set up the eth0 interface of the VM using an IP address from the IP pool of the Kubernetes network.

I guess here we will also somehow make sure that the overlay network knows that this IP is now used. I guess in the case of calico, the dhcp agent you mention down below would make sure that later CRI calls from kubernetes don't try to use this IP.
 
> 5. add a host route to the VM using the IP address and the tap0 interface.
> 6. enable proxy-arp on the tap0 interface.

Afaik that would also work if we created a macvtap in 1. and enabled proxy-arp there, right? It might be nice to have that possibility too once we run libvirt in a container.
 
> 7. create a Kubernetes Service to expose a web service running on the VM on an external IP address.
> 8. create a Kubernetes Endpoints object to map the Service to the web service in the VM.
> 9. access the web service using [IP address (of the Kubernetes node) + NodePort]

Do you mean that you were using an externalIP in the service definition?
 

> I think steps 1, 2, 5, and 6 can be implemented in virt-handler using libcalico (1). To implement step 4, a DHCP agent running on each Kubernetes node is needed. I think the Calico DHCP agent (2) can be utilized for step 4.

In the case of weave, they don't seem to have a dhcp solution (3); maybe there are other ways we could do that. Maybe even the dnsmasq agent started by libvirt could help us there.

> What do you think about this approach?


For a first step I really like it. It would be a very nice proof-of-concept implementation. From that we can then learn how, for example, a CNI-like plugin mechanism in virt-handler could look, to also allow the usage of different network providers.

I think, right now, the most interesting projects to integrate with are calico, weave and ovn.




Thank you for sharing this.

Roman

Sungwon Han

Mar 15, 2017, 8:53:44 PM
to kubevirt-dev
Hi Roman,

On Wednesday, March 15, 2017 at 6:38:36 PM UTC+9, Roman Mohr wrote:
> Hi Sungwon,
>
> > 4. set up the eth0 interface of the VM using an IP address from the IP pool of the Kubernetes network.
>
> I guess here we will also somehow make sure that the overlay network knows that this IP is now used. I guess in the case of calico, the dhcp agent you mention down below would make sure that later CRI calls from kubernetes don't try to use this IP.

You are right. Calico DHCP agent synchronizes with the IP pool of the Kubernetes network for available IP addresses.
 
 
> > 5. add a host route to the VM using the IP address and the tap0 interface.
> > 6. enable proxy-arp on the tap0 interface.
>
> Afaik that would also work if we created a macvtap in 1. and enabled proxy-arp there, right? It might be nice to have that possibility too once we run libvirt in a container.

Yes, you are right.
 
 
> > 7. create a Kubernetes Service to expose a web service running on the VM on an external IP address.
> > 8. create a Kubernetes Endpoints object to map the Service to the web service in the VM.
> > 9. access the web service using [IP address (of the Kubernetes node) + NodePort]
>
> Do you mean that you were using an externalIP in the service definition?

Steps 7, 8, and 9 are for accessing the VM from outside of the Kubernetes cluster. To do that, a Service is defined without a selector (because the VM is not bound to a NIC of a Pod) as follows.

apiVersion: v1
kind: Service
metadata:
  name: web-service
spec:
  ports:
    - port: 8189
  type: NodePort

Because this Service has no selector, the corresponding Endpoints object will not be created automatically. The Service can be manually mapped to an endpoint of the VM as follows.

apiVersion: v1
kind: Endpoints
metadata:
  name: web-service
subsets:
  - addresses:
      - ip: 192.168.0.3
    ports:
      - port: 8189

After creating the above two objects, port 8189 of the VM can be accessed via <Node IP>:<NodePort> from outside of the cluster. Note that <Node IP> is the IP address of any Kubernetes node.
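With the Service and Endpoints above in place, the external access could look like the sketch below; the node IP and port are example values, the actual NodePort is whatever Kubernetes allocated, and since the commands need a live cluster, `DRY_RUN=1` (the default) only prints them:

```shell
#!/bin/sh
# Sketch of step 9: reach the VM's web service from outside the cluster.
# DRY_RUN=1 (the default) only prints the commands; they need a live cluster.
run() { if [ "${DRY_RUN:-1}" = 1 ]; then echo "$@"; else "$@"; fi; }

# Look up the NodePort that Kubernetes allocated for web-service
run kubectl get service web-service -o jsonpath='{.spec.ports[0].nodePort}'

# Then use any node IP with that port (192.168.200.2/30080 are example values)
run curl http://192.168.200.2:30080/
```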
 
 


Thank you for your feedback.

Sungwon 

Roman Mohr

Mar 17, 2017, 11:49:33 AM
to kubevirt-dev
Big +1 for calico for a first prototype. Looking forward to seeing something we can play with. Let me know if you need some assistance! :)

Roman

Roman Mohr

Apr 3, 2017, 3:14:15 AM
to kubevirt-dev
Looks like ovn also has native dhcp support since a few months (4). @sungwon, I will try to do what you are doing with calico, with ovn :)
Did you already start with externalizing the calico plugin from virt-handler? If so, did you already think about the request and response formats?

Best Regards,

Roman 

Sungwon Han

Apr 4, 2017, 3:50:01 AM
to kubevirt-dev
Hi Roman,

I am trying to run the calico-dhcp-agent on the Kubernetes network, but I think it's not easy because it depends on the neutron service of OpenStack. I think it would be a little easier to do what I did again with ovn, because it has native dhcp support as you said.

As for externalizing the calico plugin from virt-handler, I have not started yet. I think some env variables can be used for the request and v1.Interface{} can be used for the response.
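The env-variable request / v1.Interface{} response idea could be sketched roughly like this; everything here (the variable names, the plugin path, the JSON shape) is a made-up illustration, not an agreed format:

```shell
#!/bin/sh
# Hypothetical contract sketch: virt-handler invokes an external plugin binary,
# passing the request via environment variables; the plugin answers with a
# serialized v1.Interface on stdout. "fake-plugin" stands in for the real thing.
cat > ./fake-plugin <<'EOF'
#!/bin/sh
# A real plugin would wire up the network here, based on NET_COMMAND etc.
echo '{"type":"ethernet","target":{"dev":"tap0"},"model":{"type":"virtio"}}'
EOF
chmod +x ./fake-plugin

# Example invocation; prints the interface description for the VM
NET_COMMAND=ADD NET_VM_NAME=testvm ./fake-plugin
```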

Thank you
Sungwon

On Monday, April 3, 2017 at 4:14:15 PM UTC+9, Roman Mohr wrote:

Roman Mohr

Apr 10, 2017, 3:19:07 AM
to kubevirt-dev


On Tuesday, April 4, 2017 at 9:50:01 AM UTC+2, Sungwon Han wrote:
> Hi Roman,
>
> I am trying to run the calico-dhcp-agent on the Kubernetes network, but I think it's not easy because it depends on the neutron service of OpenStack. I think it would be a little easier to do what I did again with ovn because it has native dhcp support as you said.

We could also run one dnsmasq (maybe one per host?) to which we send the updated associations between ip and mac. The advantage would be that it is then more network-provider independent.
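Such a per-host dnsmasq could be fed a hosts file with the MAC-to-IP associations, sketched below. This is an assumption, not existing KubeVirt code; the MAC, IP, interface, and file path are example values, and `DRY_RUN=1` (the default) only prints the dnsmasq invocation since it needs root:

```shell
#!/bin/sh
# Sketch: one dnsmasq per host, serving fixed MAC<->IP associations for the
# local VMs. DRY_RUN=1 (the default) only prints the command (needs root).
run() { if [ "${DRY_RUN:-1}" = 1 ]; then echo "$@"; else "$@"; fi; }

# Rewrite this file (and SIGHUP dnsmasq) whenever a VM's assignment changes;
# a real path would be something like /var/lib/kubevirt/dhcp-hosts
echo "52:54:00:12:34:56,192.168.0.3" > ./dhcp-hosts

# "static" means: only lease addresses that are listed in the hosts file
run dnsmasq --no-daemon --interface=tap0 \
    --dhcp-range=192.168.0.0,static \
    --dhcp-hostsfile=./dhcp-hosts
```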
 

> As for externalizing the calico plugin from virt-handler, I have not started yet. I think some env variables can be used for the request and v1.Interface{} can be used for the response.

That sounds good.

Fabian Deutsch

Apr 10, 2017, 5:10:54 AM
to Roman Mohr, kubevirt-dev
Hey,

Sungwon Han, did you consider creating a plugin based on CNI?

E.g., virtcontainers is using a CNI-based approach:
https://github.com/containers/virtcontainers

Also, ClearContainers and virtlet follow a similar approach, IIUIC (at
least those two follow a similar approach to each other, whereas
virtcontainers might be a little different).
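For reference, a CNI plugin is driven by a network configuration file roughly like the hypothetical one below; the name, type, and bridge are made-up example values, and whether a stock plugin accepts an L2-only (empty-IPAM) configuration would need checking:

```json
{
  "cniVersion": "0.3.0",
  "name": "kubevirt-l2",
  "type": "bridge",
  "bridge": "kvbr0",
  "ipam": {}
}
```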

- fabian