Kubevirt with type-1 hypervisor (Xen)


Jun Zhang

Oct 13, 2021, 11:10:44 PM
to kubevirt-dev
Hello everyone,

I filed an issue (#6577) two days ago to propose this topic. As this is a big topic (as @mazzystr suggested, thank you @mazzystr), I decided to open this thread to kick off a discussion.

1. Current state.
Currently, KubeVirt can manage KVM (type-2) VMs via libvirt, not only on x86 but also on arm64 (experimental stage), and the experience has been good for me. But it cannot support a type-1 hypervisor yet.

2. The investigation, feasibility analysis and testing.
Since the KubeVirt architecture manages VMs via libvirt, and libvirt supports the Xen hypervisor, it is feasible to add Xen support to KubeVirt. Before proposing this new feature, we did a lot of exploration, testing and analysis of the Xen hypervisor, and we successfully finished a prototype that can launch a Xen VM on both x86 and arm64. The basic features below have been verified on x86 and arm64 hosts by modifying some hard-coded values.
  • Schedule a xen VM on a kubernetes cluster
  • Launch a xen VM
  • Stop a xen VM
  • Pause/unpause a xen VM
  • Login a xen VM by virtctl console
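To illustrate what such a hard-coded change amounts to: libvirt selects the hypervisor through the domain's `type` attribute, so a Xen-capable converter would emit `type='xen'` where KubeVirt today emits `type='kvm'`. This is a hedged sketch, not KubeVirt's actual virt-launcher converter; the helper below is hypothetical:

```python
# Sketch: how a libvirt domain definition differs between KVM and Xen.
# Hypothetical helper for illustration, not KubeVirt's converter code.
import xml.etree.ElementTree as ET

def minimal_domain_xml(name: str, hypervisor: str = "kvm") -> str:
    """Build a minimal libvirt domain XML for the given hypervisor.

    hypervisor: "kvm" (reached via qemu:///system) or "xen" (xen:///system).
    """
    if hypervisor not in ("kvm", "xen"):
        raise ValueError(f"unsupported hypervisor: {hypervisor}")
    domain = ET.Element("domain", type=hypervisor)
    ET.SubElement(domain, "name").text = name
    ET.SubElement(domain, "memory", unit="MiB").text = "512"
    os_elem = ET.SubElement(domain, "os")
    # Both KVM and Xen HVM guests use os type "hvm" in libvirt.
    ET.SubElement(os_elem, "type").text = "hvm"
    return ET.tostring(domain, encoding="unicode")

xml_xen = minimal_domain_xml("testvm", "xen")
```

Everything else in the domain (name, memory, os) stays structurally the same, which is why a prototype by "modifying some hard code" is plausible at all.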

And according to our tests, the features below are not currently supported for the Xen hypervisor.

  • Hotplug volumes
  • Disks with virtio bus
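Since virtio disks are not available to Xen guests in our tests, a converter would also have to remap the requested disk bus, for example to libvirt's paravirtual "xen" bus (xvd* devices). A hypothetical sketch of that fallback, not actual KubeVirt logic:

```python
# Sketch: picking a disk bus the hypervisor can actually provide.
# Hypothetical remapping logic; assumes libvirt's "xen" paravirtual bus.
def disk_bus_for(hypervisor: str, requested_bus: str) -> str:
    """Return the requested bus, falling back for Xen where virtio
    is unavailable and guests use paravirtual xvd* disks instead."""
    if hypervisor == "xen" and requested_bus == "virtio":
        return "xen"
    return requested_bus

print(disk_bus_for("xen", "virtio"))
```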

That covers the investigation and exploration done so far. We work at Arm and would be very pleased to add Xen hypervisor support to KubeVirt. Is the community interested in this? We are willing to harden our patches and raise PRs for the implementation.

Looking forward to your suggestions and discussion on this topic.

Thanks a lot.


BR

Jun


Michael Zhao

Oct 20, 2021, 5:30:38 AM
to kubevirt-dev
Hi, 

I have attached some slides introducing the proposal for Xen support at a very high level. Hopefully we can go through them and discuss them in today's community meeting.

BR
Michael
Support Xen in KubeVirt.pptx

Roman Mohr

Nov 3, 2021, 12:28:48 PM
to kubevirt-dev
Hi Jun,

Thanks again for the presentation about a possible XEN integration in the community meeting two weeks ago.

On Thursday, October 14, 2021 at 5:10:44 AM UTC+2 junzh...@gmail.com wrote:
Hello everyone,

[...]

1. Current state.
Currently, KubeVirt can manage KVM (type-2) VMs via libvirt, not only on x86 but also on arm64 (experimental stage), and the experience has been good for me. But it cannot support a type-1 hypervisor yet.


Just for completeness: KVM is also a type-1 hypervisor, see  [1], [2], [3] for reference. What KVM does not have in contrast to XEN is the dom0 concept.

As we discussed in the community meeting [4], the dom0 concept potentially imposes some interesting difficulties when it comes to the integration of Kubernetes and KubeVirt with XEN.

I am looking forward to hearing about your progress in the next meetings.

Best regards,
Roman

[1] https://www.spinics.net/lists/kvm/msg150882.html
[2] https://apps.dtic.mil/sti/pdfs/AD0772809.pdf
[3] https://searchservervirtualization.techtarget.com/feature/Whats-the-difference-between-Type-1-and-Type-2-hypervisors
[4] https://docs.google.com/document/d/1kyhpWlEPzZtQJSjJlAqhPcn3t0Mt_o0amhpuNPGs1Ls#heading=h.2dzf25e4q2bm

Shaya Potter

Nov 3, 2021, 2:46:20 PM
to kubevirt-dev
So I wasn't at the meeting, but glancing at the notes I didn't see any mention of virtual kubelet (and its ilk), so I think the Xen people should consider taking a close look at that.

The reason being: viewing KubeVirt as needing containers limits it to its virt-launcher interface (i.e. running each VM in a pod). This is good in some ways (certain things come for free; e.g. since the VM and the pod effectively share the same IP, things like services are free), but limiting in other ways. A question the KubeVirt people might want to answer: how wedded are they to the virt-launcher approach, and would they be open to abstracting it away?

But that might not be 100% necessary, as other approaches are available.

Another approach is what I did years ago (https://github.com/sjpotter/infranetes; dead/defunct/don't use, I haven't run it since Kubernetes 1.7, I believe) and what the virtual-kubelet people have done since (https://github.com/virtual-kubelet/virtual-kubelet): basically, reimagine what a pod can be, i.e. not simply a set of containers but anything backed by an IP. In Infranetes, I implemented it as a Container Runtime Interface (CRI) implementation with multiple pluggable modules (I wrote AWS, vSphere, GCP and VirtualBox providers) so one could run VMs that looked and behaved like pods to Kubernetes. The virtual-kubelet people seem to have gone about making the kubelet itself pluggable instead (which probably gives a bit more control; the CRI was very limiting and really did assume it was dealing with containers, and I really abused that API).

One of the issues I faced (and something I would change if I went back to the drawing board with Infranetes) is that the pod struct isn't really the best way to define a VM (again, I abused it). But by using pods, one gains a lot for free (scheduling and service IP mapping, among other things). If I were to redo it, I'd probably create my own CRD with an operator/controller that takes a well-formed structure and turns it into a pod struct the end user doesn't have to know about. A lot of configuration could go into annotations/labels, but the end user wouldn't have to know that magical incantation; they would have a well-documented, type-safe API/struct to use via the CRD, which is then mangled into a pod and delivered to a kubelet that knows how to handle a pod with those funky annotations.
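The CRD-to-pod translation described above could be sketched like this (all names, field names, annotation keys, and images here are hypothetical illustrations):

```python
import json

# Sketch of a CRD-to-pod translation: a well-formed VM spec becomes a
# pod manifest, with the VM config tucked away in an annotation that a
# specialized kubelet could recognize. All names are hypothetical.
def vm_to_pod(vm_spec: dict) -> dict:
    """Turn a VM spec (from a hypothetical CRD) into a pod manifest."""
    return {
        "apiVersion": "v1",
        "kind": "Pod",
        "metadata": {
            "name": vm_spec["name"],
            "annotations": {
                # The "magical incantation" the end user never sees:
                "example.com/vm-config": json.dumps(vm_spec),
            },
        },
        "spec": {
            "containers": [{
                # Placeholder container; its resource requests mirror
                # the VM so Kubernetes scheduling works for free.
                "name": "vm-placeholder",
                "image": "example.com/vm-shim:latest",
                "resources": {"requests": {
                    "cpu": str(vm_spec["cpus"]),
                    "memory": vm_spec["memory"],
                }},
            }],
        },
    }

pod = vm_to_pod({"name": "demo", "cpus": 2, "memory": "2Gi"})
```

The user only ever writes the typed CRD object; the controller produces and owns the pod.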

Just a thought that might be useful to them as another way of approaching the issue.

--
You received this message because you are subscribed to the Google Groups "kubevirt-dev" group.
To unsubscribe from this group and stop receiving emails from it, send an email to kubevirt-dev...@googlegroups.com.
To view this discussion on the web visit https://groups.google.com/d/msgid/kubevirt-dev/43373f01-3146-427e-bc3d-71439af55723n%40googlegroups.com.

Roman Mohr

Nov 4, 2021, 4:28:20 AM
to Shaya Potter, kubevirt-dev
On Wed, Nov 3, 2021 at 7:46 PM Shaya Potter <spo...@gmail.com> wrote:
[...]

I think in general a lot is possible. However, I personally think KubeVirt works pretty well because it has a narrow focus on traditional Kubernetes (no virtual kubelet, not trying to run side by side with a container runtime implementation but rather running inside a pod, VMs inherit pod assumptions, ...).

At some point things diverge so much from these core assumptions that I don't think the benefit of having a common CRD definition outweighs the disadvantages of the additional complexity, especially since it will probably be hard for all these different implementations to keep up with supporting the whole API. Therefore I think that if, for example, Xen were supported in KubeVirt, it would have to somehow fit into the model of a standard k8s deployment. If that is not possible, KubeVirt may not be the right place for it. I hope we gain better insight into that over the next weeks/months.

Best regards,
Roman
 

Fabian Deutsch

Nov 4, 2021, 4:35:38 AM
to Roman Mohr, Shaya Potter, kubevirt-dev
On Thu, Nov 4, 2021 at 9:28 AM Roman Mohr <rm...@redhat.com> wrote:
[...]
Therefore I think, if e.g. XEN would be supported in KubeVirt, it would have to somehow fit in the model of a standard k8s deployment. If that would not be possible, KubeVirt may not be the right place for it.

+1

As much as I understand the desire to leverage Xen, supporting it would require us to give up the aforementioned core principle of running (and containing) a complete VM inside a pod.
That principle is not just a side effect; there are resource management, security, observability, mental model, and other reasons why this approach (a pod is the atomic compute unit) was chosen. If, for whatever reason, we step away from it, then many assumptions and integrations will break, mostly at the expense of usability and user expectations.
Thus the point where a VM is no longer "contained" by a pod might be the time to look at a different abstraction.
 

Maya Rashish

Nov 4, 2021, 10:44:28 AM
to kubevi...@googlegroups.com
This discussion seems to be missing the motivation, which is that arm64 virtualization was originally designed with a bare-bones hypervisor in mind, one that doesn't need an MMU for itself.
KVM does work, but it comes at a performance penalty: there is a small hypervisor doing things on its behalf that it has to communicate with for every operation.

It seems like newer versions(?) of arm64 virtualization support aren't as limited; what do you think about the various efforts to run Linux in EL2?

On 3/11/21 18:28, Roman Mohr wrote:
> [...]
> [1] https://www.spinics.net/lists/kvm/msg150882.html
> [2] https://apps.dtic.mil/sti/pdfs/AD0772809.pdf
> [3] https://searchservervirtualization.techtarget.com/feature/Whats-the-difference-between-Type-1-and-Type-2-hypervisors
> [4] https://docs.google.com/document/d/1kyhpWlEPzZtQJSjJlAqhPcn3t0Mt_o0amhpuNPGs1Ls#heading=h.2dzf25e4q2bm
