Hello everyone,

I filed an issue (6577) to propose this topic two days ago. As this is a big topic (suggested by @mazzystr, thank you @mazzystr), I decided to open this conversation to kick off a discussion.

1. Current state.
Currently, KubeVirt can manage KVM (type-2) VMs not only on x86 but also on arm64 (experimental stage) via libvirt, and the experience has been good for me. But it cannot support a type-1 hypervisor yet.

And the features below are not supported for the Xen hypervisor currently, according to our tests.

The investigation and exploration described above have been done. We are working on Arm and would be very pleased to add the Xen hypervisor to KubeVirt. Is the community interested in this? We are willing to harden our patches and raise a PR for the implementation.

Looking forward to suggestions and discussion on this topic.
Thanks a lot.
BR
Jun
--
You received this message because you are subscribed to the Google Groups "kubevirt-dev" group.
To unsubscribe from this group and stop receiving emails from it, send an email to kubevirt-dev...@googlegroups.com.
To view this discussion on the web visit https://groups.google.com/d/msgid/kubevirt-dev/43373f01-3146-427e-bc3d-71439af55723n%40googlegroups.com.
So I wasn't at the meeting, but glancing at the notes, I didn't see any mention of virtual kubelet (and its ilk), so I think the Xen people should consider taking a close look at that.
The reason being: viewing KubeVirt as needing containers limits it to its virt-launcher interface (i.e. running each VM in a pod). This is good in some ways (you get certain things for free, e.g. if the VM and pod effectively share the same IP, things like services come for free), but limiting in other ways. A question the KubeVirt people might want to answer: how wedded are they to the virt-launcher approach, and would they be open to abstracting it away?
But that might not be 100% necessary, as other approaches are available.
Another approach is what I did years ago (https://github.com/sjpotter/infranetes, dead/defunct/don't use; it hasn't been run by me since kube 1.07 I believe) and what the virtual-kubelet people have done since (https://github.com/virtual-kubelet/virtual-kubelet). Basically, reimagine what a pod can be: not simply a set of containers, but anything that is backed by an IP. In infranetes, I implemented it as a container runtime interface implementation with multiple pluggable modules (I wrote aws, vsphere, gcp and virtualbox providers) to enable one to run VMs that, to Kubernetes, looked and behaved like pods. In virtual-kubelet, they seem to have gone about making the kubelet itself pluggable (which probably gives a bit more control; the CRI was very limiting and really did imagine it was just containers, and I really abused that API).

One of the issues I faced (and something I would change if I went back to the drawing board with infranetes) is that the pod struct isn't really the best way to define a VM (again, I abused it). But by using pods, one gains a lot for free (scheduling and service IP mapping, among other things). If I were to redo it, I'd probably create my own CRD with an operator/controller that takes a well-formed structure and turns it into a pod struct (one the end user doesn't have to know about). So lots of configuration could go into annotations/labels, but the end user wouldn't have to know about that magical incantation; they would have a well-documented, type-safe API/struct to use via the CRD, which is then mangled into a pod to be delivered to a kubelet that knows how to handle this pod with the funky annotations.

Just a thought that might be useful to them in other ways of approaching the issue.
On Wed, Nov 3, 2021 at 7:46 PM Shaya Potter <spo...@gmail.com> wrote:
> [...]

I think in general a lot is possible.
However, I personally think KubeVirt works pretty well because it has a pretty narrow focus on traditional Kubernetes (no virtual-kubelet, not trying to run side by side with a container runtime implementation but rather running inside a pod, VMs inherit pod assumptions, ...).
At some point, things diverge so much from these core assumptions that I don't think the benefit of having a common CRD definition outweighs the disadvantages of the additional complexity, especially since it will probably be hard for all these different implementations to keep up with supporting the whole API. Therefore I think that if, e.g., Xen were to be supported in KubeVirt, it would have to somehow fit into the model of a standard k8s deployment. If that is not possible, KubeVirt may not be the right place for it.