Hi All,
I will keep sharing the technical queries that I get for Xvisor. I hope that this will help us all to be on the same page. If you get/have an interesting query then please do share it on xvisor-devel.
Please find the queries and their replies below:
Query: How does Xvisor compare with L4 Fiasco?
L4 Fiasco is a microkernel with virtualization support, whereas Xvisor is a monolithic kernel made for virtualization only. Since Xvisor is meant purely for virtualization, we don't support Unix-compatible user space programs. In fact, we only support hypervisor threads, which run with supervisor permissions. In Xvisor, we are very clear about being focused on virtualization only, and we add a feature only if it is absolutely necessary for virtualization or virtual machine management.
Query: How do you support full virtualization on ARM Cortex-A8?
Full virtualization enables Xvisor-ARM to boot unmodified Linux 2.6.30.10 and Linux 3.0.4 on a Realview-PB-A8 guest. We don't expect users of Xvisor to optimize their guest OS, but they can always spend time optimizing their guest OSes for running on Xvisor. Unlike Xen or KVM, Xvisor does not depend on QEMU for hardware emulation. Instead, Xvisor has its own device emulation framework. In the future, we will provide optional architecture-specific hypercalls which Xvisor users can use to optimize their guest OSes.
Query: Related to the blog article and comments, how far have you been with the "todo list", in particular related to getting guest OSes booting under Xvisor, Xvisor working for Cortex-A9, SMP, etc.?
The TODO list in the blog article comments is out-of-date. For the updated TODO list, visit:
https://github.com/xvisor/xvisor/blob/master/TODO
We definitely want to support SMP systems, but before that we want to squeeze out as much performance as possible and make the hypervisor threading framework more mature.
Query: How long has the Xvisor development taken so far (roughly in man-months)? Have you developed it alone?
I started some time back in June 2010 with experimental code running on PPC. Later I scrapped the PPC port due to poor availability of development resources (board or emulator).
In October 2010, Himanshu started with MIPS architecture.
In February 2011, I started with the ARM port and completed it in mid October 2011. (From the ARM perspective, it took me nearly 9 months.)
In November 2011 (roughly the first week), Sukanto started on the Beagle Board, and within 2 weeks he got Xvisor-ARM running for the Beagle Board on QEMU 0.15.xx.
Recently, we got our hardware Beagle Board and are trying to get Xvisor-ARM working on it.
The first announcement of Xvisor was made on 27th October 2011. Since then, many new developers have joined and are making valuable contributions. (You can look at the GitHub statistics & commit logs of Xvisor.)
To summarize, we currently have 5+ active developers working on Xvisor as a hobby project (part time).
Query: When running unmodified guest kernels, are you using security extensions? Are you limited to a single guest?
No, we don't use security extensions and don't assume security extensions to be present. We might use them in the future if we get a performance advantage and the host hardware has security extensions enabled. Also, we are not limited to a single guest; if you have tried our ARM demo (mentioned in the announcement), you can see we have provided an image for running two Linux guests on a single Realview-PB-A8 (for more details, refer to the README in the ARM demo).
Query: Do you have network driver support, GPU support, etc.? How do you plan to maintain driver support for the different SoCs being produced all the time? Do you do single-assign pass-through, or do you plan on re-writing QEMU emulation?
Our most important goal currently is Android virtualization; due to this, block device virtualization (for virtual SD cards) and frame buffer virtualization (for display sharing) have higher priority. We also have plans for network virtualization, but that's after Android virtualization. For network virtualization, we won't require a full-fledged network stack; from the hypervisor perspective, we just need a framework for emulating a network switch with different policies. In simpler words, all the hardware network ports and guest emulated network ports will be virtually connected to an emulated switch. The only place where we might require network stack support is when we want to have a telnet daemon or some remote management daemon; in such cases, we will use lwIP or uIP as a lightweight optional network stack just for these daemons.
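To make the emulated-switch idea above concrete, here is a minimal C sketch. This is purely illustrative and NOT the actual Xvisor API: the names vswitch_t, vswitch_attach, and vswitch_tx are hypothetical, and the "deliver to every other port" hub policy stands in for whatever forwarding policy the real framework would plug in.

```c
#include <assert.h>

#define VSWITCH_MAX_PORTS 8

/* Each port (a hardware NIC or a guest's emulated NIC) registers a
 * receive callback with the virtual switch. */
typedef void (*port_rx_t)(int port, const void *frame, int len);

typedef struct {
	port_rx_t rx[VSWITCH_MAX_PORTS];
	int nports;
} vswitch_t;

/* Attach a port; returns its port number, or -1 if the switch is full. */
int vswitch_attach(vswitch_t *sw, port_rx_t rx)
{
	if (sw->nports >= VSWITCH_MAX_PORTS)
		return -1;
	sw->rx[sw->nports] = rx;
	return sw->nports++;
}

/* Simplest "hub" policy: a frame transmitted on one port is delivered
 * to every other port. A smarter policy (e.g. MAC learning) would only
 * change this one function, which is the point of a policy framework. */
void vswitch_tx(vswitch_t *sw, int src, const void *frame, int len)
{
	for (int p = 0; p < sw->nports; p++)
		if (p != src)
			sw->rx[p](p, frame, len);
}
```

The key design point is that guests and hardware ports look identical to the switch, so policies (hub, learning switch, VLAN-style isolation) can be swapped without touching the port emulation.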
KVM & Xen have a great advantage in terms of host device driver support, but in our case it will require effort. To ease our task, we will try to keep the driver development APIs as close as possible to Linux.
Our device emulation framework is highly compatible with the QEMU emulation framework (in terms of the kind of APIs available). Due to this, we were able to port the Realview-PB-A8 related emulators from QEMU to Xvisor with great ease.
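For readers unfamiliar with this style of framework, a QEMU-like MMIO emulation interface can be sketched roughly as below. Again, this is a hypothetical illustration, not the real Xvisor or QEMU API: each emulated device supplies read/write callbacks for a guest-physical window, and the hypervisor dispatches trapped guest accesses to the matching emulator.

```c
#include <assert.h>
#include <stdint.h>

/* One emulated device: callbacks covering a guest-physical window.
 * (Names here are illustrative only.) */
typedef struct emulator {
	const char *name;
	uint64_t base, size;	/* guest-physical address window */
	uint32_t (*read)(struct emulator *e, uint64_t off);
	void (*write)(struct emulator *e, uint64_t off, uint32_t val);
	struct emulator *next;
} emulator_t;

static emulator_t *emu_list;

void emu_register(emulator_t *e)
{
	e->next = emu_list;
	emu_list = e;
}

/* Called when a guest MMIO read traps into the hypervisor:
 * find the emulator owning this address and invoke its callback. */
int emu_dispatch_read(uint64_t gpa, uint32_t *out)
{
	for (emulator_t *e = emu_list; e; e = e->next) {
		if (gpa >= e->base && gpa < e->base + e->size) {
			*out = e->read(e, gpa - e->base);
			return 0;
		}
	}
	return -1;	/* no emulator claims this address */
}
```

Because the emulator-side callback signatures stay close to QEMU's model, porting an existing QEMU device emulator mostly means re-registering the same read/write logic against the new dispatch layer.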
Query: Can Xvisor dynamically instantiate new VMs, or do they have to be configured statically? (An important use case in a data center.)
We have kept the scope for dynamically instantiating new VMs or new guests in our design, but it is currently not supported. Also, as I said previously, we are first targeting Android (embedded) virtualization, so we have not put any effort in this direction, but it's definitely an important use case.
Query: What kind of management infrastructure would be available? What are the related use cases?
We (specifically me) are very impressed by the libvirt project (http://libvirt.org/), which tries to provide a common management interface for different hypervisors. For now, we have planned to add a driver for Xvisor in libvirt after we have network support and network virtualization support in Xvisor.
Query: About the memory management: do a Normal VCPU and an Orphan VCPU share the same page table? (E.g., one Linux kernel runs on a Normal VCPU, which has its own page table management; how do its page table and the Xvisor hypervisor coexist?)
There is one master (or default) L1 page table that is shared by all Orphan VCPUs and Xvisor itself. Each Normal VCPU has its own L1 table, but when the Normal VCPU is created, its L1 page table is actually cloned from the master L1 page table. The entries in the L1 page table of a Normal VCPU are filled on demand, as required by that VCPU. The page tables are only switched during a context switch.
To avoid overlap between the virtual address spaces of a Normal VCPU and Xvisor, we set our virtual address pool in the reserved memory starting at 0xFF000000. (Note: the virtual address pool means all possible virtual addresses used by Xvisor code/data during execution. This also includes virtual addresses returned by iomap/iounmap calls by Xvisor drivers.) When it comes to using memory for itself, Xvisor is very miserly, so it can easily fit in the memory reserved for future DMA expansion (for more details, please look at <linux_source>/Documentation/arm/memory.txt). The problem of overlapping virtual address spaces exists for all ARM processors up to Cortex-A15, but Xvisor can easily share virtual address space with a Linux guest because it has a lower memory footprint.
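The clone-then-demand-fill scheme above can be sketched in a few lines of C for the ARMv7 short-descriptor format. This is a simplified illustration under stated assumptions (1 MiB section mappings only, no permission/domain bits, names like vcpu_l1_init are invented here), not Xvisor's actual MMU code.

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

#define L1_ENTRIES 4096	/* ARMv7 short-descriptor L1 table: 4096 x 1 MiB */

/* Master L1 table shared by Xvisor itself and all Orphan VCPUs. */
static uint32_t master_l1[L1_ENTRIES];

/* A Normal VCPU starts with a clone of the master table, so Xvisor's
 * own mappings (e.g. the reserved pool at 0xFF000000) are present in
 * every address space; guest entries are then filled on demand. */
void vcpu_l1_init(uint32_t *vcpu_l1)
{
	memcpy(vcpu_l1, master_l1, sizeof(master_l1));
}

/* Demand-fill one 1 MiB section when the guest faults on guest_va,
 * mapping it to host physical address host_pa.
 * 0x2 marks a section descriptor (permission bits omitted here). */
void vcpu_l1_demand_map(uint32_t *vcpu_l1, uint32_t guest_va,
			uint32_t host_pa)
{
	uint32_t idx = guest_va >> 20;	/* 1 MiB section index */
	vcpu_l1[idx] = (host_pa & 0xFFF00000u) | 0x2u;
}
```

The clone step is what lets Xvisor and a guest share one address space without conflicts: Xvisor's mappings live only in the reserved high region, so demand-filled guest entries never collide with them.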
Query: What about the positioning and development of the Xvisor project in the future?
Actually, we are a small group of open source hackers working part time on Xvisor. Our immediate goal is to add more virtualization support to Xvisor such that we can boot multiple instances of the Android OS. Our TODO list is part of our source itself and keeps getting updated. Our high-level goals, in order of priority, are: 1. Android virtualization, 2. SMP support, and 3. data center virtualization. For now, we will be focusing on Android (embedded) virtualization.
Best Regards,
Anup Patel