2012/5/5 Anup Patel <an...@brainfault.org>:
> This announcement is to show an apples-to-apples performance comparison
> between Xvisor ARM and KVM ARM running on the VExpress-A15 Fast Model.
I would strongly caution against trying to do any performance/timing
type tests if you're still running on the ARM Fast Model -- they are
not representative of performance characteristics on hardware
and you really can't draw any conclusions about real world
performance by timing things on a model. It's quite easy to get
into a situation where all you're measuring is "does my code happen
to do a lot of some perfectly reasonable operation which happens
to be hard and slow to implement for the model?".
(Also, KVM for ARM is still under development and we haven't
yet made several of the obvious performance improvements like
in-kernel irqchip and timer support, so it's not really a very
useful thing to compare against yet.)
-- PMM
On Sat, 5 May 2012 15:31:36 +0100
> Also, in my view, even if we have in-kernel emulation of the irqchip and timer, Xvisor ARM will still perform better than KVM ARM, because the code path traversed in KVM ARM will always be longer.
>
> (Please note my view about in-kernel emulation is based entirely on a code-flow comparison of Xvisor ARM emulation and a possible KVM ARM in-kernel emulation.)
>
Sweet. Can I borrow your crystal ball?
M.
> On Sat, May 5, 2012 at 7:54 PM, Anup Patel <an...@brainfault.org> wrote:
> Hi PMM,
>
> I agree we cannot predict real-world performance based on performance on ARM Fast Models, but if system A performs better than system B on an ARM Fast Model or QEMU, then system A will also perform better than system B in the real world. Of course, the scale of the difference in performance between system A and system B will differ in the real world.
>
> The previous announcement only proves that Xvisor ARM is relatively better than KVM ARM.
>
> Regards,
> --Anup
>
>
> On Sat, May 5, 2012 at 3:36 PM, Peter Maydell <peter....@linaro.org> wrote:
> 2012/5/5 Anup Patel <an...@brainfault.org>:
> > This announcement is to show an apples-to-apples performance comparison
> > between Xvisor ARM and KVM ARM running on the VExpress-A15 Fast Model.
>
> I would strongly caution against trying to do any performance/timing
> type tests if you're still running on the ARM Fast Model -- they are
> not representative of performance characteristics on hardware
> and you really can't draw any conclusions about real world
> performance by timing things on a model. It's quite easy to get
> into a situation where all you're measuring is "does my code happen
> to do a lot of some perfectly reasonable operation which happens
> to be hard and slow to implement for the model?".
>
> (Also, KVM for ARM is still under development and we haven't
> yet made several of the obvious performance improvements like
> in-kernel irqchip and timer support, so it's not really a very
> useful thing to compare against yet.)
>
> -- PMM
>
>
--
I'm the slime oozin' out from your TV set...
On the Fast Model.
On 6 May 2012 05:22, Anup Patel <an...@brainfault.org> wrote:
> Also, can you give an example of a code sequence which is faster on the model
> and slower in the real world? As far as I know, ARM Fast Models are internally
> TLM-based models, and if a TLM-based model is emulating a timer chip at clock
> rate X then it is quite precisely clock rate X.
Support for TLM does not require that the underlying model is cycle
accurate (you can have 'loosely timed' behaviour).
You might want to read the Fast Models documentation, which tries
to be clear about what the models do and don't provide. In particular:
http://infocenter.arm.com/help/topic/com.arm.doc.dui0423l/ch02s01s02.html
"Fast models cannot be used to:
* model cycle counting
* model software performance
"
> Of course CPU emulation and computation power will be less compared to the
> real world. To see this behaviour, try booting Linux on a Fast Model or QEMU,
> leave it for hours, then come back and check the time elapsed; you will
> definitely see the same amount of time elapsed as in the real world.
Nobody's arguing that the models are faster than hardware!
Let's try a simple example with some numbers representing
relative speeds:
operation A: h/w: 1 ; model: 5
operation B: h/w: 3 ; model: 30
where we're comparing two equivalent code sequences, "A A A A" vs "B".
On hardware, "B" is faster. On the model, "A A A A" beats "B".
(Both sequences are slower on the model than on the hardware, obviously.)
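The arithmetic above is quick to check; a small sketch, using only the illustrative relative-cost numbers from this example (not real measurements):

```python
# Illustrative relative costs from the example above (not real measurements).
cost = {
    "hw":    {"A": 1, "B": 3},
    "model": {"A": 5, "B": 30},
}

def total(platform, sequence):
    """Sum the per-operation cost of a code sequence on a platform."""
    return sum(cost[platform][op] for op in sequence)

# Two equivalent code sequences: four A operations vs one B operation.
seq_a = ["A"] * 4
seq_b = ["B"]

print(total("hw", seq_a), total("hw", seq_b))        # 4 vs 3: "B" wins on hardware
print(total("model", seq_a), total("model", seq_b))  # 20 vs 30: "A A A A" wins on the model
```

The ranking flips because A is slowed down 5x by the model while B is slowed down 10x, so timings taken on the model say nothing about which sequence wins on hardware.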
The point is that some operations will be vastly slower
on the model, and some operations merely moderately slower. Which
of any two code sequences is faster depends at least as much on
whether it uses operations that are disproportionately worse
on the model. A trivial example of this is VFP -- certainly QEMU
has to do complex software emulation of the floating point ops to
maintain bit-for-bit accuracy, which makes them very slow, to the
point where a hand-optimised integer-assembly codec is likely to
be faster on the model than a Neon/VFP-using codec, even though
of course the Neon codec will be faster on hardware.
[NB: this is itself a big simplification: model performance will
depend on a lot of interacting things and is not purely a
same-every-time slowdown per operation. Some operations effectively
slow down what happens after them, for instance on QEMU if you do
something that makes us flush our cache of translated code. And
if for instance you have a periodic timer then the fact the model
is generally slower means you execute proportionally more insns in
the timer interrupt, so inefficiency or slowness in that code path
has disproportionately more effect on overall speed than it does
on hardware. There are other complications too...]
> The results in the announcement are not baseless; we have quite a few reasons
> to believe Xvisor ARM will perform better than KVM ARM in the real world too.
I'm not stating a position on whether KVM will be better or worse
than Xvisor. I'm just pointing out that you can't base an argument
on the faulty assumption that performance inside a model can tell
you anything useful about performance on hardware.
-- PMM
_______________________________________________
Android-virt mailing list
Androi...@lists.cs.columbia.edu
https://lists.cs.columbia.edu/cucslists/listinfo/android-virt
The intention behind the announcement was to inform people interested in virtualization about Xvisor. The announcement was early info about the achievements of Xvisor ARM (for now, compared to KVM ARM). We are certainly planning a scientific paper on Xvisor.
Also, I do agree that KVM ARM can be further optimized, but as I mentioned in my previous replies, "KVM ARM will end up putting more and more stuff in-kernel". For now you can think of Xvisor ARM as KVM ARM doing everything in-kernel. Xvisor ARM is also being optimized, so as time passes it will improve further too. It is common wisdom that no hypervisor in the world can beat native performance. Xvisor ARM is already very, very close to native performance, and KVM ARM will come close to native performance only by increasing its monolithic nature (i.e. doing more things in-kernel). If monolithic hypervisors perform so well, then why not have a monolithic hypervisor made for virtualization purposes only? That was the motivation behind writing Xvisor.
Apart from high performance, Xvisor has many interesting features, such as:
Ability to work without hardware virtualization support - Xvisor ARM can boot multiple unmodified Linux guests even on hosts which do not implement the virtualization extensions. In contrast, KVM ARM does not work without the virtualization extensions. The range of host hardware that Xvisor ARM can support is therefore much wider than what KVM ARM can support; Xvisor ARM can in fact run on old ARMv5 processors too.
Tree-based configuration - To create a guest, we just describe it in the form of a device tree (possible even at runtime). In contrast, for KVM one needs to add the support in QEMU and recompile the binaries.
Pass-through hardware access - Hardware not accessed or virtualized by Xvisor can be used in pass-through mode. Providing a guest with a pass-through accessible device is just a matter of adding a tree node and configuring irq routing information in the guest tree. It is not just PCI devices; we can provide any kind of device as pass-through accessible (note: if a device does its own DMA, then there must be an IOMMU or SysMMU, otherwise it would be a security breach). We have already tried out a serial port and a NIC as pass-through devices.
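To make the tree-based configuration concrete, a guest description in this style might look roughly like the sketch below. This is purely illustrative: the node and property names (`vcpu_count`, `physical_addr`, `host_irq`, and so on) are hypothetical stand-ins, not Xvisor's actual configuration bindings.

```dts
/* Hypothetical guest description sketch -- node and property names are
 * illustrative, not Xvisor's actual bindings. */
guest0 {
    vcpu_count = <1>;

    /* Guest RAM region. */
    mem0 {
        physical_addr = <0x80000000>;
        physical_size = <0x04000000>;   /* 64 MiB */
    };

    /* Emulated UART for the guest console. */
    uart0 {
        compatible = "emulated,pl011";
        physical_addr = <0x10009000>;
    };

    /* Pass-through serial port: a host device mapped straight into the
     * guest, with its interrupt routed through. */
    serial1 {
        passthrough;
        host_device = "uart1";
        host_irq = <38>;
        guest_irq = <38>;
    };
};
```

The point of the feature is visible even in this sketch: both an emulated device and a pass-through device are just nodes in the guest tree, so no tool rebuild is needed to change the guest's hardware.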
We can compare KVM's advantages with Xvisor's as follows:
Scheduler - The Linux kernel scheduler is a very mature and proven OS scheduler, but a hypervisor scheduler can be quite different: scheduling processes and scheduling VMs can be very different problems. In the case of VMs, we can use information such as the amount of emulated IO done, the amount of time spent waiting for an irq, etc. to improve the quality of server consolidation.
Driver base - Xvisor has (and will have) driver framework APIs similar to Linux's, so porting a driver will in most cases be just a one-to-one replacement of APIs.
User space tools - For starters, Xvisor will use the libvirt tools (or a similar open source initiative) for remote management.
Co-existing host processes - Xvisor is not an OS; it is made for virtualization only, so there are no processes. Of course, Xvisor has an internal threading framework, but most of the time these background threads are sleeping, doing nothing. All the management commands are provided by a management terminal daemon (which is a background thread in Xvisor).