> Hi,
>
> As far as I understand, HVMs should be faster than PV. With the latter, the OS makes hyper calls to
> the hyper-visor while HVMs simply see virtualized hardware through the hyper visor.
I always thought the opposite was true: PV should be faster because it doesn't have to go through a hardware emulator; the guest talks to the hypervisor directly through an efficient protocol (hypercalls and shared-memory rings). In fact, that's largely why PV was created in the first place: before VT-x/AMD-V existed, x86 couldn't be efficiently virtualized without the guest's cooperation. Otherwise Xen would probably have made HVM the only mode, much like KVM does.
Btw, Qubes 4.0 uses PVH as the default mode, except for VMs with PCI passthrough. The reason is that PV, while efficient, has turned out to be really insecure and is being deprecated. PVH runs the VM under VT-x like an HVM, but uses PV drivers on the guest side to make I/O faster by bypassing the emulated hardware.

For example, PV(H) guests use the blkfront driver (disks show up as xvda), while HVM guests see an emulated SATA controller (sda), which QEMU emulates in userspace. Similarly, PV(H) guests use netfront (the backend side shows up as a "vif" interface), while HVMs get an emulated Ethernet card from QEMU. For PCI devices, PV uses pcifront, but I think that is deprecated too, which is why Qubes 4.0 uses HVM for all passthrough VMs. HVMs rely on platform features (VT-d/IOMMU) to pass the actual PCI hardware through to the guest.
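A quick way to see which path a given guest is actually on is to look at its device names from inside the guest (a rough sketch; assumes a Linux guest with sysfs and util-linux installed):

```shell
# Inside the guest: xvd* block devices indicate the PV (blkfront) path,
# while sd* devices indicate an emulated SATA/SCSI controller from QEMU.
ls /sys/block/

# The TRAN column shows the transport; PV disks report none,
# emulated disks report sata/scsi.
lsblk -d -o NAME,TRAN 2>/dev/null || true
```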
HVM:
DomU driver -> Xen -> QEMU (device emulation) -> Dom0 hardware driver -> hardware
PV:
DomU PV frontend -> shared ring (Xen) -> Dom0 PV backend -> Dom0 hardware driver -> hardware
At least that's my understanding.
You could try running Windows in PV mode. I'm not sure it's supported by Qubes out of the box, but you could probably make it work with the upstream PV drivers: https://xenproject.org/windows-pv-drivers/
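In Qubes 4.0 the mode is a per-VM property, so switching it for a test is a one-liner (a sketch; "my-windows-vm" is a placeholder name, and the PV drivers from the link above would still need to be installed inside the guest for it to boot usefully):

```shell
# Show the current virtualization mode of the VM (hvm, pvh, or pv)
qvm-prefs my-windows-vm virt_mode

# Switch it to PV for testing; set it back to hvm if it doesn't boot
qvm-prefs my-windows-vm virt_mode pv
```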
Also consider that Windows might simply be slower than Linux, especially in resource-constrained environments like a VM. Are your Linux HVMs also slow/laggy?