> I will try your hint wrt the Intel USB controller; I just tried to avoid
> passing it through so far because I require more ports for dom0 than for
> domX and the PCIE card only has two external ports... I guess I'll go
> for an external USB hub if it works...
Tried it, and it didn't work, unfortunately, as the PCIe card is only
recognized after the kernel is loaded (plus I like to be able to use my
keyboard in the BIOS or GRUB...). Moreover, I cannot unbind the
controller; I guess that could be fixed somehow, but the first reason
already made me stop investing further time.
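For anyone who wants to retry the unbind step, the usual sysfs flow for detaching a controller from its dom0 driver and handing it to pciback looks roughly like the sketch below; the BDF address and the xhci_hcd driver name are assumptions (check yours with lspci and the device's driver symlink):

```shell
# Hypothetical PCI address of the USB 3.0 controller; find the real one
# with "lspci" and prefix the PCI domain (usually 0000:).
BDF=0000:03:00.0

# Detach the device from its current dom0 driver (xhci_hcd assumed here).
echo "$BDF" > /sys/bus/pci/drivers/xhci_hcd/unbind

# Tell xen-pciback to accept the device, then bind it, so it can be
# passed through to a domU.
echo "$BDF" > /sys/bus/pci/drivers/pciback/new_slot
echo "$BDF" > /sys/bus/pci/drivers/pciback/bind
```

Note that this only addresses the unbind part; it does nothing for the keyboard-in-BIOS/GRUB problem.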
The analysis is not quite that straightforward, because running a device through PCI passthrough is a significantly different environment from the typical non-passthrough environment of dom0. There are certain Linux device drivers that may not be robust enough to work in the passthrough environment.
I mentioned the kernel as a possibility, because I have seen passthrough break simply by changing from a pre-4.4 kernel to the 4.4 kernel.
It seems like a lot of changes to PCI passthrough - both hypervisor and kernel side - made in the last year were in connection with issues like XSAs 120 and 157. This may be connected to why Xen 4.4 may work better than 4.6.
Hello,
On Saturday, January 30, 2016 at 4:16:11 PM UTC+1, Eric Shelton wrote:
> The analysis is not quite that straightforward, because running a device through PCI passthrough is a significantly different environment from the typical non-passthrough environment of dom0. There are certain Linux device drivers that may not be robust enough to work in the passthrough environment.
Good point. Considering how PV works, this sounds plausible. This also brings some ideas:
* For VT-d (IOMMU) devices, these problems should not be present on HVMs.
* In the long term, switching to PVH might be a solution, provided that VT-d becomes available on virtually all relevant hardware.
Anyone with similar issues:
* Do you have VT-d/IOMMU?
* If yes, does it work on HVM?
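For anyone unsure about the first question: on an Intel box, firmware that advertises VT-d exposes a DMAR ACPI table, and Xen reports IOMMU status in its boot log. A quick check from dom0 (the commands are standard, but the exact log wording varies between Xen versions):

```shell
# VT-d is advertised via the DMAR ACPI table (AMD's IOMMU uses IVRS).
if [ -e /sys/firmware/acpi/tables/DMAR ]; then
    echo "DMAR table present - firmware advertises VT-d"
else
    echo "no DMAR table - VT-d absent or disabled in the BIOS"
fi

# Under Xen, dom0 can also ask the hypervisor directly:
xl dmesg | grep -i "I/O virtualisation"
```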
> I mentioned the kernel as a possibility, because I have seen passthrough break simply by changing from a pre-4.4 kernel to the 4.4 kernel.
Hmm, do you mean the dom0 kernel or the domU kernel? I've tried downgrading the domU kernel to every installed version, but without any positive result. I haven't tried downgrading the dom0 kernel, as there might be more security considerations there.
> It seems like a lot of changes to PCI passthrough - both hypervisor and kernel side - made in the last year were in connection with issues like XSAs 120 and 157. This may be connected to why Xen 4.4 may work better than 4.6.
So, 3.1 might work worse than 3.0…
On Saturday, January 30, 2016 at 12:31:08 PM UTC-5, Vít Šesták wrote:
> Hello,
> On Saturday, January 30, 2016 at 4:16:11 PM UTC+1, Eric Shelton wrote:
> > The analysis is not quite that straightforward, because running a device through PCI passthrough is a significantly different environment from the typical non-passthrough environment of dom0. There are certain Linux device drivers that may not be robust enough to work in the passthrough environment.
> Good point. Considering how PV works, this sounds plausible. This also brings some ideas:
> * For a VT-d (IOMMU) devices, these problems should not be present on HVMs.

The main issue for drivers and passthrough devices is that Xen restricts access to certain PCI config registers, and some drivers may expect or even require access to those registers. I don't think that issue is any different for HVM domains.

> * In long term, switching to PVH might be a solution provided that VT-d will be on virtually all relevant devices.

Joanna suggested a switch to PVH for other reasons - reducing hypervisor complexity by taking advantage of EPT (https://github.com/QubesOS/qubes-secpack/blob/master/QSBs/qsb-022-2015.txt):

> This bug might also be considered an argument for the view of ditching
> of para-virtualized (PV) VMs, and switch to HVMs, or better yet: PV
> HVMs for better isolation. This seems to be a valid argument indeed,
> but only if the underlying processor also supports SLAT (e.g. EPT).
> Otherwise the complexity of the hypervisor code needed to implement
> Shadow Paging offsets any potential benefits of using CPU-assisted
> virtualization, such as VT-x. Luckily it seems majority of the recent
> modern laptops does support SLAT, so this might be the direction we go
> for Qubes 4.

> Anyone with similar issues:
> * Do you have VT-d/IOMMU?
> * If yes, does it work on HVM?

PCI passthrough is working with HVM domains since Qubes 3.0.
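To actually test the HVM question, one can create a standalone HVM and attach the suspect device to it. A sketch using the Qubes 3.x command-line tools; the VM name and the PCI address are examples only:

```shell
# Create a standalone HVM (the name "test-hvm" is just an example).
qvm-create --hvm --label red test-hvm

# Attach the troublesome device by its BDF (look it up with lspci).
qvm-pci -a test-hvm 03:00.0

# Boot the HVM and see whether the driver copes better there.
qvm-start test-hvm
```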
Disabling MSI (Message Signaled Interrupts) for the USB 3.0 card specifically may solve your issue.
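A coarse but simple way to try this in Qubes is via the kernel command line of the VM that owns the controller: pci=nomsi turns MSI off for every device in that VM, which is broader than just the USB 3.0 card but easy to test. A sketch assuming a Qubes 3.x VM named sys-usb (check the VM's current kernelopts first, since -s replaces the whole string):

```shell
# Show the VM's current settings, including kernelopts.
qvm-prefs sys-usb

# Replace kernelopts with one that disables MSI for the whole VM.
# (Append pci=nomsi to whatever options were already listed.)
qvm-prefs -s sys-usb kernelopts "pci=nomsi"

# Restart the VM so the new command line takes effect.
qvm-shutdown --wait sys-usb
qvm-start sys-usb
```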