SUCCESS: GPU passthrough on Qubes 3.1 (Xen 4.6.1) / Radeon 6950 / Win 7 & Win 8.1 (TUTORIAL + HCL)


Marcus at WetwareLabs

Jun 22, 2016, 11:26:50 AM
to qubes-users
Hello all,

I've been tinkering with GPU passthrough for the past couple of weeks, and I thought I should now share some of my findings. This is broadly similar to the earlier report on GPU passthrough here (https://groups.google.com/forum/#!searchin/qubes-users/passthrough/qubes-users/cmPRMOkxkdA/gIV68O0-CQAJ).

I started with Nvidia GTX 980, but I had no luck with ANY of the Xen hypervisors or Qubes versions. Please see my other thread for more information (https://groups.google.com/forum/#!searchin/qubes-users/passthrough/qubes-users/PuZLWxhTgM0/pWe7LXI-AgAJ).

However, after I switched to a Radeon 6950, I've had success with all the Xen versions. So I guess it's an issue with Nvidia driver initialization. On a side note, someone should really test this with the Nvidia Quadros that are officially supported for use in VMs. (And of course, there are the hacks to convert older GeForces to Quadros..)

Anyway, here's a quick and most likely incomplete (for most users) walkthrough for getting GPU passthrough working in a Win 8.1 VM. (It works identically on Win 7.)

Enclosed are the VM configuration file and an HCL file with information about my hardware setup (feel free to add this to the HW compatibility list!).

TUTORIAL

  • Check which PCI addresses correspond to your GPU (and optionally, USB host) with lspci.
Here's mine:
# lspci
03:00.0 VGA compatible controller: Advanced Micro Devices, Inc. [AMD/ATI] Cayman XT [Radeon HD 6970]
03:00.1 Audio device: Advanced Micro Devices, Inc. [AMD/ATI] Cayman/Antilles HDMI Audio [Radeon HD 6900 Series]

Note that you have to pass through both of these devices if your GPU similarly exposes two functions (video plus HDMI audio).
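
For reference, this is how both functions will later appear on the device list in the VM configuration file (a sketch only; substitute your own addresses):

pci = [ '03:00.0', '03:00.1' ]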

  • Edit /etc/default/grub and add the following options (change the PCI addresses if needed):
GRUB_CMDLINE_LINUX="... rd.qubes.hide_pci=03:00.0,03:00.1 modprobe=xen-pciback.passthrough=1 xen-pciback.permissive"
GRUB_CMDLINE_XEN_DEFAULT="... dom0_mem=min:1024M dom0_mem=max:4096M"


For extra logging:
GRUB_CMDLINE_XEN_DEFAULT="... apic_verbosity=debug loglvl=all guest_loglvl=all iommu=verbose"


There are many other options available, but I didn't see any difference in success rate. See here:
http://xenbits.xen.org/docs/unstable/misc/xen-command-line.html
http://wiki.xenproject.org/wiki/Xen_PCI_Passthrough
http://wiki.xenproject.org/wiki/XenVGAPassthrough

  • Update grub:
# grub2-mkconfig -o /boot/grub2/grub.cfg

  • Reboot. Check that VT-d is enabled:
# xl dmesg
...
(XEN) Intel VT-d iommu 0 supported page sizes: 4kB, 2MB, 1GB.
(XEN) Intel VT-d iommu 1 supported page sizes: 4kB, 2MB, 1GB.
(XEN) Intel VT-d Snoop Control not enabled.
(XEN) Intel VT-d Dom0 DMA Passthrough not enabled.
(XEN) Intel VT-d Queued Invalidation enabled.
(XEN) Intel VT-d Interrupt Remapping enabled.
(XEN) Intel VT-d Shared EPT tables enabled.
(XEN) I/O virtualisation enabled
(XEN)  - Dom0 mode: Relaxed

  • Check that pci devices are available to be passed:
# xl pci-assignable list
0000:03:00.0
0000:03:00.1
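
If a device is missing from that list, check which kernel driver has claimed it (it should show up as pciback or xen-pciback if the hiding worked), and you can usually seize it at runtime; a sketch, assuming the addresses above:

# lspci -k -s 03:00.0 | grep 'in use'
# xl pci-assignable-add 03:00.0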

  • Create disk images:
# dd if=/dev/zero of=win8.img bs=1M count=30000
# dd if=/dev/zero of=win8-user.img bs=1M count=30000
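
(If you'd rather not write out the full ~30 GB up front, a sparse file should work just as well; untested on my part:

# truncate -s 30G win8.img
)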

  • Install VNC server into Dom0
# qubes-dom0-update vnc

  • Modify the win8.hvm:
    •  Check that the disk image and Windows installation CD-ROM image paths are correct, and that the IP address does not conflict with any other VM (I haven't yet figured out how to set up DHCP)
    •  Check that 'pci = [ .... ]' is commented for now
  • Start the VM (the -V option automatically runs a VNC client)
# xl create win8.hvm -V


If you happen to close the client (while the VM is still running), start it again with
# xl vncviewer win8

Note that I only had success starting the VM as root. Also, killing the VM with 'xl destroy win8' when not done as root would leave the qemu process lingering (if that occurs, you have to kill that process manually).
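
For reference, the attached win8.hvm is roughly along these lines (a from-memory sketch, not the attached file verbatim; paths and especially the vif line are guesses to adapt):

builder = "hvm"
name = "win8"
memory = 4096
vcpus = 4

# Disk images created above, plus the Windows installer ISO
disk = [ 'file:/path/to/win8.img,hda,w',
         'file:/path/to/win8-user.img,hdb,w',
         'file:/path/to/win8-install.iso,hdc:cdrom,r' ]

# Networking via the Qubes firewall VM; this line in particular is guesswork
vif = [ 'backend=sys-firewall,script=vif-route-qubes,ip=10.137.2.50' ]

# qemu runs in dom0, as discussed later in this thread
device_model_version = "qemu-xen-traditional"
vnc = 1

# Leave this commented out until Windows and the PV drivers are installed:
#pci = [ '03:00.0', '03:00.1' ]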
  • Install Windows
  • Partition the user image using 'Disk Manager'
  • Download signed paravirtualized drivers here (Qubes PV drivers work only in Win 7):
http://apt.univention.de/download/addons/gplpv-drivers/gplpv_Vista2008x64_signed_0.11.0.373.msi
Don't mind the name, it works on Win 8.1 as well.
For more info: http://wiki.univention.com/index.php?title=Installing-signed-GPLPV-drivers

  • Move the drivers inside user image partition (shut down VM first):
# losetup                                (List loop devices in use; pick a free one)
# losetup -P /dev/loop10 win8-user.img   (Set up the loop device and scan partitions, assuming loop10 is free)
# mount /dev/loop10p1 /mnt/removable     (Mount the first partition)
- copy the driver there and unmount.
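
Roughly, assuming the driver MSI is in the current directory and loop10 from above:

# cp gplpv_Vista2008x64_signed_0.11.0.373.msi /mnt/removable/
# umount /mnt/removable
# losetup -d /dev/loop10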

  • Reboot VM, install paravirtual drivers and reboot again
  • Create this script inside sys-firewall (but check that the sys-net VM IP address 10.137.1.1 is correct):
fwcfg.sh:
#!/bin/bash
vmip=$1

iptables -A FORWARD -s $vmip -p udp -d 10.137.1.1   --dport 53 -j ACCEPT
iptables -A FORWARD -s $vmip -p udp -d 10.137.1.254 --dport 53 -j ACCEPT
iptables -A FORWARD -s $vmip -p icmp -j ACCEPT
iptables -A FORWARD -s $vmip -p tcp -d 10.137.255.254 --dport 8082 -j DROP
iptables -A FORWARD -s $vmip -j ACCEPT

Then set up the iptables rules:
# sudo ./fwcfg.sh 10.137.2.50   # substitute the win8.1 VM IP address

Note that this has to be done manually EVERY TIME the VM is (re)started, because a new virtual interface is created and the old one is scrapped. If someone knows how to automate this, I'm all ears :) (one untested idea is sketched below)
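
One untested idea: start the VM from dom0 with a small wrapper that re-runs fwcfg.sh in sys-firewall afterwards. qvm-run exists in dom0; the timing and paths here are guesses:

#!/bin/bash
# dom0 wrapper: start the HVM, then reapply the forwarding rules in sys-firewall
xl create /path/to/win8.hvm
sleep 5    # crude: wait for the new vif to appear
qvm-run sys-firewall 'sudo /home/user/fwcfg.sh 10.137.2.50'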

  • Configure VM networking
Inside Windows, manually set up the IP, netmask and gateway (10.137.2.50, 255.255.255.0, 10.137.2.1) as well as DNS (10.137.1.1) for the 'Xen Net' interface.
If routing does not work properly at this point, try disabling the other (Realtek) network interface in Windows.
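
(The same can be done from an elevated command prompt inside Windows; a sketch, assuming the interface really shows up under the name 'Xen Net':

netsh interface ip set address name="Xen Net" static 10.137.2.50 255.255.255.0 10.137.2.1
netsh interface ip set dns name="Xen Net" static 10.137.1.1
)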

  • Uncomment the devices-to-be-passed list ( pci = [ ... ] ) in win8.hvm
  • Download the GPU drivers (ATI Catalyst 15.7.1 for Win 8.1 worked for me for Radeon 6950)
  • Launch the installer but close it after it has unzipped drivers to C:\ATI
  • Install the driver manually via Device Manager ( Update driver -> Browse )
  • Cross your fingers and hope for the best!
  • Enjoy a beer :)

 ---------

If these instructions don't work for you, you could try the following things:
  • Enable permissive mode for the PCI device (see the links above)
  • The iommu=workaround_bios_bug boot option
  • Enabling/disabling options in the .hvm file: viridian, pae, hpet, acpi, apic, pci_msitranslate, pci_power_mgmt, xen_platform_pci (see the example below)
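
All of these go directly in the .hvm file; a sketch of the syntax, using two options that turn up again later in this thread:

viridian = 0
pci_msitranslate = 0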

If you still don't get passthrough working, make sure that it is even possible with your current hardware. Most of the modern (<3 years old) working GPU PT installations seem to be using KVM (I even got my grumpy NVidia GTX 980 functional!), so you should at least try creating a bare-metal Arch Linux installation and then following the instructions here: https://bufferoverflow.io/gpu-passthrough/
or Arch wiki entry here: https://wiki.archlinux.org/index.php/PCI_passthrough_via_OVMF
or a series of tutorials here: http://vfio.blogspot.se/2015/05/vfio-gpu-how-to-series-part-1-hardware.html

Most of the instructions are KVM specific, but there's a lot of great non-hypervisor-specific information there as well, especially in the latter blog. Note that all the info about VFIO and IOMMU groups can be misleading, since they are KVM-specific functionality and not part of Xen (don't ask me how much time I spent figuring out why I couldn't find the IOMMU group entries in /sys/bus/pci/ under Qubes...)
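
(On a bare-metal KVM test box the groups are plain to see in sysfs; this is the usual way to list them:

# find /sys/kernel/iommu_groups/ -type l

Under Xen/Qubes those entries simply aren't there, which is expected.)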

One thing about FLReset (Function Level Reset): there's a quite general misconception that FLR is a requirement for GPU passthrough, but this isn't true. As a matter of fact, not even the NVidia Quadros have FLReset+ in their PCI DevCap, and not many non-GPU PCI devices do either. So even though the how-to here (http://wiki.xen.org/wiki/VTd_HowTo) states otherwise, a missing FLR capability does not necessarily mean that the device can't be used in a VM; it may only make it harder for the device to survive a DomU reboot. I've seen in my tests that both Win 7 and Win 8 VMs can in fact be booted several times without rebooting Dom0 (but hopping BETWEEN the two Windows versions will result in either a BSOD or Code 43). But again, this may vary a lot with GPU models and driver versions. Anyway, if you see this message during VM startup:
libxl: error: ....  The kernel doesn't support reset from sysfs for PCI device ...
... you can safely ignore it.
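
(If you want to check the capability bit on your own card, it shows up as FLReset+ or FLReset- on the DevCap line:

# lspci -vv -s 03:00.0 | grep FLReset
)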

Happy hacking!

Best regards,
Marcus

Qubes-HCL-ASRock_X99_WS-20160622-011739.yml
win8.hvm

Ilpo Järvinen

Jun 22, 2016, 4:33:50 PM
to Marcus at WetwareLabs, qubes-users
Great to hear you got it working! I've done some googling related to the
techniques you mention below, and I want to share some thoughts /
information related to them.

On Wed, 22 Jun 2016, Marcus at WetwareLabs wrote:

> If you still don't get passthrough working, make sure that it is even
> possible with you current hardware. Most of the modern (<3 years old)
> working GPU PT installations seem to using KVM (I got even my grumpy NVidia
> GTX 980 functional!), so you should at least try creating bare-metal Arch
> Linux installion and then following instructions here:
> https://bufferoverflow.io/gpu-passthrough/
> or Arch wiki entry here:
> https://wiki.archlinux.org/index.php/PCI_passthrough_via_OVMF
> or a series of tutorials here: http://vfio.blogspot.se/2015/05/vfio-gpu-how-to-series-part-1-hardware.html
>
>
> Most of the instructions are KVM specific, but there's lot of great
> non-hypervisor specific information there as well, especially in the latter
> blog. Note that all the info about VFIO and IOMMU groups can be misleading
> since they are KVM specific functionality and not part of Xen (don't ask me
> how much time I spent time figuring out why I can't seem to find IOMMU group
> entries in /sys/bus/pci/ under Qubes...)

This contradicts what I've understood about PCI ACS functionality.

IOMMU groups may be named differently under Xen or may not exist (I don't
know; it's news to me that they don't exist), but a lack of PCI ACS
functionality is still a HW thing, and according to my understanding the
same limit on isolation applies regardless of hypervisor. ACS support
relates to how well, that is, how fine-grained, those "IOMMU groups" are
partitioned. Each distinct group indicates a boundary where the IOMMU is
truly able to separate PCIe devices, and the groups are based on a HW
limitation, not on a hypervisor feature. Unfortunately, mostly high-end
server platforms have true support for ACS (some consumer-oriented ones
support it only unofficially; see drivers/pci/quirks.c for the most
current known-to-support list).

Lack of ACS may not be a big deal to many. But it may limit isolation in
some cases, most notably having storage on PCIe-slot-connected SSDs
alongside GPU passthrough. And passing through more than a single GPU to
different VMs might have some isolation-related hazards too, because of
the usual PCIe slot arrangement. But one likely needs deep pockets to have
such arrangements anyway, so going to a server or high-end platform may be
less of an issue to begin with :-).
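
(Whether a device or root port advertises real ACS can be checked with lspci; the capability shows up by name in the verbose listing, e.g. for one of the root ports:

# lspci -vvv -s 00:1c.0 | grep -A2 'Access Control'
)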

> One thing about FLReset (Function Level Reset): There's quite general
> misconception about FLR being a requirement in order to do GPU passthrough,
> but this isn't true. As a matter of fact, not even the NVidia Quadros have
> FLR+ in PCI DevCaps, and not many non-GPU PCI devices do either. So even
> though the how-to here (http://wiki.xen.org/wiki/VTd_HowTo) states
> otherwise, the missing FLR capability will not necessarily mean that device
> can't be used in VM, but could only make it harder to survive DomU boot.
> I've seen in my tests that both Win 7 and Win8 VMs can be in fact booted
> several times without a requirement to boot Dom0 (but hopping BETWEEN the
> two Windows versions will result in either BSOD or Code 43). But again, this
> may wary a lot with GPU models and driver versions. But anyway, if you see
> this message during VM startup:
> lbxl: error: ....  The kernel doesn't support reset from sysfs for PCI
> device ...
> ... you can safely ignore it

FLR is "needed" for resetting the device "safely" (after first init, if a
reset is needed), not for the passthrough itself. But there seem to be some
other ways which can usually result in a good-enough reset and don't depend
on FLR support. I've not yet come across any indication that there would be
_any_ GPU that would even claim to support FLR (whether that would actually
work is yet another big question mark :-)). As you have noted, the issues
seem to occur more frequently when trying to reassign the PCI device to
another VM, which has practical implications only for a subset of usage
scenarios. But also rebooting a VM may obviously fail due to a
too-incomplete reset of the PCI device state.

And this same reset limitation applies to non-GPU devices too, USB
controllers being the most important I can immediately think of. Luckily
Qubes 3.2, with support for USB passthrough, will make this less of an
issue. Again, the FLR support of other devices seems better with
server/high-end platforms.


--
i.

'01938'019384'091843'091843'09183'04918'029348'019

Jun 23, 2016, 3:07:31 PM
to qubes-users
Hello,

wow cool.


Would this mean I can in some way (with extra manual work) use the full GPU power in a Windows VM or a Linux VM, without security issues for the whole Qubes OS system?
(Or should I first use this setup on a separate machine or some Qubes-Qubes Dual boot machine).

Kind Regards

Marek Marczykowski-Górecki

Jun 23, 2016, 3:17:59 PM
to '01938'019384'091843'091843'09183'04918'029348'019, qubes-users
I haven't reviewed the instruction details, but it most likely involves
running the qemu process in dom0, which is a huge security drawback for the
whole system.

--
Best Regards,
Marek Marczykowski-Górecki
Invisible Things Lab

Marcus at WetwareLabs

Jun 23, 2016, 6:04:14 PM
to qubes-users, mar...@wetwa.re, ilpo.j...@helsinki.fi

Hi, Ilpo!

And thanks for chiming in.

Yes, you're right about ACS being a hardware capability. What I've understood is that IOMMU groups and VFIO are software features (developed by the guys at Red Hat specifically for KVM) in the kernel / hypervisor that in turn use ACS (but please correct me if I'm wrong). On Arch Linux / KVM I checked that the GPU was alone (together with the combined sound device) in its own IOMMU group, so passing those two together should be safe (safe as in "no accidental memory access violations via peer-to-peer transactions"). However, I'm not sure how this (conforming to the restrictions imposed by IOMMU groups while passing through) translates into isolation in Xen. Is ACS turned on by default, and is the isolation as good as with KVM and its IOMMU groups?

In my setup I can see these log entries in the messages:
pci 0000:00:1c.0: Intel PCH root port ACS workaround enabled
pci 0000:00:1c.3: Intel PCH root port ACS workaround enabled

Those devices are the X99 series chipset PCI Express root ports.

And in linux/drivers/pci/quirks.c there's an entry for X99 as well (along with a few other Intel chipsets):
/*
 * Many Intel PCH root ports do provide ACS-like features to disable peer
 * transactions and validate bus numbers in requests, but do not provide an
 * actual PCIe ACS capability.  This is the list of device IDs known to fall
 * into that category as provided by Intel in Red Hat bugzilla 1037684.
 */

This relates to this patch
https://patchwork.kernel.org/patch/6312441/

So I guess (for X99) this should be supported starting from Linux 4.0 onwards. But I'm not certain how well this is actually enforced. I should try to pass through a device belonging to a group that has other PCI devices as well and see if it's denied.


> Lack of ACS may not be a big deal to many. But it may limit isolation in
> some cases, most notably having storage on PCIe slot connected SSDs and
> GPU passthrough. And passing through more than a single GPU to different
> VMs might have some isolation related hazards too because of the usual
> PCIe slot arrangement. But one likely needs deep pockets to have such
> arrangements anyway, so going to server or high-end platform may be less
> of a issue to begin with :-).

> One thing about FLReset (Function Level Reset): There's quite general
> misconception about FLR being a requirement in order to do GPU passthrough,
> but this isn't true. As a matter of fact, not even the NVidia Quadros have
> FLR+ in PCI DevCaps, and not many non-GPU PCI devices do either. So even
> though the how-to here (http://wiki.xen.org/wiki/VTd_HowTo) states
> otherwise, the missing FLR capability will not necessarily mean that device
> can't be used in VM, but could only make it harder to survive DomU boot.
> I've seen in my tests that both Win 7 and Win8 VMs can be in fact booted
> several times without a requirement to boot Dom0 (but hopping BETWEEN the
> two Windows versions will result in either BSOD or Code 43). But again, this
> may wary a lot with GPU models and driver versions. But anyway, if you see
> this message during VM startup:
> lbxl: error: ....  The kernel doesn't support reset from sysfs for PCI
> device ...
> ... you can safely ignore it

> FLR is "needed" for resetting the device "safely" (after first init, if a
> reset is needed), not for the passthrough.

Yes, you're absolutely right. My critique was only targeted at this entry in the how-to above:
"Passing through a PCI card without FLR capability will result an error"
I guess that was valid in Xen 4.1 but obviously is not anymore :)

Marcus at WetwareLabs

Jun 23, 2016, 6:34:04 PM
to qubes-users, kerste...@gmail.com

Hi Marek,

you're right, it's using qemu-xen-traditional and qemu is running in dom0, so inherently it's more exposed than running VMs with a stub domain.

In the end, a rigorous risk-vs-benefit analysis should be done on which programs are allowed to run there. Personally, I use it only for those few applications that I really need (Office, Visual Studio, Atmel Studio, Diptrace) and deem "safe". Networking is also disabled by default. Another Windows VM (without GPU passthrough) is running in a stubdom, to be used for those occasional needs to try out miscellaneous less-trusted programs that need an internet connection.

Continuing on this matter, what is your personal opinion about the security of the following scenarios:
- VM running in Dom0 (on Xen)
- VM running in Dom0 (on KVM) (I assume this is the default case, or does KVM have its own version of stubdom?)
- Dual booting Qubes and Windows, without AEM

BTW, I saw you found the culprit for PCI passthrough not working in stubdom! (https://github.com/QubesOS/qubes-issues/issues/1659)  Congrats!  Finally we may be getting closer to getting Qubes both secure AND usable for the larger masses :)

Best regards,
Marcus

Marcus at WetwareLabs

Jun 26, 2016, 5:24:58 PM
to qubes-users
Follow-up to the PCI reset problem:

There's a patch that was not accepted into the official Xen branch (it was a bit of a hack), but it works for me, and according to the xen-devel list a few others have been using it successfully as well.

It exposes a 'do_flr' node in the /sys filesystem, which libxl automatically uses to reset a PCI device when it is added to a VM. I managed to patch Linux kernel 4.1.24 with only minor modifications, and now I can pass the GPU back and forth between the Win 7 and Win 8.1 VMs without BSODs or Dom0 reboots.  :)

The patch is here for those interested: http://lists.xenproject.org/archives/html/xen-devel/2014-12/msg00459.html

(I know this is more suited for qubes-devel, but as this patch has a pretty low probability of being incorporated officially into Qubes, it would probably be used mostly by users prone to bleeding-edge experimentation..)
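
(A quick sanity check that the patched kernel took effect is to look for the new sysfs node on the assignable device; a hypothetical path, assuming my addresses:

# ls /sys/bus/pci/devices/0000:03:00.0/do_flr
)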

Marcus at WetwareLabs

Jul 9, 2016, 10:55:51 AM
to qubes-users
I've continued experimenting with GTX 980 passthrough on Arch Linux. I noticed that xf86-video-nouveau does NOT in fact have Maxwell support. One would think otherwise looking at the Feature Matrix here: https://nouveau.freedesktop.org/wiki/FeatureMatrix/
NV110 is the Maxwell family (including the GTX 980). But the modesetting driver can be used instead, so *I finally got GTX 980 PT working in Arch Linux*:

Add this file to /etc/X11/xorg.conf.d/20-nouveau.conf
```
Section "Device"
Identifier "NVidia Card"
Driver "modesetting"
BusID "PCI:0:5:0"
EndSection
```

Note that the PCI address is the address that the GPU has inside the VM (use lspci IN the VM to find it out). Also, "pci_msitranslate=0" has to be set in the VM configuration, otherwise the VM will hang when X is started.

This was tested with Arch Linux (up to date as of 8.7.2016), Linux 4.6.3-1-ARCH, and X server 1.18.3 with the modesetting driver.

-----

Ok, now that it's proven that newer Nvidia cards CAN in fact be passed through in Xen, I tried the official NVidia binary driver, but it failed with the error message "The NVIDIA GPU at PCI:0:5:0 is not supported by the 367.27 NVIDIA driver".

I think that's the proprietary driver refusing to work when it detects that it's running under a hypervisor (the Code 43 issue in Windows). Since KVM has for a while supported hiding both the "KVMKVMKVMKVM" signature (with the "-cpu kvm=off" flag) as well as the Viridian hypervisor signature (the "-cpu hv_vendor_id=..." flag), and currently there's no such functionality in Xen, I patched it in quite a similar way to what Alex Williamson did for KVM.

Attached is a patch for Xen 4.6.1 that spoofs the Xen signature ("XenVMMXenVMM" to "ZenZenZenZen") and the Viridian signature ("Microsoft Hv" to "Wetware Labs") when "spoof_xen=1" and "spoof_viridian=1" are added to the VM configuration file.

The signatures are hard-coded, and currently there's no way to modify them (beyond re-compiling Xen), since HVMLoader also uses a hard-coded string to detect Xen and there's (understandably) no API to change that signature at run time.

WARNING! In case you try the patch, you MUST also re-compile and install the core-libvirt package (in addition to vmm-xen). Otherwise starting all DomUs will fail! You have been warned :)
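
Usage, per the patch, is just two extra lines in the .hvm file:

spoof_xen = 1
spoof_viridian = 1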

-----------------

With this patch, the *NVidia binary driver (version 367.27) works also on Arch Linux* :)

However, this was not enough for Windows: the Windows 7 and 8.1 VMs (driver version 368.39) still announce Code 43 :(

I would love it if others could test this as well. Maybe the Windows driver uses some other functionality to check for a hypervisor, or maybe it's not a spoofing issue at all.

More investigation coming in..


Marcus at WetwareLabs

Jul 9, 2016, 10:57:42 AM
to qubes-users
Here's the patch.
spoof-xen-viridian.patch

Marcus at WetwareLabs

Jul 9, 2016, 11:06:43 AM
to qubes-users
On Saturday, July 9, 2016 at 5:57:42 PM UTC+3, Marcus at WetwareLabs wrote:
> Here's the patch.

Forgot to add: if spoofing is turned on for an already-installed Windows VM, there will be a BSOD during boot (Windows really doesn't like it if the hypervisor suddenly disappears..). Re-installing Windows (with spoofing on) fixes this (maybe repairing the installation with a rescue CD could work as well, but I did not test that).

Marcus at WetwareLabs

Jul 14, 2016, 4:47:59 PM
to qubes-users
Some more experimentation with the GTX 980:

- Tried Core2Duo CPUID from KVM VM
- Ported NoSnoop patch from KVM

Sadly, neither of these would help with BSODs / Code 43 errors.

I posted the results (with patches and more detailed information) on Xen-devel (https://lists.xenproject.org/archives/html/xen-devel/2016-07/msg01713.html). I hope the experts there might have more suggestions.

voxl...@gmail.com

Jul 17, 2016, 2:31:20 PM
to qubes-users

I'm guessing that it has to do with the Nvidia-specific quirks, where the PCI BARs are used to access PCI config space and other BARs. See: https://github.com/qemu/qemu/blob/master/hw/vfio/pci-quirks.c

I think others previously worked around it somewhat by reserving the memory and having 1:1 mappings ( http://www.davidgis.fr/blog/index.php?2011/12/07/860-xen-42unstable-patches-for-vga-pass-through ), but that isn't really a proper solution.

tom...@gmail.com

Aug 1, 2016, 3:24:32 PM
to qubes-users
Hi Marcus,

I'm a bit confused by this:


> Edit /etc/default/grub and add following options (change the pci address if needed)

Which version of Qubes is this? Ain't 3.1 EFI-only?
And in the EFI case, aren't kernel args supposed to be passed via /boot/efi/EFI/qubes (kernel=)?

regards,
Tom

Marcus at WetwareLabs

Aug 4, 2016, 5:26:14 PM
to qubes-users, tom...@gmail.com

Hi Tom,

I use 3.1 and 3.2 rc2. Actually, I hadn't thought about this before. It seems that on my system the default state is 'BIOS compatibility mode', even though it's a new motherboard running UEFI firmware. As for the partition table type on my SSD, it has always been a 'dos' type MBR, and that was never converted to GPT by the Qubes installer.

I'm not familiar with configuring an EFI-type bootloader, but it seems editing /boot/efi/EFI/qubes/xen.cfg should work (see the sketch below). There's lots of discussion about it here: https://github.com/QubesOS/qubes-issues/issues/794
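
A sketch of what that might look like (untested on my part, and the exact layout may differ between Qubes releases): in /boot/efi/EFI/qubes/xen.cfg the Xen options go on the options= line and the dom0 kernel args on the kernel= line:

options=loglvl=all dom0_mem=min:1024M dom0_mem=max:4096M
kernel=vmlinuz-... rd.qubes.hide_pci=03:00.0,03:00.1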

emergenc...@gmail.com

Sep 27, 2016, 3:41:06 PM
to qubes-users
On Wednesday, June 22, 2016 at 8:26:50 AM UTC-7, Marcus at WetwareLabs wrote:
> [full tutorial quoted above; snipped]

Hello,

I am fairly new to Qubes, and have been trying to get PCI passthrough to work on my system. Your instructions are very good for understanding the overall approach, but I don't quite know where to copy the .hvm file in order for it to work. I also don't know how to invoke it once it is in its special place. Can I please have some guidance on that part?

Thanks,
Emex

tom...@gmail.com

Nov 24, 2016, 1:48:23 AM
to qubes-users, tom...@gmail.com
So, after Marek's fix here, https://github.com/QubesOS/qubes-issues/issues/1659
is it true that I can expect this from it:
- HVM passthrough working using a stub domain via xl?
(following your guide above, excluding 'qemu-xen-traditional')
And not:
- HVM passthrough working via a VM created with Qubes Manager and started with it / qvm-start?

regards,
tom

Marek Marczykowski-Górecki

Nov 24, 2016, 8:51:14 AM
to tom...@gmail.com, qubes-users

Actually, generic PCI passthrough should just work in both cases now.
Don't know if GPU passthrough is any special here, but I wouldn't be
surprised if it is...

--
Best Regards,
Marek Marczykowski-Górecki
Invisible Things Lab

Grzesiek Chodzicki

Nov 24, 2016, 2:08:33 PM
to qubes-users, tom...@gmail.com
I've tried passing through a USB controller to my Windows HVM. Despite setting pci_strictreset to false, qvm-start still fails with: libvirt.libvirtError: internal error: libxenlight failed to create new domain 'windows-7'
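
(For reference, in Qubes 3.2 that flag is a per-VM preference set from dom0; syntax from memory, so double-check against qvm-prefs --help:

qvm-prefs windows-7 -s pci_strictreset false
)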

Marek Marczykowski-Górecki

Nov 24, 2016, 5:48:56 PM
to Grzesiek Chodzicki, qubes-users, tom...@gmail.com

On Thu, Nov 24, 2016 at 11:08:33AM -0800, Grzesiek Chodzicki wrote:
> On Thursday, November 24, 2016 at 2:51:14 PM UTC+1, Marek Marczykowski-Górecki wrote:
> > On Wed, Nov 23, 2016 at 10:48:23PM -0800, tom...@gmail.com wrote:
> > > So, after Marek's fix here, https://github.com/QubesOS/qubes-issues/issues/1659
> > > is it true that I can expect this from it:
> > > - HVM passthrough working using stub domain via xl ?
> > > (following your guide above, exlcuding 'qemu-xen-traditional')
> > > And not:
> > > - HVM passthrough working via VM created with Qubes manager and started with it / qvm-start ?
> >
> > Actually, generic PCI passthrough should just work in both cases now.
> > Don't know if GPU passthrough is any special here, but I wouldn't be
> > surprised if it is...
> >
>
> I've tried passing through a USB controller to my windows hvm. Despite setting the pci_strictreset to false, qvm-start still fails with libvirt.libvirtError: internal error: libxenlight failed to create new domain 'windows-7'

Do you have VT-d (aka IOMMU) supported and enabled? You can check for
details in /var/log/libvirt/libxl/libxl-driver.log.

--
Best Regards,
Marek Marczykowski-Górecki
Invisible Things Lab

Jean-Philippe Ouellet

Nov 24, 2016, 6:13:13 PM
to Marek Marczykowski-Górecki, tom...@gmail.com, qubes-users
On Thu, Nov 24, 2016 at 8:51 AM, Marek Marczykowski-Górecki
<marm...@invisiblethingslab.com> wrote:
> Actually, generic PCI passthrough should just work in both cases now.
> Don't know if GPU passthrough is any special here, but I wouldn't be
> surprised if it is...

At least for Intel integrated graphics, I can confirm that it definitely is.

The relevant drivers poke at PCI config space registers of other PCI
devices besides the GPU itself, and expect them to be the actual hardware
with the intended physical-world side effects, not qemu.

It is currently making my effort to get hardware-accelerated graphics
in non-dom0 difficult :(

Grzesiek Chodzicki

Nov 25, 2016, 1:31:00 PM
to qubes-users, grzegorz....@gmail.com, tom...@gmail.com
I have both VT-x and VT-d enabled in my BIOS. According to xl dmesg, the following technologies are enabled: IOMMU, Queued Invalidation, Interrupt Remapping and Shared EPT tables. Snoop Control and Dom0 DMA Passthrough are not enabled.
I went through libxl-driver.log, but the only thing that stood out was the multiple warnings that the USB controller does not support reset from sysfs.

cela...@gmail.com

Dec 18, 2016, 1:08:35 PM
to qubes-users
On Wednesday, June 22, 2016 at 11:26:50 AM UTC-4, Marcus at WetwareLabs wrote:
> [full tutorial quoted above; snipped]

Thank you Marcus, I made it work on my Qubes 3.2 install following your instructions.

GPU: ASUS Radeon 480 4GB

Sound works with the Radeon HDMI sound device, but not with the ASUS XONAR DG PCI device that I've been trying to pass through. Although the device is recognized and the driver installs, Windows cannot start the device.

Also, Windows boots well the first time after a Dom0 boot, but as soon as the Windows VM is shut down (whether gracefully or by a crash), it will invariably crash with a BSOD on the next start. It won't boot again.
It seems, though, that adding both the Radeon GPU and the HDMI sound device to another Qubes VM, then starting and shutting down that VM, will "release" the devices, which allows the Windows VM to start again without a BSOD.

Now, the next step would be to control the startup and shutdown of the Windows VM via the Qubes VM Manager. I have tried to translate the config file into a libvirt XML one with virsh, but with no success.
I'm guessing it's because of the use of qemu-xen-traditional (which I hear is not so secure), which libvirt doesn't seem to allow.
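
(For anyone attempting the same conversion: the virsh sub-command for it is domxml-from-native, with 'xen-xl' as the native format, provided your libvirt build is new enough to support it; a sketch:

# virsh -c xen:/// domxml-from-native xen-xl win8.hvm > win8.xml
)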

square...@gmail.com

Apr 8, 2017, 10:57:18 AM
to qubes-users
Did anyone have any luck with GPU passthrough on gaming laptops with a discrete GPU and an iGPU?

On Wednesday, June 22, 2016 at 5:26:50 PM UTC+2, Marcus at WetwareLabs wrote:
> [full tutorial quoted above; snipped]

lemond...@gmail.com

Feb 13, 2018, 4:12:52 PM
to qubes-users
On Wednesday, June 22, 2016 at 8:26:50 AM UTC-7, Marcus at WetwareLabs wrote:
> [full tutorial quoted above; snipped]

I realize this is 2 years old, but I was wondering if you've had more recent successes with GPU passthrough on subsequent releases of Qubes using more current GPUs? I read somewhere that the reason for no official GPU passthrough support in Qubes thus far is security concerns should the firmware become compromised in a given VM. But I also read that one way around this is to use a dedicated discrete graphics card for GPU passthrough only, and use integrated graphics (or maybe a separate graphics card) for all other VMs in Qubes. That seems to keep any vulnerability isolated to the VM using the GPU passthrough card, so if you are just doing gaming or video editing in that VM, it shouldn't be a big deal if it becomes compromised. You could just go back to an earlier snapshot, right?

So can this solution work currently, or do we need to wait for a later release of Qubes for some reason?

Curious to know what your latest findings are - 2 years later.