SUCCESS report: Qubes R3-rc2 + GPU passthrough up and running!


XO...@riseup.net

Aug 12, 2015, 5:55:10 AM
to qubes...@googlegroups.com
After several weeks of research and dedicated work, I am happy to report that I successfully have (secondary) GPU passthrough working on Qubes R3-rc2 x86_64 with a Windows 7 x64 HVM. To try and keep this post short, I have attached a copy of my setup for those interested in the configs I am using. I've only just got it up and running recently; so far my testing has consisted of playing a few video games: 'Her Story' (not very demanding), 'Borderlands 2', and 'Metal Gear Rising: Revengeance'. I am a VERY HAPPY Qubes user on this day. I want to thank the Qubes devs for all the work you guys/gals do!
ATTACHED FILES:
grub = /etc/default/grub
win7.hvm = /etc/xen/win7.hvm
rc.local = /etc/rc.local
If the person(s) in charge wouldn't mind updating the Qubes HCL, here is my hardware setup:
MOTHERBOARD: MSI 890FXA-GD70, currently running V1.8 BIOS (A7640AMS.180 - 2011-02-24)
CPU: AMD Athlon II X2 270 Regor Dual-Core 3.4GHz Socket AM3
POWER SUPPLY: CORSAIR Professional Series Gold AX1200 - 1200W
MEMORY: 16GB total (4x 4GB sticks), G.SKILL Sniper 4GB DDR3 SDRAM DDR3 1600 (PC3 12800)
HARD DRIVE(s): 2x Seagate Barracuda Green ST1500DL003 1.5TB 5900 RPM
VIDEO CARD(s): 2x XFX Radeon HD 6970 2GB 256-Bit GDDR5 PCI Express 2.1 x16
Truth be told, the "type" of GPU passthrough I have working in Qubes R3-rc2 uses libxl, NOT libvirt; this is key to GPU passthrough functionality at this point in Qubes development (for my hardware). I have tried extensively to get GPU passthrough working with native libvirt tools and processes using the standard Qubes HVM creation procedures, with no luck. Namely (a consolidated sketch of these dom0 commands follows the list):
  • 'qvm-create win7hvm --hvm --label green' to build the Win7 HVM
  • install the QTW 'qubes-windows-tools' (I've tried both QTW versions for R3-rc# using 'sudo qubes-dom0-update --enablerepo=qubes-dom0-unstable qubes-windows-tools' OR 'sudo qubes-dom0-update --enablerepo=qubes-dom0-current-testing qubes-windows-tools')
    • starting the Win7 HVM with 'qvm-start win7 --cdrom=/dev/cdrom', which does install the QTW properly
  • setup pci-passthrough of the (secondary) GPU using 'qvm-pci'
  • futzing around extensively with every option known to (wo)man within 'virsh edit win7hvm' and 'qvm-prefs'
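Consolidated, the dom0 commands for that (libvirt-based) route look roughly like this — just a sketch, where 04:00.0/04:00.1 are my secondary GPU and its HDMI audio function:

qvm-create win7hvm --hvm --label green
sudo qubes-dom0-update --enablerepo=qubes-dom0-unstable qubes-windows-tools
qvm-start win7hvm --cdrom=/dev/cdrom     # boot with the tools CD attached (installs QTW)
qvm-pci -a win7hvm 04:00.0               # secondary GPU
qvm-pci -a win7hvm 04:00.1               # its HDMI audio function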
In my experience the ultimate issue with GPU Passthrough using libvirt tools manifests itself once I try to install the official AMD GPU drivers in the Win7 x64 HVM; it always ends with the same BSOD during installation of the driver:

SYSTEM_SERVICE_EXCEPTION
    <<--- SNIP --->>
*** STOP: 0x0000003B (0x00000000c0000005, 0xFFFFF880051C49A8, 0xFFFFF8800210FB10, 0x0000000000000000)
*** atdcm64a.sys - Address FFFFF880051C49A8 base at FFFFF880051C2000, Datestamp 55a7048b
I have tried omitting the QTW install steps from above, but continue to get a BSOD during installation of the official AMD GPU driver. I've also tried installing older versions of the official AMD GPU drivers to no avail.

I have seen reports around the Internet that, in order to install the AMD GPU drivers properly in a Xen-based Win7 x64 HVM, you should extract the drivers to a temp folder (using 7-Zip), then install JUST the GPU driver itself, making sure NOT to install CCC (Catalyst Control Center); as I understand it, CCC is the cause of the BSOD. I would love to find a solution and get Qubes with libvirt working properly with a Win7 HVM + GPU passthrough.

My hope is that by posting to the qubes-users ML regarding my libvirt GPU-passthrough dilemma, someone may be able to help out with it. I am all ears and will happily try things out, as I would LOVE to get all the Qubes-related functionality from QTW (Qubes Tools for Windows) up and running in my Win7 GPU-passthrough-enabled HVM! Feel free to post ideas on that topic, and I will happily try them and report back results...

Also, feel free to ask questions if anyone wants further details on my Qubes libxl GPU-passthrough setup. Getting IOMMU up and running was the biggest hurdle, but once that was accomplished everything else pretty much fell into place.
grub
rc.local
win7.hvm

7v5w7go9ub0o

Aug 12, 2015, 8:25:47 AM
to qubes...@googlegroups.com


On 08/12/2015 05:54 AM, XO...@riseup.net wrote:
> After several weeks of research and dedicated work, I am happy to report
> that I successfully have (secondary) GPU-passthrough working on Qubes
> R3-rc2 x86_64 with a Windows 7 x64 HVM. To try and keep this post short,
> I have attached a copy of my setup for those interested in the configs I
> am using. I've only just got it up and running here recently, so far my
> testing has consisted of playing a few video games = 'Her Story' (not
> very demanding), 'Borderlands 2', and 'Metal Gear Rising Revengeance'.
> *I am a VERY HAPPY Qubes user on this day.* I want to thank the Qubes
> devs for all the work you guy/gals do!
>

<snip great stuff>

I can't even pretend to understand everything you/Qubes Team have done,
but likely this will have broad benefits!

THANK YOU!!! for researching and posting this!!

David Hobach

Aug 13, 2015, 12:25:58 PM
to XO...@riseup.net, qubes...@googlegroups.com


On 08/12/2015 11:54 AM, XO...@riseup.net wrote:
> After several weeks of research and dedicated work, I am happy to report
> that I successfully have (secondary) GPU-passthrough working on Qubes
> R3-rc2 x86_64 with a Windows 7 x64 HVM.
> <rest of the original post snipped>

Well, this really sounds awesome, as I have neither heard of many people
getting IOMMU to work nor heard of anyone who got GPU passthrough
working with Qubes. IOMMU support would make Qubes available on a
broader set of hardware, and GPU passthrough would remove the need to have
a dedicated box for GPU-intensive applications (incl. gaming).

I think neither is officially supported, is it?

Thanks for sharing!

Regarding libxl vs. libvirt for GPU passthrough, I honestly couldn't
directly spot in your attachments where exactly you are using libxl. I
can see a lot of logging in your rc.local, but I don't see you actually
assigning the GPU to a particular VM (not sure what bind_lib.bash does);
I'd be interested in some further details there.

nasuan...@gmail.com

Aug 13, 2015, 5:13:43 PM
to qubes-users, XO...@riseup.net
Looks cool, but be aware that by using device_model_version = 'qemu-xen-traditional' you run QEMU in dom0, and are vulnerable to the many QEMU exploits that are discovered quite regularly (which is why Qubes uses stubdoms for QEMU).
But anyway, if Xen VGA passthrough works with a stubdom in the future, that will be a great thing.

XO...@riseup.net

Aug 13, 2015, 5:52:13 PM
to David Hobach, qubes...@googlegroups.com
David,


On 08/13/2015 09:25 AM, David Hobach wrote:
Well, this really sounds awesome as I neither heard of many people getting IOMMU to work nor heard of anyone who got GPU passthrough working with Qubes. IOMMU support would make Qubes available on a broader set of hardware, GPU passthrough would remove the need to have some dedicated box for GPU-intensive applications (incl. gaming).

Agreed; it's quite slick. Xen has a whole section of the wiki dedicated to IOMMU and GPU passthrough; the pages are listed as VT-d and VGA-passthrough, respectively. Yes, I can imagine many people would enjoy taking advantage of GPU passthrough for gaming and GP-GPU (OpenCL, CUDA), as was my original intent.


I think neither is officially supported, is it?

Guess that depends; the Xen wiki says:
"Xen 4.0.0 is the first version to support VGA graphics adapter passthrough to Xen HVM (fully virtualized) guests. This means you can give HVM guest full and direct control of the graphics adapter, making it possible to have high performance full 3D and video acceleration in a virtual machine." - quote directly from the Xen wiki.
Qubes is running Xen version 4.4.2, but the Qubes User FAQ seems to state that they do not officially endorse "3d support":
Can I run applications, like games, which require 3D support?

Those won’t fly. We do not provide OpenGL virtualization for AppVMs. This is mostly a security decision, as implementing such a feature would most likely introduce a great deal of complexity into the GUI virtualization infrastructure. However, Qubes does allow for the use of accelerated graphics (OpenGL) in Dom0’s Window Manager, so all the fancy desktop effects should still work.

For further discussion about the potential for GPU passthrough on Xen/Qubes, please see the following threads:

    GPU passing to HVM
    Clarifications on GPU security

I am going to bet that FAQ was written prior to the Xen 4.4 inclusion in Qubes... Interestingly enough though, reading the first link referenced, there are two different qubes-devel members stating they did successfully get GPU passthrough working with (an) older version of Qubes (not many details though):
Radosław Szkodziński posted """
I've done it a long, long time ago with Radeon 7950 as a secondary card, but I had to start the VM using xm and not xl, manually. Otherwise the card wouldn't be properly reset on VM shutdown and any future attempt to start a Windows VM with it would cause a hardware hang, while Linux ones would oops.
"""

coderman posted """
i had success with this same setup, a pair of 6950's, with the second manually started via xm with specific configuration.

important: disable SLI!  i could not get this to work reliably when
SLI was linked up...

this was R1B1, i have not tried since.
"""
I'm thinking the ability for Qubes to run GPU passthrough has been available for some time, but "supported" is a different beast altogether... From reading the second link above from the User FAQ, it appears the Qubes developers haven't embraced official support due to security considerations.

Really, a person just needs Qubes R3, VT-d/IOMMU hardware support, time/patience getting that part sorted out (as there are a lot of buggy BIOSes out there! including mine), one or more GPUs, and the ability to follow instructions from any of the MANY great guides on the internet.

Thanks for sharing!

Sure thing!


Regarding the libxl vs libvirt for GPU passthrough I honestly couldn't directly spot in your attachments where exactly you are using libxl -
The libxl part is just using the xl toolstack with the 'win7.hvm' file I attached. As long as I boot with the GRUB Xen command line (attached to my previous post), it enables IOMMU and does the xen-pciback binding at boot (I also run the script at boot-up as an extra measure). I then issue the Xen commands to boot the HVM from the dom0 CLI:
sudo xl create /etc/xen/win7.hvm
sudo xl vncviewer win7
And from there the Windows 7 HVM is displayed on my 2nd monitor attached to the 2nd GPU. =)
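For anyone who can't grab the attachments, here is a rough sketch of what an xl HVM config of this kind can look like — illustrative values only, not a copy of my attached win7.hvm (disk path and sizes are made up; the PCI BDFs are my secondary GPU and its HDMI audio function):

builder = 'hvm'
name = 'win7'
memory = 4096
vcpus = 2
# whatever block device Windows was installed onto
disk = [ 'phy:/dev/mapper/win7-disk,hda,w' ]
# qemu runs in dom0 in this setup (see the stubdom discussion later in the thread)
device_model_version = 'qemu-xen-traditional'
# secondary GPU + its HDMI audio function
pci = [ '04:00.0', '04:00.1' ]
vnc = 1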

I can see a lot of logging in your rc.local, but I don't see you actually assigning the GPU to a particular VM (not sure what bind_lib.bash does); I'd be interested in some further details there.

My bad, I forgot to reference where I grabbed the script: 'bind_lib.bash' script == http://wiki.xen.org/wiki/Bind_lib.bash
I attached it here for ease-of-access if you want to give it a shot.

Hope that helps answer your questions, good luck!
bind_lib.bash

cprise

Aug 13, 2015, 6:53:21 PM
to David Hobach, XO...@riseup.net, qubes...@googlegroups.com
On 08/13/2015 12:25 PM, David Hobach wrote:
> Well, this really sounds awesome as I neither heard of many people
> getting IOMMU to work nor heard of anyone who got GPU passthrough
> working with Qubes. IOMMU support would make Qubes available on a
> broader set of hardware, GPU passthrough would remove the need to have
> some dedicated box for GPU-intensive applications (incl. gaming).
>
> I think neither is officially supported, is it?
>
> Thanks for sharing!
>
> Regarding the libxl vs libvirt for GPU passthrough I honestly couldn't
> directly spot in your attachments where exactly you are using libxl - I
> can see a lot of logging in your rc.local, but I don't see you actually
> assigning the GPU to a particular VM (not sure what bind_lib.bash does);
> I'd be interested in some further details there.
>

I'll second that... It's really interesting and hopeful to see any success
in getting GPU access. Not only that, it's also an example of Qubes
hardware compatibility with an AMD CPU / IOMMU.

Good work, XOR!

Gorka Alonso

Aug 14, 2015, 2:36:07 AM
to qubes-users, tri...@hackingthe.net, XO...@riseup.net
This was not the first success, but it is indeed the best documented so far. If I recall correctly, this was the first success. With ATI Radeon HD 6970 cards also!:


I tried an NVIDIA GTX 670 with no success. I looked into it a long time ago and found the same Xen pages that XOR posted, which said it was pretty difficult with NVIDIA cards.

 

XO...@riseup.net

Aug 14, 2015, 8:58:00 AM
to nasuan...@gmail.com, qubes-users
On 08/13/2015 02:13 PM, nasuan...@gmail.com wrote:
Looks cool, but be aware that by using device_model_version = 'qemu-xen-traditional' you run QEMU in dom0, and are vulnerable to the many QEMU exploits that are discovered quite regularly (which is why Qubes uses stubdoms for QEMU). But anyway, if Xen VGA passthrough works with a stubdom in the future, that will be a great thing.
Good to know.
I am curious... you mention "stubdom" -- I've read the Xen wiki VT-d and Xen wiki PCI-passthrough pages where there is talk about pci-stub vs xen-pciback; is 'pci-stub' what you mean when you say: "stubdom"?
Specifically, I see this explanation about pci-stub on the Xen PCI passthrough page:
pci-stub can be used only with Xen HVM guest PCI passthru, so it's recommended to use pciback instead, which works for both PV and HVM guests.

I would definitely like to be doing GPU passthrough as securely as possible! So far my intention has been to use only HVMs for passthrough. What do you think? I am definitely going to read up on the pciback vs. pci-stub knowledge nonetheless, but I'm quite curious how one can use "stubdoms", as you term it, if that is available at all to us Qubes users in a current GPU-passthrough scenario.

Thank you for any information you can provide.

nasuan...@gmail.com

Aug 14, 2015, 11:42:26 AM
to qubes-users, nasuan...@gmail.com, XO...@riseup.net
Am Freitag, 14. August 2015 14:58:00 UTC+2 schrieb XO...@riseup.net:

>
> I am curious... you mention "stubdom" -- I've read the Xen wiki VT-d
> and Xen wiki
> PCI-passthrough pages where there is talk about pci-stub
> vs xen-pciback; is 'pci-stub' what you mean
> when you say: "stubdom"?
>
> Specifically, I see on the Xen
> PCI passthrough page, this explaination about pci-stub:
>
> pci-stub can be used only with Xen HVM guest PCI
> passthru, so it's recommended to use pciback instead, which
> works for both PV and HVM guests.
>

I do not know much about the PCI passthrough architecture of Xen, but as far
as I understand, pci-stub and pciback are the possible backends for PCI passthrough in Xen. The use of a stubdom, however, is about where QEMU runs as the device model. There are (AFAIK) two ways of running QEMU in Xen:
1. Running one instance of QEMU per HVM in dom0 (which is what you do by adding device_model_version = 'qemu-xen-traditional').
2. Running a VM with a "mini OS" which runs QEMU per HVM (which is the Qubes default).
The problem with QEMU is that it has a big attack surface, as it's rather big. If someone were to exploit a bug in it, they would be able to run code with the rights
of QEMU itself. So if you were running QEMU in dom0, you are doomed. That is why
Qubes uses stubdomains by default, as an attacker would only gain control over the stubdomain.
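In xl config terms the difference is roughly this (just a sketch, not taken from the attached win7.hvm):

# 1. qemu runs in dom0:
device_model_version = 'qemu-xen-traditional'
device_model_stubdomain_override = 0

# 2. qemu runs in a MiniOS stub domain (the Qubes default for HVMs):
device_model_version = 'qemu-xen-traditional'
device_model_stubdomain_override = 1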


>
>
>
> I would definitely like to be doing GPU-passthrough as secure as
> possible! So far my intention has been only to use HVMs for
> passthrough. What do you think? I am def going to read-up on the
> pciback vs pci-stub knowledge none-the-less, but i'm quite curious
> how one can use "stubdoms" as you term it, if that is
> available at all to us Qubes users - in a current GPU-passthrough
> scenario?
>
>
>
> Thank you for any information you can provide.

I think the only "secure" option is to wait until xen supports vga passthrough
with stubdomains, witch I hope will be in near future when xen introduces linux
stubdoms.

Vít Šesták

Aug 15, 2015, 11:47:45 AM
to qubes-users, nasuan...@gmail.com, XO...@riseup.net
I suppose this will not be possible with PVM, will it?

Connor Page

Aug 24, 2015, 7:28:00 AM
to qubes-users
I struggle to understand why you need that bind_lib.bash script, because it defines functions and does not process the command-line parameters. Did you modify it? Moreover, the source states that the script is obsolete for Xen 4.2 and xl, because those will process the pci = [...] parameters in your HVM conf file.
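i.e. with xl >= 4.2, something like this directly in the HVM conf file should be enough (a sketch using the BDFs posted earlier in this thread):

pci = [ '04:00.0', '04:00.1' ]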

XO...@riseup.net

Aug 24, 2015, 3:07:10 PM
to Connor Page, qubes-users

On 08/24/2015 04:28 AM, Connor Page wrote:
I struggle to understand why you need that bind_lib.bash script because it defines functions and does not process the command line parameters. Did you modify that?
You are correct; I have since changed how I use the bind_lib.bash script, to actually use the functions properly. Prior to this post, I had placed the entirety of the bind_lib.bash script at the top of my dom0 /etc/rc.local file, and was calling it like so:
echo "" 2>&1 >> /var/log/xen/vgapassthu-bootup.log
echo "`date`" 2>&1 >> /var/log/xen/vgapassthu-bootup.log
bindback "0000:04:00.0" 2>&1 >> /var/log/xen/vgapassthu-bootup.log
bindback "0000:04:00.1" 2>&1 >> /var/log/xen/vgapassthu-bootup.log
bindback "0000:00:14.2" 2>&1 >> /var/log/xen/vgapassthu-bootup.log
bindback "0000:06:00.0" 2>&1 >> /var/log/xen/vgapassthu-bootup.log
##
# 04:00.0 => VGA compatible controller: Advanced Micro Devices, Inc. [AMD/ATI] Cayman XT [Radeon HD 6970]
# 04:00.1 => Audio device: Advanced Micro Devices, Inc. [AMD/ATI] Cayman/Antilles HDMI Audio [Radeon HD 6900 Series]
# 00:14.2 => Audio device: Advanced Micro Devices, Inc. [AMD/ATI] SBx00 Azalia (Intel HDA) (rev 40)
# 06:00.0 => Ethernet controller: Realtek Semiconductor Co., Ltd. RTL8111/8168/8411 PCI Express Gigabit Ethernet Controller (rev 03)
##
echo "" 2>&1 >> /var/log/xen/vgapassthu-bootup.log
echo "`date`" 2>&1 >> /var/log/xen/vgapassthu-bootup.log
xl pci-assignable-list 2>&1 >> /var/log/xen/vgapassthu-bootup.log
echo "" 2>&1 >> /var/log/xen/vgapassthu-bootup.log
cat /sys/bus/pci/devices/0000:04:00.0/uevent 2>&1 >> /var/log/xen/vgapassthu-bootup.log
cat /sys/bus/pci/devices/0000:04:00.1/uevent 2>&1 >> /var/log/xen/vgapassthu-bootup.log
cat /sys/bus/pci/devices/0000:00:14.2/uevent 2>&1 >> /var/log/xen/vgapassthu-bootup.log
cat /sys/bus/pci/devices/0000:06:00.0/uevent 2>&1 >> /var/log/xen/vgapassthu-bootup.log

On 08/24/2015 04:28 AM, Connor Page wrote:
Moreover, the source states that the script is obsolete for xen 4.2 and xl because those will process [pci] parameters in your hvm conf file.
Now, that's interesting! I did not know that.

Through my own experimentation, I've found that issuing the xen-pciback.hide=XXXX.XX.XX entries via the GRUB command line at boot-up appears to do all the same work that the bind_lib.bash script does. I've continued to run the script anyway, but according to your post I can flat-out ditch the rc.local/bind_lib.bash process altogether. I think I will do just that =)
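For the record, this is the kind of kernel entry I mean — just the relevant fragment of GRUB_CMDLINE_LINUX, using the BDFs from my rc.local above (and it only takes effect if xen-pciback is available that early in boot):

xen-pciback.hide=(0000:04:00.0)(0000:04:00.1)(0000:00:14.2)(0000:06:00.0)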

Thank you Connor, your post has been very helpful!

Stepping completely off-topic: I've been trying to work on a Windows 10 HVM. I can get it to boot and fully work so far without VGA passthrough, but my desire to use any version of Windows depends entirely on VGA passthrough functionality. I'd love to hear from anyone who has Windows 10 + VGA passthrough working on Qubes!

Connor Page

Aug 24, 2015, 7:15:21 PM
to qubes-users, conp...@gmail.com, XO...@riseup.net


On Monday, 24 August 2015 20:07:10 UTC+1, XO...@riseup.net wrote:

On 08/24/2015 04:28 AM, Connor Page wrote:
I struggle to understand why you need that bind_lib.bash script because it defines functions and does not process the command line parameters. Did you modify that?
You are correct, I have since changed how I use the bind_lib.bash script, to actually using the functions properly. Prior to this post, I had placed the entirety of the bind_lib.bash script at the top of my dom0 /etc/rc.local file, and was calling it like so:
No need to copy the whole script when you can source it by including just '. bind_lib.bash' (as it actually says in the script itself).
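i.e. something like this at the top of /etc/rc.local (a sketch; the script path is just wherever you saved it):

. /usr/local/bin/bind_lib.bash
bindback "0000:04:00.0" >> /var/log/xen/vgapassthu-bootup.log 2>&1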
 
On 08/24/2015 04:28 AM, Connor Page wrote:
Moreover, the source states that the script is obsolete for xen 4.2 and xl because those will process [pci] parameters in your hvm conf file.
Now, that's interesting! I did not know that.

Through my own experimentation I've found that by issuing the xen-pciback.hide=XXXX.XX.XX calls via the grub command-line at boot-up, this appears to be doing all the same work that the bind_lib.bash script does. I've continued to run it anyway. But, according to your post I can flat-out ditch the rc.local/bind_lib.bash process altogether. I think I will do just that =)

The official Qubes kernel option is rd.qubes.hide_pci=AA:BB.C,XX:YY.Z
It should work in many different scenarios and with different flavours of kernel. It is processed in the same hook that steals all network devices from dom0.
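With the devices posted earlier in this thread, that would be, for example:

rd.qubes.hide_pci=04:00.0,04:00.1,00:14.2,06:00.0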
 
Thank you Conner, your post has been very helpful!
 
I'm glad it was helpful. Actually, this thread has been helpful for me as well. Thanks for all the links posted.
 
Stepping completely off-topic; I've been trying to work on a Windows 10 HVM => I can get it to boot and fully work so-far without VGA pass-through. But, my desire to use any version of Windows depends entirely on VGA passthrough functionality. I'd love to hear from anyone that has Windows 10 + VGA passthrough working on Qubes!

Microsoft is on my hate list as #1 and practically forbidden at home :) While it's certainly better to run Windows on emulated and isolated hardware, I still think there's a big security hole in giving it a very complex device that will run proprietary drivers (and firmware) and has actually been initialised in dom0.

Eric Shelton

Aug 24, 2015, 9:40:34 PM
to qubes-users, conp...@gmail.com, XO...@riseup.net
On Monday, August 24, 2015 at 7:15:21 PM UTC-4, Connor Page wrote:

Microsoft is on my hate list as #1 and practically forbidden at home :) While it's certainly better to run Win on emulated and isolated hardware I still think there's a big security hole from giving it a very complex device that will run proprietary drivers (and firmware) and then actually be initialised in dom0.

That issue can be avoided by running it in a stub domain. Was there any attempt made at running it with this line in the win7.hvm file?

device_model_stubdomain_override = 1

You will still avoid using libvirt this way. An upside is that you will no longer have to have the PV drivers installed to get networking to work; the emulated RTL8139 should work within the stub domain. My guess is that this will break your use of VNC for the main display. If so, you might want to consider setting this in win7.hvm, to make the passed-through adapter the primary display:

gfx_passthru = 1

Best,
Eric

Eric Shelton

Aug 24, 2015, 9:51:48 PM
to qubes-users, XO...@riseup.net, nasuan...@gmail.com
On Thursday, August 13, 2015 at 5:13:43 PM UTC-4, nasuan...@gmail.com wrote:

Looks cool, but be aware that by using device_model_version = 'qemu-xen-traditional' you run QEMU in dom0, and are vulnerable to the many QEMU exploits that are discovered quite regularly (which is why Qubes uses stubdoms for QEMU).
But anyway, if Xen VGA passthrough works with a stubdom in the future, that will be a great thing.

Actually, the recent high-profile QEMU-based vulnerabilities affect either both QEMU upstream and traditional (see XSA-135, XSA-138, and XSA-140) or just QEMU upstream (see XSA-139). KVM makes use of QEMU upstream, so there's a greater return on finding vulnerabilities in QEMU upstream. My guess is that many more vulnerabilities remain to be discovered and will be created as new features are added and changes made.

Nevertheless, since this is already being run under QEMU traditional, hopefully it still works with the stub domain switch enabled in the config file.

Eric

XO...@riseup.net

Aug 26, 2015, 5:47:11 PM
to Connor Page, qubes-users
Connor,


On 08/24/2015 04:15 PM, Connor Page wrote:
The official Qubes kernel option is rd.qubes.hide_pci=AA:BB.C,XX:YY.Z
It should work in many different scenarios and different flavours of kernel. It is processed in the same hook that steals all network devices from dom0.
Based on your post, I gave the 'rd.qubes.hide_pci' option a shot, and it works flawlessly. Thank you for the heads-up! Being that this is the official Qubes kernel option, I am using it from now on. In an effort to continue using Win7 with GPU-passthrough functionality, these are the key parts of my updated /etc/default/grub file:
GRUB_CMDLINE_LINUX="rd.luks.uuid=luks-878dd9b7-fc12-4283-8802-999f47aab5ab rd.lvm.lv=qubes_dom0/root vconsole.font=latarcyrheb-sun16 rd.lvm.lv=qubes_dom0/swap $([ -x /usr/sbin/rhcrashkernel-param ] && /usr/sbin/rhcrashkernel-param || :) modprobe=xen-pciback.passthrough=1 xen-pciback.permissive rd.qubes.hide_pci=04:00.0,04:00.1,00:14.2,06:00.0 rhgb quiet"
GRUB_CMDLINE_XEN_DEFAULT="console=none dom0_max_vcpus=2 dom0_vcpus_pin iommu=pv swiotlb=force watchdog e820-mtrr-clip=false extra_guest_irqs=,18 lapic x2apic=false irqpoll amd_iommu_dump debug,verbose apic_verbosity=debug e820-verbose=true ivrs_ioapic[2]=00:14.0 loglvl=all guest_loglvl=all unrestricted_guest=1 msi=1"
So far, so good!

XO...@riseup.net

Aug 26, 2015, 9:03:46 PM
to Eric Shelton, qubes-users, conp...@gmail.com
Eric,


On 08/24/2015 06:40 PM, Eric Shelton wrote:
That issue can be avoided by running it in a stub domain. Was there any attempt made at running it with this line in the win7.hvm file?

device_model_stubdomain_override = 1

You will still avoid using libvirt this way. An upside is that you will no longer have to have the PV drivers installed to get networking to work; the emulated RTL8139 should work within the stub domain. My guess is that this will break your use of VNC for the main display. If so, you might want to consider setting this in win7.hvm, to make the passed-through adapter the primary display:

gfx_passthru = 1

          
Best,
Eric
This sounds like a great concept, but I've yet to get it working. Anything that adds more layers to the security side of things, I am happy to try out.

I've worked on this "seven ways to sunday", first trying as you suggested => using primary GPU, then as secondary since the primary wouldn't work. Everything I've tried so far has failed, and I've been working on it for a many hours today =( I believe I am going to throw in the towel for today, but will retry another time.

I am attaching what I've documented so far as filename 'stubdom.txt'. I was documenting for my own reference, so some things may not make complete sense... If anyone is curious, feel free to ask and I can elaborate. I plan on coming back to try this again another time, to further troubleshoot.

Thanks again Eric.
stubdom.txt

Eric Shelton

Sep 1, 2015, 5:10:04 AM
to qubes-users, knock...@gmail.com, conp...@gmail.com, XO...@riseup.net
It may be helpful to take a look at /var/log/xen/console/guest-win7new-dm.log, to see if QEMU is outputting any error messages.

Also, I came across this item:


which discusses a problem with IRQ mapping for PCI passthrough devices in a stub domain. This would clearly be the culprit if the above logfile has any messages like "xen_pt_initfn: Error: Mapping machine irq 25 to pirq -1 failed, (rc: -1)". A corresponding patch is attached, if someone doing stubdom PCI passthrough wants to give it a try.

Eric
passthrough.patch

bvanlan...@gmail.com

Oct 6, 2015, 9:05:37 AM
to qubes-users, XO...@riseup.net
I've been looking for a way to no longer have to dual-boot Windows with Linux, and I came across this. How should I proceed if I want to replicate this using a 980 Ti for the HVM? I did this before on Arch, and I had to allocate a USB controller to the VM and switch the cables around. Is this necessary here?

I apologize in advance if I'm asking weird things; I didn't quite follow everything described in here.

Thanks in Advance!

thibaut noah

Oct 15, 2015, 5:50:53 AM
to qubes-users, XO...@riseup.net, bvanlan...@gmail.com
Same situation here; any help or insight would be more than welcome.

Olivier Médoc

Oct 16, 2015, 1:41:52 AM
to qubes...@googlegroups.com
On 08/12/2015 11:54 AM, XO...@riseup.net wrote:
Hello,

Do you think this setup has a chance to work on laptops that have two GPUs (e.g., an Intel GPU for low power consumption + an AMD GPU for gaming)? I know there is a BIOS configuration option to disable automatic switching between GPUs.


Vít Šesták

Oct 16, 2015, 1:48:00 PM
to qubes-users
If I understand it well, having two GPUs is actually a precondition. If other preconditions (mainly IOMMU/VT-d) are met, then it should work.

I haven't tried this, but you will probably have to disable the second GPU (the one you want to use in another VM) in dom0. For Nvidia, Bumblebee probably can do the job. (It can disable the Nvidia GPU, but I haven't tested it with this scenario.) For AMD GPUs, there is likely something similar.

I am not sure what BIOS settings are required for having those GPUs working correctly. The second GPU must be in some way available to dom0 (in order to be assignable to the other VM), but it must not be used by dom0 (or you are likely to get a kernel panic in dom0 quickly when you start the VM with the assigned GPU).

Regards,
Vít Šesták 'v6ak'

Gorka Alonso

Oct 17, 2015, 2:22:09 PM
to qubes-users, XO...@riseup.net
One thing I discovered a long time ago and wanted to advise about was VT-d with PCI/PCIe GPUs.

About a year ago I had a GeForce GTX 680, and I tried to do a secondary GPU passthrough under Qubes. Unluckily, Xen raised an error. Further investigation revealed it was an FLReset problem. I thought back then it was a Xen problem.

Now I have a newer one (GTX 780) and tried it under plain Ubuntu with KVM, using a very nice KVM guide on the topic [1]. I did everything fine but got an error when launching QEMU that said 'vfio: error no iommu_group for device'. I looked into that error and found out that:

a) I had the BIOS configured OK
b) modules were configured right
c) I had GRUB OK
d) the secondary GPU modules were blacklisted fine
e) I had IOMMU configured right
f) pci_stub was working right

g) THE PROBLEM:
sudo lspci -vv -s 01:00.0 | grep FLReset

 showed the following result:

ExtTag+ AttnBtn- AttnInd- PwrInd- RBE+ FLReset-

I dug around a bit more and verified that the problem was the GTX 780 model not supporting FLReset [2]; now I assume my previous attempt with a GTX 670 probably had the same cause. Back then I blamed Xen when, in fact, the guilty party was NVIDIA.

Could anyone who did a successful secondary passthrough run the lspci command and post the result to verify my claim? It could also help if anyone who has a GeForce Titan posted it.

[1] https://www.pugetsystems.com/labs/articles/Multiheaded-NVIDIA-Gaming-using-Ubuntu-14-04-KVM-585/
[2] http://www.tomshardware.co.uk/forum/523-71-intel-home#9870991

para...@gmail.com

May 15, 2017, 10:00:19 AM
to qubes-users, XO...@riseup.net
Sorry to necro this, but has this process changed at all to support NVIDIA cards? If so, are there any additional steps, as there have been in the past, to make this work? I'd be keenly interested in moving to Qubes, as I do gaming on Windows and have a Linux VM for most everything else. I'd prefer to do the reverse.