ATTACHED FILES:
grub = /etc/default/grub
win7.hvm = /etc/xen/win7.hvm
rc.local = /etc/rc.local

If the person(s) in charge wouldn't mind updating the Qubes HCL, this here is my hardware setup:
MOTHERBOARD: MSI 890FXA-GD70, currently running V1.8 BIOS (A7640AMS.180 - 2011-02-24)
CPU: AMD Athlon II X2 270 Regor Dual-Core 3.4GHz Socket AM3
POWER SUPPLY: CORSAIR Professional Series Gold AX1200 - 1200W
MEMORY: 16GB total (4x 4GB sticks), G.SKILL Sniper 4GB DDR3 SDRAM DDR3 1600 (PC3 12800)
HARD DRIVE(s): 2x Seagate Barracuda Green ST1500DL003 1.5TB 5900 RPM
VIDEO CARD(s): 2x XFX Radeon HD 6970 2GB 256-Bit GDDR5 PCI Express 2.1 x16

Truth be told, the "type" of GPU passthrough I have working in Qubes R3-rc2 uses libxl, NOT libvirt - this being key to GPU passthrough functionality at this point in Qubes development (for my hardware). I have tried extensively to get GPU passthrough working with native libvirt tools and processes using the standard Qubes HVM creation procedures, with no luck. Namely:

SYSTEM_SERVICE_EXCEPTION
<<--- SNIP --->>
*** STOP: 0x00000003B (0x00000000c0000005, 0xFFFFF880051C49A8, 0xFFFFF8800210FB10, 0x0000000000000)
*** atdcm64a.sys - Address FFFFF880051C49A8 base at FFFFF880051C2000, Datestamp 55a7048b

I have tried omitting the QTW install steps from above, but I continue to get a BSOD during installation of the official AMD GPU driver. I've also tried installing older versions of the official AMD GPU drivers, to no avail.
Well, this really sounds awesome, as I have neither heard of many people getting IOMMU to work, nor of anyone who got GPU passthrough working with Qubes. IOMMU support would make Qubes available on a broader set of hardware, and GPU passthrough would remove the need for a dedicated box for GPU-intensive applications (incl. gaming).
I think neither is officially supported, is it?
"Xen 4.0.0 is the first version to support VGA graphics adapter passthrough to Xen HVM (fully virtualized) guests. This means you can give HVM guest full and direct control of the graphics adapter, making it possible to have high performance full 3D and video acceleration in a virtual machine." - quoted directly from the Xen wiki.

Qubes is running Xen version 4.4.2, but the Qubes User FAQ seems to state that they do not officially endorse "3D support":
Can I run applications, like games, which require 3D support?
Those won’t fly. We do not provide OpenGL virtualization for AppVMs. This is mostly a security decision, as implementing such a feature would most likely introduce a great deal of complexity into the GUI virtualization infrastructure. However, Qubes does allow for the use of accelerated graphics (OpenGL) in Dom0’s Window Manager, so all the fancy desktop effects should still work.
For further discussion about the potential for GPU passthrough on Xen/Qubes, please see the following threads:
GPU passing to HVM
Clarifications on GPU security
Radosław Szkodziński posted """
I've done it a long, long time ago with Radeon 7950 as a secondary card, but I had to start the VM using xm and not xl, manually. Otherwise the card wouldn't be properly reset on VM shutdown and any future attempt to start a Windows VM with it would cause a hardware hang, while Linux ones would oops.
"""
I'm thinking the ability for Qubes to run GPU passthrough has been available for some time, but "supported" is a different beast altogether... From reading the second link above in the User FAQ, it appears the Qubes developers haven't embraced official support due to security issues/considerations.

coderman posted """
i had success with this same setup, a pair of 6950's, with the second manually started via xm with specific configuration.
important: disable SLI! i could not get this to work reliably when
SLI was linked up...
this was R1B1, i have not tried since.
"""
Thanks for sharing!
Regarding libxl vs libvirt for GPU passthrough: I honestly couldn't directly spot in your attachments where exactly you are using libxl - I can see a lot of logging in your rc.local, but I don't see you actually assigning the GPU to a particular VM (not sure what bind_lib.bash does); I'd be interested in some further details there.

I start the HVM with:

sudo xl create /etc/xen/win7.hvm
sudo xl vncviewer win7

And from there the Windows 7 HVM is displayed on my 2nd monitor attached to the 2nd GPU. =)
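Since the thread does not reproduce win7.hvm itself, here is a hedged sketch of what a minimal xl HVM config for this kind of setup typically looks like. The disk path and memory size are placeholders, and only the PCI BDFs are taken from the rc.local log quoted later in the thread; the author's actual attached file may differ:

```
# Hypothetical minimal /etc/xen/win7.hvm for xl-based passthrough (sketch only)
builder = 'hvm'
name    = 'win7'
memory  = 4096                                # placeholder
disk    = [ '/path/to/win7.img,raw,hda,rw' ]  # placeholder
device_model_version = 'qemu-xen-traditional'
pci     = [ '04:00.0', '04:00.1' ]            # GPU + its HDMI audio function
vnc     = 1                                   # main display over VNC; GPU acts as a second display
```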
Looks cool, but be aware that by using device_model_version = 'qemu-xen-traditional' you run QEMU in dom0, and are vulnerable to the many QEMU exploits that are discovered quite regularly (which is why Qubes uses stubdoms for QEMU). But anyway, if Xen VGA passthrough works with a stubdom in the future, it will be a great thing.
>
> I am curious... you mention "stubdom" -- I've read the Xen wiki VT-d
> and Xen wiki
> PCI-passthrough pages where there is talk about pci-stub
> vs xen-pciback; is 'pci-stub' what you mean
> when you say: "stubdom"?
>
> Specifically, I see on the Xen
> PCI passthrough page, this explanation about pci-stub:
>
> pci-stub can be used only with Xen HVM guest PCI
> passthru, so it's recommended to use pciback instead, which
> works for both PV and HVM guests.
>
I do not know much about the PCI passthrough architecture of Xen, but as far as I understand, pci-stub and pciback are the possible backends for PCI passthrough in Xen. The use of a stubdom, however, is about how QEMU is run as the device model. There are (AFAIK) two ways of running QEMU in Xen:
1. Running one instance of QEMU per HVM in dom0 (which is what you do by adding device_model_version = 'qemu-xen-traditional')
2. Running a VM with a "mini OS" which runs QEMU per HVM (which is the Qubes default)
The problem with QEMU is that it has a big attack surface, as it is rather big. If someone exploited a bug in it, he would be able to run code with the rights of QEMU itself. So if you are running QEMU in dom0, you are doomed. That is why Qubes uses stub domains by default, as an attacker would only gain control over the stub domain.
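In xl config terms, the difference between the two ways of running QEMU comes down to a single setting. A sketch, with key names as documented in Xen's xl.cfg and values purely illustrative:

```
# Option 1: device model (QEMU) runs directly in dom0
device_model_version = 'qemu-xen-traditional'

# Option 2: device model runs in a MiniOS stub domain (the Qubes default approach)
device_model_version = 'qemu-xen-traditional'
device_model_stubdomain_override = 1
```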
>
> I would definitely like to be doing GPU passthrough as securely as
> possible! So far my intention has been only to use HVMs for
> passthrough. What do you think? I am definitely going to read up on the
> pciback vs pci-stub knowledge nonetheless, but I'm quite curious
> how one can use "stubdoms", as you term it, if that is
> available at all to us Qubes users - in a current GPU-passthrough
> scenario?
>
> Thank you for any information you can provide.
I think the only "secure" option is to wait until Xen supports VGA passthrough with stub domains, which I hope will happen in the near future when Xen introduces Linux stubdoms.
I struggle to understand why you need that bind_lib.bash script because it defines functions and does not process the command line parameters. Did you modify that?
echo "" >> /var/log/xen/vgapassthu-bootup.log 2>&1
echo "$(date)" >> /var/log/xen/vgapassthu-bootup.log 2>&1
bindback "0000:04:00.0" >> /var/log/xen/vgapassthu-bootup.log 2>&1
bindback "0000:04:00.1" >> /var/log/xen/vgapassthu-bootup.log 2>&1
bindback "0000:00:14.2" >> /var/log/xen/vgapassthu-bootup.log 2>&1
bindback "0000:06:00.0" >> /var/log/xen/vgapassthu-bootup.log 2>&1
##
# 04:00.0 => VGA compatible controller: Advanced Micro Devices, Inc. [AMD/ATI] Cayman XT [Radeon HD 6970]
# 04:00.1 => Audio device: Advanced Micro Devices, Inc. [AMD/ATI] Cayman/Antilles HDMI Audio [Radeon HD 6900 Series]
# 00:14.2 => Audio device: Advanced Micro Devices, Inc. [AMD/ATI] SBx00 Azalia (Intel HDA) (rev 40)
# 06:00.0 => Ethernet controller: Realtek Semiconductor Co., Ltd. RTL8111/8168/8411 PCI Express Gigabit Ethernet Controller (rev 03)
##
echo "" >> /var/log/xen/vgapassthu-bootup.log 2>&1
echo "$(date)" >> /var/log/xen/vgapassthu-bootup.log 2>&1
xl pci-assignable-list >> /var/log/xen/vgapassthu-bootup.log 2>&1
echo "" >> /var/log/xen/vgapassthu-bootup.log 2>&1
cat /sys/bus/pci/devices/0000:04:00.0/uevent >> /var/log/xen/vgapassthu-bootup.log 2>&1
cat /sys/bus/pci/devices/0000:04:00.1/uevent >> /var/log/xen/vgapassthu-bootup.log 2>&1
cat /sys/bus/pci/devices/0000:00:14.2/uevent >> /var/log/xen/vgapassthu-bootup.log 2>&1
cat /sys/bus/pci/devices/0000:06:00.0/uevent >> /var/log/xen/vgapassthu-bootup.log 2>&1
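The bind_lib.bash script itself is never shown in the thread, so as a point of reference, here is a hypothetical reconstruction of what a bindback-style helper usually does: detach a PCI device from whatever driver currently owns it and hand it to xen-pciback through sysfs. The function name, the SYSFS variable, and the structure are assumptions, not the author's actual code; on a real system this requires root and an already-loaded pciback module.

```shell
# Hypothetical bindback helper (sketch; bind_lib.bash is not shown in the thread).
# SYSFS is parameterised so the sysfs root is explicit; it defaults to /sys.
SYSFS="${SYSFS:-/sys}"

bindback() {
    local dev="$1"    # full BDF, e.g. "0000:04:00.0"
    local sysdev="$SYSFS/bus/pci/devices/$dev"

    # Unbind from whatever driver currently owns the device, if any.
    if [ -e "$sysdev/driver" ]; then
        echo "$dev" > "$sysdev/driver/unbind"
    fi

    # Register the slot with pciback, then bind the device to it.
    echo "$dev" > "$SYSFS/bus/pci/drivers/pciback/new_slot"
    echo "$dev" > "$SYSFS/bus/pci/drivers/pciback/bind"
}
```

This is the same effect as booting with xen-pciback.hide on the kernel command line, just done late from userspace instead of at driver-probe time.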
Moreover, the source states that the script is obsolete for Xen 4.2 and xl, because those will process the pci = [...] parameters in your HVM conf file.
On 08/24/2015 04:28 AM, Connor Page wrote:
> I struggle to understand why you need that bind_lib.bash script because it defines functions and does not process the command line parameters. Did you modify that?

You are correct, I have since changed how I use the bind_lib.bash script, to actually using the functions properly. Prior to this post, I had placed the entirety of the bind_lib.bash script at the top of my dom0 /etc/rc.local file, and was calling it like so:

On 08/24/2015 04:28 AM, Connor Page wrote:
> Moreover, the source states that the script is obsolete for xen 4.2 and xl because those will process [pci] parameters in your hvm conf file.

Now, that's interesting! I did not know that.
Through my own experimentation, I've found that issuing the xen-pciback.hide=XXXX.XX.XX calls via the grub command line at boot-up appears to do all the same work that the bind_lib.bash script does. I've continued to run the script anyway. But, according to your post, I can flat-out ditch the rc.local/bind_lib.bash process altogether. I think I will do just that =)
Thank you Connor, your post has been very helpful!
Stepping completely off-topic: I've been trying to work on a Windows 10 HVM => I can get it to boot and fully work so far without VGA passthrough. But my desire to use any version of Windows depends entirely on VGA passthrough functionality. I'd love to hear from anyone that has Windows 10 + VGA passthrough working on Qubes!
Microsoft is on my hate list as #1 and practically forbidden at home :) While it's certainly better to run Win on emulated and isolated hardware I still think there's a big security hole from giving it a very complex device that will run proprietary drivers (and firmware) and then actually be initialised in dom0.
The Qubes official kernel option is rd.qubes.hide_pci=AA:BB.C,XX:YY.Z
It should work in many different scenarios and different flavours of kernel. It is processed in the same hook that steals all network devices from dom0.
GRUB_CMDLINE_LINUX="rd.luks.uuid=luks-878dd9b7-fc12-4283-8802-999f47aab5ab rd.lvm.lv=qubes_dom0/root vconsole.font=latarcyrheb-sun16 rd.lvm.lv=qubes_dom0/swap $([ -x /usr/sbin/rhcrashkernel-param ] && /usr/sbin/rhcrashkernel-param || :) modprobe=xen-pciback.passthrough=1 xen-pciback.permissive rd.qubes.hide_pci=04:00.0,04:00.1,00:14.2,06:00.0 rhgb quiet"

So far, so good!
GRUB_CMDLINE_XEN_DEFAULT="console=none dom0_max_vcpus=2 dom0_vcpus_pin iommu=pv swiotlb=force watchdog e820-mtrr-clip=false extra_guest_irqs=,18 lapic x2apic=false irqpoll amd_iommu_dump debug,verbose apic_verbosity=debug e820-verbose=true ivrs_ioapic[2]=00:14.0 loglvl=all guest_loglvl=all unrestricted_guest=1 msi=1"
That issue can be avoided by running it in a stub domain. Was there any attempt made at running it with this line in the win7.hvm file?
device_model_stubdomain_override = 1
You will still avoid using libvirt this way. An upside is that you will no longer have to have the PV drivers installed to get networking to work; the emulated RTL8139 should work within the stub domain. My guess is that this will break your use of VNC for the main display. If so, you might want to consider setting this in win7.hvm, to make the passed-through adapter the primary display:

gfx_passthru = 1

Best,
Eric
I apologize in advance if I'm asking weird things; I didn't quite follow everything described here.
Thanks in Advance!