GPU?


Ro...@tuta.io

Jan 14, 2018, 2:12:24 AM
to qubes-users
Is Qubes able to use the computing power of the GPU, or is whatever GPU is installed wasted in this respect?

Vít Šesták

Jan 14, 2018, 6:08:37 AM
to qubes-users
Qubes does not have GPU virtualization, for security reasons. As a result, a discrete GPU is used only in dom0 (or in the GuiVM in the future). Such a GPU might still be useful for:

* an additional output like HDMI (well, good luck…)
* window manager acceleration (but an integrated GPU usually does the job well with less power)
* GPU passthrough to a VM (It might work, but it is not officially supported and much work will be needed. Also, if the VM can rewrite the GPU firmware, the GPU can perform a DMA attack during boot.)

When selecting my last laptop, I decided to choose one without a discrete GPU. First, I don't need it much. Second, it adds some hassle. It would be ideal to have it switched off in order not to consume power (=> lower heat, quieter laptop, better battery life). On the other hand, I remember having the HDMI output wired to the discrete GPU, which was rather a PITA. I was able to get it somewhat working on my old laptop, but it used to crash X11.

HDMI through a discrete GPU will reportedly get better with Wayland, but we are not there yet.

Regards,
Vít Šesták 'v6ak'

demio...@gmail.com

Jan 15, 2018, 2:35:44 PM
to qubes-users
Why is it not possible to securely virtualize the GPU?

Vít Šesták

Jan 15, 2018, 3:02:13 PM
to qubes-users
It might be possible; it's just that no one has implemented it in a way that does not require complex processing by trusted parts of the system.

There is an attempt called XenGT (for Intel iGPUs), but I am not sure about its current state, and in any case it is not integrated into Qubes yet.

Regards,
Vít Šesták 'v6ak'

Elias Mårtenson

Jan 16, 2018, 6:03:19 AM
to qubes-users
On Tuesday, 16 January 2018 04:02:13 UTC+8, Vít Šesták wrote:
> It might be possible; it's just that no one has implemented it in a way that does not require complex processing by trusted parts of the system.
>
> There is an attempt called XenGT (for Intel iGPUs), but I am not sure about its current state, and in any case it is not integrated into Qubes yet.

I'm sure that if someone wants to take it up as a GSoC project, a lot of people
would be very happy. :-)

Alex Dubois

Jan 18, 2018, 4:00:19 PM
to qubes-users
On Sunday, 14 January 2018 07:12:24 UTC, Ro...@tuta.io wrote:
> Is Qubes able to use the computing power of the GPU, or is whatever GPU is installed wasted in this respect?

You can use GPU computing in Dom0 under the assumptions that:
- You trust the software you plan on using:
  - 3D design software such as Blender
  - GPU compute stacks such as the CUDA libs, TensorFlow, Keras, etc.
- You only create assets/code and export them out of Dom0
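
As a sanity check before relying on this, here is a minimal sketch (PyTorch is used purely as an illustration; any CUDA-capable framework would do) to confirm the framework actually sees the GPU:

    # Minimal sketch: confirm a CUDA device is visible before committing
    # to dom0-based GPU compute. PyTorch here is purely illustrative.
    import torch

    if torch.cuda.is_available():
        print("CUDA device:", torch.cuda.get_device_name(0))
    else:
        print("No usable GPU; compute would silently fall back to the CPU.")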

If you have multiple GPUs (e.g. integrated + NVIDIA), it is possible with Xen to do GPU pass-through (assigning the NVIDIA GPU to a dedicated VM); however:
- It is far from trivial, and only limited setups are known to work
- Its security is not as robust (I can't remember where I read that; I think it was on the GPU pass-through page of the Xen wiki)

I tried it with limited success a few years back (it booted only once, and I was never able to get it working again)...
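
For what it's worth, the attach step itself is short; a sketch assuming Qubes 4.0's qvm-pci syntax, where "gpu-vm" and the BDF "01_00.0" are placeholders for your own VM and device, not a tested configuration:

    # Sketch assuming Qubes 4.0's qvm-pci tool; "gpu-vm" and the BDF
    # "01_00.0" are placeholders, not a tested configuration.
    import subprocess

    def attach_gpu(vm, bdf):
        # Persistently assign the discrete GPU to the target VM.
        subprocess.run(["qvm-pci", "attach", "--persistent", vm, "dom0:" + bdf],
                       check=True)

    attach_gpu("gpu-vm", "01_00.0")

The attach is the easy part; the driver and reset quirks afterwards are where most setups fail.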

Alex Dubois

Jan 18, 2018, 4:02:27 PM
to qubes-users

Sorry, I forgot to mention that GPU pass-through also requires another monitor (or switching inputs...).
It may also be much easier to use it only as a compute GPU (you keep the UI via Qubes dom0).

Vít Šesták

Jan 18, 2018, 4:22:24 PM
to qubes-users
On Thursday, January 18, 2018 at 10:00:19 PM UTC+1, Alex Dubois wrote:
> You can use GPU computing in Dom0 under the assumptions that:
> - You trust the software you plan on using:
>   - 3D design software such as Blender
>   - GPU compute stacks such as the CUDA libs, TensorFlow, Keras, etc.
> - You only create assets/code and export them out of Dom0

You're right, one can, but:

* At the very least, this goes against the nature of Qubes.
* You don't have any Internet connection there.
* Creating only (and not importing anything) is a very important (and often unrealistic) assumption, so you should not open any file you download. If there is a vulnerability in such software (e.g. Blender: https://developer.blender.org/T52924), you are potentially more affected than on a traditional OS like Ubuntu: Qubes dom0 sometimes gets out of date (like Q3.2 being based on the EOLed Fedora 23), so you don't receive any security updates for software like Blender. That's not because ITL does not care about security; it's because Blender is not a security-critical component the way Xen or the Linux kernel are. That's the cost of using Qubes in a way it was never intended.

> If you have multiple GPUs (e.g. integrated + NVIDIA), it is possible with Xen to do GPU pass-through (assigning the NVIDIA GPU to a dedicated VM); however:
> - It is far from trivial, and only limited setups are known to work

Right.

> - Its security is not as robust (I can't remember where I read that; I think it was on the GPU pass-through page of the Xen wiki)

I can guess one of the potential reasons: some people have succeeded only without a stubdom, i.e., with QEMU (the device model) running directly in dom0, which enlarges dom0's attack surface.
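
For the record, in a plain Xen setup the device model's location is controlled by a single xl domain config option; a fragment (the rest of the HVM config is omitted):

    # xl.cfg fragment: run QEMU in a stub domain instead of dom0, so a
    # compromise of the device model does not land directly in dom0
    device_model_stubdomain_override = 1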

V6

Tom Zander

Jan 18, 2018, 5:56:10 PM
to qubes...@googlegroups.com, Ro...@tuta.io
On Sunday, 14 January 2018 08:12:24 CET Ro...@tuta.io wrote:
> Is Qubes able to use the computing power of the GPU, or is whatever GPU
> is installed wasted in this respect?

Relevant here is an email I wrote recently:
https://groups.google.com/forum/#!msg/qubes-devel/40ImS390sAw/Z7M0E8RiAQAJ

The context is a GSoC proposal to modernize the painting
pipeline of Qubes.

Today, GL-using software relies on [llvmpipe] to compile and render GL inside
a qube, completely in software, and then pushes the 2D image to dom0.
This indeed wastes the GPU.


[llvmpipe]: https://groups.google.com/forum/#!msg/qubes-devel/40ImS390sAw/Z7M0E8RiAQAJ
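
You can see this fallback from inside any qube: Mesa's renderer string names llvmpipe when it is rasterizing on the CPU. A quick sketch (assumes glxinfo from mesa-utils is installed in the qube):

    # Print the OpenGL renderer seen inside a qube; under Qubes' software
    # rendering it names llvmpipe. Assumes glxinfo (mesa-utils) is present.
    import subprocess

    out = subprocess.check_output(["glxinfo"], universal_newlines=True)
    for line in out.splitlines():
        if line.startswith("OpenGL renderer"):
            print(line)  # e.g. "OpenGL renderer string: llvmpipe (LLVM ...)"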

--
Tom Zander
Blog: https://zander.github.io
Vlog: https://vimeo.com/channels/tomscryptochannel


Demi Obenour

Jan 19, 2018, 10:57:27 PM
to Tom Zander, qubes...@googlegroups.com, Ro...@tuta.io
I think that Qubes needs 3 things to really take off:

1. It Just Works.  Even on new systems with new hardware.  That means an up-to-date kernel and drivers, probably not an LTS.  It also means getting UEFI to work out of the box (it doesn't for me), and installers recent enough to be aware of the quirks of different kinds of firmware.

2. GPU acceleration.  A big use for Qubes IMO is running games in a sandboxed environment.  But games need hardware-accelerated graphics.  In fact, recent games often require dedicated graphics cards to get acceptable performance.  That means GPU virtualization for ALL GPUs.  Not just Intel integrated graphics.

And it's not just games.  Firefox's WebRender makes heavy use of the GPU.  So does Qt5, and I suspect Chromium will follow suit.  GPUs are quickly becoming a requirement, not an option.

I think the solution is to implement OpenGL on top of WebGL inside the VMs, and expose WebGL from the GUIVM.  That's what browsers do.

3. Windows support that Just Works.  One should not need to know anything about Linux or Xen to use Qubes.  Even though they are what Qubes is built on, they should be implementation details that one need not be familiar with.




Alex Dubois

Jan 20, 2018, 3:38:06 AM
to qubes-users
On Thursday, 18 January 2018 22:56:10 UTC, Tom Zander wrote:
> On Sunday, 14 January 2018 08:12:24 CET Ro...@tuta.io wrote:
> > Is Qubes able to use the computing power of the GPU, or is whatever GPU
> > is installed wasted in this respect?
>
> Relevant here is an email I wrote recently:
> https://groups.google.com/forum/#!msg/qubes-devel/40ImS390sAw/Z7M0E8RiAQAJ

I'll reply in that thread to stay on topic.

But in a few words: a trustworthy solution is not possible until we have GPU virtualization.

Foppe de Haan

Jan 20, 2018, 4:40:36 AM
to qubes-users

Since I am unable to assess the security aspects of any given approach, and you can, have you seen this one? https://forum.level1techs.com/t/looking-glass-guides-help-and-support/122387

Tom Zander

Jan 20, 2018, 6:34:39 AM
to qubes...@googlegroups.com, Foppe de Haan
On Saturday, 20 January 2018 10:40:36 CET Foppe de Haan wrote:
> Since I am unable to estimate the security aspects of any given approach,
> and you do, have you seen this approach?
> https://forum.level1techs.com/t/looking-glass-guides-help-and-support/122
> 387

That looks exactly like the approach my (very naive) proposal had in mind;
but these guys actually seem to know their GL, and they went ahead
and did it :)

Their proof of concept, showing that the result is *faster* (much less
bandwidth) than the Qubes approach, is very exciting.

Thanks for the link!

Alex Dubois

Jan 20, 2018, 8:53:38 AM
to qubes-users

I am not a member of the Qubes core team; I am an avid user/developer and believer :) so my view is only mine...
The project you mention is doing a great job (for a VMware Workstation type of setup); however, as far as I understood, the copy is from/to the same GPU. This is what I am NOT comfortable with: as explained, the client VM would issue processing requests to the GPU (and may abuse it).

However, using their work to copy from one GPU (assigned to ONE VM) to the Dom0 GPU could be good. You still have the problem of bandwidth on the bus (luckily, depending on your hardware build, the two cards may sit on different PCIe lanes). You will not get 144Hz, but 60Hz is within reach. The temptation to compress the stream will be there, but the decompression code would then be in the attack surface.

Foppe de Haan

Jan 20, 2018, 10:14:13 AM
to qubes-users

Thanks for looking at it, and for your thoughts. :)

To clarify: their idea is indeed to use two GPUs, since SR-IOV support simply isn't an option for regular users (due to artificial market segmentation), and according to them, any dom0 GPU on a PCIe gen3 x4 link can handle at least 4K at 60Hz.
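
That claim is easy to sanity-check with back-of-the-envelope arithmetic (assuming uncompressed 32-bit frames; real overheads vary):

    # Rough check: does an uncompressed 4K60 stream fit in PCIe 3.0 x4?
    width, height, bpp, fps = 3840, 2160, 4, 60  # 4K UHD, RGBA bytes/px, Hz

    stream_gbit = width * height * bpp * fps * 8 / 1e9
    print("Uncompressed 4K60: %.1f Gbit/s" % stream_gbit)   # ~15.9 Gbit/s

    # PCIe 3.0 carries roughly 7.88 Gbit/s usable per lane (128b/130b).
    link_gbit = 7.88 * 4
    print("PCIe 3.0 x4 usable: %.1f Gbit/s" % link_gbit)    # ~31.5 Gbit/s
    # 60 Hz fits with ample headroom; 144 Hz (~38 Gbit/s) would not.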

Vít Šesták

Jan 20, 2018, 10:29:55 AM
to qubes-users
When Qubes gets a separate GUIVM, the risks of GPU virtualization could become lower, because the GUIVM is expected to be more up to date (and thus to have recent security updates for the drivers) than the current dom0.

GPU virtualization should be optional (so the user can choose a reasonable tradeoff). This can actually be good for security, provided that the choice is informed. A user who wants to run GPU-intensive tasks will today probably choose Ubuntu (or a dualboot) over Qubes, and neither of those is a better choice than allowing some risk to be taken for some VMs.

Before the GUIVM is implemented, it probably does not make much sense to implement GPU virtualization, because it would create additional maintenance effort for ITL.

GPU passthrough (which can also be combined with some less secure approach to GPU virtualization) might be a reasonable addition for some people, but not a general solution for all Qubes users, because external monitor outputs are often wired to the dedicated GPU*. Not to mention laptops with just one GPU. (Those may be more common among Linux and Qubes users.)

I foresee a GPUVM option in VM settings (like today's NetVM option).

Regards,
Vít Šesták 'v6ak'


*) I honestly don't know the reason for that. In the past, I had a laptop with three graphical outputs (internal screen, VGA and HDMI). Since the old integrated GPU could drive only two of them, it made sense that one of the outputs went through the dedicated card. The last time I checked, however, this no longer seemed to be a problem: today's Intel CPUs often seem to support three displays (quickly verified on Intel ARK for a few random CPUs), while today's laptops tend to have just two outputs (internal and HDMI).

Demi Obenour

Jan 20, 2018, 3:11:50 PM
to Vít Šesták, qubes-users
Another thought I had was to do binary translation of GPU instructions and/or software fault isolation à la NaCl.


Tai...@gmx.com

Jan 25, 2018, 11:56:52 AM
to Alex Dubois, qubes-users
On 01/18/2018 04:00 PM, Alex Dubois wrote:

> If you have multiple GPUs (e.g. integrated + NVIDIA), it is possible with Xen to do GPU pass-through (assigning the NVIDIA GPU to a dedicated VM); however:
> - It is far from trivial, and only limited setups are known to work
> - Its security is not as robust (I can't remember where I read that; I think it was on the GPU pass-through page of the Xen wiki)
>
> I tried it with limited success a few years back (it booted only once, and I was never able to get it working again)...
>
I do this all the time to play games and watch movies.

I recommend either a quality server board or a platform that has libre
or open source firmware so that IOMMU issues can be fixed if they happen.

Correct me if I am wrong, but I don't see the issue with an AppArmor-restricted
QEMU running in dom0...

Vít Šesták

Jan 25, 2018, 12:05:17 PM
to qubes...@googlegroups.com
Well, AppArmor might reduce the attack surface, but remember that:

1. Qubes was not intended to run QEMU in dom0.
2. Qubes dom0 is often based on an outdated Fedora. While ITL provides security updates for security-critical components, that does not necessarily cover all vulnerabilities in the kernel and AppArmor, because of #1.
3. The Linux kernel is considered considerably weaker than Xen in terms of attack surface, so exploits in the Linux kernel are more likely. AppArmor might mitigate *some* of them, but not all.

Regards,
Vít Šesták 'v6ak'