Clarifications on GPU security

Sirenfal

Apr 7, 2013, 12:46:15 AM
to qubes...@googlegroups.com
As I understand it, you do not support GPU access, preventing 3D usage in both HVMs and AppVMs. Your reasoning is that a compromised VM with access to the GPU could record sensitive information, because the GPU is also rendering the other VMs at the same time.

If that is the concern, what is the problem with direct passthrough (complete rendering control) being given temporarily to a VM? It could be revoked with a global hotkey, and since Qubes still has full input control, a compromised VM couldn't interfere with that.

Just looking for some clarification. In short, I really like Qubes and I'm interested in trying it, but I'm a game programmer and the lack of 3D/GPU support would interfere with my ability to work. I apologize if I got anything wrong; I'm not super familiar with virtualization at the hardware level.

Sirenfal

Apr 7, 2013, 1:04:26 AM
to qubes...@googlegroups.com
And for that matter, if I understand the problem correctly, it should be safe to use an on-board (motherboard) GPU and/or a second video card, so that Qubes can use one and a particular VM can have total control of the other. Is that possible?

Joanna Rutkowska

Apr 7, 2013, 6:59:47 AM
to qubes...@googlegroups.com, Sirenfal
There are essentially three approaches to virtualizing the GPU:

1) Virtualize the DirectX/OpenGL protocol
2) Virtualize the underlying (or some generic) GPU
3) Do selective GPU passthrough to an AppVM

The first two require complex backends on the Dom0 side (which
currently handles the real GPU in Qubes), just like virtualizing disks
requires a disk backend. Only that virtualizing a disk is a rather
simple thing to do, and so the backend can be simple, while
virtualizing DirectX, OpenGL, or a real GPU sounds to me like an
orders-of-magnitude more complex thing to do. So the resulting backend
will be orders of magnitude more complex. And that is precisely what
we would like to avoid, because such complex "listening" code in Dom0
would just become an ideal target for attacks.

Of course, there is also the issue you mentioned: even assuming our
backends are written perfectly securely (i.e. they don't have bugs
such as overflows, race conditions, double frees, etc.), we still
cannot be sure that the stream of DirectX or GPU commands we allow
from one AppVM will not be able to e.g. read buffers created by
another AppVM, which would allow it to steal that AppVM's window
contents. But I think this is much less of a concern than the
complexity and exploitability of the backend discussed above.

Finally we have option #3 -- doing a selective PCIe passthrough of
the GPU. The obvious limitation is that only one AppVM could drive the
screen at a time, and we would need e.g. magic keys (Alt-Tab) to force
a switch back to the Dom0 desktop. Not very convenient, IMHO.

But there is a bigger problem with GPU passthrough. People think that
device passthrough is as easy as flipping a switch and giving one
device to a VM for full control. This is not really so -- if it were,
we would not need the pciback backend in Dom0, would we? (Plus, I
think GPU passthrough is still not supported by mainstream Xen.) Now,
passing through a GPU is even more complex, because we cannot just
take a GPU away from a running VM all of a sudden (I think) -- we need
to provide the VM that loses the real GPU with some kind of
replacement, an emulated device, for that time. OK, perhaps that
wouldn't be that security-critical given our use of a stub domain for
hosting qemu, but it is certainly not easy to code, I think.

So, because #3 still requires a non-trivial amount of coding (and
further security considerations), and given that it would still be
quite inconvenient for the user (as described above), I'd rather
postpone this until we get tools from Intel or other GPU vendors to do
#1 or #2 in a secure way.
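
To make it concrete, here is roughly what such a passthrough setup
looks like on plain Xen -- a sketch only: the PCI address and the
config file are placeholders, and, as noted, this is not something
Qubes currently supports:

  # in Dom0: detach the secondary GPU from its driver and mark it assignable (pciback)
  xl pci-assignable-add 0000:01:00.0
  xl pci-assignable-list

  # in the HVM's xl config file: hand the device to the guest
  pci = [ '0000:01:00.0' ]

Even then, the guest still needs a working driver for that card, the
IOMMU (VT-d) must be present and enabled, and Dom0 keeps the primary
GPU for itself.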

Well, OpenGL/DirectX multiplexing is done by all mainstream desktop
OSes these days, allowing e.g. the Windows Chess program to use
DirectX to manage its window while Google Earth uses it for its own
window at the same time. Similarly, modern browsers have started
exposing the GPU via WebGL to their apps (= websites). I'm pretty sure
all those mechanisms are implemented not very securely, or perhaps
even totally insecurely, but once people start exploiting this GPU
multiplexing to e.g. attack Chrome or IE, we should see better ways of
multiplexing, hopefully aided by GPU manufacturers.

joanna.

Joanna Rutkowska

Apr 7, 2013, 7:00:51 AM
to qubes...@googlegroups.com, Sirenfal
Sure, that would be much better, security-wise. Only that, I'm
afraid, Xen's support for GPU passthrough is either not working or
incomplete. There was a thread on this topic on the list in the past,
and AFAIR, people reported it not working with the Xen we use in
Qubes.
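
If you want to at least check whether your box exposes the IOMMU
(VT-d) support that any form of passthrough depends on, something like
this should tell you -- just a sketch, and the exact wording of the
output differs between Xen versions:

  # in Dom0
  xl info | grep virt_caps              # look for hvm_directio
  xl dmesg | grep -i virtualisation     # Xen logs whether I/O virtualisation is enabled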

j.

syd bris

Apr 7, 2013, 8:01:44 PM
to qubes...@googlegroups.com
What about a USB mini PC -- like a Raspberry Pi, Cotton Candy, etc. --
that has its own OS, RAM, GPU, and motherboard? If such a device were
attached and booted into its own (USB) HVM, who would have GPU
control: Qubes or the mini PC's GPU?

Sirenfal

Apr 8, 2013, 1:19:00 AM
to qubes...@googlegroups.com, Sirenfal
I have a few more questions:

Where can I read more about how to attempt giving a second GPU to a VM? Can you link to the thread you mentioned?

Failing that (and I fully understand the security implications), is there any way to give non-passthrough (full) GPU access to an HVM or AppVM, or is that not possible at all either?


Furthermore, what is the current performance impact (CPU/RAM/HD) of using Qubes? I know there will be increased memory consumption, but is there a speed penalty as well? What kind of performance impact, if any, would GPU passthrough involve (I'm guessing none)?

Thanks for the information.

Joanna Rutkowska

Apr 8, 2013, 5:10:47 AM
to qubes...@googlegroups.com, Sirenfal
On 04/08/13 07:19, Sirenfal wrote:
> I have a few more questions:
>
> Where can I read more about how to attempt giving a second GPU to a VM? Can
> you link to the thread you mentioned?
>
>

I think this is the thread:

https://groups.google.com/group/qubes-devel/browse_frm/thread/31f1f2da39978573/586a6bed214fe2ed?lnk=gst&q=GPU#586a6bed214fe2ed

> Failing that (and I fully understand the security implications), is there
> any way to give non-passthrough (full) GPU access to an HVM or AppVM, or
> is that not possible at all either?
>

There is currently no option in Qubes to do that.

>
>> Furthermore, what is the current performance impact (CPU/RAM/HD) of using
>> Qubes? I know there will be increased memory consumption, but is there a
>> speed penalty as well? What kind of performance impact, if any, would GPU
>> passthrough involve (I'm guessing none)?
>

There is surely a performance impact on disk operations in AppVMs.
It's difficult to measure reliably, though, because of the aggressive
caching done at many levels (e.g. VMs cache their disk accesses, and
Dom0 caches them too -- i.e. when the disk backend touches the disk,
etc.).

In other words, you would have to see for yourself :)
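
If you want a rough number anyway, a crude sequential-read test from
inside an AppVM could look like the sketch below (the file name and
sizes are made up, and note that dropping the VM's own page cache does
nothing about whatever Dom0 caches on its side -- which is exactly why
the numbers are hard to trust):

  dd if=/dev/zero of=testfile bs=1M count=1024 conv=fdatasync  # write 1 GiB, flushed to disk
  sync; echo 3 | sudo tee /proc/sys/vm/drop_caches             # drop the VM's own page cache
  dd if=testfile of=/dev/null bs=1M   # "cold" read -- may still be served from Dom0's cache
  dd if=testfile of=/dev/null bs=1M   # warm read, served from the VM's page cache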

joanna.

Joanna Rutkowska

Apr 8, 2013, 5:12:02 AM
to qubes...@googlegroups.com, syd bris
How would you boot the attached "mini PC" into "its own" HVM?

j.

roxa...@gmail.com

Dec 23, 2013, 5:15:07 AM
to qubes...@googlegroups.com
@joanna: You seem to forget that a PC can have multiple GPUs. With that in mind, imagine dom0 gets hold of the 1st GPU like it does now, one HVM/AppVM gets hold of the 2nd GPU for 3D-heavy stuff, and the rest of the AppVMs work like they do now. Wouldn't that be the simplest way to support GPU passthrough and, in turn, high-performance 3D? That's how XenClient does it, after all. If Qubes could do this it would be a killer feature. The other two options sound like some utopia, hardly worth that kind of effort. It's easier for everyone to just have two GPUs. Actually, AMD is now pursuing its APU technology. Imagine: the GPU on an AMD CPU could serve dom0, as it is not that demanding anyway, and a dedicated video card (everyone has at least one nowadays) could be dedicated to a 3D VM (be it 3D modeling, gaming, or whatever).

cprise

Dec 23, 2013, 1:50:45 PM
to qubes...@googlegroups.com, Sirenfal, Joanna Rutkowska

On 04/08/13 05:10, Joanna Rutkowska wrote:
> On 04/08/13 07:19, Sirenfal wrote:
>
>> Furthermore, what is the current performance impact (CPU/RAM/HD) of using
>> Qubes? I know there will be increased memory consumption, but is there a
>> speed penalty as well? What kind of performance impact, if any, would GPU
>> passthrough involve (I'm guessing none)?
>>
> There is surely a performance impact on disk operations in AppVMs. It's
> difficult to measure reliably, though, because of the aggressive caching
> done at many levels (e.g. VMs cache their disk accesses, and Dom0 caches
> them too -- i.e. when the disk backend touches the disk, etc.).
>
> In other words, you would have to see for yourself :)
>
> joanna.

Interesting. I had wondered about caching myself.

My ignorance of VM architecture may be showing here, but would it not
save RAM and CPU capacity to turn disk caching off (or reduce it) at
one level?

>> Thanks for the information.
>>
>> On Sunday, April 7, 2013 7:00:51 AM UTC-4, joanna wrote:
>>> On 04/07/13 07:04, Sirenfal wrote:
>>>> And for that matter, if I understand the problem correctly, it should be
>>>> safe to use an on-board (motherboard) GPU and/or a second video card, so
>>>> that Qubes can use one and a particular VM can have total control of the
>>>> other. Is that possible?
>>>>
>>> Sure, that would be much better, security-wise. Only that, I'm afraid,
>>> Xen's support for GPU passthrough is either not working or incomplete.
>>> There was a thread on this topic on the list in the past, and AFAIR,
>>> people reported it not working with the Xen we use in Qubes.
>>>
>>> j.
>>>
>>>

I have to wonder what his chances are with a USB3 video adapter...
assuming there are models powerful enough to interest him.

Joanna Rutkowska

Dec 26, 2013, 5:35:49 AM
to roxa...@gmail.com, qubes...@googlegroups.com
On 12/23/13 11:15, roxa...@gmail.com wrote:
> @joanna: You seem to forget that a PC can have multiple GPUs.

And you seem to have forgotten to even read this very thread before
posting in it...? :/

joanna.

dhu...@gmail.com

Apr 29, 2014, 11:24:51 AM
to qubes...@googlegroups.com
What about XenGT?

https://github.com/01org/XenGT-Preview-xen

This project seems pretty good for GPU usage in a secure VM, doesn't it?

Joanna Rutkowska

Apr 29, 2014, 12:08:13 PM
to dhu...@gmail.com, qubes...@googlegroups.com
On 04/29/14 17:24, dhu...@gmail.com wrote:
> What about XenGT?
>
> https://github.com/01org/XenGT-Preview-xen
>
> This project seems pretty good for GPU usage in a secure VM, doesn't it?
>

https://groups.google.com/forum/#!topic/qubes-devel/bSp7B92Y4tE

joanna.

kerste...@gmail.com

Feb 19, 2015, 1:10:36 PM
to qubes...@googlegroups.com, avosi...@gmail.com
Hello Joanna,

Would it be easier to enable OpenGL for pure calculation purposes only? It would be quite nice to accelerate just the mathematical calculations of a single Qubes window on the GPU.

Perhaps this could accelerate one running GPU domain while keeping the strict isolation from the other domains. (This would be more of a solution for labs, because at this stage there would be no 3D acceleration.)

This would give Qubes hardware acceleration, like:


and could remove many limits on mathematical calculations, e.g. for advanced crypto schemes.

Best Regards,
Kersten