There are essentially three approaches to virtualizing a GPU:
1) Virtualize the DirectX/OpenGL protocol
2) Virtualize the underlying (or some generic) GPU
3) Do selective GPU passthrough to an AppVM
The first two require complex backends on the Dom0 side (which
currently handles the real GPU in Qubes), just like virtualizing disks
requires a disk backend. The difference is that virtualizing a disk is a
rather simple thing to do, and so the backend can be simple, while
virtualizing DirectX, OpenGL, or a real GPU sounds to me like an
orders-of-magnitude more complex task. So, the resulting backend will be
orders of magnitude more complex too. And that's precisely something we
would like to avoid, because such complex "listening" code in Dom0 would
just become an ideal target for attacks.
Of course, there is also the issue you mentioned: even assuming our
backends are written perfectly securely (i.e. they don't have bugs such
as overflows, race conditions, double frees, etc.), we still cannot be
sure that the stream of DirectX or GPU commands we allow from one AppVM
will not be able to e.g. read buffers created by another AppVM, which
would allow stealing that app's window content. But I think this is much
less of a concern than the complexity and exploitability of the backend
as discussed above.
Finally we have option #3 -- doing a selective PCIe passthrough of the
GPU. The obvious limitation of this is that only one AppVM could drive
the screen at a time, and we would need e.g. magic keys (Alt-Tab) to
force a switch back to the Dom0 desktop. Not very convenient, IMHO.
But there is a bigger problem with GPU passthrough. People think that
device passthrough is as easy as flipping a switch and handing one
device to a VM for full control. It is not really so -- if it really
were that simple, we would not need the pciback backend in Dom0, would
we? (Plus, I think GPU passthrough is still not supported by mainstream
Xen.) Now, passing a GPU through is even more complex, because we cannot
just take a GPU away from a running VM all of a sudden (I think) -- we
need to provide the VM that loses the real GPU with some kind of
replacement, an emulated device, for the duration. Ok, perhaps that
wouldn't be that security-critical given our use of a stubdomain for
hosting qemu, but it is certainly not easy to code, I think.
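For illustration, here is a minimal sketch (not Qubes code) of how a
generic PCI device is typically delegated to a guest under Xen: Dom0
rebinds the device to the pciback driver via sysfs and then attaches it
to the VM with the toolstack. The PCI address and VM name below are
hypothetical placeholders, and a real GPU would additionally need the
emulated-replacement handling described above.

#!/usr/bin/env python3
# Minimal sketch: hide a PCI device from Dom0 by rebinding it to pciback,
# then pass it through to a running domU with the xl toolstack.
# The BDF address and domain name are hypothetical placeholders.
import os
import subprocess

BDF = "0000:01:00.0"   # hypothetical PCI address of the GPU
DOMAIN = "personal"    # hypothetical target AppVM name

def sysfs_write(path, value):
    with open(path, "w") as f:
        f.write(value)

# 1. Detach the device from whatever Dom0 driver currently owns it.
driver_link = "/sys/bus/pci/devices/%s/driver" % BDF
if os.path.exists(driver_link):
    sysfs_write(os.path.join(driver_link, "unbind"), BDF)

# 2. Hand the device over to the pciback backend driver in Dom0.
sysfs_write("/sys/bus/pci/drivers/pciback/new_slot", BDF)
sysfs_write("/sys/bus/pci/drivers/pciback/bind", BDF)

# 3. Attach the device to the guest.
subprocess.check_call(["xl", "pci-attach", DOMAIN, BDF])

Note this only covers the "static" delegation case; seizing the GPU back
from a running VM and giving it an emulated stand-in is exactly the part
that has no simple equivalent here.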
So, because #3 still requires a non-trivial amount of coding (and
further security considerations), and given that it would still be quite
inconvenient for the user (as described above), I'd rather postpone this
until we get tools from Intel or other GPU vendors to do #1 or #2 in a
secure way.
Well, OpenGL/DirectX multiplexing is being done by all mainstream
desktop OSes these days, allowing e.g. the Windows Chess program to use
DirectX to manage its window, and Google Earth to also use it for its
own window at the same time. Similarly, modern browsers are starting to
expose the GPU via WebGL to their apps (= websites). I'm pretty sure all
those mechanisms are not implemented very securely, or perhaps even
totally insecurely, but once people start exploiting this GPU
multiplexing to e.g. attack Chrome or IE, we should see some better ways
of multiplexing, hopefully aided by GPU manufacturers.
joanna.