On Wednesday, 10 January 2018 02:32:02 GMT Andrew David Wong wrote:
> In preparation for Google Summer of Code (GSoC) 2018 [1], we invite
> the Qubes community to contribute ideas to the Qubes ideas list. [2]
> You can find our 2017 ideas list here:
Here is a suggestion that may be fun; it will certainly be useful.
Not specifically aimed at GSoC, just an idea I had some weeks ago; something
that needs to be done one way or another :-)
# Modernize painting pipeline
The Qubes design is that each virtual machine runs its own xorg server, and
applications running on that VM paint into a specially crafted device
driver.
Changes are sent as pixmaps to the actual user-visible xorg server, creating
the illusion of all applications running on one desktop.
Since GPUs became available to everyone, the design of painting into a
pixel buffer is outdated, and modern GUI toolkits are severely slowed down by
the Qubes pipeline as it is today.
A Graphics Processing Unit (GPU) is dedicated hardware for updating your
screen; most CPUs ship with a GPU on board today. The operations that modern
toolkits issue in order to create their graphical windows are dramatically
different on such systems, and as a result it is sub-optimal to force them
through a pixmap, causing slow refreshes.
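To give a feel for what the pixmap path costs, here is a back-of-the-envelope
sketch; the window size, color depth and frame rate are assumptions picked for
illustration, not Qubes measurements:

```c
/* Back-of-the-envelope cost of shipping full pixmaps for one window.
 * Resolution, depth and frame rate are assumed values for illustration. */
#include <stdio.h>

int main(void)
{
    const double width  = 1920.0;       /* assumed window width in pixels  */
    const double height = 1080.0;       /* assumed window height in pixels */
    const double bytes_per_pixel = 4.0; /* 32-bit RGBX                     */
    const double fps    = 30.0;         /* target refresh rate             */

    double per_frame  = width * height * bytes_per_pixel;
    double per_second = per_frame * fps;

    printf("per frame : %.1f MiB\n",   per_frame  / (1024.0 * 1024.0));
    printf("per second: %.1f MiB/s\n", per_second / (1024.0 * 1024.0));
    return 0;
}
```

That is on the order of 240 MiB/s for a single full-screen window; the
command-based approach described below aims to avoid most of that transfer.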
The task is to research a way to build an xorg driver (using open source
libraries like Mesa) which can be used inside an embedded virtual machine
for applications to draw into.
It should be possible to send the information gained from that (it does not
have to be pixels!) over a connection to our GUI virtual machine (currently
dom0) for display, where the scene should be rendered using OpenGL-based
instructions.
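To make the "information, not pixels" idea concrete, here is a minimal sketch
of what such a command stream could look like. Everything here (the record
layout, the names) is hypothetical and only meant to illustrate the direction;
the GUI VM side would replay such records with OpenGL instead of receiving
finished pixmaps.

```c
/* Hypothetical wire format: a stream of small drawing commands instead of
 * finished pixmaps.  Names and fields are illustration only. */
#include <stdint.h>
#include <stdio.h>
#include <string.h>

enum cmd_type { CMD_FILL_RECT = 1, CMD_DRAW_IMAGE = 2 };

struct cmd_fill_rect {
    uint8_t  type;          /* CMD_FILL_RECT */
    int16_t  x, y, w, h;    /* rectangle in window coordinates */
    uint32_t argb;          /* solid fill color */
};

struct cmd_draw_image {
    uint8_t  type;          /* CMD_DRAW_IMAGE */
    int16_t  x, y;          /* destination position */
    uint32_t image_id;      /* image uploaded once, referenced many times */
};

/* Append a command to an outgoing buffer; a real implementation would send
 * this over the connection to the GUI VM. */
static size_t put(uint8_t *buf, size_t off, const void *cmd, size_t len)
{
    memcpy(buf + off, cmd, len);
    return off + len;
}

int main(void)
{
    uint8_t stream[256];
    size_t  off = 0;

    struct cmd_fill_rect  bg     = { CMD_FILL_RECT, 0, 0, 300, 60, 0xffffffffu };
    struct cmd_draw_image avatar = { CMD_DRAW_IMAGE, 4, 4, 42 };

    off = put(stream, off, &bg, sizeof bg);
    off = put(stream, off, &avatar, sizeof avatar);

    /* A whole list row costs a few dozen bytes of commands, versus roughly
     * 300 * 60 * 4 = 72000 bytes as a raw pixmap. */
    printf("command stream: %zu bytes\n", off);
    return 0;
}
```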
A bit of background to explain the actual problem:
Imagine an address-book list: a picture, some text and some lines for each
entry.
In the old style of painting, the GUI would be drawn using one thread, into a
pixel buffer. It would loop over each person in the address book and draw
the picture, then the text, then the decorations.
This is how Qubes operates now.
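A minimal sketch of that old style, with made-up helper names and a plain
in-memory buffer standing in for the driver's pixel buffer:

```c
/* Old-style software painting: one thread loops over every item and fills
 * pixels into a buffer.  All names here are made up for illustration. */
#include <stdint.h>
#include <stdlib.h>

#define WIN_W 300
#define WIN_H 600
#define ROW_H 60

struct contact { uint32_t avatar_color; /* stand-in for a real picture */ };

static void fill_rect(uint32_t *buf, int x, int y, int w, int h, uint32_t argb)
{
    for (int row = y; row < y + h; ++row)
        for (int col = x; col < x + w; ++col)
            buf[row * WIN_W + col] = argb;
}

int main(void)
{
    uint32_t *pixels = calloc(WIN_W * WIN_H, sizeof *pixels);
    struct contact contacts[10] = { { 0xff336699u } };

    if (!pixels)
        return 1;

    /* One pass per contact: picture, then text area, then decoration,
     * all written pixel by pixel on the CPU. */
    for (int i = 0; i < 10; ++i) {
        int top = i * ROW_H;
        fill_rect(pixels, 4, top + 4, 52, 52, contacts[i].avatar_color); /* picture    */
        fill_rect(pixels, 64, top + 20, 200, 16, 0xff000000u);           /* text area  */
        fill_rect(pixels, 0, top + ROW_H - 1, WIN_W, 1, 0xffccccccu);    /* decoration */
    }

    /* In Qubes today the resulting pixels would now be shipped to the
     * user-visible xorg server as a pixmap. */
    free(pixels);
    return 0;
}
```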
Using a GPU this can be sped up immensely by realizing that starting a
stream of instructions is what takes the time on such hardware; executing
the instructions happens entirely inside the GPU.
The toolkit would first paint all the backgrounds. Then it would set a
different pen and paint all things in that color, which in our case would be
the decorations.
Notice that we no longer loop over each address-book item; they are all done
at the same time.
The border around a picture is drawn by painting a black square, and last the
images of the users are drawn on top of that.
The main thing to notice is that this new way of doing things has the side
effect that many pixels are overwritten many times. This is not a downside
if it happens completely in the GPU, but you do notice this effect in the
Qubes pipeline.
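And a minimal sketch of the batched style, again with made-up names. The
point is that the number of passes stays at three no matter how many items
the list has; on a GPU each pass would map to a single stream of instructions
(e.g. one draw call over a vertex buffer):

```c
/* Batched painting: group work by state (background pass, decoration pass,
 * image pass) so each pass becomes one instruction stream for the GPU.
 * submit_batch() is a placeholder for a real GL draw call. */
#include <stdio.h>

#define ITEMS 10
#define ROW_H 60

struct rect { int x, y, w, h; };

/* Placeholder for something like glDrawArrays() on a vertex buffer holding
 * 'count' rectangles, with the color/texture state bound once per pass. */
static void submit_batch(const char *pass, const struct rect *r, int count)
{
    printf("%s: 1 draw call, %d rectangles\n", pass, count);
    (void)r;
}

int main(void)
{
    struct rect backgrounds[ITEMS], borders[ITEMS], pictures[ITEMS];

    /* Build all geometry first; no painting happens in this loop. */
    for (int i = 0; i < ITEMS; ++i) {
        int top = i * ROW_H;
        backgrounds[i] = (struct rect){ 0, top, 300, ROW_H };
        borders[i]     = (struct rect){ 2, top + 2, 56, 56 }; /* black square */
        pictures[i]    = (struct rect){ 4, top + 4, 52, 52 }; /* drawn on top */
    }

    /* Three passes total, regardless of the number of items; the overdraw
     * (pictures covering borders covering backgrounds) stays on the GPU. */
    submit_batch("backgrounds", backgrounds, ITEMS);
    submit_batch("decorations", borders, ITEMS);
    submit_batch("pictures", pictures, ITEMS);
    return 0;
}
```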
This new approach allows modern toolkits to update their GUI and reach a
30-frames-per-second update rate, even on modest hardware. So it really
works, and the goal is to build on top of that and make Qubes GUIs much more
performant as well.
Expected results:
* A proof-of-concept setup which uses two xorg servers: running an
OpenGL-aware app (any Qt5 application) in one will show its window in the
other, while sending mostly OpenGL instructions and a minimum of bitmaps over
the connection.
Knowledge prerequisite:
* Programming in C
* OpenGL and xorg experience are highly useful
--
Tom Zander
Blog:
https://zander.github.io
Vlog:
https://vimeo.com/channels/tomscryptochannel