Is It Theoretically Possible to Isolate Users/Applications From the 3D X Server?


Tom Li

Apr 14, 2020, 7:32:06 PM
to VirtualGL User Discussion/Support
Hello.

I'm exploring possible methods to run OpenGL-enabled desktop applications in
an isolated X server "sandbox", so that these applications cannot take over
the main X server's mouse, keyboard, or screen, and cannot gain unrestricted
access to the main desktop and/or to each other.

Traditionally, Xephyr is the solution, and the procedure is basically:
(1) run Xephyr as a separate user, granting Xephyr permission to access the
main X server so it can display itself as a window; (2) run the GUI program
as another separate user, granting it permission to access Xephyr only. Thus,
the program can be completely isolated, but Xephyr does not support OpenGL,
so this is not useful for 3D applications. That is why I turned to VirtualGL.
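
For concreteness, the procedure looks roughly like this (an untested sketch;
the user names, display numbers, and application are placeholders, and xauth
cookie handling is omitted):

    # On the main X server (:0), allow the sandbox user to connect:
    xhost +si:localuser:sandbox
    # (1) Start the nested X server as the sandbox user:
    sudo -u sandbox env DISPLAY=:0 Xephyr :1 -screen 1280x800 &
    # Allow the app user to connect to the nested server only:
    sudo -u sandbox env DISPLAY=:1 xhost +si:localuser:appuser
    # (2) Run the GUI program against the nested server:
    sudo -u appuser env DISPLAY=:1 some-gui-app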

Unfortunately, VirtualGL requires access to a 3D X server in order to work.
And since it's not possible to run multiple X servers on a single GPU, the
only option is to grant access to the main X server to every VirtualGL-powered
application on the system. By doing so, the isolation between the main X
server and the isolated X server becomes non-existent: programs inside the
sandbox will be able to take control of the main X server.

I wonder: is it theoretically possible to modify the VirtualGL codebase to
implement an additional layer of privilege separation? My basic idea is to
modify the VirtualGL faker -- instead of issuing OpenGL commands by accessing
the 3D X server itself, it would only pass these commands to a server via
IPC. Only this faker server, running as a different user, would have access
to the 3D X server. In this way, a VirtualGL-powered 3D program would be
completely separated from the main 3D X server, providing a sandboxed
graphics environment. It would also be helpful on a shared 3D server, since
users would not be able to access each other via the 3D X server.

Is this a good/feasible idea? If so, how difficult would the
implementation be?

Thanks,
Tom Li

DRC

Apr 14, 2020, 8:35:26 PM
to virtual...@googlegroups.com
Unfortunately, that isn't feasible, because VirtualGL relies upon Direct
Rendering to handle most of the OpenGL commands.  That's why VGL only
intercepts GLX commands and a few OpenGL commands-- basically everything
that's necessary to manage the redirecting of OpenGL contexts from the
2D X server to the 3D X server and from windows to Pbuffers, but nothing
more.  Once an OpenGL context has been redirected into a Pbuffer on the
3D X server, VirtualGL gets out of the way, and OpenGL commands pass
through unimpeded until the frame has finished rendering.  What you
propose would require a full OpenGL interposer, which would be a
maintenance nightmare given that OpenGL changes so frequently.  Also,
there's the matter of what to do with the OpenGL commands once they
reach the hypothetical IPC server.  Unless that IPC server was somehow
in-process with the 3D X server, the IPC server would be limited in its
ability to pass OpenGL commands to the 3D X server in much the same way
that VirtualGL is limited.  I made an active decision in 2004 not to
create a full OpenGL interposer, for reasons described here: 
https://virtualgl.org/About/Background.  If you did want to build such a
solution, you'd be better off basing it on the Mesa source, since Mesa
keeps track of changes in the OpenGL API and already provides
dispatching mechanisms.
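
For those unfamiliar with the interposer approach, this is the general shape
of an LD_PRELOAD shim -- a bare-bones illustration of intercepting one GLX
call, not VirtualGL's actual code:

    /* shim.c -- build: gcc -shared -fPIC shim.c -o libshim.so -ldl
       run:   LD_PRELOAD=./libshim.so some-glx-app */
    #define _GNU_SOURCE
    #include <dlfcn.h>
    #include <stdio.h>
    #include <GL/glx.h>

    /* Our definition shadows libGL's; we look up and forward to the real
       one.  VirtualGL does this for the GLX entry points (and a handful
       of OpenGL calls); everything else goes straight to the driver. */
    void glXSwapBuffers(Display *dpy, GLXDrawable drawable)
    {
        static void (*real)(Display *, GLXDrawable) = NULL;
        if (!real)
            real = (void (*)(Display *, GLXDrawable))
                       dlsym(RTLD_NEXT, "glXSwapBuffers");
        /* A faker would read back the Pbuffer and transport the frame
           here, at the frame boundary. */
        fprintf(stderr, "[shim] glXSwapBuffers intercepted\n");
        real(dpy, drawable);
    }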

In general, the 3D X server in a VirtualGL server should be thought of
as a shared resource, and it shouldn't be used for any
security-conscious activities.  If you need to use a local X server on
the VirtualGL server, then I strongly recommend configuring two X
servers-- one headless that is dedicated to VirtualGL and another one
that can be used for local activities.  In a VirtualGL environment, it
doesn't really matter if an application takes over the keyboard and
mouse on the 3D X server, because VirtualGL is only using the 3D X
server to execute GLX commands.  And VirtualGL is designed to run at the
display manager login prompt, which prevents anyone from having
unrestricted access to a local desktop.

Tom Li

Apr 15, 2020, 5:17:24 AM
to VirtualGL User Discussion/Support
> Unfortunately, that isn't feasible, because VirtualGL relies upon Direct
> Rendering to handle most of the OpenGL commands. [...] I made an active
> decision in 2004 not to create a full OpenGL interposer, for reasons
> described here: https://virtualgl.org/About/Background.  If you did
> want to build such a solution, you'd be better off basing it on the Mesa
> source, since Mesa keeps track of changes in the OpenGL API and already
> provides dispatching mechanisms.

Fully understood. Thanks for the clear and detailed explanation.

> In general, the 3D X server in a VirtualGL server should be thought of
> as a shared resource, and it shouldn't be used for any
> security-conscious activities.  If you need to use a local X server on
> the VirtualGL server, then I strongly recommend configuring two X
> servers-- one headless that is dedicated to VirtualGL and another one
> that can be used for local activities.

So it is, in fact, possible to operate two X servers simultaneously, one
normal and one headless, with only a single physical GPU?

That would solve the problem. I've never used such a configuration before,
and I'm not sure how an X server can run on a graphics card without
actually creating a "head". Is there any documentation for that?

Thanks,
Tom Li

Youssef Ghorbal

Apr 15, 2020, 6:48:07 AM
to VirtualGL User Discussion/Support
Hi Tom,

>> In general, the 3D X server in a VirtualGL server should be thought of
>> as a shared resource, and it shouldn't be used for any
>> security-conscious activities.  If you need to use a local X server on
>> the VirtualGL server, then I strongly recommend configuring two X
>> servers-- one headless that is dedicated to VirtualGL and another one
>> that can be used for local activities.
>
> So it is, in fact, possible to operate two X servers simultaneously, one
> normal and one headless, with only a single physical GPU?
>
> That would solve the problem. I've never used such a configuration before,
> and I'm not sure how an X server can run on a graphics card without
> actually creating a "head". Is there any documentation for that?

I want to achieve a similar setup (application isolation) for an HPC
environment, and I was asking a similar question here.

I'm currently exploring running a per-user Xorg bound to a given GPU
(allocated by the HPC scheduler, for instance). So far, binding to a given
GPU is working (basically just a tweak to xorg.conf to reference the GPU's
PCI path; see the snippet below). Running rootless is not conclusive so far,
unfortunately, since Xorg needs access to a tty upon startup, and ttys are
root-owned by default and only get chowned by PAM (pam_console) upon console
login. There is also the fact that Xorg seems to want access to input devices
anyway; if run rootless, it only spits out a warning (about not being able to
bind input devices) but starts anyway. I've asked for help on the Xorg
mailing lists; no luck for now.

I'll report back if I come up with a working setup.
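
For reference, the xorg.conf tweak is essentially pinning the Device section
to one GPU's PCI address (a sketch; the bus ID and driver below are
placeholders -- get the real bus ID from lspci or nvidia-xconfig
--query-gpu-info):

    Section "Device"
        Identifier  "GPU0"
        Driver      "nvidia"       # or whichever driver your GPU uses
        BusID       "PCI:3:0:0"    # placeholder; use your GPU's PCI address
    EndSection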

As answered by DRC in the other thread: "this [will] be addressed in
VirtualGL 3.0 by the introduction of an EGL back end, which will allow
VirtualGL to be used without a 3D X server (with some limitations)."

Youssef Ghorbal

DRC

Apr 15, 2020, 10:07:07 AM
to virtual...@googlegroups.com
On 4/15/20 4:17 AM, Tom Li wrote:
> So it is, in fact, possible to operate two X servers simultaneously, one
> normal and one headless, with only a single physical GPU?

No, you need a separate GPU for each.  The idea is that you would use a
low-end GPU for the X server connected to the display (presumably you'd
only be using that for administering the server) and a high-end GPU for
the headless 3D X server used by VirtualGL.

> That would solve the problem. I've never used such a configuration before,
> and I'm not sure how an X server can run on a graphics card without
> actually creating a "head". Is there any documentation for that?

For nVidia GPUs, the process is described here:

https://virtualgl.org/Documentation/HeadlessNV
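
The gist, paraphrasing that page (treat this as a sketch and consult the
page for the authoritative steps):

    # Find the GPU's PCI bus ID:
    nvidia-xconfig --query-gpu-info
    # Generate an xorg.conf for a headless 3D X server on that GPU
    # (substitute the bus ID reported above):
    nvidia-xconfig -a --allow-empty-initial-configuration \
        --use-display-device=None --virtual=1920x1200 --busid "PCI:0:2:0"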


DRC

Apr 15, 2020, 10:19:30 AM
to virtual...@googlegroups.com
On 4/15/20 5:48 AM, Youssef Ghorbal wrote:
> As answered by DRC in the other thread: "this [will] be addressed in
> VirtualGL 3.0 by the introduction of an EGL back end, which will allow
> VirtualGL to be used without a 3D X server (with some limitations)."

That is correct, and you can follow this GitHub issue to be notified
when the feature is ready for testing:

https://github.com/VirtualGL/virtualgl/issues/10

Due to a lot of people suddenly working remotely, I have had a sharp
uptick in demand for TurboVNC over the past two months, so I have been
temporarily distracted by getting that code base ready for an upcoming
stable release later this month and a next-gen beta release this
summer.  However, I am now back to focusing on the VGL EGL back end,
since the funding to complete that feature finally arrived a few weeks ago.

For those who don't know about the EGL back end, it uses an EGL
extension in order to create Pbuffers and OpenGL rendering contexts
without using a 3D X server at all.  Access to the GPU is provided
solely through the DRI device files-- /dev/dri/card0, etc.
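
The extension in question is EGL_EXT_platform_device (the EGL device
platform).  A minimal standalone sketch of headless context creation --
illustrative only, not VirtualGL's actual code, with error checks omitted:

    /* egldev.c -- link with -lEGL */
    #include <EGL/egl.h>
    #include <EGL/eglext.h>

    int main(void)
    {
        /* Enumerate GPUs without touching any X server. */
        PFNEGLQUERYDEVICESEXTPROC queryDevices =
            (PFNEGLQUERYDEVICESEXTPROC)
                eglGetProcAddress("eglQueryDevicesEXT");
        PFNEGLGETPLATFORMDISPLAYEXTPROC getPlatformDisplay =
            (PFNEGLGETPLATFORMDISPLAYEXTPROC)
                eglGetProcAddress("eglGetPlatformDisplayEXT");

        EGLDeviceEXT devices[8];  EGLint numDevices;
        queryDevices(8, devices, &numDevices);

        EGLDisplay dpy = getPlatformDisplay(EGL_PLATFORM_DEVICE_EXT,
                                            devices[0], NULL);
        eglInitialize(dpy, NULL, NULL);

        /* Choose a config that supports Pbuffers and desktop OpenGL. */
        EGLint cfgAttrs[] = { EGL_SURFACE_TYPE, EGL_PBUFFER_BIT,
                              EGL_RENDERABLE_TYPE, EGL_OPENGL_BIT,
                              EGL_NONE };
        EGLConfig config;  EGLint numConfigs;
        eglChooseConfig(dpy, cfgAttrs, &config, 1, &numConfigs);

        EGLint pbAttrs[] = { EGL_WIDTH, 1024, EGL_HEIGHT, 768, EGL_NONE };
        EGLSurface pb = eglCreatePbufferSurface(dpy, config, pbAttrs);

        eglBindAPI(EGL_OPENGL_API);
        EGLContext ctx = eglCreateContext(dpy, config, EGL_NO_CONTEXT, NULL);
        eglMakeCurrent(dpy, pb, pb, ctx);
        /* ...render with OpenGL here... */
        return 0;
    }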

DRC

