virtualgl only locally with xvfb?


Martin Pecka

unread,
Aug 21, 2020, 8:44:12 AM8/21/20
to VirtualGL User Discussion/Support
Hi, we're thinking about getting GLX support on our HPC cluster, which is (currently) completely headless. The idea is that users should be able to run containers that are given access to hardware-accelerated OpenGL rendering. EGL would be better, but we're stuck with the OGRE rendering engine, which doesn't properly support NVIDIA's EGL implementation.

Could you comment on my idea? Is it a supported scenario?

The multi-GPU server would run a single "3D X server", probably Xorg. It would also run the VirtualGL client. Containers that want to do OpenGL work would call a combination of Xvfb and vglrun. That is, the whole setup works on a single machine only, not a pair connected via ssh -X.

Is that possible? Is there a tutorial for this kind of setup?

Thanks for the help

Jason Edgecombe

unread,
Aug 24, 2020, 9:10:56 AM8/24/20
to virtual...@googlegroups.com
Hello Martin,

I don't have experience running containers with VGL, but I have set up headless servers with GPU cards for remote users doing server-side rendering. I can confirm that CUDA programs work in this scenario. The users connect with FastX, but TurboVNC would also work. We accomplish this with a normal Xorg process (not Xvfb) running on DISPLAY=:1, with the proprietary NVIDIA driver configured as usual except for setting the DISPLAY and the PCI ID. We set the PCI ID because we use multiple GPUs in the same box.
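The GPU-pinning part of such an xorg.conf might look like the following fragment (a sketch only; the BusID value is a placeholder, and in practice you would take the real one from lspci or nvidia-smi):

```
Section "Device"
    Identifier  "nvidia0"
    Driver      "nvidia"
    # BusID pins this X screen to one specific GPU; the value below is
    # a placeholder taken in practice from lspci / nvidia-smi
    BusID       "PCI:101:0:0"
EndSection

Section "Screen"
    Identifier  "screen0"
    Device      "nvidia0"
    # Allow the X server to start with no monitor attached
    Option      "UseDisplayDevice" "none"
EndSection
```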

We also run the above config in KVM VMs with one GPU per VM and use PCI-passthrough to pass the GPUs from the host to the guest. I don't know how to have a single Xorg talk to multiple GPUs.

I've attached a file with commands that might help with setting up an NVIDIA card on RHEL 7.

If you want to set up multiple Xorg processes, just use the next DISPLAY number and increment Z in the headlessx@Z service name to match.

** NOTE1: Change the +s and +f options to -s and -f on the vglserver_config command to restrict access. Allowing all users is useful for troubleshooting/exploration.
** NOTE2: The headlessx@1 service will take over the console and blank the display while it's active. I haven't found a way to fix this.
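For reference, a headlessx@.service template along the lines Jason describes might look like the sketch below. This is purely illustrative; the actual unit is in the attached headless-vgl.txt, so every path and option here is an assumption:

```
[Unit]
Description=Headless 3D X server on display :%i
After=multi-user.target

[Service]
# %i is the instance suffix (headlessx@1 -> display :1)
ExecStart=/usr/bin/Xorg :%i -config /etc/X11/xorg.headless.conf -nolisten tcp
Restart=on-failure

[Install]
WantedBy=multi-user.target
```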

I hope this helps.

Sincerely,
Jason
---------------------------------------------------------------------------
Jason Edgecombe | Linux Administrator
UNC Charlotte | Office of OneIT
9201 University City Blvd. | Charlotte, NC 28223-0001
Phone: 704-687-1943
jwed...@uncc.edu | http://engr.uncc.edu |  Facebook
---------------------------------------------------------------------------


On Fri, Aug 21, 2020 at 8:44 AM Martin Pecka <peck...@fel.cvut.cz> wrote:
--
You received this message because you are subscribed to the Google Groups "VirtualGL User Discussion/Support" group.
To unsubscribe from this group and stop receiving emails from it, send an email to virtualgl-use...@googlegroups.com.
To view this discussion on the web visit https://groups.google.com/d/msgid/virtualgl-users/d4256049-e3e7-40ab-97b8-634fc69a38e4n%40googlegroups.com.
headless-vgl.txt

DRC

unread,
Aug 24, 2020, 10:02:20 AM8/24/20
to virtual...@googlegroups.com
I don't fully understand what you're proposing.  The 3D X server part of
your proposal should be no problem, as long as you connect each GPU to a
separate screen on that X server (presumably, the 3D X server would be
headless.)  But why is the VirtualGL Client involved?

Conceptually, it should be possible to share the 3D X server connection
with, say, a Docker container, but given the extremely limited resources
of this project, I have thus far been unable to dedicate the time toward
researching how best to accomplish that
(https://github.com/VirtualGL/virtualgl/issues/98).

Martin Pecka

unread,
Aug 24, 2020, 10:15:17 AM8/24/20
to VirtualGL User Discussion/Support
As the GPUs are shared among multiple users, I find it useful to give each user a separate (but still accelerated) X display. I suppose telling everybody to use :0 wouldn't end well (it would work as long as everyone rendered offscreen, but I can't guarantee that). So I thought VirtualGL could be the thing that guarantees nobody renders onscreen.

On Monday, August 24, 2020 at 16:02:20 UTC+2, DRC wrote:

DRC

unread,
Aug 24, 2020, 10:57:49 AM8/24/20
to virtual...@googlegroups.com

Yes, that is exactly what VirtualGL does, but the VirtualGL Client is not a required component of VirtualGL.  The VirtualGL Client is only used if you use the built-in VGL Transport, which is only useful in a remote X environment (i.e. when the 2D X server is on the client machine.)  Most users use VirtualGL with an X proxy these days, in which case the VirtualGL Client is not used.

Apart from that, it sounds like what you are trying to accomplish is the same as https://github.com/VirtualGL/virtualgl/issues/98: sharing the 3D X server connection from host to guest in a container environment such as Docker.


Martin Pecka

unread,
Aug 24, 2020, 11:29:08 AM8/24/20
to VirtualGL User Discussion/Support
Yes, I'm working with containers, only it's Singularity (common on HPC clusters) rather than Docker, which makes some things more complicated (and others simpler).

So, to get back to the original question: if I don't run vglclient, are the following steps correct?

1) Get the headless 3D X server running on :0 (should be quite easy once the cluster admin agrees to do that)
2) Each user runs:
    xvfb-run -a -s "-screen 0 1920x1080x24" env VGL_DISPLAY=:0 vglrun /path/to/app arg1 arg2 ...

I tested this on my (non-headless) laptop and it seemed to work:

No window popped up, so apparently :0 wasn't used, and nvidia-smi showed the GPU being fully utilized. In glxgears I get around 700 FPS on my GeForce GTX 1050 via VGL, compared to 3000 FPS with direct on-screen rendering; that's probably okay, we'll see when I test the full-blown apps.
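A quick extra sanity check (assuming glxinfo from mesa-utils is installed, which is my assumption, not something from the thread) is to confirm that the renderer string reported inside the Xvfb session names the NVIDIA GPU rather than a software rasterizer:

```shell
# Run glxinfo under VirtualGL inside a throwaway Xvfb session and
# inspect the renderer; it should name the NVIDIA card, not llvmpipe.
xvfb-run -a -s "-screen 0 1920x1080x24" \
  env VGL_DISPLAY=:0 vglrun glxinfo | grep "OpenGL renderer"
```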

If this approach is correct for the use case, could I help turn it into a piece of documentation? We could also add the server-side setup steps Jason posted.

On Monday, August 24, 2020 at 16:57:49 UTC+2, DRC wrote:

DRC

unread,
Aug 24, 2020, 1:45:26 PM8/24/20
to virtual...@googlegroups.com

I don't understand how you plan to get the rendered pixels from the X proxy to the client machine.  Xvfb won't do that.  You would need an X proxy that has an image transport layer attached (TurboVNC, as an example, is essentially Xvfb with an attached VNC server.)  I also don't understand why you're still mentioning vglclient, since that has no relevance to the server-side configuration.  I can almost guarantee you that the technical specifics you listed below are not correct, but in order to correct them, I need a better understanding of what you are ultimately trying to accomplish.  Let's bring the discussion up a level and develop a mutual understanding of the proposed solution architecture before we get mired down in the specifics of command lines and environment variables and such.

I'm not sure, for instance, what you are expecting in terms of a window popping up.  That will never happen on the 3D X server, since VirtualGL is redirecting all OpenGL rendering into off-screen Pbuffers.  It would only happen on the 2D X server, but again, if you're trying to use Xvfb as a 2D X server, then I don't understand how you expect to get the pixels from that X server to the client machine without an image transport layer.  Also, unless you change the DISPLAY environment variable, the application will not display to the Xvfb instance anyhow.

Martin Pecka

unread,
Aug 24, 2020, 3:53:43 PM8/24/20
to VirtualGL User Discussion/Support
Ah, I'm sorry, it wasn't evident from my first post: I'm not interested in transferring the "X buffer" anywhere. The apps we need to run only do offscreen rendering, but they need accelerated OpenGL for that, and they can only get it through GLX (though EGL would be more appropriate). Does that make the issue clearer?

If I understand correctly, people who need to see what X draws to the on-screen buffer will use TurboVNC, while people who only need offscreen rendering are fine with Xvfb.

On Monday, August 24, 2020 at 19:45:26 UTC+2, DRC wrote:

DRC

unread,
Aug 24, 2020, 5:37:09 PM8/24/20
to virtual...@googlegroups.com

OK.  In that case, your general strategy will be:

- On the host, set up the headless 3D X server on Display :0 with each GPU attached to a different screen.

- Figure out how to share the 3D X server connection from the host to a container instance.  My first approach would be to try sharing the files related to the 3D X server's Unix domain socket.  Again, I have not personally experimented with this yet, so I do not yet know what issues might be encountered.

- To launch an application in a container instance:
  - Launch Xvfb

  - export DISPLAY=:{d}.0  # {d} = X display of Xvfb instance

  - export VGL_DISPLAY=:0.{s}  # {s} = screen number of desired GPU

  - vglrun {application}
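Collected into one launcher, the steps above might look like the following sketch (the display and screen numbers are arbitrary placeholders, and it assumes Xvfb and VirtualGL are already installed inside the container):

```shell
#!/bin/sh
# Sketch of the per-container launch sequence described above.
d=10    # display number for the Xvfb instance (placeholder)
s=1     # screen on :0 that maps to the desired GPU (placeholder)

# Start the 2D X server (purely in-memory, no image transport)
Xvfb ":${d}" -screen 0 1920x1080x24 &
xvfb_pid=$!

# Point the application at the Xvfb display and VirtualGL at the GPU screen
DISPLAY=":${d}.0" VGL_DISPLAY=":0.${s}" vglrun "$@"

# Tear down the Xvfb instance afterwards
kill "$xvfb_pid"
```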

Martin Pecka

unread,
Aug 26, 2020, 8:10:14 AM8/26/20
to VirtualGL User Discussion/Support
I can confirm that the suggested steps work on the headless cluster server with a 3D X server running.

After installing xvfb and virtualgl into the container, I can run apps like this:

VGL_DISPLAY=:0 singularity exec --nv /path/to/container/image.simg xvfb-run -a -s '-screen 0 1x1x24' vglrun opengl_app

And that's it, no more hassle. With Singularity, the key is to pass the `--nv` argument, which makes the host's NVIDIA drivers and libraries available inside the container. X server authentication doesn't need to be configured with Singularity, since the containerized app runs with the same privileges and UID as the regular user. I assume that for Docker there are enough tutorials about setting up X auth.
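For Docker, the rough counterpart of the Singularity command above might be something like this untested sketch (the image name is a placeholder, and xauth handling is deliberately left out, as discussed above):

```shell
# Hypothetical Docker equivalent of the Singularity invocation.
# --gpus all needs Docker >= 19.03 with the NVIDIA container toolkit;
# the bind mount shares the 3D X server's Unix socket with the container.
docker run --rm --gpus all \
  -v /tmp/.X11-unix:/tmp/.X11-unix \
  -e VGL_DISPLAY=:0 \
  my-opengl-image \
  xvfb-run -a -s '-screen 0 1x1x24' vglrun opengl_app
```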

On Monday, August 24, 2020 at 23:37:09 UTC+2, DRC wrote: