vglrun fails on a TurboVNC client to a headless server node


qwofford

Apr 4, 2018, 10:13:36 PM
to VirtualGL User Discussion/Support
Hello,

I am having trouble launching applications with vglrun from a TurboVNC session. The server OS is CentOS 7; the client OS is macOS. The VNC server is TigerVNC's Xvnc. I followed the server configuration instructions in Section 6.1 of the VirtualGL 2.5.2 documentation very carefully. I am using GDM. My vglserver_config answers were YES to restricting the 3D X server, YES to restricting the framebuffer device, and NO to disabling XTEST.

My use case involves tunneling port 5950 via SSH with X forwarding. The server launches Xvnc via xinetd/XDMCP. I'm sending xinetd logs to /var/log/messages at maximum verbosity.
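
For reference, the tunnel looks roughly like this (display :50 corresponds to TCP port 5950, i.e. 5900 + 50):

$ ssh -X -L 5950:localhost:5950 testuser@cn9999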

After xinetd launches Xvnc, the xinetd error logs show a peculiar line claiming a display has been started on screen :0. This confuses me because my TurboVNC client host is set to "localhost:50", and it connects just fine. The suspicious lines from my logs are below:

Apr  4 19:35:28 cn9999 Xvnc[1452]: vncext: VNC extension running!
Apr  4 19:35:28 cn9999 Xvnc[1452]: vncext: created VNC server for screen 0
Apr  4 19:35:28 cn9999 Xvnc[1452]: Connections: accepted: 127.0.0.1::45176

Ultimately, the TurboVNC connection is established and I'm able to work with the desktop. The DISPLAY variable appears to be set correctly by default:
$ echo $DISPLAY
127.0.0.1:50

I am then able to work in the TurboVNC GUI (MATE). Section 9.1 of the VirtualGL 2.5.2 documentation describes my situation precisely: VirtualGL is running on the same server as my Xvnc server. So I bring up a terminal and try:
$ vglrun glxgears -info
[VGL] ERROR: Could not open display :0.

I wonder if this behavior is related to the suspicious log lines above? I then tried setting the display manually:

$ vglrun -display :50 glxgears
Running synchronized to the vertical refresh.  The framerate should be
approximately the same as the monitor refresh rate.
[VGL] ERROR: Could not connect to VGL client.  Make sure that vglclient is
[VGL]    running and that either the DISPLAY or VGL_CLIENT environment
[VGL]    variable points to the machine on which vglclient is running.
[VGL] ERROR: in connect--
[VGL]    261: Connection refused

I can establish a connection using vglconnect, but then vglrun simply uses Mesa instead of my NVIDIA driver:

$ vglconnect -display :50 -e 'glxgears -info' localhost

VirtualGL Client 64-bit v2.5.2 (Build 20170302)
vglclient is already running on this X display and accepting unencrypted
connections on port 4242.

Pseudo-terminal will not be allocated because stdin is not a terminal.
testuser@localhost's password:
GL_RENDERER   = Gallium 0.4 on llvmpipe (LLVM 3.9, 256 bits)
GL_VERSION    = 2.1 Mesa 17.0.1
GL_VENDOR     = VMware, Inc.

I saw the guide at https://virtualgl.org/Documentation/HeadlessNV and followed its steps, shown below:

# lspci | grep NVIDIA
03:00.0 VGA compatible controller: NVIDIA Corporation GK106GL [Quadro K4000] (rev a1)

And

# cat /etc/X11/xorg.conf
# nvidia-xconfig: X configuration file generated by nvidia-xconfig
# nvidia-xconfig:  version 375.82  (buildmeister@swio-display-x86-rhel47-03)  Wed Jul 19 21:43:37 PDT 2017

Section "DRI"
    Mode 0660
    Group "vglusers"
EndSection

Section "ServerLayout"
    Identifier     "Layout0"
    Screen      0  "Screen0"
    InputDevice    "Keyboard0" "CoreKeyboard"
    InputDevice    "Mouse0" "CorePointer"
EndSection

Section "Files"
EndSection

Section "Module"
    Load           "dbe"
    Load           "extmod"
    Load           "type1"
    Load           "freetype"
    Load           "glx"
EndSection

Section "InputDevice"
    # generated from default
    Identifier     "Mouse0"
    Driver         "mouse"
    Option         "Protocol" "auto"
    Option         "Device" "/dev/input/mice"
    Option         "Emulate3Buttons" "no"
    Option         "ZAxisMapping" "4 5"
EndSection

Section "InputDevice"
    # generated from default
    Identifier     "Keyboard0"
    Driver         "keyboard"
EndSection

Section "Monitor"
    Identifier     "Monitor0"
    VendorName     "Unknown"
    ModelName      "Unknown"
    HorizSync       28.0 - 33.0
    VertRefresh     43.0 - 72.0
    Option         "DPMS"
EndSection

Section "Device"
    Identifier     "Device0"
    Driver         "nvidia"
    VendorName     "NVIDIA Corporation"
    BoardName      "Quadro K4000"
    BusID          "PCI:0:3:0"
EndSection

Section "Screen"
    Identifier     "Screen0"
    Device         "Device0"
    Monitor        "Monitor0"
    DefaultDepth    24
    Option         "UseDisplayDevice" "None"
    SubSection     "Display"
        Virtual     1920 1200
        Depth       24
    EndSubSection
EndSection


Can someone please point out steps I have missed in the documentation, or nuances about my configuration which prevent vglrun from utilizing my NVIDIA driver?

Thank you,
Quincy

DRC

Apr 5, 2018, 12:11:50 AM
to virtual...@googlegroups.com
On 4/4/18 9:13 PM, qwofford wrote:
> I am having trouble launching applications with vglrun from a TurboVNC
> session. The server OS is CentOS 7; the client OS is macOS. The VNC
> server is TigerVNC's Xvnc. I followed the server configuration
> instructions in Section 6.1 of the VirtualGL 2.5.2 documentation very
> carefully. I am using GDM. My vglserver_config answers were YES to
> restricting the 3D X server, YES to restricting the framebuffer device,
> and NO to disabling XTEST.

You said "TurboVNC session" but then indicated that you were running the
TigerVNC Server, which is a different product. I assume you mean that
you're using the TigerVNC Server with the TurboVNC Viewer, in which case
you're actually running a TigerVNC session (the VNC "session" is on the
server.) We generally claim that VirtualGL works with the TigerVNC
Server, but The VirtualGL Project hasn't been affiliated with TigerVNC
for over five years, so the only kind of support I can offer for it is
indirect. I directly support TurboVNC, which is my bread and butter.


> My use case involves tunneling port 5950 via SSH with X forwarding. The
> server launches Xvnc via xinetd/XDMCP. I'm sending xinetd logs to
> /var/log/messages at maximum verbosity.

I don't understand why you would want to do that. That seemingly
defeats the purpose of having a virtual X server, i.e. Xvnc. The
purpose of such a solution is precisely to avoid sending X11 traffic
over the network, which is what you're doing when you use SSH with X11
forwarding.


> After xinetd launches Xvnc, the xinetd error logs show a peculiar line
> claiming a display has been started on screen :0. This confuses me
> because my TurboVNC client host is set to "localhost:50", and it
> connects just fine. The suspicious lines from my logs are below:
>
> Apr  4 19:35:28 cn9999 Xvnc[1452]: vncext: VNC extension running!
> Apr  4 19:35:28 cn9999 Xvnc[1452]: vncext: created VNC server for screen 0
> Apr  4 19:35:28 cn9999 Xvnc[1452]: Connections: accepted: 127.0.0.1::45176

That really looks as if the TigerVNC Server is loading the TigerVNC
X.org module. The TigerVNC X.org module is intended only for
single-user remote access to the root display. It's an orthogonal
solution to Xvnc and is not intended for use with VirtualGL.


> I am then able to work in the TurboVNC GUI (MATE). Section 9.1 of the
> VirtualGL 2.5.2 documentation describes my situation precisely:
> VirtualGL is running on the same server as my Xvnc server. So I bring up
> a terminal and try:
>
> $ vglrun glxgears -info
> [VGL] ERROR: Could not open display :0.
>
> I wonder if this behavior is related to the suspicious log lines above?
> I then tried setting the display manually:

Did you try the "Sanity Check" procedure described here?

https://cdn.rawgit.com/VirtualGL/virtualgl/master/doc/index.html#hd006002
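
Paraphrasing the relevant steps (see the link for the authoritative
version); as a user in the vglusers group, on the server:

$ xauth merge /etc/opt/VirtualGL/vgl_xauth_key
$ xdpyinfo -display :0
$ /opt/VirtualGL/bin/glxinfo -display :0 -c

xdpyinfo should connect without errors, and glxinfo should list GLX
visuals from the hardware driver.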


> $ vglrun -display :50 glxgears
> Running synchronized to the vertical refresh.  The framerate should be
> approximately the same as the monitor refresh rate.
> [VGL] ERROR: Could not connect to VGL client.  Make sure that vglclient is
> [VGL]    running and that either the DISPLAY or VGL_CLIENT environment
> [VGL]    variable points to the machine on which vglclient is running.
> [VGL] ERROR: in connect--
> [VGL]    261: Connection refused

Not a valid configuration. -display is used to specify the 3D X server,
whereas :50 is the 2D X server in your case. You definitely want
VGL_DISPLAY/'vglrun -display' to be pointing to :0, which is the
default. The problem is that something isn't configured properly
vis-a-vis allowing access to :0.


> I can establish a connection using vglconnect, but then vglrun simply
> uses Mesa instead of my NVIDIA driver:
>
> $ vglconnect -display :50 -e 'glxgears -info' localhost

Also not a valid configuration. vglconnect is used with the VGL
Transport, which is only used in conjunction with remote X. It is not
used with an X proxy such as Xvnc.


> I saw the guide at https://virtualgl.org/Documentation/HeadlessNV and
> followed its steps, shown below:
>
>
> # lspci | grep NVIDIA
> 03:00.0 VGA compatible controller: NVIDIA Corporation GK106GL
> [Quadro K4000] (rev a1)

A Quadro K4000 is not headless, and therefore that how-to is not
necessary for your hardware. As the User's Guide describes, the basic
strategy is to get accelerated OpenGL working on the 3D X server without
VirtualGL, then add VirtualGL to the mix. I assume that you can log
into Display :0 locally and run 3D applications on the server?

If the sanity check procedure fails, then possible causes are:

- Incorrect permissions. Did you add your user account to the vglusers
group and log out/back in to activate the new group membership? Note
also that you'll probably need to restart the TigerVNC Server session
once you've logged back in, in order to pick up the new permissions.
(See the sketch after this list for a quick way to check this and the
next cause.)

- GDM isn't executing /etc/gdm/Init/Default. This is a known bug with
the bleeding-edge GDM releases in Fedora, but I'm not aware that it has
made it into RHEL yet. To diagnose such things, I insert a line into
/etc/gdm/Init/Default that echoes something to a file under /tmp. Upon
restarting the 3D X server, I verify whether that file under /tmp has
been created. If not, then you might be encountering the bug in
question, in which case the only known remedy is to switch to LightDM.
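
A rough sketch of both checks (account name and marker file are
hypothetical; adjust to taste):

$ id -nG testuser | grep vglusers      # should include vglusers
$ sudo usermod -aG vglusers testuser   # if not, add it, then re-log-in

# In /etc/gdm/Init/Default, add a line like the following, then restart
# the 3D X server and check whether /tmp/gdm-init-test appears:
echo "Init/Default ran at $(date)" > /tmp/gdm-init-test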

If the sanity check procedure succeeds, then I have no idea. XDMCP is
not exactly a common configuration these days.

qwofford

Apr 5, 2018, 3:52:16 PM
to VirtualGL User Discussion/Support
I will skip X forwarding on my SSH connections; thanks for pointing that out.

I realized shortly after posting that I misspoke. Although I have TigerVNC installed, I am using xinetd to launch Xvnc, not tigervnc.

service xvnc_darwin
{
    disable      = no
    protocol     = tcp
    socket_type  = stream
    wait         = no
    user         = nobody
    server       = /usr/bin/Xvnc
    server_args  = -Log *:syslog:100 -inetd -query localhost -once -geometry 1024x768 -depth 24 securitytypes=none
}

Do you prefer using Xvnc for *nix systems in your own work, or something else? It is not important to me how remote X servers are managed; I just need multiple users to be able to create an SSH tunnel and point their TurboVNC Viewer at this port to receive a private remote desktop with VirtualGL capabilities. This xinetd/XDMCP configuration popped up in the RHEL 7 server documentation, which is where I got the idea. Very open to your thoughts.


Once I'm able to access the server node, I will plug in a monitor to assess whether I am able to run GL applications, such as glxinfo.

I was not able to run the sanity checks from your documentation, because I do not have a vgl_xauth_key file in /etc/opt/VirtualGL/. In fact, there are no files in this directory at all. Perhaps this is where I should have started our discussion!

I ran vglserver_config again to view its output, and noticed that part of the setup failed with 'cannot rmmod nvidia, module is currently in use by nvidia_modeset'. I stopped the gdm service again, used rmmod to remove all the nvidia dependencies, and finally removed the nvidia module itself before re-running vglserver_config. I am still unable to use vglrun from the remote desktop, but perhaps the file I am missing in /etc/opt/VirtualGL has something to do with that?
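
For the record, the sequence I used was roughly this (module names taken
from lsmod on my system):

$ sudo systemctl stop gdm
$ sudo rmmod nvidia_drm nvidia_modeset nvidia_uvm nvidia   # dependencies first
$ sudo /opt/VirtualGL/bin/vglserver_config
$ sudo systemctl start gdm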

qwofford

Apr 5, 2018, 4:06:00 PM
to VirtualGL User Discussion/Support
I forgot to address your comment about /etc/gdm/Init/Default.

I added a line to echo some text into a file in /tmp/. It did not appear when I restarted gdm. Since this server is not in production, I had the opportunity to reboot it to be sure. Still no test file. It appears that /etc/gdm/Init/Default is not being run, as you suspected.

I will switch to LightDM (which I prefer in any case).

DRC

Apr 5, 2018, 6:31:17 PM
to virtual...@googlegroups.com
UGH. I'm really sorry to hear that. I guess I'll need to update my
wiki article (https://virtualgl.org/Documentation/RHEL6) to reflect the
fact that this bug has now migrated into RHEL. The bug report, BTW, is
here: https://bugzilla.redhat.com/show_bug.cgi?id=851769. If anyone
reading this is a paying Red Hat customer, please put pressure on them
to fix this.

DRC

Apr 5, 2018, 7:18:17 PM
to virtual...@googlegroups.com
On 4/5/18 2:52 PM, qwofford wrote:
> I realized shortly after posting that I misspoke. Although I have
> TigerVNC installed, I am using xinetd to launch Xvnc, /not/ tigervnc.
>
> service xvnc_darwin
> {
>     disable      = no
>     protocol     = tcp
>     socket_type  = stream
>     wait         = no
>     user         = nobody
>     server       = /usr/bin/Xvnc
>     server_args  = -Log *:syslog:100 -inetd -query localhost -once
>                    -geometry 1024x768 -depth 24 securitytypes=none
> }
>
> Do you prefer using Xvnc for *nix systems in your own work, or something
> else? It is not important to me how remote X servers are managed; I just
> need multiple users to be able to create an SSH tunnel and point their
> TurboVNC Viewer at this port to receive a private remote desktop with
> VirtualGL capabilities. This xinetd/XDMCP configuration popped up in the
> RHEL 7 server documentation, which is where I got the idea. Very open to
> your thoughts.

TurboVNC is recommended and supported by this project. If you have
TigerVNC installed, then /usr/bin/Xvnc is probably the TigerVNC Server,
so I think your initial assertion was correct. XDMCP (which provides a
login dialog within the VNC session) is old and very insecure, and it
isn't necessary, because Xvnc sessions are inherently per-user. When
you start an Xvnc session (regardless of whether you're using TurboVNC
or TigerVNC or RealVNC or whatnot), it runs under the credentials of the
user account that started it, and it chooses a unique port so as not to
conflict with other Xvnc sessions on the same machine. TurboVNC and
TigerVNC both have the ability to authenticate remotely, using either
their own built-in encryption and authentication layer or using an SSH
tunnel for both encryption and authentication. The general procedure
for TurboVNC would be (assuming the TurboVNC Server is installed on the
server machine-- note that you can install TurboVNC concurrently with
TigerVNC, since our installation resides in /opt/TurboVNC):

- Log into the server using SSH
- Start a TurboVNC Server session by running /opt/TurboVNC/bin/vncserver
in the SSH session. The vncserver script will report back the display
number that Xvnc is listening on. NOTE: It will ask you to choose a VNC
password. Choose something that is not the same as your Unix password.
- Start the TurboVNC Viewer on your client machine.
- Enter the hostname:display that Xvnc is listening on, and click "Connect".
- Enter the VNC password when prompted.
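
In terminal form (host name hypothetical), the basic procedure is just:

client$ ssh testuser@cn9999
server$ /opt/TurboVNC/bin/vncserver   # note the display number it reports, e.g. :1

Then point the TurboVNC Viewer at cn9999:1 and enter the VNC password
when prompted.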

This is the simplest way to get up and running. More advanced
configurations include:

- Using Unix login credentials. Generally you would do this by:
* (as root) Copying /etc/pam.d/passwd to /etc/pam.d/turbovnc on the server
* Passing '-securitytypes TLSPlain,X509Plain' to
/opt/TurboVNC/bin/vncserver.
This ensures that the Unix password won't be passed over the network
without encryption, and it disables other forms of authentication. You
can optionally add the OTP security types (TLSOtp,X509Otp,OTP) to this
list if you want to use either Unix login credentials or a one-time
password. That would be useful, for instance, if you wanted others to
be able to temporarily connect to your session for collaboration
purposes. You can also modify /etc/turbovncserver-security.conf to
enforce this type of authentication for all TurboVNC sessions started on
the machine.
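
Concretely, using the commands described above:

server# cp /etc/pam.d/passwd /etc/pam.d/turbovnc    # as root
server$ /opt/TurboVNC/bin/vncserver -securitytypes TLSPlain,X509Plain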

- Using one-time passwords. Generally you would do this by passing
'-otp' to /opt/TurboVNC/bin/vncserver. You can then enter the token it
prints to the console instead of a VNC password, when prompted by the
TurboVNC viewer. Since the token is discarded after one use, it is safe
to use it without encryption. Run '/opt/TurboVNC/bin/vncpasswd -o' on
the server to generate a new OTP.
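
For example:

$ /opt/TurboVNC/bin/vncserver -otp   # prints a one-time password to the console
$ /opt/TurboVNC/bin/vncpasswd -o     # generate a fresh OTP for the running session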

- Using time-based one-time passwords/Google Authenticator. Refer to
https://turbovnc.org/Documentation/TOTP for instructions.

- Using SSH. Generally you would do this by:
* modifying /etc/turbovncserver-security.conf and setting

no-reverse-connections
no-remote-connections
permitted-security-types = VNC, OTP

NOTE: this disables TurboVNC's built-in encryption, since you're
using SSH, but it continues to enable simple authentication. This is to
prevent others from launching a VNC viewer within their own VNC sessions
on the same machine and thus connecting to your session. If that isn't
a concern, then you can set 'permitted-security-types = None'.

* In the TurboVNC Viewer, prior to connecting, set the "SSH user" and
"Host" fields under the "Security" tab in the Options dialog (must use
the Java version of the viewer if you're running a Windows client), then
check "Use VNC server as gateway" and click "OK". When you connect, the
viewer will now prompt you for the SSH password as well as the VNC password.
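
Putting the SSH variant together (display :1/port 5901 are hypothetical;
VNC display :n listens on TCP port 5900+n):

# /etc/turbovncserver-security.conf on the server:
no-reverse-connections
no-remote-connections
permitted-security-types = VNC, OTP

# On the client, if you tunnel manually instead of using the viewer's
# built-in SSH support, then point the viewer at localhost:1:
$ ssh -L 5901:localhost:5901 testuser@cn9999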

Documentation is here:
https://turbovnc.org/Documentation/Documentation

Also, please address any support questions that are specific to TurboVNC
to one of these Google Groups:
https://turbovnc.org/About/MailingLists


> Once I'm able to access the server node, I will plug in a monitor to
> assess whether I am able to run GL applications, such as glxinfo.
>
> I was not able to run the sanity checks from your documentation, because
> I do not have a vgl_xauth_key file in /etc/opt/VirtualGL/. In fact,
> there are no files in this directory at all. Perhaps this is where I
> should have started our discussion!
>
> I ran vglserver_config again to view its output, and noticed that part
> of the setup failed with 'cannot rmmod nvidia, module is currently in
> use by nvidia_modeset'. I stopped the gdm service again, used rmmod to
> remove all the nvidia dependencies, and finally removed the nvidia
> module itself before re-running vglserver_config. I am still unable to
> use vglrun from the remote desktop, but perhaps the file I am missing in
> /etc/opt/VirtualGL has something to do with that?

Yep. GDM bug. <sigh> I don't know why that silly bug hasn't been
fixed. It has existed for years now.

I am seeking funding for a project that will allow VirtualGL to access
the GPU without going through an X server-- using nVidia's EGL
implementation (https://github.com/VirtualGL/virtualgl/issues/10).
However, that's likely to be a very disruptive project, requiring a lot
of time and money to implement. So far, no one has stepped forward to
sponsor it financially.

qwofford

Apr 5, 2018, 9:26:51 PM
to VirtualGL User Discussion/Support
I've ditched the TigerVNC server. I'm now launching sessions from /opt/TurboVNC/bin/vncserver, and using SSH tunneling for the simplest possible solution as you suggest. I will absolutely pursue one of the more advanced security configurations in the near future, so thanks for that.

I am having problems with the way /opt/TurboVNC/bin/Xvnc is being launched by /opt/TurboVNC/bin/vncserver, however (which I will refer to as Xvnc and vncserver from here forward).

My /etc/lightdm/lightdm.conf:

[LightDM]
start-default-seat=true

[Seat:*]
greeter-session=lightdm-gtk-greeter

[VNCServer]
enabled=true
command=/opt/TurboVNC/bin/vncserver
port=5900
width=1024
height=768
depth=24

[SeatDefaults]
display-setup-script=/opt/VirtualGL/bin/vglgenkey

When I restart the lightdm service, I get a message stating there may be a problem with the vgl_xauth_key:
Apr 05 18:59:45 cn9999 systemd[1]: Starting Light Display Manager...
Apr 05 18:59:45 cn9999 lightdm[9531]: Failed to create IPv6 VNC socket: Error binding to address: Address already in use
Apr 05 18:59:45 cn9999 systemd[1]: Started Light Display Manager.
Apr 05 18:59:46 cn9999 lightdm[9531]: xauth:  file /etc/opt/VirtualGL/vgl_xauth_key does not exist
Apr 05 18:59:46 cn9999 lightdm[9531]: xauth: (argv):1:  couldn't query Security extension on display ":0"
Apr 05 18:59:46 cn9999 lightdm[9531]: xauth:  file /etc/opt/VirtualGL/vgl_xauth_key does not exist

Despite this message, the key is generated (and regenerated) any time I start the lightdm service.

Despite these apparent problems, I am able to start a vncserver and connect to it using the TurboVNC Viewer. The trouble is that, in my remote session, I am presented with only a Firefox browser opened to CentOS documentation. I assumed this was because my .vnc/xstartup file needed to be edited, so I added a line to start a mate-session. This did not change the behavior. In fact, I can stop lightdm entirely, and my Xvnc session window with the Firefox browser remains intact. Is it possible that I'm running an X session without a display manager at all?

I have configured TurboVNC with VirtualGL and TigerVNC server for previous use cases, and the performance was excellent. I do feel that the remote desktop component is clunky and unnecessary, however. If you were to eliminate the X server from VirtualGL, what would be the ideal application for that tool? Would you simply launch a remote app with a command like 'vglrun username@server -app '/path/to/OpenGL/app'?

DRC

Apr 6, 2018, 1:33:25 AM
to virtual...@googlegroups.com
On 4/5/18 8:26 PM, qwofford wrote:
> I am having problems with the way /opt/TurboVNC/bin/Xvnc is being
> launched by /opt/TurboVNC/bin/vncserver, however (which I will refer to
> as Xvnc and vncserver from here forward).
>
> My /etc/lightdm/lightdm.conf:

TurboVNC is not designed to be started from the DM, and in fact, it's a
bad idea to do so, because the DM usually runs under a special account
with limited privileges. Let me back up and explain that Xvnc sessions
are virtual. They create a virtual X server that is completely
decoupled from the "root" X server and the server machine's graphics
hardware. That's why VirtualGL is necessary-- it directs the OpenGL
commands from a 3D application to the "root" X server (AKA "3D X
server"), which has GPU hardware attached, and it rewrites the commands
such that OpenGL rendering occurs in an off-screen Pbuffer instead of an
X window. VirtualGL reads back the rendered 3D images in real time
(generally when it detects an end-of-frame trigger command, such as
glXSwapBuffers(), being issued by the application) and displays the 3D
images to the "2D X server" (a TurboVNC session, in your case) using
regular 2D X11 drawing commands. This allows multiple virtual X servers
(TurboVNC sessions) to co-exist simultaneously on the same server
machine, and with VirtualGL, those virtual X servers can share the GPU
hardware to get 3D acceleration. Without VirtualGL, the only way to run
a 3D application with hardware acceleration would be using only the 3D X
server, which would require you to be sitting in front of the server
machine, or it would require the use of a "screen scraper" to send the
pixels from the 3D X server over the network. Such is inherently a
single-user solution and is not what VirtualGL and TurboVNC
fundamentally do. Screen scrapers tend to be slow, they exhibit tearing
artifacts, and they tend not to work well (or at all) with hardware 3D
acceleration (another reason why VirtualGL exists.)

Just log in with a normal SSH terminal and run
/opt/TurboVNC/bin/vncserver to start a server session (or you can just
execute 'ssh {host} /opt/TurboVNC/bin/vncserver' on the client.) That
will ensure that the TurboVNC Server session runs under your user
account, which is what you want. You don't want to run it as root or as
the lightdm user.
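
Once you connect to that session with the viewer, VirtualGL comes into
play inside it; VGL_DISPLAY defaults to :0 (the 3D X server), so
normally no extra options are needed:

$ vglrun glxgears -info   # run in a terminal inside the TurboVNC session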


> Despite these apparent problems, I am able to start a vncserver and
> connect to it using the TurboVNC Viewer. The trouble is that, in my
> remote session, I am presented with only a Firefox browser opened to
> CentOS documentation. I assumed this was because my .vnc/xstartup file
> needed to be edited, so I added a line to start a mate-session. This did
> not change the behavior. In fact, I can stop lightdm entirely, and my
> Xvnc session window with the Firefox browser remains intact. Is it
> possible that I'm running an X session without a display manager at all?

TurboVNC uses ~/.vnc/xstartup.turbovnc, but you don't even have to edit
it. You can just 'export TVNC_WM=mate-session' prior to starting the
TurboVNC Server, and it will load MATE as the window manager. The
default GNOME 3 window manager in RHEL 7 isn't very VNC-friendly,
because it's a compositing window manager and generally requires
hardware-accelerated OpenGL in order to achieve any kind of decent
performance. You can run the window manager in VirtualGL by passing
-3dwm to vncserver, but it's really much better to just use MATE or
another non-compositing window manager.
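
For example:

$ export TVNC_WM=mate-session   # select MATE for new TurboVNC sessions
$ /opt/TurboVNC/bin/vncserver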


> I have configured TurboVNC with VirtualGL and TigerVNC server for
> previous use cases, and the performance was excellent. I do feel that
> the remote desktop component is clunky and unnecessary, however. If you
> were to eliminate the X server from VirtualGL, what would be the ideal
> application for that tool? Would you simply launch a remote app with a
> command like 'vglrun username@server -app '/path/to/OpenGL/app'?

You can't run a Unix OpenGL application without an X server of some
sort. TurboVNC is an "X proxy", so it creates virtual X servers on a
per-user basis (refer to diagrams in the VirtualGL and TurboVNC User's
Guides.) There are any number of other X proxies that can also be used
to accomplish the same task, although I assert that TurboVNC is faster
with 3D application workloads than most if not all of them (except
TigerVNC, but that's only because I added the TurboVNC encoding methods
and optimizations to TigerVNC "back in the day.") Some examples of
other X proxies are: TigerVNC (which you know) and ThinLinc (commercial
product based on TigerVNC), RealVNC (available in an ancient, slow, open
source flavor and a new, faster, closed-source variety), FreeNX,
NoMachine Enterprise (closed-source commercial product-- successor to
NX), xpra, Exceed Freedom (closed-source commercial product), and Oracle
Secure Global Desktop (closed-source commercial product.) There are
others. VirtualGL should work with any of the above, or with any other
2D X server or X proxy.

When used with the VGL Transport (vglclient/vglconnect), the 2D X server
is running on the client machine, so in that configuration, VirtualGL is
being used to prevent the OpenGL/GLX commands and data from traveling
over the network, but the rest of the X11 commands are still sent to the
client machine to be rendered (remote X.) VirtualGL is a bolt-on
technology for remote X in that case, and some people prefer that
configuration when running on a high-speed network with a Unix client,
precisely because it eliminates the need for a remote desktop. Most,
however, just use TurboVNC, because the remote X approach is clunky with
Windows clients.

With the X proxy configuration, both OpenGL/GLX commands and X11
commands are rendered on the server, and only images are sent over the
network (refer to https://virtualgl.org/About/Background.) There is no
other way, of which I'm aware, to solve this problem. As I mentioned,
with some difficulty VirtualGL might be modified such that it doesn't
require a 3D X server, but the 2D X server will always be required (that
is, until Wayland becomes ubiquitous enough that most applications and
GUI frameworks start using it instead of X11.)

qwofford

Apr 10, 2018, 12:31:26 AM
to VirtualGL User Discussion/Support
Thanks for linking me to the background reading; that was very educational. I see that I can use the VGL Transport to call my GL applications without a remote desktop, but this will introduce latency between the client-side 2D X server and the remote 3D X server. Perhaps a startup script on the client which transparently launches a remote desktop to the TurboVNC server is a better solution, because the 3D and 2D X servers have access to the same shared memory for PBuffer swaps. Did I understand the advantage of the X proxy correctly? Collaborative visualization is not a concern for me, but the X proxy seems like a better solution in any case.

Regarding the modifications to VirtualGL which would obviate the 3D X server; in the background reading you mention:

... the application must still use indirect OpenGL rendering to send 3D commands and data to the X proxy. It is, of course, much faster to use indirect rendering over a local socket rather than a remote socket, but there is still some overhead involved.

Is the 3D-X-server-to-2D-X-server chatter the bottleneck, is the gigabit network the bottleneck, or are they cumulative bottlenecks?

You mentioned a few open and closed source solutions like TurboVNC, and I noticed you did not mention NVIDIA's own remote visualization solutions, GeForce Experience and the Moonlight client. Remote visualization + GL capability appears to be an area where NVIDIA should be leading, but it seems they are not...am I wrong? I do not work for NVIDIA so speak freely, ha!

DRC

Apr 25, 2018, 9:58:58 PM
to virtual...@googlegroups.com
On 4/9/18 11:31 PM, qwofford wrote:
> Thanks for linking me to the background reading, that was very
> educational. I see that I can use VirtualGL Transport to call my GL
> applications without a remote desktop, but this will introduce latency
> between the client-side 2D X server, and the remote 3D X server. Perhaps
> a startup script on the client which transparently launches a remote
> desktop to the TurboVNC server is a better solution, because the 3D and
> 2D X servers have access to the same shared memory for PBuffer swaps.
> Did I understand the advantage of the X proxy correctly? Collaborative
> visualization is not a concern for me, but the X proxy seems like a
> better solution in any case.

Here's one:
https://gist.github.com/dcommander/9137ce58a92952b23fe4def73bb8c678

In general, however, this is just a proof of concept that only works for
Linux/Unix clients. In the long term, I would really like to integrate
SSH more tightly with the TurboVNC Viewer, so as to enable automatically
launching TurboVNC Server sessions and managing them. I may also opt to
web-enable that functionality, using a built-in TurboVNC web portal that
allows you to log in and manage sessions, but both tasks will take a
significant amount of time that I don't have at the moment. As soon as
TurboVNC 2.2 beta is released (soon), I'll be able to look at blue sky
projects like that again.


> Regarding the modifications to VirtualGL which would obviate the 3D X
> server; in the background reading you mention:
>
> ... the application must still use indirect OpenGL rendering to send
> 3D commands and data to the X proxy. It is, of course, much faster
> to use indirect rendering over a local socket rather than a remote
> socket, but there is still some overhead involved.
>
>
> Is the 3D-X-server-to-2D-X-server chatter the bottleneck, is the gigabit
> network the bottleneck, or are they cumulative bottlenecks?

You're conflating two issues. The modifications to obviate the 3D X
server (we informally refer to that feature as "X-server-less GPU
access" or a "GLX-to-EGL interposer") would mainly eliminate the need to
run vglserver_config, as well as the need to run a 3D X server on the
server machine and leave it sitting at the login prompt for no other
purpose than GPU access. Doing that will make VirtualGL easier to
install and maintain, from a sysadmin's point of view. It won't affect
performance or usability.

The background article is referring to a particular type of X proxy that
implements hardware-accelerated OpenGL via indirect rendering (AKA
"out-of-process", as opposed to VirtualGL, which is "in-process.")
Historically, ThinAnywhere 3D (proprietary, closed-source) was such a
solution, although I don't know of any others that exist at the moment.
Basically that paragraph is explaining why I chose the approach I did
with VirtualGL-- that is, preloading VirtualGL into the application so
that the application can continue to use direct rendering despite
running in a remote display environment. There are disadvantages to
VirtualGL's approach-- namely that preloading libraries can cause
certain "creatively-engineered" applications to break (<cough> Steam
<cough>.) However, the only other known approach (indirect,
out-of-process OpenGL rendering) would sacrifice performance or
compatibility or both.

Basically, indirect rendering can be bad because:

1. In a remote X environment, indirect OpenGL rendering means that all
of the OpenGL commands and data have to be sent over the network. Bear
in mind that VirtualGL was originally designed as a bolt-on technology
for remote X11. It originated in the oil & gas industry, an industry in
which some of the OpenGL applications-- particularly seismic
visualization applications-- had to render many megabytes (sometimes
even hundreds of megabytes) of unique geometry or texture data with each
frame. Without VirtualGL, that data would have had to traverse the
network every time a frame was rendered, which was not a tenable
proposition. Also bear in mind that, when VirtualGL was first
prototyped 15 years ago, high-speed X proxies simply didn't exist.
TurboVNC was the first of its kind, but it remained in the prototype
stage from its introduction in late 2004 until Sun Microsystems released
it as a product in 2006. For quite a few years, Exceed and Exceed 3D
were the standard methods of remote application access for Linux/Unix
technical 3D applications, so the first releases of VirtualGL were
designed mostly to work in conjunction with Exceed or with Linux
clients. Also bear in mind that gigabit networking was less prevalent
back then (VirtualGL was designed from the beginning to use high-speed
JPEG, mainly so it could stream 3D images in real time over 100-megabit
connections.) Also bear in mind that many technical applications back
then used "immediate mode" rendering, which meant that they were
transmitting all of the geometry to the GPU with every single frame
(more impetus to avoid indirect rendering.) The scale of the indirect
rendering problem has improved because of advances in network technology
(gigabit is more common than not these days), improvements to the OpenGL
API (applications can now use vertex arrays to avoid immediate mode
while also getting around the limitations of display lists), etc.
However, applications are also rendering larger and larger amounts of
data, so the indirect rendering problem definitely still exists.

2. Even in a local X server/X proxy environment, indirect rendering
overhead can become apparent when you try to render particularly large
geometries (millions of polys), even if you're using display lists or
vertex arrays. If you're using immediate-mode rendering, then the
overhead becomes *really* apparent, and it becomes apparent with much
smaller geometries. Even with VirtualGL, this can become a factor in
performance if you're using VirtualGL to run VirtualBox or VMWare,
because the VirtualBox Guest Additions and VMWare Tools add a layer of
indirect OpenGL rendering between VirtualGL and the 3D application
(necessary in order to communicate the OpenGL commands from the guest
O/S to the host O/S.) Even on a local machine, it takes a finite amount
of time for the OpenGL stack to transmit the commands and data through
inter-process communication mechanisms. It's the difference between
sharing a memory buffer and copying it. If you're rendering a million
polygons, then a 100 ns difference in per-polygon overhead will add 100
ms per frame, which can amount to a significant difference in frame rate.

3. Indirect OpenGL rendering introduces compatibility problems. Some
OpenGL features just don't work properly in that environment, and if
there is a mismatch between the OpenGL implementation in the OpenGL
"server" and the OpenGL implementation loaded by the application (the
OpenGL "client"), then the application may not work properly or at all.
It's frequently the case that indirect OpenGL implementations only
support OpenGL 2 and not later versions of the API. Yet another reason
why VirtualGL exists. As much as possible, VirtualGL tries to get out
of the way of OpenGL functions and pass those through unmodified to the
3D X server.

It's also worth pointing out that, while solving the performance
problems of indirect rendering was once the "mother of invention" with
regard to VirtualGL, increasingly the big selling feature of VirtualGL
in more recent years has been the ability to share high-end GPUs among
multiple simultaneous users and to provide access to remote GPU
resources on demand. Santos was the poster child for this-- quite
literally (they won the Red Hat Innovator of the Year award in 2011 for
the solution they designed around VirtualGL and TurboVNC.) Rather than
purchase new workstations for their hundreds of geoscience application
users in 2010, they purchased eight beefy multi-pipe,
massively-multi-core servers, installed VirtualGL and TurboVNC on them,
and bought laptops for their users. They saved $2 million in upfront
equipment costs, relative to doing a workstation refresh, plus $1
million in yearly maintenance costs. Their users claimed that the
performance was actually better than their old workstations, and it was
certainly more flexible, since they could access their geoscience
applications from multiple locations. They-- and most other commercial
VirtualGL/TurboVNC shops-- use an immersive remote desktop approach,
whereby users run the TurboVNC Viewer on a Windows client and
full-screen it with keyboard grabbing enabled when they want to do
serious work with their Linux applications. When they want to check
Outlook or whatnot, they take the TurboVNC Viewer out of full-screen
mode so they can interact with their Windows desktop. With TurboVNC,
there is an option to use Alt-Enter to switch in and out of full-screen
mode, so it becomes a very intuitive thing to do.


> You mentioned a few open and closed source solutions like TurboVNC, and
> I noticed you did not mention NVIDIA's own remote visualization
> solutions, GeForce Experience and the Moonlight client. Remote
> visualization + GL capability appears to be an area where NVIDIA should
> be leading, but it seems they are not...am I wrong? I do not work for
> NVIDIA so speak freely, ha!

I didn't mention it because, AFAIK, it doesn't exist on Linux servers.
The Windows server/remote application space is a completely different
animal, and a very difficult (if not impossible) one to tame with open
source technologies. (Believe me, I've tried.) However, it's also
difficult for nVidia to compete in VirtualGL's space for the same
reasons-- licensing incompatibility between proprietary code and
GPL/LGPL code. Also, the fact of the matter is-- VirtualGL was there
first, it works for most applications, my labor is a lot cheaper than
paying a big company for a support contract, and I'll work for anyone,
so there is no vendor lock-in.

The best we can do for Windows 3D applications at the moment is to run
them in VirtualBox and run VirtualBox in VirtualGL and run VirtualGL in
TurboVNC. That works, but there is a lot of overhead to it (although
not really any more overhead than Microsoft's RemoteFX, which also seems
to require a virtual machine in order to handle hardware-accelerated
OpenGL.)

DRC

Dec 23, 2018, 11:12:50 AM
to VirtualGL User Discussion/Support
On Wednesday, April 25, 2018 at 8:58:58 PM UTC-5, DRC wrote:
> Here's one:
> https://gist.github.com/dcommander/9137ce58a92952b23fe4def73bb8c678
>
> In general, however, this is just a proof of concept that only works for
> Linux/Unix clients.  In the long term, I would really like to integrate
> SSH more tightly with the TurboVNC Viewer, so as to enable automatically
> launching TurboVNC Server sessions and managing them.  I may also opt to
> web-enable that functionality, using a built-in TurboVNC web portal that
> allows you to log in and manage sessions, but both tasks will take a
> significant amount of time that I don't have at the moment.  As soon as
> TurboVNC 2.2 beta is released (soon), I'll be able to look at blue sky
> projects like that again.

The proposed TurboVNC Session Manager (which uses SSH to remotely start/kill/generate OTPs for/connect to multiple TurboVNC sessions running under your account on the TurboVNC host) was funded by a startup and is now available in the TurboVNC 3.0/dev pre-release build (https://turbovnc.org/DeveloperInfo/PreReleases).  I welcome feedback on the feature.
