Bad configuration (I'm inexperienced)

Ivi

May 18, 2019, 12:23:44 PM
to VirtualGL User Discussion/Support
Hello everyone.
I installed VirtualGL on Debian by downloading and installing the .deb package from SourceForge, then configured the server and added my users to the virtualgl group.
But when I try to run any 3D software through vglrun, I get this error:

[VGL] ERROR: VirtualGL attempted to load the real
[VGL]   glXChooseFBConfig function and got the fake one instead.
[VGL]   Something is terribly wrong.  Aborting before chaos ensues.

What am I missing? Please help.

Note: I'm using libgl1-mesa-swx11 as the GLX provider, but I got 300 fps with glxgears (just once) and I can't figure out how I managed that.

DRC

May 18, 2019, 1:23:18 PM
to virtual...@googlegroups.com
The whole idea behind VirtualGL is to provide remote access to a GPU that supports hardware-accelerated OpenGL. Unless your server has such a GPU and drivers installed for it, there is no point to using VirtualGL. VirtualGL is really designed to be used with either an X proxy (such as ours-- TurboVNC-- or others such as TigerVNC, Xpra, FreeNX, etc.) or a client-side X server (which could be the native X server on a Linux client machine or Xquartz on macOS or Cygwin/X on Windows.)

X proxies are virtual X servers that use the CPU to render X windows commands into a framebuffer in main memory, so X proxies can't inherently support GPU acceleration. When using a client-side X server, the normal way of rendering OpenGL would be via indirect rendering, which involves sending all of the OpenGL commands and data over the network to be rendered on the client machine. Indirect rendering has severe limitations in terms of performance and compatibility.

Thus, VirtualGL intercepts and redirects OpenGL commands away from the X proxy or client-side X server (we call this the "2D X server") and onto the server-side X server (we call this the "3D X server".) VGL also reads back the OpenGL-rendered images from the 3D X server at appropriate times (such as when the 3D application swaps the OpenGL rendering buffers) and transports those images to the 2D X server, either directly in the case of X proxies (by simply drawing the images using XShmPutImage()) or indirectly in the case of client-side X servers (by compressing the images using libjpeg-turbo and sending them over the network to the VirtualGL Client application, which decompresses the images and draws them using XShmPutImage().)
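
To make that concrete, the two modes typically look something like this (paths assume the default installation locations, and "myuser"/"myserver" are just placeholders):

# Inside a TurboVNC (or other X proxy) session on the server:
vglrun glxgears

# Or, from a Linux client machine, open a connection with the VGL Transport
# (requires the VirtualGL Client on the client), then run the application
# in the resulting SSH session on the server:
/opt/VirtualGL/bin/vglconnect myuser@myserver
vglrun glxgears

Either way, vglrun is what loads VirtualGL into the application; there is no separate server process to start.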

libgl1-mesa-swx11 is a software-only implementation of Mesa that renders
OpenGL using plain X11 drawing commands. It doesn't ever touch a GPU,
even if you have one, and that implementation of Mesa is incompatible
with VirtualGL. You need to use libgl1-mesa-glx instead, with a GPU and
appropriate drivers for that GPU.
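
If you're not sure which implementation is currently active, something like the following (assuming glxinfo from the mesa-utils package is installed) will tell you:

dpkg -l | grep libgl1-mesa
glxinfo | grep -E "OpenGL vendor|OpenGL renderer"

With swx11 you'll see a software renderer string; with a working GPU driver you should see the GPU vendor/renderer instead.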

Note also that GLXgears is a very poor OpenGL benchmark. It has such a
low polygon count (about 300) that, even with GPU acceleration, the
rendering performance is almost entirely CPU-bound (i.e. the time it
takes the GPU to render 300 polys is so negligible that it doesn't
contribute significantly to the overall execution time of the program.)
From VirtualGL's point of view, the other problems with GLXgears are (a)
it uses flat shading and (b) its default window size is very small.
That means that the images generated by GLXgears are very easy to
compress, so they don't really challenge VirtualGL at all. They also
don't challenge the GPU's rasterizer. VirtualGL provides a better
benchmark, GLXspheres, in /opt/VirtualGL/bin. On modern GPUs, I
recommend running GLXspheres with at least 1 million, and preferably 5
million, polygons:

vglrun /opt/VirtualGL/bin/glxspheres64 -p 1000000

or

vglrun /opt/VirtualGL/bin/glxspheres64 -p 5000000 -n 100

That should really highlight the difference between software
(unaccelerated) and GPU-accelerated OpenGL. To put it another way, 300
fps with GLXgears is not that impressive. My machine w/ an nVidia
Quadro can do about 5000 fps with GLXgears, but again, that's not really
meaningful, because the frame rate of GLXgears is mostly CPU-dependent.
Also, human vision can't distinguish 300 fps from 5000 fps, so it's not
really meaningful from a usability perspective either.
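
If you want to quantify the difference on your own system, run the same benchmark with and without VirtualGL and compare the reported frame rates, e.g.:

/opt/VirtualGL/bin/glxspheres64 -p 1000000
vglrun /opt/VirtualGL/bin/glxspheres64 -p 1000000

(The first command uses whatever unaccelerated OpenGL implementation the 2D X server provides; the second redirects the rendering to the GPU through VirtualGL.)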

Ivi

May 18, 2019, 1:41:13 PM
to VirtualGL User Discussion/Support
Yes, I was aware of the software-only nature of swx11. If I use libgl1-mesa-glx, running any graphical app returns a "can't create double buffered visual" error.

Note that I'm in a chroot container on Android, on an Intel architecture with PowerVR graphics, so I need to use OpenGL ES.

By the way, thanks for your reply and for the explanation.

DRC

May 18, 2019, 2:32:52 PM
to virtual...@googlegroups.com

VirtualGL doesn't support OpenGL ES yet.  There is an open ticket for that on GitHub, but funding is needed in order to implement the feature.

Ivi

May 18, 2019, 5:52:52 PM
to VirtualGL User Discussion/Support
Thanks, could you link that?
How much funding does it need at this point?

DRC

May 18, 2019, 8:45:09 PM
to virtual...@googlegroups.com
https://github.com/VirtualGL/virtualgl/issues/66

I mention such things in passing on this list in case corporate patrons
might be listening or stumble upon the thread in a Google search. I do
not mean for individual users to fund such features. If you represent
such a corporate patron, then contact me offline for an estimate (I do
not discuss specific funding amounts or rates on these open forums.)

Ivi

May 19, 2019, 8:49:44 AM
to VirtualGL User Discussion/Support
I managed to install VGL with gl4es.
What should I do to launch VGL at startup?

DRC

May 20, 2019, 11:42:15 AM
to virtual...@googlegroups.com

VirtualGL is an interposer, so it is loaded into individual processes.  It cannot be launched at startup, nor would it be meaningful to do so.  If you are using TurboVNC, then it is possible to:

1. Launch a specific TurboVNC session for a specific user at startup.  If your chroot environment supports init.d (or init.d emulated with systemd, which most modern Linux distros support), then after installing TurboVNC, you should be able to enable the tvncserver service (using 'sudo systemctl enable tvncserver', for instance) and edit /etc/sysconfig/tvncservers in order to launch a TurboVNC session at startup.

2. Use VirtualGL to launch the window manager in that TurboVNC session, so VirtualGL is automatically loaded into any OpenGL applications that are launched within that session.  You can do this by adding '-vgl' to the VNCSERVERARGS variable in /etc/sysconfig/tvncservers.
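
As a rough sketch (the user name, display number, and geometry below are placeholders, and the exact syntax may vary between TurboVNC releases, so double-check the TurboVNC User's Guide), the relevant entries in /etc/sysconfig/tvncservers would look something like:

VNCSERVERS="1:ivi"
VNCSERVERARGS[1]="-geometry 1920x1080 -vgl"

followed by enabling and starting the service:

sudo systemctl enable tvncserver
sudo systemctl start tvncserver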

Apart from that, if you encounter problems that would apply to any configuration of VGL and TurboVNC, I'm happy to address those, but in general, running VGL in a chroot environment isn't supported.

Ivi

May 20, 2019, 12:01:35 PM
to VirtualGL User Discussion/Support
OK then, please at least tell me what command I should type to launch the server.

DRC

May 20, 2019, 12:41:15 PM
to virtual...@googlegroups.com
Sorry, but your questions are confusing. Do you mean the VirtualGL
server or the TurboVNC Server? Again, VirtualGL is an interposer, not a
daemon, so there is no "VirtualGL server" process. If you are using the
init.d mechanism with the TurboVNC Server, then a TurboVNC session is
automatically started at boot (which is what you said you wanted to do.)
Otherwise, refer to the TurboVNC User's Guide for instructions on
starting a TurboVNC session outside of init.d.
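
For reference, starting a session manually just means running the TurboVNC vncserver script as the user who will own the session, for instance:

/opt/TurboVNC/bin/vncserver -vgl

(The -vgl option makes the session's window manager, and therefore any OpenGL applications launched from it, run through VirtualGL, as described earlier; omit it if you prefer to prefix individual applications with vglrun.) You then connect to that session with the TurboVNC Viewer.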

Ivi

May 21, 2019, 2:23:18 AM
to VirtualGL User Discussion/Support
Sorry, I did mean a VGL daemon.
OK, so I don't need to do anything else. Thanks.