The whole idea behind VirtualGL is to provide remote access to a GPU
that supports hardware-accelerated OpenGL. Unless your server has such
a GPU and drivers installed for it, there is no point in using
VirtualGL. VirtualGL is really designed to be used with either an X
proxy (such as our own TurboVNC, or others such as TigerVNC, Xpra,
FreeNX, etc.) or a client-side X server (which could be the native X
server on a Linux client machine, XQuartz on macOS, or Cygwin/X on
Windows.) X proxies are virtual X servers that use the CPU to render
X11 drawing commands into a framebuffer in main memory, so X proxies can't
inherently support GPU acceleration. When using a client-side X server,
the normal way of rendering OpenGL would be via indirect rendering,
which involves sending all of the OpenGL commands and data over the
network to be rendered on the client machine. Indirect rendering has
severe limitations in terms of performance and compatibility. Thus,
VirtualGL intercepts and redirects OpenGL commands away from the X proxy
or client-side X server (we call this the "2D X server") and onto the
server-side X server (we call this the "3D X server".) VGL also reads
back the OpenGL-rendered images from the 3D X server at appropriate
times (such as when the 3D application swaps the OpenGL rendering
buffers) and transports those images to the 2D X server, either directly
in the case of X proxies (by simply drawing the images using
XShmPutImage()) or indirectly in the case of client-side X servers (by
compressing the images using libjpeg-turbo and sending them over the
network to the VirtualGL Client application, which decompresses the
images and draws them using XShmPutImage().)
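
If you want to confirm that this redirection is actually happening, a
quick sanity check (assuming the glxinfo utility is installed and your
3D X server is the usual display :0) is to compare the OpenGL renderer
string with and without vglrun from inside your X proxy session:

glxinfo | grep "OpenGL renderer"
vglrun glxinfo | grep "OpenGL renderer"

The first command should report the X proxy's software renderer,
whereas the second should report your GPU, because vglrun redirects the
OpenGL/GLX calls to the 3D X server (display :0 by default; use the -d
option to vglrun to point it at a different display.)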
libgl1-mesa-swx11 is a software-only implementation of Mesa that renders
OpenGL using plain X11 drawing commands. It doesn't ever touch a GPU,
even if you have one, and that implementation of Mesa is incompatible
with VirtualGL. You need to use libgl1-mesa-glx instead, with a GPU and
appropriate drivers for that GPU.
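
If you aren't sure which of those packages is installed (the commands
below assume a Debian/Ubuntu-style system, since that is where those
package names come from), something like the following should tell you:

dpkg -l | grep libgl1-mesa

You can also check which libGL a given 3D application actually loads,
e.g.:

ldd /opt/VirtualGL/bin/glxspheres64 | grep libGL

If libgl1-mesa-swx11 is what you find, install libgl1-mesa-glx in its
place before trying VirtualGL again.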
Note also that GLXgears is a very poor OpenGL benchmark. It has such a
low polygon count (about 300) that, even with GPU acceleration, the
rendering performance is almost entirely CPU-bound (i.e. the time it
takes the GPU to render 300 polys is so negligible that it doesn't
contribute significantly to the overall execution time of the program.)
From VirtualGL's point of view, the other problems with GLXgears are (a)
it uses flat shading and (b) its default window size is very small.
That means that the images generated by GLXgears are very easy to
compress, so they don't really challenge VirtualGL at all. They also
don't challenge the GPU's rasterizer. VirtualGL provides a better
benchmark, GLXspheres, in /opt/VirtualGL/bin. On modern GPUs, I
recommend running GLXspheres with at least 1 million, and preferably 5
million, polygons:

vglrun /opt/VirtualGL/bin/glxspheres64 -p 1000000

or

vglrun /opt/VirtualGL/bin/glxspheres64 -p 5000000 -n 100
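
For comparison, you can also run the same benchmark without vglrun, in
which case it falls back to whatever OpenGL implementation the 2D X
server provides (typically unaccelerated software rendering inside an X
proxy):

/opt/VirtualGL/bin/glxspheres64 -p 1000000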
That should really highlight the difference between software
(unaccelerated) and GPU-accelerated OpenGL. To put it another way, 300
fps with GLXgears is not that impressive. My machine with an NVIDIA
Quadro can do about 5000 fps with GLXgears, but again, that figure isn't
really meaningful, because the frame rate of GLXgears is mostly
CPU-dependent. Also, human vision can't distinguish 300 fps from 5000
fps, so the comparison isn't meaningful from a usability perspective
either.