On 3/25/20 4:59 PM, 'eresrch' via VirtualGL User Discussion/Support wrote:
> I sort of have the 2D rendering working between two linux systems
> using VGL. The application runs on a large server which is at run
> level 3, so there is no active X server running. Using vglconnect to
> the server and then vglrun -d localhost:10.0 <app> I get all the 2D
> images just fine, but no 3D renderings. This makes some sense because
> the server isn't running an X server at all. But it does have V100
> GPUs, so it could do 3D rendering.
>
> I have a second, smaller server which does run at level 5 and also has
> good graphics cards. I don't have local access to it. So I'm
> wondering if it makes sense at all (probably not, but I'm asking) if
> it's possible to use the X server on a different system than the
> application and send the rendered results to a 3rd system. This is
> similar to the figure under section 9.2 of the user's guide, except
> that the GPU driver and 3D X server would be on the X proxy host.
No, that is not possible with VirtualGL. The application must be
running on the same machine as the 3D X server and GPU. What you're
proposing would require some form of indirect OpenGL rendering in order
to send the OpenGL commands from the application server to the server
containing the GPU. Conceivably, the OpenGL commands could be
forwarded using remote X11 via an SSH tunnel, but since indirect OpenGL
via remote X11 doesn't support OpenGL > 1.4, it wouldn't work with most
modern OpenGL applications. The only other option would be for the
VirtualGL Faker to intercept every OpenGL command from the application
(instead of just every GLX command) and forward it to the GPU server
using a custom OpenGL wire protocol. That would be a huge undertaking
to develop, not to mention a compatibility and performance minefield.
Also, even if we created our own OpenGL wire protocol, it still wouldn't
be possible to support all modern OpenGL features using indirect
rendering. From a project management point of view, such a solution
would also be much more complex than I'm capable of developing and
maintaining on my own (bearing in mind that VGL is one of three open
source products I maintain for a living.) The only solution I know of
that does what you propose is NICE DCV (now owned by Amazon), but it is
proprietary, and there are some limitations (performance being a big
one) to using a GPU on a different machine than the application.
> Alternatively, is there a way to set up a virtual X server which uses
> the local GPU, but does not connect to anything else except VGL
> requests?
A "virtual" X server, by definition, doesn't use a GPU at all (Xvfb, for
instance.) You can set up a "headless" X server (without a physical
display) that uses the GPU. It should be possible to start such a
headless X server in run level 3. To mimic the supported behavior of
vglserver_config, you would create an XAuth cookie under
/etc/opt/VirtualGL/vgl_xauth_key, instruct the headless X server to use
that cookie, and set the permissions of the cookie so that only certain
users (members of the vglusers group, for instance) can access it. But
there isn't any way to limit access to an X server on an
application-by-application basis. Access can only be limited on a
per-user basis.
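
A rough sketch of that setup in shell form (this assumes the vglusers
group already exists, an xorg.conf for the GPU has been generated, e.g.
with nvidia-xconfig, and mcookie from util-linux is available; the paths
follow VirtualGL's defaults, but the display number is just an example):

```shell
# Create the XAuth cookie in the location vglserver_config would use:
sudo mkdir -p /etc/opt/VirtualGL
sudo sh -c 'xauth -f /etc/opt/VirtualGL/vgl_xauth_key add :0 . "$(mcookie)"'

# Restrict the cookie so that only members of vglusers can read it:
sudo chgrp vglusers /etc/opt/VirtualGL/vgl_xauth_key
sudo chmod 640 /etc/opt/VirtualGL/vgl_xauth_key

# Start the headless X server on display :0, telling it to use that cookie:
sudo X :0 -auth /etc/opt/VirtualGL/vgl_xauth_key &

# Then point VirtualGL at the headless X server instead of the remote one:
vglrun -d :0 <app>
```

Note that `vglrun -d :0` here replaces the `-d localhost:10.0` from your
example, since the 3D rendering should go to the local GPU-attached X
server, not the SSH-forwarded display.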