virtualgl understanding


sergio

Sep 11, 2018, 1:12:46 PM
to virtual...@googlegroups.com
Hello.

I've been using X over the network for years. It's a good way to connect
several (3-4) monitors to a laptop with only one video output, or to drive a
silent station with a lot of RAM and a high-performance CPU from a small
Intel Compute Stick. 2D graphics work well; I can watch a 1080p movie over a
gigabit network without any issues. But nowadays a lot of 2D effects are done
via 3D, and it's impossible to run the Enlightenment WM in such an
environment because it becomes very slow. So I'm trying to understand and run
VirtualGL.


I have two hosts:

hostA, which is a qemu virtual machine with no graphics card, running an
XDMCP server

hostB, which is a real host with an Intel Atom x5-Z8330 (with embedded GPU),
running an X server

Both hosts run Debian sid with virtualgl_2.6_amd64.deb installed.

glxinfo run on hostA gives a lot of output:

name of display: hostB:0
display: hostB:0 screen: 0
direct rendering: Yes
...
Extended renderer info (GLX_MESA_query_renderer):
Vendor: VMware, Inc. (0xffffffff)
Device: llvmpipe (LLVM 6.0, 128 bits) (0xffffffff)
...

OpenGL vendor string: VMware, Inc.
OpenGL renderer string: llvmpipe (LLVM 6.0, 128 bits)
...


glxinfo run on hostB shows hardware acceleration:

name of display: :0
display: :0 screen: 0
direct rendering: Yes
...
Extended renderer info (GLX_MESA_query_renderer):
Vendor: Intel Open Source Technology Center (0x8086)
Device: Mesa DRI Intel(R) HD Graphics (Cherrytrail) (0x22b0)
Version: 18.1.7
Accelerated: yes
...
OpenGL vendor string: Intel Open Source Technology Center
OpenGL renderer string: Mesa DRI Intel(R) HD Graphics (Cherrytrail)
...


Now I'm starting VirtualGL:

on hostB:

% DISPLAY=:0 vglclient

VirtualGL Client 64-bit v2.6 (Build 20180824)
Listening for unencrypted connections on port 4242

on hostA:

% DISPLAY=hostB:0 vglrun -c rgb -d hostB:0 -v
/opt/VirtualGL/bin/glxspheres64
Polygons in scene: 62464 (61 spheres * 1024 polys/spheres)
libGL error: failed to open drm device: No such file or directory
libGL error: failed to load driver: i965
libGL error: failed to open drm device: No such file or directory
libGL error: failed to load driver: i965
Visual ID of window: 0x21
Context is Direct
OpenGL Renderer: llvmpipe (LLVM 6.0, 128 bits)
7.126069 frames/sec - 7.952693 Mpixels/sec
7.083555 frames/sec - 7.905248 Mpixels/sec
...


hostB says: ++ Connection from hostA.

1. Why do I get "failed to open drm device" / "failed to load driver: i965"?
2. Why is the OpenGL Renderer llvmpipe?


If I try to run glxinfo, I get a segfault, with no
"++ Connection from hostA." message on hostB:

% DISPLAY=hostB:0 vglrun -c rgb -d hostB:0 -v /opt/VirtualGL/bin/glxinfo
name of display: hostB:0
libGL error: failed to open drm device: No such file or directory
libGL error: failed to load driver: i965
libGL error: failed to open drm device: No such file or directory
libGL error: failed to load driver: i965
display: hostB:0 screen: 0
direct rendering: Yes
server glx vendor string: VirtualGL
server glx version string: 1.4
...
client glx vendor string: VirtualGL
...
OpenGL vendor string: VMware, Inc.
OpenGL renderer string: llvmpipe (LLVM 6.0, 128 bits)
...
OpenGL core profile extensions:
GL_AMD_conservative_depth, GL_AMD_draw_buffers_blend,
...
GL_OES_EGL_image, GL_S3_s3tc
zsh: segmentation fault DISPLAY=hostB:0 vglrun -c rgb -d hostB:0 -v
/opt/VirtualGL/bin/glxinfo


--
sergio.

sergio

Sep 11, 2018, 1:23:53 PM
to virtual...@googlegroups.com
On 11/09/2018 18:38, sergio wrote:

> % DISPLAY=hostB:0 vglrun -c rgb -d hostB:0 -v
> /opt/VirtualGL/bin/glxspheres64
> Polygons in scene: 62464 (61 spheres * 1024 polys/spheres)
> libGL error: failed to open drm device: No such file or directory
> libGL error: failed to load driver: i965
> libGL error: failed to open drm device: No such file or directory
> libGL error: failed to load driver: i965
> Visual ID of window: 0x21
> Context is Direct
> OpenGL Renderer: llvmpipe (LLVM 6.0, 128 bits)
> 7.126069 frames/sec - 7.952693 Mpixels/sec
> 7.083555 frames/sec - 7.905248 Mpixels/sec
> ...


Running glxspheres64 without vglrun gives:

hostA $ DISPLAY=hostB:0 /opt/VirtualGL/bin/glxspheres64

Polygons in scene: 62464 (61 spheres * 1024 polys/spheres)
libGL error: failed to open drm device: No such file or directory
libGL error: failed to load driver: i965
Visual ID of window: 0xe4
Context is Direct
OpenGL Renderer: llvmpipe (LLVM 6.0, 128 bits)
9.189567 frames/sec - 10.255557 Mpixels/sec
9.206145 frames/sec - 10.274057 Mpixels/sec
9.116518 frames/sec - 10.174034 Mpixels/sec

(faster than with vglrun?)



Directly on hostB:
% DISPLAY=:0 /opt/VirtualGL/bin/glxspheres64
Polygons in scene: 62464 (61 spheres * 1024 polys/spheres)
Visual ID of window: 0xe4
Context is Direct
OpenGL Renderer: Mesa DRI Intel(R) HD Graphics (Cherrytrail)
60.185634 frames/sec - 67.167167 Mpixels/sec
56.978271 frames/sec - 63.587751 Mpixels/sec

(works without any issues)

--
sergio.

DRC

Sep 11, 2018, 1:33:13 PM
to virtual...@googlegroups.com
Follow the instructions in the User's Guide. Use the vglconnect script
rather than trying to run vglclient manually, and don't try to
manipulate the DISPLAY and VGL_DISPLAY environment variables yourself
unless you have good reason to. VGL_DISPLAY (the -d argument to vglrun)
defaults to :0.0, which is correct for your server (hostB.) Setting
VGL_DISPLAY to hostB:0 is incorrect, since that may cause VirtualGL to
make a connection to the 3D X server using a TCP socket rather than a
Unix domain socket. That may be the source of the DRM errors. The
DISPLAY variable is normally set by SSH and shouldn't be modified.
Setting it to hostB:0 is definitely incorrect, as hostB:0 is the 3D X
server. VGL_DISPLAY should point to the 3D X server (normally :0.0),
and DISPLAY should point to the 2D X server. Really it should be as
simple as the description in the User's Guide:

From hostA:
> vglconnect hostB

This establishes an SSH session with X11 tunneling on hostB, in which
you can execute:
> vglrun {application}
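
A quick sanity check from inside that session (a sketch, assuming the
/opt/VirtualGL/bin paths used earlier in this thread):

> echo $DISPLAY      # the 2D X server, set by SSH X11 forwarding
> echo $VGL_DISPLAY  # normally unset, so VirtualGL uses :0.0
> vglrun /opt/VirtualGL/bin/glxinfo | grep "OpenGL renderer"

The last command should report the server's GPU rather than llvmpipe.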

Nothing else should be required, unless your CPUs really are so slow
that enabling RGB encoding (-c rgb) improves performance. Usually -c
rgb is only used for running VirtualGL within a server-area network and
re-encoding its output using TurboVNC or another X proxy running on a
different machine than VirtualGL. These days, CPUs can
compress/decompress hundreds of megapixels/second using libjpeg-turbo,
so there is no reason why JPEG encoding would be a bottleneck unless you
are using really old CPUs. RGB encoding, however, can encounter network
bottlenecks, even on gigabit Ethernet.
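
If you do want to experiment with encodings, a sketch using the standard
vglrun options (-c selects the encoding, -q the JPEG quality, and +pr
prints VirtualGL's profiling output):

> vglrun +pr /opt/VirtualGL/bin/glxspheres64                # default JPEG
> vglrun +pr -c jpeg -q 80 /opt/VirtualGL/bin/glxspheres64  # lower quality, less bandwidth
> vglrun +pr -c rgb /opt/VirtualGL/bin/glxspheres64         # uncompressed RGB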

sergio

Sep 11, 2018, 1:42:25 PM
to virtual...@googlegroups.com
On 11/09/2018 20:33, DRC wrote:

> Follow the instructions in the User's Guide. Use the vglconnect script
> rather than trying to run vglclient manually

The User's Guide says to use vglclient (as I'm using XDMCP).

> VGL_DISPLAY should point to the 3D X server (normally :0.0),
> and DISPLAY should point to the 2D X server

Can hostB, with the real GPU, be the 2D X server and the 3D X server
simultaneously?


> From hostA:
>> vglconnect hostB

> This establishes an SSH session with X11 tunneling on hostB, in which
> you can execute:

I do not use SSH.


--
sergio.

sergio

Sep 11, 2018, 2:03:08 PM
to virtual...@googlegroups.com
On 11/09/2018 20:33, DRC wrote:

> Setting it to hostB:0 is definitely incorrect, as hostB:0 is the 3D X
> server. VGL_DISPLAY should point to the 3D X server (normally :0.0),
> and DISPLAY should point to the 2D X server.

I have only one X server running on hostB.

Do I understand correctly that vglclient should be run on the host with the
X server (hostB in my case) and vglrun on the host without an X server
(hostA in my case)?

--
sergio.

DRC

Sep 11, 2018, 2:16:43 PM
to virtual...@googlegroups.com
Let's back up, because if you have no 2D X server, then I don't
understand how you are using remote X. Please clarify which machine is
physically in front of you and which machine you are trying to access
remotely.

DRC

sergio

Sep 11, 2018, 2:34:33 PM
to virtual...@googlegroups.com
On 11/09/2018 21:16, DRC wrote:
> Let's back up, because if you have no 2D X server, then I don't
> understand how you are using remote X. Please clarify which machine is
> physically in front of you and which machine you are trying to access
> remotely.


hostA is a qemu virtual machine, with no X server (lightdm acts as the XDMCP
server here)


hostB is an Atom x5-Z8330 with i965 graphics, with an X server running
(lightdm acts as the XDMCP client here)


--
sergio.

DRC

Sep 11, 2018, 4:04:48 PM
to virtual...@googlegroups.com
VirtualGL is used for server-side OpenGL rendering, i.e. for accessing a
GPU remotely, i.e. it assumes that the GPU is on the same machine on
which the OpenGL applications are running. Given that the GPU is on
your client machine, VirtualGL cannot be of much help there. To answer
your question about whether the 2D and 3D X server can be on the same
machine-- yes, if using an X proxy such as TurboVNC, but generally there
is no point to doing that unless both the 2D and 3D X server are on the
remote machine. The purpose for that configuration is to prevent any
X11 traffic from transiting the network (since X proxies convert X11
drawing commands into an image stream.) When using VirtualGL, the 3D X
server has to be on the same machine on which the applications are
running. Effectively the "VirtualGL Server" and the "Application
Server" will always be the same machine, and that machine must have a
GPU and an X server (the 3D X server) attached to that GPU.
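
A sketch of that X-proxy configuration (assuming TurboVNC in its default
/opt/TurboVNC prefix), with everything running on the remote, GPU-equipped
machine:

> /opt/TurboVNC/bin/vncserver :1
> DISPLAY=:1 vglrun {application}

Here :1 is the 2D X server (the proxy) and :0 is the 3D X server attached
to the GPU.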

What you're trying to do is use a remote 2D X server with a local 3D X
server, which is not what VirtualGL is designed to do. With VirtualGL,
the 3D X server is always remote, and the 2D X server can be either
local (if using the VGL Transport) or remote (if using the X11 Transport
with an X proxy.) Generally VirtualGL is used to display 3D
applications from a machine with greater 3D capabilities to a machine
with lesser 3D capabilities. What you are doing would work fine if you
were running the XDMCP server on the machine with the GPU and connecting
to it remotely from the machine without the GPU. The reverse, however,
is not a problem that VirtualGL can solve.

In short, you are setting up a "silent station" with a lot of RAM and a
high-performance CPU, but that station also needs a GPU in order to run
3D applications remotely from it using VirtualGL. Otherwise, you're
better off logging in locally to your GPU-equipped machine, using SSH
with X11 tunneling to connect to the remote machine, and running 3D
applications without VirtualGL. That would cause the GLX/OpenGL
commands to be sent over the network, which is far from ideal, but it's
the only reasonable way to run OpenGL applications with hardware
acceleration when the client has a GPU but the application server
doesn't. VirtualGL is specifically meant to work around the problems
with that approach, which is why I emphasize that the approach is far
from ideal (refer to https://virtualgl.org/About/Background), but again,
VirtualGL requires a GPU in the application server.
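
A minimal sketch of that fallback, run from hostB (the GPU-equipped machine
in front of you). It assumes SSH with X11 forwarding is permitted, that
hostA uses Mesa (so LIBGL_ALWAYS_INDIRECT=1 is needed to send GLX protocol
over the connection instead of rendering in software), and that hostB's X
server allows indirect GLX (recent Xorg releases disable it unless started
with +iglx):

> ssh -X hostA
> LIBGL_ALWAYS_INDIRECT=1 {application}

Indirect GLX is also limited to older OpenGL versions, which is part of why
this approach is far from ideal.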

sergio

Sep 11, 2018, 7:22:54 PM
to virtual...@googlegroups.com
On 11/09/2018 23:04, DRC wrote:

> VirtualGL cannot be of much help there

Thank you for brief explanation!


> When using VirtualGL, the 3D X server has to be on the same machine on
> which the applications are running.

OK. I'd really like to try VirtualGL, so I have another setup:

two real hosts with Debian sid and virtualgl_2.6_amd64.deb installed

an X server is started on both hosts

hostA is the application server, with a Radeon GPU

hostB is the 2D X server


I open an X terminal on hostB and run

$ vglconnect hostA

which gives me an SSH session on hostA, where I run

$ vglrun /opt/VirtualGL/bin/glxspheres64

[VGL] NOTICE: Automatically setting VGL_CLIENT environment variable to
[VGL] <IP hostB>, the IP address of your SSH client.
Polygons in scene: 62464 (61 spheres * 1024 polys/spheres)
Visual ID of window: 0x21
Context is Direct
failed to create drawable
X Error of failed request: BadAlloc (insufficient resources for operation)
Major opcode of failed request: 153 (GLX)
Minor opcode of failed request: 27 (X_GLXCreatePbuffer)
Serial number of failed request: 29
Current serial number in output stream: 31
zsh: exit 1 vglrun /opt/VirtualGL/bin/glxspheres64

What is wrong now?



Do you know a way to use a local GPU for remote app acceleration?


--
sergio.

DRC

Sep 13, 2018, 5:40:07 PM
to virtual...@googlegroups.com
On 9/11/18 6:22 PM, sergio wrote:
> [VGL] NOTICE: Automatically setting VGL_CLIENT environment variable to
> [VGL]    <IP hostB>, the IP address of your SSH client.
> Polygons in scene: 62464 (61 spheres * 1024 polys/spheres)
> Visual ID of window: 0x21
> Context is Direct
> failed to create drawable
> X Error of failed request:  BadAlloc (insufficient resources for operation)
>   Major opcode of failed request:  153 (GLX)
>   Minor opcode of failed request:  27 (X_GLXCreatePbuffer)
>   Serial number of failed request:  29
>   Current serial number in output stream:  31
> zsh: exit 1     vglrun /opt/VirtualGL/bin/glxspheres64
>
> What is wrong now?

No idea, but if it's an older Radeon with an older driver, then it might
have a broken Pbuffer implementation. I seem to recall that I ran into
similar problems with certain Radeon GPUs years ago. Try the following:

1. Try setting VGL_READBACK=sync in the environment on the server prior
to invoking vglrun (see the sketch after this list).

2. If that doesn't work, then try setting VGL_DRAWABLE=pixmap in the
environment on the server prior to invoking vglrun.

3. If that doesn't work, then I don't know how to fix it. VGL is known
to work well with nVidia GPUs and drivers, but AMD/ATI GPUs
(particularly their consumer-grade GPUs) have traditionally been
hit-and-miss. I have not tested their more recent products, though.
The only AMD GPUs I have in my lab are an old Radeon HD 7660G and an
older FirePro V5700.
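
A minimal sketch of applying those two workarounds, using the same
glxspheres64 path as earlier in the thread:

> export VGL_READBACK=sync
> vglrun /opt/VirtualGL/bin/glxspheres64

and, if that still fails:

> unset VGL_READBACK
> export VGL_DRAWABLE=pixmap
> vglrun /opt/VirtualGL/bin/glxspheres64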


> Do you know a way to use a local GPU for remote app acceleration?

No matter how you attempt it, using a local GPU for remote app
acceleration is going to require sending OpenGL commands and data over
the network. The traditional method of doing that is using remote GLX.
That is not ideal, for reasons described in the VirtualGL background
article that I previously posted, but any other method of using a local
GPU for remote app acceleration would have similar drawbacks. The only
product I know of that might accomplish that task is NICE (now Amazon)
DCV, but it is neither free nor open source. Do I know of a way to use
a local GPU for remote app acceleration while running XDMCP on the
remote application server? No.