Google Chrome: Failed to get GLXConfig


Marcello Blancasio

Jan 28, 2020, 12:13:41 PM
to VirtualGL User Discussion/Support
Hello,

I'm not able to run a recent Chrome with GPU acceleration (google-chrome-stable.x86_64, version 79.0.3945.130-1, installed from the google-chrome repo):

[6981:6981:0128/172712.708029:ERROR:gl_surface_glx.cc(129)] Failed to get GLXConfig
[6981:6981:0128/172712.708168:ERROR:gl_surface_glx.cc(475)] CreateDummyWindow(gfx::GetXDisplay()) failed
[6981:6981:0128/172712.708199:ERROR:gl_initializer_x11.cc(148)] GLSurfaceGLX::InitializeOneOff failed.
[6981:6981:0128/172712.712314:ERROR:viz_main_impl.cc(180)] Exiting GPU process due to errors during initialization
[7007:12:0128/172712.894210:ERROR:command_buffer_proxy_impl.cc(124)] ContextResult::kTransientFailure: Failed to send GpuChannelMsg_CreateCommandBuffer.

Chrome starts successfully, but chrome://gpu reports "Software only" for WebGL/WebGL2.

I also tried the workaround suggested here:

without success. I exported VGL_DEFAULTFBCONFIG=GLX_ALPHA_SIZE,8 in the environment, but nothing changed.

-
M.



DRC

Feb 12, 2020, 6:15:36 PM
to virtual...@googlegroups.com
I was unable to reproduce the issue on CentOS 7 with an nVidia GPU or on
CentOS 8 with VMWare Tools.  Can you provide more details regarding your
system?

DRC

Marcello Blancasio

Feb 14, 2020, 1:59:06 PM
to virtual...@googlegroups.com
# cat /etc/redhat-release
CentOS Linux release 7.7.1908 (Core)

# rpm -qa VirtualGL
VirtualGL-2.6.3-20191024.x86_64

# rpm -qa \*chrome\*
google-chrome-stable-79.0.3945.130-1.x86_64

# rpm -qa \*mesa\*
mesa-libEGL-devel-18.3.4-5.el7.x86_64
mesa-private-llvm-3.9.1-3.el7.x86_64
mesa-libgbm-18.3.4-5.el7.x86_64
mesa-dri-drivers-18.3.4-5.el7.x86_64
mesa-libGLU-9.0.0-4.el7.x86_64
mesa-libxatracker-18.3.4-5.el7.x86_64
mesa-libGLES-devel-18.3.4-5.el7.x86_64
mesa-filesystem-18.3.4-5.el7.x86_64
mesa-libGL-18.3.4-5.el7.x86_64
mesa-debuginfo-18.3.4-5.el7.x86_64
mesa-libEGL-18.3.4-5.el7.x86_64
mesa-libGLES-18.3.4-5.el7.x86_64
mesa-khr-devel-18.3.4-5.el7.x86_64
mesa-libglapi-18.3.4-5.el7.x86_64
mesa-libGLU-debuginfo-9.0.0-4.el7.x86_64
mesa-libGL-devel-18.3.4-5.el7.x86_64

# rpm -qa \*nvidia\*
yum-plugin-nvidia-1.0.2-1.el7.elrepo.noarch
nvidia-x11-drv-libs-430.50-1.el7_7.elrepo.x86_64
kmod-nvidia-430.50-1.el7_7.elrepo.x86_64
nvidia-detect-430.40-1.el7.elrepo.x86_64
nvidia-x11-drv-430.50-1.el7_7.elrepo.x86_64

I get different errors with NoMachine and TurboVNC.

TurboVNC:
[VGL] ERROR: in VirtualWin--
[VGL]    75: Could not clone X display connection
[VGL] ERROR: in VirtualWin--
[VGL]    75: Could not clone X display connection
[13551:13:0214/185928.926089:ERROR:command_buffer_proxy_impl.cc(124)] ContextResult::kTransientFailure: Failed to send GpuChannelMsg_CreateCommandBuffer.
[VGL] ERROR: in VirtualWin--
[VGL]    75: Could not clone X display connection
[13551:13:0214/185929.197750:ERROR:command_buffer_proxy_impl.cc(124)] ContextResult::kTransientFailure: Failed to send GpuChannelMsg_CreateCommandBuffer.

NoMachine:
[11871:11871:0214/185332.795246:ERROR:gl_surface_glx.cc(129)] Failed to get GLXConfig
[11871:11871:0214/185332.795435:ERROR:gl_surface_glx.cc(475)] CreateDummyWindow(gfx::GetXDisplay()) failed
[11871:11871:0214/185332.795463:ERROR:gl_initializer_x11.cc(148)] GLSurfaceGLX::InitializeOneOff failed.
[11871:11871:0214/185332.816002:ERROR:viz_main_impl.cc(180)] Exiting GPU process due to errors during initialization

DRC

Feb 14, 2020, 3:33:34 PM
to virtual...@googlegroups.com
Why are you setting __GLX_VENDOR_LIBRARY_NAME?  That seems to be the
source of the problem.

Marcello Blancasio

Feb 14, 2020, 5:07:14 PM
to virtual...@googlegroups.com
I'm doing that to select the nvidia dispatch in libGLX; it works fine with glxinfo.
The other possible value is "mesa", but that makes display :0 fall back to GLX indirect rendering.


DRC

Feb 17, 2020, 12:55:49 PM
to virtual...@googlegroups.com

The other possible value is no value at all.  VirtualGL doesn't use libGLX, so you shouldn't set that environment variable.

Marcello Blancasio

Feb 18, 2020, 11:35:21 AM
to VirtualGL User Discussion/Support
I also tried env -u __GLX_VENDOR_LIBRARY_NAME vglrun google-chrome, and it didn't help.

The log reported that something went wrong in the function CreateDummyWindow(Display* display):


It looks like the dummy window inherits its visual from its parent (the root window); the list of available GLXFBConfigs is then searched for one matching that GLX_VISUAL_ID. No match is found, and Chrome falls back to software rendering.

Given that the visuals and GLXFBConfigs come from different X servers, is it possible for them to match somehow?
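
For what it's worth, here is a rough standalone sketch (my own C++ approximation, not the actual Chromium code) of what I think that lookup does. Running it with and without vglrun should show whether an FB config with the root visual's ID can be found:

// Hypothetical sketch, not the actual Chromium source: create a 1x1 child of
// the root window (it inherits the root visual), then scan glXGetFBConfigs()
// for an FB config whose GLX_VISUAL_ID equals that visual's ID.
// Build (assumed): g++ dummy_fbconfig_check.cpp -o dummy_fbconfig_check -lX11 -lGL
#include <X11/Xlib.h>
#include <X11/Xutil.h>
#include <GL/glx.h>
#include <cstdio>

int main() {
  Display* dpy = XOpenDisplay(nullptr);
  if (!dpy) return 1;

  // Chrome's dummy window: CopyFromParent visual/depth from the root window.
  Window root = DefaultRootWindow(dpy);
  Window dummy = XCreateSimpleWindow(dpy, root, 0, 0, 1, 1, 0, 0, 0);
  XWindowAttributes attrs;
  XGetWindowAttributes(dpy, dummy, &attrs);
  VisualID vid = XVisualIDFromVisual(attrs.visual);

  // Search every FB config for one whose GLX_VISUAL_ID matches.
  int n = 0;
  GLXFBConfig* configs = glXGetFBConfigs(dpy, DefaultScreen(dpy), &n);
  GLXFBConfig match = nullptr;
  for (int i = 0; i < n && configs; ++i) {
    int config_vid = 0;
    glXGetFBConfigAttrib(dpy, configs[i], GLX_VISUAL_ID, &config_vid);
    if ((VisualID)config_vid == vid) { match = configs[i]; break; }
  }
  std::printf("dummy/root visual 0x%lx -> FB config %s\n",
              (unsigned long)vid, match ? "found" : "NOT FOUND");

  if (configs) XFree(configs);
  XDestroyWindow(dpy, dummy);
  XCloseDisplay(dpy);
  return 0;
}

If my reading is right, this should print "NOT FOUND" on the session where Chrome falls back to software rendering.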

DRC

Feb 18, 2020, 1:45:12 PM
to virtual...@googlegroups.com

Well, yes, it did help.  It solved one problem that was masking another problem, so now we can focus on the real issue.  You still haven't answered my question, though.  What prompted you to set __GLX_VENDOR_LIBRARY_NAME in the first place?  What problem did that solve, if any?  Or were you just trying random things in an attempt to make Chrome work?

Now, as to the visual vs. FB config matching issue, the next step is to figure out why you observe that issue but I don't.  Which versions of Chrome and TurboVNC did you test when you observed the failure?

The evolving VirtualGL 3.0 code (in the dev branch) now maps GLX FB configs with particular rendering attributes to 2D X server visuals in a round-robin fashion, and it front-loads this mapping in such a way that an application that hunts for a specific rendering attribute will have a high likelihood of finding a visual with an attached GLX FB config that has the desired attribute.  That should eliminate the need for VGL_DEFAULTFBCONFIG in almost all cases.  However, I certainly didn't consider the pathological case of an application that expects to find a GLX FB config with the same visual ID as the root visual, because in a VirtualGL environment, the visuals are on the 2D X server and the FB configs are on the 3D X server.  The Chromium code could instead use glXGetFBConfigFromVisualSGIX(), which is fully supported in VirtualGL.
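
For illustration, something along these lines (a rough sketch of mine, not the actual Chromium code) would map the default/root visual of the 2D X server directly to an FB config through glXGetFBConfigFromVisualSGIX():

// Rough sketch only.  Instead of scanning all FB configs for a matching
// GLX_VISUAL_ID, look up the FB config for a visual directly through the
// GLX_SGIX_fbconfig extension, which VirtualGL interposes.
// Build (assumed): g++ sgix_lookup.cpp -o sgix_lookup -lX11 -lGL
#include <X11/Xlib.h>
#include <X11/Xutil.h>
#include <GL/glx.h>
#include <GL/glxext.h>
#include <cstdio>

int main() {
  Display* dpy = XOpenDisplay(nullptr);
  if (!dpy) return 1;

  // Resolve the extension entry point at run time.
  PFNGLXGETFBCONFIGFROMVISUALSGIXPROC getConfigFromVisual =
      (PFNGLXGETFBCONFIGFROMVISUALSGIXPROC)glXGetProcAddressARB(
          (const GLubyte*)"glXGetFBConfigFromVisualSGIX");
  if (!getConfigFromVisual) { XCloseDisplay(dpy); return 1; }

  // Describe the default (root) visual of the 2D X server ...
  XVisualInfo tmpl = {};
  tmpl.visualid = XVisualIDFromVisual(DefaultVisual(dpy, DefaultScreen(dpy)));
  int n = 0;
  XVisualInfo* vis = XGetVisualInfo(dpy, VisualIDMask, &tmpl, &n);

  // ... and map it straight to an FB config.
  if (vis && n > 0) {
    GLXFBConfigSGIX config = getConfigFromVisual(dpy, vis);
    std::printf("FB config for root visual 0x%lx: %s\n",
                (unsigned long)tmpl.visualid, config ? "found" : "not found");
  }
  if (vis) XFree(vis);
  XCloseDisplay(dpy);
  return 0;
}

Because VirtualGL interposes that entry point, the lookup can go through VGL's own visual-to-FB config mapping instead of assuming that visual IDs on the 2D X server match FB config visual IDs on the 3D X server.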

There may be a way of working around this, but I need to be able to reproduce it first.

DRC


Marcello Blancasio

Feb 19, 2020, 12:16:09 PM
to VirtualGL User Discussion/Support
Well, I ran vglrun google-chrome in TurboVNC again. This time I kept __GLX_VENDOR_LIBRARY_NAME cleared, and *Chrome got WebGL hardware acceleration*!

So I can reproduce the problem with NoMachine but I can't with TurboVNC.

The reason I set __GLX_VENDOR_LIBRARY_NAME=nvidia is that I'd like to use Mesa/llvmpipe for GNOME and Nvidia/VGL for Chrome, but I couldn't rely on the GLX_EXT_libglvnd extension because my version of Xorg is too old.

By the way, Chrome appears to search the array of GLXFBConfigs under TurboVNC as well, but in this case the required visual ID is found (0x21 being the ID of the "dummy window" visual):

[VGL 0xe6421b40] XCreateWindow (dpy=0x39632a26c800(:1) parent=0x000003af x=0 y=0 width=1 height=1 depth=0 c_class=1 visual=0x00000000(0x00) win=0x02c00002 ) 0.005960 ms
[VGL 0xe6421b40] glXGetFBConfigs (dpy=0x39632a26c800(:1) screen=0 *nelements=359 ) 0.128031 ms
[VGL 0xe6421b40] glXGetFBConfigAttrib (dpy=0x39632a26c800(:1) config=0x39632a22aec0(0x135) attribute=32779(0x800b) *value=941(0x3ad) ) 0.003815 ms
[VGL 0xe6421b40] glXGetFBConfigAttrib (dpy=0x39632a26c800(:1) config=0x39632a217030(0x136) attribute=32779(0x800b) *value=34(0x22) ) 0.016928 ms
[VGL 0xe6421b40] glXGetFBConfigAttrib (dpy=0x39632a26c800(:1) config=0x39632a22aca0(0x137) attribute=32779(0x800b) *value=33(0x21) ) 0.013113 ms
[VGL 0xe6421b40] glXCreateWindow (dpy=0x39632a26c800(:1) config=0x39632a22aca0(0x137) win=0x02c00002

-
Marcello.

DRC

Feb 19, 2020, 7:47:27 PM
to virtual...@googlegroups.com
On 2/19/20 11:16 AM, Marcello Blancasio wrote:
> Well, I ran vglrun google-chrome in TurboVNC again. This time I kept __GLX_VENDOR_LIBRARY_NAME cleared, and *Chrome got WebGL hardware acceleration*!
>
> So I can reproduce the problem with NoMachine but I can't with TurboVNC.
>
> The reason I set __GLX_VENDOR_LIBRARY_NAME=nvidia is that I'd like to use Mesa/llvmpipe for GNOME and Nvidia/VGL for Chrome, but I couldn't rely on the GLX_EXT_libglvnd extension because my version of Xorg is too old.

If the nVidia proprietary drivers are installed, then libGL should default to using those drivers if GLVND isn't available.  Also, in the scenario you describe, GLVND should be available on the 3D X server, so VGL will use the nVidia proprietary drivers if __GLX_VENDOR_LIBRARY_NAME is unspecified (since libGLX_nvidia is the default GLX implementation on the 3D X server.)  Thus, it should be possible to set __GLX_VENDOR_LIBRARY_NAME=mesa when loading the window manager, then unset __GLX_VENDOR_LIBRARY_NAME when using VirtualGL.

That being said, I don't understand why setting __GLX_VENDOR_LIBRARY_NAME=nvidia causes problems with Chrome.  As you observed, setting __GLX_VENDOR_LIBRARY_NAME=nvidia works with other GLX applications.  I did notice that, if an application uses dlopen() to load libGLX rather than libGL, VirtualGL's dlopen() interposer doesn't currently handle that case, but fixing that problem doesn't change the situation with Chrome (Chrome appears to be using dlopen() to load libGL, not libGLX.)


> By the way, Chrome appears to search the array of GLXFBConfigs under TurboVNC as well, but in this case the required visual ID is found (0x21 being the ID of the "dummy window" visual):

> [VGL 0xe6421b40] XCreateWindow (dpy=0x39632a26c800(:1) parent=0x000003af x=0 y=0 width=1 height=1 depth=0 c_class=1 visual=0x00000000(0x00) win=0x02c00002 ) 0.005960 ms
> [VGL 0xe6421b40] glXGetFBConfigs (dpy=0x39632a26c800(:1) screen=0 *nelements=359 ) 0.128031 ms
> [VGL 0xe6421b40] glXGetFBConfigAttrib (dpy=0x39632a26c800(:1) config=0x39632a22aec0(0x135) attribute=32779(0x800b) *value=941(0x3ad) ) 0.003815 ms
> [VGL 0xe6421b40] glXGetFBConfigAttrib (dpy=0x39632a26c800(:1) config=0x39632a217030(0x136) attribute=32779(0x800b) *value=34(0x22) ) 0.016928 ms
> [VGL 0xe6421b40] glXGetFBConfigAttrib (dpy=0x39632a26c800(:1) config=0x39632a22aca0(0x137) attribute=32779(0x800b) *value=33(0x21) ) 0.013113 ms
> [VGL 0xe6421b40] glXCreateWindow (dpy=0x39632a26c800(:1) config=0x39632a22aca0(0x137) win=0x02c00002

I'll see if I can mock up a test case for this.



DRC

Feb 20, 2020, 1:01:31 PM
to virtual...@googlegroups.com

I don't understand why that would happen in NoMachine unless the root visual for the X server wasn't a TrueColor visual.  Otherwise, that visual should have already been hashed to a GLXFBConfig within the body of VirtualGL's interposed version of glXChooseFBConfig(), or the visual will be hashed to a GLXFBConfig within the body of VirtualGL's interposed version of glXGetFBConfigAttrib().  In a "normal" (non-VirtualGL) GLX environment, there is a 1:1 correspondence between GLXFBConfigs and visuals.  In a VirtualGL environment, there is not, but the GLXFBConfig-->X visual relationship is straightforward.  For every GLXFBConfig, VirtualGL simply finds a "compatible" visual on the 2D X server, which means a visual that matches the depth, class (TrueColor or DirectColor), bits per RGB, and stereo properties of the GLXFBConfig's corresponding visual on the 3D X server.  So there should be at least one GLXFBConfig that has the root visual as a matching visual, unless that root visual is exotic somehow.
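
To make the matching rule concrete, here is a loose illustrative sketch (simplified, and not VirtualGL's actual code; the real matching also considers bits per RGB and stereo):

// Illustrative only -- not VirtualGL's actual code.  For a given FB config's
// depth and class, walk the 2D X server's visuals in XGetVisualInfo() order
// and take the first one that matches.
#include <X11/Xlib.h>
#include <X11/Xutil.h>
#include <cstdio>

static VisualID FindCompatibleVisual(Display* dpy2d, int depth, int c_class) {
  XVisualInfo tmpl = {};
  tmpl.screen = DefaultScreen(dpy2d);
  int n = 0;
  XVisualInfo* vis = XGetVisualInfo(dpy2d, VisualScreenMask, &tmpl, &n);
  VisualID result = 0;
  for (int i = 0; i < n && vis; ++i) {
    // First match in list order wins, so a visual near the end of the list
    // (e.g. a default visual listed last) may never be chosen.
    if (vis[i].depth == depth && vis[i].c_class == c_class) {
      result = vis[i].visualid;
      break;
    }
  }
  if (vis) XFree(vis);
  return result;
}

int main() {
  Display* dpy = XOpenDisplay(nullptr);
  if (!dpy) return 1;
  // Example: which 2D X server visual would a depth-24 TrueColor FB config map to?
  VisualID vid = FindCompatibleVisual(dpy, 24, TrueColor);
  std::printf("depth-24 TrueColor maps to visual 0x%lx\n", (unsigned long)vid);
  XCloseDisplay(dpy);
  return 0;
}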

Can you send me the output of xdpyinfo from the NoMachine X server and '/opt/VirtualGL/bin/glxinfo -c' from the 3D X server?


Marcello Blancasio

Feb 21, 2020, 6:15:56 AM
to VirtualGL User Discussion/Support
xdpyinfo-nomachine.txt
glxinfo-3d-xserver.txt

DRC

Feb 21, 2020, 2:05:58 PM
to virtual...@googlegroups.com

OK, I think that the proximal cause of the problem is that the default visual is listed last in the NoMachine X server.  VirtualGL maintains a list of X visual attributes for all 2D X server visuals returned by XGetVisualInfo(), in the order that those visuals are returned.  VirtualGL then uses that attribute list to find a 2D X server visual with the appropriate depth and class to match a particular GLXFBConfig.  Thus, in most cases, any TrueColor visual will be an appropriate match for a given GLXFBConfig, and VirtualGL will never use visuals at the end of the visual list.

Do you know why the default visual is last in the NoMachine X server?  I can't find any way to make TurboVNC do that.
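
If it helps, a quick standalone check along these lines (my own sketch, not part of VirtualGL) lists the visuals in XGetVisualInfo() order and flags the default one, so you can compare the ordering on the NoMachine and TurboVNC X servers:

// Quick diagnostic sketch: list the 2D X server's visuals in the order
// XGetVisualInfo() returns them and flag the default visual, to confirm
// whether it really comes last.
// Build (assumed): g++ visual_order.cpp -o visual_order -lX11
#include <X11/Xlib.h>
#include <X11/Xutil.h>
#include <cstdio>

int main() {
  Display* dpy = XOpenDisplay(nullptr);
  if (!dpy) return 1;
  int screen = DefaultScreen(dpy);
  VisualID def = XVisualIDFromVisual(DefaultVisual(dpy, screen));

  XVisualInfo tmpl = {};
  tmpl.screen = screen;
  int n = 0;
  XVisualInfo* vis = XGetVisualInfo(dpy, VisualScreenMask, &tmpl, &n);
  for (int i = 0; i < n && vis; ++i) {
    std::printf("%3d: visual 0x%02lx  depth %2d  class %d%s\n", i,
                (unsigned long)vis[i].visualid, vis[i].depth, vis[i].c_class,
                vis[i].visualid == def ? "   <-- default visual" : "");
  }
  if (vis) XFree(vis);
  XCloseDisplay(dpy);
  return 0;
}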


Marcello Blancasio

Feb 21, 2020, 3:55:45 PM
to virtual...@googlegroups.com
It is likely due to some changes to the X server core code. I'll look for a way to move the default visual back to the first position; I'm quite confident that will fix the problem. I'll let you know. Thank you very much.

DRC

Feb 21, 2020, 5:52:05 PM
to virtual...@googlegroups.com

OK, let me know if you are unable to do that, because it should be possible to re-order VirtualGL's visual table to give the default visual precedence.  I would rather not have to do that, however, unless this same problem can be shown to affect multiple X servers.

Marcello Blancasio

Feb 25, 2020, 10:13:02 AM
to VirtualGL User Discussion/Support
I managed to do that, and I can confirm that re-ordering the visuals does indeed fix the problem.
Thanks for your help.

-
M.