Could not map pixel buffer object


Alexandre Skrzyniarz

Apr 6, 2023, 1:49:02 PM
to VirtualGL User Discussion/Support
Hello.

I'm quite new to VirtualGL.

I'm trying to build a virtual machine that will serve an old application over VNC. That application needs video acceleration.

I use a libvirt/qemu guest with PCI passthrough to expose a headless Tesla T4 card to the virtual machine.

The virtual machine runs Debian 10, with the nvidia-tesla-470-driver package from the distribution.

I run an Xorg server on :0 which, according to nvidia-smi, actually uses the T4 card. A lightdm service runs on this X session.

I run a TurboVNC session on :1 to access the virtual machine.

I followed the Headless nVidia Mini How-To, then the documentation (chapter 6). For testing purposes, I didn't restrict VirtualGL usage to the vglusers group. The XTEST extension is enabled, as I plan to use a VNC server.

According to the sanity test with glxinfo, the renderer string is "Tesla T4/PCIe/SSE2", which is fine. According to the same tool, there are numerous 24-bit visuals with pbuffer support.
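For reference, that sanity check can be scripted along these lines (a sketch; the path assumes a default VirtualGL install under /opt/VirtualGL, and the function name is just illustrative):

```shell
# Sketch: confirm that vglrun redirects OpenGL to the passed-through GPU.
# Assumes VirtualGL's bundled glxinfo lives under /opt/VirtualGL/bin.
check_vgl_renderer() {
    vglrun /opt/VirtualGL/bin/glxinfo | grep "OpenGL renderer"
}
# On a healthy setup this should print something like:
#   OpenGL renderer string: Tesla T4/PCIe/SSE2
```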

Unfortunately, I cannot run glxgears.

When running "vglrun glxgears" from a vnc client, a glxgears window flashes briefly, and I have the following error message:

[VGL] ERROR: in readPixels--
[VGL]    515: Could not map pixel buffer object

What is this 515 error?

I tried to run glreadtest, and it runs flawlessly without options. If I run glreadtest with the -alpha option, I get a lot of errors:

X11 Error: BadMatch
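For anyone reproducing this, the two runs amount to something like the following (a sketch; the /opt/VirtualGL/bin path is an assumption from a default install):

```shell
# Sketch: VirtualGL's readback benchmark, first without options,
# then requesting an alpha channel (which triggers BadMatch here).
run_glreadtest() {
    /opt/VirtualGL/bin/glreadtest         # runs flawlessly on this machine
    /opt/VirtualGL/bin/glreadtest -alpha  # floods "X11 Error: BadMatch"
}
```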

I tried to run glxgears with VGL_FORCEALPHA=1 set, but I still get the error.


Could you please help me with this? Any help would be appreciated.

I've attached the vglrun trace (+tr option):
vglrun.trace

Alexandre Skrzyniarz

Apr 6, 2023, 1:51:20 PM
to VirtualGL User Discussion/Support
Edit:

The Debian version is Debian 11.6, not Debian 10.

VirtualGL version: 3.1
TurboVNC version: 3.0.3

DRC

Apr 6, 2023, 10:16:51 PM
to virtual...@googlegroups.com

If you are running both VirtualGL and TurboVNC in the virtual machine, then I'm not sure why that would occur.  It could be an issue with the virtualized GPU or the driver.  Regardless, you should be able to work around it by setting VGL_READBACK=sync in the environment.
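The suggested workaround amounts to something like this (a sketch; VGL_READBACK is a documented VirtualGL environment variable, and the wrapper name is illustrative):

```shell
# Sketch: force synchronous readback instead of PBO readback, which
# sidesteps the "Could not map pixel buffer object" failure.
run_with_sync_readback() {
    VGL_READBACK=sync vglrun "$@"
}
# Usage, inside the TurboVNC session:
#   run_with_sync_readback glxgears
```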

DRC
