You can't really determine which OpenGL renderer is in use by
just looking at dlopen() calls. In a TurboVNC environment,
swrast will be used for any GLX/OpenGL commands sent to the
TurboVNC X server (the "2D X server"), and VirtualGL does send a
couple of GLX/OpenGL commands to the 2D X server to probe its
capabilities. That's probably why swrast is being loaded, but
if everything is working properly, the OpenGL renderer string
should still report that the Intel driver is in use for actual
rendering. Compare the output of /opt/VirtualGL/bin/glxinfo on
the local display with 'vglrun /opt/VirtualGL/bin/glxinfo' in
TurboVNC, or just run /opt/VirtualGL/bin/glxspheres64 (using
vglrun in TurboVNC), which reports the OpenGL renderer string as
well.
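For example, something like this makes the comparison easy (assuming the local display is :0, as mentioned elsewhere in this thread):

DISPLAY=:0 /opt/VirtualGL/bin/glxinfo | grep "OpenGL renderer"   # local display
vglrun /opt/VirtualGL/bin/glxinfo | grep "OpenGL renderer"       # inside the TurboVNC session
vglrun /opt/VirtualGL/bin/glxspheres64                           # also prints the renderer string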
Thank you for the quick tips. I have posted some results at the end of this post, but they seem inconsistent. glxspheres64 reports the correct renderer in each case, and its performance shows the 6x results I was expecting. However, I do not see the same gains in glmark2, even though it also reports the correct renderer in each case. Again, I see a glmark2 score of 2000+ when running it on display :0.
You can confirm that that's the case by running glmark2 on
your local display without VirtualGL and forcing the use of the
swrast driver. I suspect that the difference between swrast and
i965 won't be very great in that scenario, either. (I should
also mention that Intel GPUs aren't the fastest in the world, so
you're never going to see as much of a speedup-- nor as large of
a speedup in as many cases-- as you would see with AMD or
nVidia.)
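For instance, with Mesa you can usually force software rendering on the local display via the LIBGL_ALWAYS_SOFTWARE environment variable (a Mesa feature, so the exact behavior may vary with your driver stack):

glmark2                            # normal run, should use the i965 driver
LIBGL_ALWAYS_SOFTWARE=1 glmark2    # forces Mesa's software rasterizer

If the two scores are close, that would suggest glmark2 isn't really GPU-bound on this system.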
The other thing is, if the benchmark is attempting to measure
unrealistic frame rates-- like hundreds or thousands of frames
per second-- then there is a small amount of per-frame overhead
introduced by VirtualGL that may be limiting that frame rate.
But the reality is that human vision can't usually detect more
than 60 fps anyhow, so the difference between, say, 200 fps and
400 fps is not going to matter to an application user. At more
realistic frame rates, VGL's overhead won't be noticeable.
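To illustrate with purely hypothetical numbers: if VGL added 1 ms of overhead per frame, a benchmark that would otherwise run at 1000 fps (1 ms/frame) would drop to roughly 500 fps, whereas an application running at 60 fps (16.7 ms/frame) would only drop to about 57 fps, which no user would ever notice.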
Performance measurement in a VirtualGL environment is more
complicated than performance measurement in a local display
environment, which is why there's a whole section of the
VirtualGL User's Guide dedicated to it. Basically, since VGL
introduces a small amount of per-frame overhead but no
per-vertex overhead, at realistic frame rates and with modern
server and client hardware, it will not appear any slower than a
local display. However, some synthetic benchmarks may record
slower performance due to the aforementioned overhead.
In the meantime, I have been trying to get the DE as a whole to run with acceleration. I'm recording my findings here as a possible clue to my VGL issues above. In my ~/.vnc/xstartup.turbovnc I use the following:
#normal start - works with llvmpipe and vglrun
#exec startplasma-x11
#VGL start
exec vglrun +wm startplasma-x11
And I also start TurboVNC with:
$vncserver -3dwm
I'm not sure if vglrun, +wm or -3dwm are redundant or working against each other, but I've also tried various combinations to no avail.
Just use the default xstartup.turbovnc script ('rm ~/.vnc/xstartup.turbovnc' and re-run /opt/TurboVNC/bin/vncserver to create it) and start TurboVNC with '-wm startplasma-x11 -vgl'.
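In other words, something like this (after shutting down any existing TurboVNC session):

rm ~/.vnc/xstartup.turbovnc
/opt/TurboVNC/bin/vncserver -wm startplasma-x11 -vgl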
* -3dwm is deprecated. Use -vgl instead. -3dwm/-vgl (or setting '$useVGL = 1;' in /etc/turbovncserver.conf or ~/.vnc/turbovncserver.conf) simply instructs xstartup.turbovnc to run the window manager startup script using 'vglrun +wm'. (See the config sketch after this list.)
* Passing -wm to /opt/TurboVNC/bin/vncserver (or setting '$wm =
{script};' in turbovncserver.conf) instructs xstartup.turbovnc to
execute the specified window manager startup script rather than
/etc/X11/xinit/xinitrc.
* +wm is a feature of VirtualGL, not TurboVNC. Normally, if
VirtualGL detects that an OpenGL application is not monitoring
StructureNotify events, VGL will monitor those events on behalf of
the application (which allows VGL to be notified when the window
changes size, thus allowing VGL to change the size of the
corresponding Pbuffer.) This is, however, unnecessary with window
managers and interferes with some of them (compiz, specifically),
so +wm disables that behavior in VirtualGL. It's also a
placeholder in case future issues are discovered that are specific
to compositing window managers (+wm could easily be extended to
handle those issues as well.)
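For reference, the equivalent settings in ~/.vnc/turbovncserver.conf would look roughly like this (a sketch only; the file is read as Perl, and startplasma-x11 is just the window manager from this thread):

$useVGL = 1;
$wm = "startplasma-x11";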
Interestingly, I had to update the vglrun script to use the full paths to /usr/lib/libdlfaker.so and the other faker libraries; otherwise I see the following in the TurboVNC logs:
ERROR: ld.so: object 'libdlfaker.so' from LD_PRELOAD cannot be preloaded (cannot open shared object file): ignored.
ERROR: ld.so: object 'libvglfaker.so' from LD_PRELOAD cannot be preloaded (cannot open shared object file): ignored.
That said, my desktop is still broken even when these errors disappear.
Could my various issues be to do with KDE?
The LD_PRELOAD issues can be fixed as described here:
https://cdn.rawgit.com/VirtualGL/virtualgl/2.6.3/doc/index.html#hd0012
All of that aside, I have not personally tested the bleeding-edge KDE Plasma release, which is what Arch presumably ships, so I have no idea whether it works with VirtualGL or TurboVNC. The window managers I have tested are listed here:
https://turbovnc.org/Documentation/Compatibility22
#!/bin/sh
dbus-launch gnome-session
I ran glmark2 on the host display normally and then with software rendering; the results are attached at the end of this message. I've included them for completeness rather than to contradict your hunch, but they do tie up with the numbers I see via VGL, so I don't think this is a CPU/VNC issue.
I've tried repeating my experiments using GNOME, in case the issue is with KDE. However, I get the following when trying to run vglrun:
$ vglrun glxspheres64
/usr/bin/vglrun: line 191: hostname: command not found
[VGL] NOTICE: Automatically setting VGL_CLIENT environment variable to
[VGL]    10.10.7.1, the IP address of your SSH client.
Polygons in scene: 62464 (61 spheres * 1024 polys/spheres)
libGL error: failed to authenticate magic 1
libGL error: failed to load driver: i965
GLX FB config ID of window: 0x6b (8/8/8/0)
Visual ID of window: 0x21
Context is Direct
OpenGL Renderer: llvmpipe (LLVM 9.0.1, 256 bits)
17.228616 frames/sec - 17.859872 Mpixels/sec
16.580449 frames/sec - 17.187957 Mpixels/sec
You need to install whatever package provides /usr/bin/hostname for your Linux distribution. That will eliminate the vglrun error, although it's probably unrelated to this problem. (Because of the error, vglrun is falsely detecting an X11-forward SSH environment and setting VGL_CLIENT, which would normally be used for the VGL Transport. However, since VirtualGL auto-detects an X11 proxy environment and enables the X11 Transport, the value of VGL_CLIENT should be ignored in this case.)
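For what it's worth, on Arch the hostname binary is typically provided by the inetutils package, so something like the following should eliminate the vglrun error (verify the package name with your package manager):

sudo pacman -S inetutils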
I honestly have no clue how to proceed. I haven't observed these
problems in any of the distributions I officially support, and I
have no way to test Arch.
Bear in mind that passing -wm and -vgl to the vncserver script does nothing but set environment variables (TVNC_WM and TVNC_VGL) that are picked up by the default xstartup.turbovnc script, so make sure you are using the default xstartup.turbovnc script. It's easy to verify whether the window manager is using VirtualGL. Just open a terminal in the TurboVNC session and echo the value of $LD_PRELOAD. It should contain something like "libdlfaker.so:libvglfaker.so" if VirtualGL is active, and you should be able to run OpenGL applications in the session without vglrun, and those applications should show that they are using the Intel OpenGL renderer.
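For example, from a terminal inside the TurboVNC session:

echo $LD_PRELOAD                                       # should include libdlfaker.so:libvglfaker.so
/opt/VirtualGL/bin/glxinfo | grep "OpenGL renderer"    # run without vglrun; should report the Intel renderer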
As far as performance goes, you haven't mentioned any benchmarks you have tested other than glmark2, and I've explained why that benchmark may be demonstrating lackluster performance. If you have other data points, then please share them.
I honestly have no idea. I am successfully able to use your ~/gnome script on my CentOS 7 and 8 machines (one has an nVidia GPU, the other AMD), as long as I make the script executable. The WM launches using VirtualGL, as expected.
As far as performance, it occurred to me that the Intel GPU
might have slow pixel readback. Try running
'/opt/VirtualGL/bin/glreadtest' and
'/opt/VirtualGL/bin/glreadtest -pbo' on the local display and
post the results. If one particular readback mode is slow but
others are fast, then we can work around that by using
environment variables to tell VirtualGL which mode to use.
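That is, something like this (assuming the local display is :0):

DISPLAY=:0 /opt/VirtualGL/bin/glreadtest
DISPLAY=:0 /opt/VirtualGL/bin/glreadtest -pbo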
DRC