--
You received this message because you are subscribed to the Google Groups "VirtualGL User Discussion/Support" group.
To unsubscribe from this group and stop receiving emails from it, send an email to virtualgl-use...@googlegroups.com.
To view this discussion on the web visit https://groups.google.com/d/msgid/virtualgl-users/af241b34-b994-4127-834b-45d241110a5dn%40googlegroups.com.
No, the UltraVNC suggestion is a red herring. UltraVNC uses the TurboVNC encoder, but it only supports Windows servers, whereas TurboVNC only supports Un*x servers. They are orthogonal solutions, not interchangeable solutions.
In the interest of being methodical, let me enumerate the things that are unexpected and how to possibly diagnose them:
1. 47-52 Mpixels/sec is way too slow for blitting on a local display that has a GPU connected.
- Run glxinfo on the local display. That will tell you whether the local display is using the GPU. (However, this is a shot in the dark, because you shouldn't be able to get 460-540 Mpixels/sec of readback performance without a GPU, unless modern versions of Mesa are a lot faster than I think they are.)
- Run fbxtest (from the VirtualGL source) on the local display. That will tell you whether the raw blitting performance of the local display matches the blitting performance you observed when you ran 'vglrun +pr glxspheres64'.
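As a back-of-the-envelope sanity check on the numbers in item 1, here is a sketch that converts a blitting rate in Mpixels/sec into the frame rate that stage alone can sustain. The 50 Mpixels/sec figure is the middle of the observed 47-52 range; the 1920x1200 window size is an assumption for illustration, so substitute your actual geometry:

```shell
# Convert a blitting rate (Mpixels/sec) into a blit-limited frame rate.
# rate: middle of the observed 47-52 Mpixels/sec range
# width/height: assumed window size -- substitute your own
rate=50
width=1920
height=1200
awk -v r="$rate" -v w="$width" -v h="$height" \
    'BEGIN { printf "blit-limited frame rate: %.1f fps\n", r * 1e6 / (w * h) }'
```

If the frame rate you observe is well below that figure, then something other than the blitting stage is the bottleneck.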
2. Except for the slow blitting performance, the output of 'vglrun +pr glxspheres64' is otherwise expected, since it shows a readback throughput that is consistent with that of other nVidia GPUs and a total throughput that is limited by the slowest pipeline stage (the blitting thread). However, the output of 'vglrun +pr {your_application}' is not expected, since it shows a total throughput that is much less than the throughput of the slowest pipeline stage.
- Is there any way that your application could print the output of glGetString(GL_RENDERER) or implement some other method to verify that the GPU is actually in use? The behavior is odd enough that it makes me suspect that the GPU may not be in use in all cases.
- Is there any way that your application can measure its frame rate when running on the local display without VirtualGL?
- Do you actually observe 10 fps in the application? In other words, does the GUI appear to only be refreshing at that rate?
- If you are using VirtualGL with a 3D X server, then try using it with an EGL device instead (pass '-d /dev/dri/card0' to vglrun) and see if that changes anything.
- If you are using a 3D X server, then make sure that

    Option "HardDPMS" "false"

is in either the Device or the Screen section of xorg.conf. Otherwise, the nVidia drivers will throttle down the GPU to a ridiculously slow level when the screen saver activates. (However, this usually slows the readback performance to a crawl, which you haven't observed, so I doubt that this is the cause of the issue. Also, obviously the screen saver wasn't active when you used VGL on the local display, so I mention HardDPMS mostly to ensure that that issue, which is probably unrelated, doesn't interfere with the observations of the issue you reported.)
3. Passing -vgl to /opt/TurboVNC/bin/vncserver (or setting '$useVGL = 1;' in turbovncserver.conf) basically just runs the entire window manager with vglrun, so all applications launched from the window manager will have VirtualGL preloaded into them. This enables GPU acceleration for the window manager itself, and it also allows you to launch 3D applications with GPU acceleration without invoking them using 'vglrun'. However, this is not expected to have any effect on blitting performance.
- Try running fbxtest in a TurboVNC session launched with -vgl and compare the raw blitting performance to that of a TurboVNC session launched without -vgl.
- If there is a way to verify in your application that the GPU is in use, such as calling glGetString(GL_RENDERER), then please do so and verify that the GPU is in use in both cases (the TurboVNC session launched with -vgl and the TurboVNC session launched without -vgl).
- Try using a non-compositing window manager, such as MATE or Xfce, and verify whether that affects the results.
Otherwise, I don't have a clue. I would need to understand more about your environment, such as:
- How is your application performing 3D rendering? Does it have its own off-screen rendering and blitting mechanism that possibly interferes with VirtualGL's? (Some applications do their own Pbuffer redirection, readback, and blitting, so it is necessary to set VGL_READBACK=none with such applications. Effectively that means that VGL is used only for redirecting the OpenGL context to a GPU. Its readback and transport mechanisms are disabled.)
- What window manager are you using?
- What operating system are you using?
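Regarding applications that perform their own readback and blitting: the VGL_READBACK=none setting mentioned above can be applied as follows ('your_app' is a placeholder for the actual application binary):

```shell
# Disable VirtualGL's readback and transport mechanisms; VGL is then
# used only for redirecting the OpenGL context to the GPU.
# One-shot form ('your_app' is a placeholder):
#
#   VGL_READBACK=none vglrun ./your_app
#
# Or export it so it applies to everything launched from this shell:
export VGL_READBACK=none
```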
Note also that one of the professional services I provide is remote diagnosis and resolution of such issues. Please contact me off-list to discuss this.
DRC