Client side OpenGL


Iar De

Feb 26, 2019, 11:39:16 PM
to VirtualGL User Discussion/Support
I understand that VGL is for offloading OpenGL to an X server with 3D support, but do you have a solution for doing it almost the other way around?
Is it possible, while serving a remote application, to receive OpenGL commands instead of a rendered image from the server, so that the OpenGL portion of the screen gets rendered locally on the client's GPU rather than on the remote server?

DRC

Apr 17, 2019, 2:17:05 PM
to virtual...@googlegroups.com
Sorry for the delayed response. This got lost in my inbox. I don't
understand how what you're describing is any different from indirect
rendering, which has performance and compatibility problems described
here: https://virtualgl.org/About/Background. Not only does
VirtualGL not support indirect rendering; working around the problems
of indirect rendering is one of the big reasons why VirtualGL exists.
When VGL became fully baked in early 2003 (as a proprietary solution,
well over a year before Landmark Graphics agreed to open source it),
most Unix/Linux technical application users were using remote X/GLX with
Exceed 3D, because that was effectively the only way to display 3D
applications remotely with GPU acceleration to Windows clients (although
few people were even using the term "GPU" yet.) Initially, TurboVNC
wasn't a thing, and even when it became a thing, it took years for it to
reach the level of maturity at which it could replace the VGL Transport.
Thus, throughout most of VGL's early existence, it was primarily a
bolt-on technology for remote X, and TurboVNC was primarily used only on
high-latency/low-bandwidth networks where the VGL Transport didn't
perform well.

The long and the short of it: yes, you can do client-side OpenGL rendering
using remote GLX. That doesn't require VirtualGL, but it does have deep
performance and compatibility problems (generally indirect rendering
doesn't support OpenGL > 2.1.) It also introduces more stringent
requirements for Windows clients, since the Windows X servers out there
mostly don't support OpenGL acceleration (I know Cygwin doesn't, for
instance.) Apart from that, you're probably SOL. Supporting
client-side OpenGL rendering without an X server would be hard for any
solution to do, since it would require marshaling all of the OpenGL
commands from server to client (a potential compatibility minefield) and
then figuring out how to composite the OpenGL-rendered pixels on the
client with the X-rendered pixels from the server (a potential
conformance minefield.) Any solution that requires intercepting all of
OpenGL, rather than just GLX, is a non-starter. It would extend the
scope of VirtualGL so much that I would be incapable of singlehandedly
maintaining the project, and if I'm barely able to afford paying my own
bills through my work on The VirtualGL Project, I certainly can't afford
to hire employees.
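For concreteness, forcing indirect rendering from the client side can be sketched as follows. This is only a sketch under two assumptions: a Mesa-based client, where the `LIBGL_ALWAYS_INDIRECT` environment variable forces indirect GLX contexts, and an X server that still permits indirect contexts (modern Xorg disables them unless started with `+iglx`):

```python
import os

# Force Mesa to create indirect GLX contexts, so OpenGL commands travel
# over the X connection instead of being rendered via a direct context.
env = dict(os.environ, LIBGL_ALWAYS_INDIRECT="1")
print(env["LIBGL_ALWAYS_INDIRECT"])

# Launching any GLX application with this environment would stream its
# GL calls over remote X (commented out; requires a running X server):
# import subprocess
# subprocess.run(["glxgears"], env=env)
```

Even when this works, the context is capped at the OpenGL version the GLX protocol can encode, which is the compatibility problem described above.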

falde

Jun 11, 2019, 5:30:45 AM
to VirtualGL User Discussion/Support
There are already solutions that stream OpenGL commands. I will give you a hint: a x1 PCIe 1.0 slot transfers 250 MB/s. That is megabytes, not megabits, so it is about 2 Gbit/s, and that isn't considered enough for anything but a very low-end GPU. How many people do you think have internet connections of 2 Gbit/s or above? And even if you had more than a 2 Gbit/s connection, the latency would be too high. OpenGL as a network protocol requires extremely low latency; it is designed for PCI. InfiniBand is a low-latency network technology designed for supercomputing. I tried running GL pipelining over it, and it lagged. Not as much as pipelining it over Ethernet, but it still lagged a lot.
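A back-of-the-envelope calculation makes the bandwidth gap concrete (a sketch; the 100 Mbit/s home-connection figure is my own assumption for comparison):

```python
# OpenGL command streams expect bus-class bandwidth, not internet-class.
pcie1_x1_bytes_per_s = 250e6       # PCIe 1.0 x1: 250 MB/s per direction
pcie1_x1_gbit = pcie1_x1_bytes_per_s * 8 / 1e9
print(f"PCIe 1.0 x1: {pcie1_x1_gbit:.0f} Gbit/s")   # → PCIe 1.0 x1: 2 Gbit/s

home_gbit = 0.1                    # assumed 100 Mbit/s home connection
print(f"Shortfall vs. even a x1 slot: {pcie1_x1_gbit / home_gbit:.0f}x")
# → Shortfall vs. even a x1 slot: 20x
```

And this comparison ignores latency entirely, which is the harder problem for a synchronous command stream.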

It would be possible to have some scene-graph API that you run over a network and that renders with OpenGL client-side. This is basically what X11 is, and remote X11 can be pretty fast. It doesn't have any 3D graphics, but if you use raw X11 graphics it can be fast. That is what NX does: it runs an X11 proxy on the "terminal server" that does "X11 compression" and another proxy on the client that uses an X11 server which, in the right configuration, uses OpenGL to render. Modern applications use a lot of things on top of X11 that can slow this down a lot, such as antialiased fonts, which are rendered to an image before being sent to the X11 server.

So yes, it is possible to use client-side GPU power. But if you want anything beyond what is fast in X11, you will have to create your own high-level protocol that minimizes traffic between client and server, and that in turn means you will have to port applications to your protocol. You could, for example, take JavaFX and implement the API as a network protocol, and perhaps even run all JavaFX applications without modification with full client-side rendering. There can, of course, be things some applications do that would slow this down a lot.
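As a toy illustration of why a high-level protocol minimizes traffic (this is an invented message format, not any real protocol): a scene-level update describing *what changed* is tiny compared with the per-frame GL commands or pixels it replaces.

```python
import json

# Hypothetical high-level protocol message: move one named scene node.
# The client, which holds the full scene, re-renders locally on its GPU.
scene_update = {"op": "move", "node": "molecule_42", "pos": [1.0, 2.5, -0.3]}
wire = json.dumps(scene_update).encode()
print(len(wire), "bytes per update")  # tens of bytes, vs. megabytes of
                                      # pixels or GL commands per frame
```

The catch, as noted above, is that applications must be written (or ported) to speak such a protocol in the first place.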

Remote Java AWT is another example. Note that it was released when many developers had already moved on to Java Swing; however, it is still maintained by IBM, so they are using it for something.

The most optimized remote application clients are multiplayer game engines. They usually have a protocol designed specifically for that game engine, and many game engines are game-specific; an example of that would be StarCraft.

And of course, another way to do remote applications that often performs far better than any thin-client solution is HTML5 + JavaScript + WebGL. Again, this requires you to port applications to it.

DRC

Jun 11, 2019, 11:49:40 AM
to virtual...@googlegroups.com
It is worth pointing out that DCV, a proprietary solution that IBM developed in parallel with VirtualGL and which is now owned by Amazon, does support OpenGL protocol compression and streaming, but it mainly uses this capability to allow OpenGL rendering to occur on a different server than the application server. When using this capability, the OpenGL commands are intercepted on the application server, compressed, and forwarded to a rendering server, and the OpenGL-rendered frames are compressed on the rendering server and streamed directly to the client. My experience with this feature (which is admittedly 5-6 years out of date) is that it has a lot of performance compromises, much as you describe.

On a side note, I was the one who sent IBM the U. Stuttgart research paper that inspired both VGL and DCV. At the time, Landmark was going down the path of working with IBM to develop the solution, but IBM was not receptive to code contributions that were necessary to make their solution compatible with our applications, so we developed our own. I chose not to implement an OpenGL forwarding feature because it would have introduced compatibility and maintainability problems, as previously described, and the performance issues would have limited the usefulness of such a feature anyhow.