Distributing a GL-accelerated desktop across multiple machines?


bruno schwander

Oct 27, 2025, 10:26:43 AM
to VirtualGL User Discussion/Support
Hello,

Here is my scenario:
- 1 server machine with a GPU (the server PC)
- 2 client mini PCs (PCa and PCb)
All three are on the same high-speed 1 or 20 Gbit network.

The idea would be to display the desktop running on the server PC across both PCa and PCb, with each showing one half of the desktop.

I imagined this could work as follows:
- Run an Xdmx server and 2 TurboVNC servers on the server PC. Thanks to Xdmx, the unified Xdmx server distributes the display across the two TurboVNC servers.
- PCa and PCb each connect to one of the TurboVNC servers.
However, I am not sure whether this would still allow accelerated GL. In this setup, applications would simply be started and would connect to the Xdmx server.

I see that Xdmx has some GLX support, so maybe that works?


Alternatively:
- Run one TurboVNC server on the server PC.
- PCa and PCb each connect to the TurboVNC server and only render half of the desktop. However, I do not know whether TurboVNC can do that, or whether it would be feasible to modify the client and server to add that capability.

I would greatly appreciate any feedback on this idea and on how to accomplish it.

Thanks a lot
Bruno


DRC

Oct 27, 2025, 11:15:29 AM
to virtual...@googlegroups.com

In theory, that setup should work with VirtualGL, because from VirtualGL's point of view, Xdmx would only be used to display the final XImages containing the OpenGL-rendered frames.  (In VirtualGL parlance, Xdmx would be the "2D X server.")  The Pbuffer that VGL creates to hold the rendered images would be unified and as large as the combined area of the two displays.  VGL would perform a single glReadPixels() call and XShmPutImage() call to transport a rendered frame from the Pbuffer to Xdmx, and Xdmx would be responsible for splitting the XImage across the two X displays.
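
In other words, the general pattern would be something like the following (a sketch only; here :3 stands for the Xdmx display and :0 for the GPU-attached X server, both of which are assumed display numbers):

# Xdmx (:3) acts as the 2D X server; VirtualGL redirects OpenGL rendering
# to the GPU-attached 3D X server (:0)
$ DISPLAY=:3 vglrun -d :0 /opt/VirtualGL/bin/glxspheres64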

However, given that Xdmx is unmaintained, it wouldn't surprise me if there are issues that prevent it from working in practice.


DRC

Oct 27, 2025, 11:25:52 AM
to virtual...@googlegroups.com

Update: This seems to work on my machine:

# Start two headless TurboVNC sessions (no window manager or desktop environment)
$ /opt/TurboVNC/bin/vncserver -noxstartup :1
$ /opt/TurboVNC/bin/vncserver -noxstartup :2
# Unify them into a single Xinerama display (:3), taking keyboard/mouse input from :1
$ Xdmx +xinerama :3 -display :1 -display :2 -xinput :1 -input :1 &
# Run an OpenGL application full-screen on the unified display through VirtualGL
$ DISPLAY=:3 vglrun /opt/VirtualGL/bin/glxspheres64 -fs -i

The output of GLXspheres is split across the two sessions, as desired.  The mouse/keyboard input is taken from the first session.

However, I was unable to run a full window manager using that setup.  Both GNOME and Xfce seem to cause Xdmx to segfault.

bruno schwander

Oct 27, 2025, 11:37:19 AM
to virtual...@googlegroups.com

That is very encouraging, though! Thanks a lot for checking this.

I was reading more blogs and how-tos about Xdmx, and it was reportedly running with LXDE.

Maybe GNOME et al. use some heavier features that Xdmx no longer supports?

I wonder whether the very basic twm would work instead of the bloated GNOME?
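
Something along these lines might be worth a try (untested; it reuses the Xdmx display :3 from your commands above and assumes twm is installed):

# Start a minimal window manager on the unified Xdmx display, then the GL app
$ DISPLAY=:3 twm &
$ DISPLAY=:3 vglrun /opt/VirtualGL/bin/glxspheres64 -i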


DRC

Oct 27, 2025, 12:08:08 PM
to virtual...@googlegroups.com

Worth a try, but Xfce is significantly less bloated than GNOME (and still X11-friendly, whereas GNOME is moving quickly toward Wayland), and it doesn't work either.  I don't have any more time to look into it, but feel free to update this thread with your findings.

bruno schwander

Oct 29, 2025, 7:23:19 AM
to virtual...@googlegroups.com

The TightVNC viewer has a -clip parameter that allows it to display just a portion of the server's display. How difficult would it be to support such a feature in TurboVNC? (I did not find anything similar in the docs.)
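
To illustrate, the kind of setup I have in mind would look roughly like this with that viewer (I'm assuming a WxH+X+Y geometry argument for -clip, a 3840x1080 remote desktop, and a placeholder host name, serverpc):

# On PCa: display the left half of the remote desktop
$ vncviewer -clip 1920x1080+0+0 serverpc:1
# On PCb: display the right half
$ vncviewer -clip 1920x1080+1920+0 serverpc:1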

I am asking because, as I dug further into Xdmx, I found that it has been removed from X.org (even though Linux packages still exist) because it was unmaintained, and the relevant ports for dmx, dmxproto, and libdmx have also been removed from FreeBSD (my OS of choice). So I am considering reviving Xdmx by making a new branch of Xlibre, reverting all of the commits that removed it, and then re-adding the FreeBSD port. Obviously it is a big project, but it would have good applications beyond my own; I think distributed X can be very useful and flexible in many situations.

However, maybe the second option I initially mentioned (showing part of the TurboVNC server's display on each client) is much easier to attain with TurboVNC.

My concern is that I do not know whether that is possible to do efficiently. For example, maybe the protocol does not allow it without sending updates for the whole server screen, which would mean sending the whole display to each viewer and clipping on the client side? Or maybe there are other limitations.

What is your top-level feeling on this? Would it be possible? Difficult? Easy? Which areas would require modification? Overall, what kind of effort would be required?

Thanks a lot for your feedback

DRC

Oct 30, 2025, 8:00:32 AM
to virtual...@googlegroups.com

This has come up before:

https://github.com/TurboVNC/turbovnc/issues/141

However, I still don't have a good understanding of why such a solution is necessary.  Why couldn't you just attach multiple monitors to the same client machine?  That works fine with TurboVNC already.
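
For instance, a single client machine with two monitors attached could do something like this (a sketch; it assumes the TurboVNC Viewer's full-screen monitor-spanning option and a placeholder host name, serverpc):

# Span one full-screen viewer window across all monitors attached to this client
$ /opt/TurboVNC/bin/vncviewer -fullscreen -span all serverpc:1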

Splitting a TurboVNC session across multiple in-process RFB servers (each listening on a different port) would be a significant undertaking, because the RFB server code is not in any way designed to handle that.  Minimally it would require:

- extending the vncserver script to handle multiple listening ports (or Unix domain sockets?) for each TurboVNC session,
- extending Xvnc's -rfbport/-rfbunixpath/-rfbunixmode options to allow multiple ports/sockets to be specified,
- providing some way of connecting a particular X RandR output to a particular RFB listener,
- extending all of the RFB code so that it can be N-deep instead of 1-deep,
- figuring out how to handle keyboard/mouse I/O (duplicate it? assign it to only one RFB server?),
- extending all of the encoders so that they can split RFB rectangles that fall on a screen boundary,
- and probably 100 other things that I haven't considered yet.

This would be incredibly disruptive to the stability of TurboVNC.  I embrace the fact that TurboVNC has become a sort of Swiss army knife for remote display, but adopting a new feature means managing it (including ensuring that it doesn't regress as I add other features) in all future TurboVNC releases.  I can't do that unless the feature will be broadly useful to the TurboVNC community, and even then, I am hesitant to adopt any feature that might be a "problem child."

A less disruptive approach might be to take advantage of the fact that TurboVNC can already handle multiple viewer connections to the same session.  Hypothetically, a new RFB extension could be implemented that allows a viewer to specify that it wants to receive only a subset (clipping region) of the remote desktop.  However, I would still need to understand the overall purpose of this solution before I would be willing to adopt it.

DRC

bruno schwander

Oct 30, 2025, 9:53:41 AM
to 'DRC' via VirtualGL User Discussion/Support


On 10/30/2025 13:00, 'DRC' via VirtualGL User Discussion/Support wrote:

> This has come up before:
>
> https://github.com/TurboVNC/turbovnc/issues/141
>
> However, I still don't have a good understanding of why such a solution is necessary.  Why couldn't you just attach multiple monitors to the same client machine?  That works fine with TurboVNC already.

I saw issue 141, but I think its premise of multiple ports is unnecessarily complicated and not required at all.

And the reason for not having multiple monitors on one machine is that it is not my use case. My use case is multiple machines, each driving one display.

Why? Because the client machine in your suggested case would require a good video card (or multiple video cards) with multiple outputs. In my suggested configuration, each client machine is a lightweight VNC client. It can be a small, minimal PC or a single-board system.

> Splitting a TurboVNC session across multiple in-process RFB servers (each listening on a different port) would be a significant undertaking, because the RFB server code is not in any way designed to handle that.  Minimally it would require:
>
> - extending the vncserver script to handle multiple listening ports (or Unix domain sockets?) for each TurboVNC session,
> - extending Xvnc's -rfbport/-rfbunixpath/-rfbunixmode options to allow multiple ports/sockets to be specified,
> - providing some way of connecting a particular X RandR output to a particular RFB listener,
> - extending all of the RFB code so that it can be N-deep instead of 1-deep,
> - figuring out how to handle keyboard/mouse I/O (duplicate it? assign it to only one RFB server?),
> - extending all of the encoders so that they can split RFB rectangles that fall on a screen boundary,
> - and probably 100 other things that I haven't considered yet.
>
> This would be incredibly disruptive to the stability of TurboVNC.  I embrace the fact that TurboVNC has become a sort of Swiss army knife for remote display, but adopting a new feature means managing it (including ensuring that it doesn't regress as I add other features) in all future TurboVNC releases.  I can't do that unless the feature will be broadly useful to the TurboVNC community, and even then, I am hesitant to adopt any feature that might be a "problem child."

I think you are thinking too far :-) I am not suggesting that it would be necessary to split things at that level; I think it might be possible to split the resulting rendered server desktop instead.

> A less disruptive approach might be to take advantage of the fact that TurboVNC can already handle multiple viewer connections to the same session.  Hypothetically, a new RFB extension could be implemented that allows a viewer to specify that it wants to receive only a subset (clipping region) of the remote desktop.  However, I would still need to understand the overall purpose of this solution before I would be willing to adopt it.

That is exactly what I was asking about. The client would just pass the -clip parameter specifying the subset of the remote desktop it wants to render, and the server would send updates to that client accordingly.

The overall purpose is basically to solve every problem that Xdmx was able to solve, but with less overhead (no chatty X11 protocol), better compression and encoding, etc.

For example: remote, distributed clients; walls of monitors; etc.

So basically, it would allow low-powered clients without 3D acceleration to each render a part of the remote desktop while taking advantage of the remote server's 3D hardware. There are also advantages from a network traffic point of view: the server can have a 10 Gbit network connection while the clients are each on just a 1 Gbit connection, and a good switch would handle spreading the bandwidth. That would make it possible to better saturate the server and the network while using lower-end client hardware and lower network requirements, since (in my example) we only need 1 Gbit networking everywhere except in the server room, where the server is hooked to the switch with a 10 Gbit line.


Bruno


DRC

Oct 30, 2025, 5:23:06 PM
to virtual...@googlegroups.com
> On 10/30/2025 13:00, 'DRC' via VirtualGL User Discussion/Support wrote:
>>
>> This has come up before:
>>
>> https://github.com/TurboVNC/turbovnc/issues/141
>>
>> However, I still don't have a good understanding of why such a
>> solution is necessary.  Why couldn't you just attach multiple
>> monitors to the same client machine?  That works fine with TurboVNC
>> already.
>>
> I saw that issue 141 but I think the premise of it with multiple ports
> is unnecessarily complicated and not required at all.
>
> And the reason for not having multiple monitors on one machine is
> because that is not my use case. My use case is, multiple machines
> running each one display.
>
> Why ? because the client machine in your suggested case would require
> a good or multiple video cards with multiple outputs. In my suggested
> configuration, each client machine is a lightweight VNC client. It can
> be a small minimal pc or single-board system.
>
OK, but why is that your suggested configuration?  There are SBCs
available with multiple display outputs, and multi-display-capable GPUs
retail for as low as US$50 (e.g. the nVidia Quadro NVS series.)  I still
don't understand why it is more cost-effective to use multiple
independent computers, particularly when you consider that my labor to
develop the proposed feature would not be free of charge.

Display walls in which each display was controlled by a separate
computer were necessary in the 1990s because each computer didn't have
enough performance to drive the whole thing by itself.  In 2025,
however, even the lowest-end CPUs have multiple cores and can decompress
at least hundreds of megapixels/second with libjpeg-turbo.  If you find
that the performance of the existing TurboVNC Viewer is insufficient to
drive multiple displays, then the most cost-effective way to address
that would be to implement multithreading in the TurboVNC Viewer, which
I am absolutely willing to do (with funding, but it would be a lesser
effort than anything else proposed here.)

I think this discussion would benefit from some concrete numbers to
frame the problem, such as the specific performance and cost constraints
you are encountering.


>> This would be incredibly disruptive to the stability of TurboVNC.  I
>> embrace the fact that TurboVNC has become a sort of Swiss army knife
>> for remote display, but adopting a new feature means managing it
>> (including ensuring that it doesn't regress as I add other features)
>> in all future TurboVNC releases.  I can't do that unless the feature
>> will be broadly useful to the TurboVNC community, and even then, I am
>> hesitant to adopt any feature that might be a "problem child."
>>
> I think you are thinking too far :-) I do not suggest or think it
> would be necessary to split it all at that level, I think it might be
> possible to split the resulting rendered server desktop instead.

That was not clear in your post.  If I don't have enough information to
know exactly what you had in mind, then I will often reply with multiple
possibilities.  That is not overthinking.  It is an attempt to
thoroughly express the problem so as to maximize the throughput of
information in a high-latency environment (such as communicating with
someone on the other side of the world who cannot reply immediately
because our work days do not overlap.)

Your argument is still not landing with me, for the following reasons:

1. The use of VirtualGL and TurboVNC eliminates the 3D acceleration
requirement on the client, irrespective of the number of monitors.  That
is just as true for the existing TurboVNC use case (one client -->
multiple monitors) as it would be for your proposed use case (multiple
clients --> one monitor each.)

2. The primary bottleneck is JPEG compression/decompression, not the
network.  Even a 15-year-old desktop CPU can decompress at 200
Mpixels/sec, which is enough to feed two 2k monitors at 40+ fps using a
single thread.  Multithreading would obviously improve that situation.

3. The "Perceptually Lossless" JPEG mode in TurboVNC compresses at about
2 bits/pixel, give or take, so you would have to have a CPU capable of
processing more than 500 Mpixels/sec to fill up a 1-gigabit pipe.  Also,
you could simply reduce the JPEG quality a tiny bit if you're running
into network constraints.  Quality 95 is probably overkill for
perceptual losslessness.  The best studies I've read on the topic say
that the inflection point is closer to Quality 90.  (See the quick
arithmetic check after this list.)
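
A quick back-of-the-envelope check of the figures in points 2 and 3 (assuming "2k" above means roughly 1920x1080 screens and using the ~2 bits/pixel estimate):

# Screen updates/sec for two 1920x1080 monitors at 200 Mpixels/sec of decompression
$ echo "scale=1; 200000000 / (2 * 1920 * 1080)" | bc
48.2
# Pixel rate needed to saturate a 1 Gbit/s link at ~2 bits/pixel
$ echo "1000000000 / 2" | bc
500000000

That is, roughly 48 updates/sec across two screens with a single decoding thread, and about 500 Mpixels/sec of compression to fill a 1-gigabit pipe.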

Again, I think that the discussion would benefit from some concrete numbers.


bruno schwander

Oct 30, 2025, 6:30:54 PM
to 'DRC' via VirtualGL User Discussion/Support
OK, here is a very simple example of the low-end render-client use case:

You can place a client PC at the end of a 2000 m, 1 Gbit fiber-optic network cable with no problem and at minimal cost.

You can NOT hook a display up to a 2000 m HDMI cable, because no such thing exists. There are boxes that try to do that, but they have issues and are as expensive as a PC.

My use case is much more flexible than a PC with multiple monitor outputs. It also allows repurposing essentially free PCs as network displays, and those PCs rarely have 3, 4, or 16 video output ports. However, it is quite easy to gather 8 or 16 mini PCs with 1 or 2 video output ports each.

Anyway, I see that you do not see the point of this, so I am not going to try to convince you. Besides, my line of questioning was not about enticing you to implement it, but rather about gathering a bit of feedback and your opinion on the feasibility and difficulty of implementing it, so that I can decide what to do about it and how.


Thanks a lot for all the feedback and experiments, cheers!

Well, I gave the example of the other VNC client that has a -clip command-line parameter; sorry if that was not clear.