In theory, that setup should work with VirtualGL, because from VirtualGL's point of view, Xdmx would only be used to display the final XImages containing the OpenGL-rendered frames. (In VirtualGL parlance, Xdmx would be the "2D X server.") The Pbuffer that VGL creates to hold the rendered images would be unified and as large as the combined area of the two displays. VGL would perform a single glReadPixels() call and XShmPutImage() call to transport a rendered frame from the Pbuffer to Xdmx, and Xdmx would be responsible for splitting the XImage across the two X displays.
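As a rough illustration of that readback/transport step (a minimal sketch, not VirtualGL's actual code; assume an OpenGL context is current on a W x H Pbuffer, and that dpy/win/gc belong to the 2D X server, i.e. Xdmx in this scenario):

#include <GL/gl.h>
#include <X11/Xlib.h>
#include <X11/extensions/XShm.h>
#include <sys/ipc.h>
#include <sys/shm.h>

void transport_frame(Display *dpy, Window win, GC gc, int W, int H)
{
    XShmSegmentInfo shminfo;
    XImage *img = XShmCreateImage(dpy, DefaultVisual(dpy, DefaultScreen(dpy)),
                                  24, ZPixmap, NULL, &shminfo, W, H);

    /* Back the XImage with a shared memory segment (error checks omitted) */
    shminfo.shmid = shmget(IPC_PRIVATE, img->bytes_per_line * H,
                           IPC_CREAT | 0600);
    shminfo.shmaddr = img->data = shmat(shminfo.shmid, NULL, 0);
    shminfo.readOnly = False;
    XShmAttach(dpy, &shminfo);

    /* One readback of the whole unified Pbuffer ... */
    glReadPixels(0, 0, W, H, GL_BGRA, GL_UNSIGNED_BYTE, img->data);

    /* ... and one put; Xdmx would be responsible for splitting this
     * across its back-end displays.  (A real implementation also has
     * to account for OpenGL's bottom-up row order.) */
    XShmPutImage(dpy, win, gc, img, 0, 0, 0, 0, W, H, False);
    XSync(dpy, False);
}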
However, given that Xdmx is unmaintained, it wouldn't surprise me if there are issues that prevent it from working in practice.
Update: This seems to work on my machine:
$ /opt/TurboVNC/bin/vncserver -noxstartup :1
$ /opt/TurboVNC/bin/vncserver -noxstartup :2
$ Xdmx +xinerama :3 -display :1 -display :2 -xinput :1 -input :1 &
$ DISPLAY=:3 vglrun /opt/VirtualGL/bin/glxspheres64 -fs -i
The output of GLXspheres is split across the two sessions, as desired. The mouse/keyboard input is taken from the first session.
However, I was unable to run a full window manager using that setup. Both GNOME and Xfce seem to cause Xdmx to segfault.
That is very encouraging, though! Thanks a lot for checking this.
I was reading more blogs and how-tos about Xdmx, and it was reportedly running with LXDE. Maybe GNOME et al. use some other heavy machinery that is no longer supported by Xdmx? I wonder whether the very basic twm would work instead of the bloated GNOME.
Worth a try, but Xfce is significantly less bloated than GNOME (and still X11-friendly, whereas GNOME is moving quickly toward Wayland), and it doesn't work either. I don't have any more time to look into it, but feel free to update this thread with your findings.
The TightVNC viewer has a -clip parameter that allows it to display just a portion of the server's display. How difficult would it be to support such a feature in TurboVNC? (I did not find anything similar in the docs.)
I am asking because, as I dug further into Xdmx, I found that it has been removed from X.org (even though Linux packages still exist) because it was unmaintained, and the corresponding FreeBSD ports for dmx, dmxproto, and libdmx have also been removed (FreeBSD being my OS of choice). So I am considering reviving Xdmx by making a new branch of XLibre and reverting all of the commits that removed it, then re-adding the FreeBSD port. Obviously it's a big project, but it would have good applications beyond my own project; I think distributed X can be very useful and flexible in many situations.
However, maybe the second option I initially mentioned below (showing part of the TurboVNC server's display) is much easier to attain with TurboVNC.
My concern is that I do not know whether that is possible in an efficient way. For example, maybe the protocol does not allow it without sending updates for the whole server screen, which would mean sending the entire display to each viewer and clipping client-side? Or maybe there are other limitations.
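For reference, the RFB framebuffer update request already carries a rectangle of interest; its wire layout (RFC 6143, section 7.5.3) is, in C terms, something like this (the struct and field names here are illustrative, not from any particular codebase):

#include <stdint.h>

#pragma pack(push, 1)
typedef struct {
    uint8_t  msg_type;     /* always 3 (FramebufferUpdateRequest) */
    uint8_t  incremental;  /* non-zero: send only changed areas */
    uint16_t x, y;         /* origin of the requested rectangle */
    uint16_t w, h;         /* size of the requested rectangle */
} rfbFramebufferUpdateRequest;  /* multi-byte fields big-endian on the wire */
#pragma pack(pop)

So the protocol can express a sub-rectangle per request; whether a given server implementation strictly restricts its updates to that rectangle is a separate question.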
What is your top-level feeling on this: would it be possible? Difficult? Easy? Which areas would require modification, and overall, what kind of effort would be required?
Thanks a lot for your feedback.
This has come up before:
https://github.com/TurboVNC/turbovnc/issues/141
However, I still don't have a good understanding of why such a solution is necessary. Why couldn't you just attach multiple monitors to the same client machine? That works fine with TurboVNC already.
Splitting a TurboVNC session across multiple in-process RFB servers (each listening on a different port) would be a significant undertaking, because the RFB server code is not in any way designed to handle that. Minimally it would require:
- extending the vncserver script to handle multiple listening ports (or Unix domain sockets?) for each TurboVNC session (see the sketch after this list),
- extending Xvnc's -rfbport/-rfbunixpath/-rfbunixmode options to allow multiple ports/sockets to be specified,
- providing some way of connecting a particular X RandR output to a particular RFB listener,
- extending all of the RFB code so that it can be N-deep instead of 1-deep,
- figuring out how to handle keyboard/mouse I/O (duplicate it? assign it to only one RFB server?),
- extending all of the encoders so that they can split RFB rectangles that fall on a screen boundary,
- and probably 100 other things that I haven't considered yet.
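Just to give a feel for the first item alone, a hypothetical multi-listener loop in plain POSIX sockets might start like this (this is not TurboVNC code, and it ignores everything that makes the real change hard: tying listeners to RandR outputs, N-deep RFB state, input routing, encoders):

#include <arpa/inet.h>
#include <netinet/in.h>
#include <poll.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

#define NSCREENS 2

int main(void)
{
    struct pollfd pfds[NSCREENS];

    /* Hypothetically, one RFB listening socket per screen */
    for (int i = 0; i < NSCREENS; i++) {
        int fd = socket(AF_INET, SOCK_STREAM, 0);
        struct sockaddr_in addr;
        memset(&addr, 0, sizeof(addr));
        addr.sin_family = AF_INET;
        addr.sin_addr.s_addr = htonl(INADDR_ANY);
        addr.sin_port = htons(5901 + i);
        bind(fd, (struct sockaddr *)&addr, sizeof(addr));
        listen(fd, 5);
        pfds[i] = (struct pollfd){ .fd = fd, .events = POLLIN };
    }

    for (;;) {
        poll(pfds, NSCREENS, -1);
        for (int i = 0; i < NSCREENS; i++)
            if (pfds[i].revents & POLLIN) {
                int client = accept(pfds[i].fd, NULL, NULL);
                /* ...hand 'client' to an RFB server instance tied to
                 * RandR output i (the part that doesn't exist)... */
                close(client);
            }
    }
}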
This would be incredibly disruptive to the stability of TurboVNC. I embrace the fact that TurboVNC has become a sort of Swiss army knife for remote display, but adopting a new feature means managing it (including ensuring that it doesn't regress as I add other features) in all future TurboVNC releases. I can't do that unless the feature will be broadly useful to the TurboVNC community, and even then, I am hesitant to adopt any feature that might be a "problem child."
A less disruptive approach might be to take advantage of the fact that TurboVNC can already handle multiple viewer connections to the same session. Hypothetically, a new RFB extension could be implemented that allows a viewer to specify that it wants to receive only a subset (clipping region) of the remote desktop. However, I would still need to understand the overall purpose of this solution before I would be willing to adopt it.
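Purely as a sketch of what such an extension message might look like on the wire (every name and number below is hypothetical; nothing like this exists in the RFB spec or in TurboVNC):

#include <stdint.h>

#define rfbSetClipRegion 200  /* hypothetical client-to-server message type */

#pragma pack(push, 1)
typedef struct {
    uint8_t  msg_type;  /* rfbSetClipRegion */
    uint8_t  pad;
    uint16_t x, y;      /* origin of this viewer's clip region */
    uint16_t w, h;      /* size of this viewer's clip region */
} rfbSetClipRegionMsg;  /* multi-byte fields big-endian, per RFB convention */
#pragma pack(pop)

A viewer would advertise support (e.g. via a pseudo-encoding in SetEncodings), send this once after connecting, and the server would then restrict that connection's framebuffer updates to the region.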
DRC
> This has come up before:
> https://github.com/TurboVNC/turbovnc/issues/141
>
> However, I still don't have a good understanding of why such a solution is necessary. Why couldn't you just attach multiple monitors to the same client machine? That works fine with TurboVNC already.
I saw issue 141, but I think its premise of multiple ports is unnecessarily complicated and not required at all.
And the reason for not having multiple monitors on one machine is that that is not my use case. My use case is multiple machines, each driving one display. Why? Because the client machine in your suggested setup would require a good video card (or multiple cards) with multiple outputs. In my suggested configuration, each client machine is just a lightweight VNC client; it can be a small minimal PC or a single-board system.
> A less disruptive approach might be to take advantage of the fact that TurboVNC can already handle multiple viewer connections to the same session. Hypothetically, a new RFB extension could be implemented that allows a viewer to specify that it wants to receive only a subset (clipping region) of the remote desktop. However, I would still need to understand the overall purpose of this solution before I would be willing to adopt it.
That is exactly what I was asking about. The client would just pass the -clip parameter asking for the subset of the remote desktop it wants to render, and the server would send updates to that client accordingly.
The overall purpose is basically to solve every problem that Xdmx was able to solve, but with less overhead than the chatty X11 protocol, better compression and encoding, etc.: for example, remote, delocalized clients, walls of monitors, and so on.
So basically, it would allow low-powered clients without 3D acceleration to each render a part of the remote desktop while taking advantage of the remote server's 3D hardware. There are also advantages from the network traffic point of view: the server can have a 10Gbit network connection while each client has just a 1Gbit connection, and a good switch would handle spreading the bandwidth. That would make it possible to saturate the server and the network more fully while using lower-end hardware for the clients and lower network requirements overall, since (in my example) we only need 1Gbit networking everywhere except in the server room, where the server is hooked to the switch with a 10Gbit line.
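To put hypothetical numbers on that: eight viewers each pulling a sustained ~1Gbit/s add up to ~8Gbit/s at the server, which fits within a single 10Gbit uplink while every client stays on commodity 1Gbit hardware.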
Bruno