This should be fixed in the latest dev/3.0 evolving pre-release build of
the TurboVNC Server, but please let me know if it isn't. In addition to
fixing a couple of errors I made in the process of porting the
overhauled congestion control algorithms from TigerVNC 1.10.x into
TurboVNC 3.0, I also revised the algorithms so that they treat an ETA of
<= 0 as uncongested. TigerVNC can get away with not doing that because
it has a "frame timer" that, by default, wakes up every 1/60 sec and
attempts to send any framebuffer updates that were previously deferred
(due to congestion or otherwise). In the case of TurboVNC, however,
reporting congestion without setting the congestion timer results in
updates not being delivered in a timely manner. (Basically, the
undelivered updates languish until mouse input is received, which
triggers a new framebuffer update in order to deliver the updated
cursor position.)
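
To clarify the revision, here is a minimal C++ sketch of the decision
logic. The names (getUncongestedETA(), congestionTimer,
sendFramebufferUpdate()) are hypothetical simplifications for
illustration, not the actual TurboVNC source:

  // Hypothetical, simplified interfaces for illustration only
  struct Congestion {
    // Estimated milliseconds until the connection is uncongested;
    // <= 0 means that it already is
    int getUncongestedETA();
  };

  struct Timer {
    void start(int ms);  // fire once after 'ms' milliseconds
  };

  void sendFramebufferUpdate();  // send the deferred update now

  void writeDeferredUpdate(Congestion& congestion, Timer& congestionTimer)
  {
    int eta = congestion.getUncongestedETA();

    if (eta <= 0) {
      // Revised behavior: treat an ETA of <= 0 as uncongested, and send
      // the deferred framebuffer update immediately.
      sendFramebufferUpdate();
      return;
    }

    // Still congested: arm the congestion timer so that the deferred
    // update is retried when the ETA elapses, rather than languishing
    // until the next mouse event forces an update.
    congestionTimer.start(eta);
  }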
Please also let me know if the performance on high-latency/low-bandwidth
networks doesn't meet your expectations. I test this stuff using two
Linux machines, both of which use the built-in Linux traffic control
mechanism to emulate a 200 ms/100 Mbit WAN connection (see the netem
sketch below). With the
TurboVNC Viewer maximized on a 1920x1200 (2-megapixel) screen and using
the "Tight + Low-Quality JPEG" preset, I execute
vglrun /opt/VirtualGL/bin/glxspheres64 -fs -i
in the TurboVNC session and
tcbench -lb -mx 100 -s 200
on the client to both drive continuous mouse input into GLXspheres and
measure the end-to-end frame rate. With this setup, I measure about 35
frames/sec with TurboVNC 2.2.6, about 50 frames/sec with the tip of the
dev branch, and about 30 frames/sec with TigerVNC 1.10.x. The reduced
frame rate with TigerVNC may be due to the aforementioned frame timer.
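
For reference, the WAN emulation looks something like the following.
(This is a sketch rather than an exact recipe; "eth0" is a placeholder
for the actual network interface.)

  # Run on both machines, so that the 100 ms one-way delay totals
  # 200 ms round trip:
  sudo tc qdisc add dev eth0 root netem delay 100ms rate 100mbit

  # Remove the emulation afterward:
  sudo tc qdisc del dev eth0 root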
I also observed random black rectangles in the middle of the spheres
when using TigerVNC, due to its partial framebuffer update delivery
"feature." (Frankly, I do not like that feature, because it effectively
causes 3D applications running in VirtualGL to appear as if they are not
double-buffered.) I would love to have an open dialogue with the
TigerVNC developers regarding these issues, particularly if that
dialogue included best practices for benchmarking the congestion control
algorithms, but given their unwillingness to answer a simple question
regarding the algorithms, I am not hopeful. I think it best if we just
test things ourselves and thus build confidence in TurboVNC's
implementation.
DRC
On 10/22/21 3:42 PM, DRC wrote: