I'm developing an application that needs vsync to avoid tearing, but I'm having some problems.
Here is the (really simplified) code structure of my application:
Code:
...
while ( !mViewer->done() ) {
    mGraph->Update(...);   // application-side scene graph update
    mViewer->frame();      // events, update, cull and draw for one frame
}
I've noticed that the frame() function blocks when vsync is enabled.
This means that I call frame(), I stay blocked for something like 20ms (50Hz PAL), and then I must finish the Update call within the vsync period, otherwise I won't be able to draw a new frame (the graphics card draws the old contents of the frame buffer, without the changes performed in the update function; those changes only show up one frame later).
As you can imagine this is a big problem for real-time applications, because I'm introducing 20ms of delay.
Is there a way to synchronize the frame call without using vsync? Or can I decompose the frame function, to separate the functions that operate on the graph from those that perform the rendering?
The second solution could help me, because I would be able to operate on the graph with mGraph->Update(...) even while the part of frame() that writes the frame buffer is blocked.
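For what it's worth, frame() in osgViewer is essentially a wrapper around a handful of public viewer calls, so a decomposition along these lines may already be available. A hedged sketch, assuming osgViewer's public advance(), eventTraversal(), updateTraversal() and renderingTraversals() methods (everything else is from the snippet above):
Code:

mViewer->realize();
while ( !mViewer->done() ) {
    mViewer->advance();             // bump the frame stamp
    mViewer->eventTraversal();      // process window/GUI events
    mGraph->Update(...);            // graph work, before any GL call is made
    mViewer->updateTraversal();     // run OSG update callbacks
    mViewer->renderingTraversals(); // cull + draw; only this part should
                                    // end up blocking on the vsync'd swap
}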
I hope I've explained my problem clearly.
Thank you!
Cheers,
Giovanni
The sync actually blocks on the first OpenGL call. So your OSG update and cull stages will run, then you will block on the draw stage until the vsync. Your problem is actually worse than 20ms without your knowing it.
For Nvidia cards there is a built-in 2 frames of latency, so even after your sync you won't see the image you updated come out of the DVI until 2 render frames later.
In order to do what you want, you will need some expensive frame-lock hardware with external syncs to run at a multiple of the frame rate.
Hi,
I mention this because I have used my system both with and without vsync, and I noticed that running with vsync I don't get tearing, but I measure an extra 40ms of delay in the total closed loop compared to the free-run case (no vsync). (By total closed loop I mean image rendering time + transmission time + frame processing time + results time.)
Thank you!
Cheers,
Giovanni
Nvidia chips use a very long pipeline, and the GPUs like processing data; when they are not processing data they have a hard time performing. In order to keep the performance up, Nvidia introduces the 2 frames of latency to keep the pipeline full. Most likely the 40ms of delay you are noticing is your 2 frames of latency. You will probably only notice the 2-frame latency when you are vsync'ed. When you are not vsync'ed you will get the tearing, but you are also running faster than the sync rate and probably won't notice that built-in latency.
Hope that helps...
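If the driver's queued frames turn out to be the issue, one commonly suggested (if blunt) mitigation is to drain the GL pipeline once per frame so the queue can never build up. A hedged sketch, assuming osg::GraphicsContext::SwapCallback; this is not something the posts here prescribe, and it trades throughput for latency:
Code:

#include <osg/GL>
#include <osg/GraphicsContext>

// Drain the pipeline right after every buffer swap so the driver cannot
// queue additional frames behind our back. Measure before adopting.
struct FinishSwapCallback : public osg::GraphicsContext::SwapCallback
{
    virtual void swapBuffersImplementation(osg::GraphicsContext* gc)
    {
        gc->swapBuffersImplementation(); // the normal buffer swap
        glFinish();                      // block until the GPU is idle,
                                         // capping the queue at ~1 frame
    }
};

// usage, once per realized graphics context:
//   gc->setSwapCallback(new FinishSwapCallback);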
> For Nvidia cards there is a built-in 2 frames of latency, so even after
> your sync you won't see the image you updated come out of the DVI until 2
> render frames later.
Where does this information come from, Dennis? Where is this delay happening? I doubt it's triple buffering, since the extra memory would have to be accounted for, and it makes tearing mostly impossible (as on the Mac).
So you believe the GL command queue is buffered/delayed somewhere? Doesn't that have huge implications for things like glGet, making them impossible to use without at least halving the frame rate?
Bruce
Yes, we found this through internal testing; Nvidia later confirmed it. This isn't related to double or triple buffering either. The pipeline, as it was explained to me, is similar to a brake line in a car: everything works well unless you inject an air bubble into the brake line. That would be similar to issuing certain glGet commands. The driver will tell the GPU to stop its processing so that it can handle your glGet request. So for real-time programming you really need to be aware of this -- and don't do it. Depending upon the type of readback you are performing, you will introduce a momentary lag in the system, because the GPU has to stop everything it is doing to respond back to you.
glReadPixels behaves a little differently: as long as the pixel format aligns with the frame buffer format, the driver can just DMA the framebuffer back to the CPU. If your pixel formats are not aligned, you will get bad performance again, because the GPU will have to stop what it is doing and reformat the data to send back.
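A hedged illustration of that last point (the exact fast-path format depends on the card and driver; GL_BGRA/GL_UNSIGNED_BYTE matches the native layout of many desktop framebuffers, but verify on your own hardware):
Code:

#include <osg/GL>
#include <vector>

// Read the framebuffer back in a layout it already uses, so the driver can
// DMA it straight to the CPU. A mismatched format/type pair (e.g. GL_RGB
// with GL_FLOAT) forces the GPU to stop and repack the data first.
void readBackFrame(int width, int height, std::vector<unsigned char>& pixels)
{
    pixels.resize(size_t(width) * size_t(height) * 4);
    glPixelStorei(GL_PACK_ALIGNMENT, 1);
    glReadPixels(0, 0, width, height, GL_BGRA, GL_UNSIGNED_BYTE, &pixels[0]);
}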
Bruce
If you are interested in seeing what we did WRT latency testing, go to this link:
http://www.ccur.com/Libraries/docs_pdf/AMRDEC_PC_Scenegeneration_whitepaper.pdf
Bruce
Giovanni, what you're seeing is typical behavior when syncing to the vertical retrace.
To maintain real-time at 50Hz, each frame must be rendered in less than 20ms (1/50).
If a frame just happens to take 21ms, then the buffer swap will block for 19ms before actually swapping buffers.
Hence, your frame rate is cut completely in half (21ms + 19ms = 40ms per frame = 25Hz).
And it also introduces nasty temporal aliasing.
I'm not aware of another way to synchronize with the regularity of the monitor retrace.
I'm guessing that deterministic hardware is required, given the nature of something like OpenSceneGraph on a PC.
You can achieve near real-time through things like database tuning and pre-emptive rendering, but nothing will guarantee actual real time.
Extra hardware is not needed to run at multiples of the frame rate.
Just set the retrace rate to the desired frame rate and run sync'd.
Boom - multiples of the frame rate.
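(A hedged aside: you can also get an integer division of the retrace rate without touching the video mode, via the swap-interval extension. GLX_SGI_swap_control is sketched below; Windows has the analogous WGL_EXT_swap_control. This is a different knob from the one described above, not a substitute for it.)
Code:

#include <GL/glx.h>   // pulls in glxext.h on current systems

// Swap on every second retrace: a 60Hz display then gives a hard,
// tear-free 30Hz frame rate.
void swapEverySecondRetrace()
{
    PFNGLXSWAPINTERVALSGIPROC swapInterval = (PFNGLXSWAPINTERVALSGIPROC)
        glXGetProcAddress((const GLubyte*)"glXSwapIntervalSGI");
    if (swapInterval)
        swapInterval(2);   // interval 1 = every retrace, 2 = every second one
}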
BTW, there's something about this alleged '2 frame latency' charge that just doesn't pass the smell test.
Mostly: ATI sure as hell wouldn't let 'em!
Bob
The first OpenGL call will block if the back buffer is not available at that point.
The swapbuffers call will block if there is already a pending swap queued (by the last call to swapbuffers).
So, depending on the circumstances, both can block.
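One way to see which call is absorbing the wait on a given driver is to time the stages separately; a hedged sketch, reusing the viewer's public traversal methods:
Code:

#include <osg/Timer>
#include <osgViewer/Viewer>
#include <iostream>

// Split frame() into its public pieces and time them, to see where the
// vsync wait actually lands on this particular driver.
void timedFrame(osgViewer::Viewer& viewer)
{
    const osg::Timer* timer = osg::Timer::instance();
    osg::Timer_t t0 = timer->tick();
    viewer.advance();
    viewer.eventTraversal();
    viewer.updateTraversal();
    osg::Timer_t t1 = timer->tick();
    viewer.renderingTraversals();   // cull + draw + (possibly queued) swap
    osg::Timer_t t2 = timer->tick();
    std::cout << "update "      << timer->delta_m(t0, t1) << "ms, "
              << "render+swap " << timer->delta_m(t1, t2) << "ms" << std::endl;
}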
Lilith
The 2-frame latency thing didn't smell right to me either, having used SGI machines extensively in the past. However, after doing various closed-loop latency tests, I'm convinced that the latency of modern Nvidia cards is higher than I would have expected. The 2-frame latency when OpenGL is vsync'ed appears to account for this.
My tests involve an analog camera feeding into a framegrabber board, which then downloads to texture and is displayed on the screen. Overlaid on that is a frame time in seconds and milliseconds. By pointing the camera at the screen you can create a tunnel of frame times, and the time between them is the closed-loop latency. This measurement technique is probably accurate to ~10ms.
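For the curious, the overlay half of such a test is straightforward; a hedged sketch, assuming osg::Drawable::UpdateCallback (OSG 2.x/3.x) and osgText (the camera and framegrabber side is hardware-specific and omitted):
Code:

#include <osg/Timer>
#include <osgText/Text>
#include <iomanip>
#include <sstream>

// Re-stamp the on-screen text with the current time, in seconds and
// milliseconds, once per update traversal.
struct StampTime : public osg::Drawable::UpdateCallback
{
    virtual void update(osg::NodeVisitor*, osg::Drawable* drawable)
    {
        osgText::Text* text = static_cast<osgText::Text*>(drawable);
        std::ostringstream os;
        os << std::fixed << std::setprecision(3)
           << osg::Timer::instance()->time_s();   // seconds.milliseconds
        text->setText(os.str());
    }
};

// usage:
//   text->setDataVariance(osg::Object::DYNAMIC);
//   text->setUpdateCallback(new StampTime);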
With vsync on, the latency is around 30-40ms worse than when it is off. That's about 2 frames' worth of 60Hz video. The draw time is less than a millisecond, so the 2 frames of latency probably amount to only 1-2ms with vsync off.
As for ATI not letting Nvidia get away with it: for both companies the high-end market is first-person-shooter junkies willing to buy multiple graphics cards and SLI them. From what I've read, most of the people who care about latency have been playing with vsync off for over ten years, so they wouldn't notice or care. Also, the graphics benchmarks tend to be run with vsync off, but they normally just measure throughput, so they wouldn't pick up latency anyway.
Both Nvidia and ATI have been known to play dirty tricks with drivers to improve benchmark results in the past. However, I don't think this is one of them, as frames of latency are a feature of DirectX; there is even a setting in the control panel of some Windows drivers to change it. I don't think that setting changes the OpenGL behaviour, though.
Colin.
PS: I've only tested the 7600 and 9400 properly, but I've seen symptoms of the latency on GeForce 8 series cards too. I think the GeForce 5 and 6 didn't have this 'feature', though. Interestingly, I also found some monitors that had over 66 milliseconds of extra latency.
From what I've read, newer monitors tend to have a lot more latency
than the old CRTs did. My TV at home (a Samsung LED DLP) has almost a
half second of latency! It's one of those numbers that I wish more
manufacturers published.
--"J"