Vsync Fps Cap

Sear Sommerfeldt

Aug 3, 2024, 4:37:26 PM
to winthrifengpho

I am creating programs that display rectangles that follow the mouse cursor. Ostensibly OpenGL provides a vsync mechanism in the glXSwapBuffers function, but I have found this to be unreliable. With some card drivers you get vsync; with others you don't. On some you get vsync but also an extra two frames of latency.

But this is not a bug in OpenGL. The spec is intentionally vague: "The contents of the back buffer then become undefined. The update typically takes place during the vertical retrace of the monitor, rather than immediately after glXSwapBuffers is called." The key word being "typically"... basically glXSwapBuffers doesn't promise squat w.r.t. vsync. Go figure.

In my current attempt to solve this basic problem, I guess an initial vsync time and then afterwards assume the phase equals elapsed time MOD 1/(59.85 Hz), which seems to sync up with my current monitor. But this doesn't work so well because I don't actually know the initial phase. So I get one tear. At least it doesn't move around. But what I really need is to just measure the current vsync phase somehow.

No, I don't want to rely on some SGI extension or some other thing that has to be installed to make it work. This is graphics 101. Vsync. Just need a way to query its state. SOME builtin, always-installed API must have this.

After all, you do not own the screen. The system owns the screen; you are only renting some portion of it, and thus are at the mercy of the system. The system deals with vsync; your job is to fill in the image(s) that get displayed there.

Consider Vulkan, which is about as low level as you're going to get these days without actually being the graphics driver. Its WSI interface is explicitly designed to avoid allowing you to do things like "wait until the next vsync".

Its presentation system does offer a variety of modes, but the only one that implementations are required to support is FIFO: strict vsync, but no tearing. Of course, Vulkan's WSI does at least allow you to choose how much image buffering you want. But if you use FIFO with only a double-buffer, and you're late at providing that image, then your swap isn't going to be visible until the next vsync.
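A sketch of the usual selection logic, with a local enum mirroring Vulkan's VkPresentModeKHR values so the snippet stays self-contained; real code would include <vulkan/vulkan.h> and query the supported modes with vkGetPhysicalDeviceSurfacePresentModesKHR:

```c
#include <stdint.h>

/* Values mirror Vulkan's VkPresentModeKHR so this sketch is
 * self-contained; in real code use the types from <vulkan/vulkan.h>. */
typedef enum {
    PRESENT_MODE_IMMEDIATE    = 0, /* no vsync, may tear            */
    PRESENT_MODE_MAILBOX      = 1, /* vsync, latest frame wins      */
    PRESENT_MODE_FIFO         = 2, /* strict vsync, queued frames   */
    PRESENT_MODE_FIFO_RELAXED = 3, /* vsync unless a frame is late  */
} present_mode_t;

/* Prefer MAILBOX (vsync without the FIFO queue latency) when the
 * driver offers it; otherwise fall back to FIFO, the only mode the
 * spec requires every implementation to support. */
present_mode_t choose_present_mode(const present_mode_t *supported, uint32_t count)
{
    for (uint32_t i = 0; i < count; i++)
        if (supported[i] == PRESENT_MODE_MAILBOX)
            return PRESENT_MODE_MAILBOX;
    return PRESENT_MODE_FIFO;
}
```

Note that even here you choose a policy; you still never get to ask "when is the next vsync?" directly.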

A short answer is: vsync used to be popular on computers when video buffering was expensive. Nowadays, with the general use of double-buffered animation, it is less important. I used to get access to vsync from the graphics card on an IBM PC before windowing systems, and would not mind getting VSYNC even now. With double buffering, you still have the risk that your raster scan can occur while blitting the buffer to video memory, so it would be nice to sync that. However, with double buffering you are going to eliminate a lot of the "sparkle" effects and other artifacts from direct video drawing, because you are doing a linear blit instead of individual pixel manipulation.

It's also possible (as the previous poster implied) that both the fact that your buffers exist in video memory, and the idea that the display manager can carefully manage the blits to the screen (compositing), can render these effects nonexistent.

(Obviously this will vary based on almost everything: your CPU speed, GPU speed, how bottlenecked your GPU is, monitor refresh rate, and all sorts of other stuff. But modern graphics drivers are kind of absurdly smart at this kind of scheduling and will do a lot to try to wring absolutely as much performance out of a program as they possibly can.)

I spent more than two hours of my spare time writing and re-writing a five-paragraph technical explanation to explain how this complicated system works in the simplest and clearest way I could figure out because I was trying to be helpful.

In this approach, a game measures the time between frames using a high precision clock, and then you have your game simulation use that time to determine how far ahead to run the simulation. With SDL code, your game loop would look something like this:
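A minimal sketch of that loop in plain C; a stub clock stands in for SDL_GetTicks() so the snippet is self-contained, and object_t, update, and run_frames are illustrative names, not from SDL:

```c
#include <stdint.h>

/* Stub clock standing in for SDL_GetTicks(); a real loop would call
 * SDL_GetTicks() (milliseconds since SDL_Init). */
static uint32_t fake_now_ms;
static uint32_t get_ticks(void) { return fake_now_ms; }

/* Something to simulate: position advanced by velocity. */
typedef struct { double x; double vx; } object_t;

static void update(object_t *o, double dt) { o->x += o->vx * dt; }

/* Variable-timestep loop body: measure elapsed time since the last
 * frame and advance the simulation by exactly that much, so movement
 * speed is independent of frame rate. frame_times_ms feeds the stub
 * clock; a real loop would run until quit. */
double run_frames(object_t *o, const uint32_t *frame_times_ms, int n)
{
    uint32_t last = frame_times_ms[0];
    for (int i = 1; i < n; i++) {
        fake_now_ms = frame_times_ms[i];
        uint32_t now = get_ticks();
        double dt = (now - last) / 1000.0; /* seconds since last frame */
        last = now;
        update(o, dt);
        /* render(o) would go here */
    }
    return o->x;
}
```

The upside is simplicity; the downside, as discussed below, is that a wildly varying dt feeds straight into your physics.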

Under this approach, you declare that your game simulation is only going to run at a particular frame rate, and you instead run your simulation a variable number of times per rendered frame. It looks something like this:
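A minimal sketch of the fixed-timestep accumulator in plain C; SIM_DT_MS and steps_this_frame are illustrative names and an assumed 60 Hz rate, not from any particular engine:

```c
#include <stdint.h>

/* The simulation always advances in fixed SIM_DT_MS slices; each
 * rendered frame runs it zero or more times to catch up with real
 * time. ~60 Hz simulation rate, an assumption. */
#define SIM_DT_MS 16

/* Returns how many fixed simulation steps to run for one rendered
 * frame, given an accumulator of not-yet-simulated milliseconds and
 * the real time this frame took. */
int steps_this_frame(uint32_t *accumulator_ms, uint32_t frame_ms)
{
    int steps = 0;
    *accumulator_ms += frame_ms;
    while (*accumulator_ms >= SIM_DT_MS) {
        *accumulator_ms -= SIM_DT_MS;
        steps++; /* simulate_one_step() would go here */
    }
    return steps;
}
```

A slow 40 ms frame runs the simulation twice and banks the remainder; a fast frame may run it zero times.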

The usual approach to address this is to decouple your rendering from the simulation entirely; set things up so your simulation runs at one rate, but then your rendering can run at a different rate, rendering your game stuff in between the positions specified by the simulation so that your rendering remains smooth when updated at 75fps or 300fps even when the simulation is only running at 60hz or even lower. All of that is a big and complicated topic, and folks usually point to resources like Fix Your Timestep! on Gaffer On Games for the gory details.
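The render-side blending boils down to one line; a sketch, where alpha (a made-up name) is how far the render time sits between the last two simulation steps, 0 to 1:

```c
/* Blend the previous and current simulation states so a 300 fps
 * renderer stays smooth over a 60 Hz simulation. alpha = (render
 * time since the last step) / (step length). Rendering lags the
 * simulation by up to one step, the usual price of this scheme. */
double interpolate(double prev, double curr, double alpha)
{
    return prev + (curr - prev) * alpha;
}
```

You would apply this per component of position (and often orientation) for every visible object.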

Live timestep measurement causes additional problems that averaging does not solve, and the latter method is overcomplicated. I turned off vsync and added this at the end of the loop (FRAME_MS being my target frame budget in milliseconds):

dc = SDL_GetTicks() - dc;
if (dc < FRAME_MS)
    SDL_Delay(FRAME_MS - dc);
dc = SDL_GetTicks();

So after a few weeks I was finally able to get my P3D where I wanted it to be, no blurries and good FPS. Originally I had vsync and triple buffering set to off, but I find in the cruise, when the FPS gets very high, it's almost so smooth it's not smooth, if you get me. After doing a few flights with vsync and triple buffering on, I have seen that these two settings seem to make my sim stutter more and also bring back some of those blurries I didn't want.

But when vsync is off and triple buffering is off, I seem to get really good crisp textures and no stutters, but on final approach the FPS gets quite bad; when vsync is on it's so much smoother.

What I'm trying to say about the smooth-but-not-smooth: when I'm in the cruise the FPS must get so high that when I pan around it's smooth, but it's almost like when I pan it stops for a millisecond and stutters.

Well, the triple buffer refers to the three screen-memory storage areas the GPU uses for frames being made and displayed. There is a front buffer that is currently being fed to the screen, then there are two back-buffer frames. One of the back buffers is being drawn by the sim; the other is ready to be switched with the front buffer. So with triple buffering there's never any waiting to switch the front buffer with a completed frame, since one of the two is always available while the other is being drawn. So the CPU and GPU go full steam ahead. This can introduce blurries, as it leaves too little time for the background tasks to populate the tiles.
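As a toy model of that bookkeeping (indices only, all names made up; the real buffers live in video memory and the driver does this for you):

```c
/* Toy triple-buffer bookkeeping: one front buffer being scanned out,
 * one completed frame ready to flip (or -1 if none), and one buffer
 * the renderer is filling. Indices are always 0, 1, 2. */
typedef struct {
    int front;   /* buffer currently on screen       */
    int ready;   /* completed frame, or -1 if none   */
    int drawing; /* buffer the renderer is filling   */
} triple_buffer_t;

/* Renderer finished a frame: it becomes the ready buffer, and drawing
 * moves to whichever of the three is neither front nor ready --
 * so the renderer never waits for a flip. */
void finish_frame(triple_buffer_t *tb)
{
    tb->ready = tb->drawing;
    tb->drawing = 3 - tb->front - tb->ready; /* the remaining index */
}

/* At vsync: flip the ready buffer (if any) to the front; the old
 * front becomes free to draw into later. */
void flip_on_vsync(triple_buffer_t *tb)
{
    if (tb->ready >= 0) {
        tb->front = tb->ready;
        tb->ready = -1;
    }
}
```

The "full steam ahead" property is visible here: finish_frame never blocks, it just rotates which buffer gets overwritten next.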

With VSync, that synchronises the GPU's display output with the device feeding the screen (not to be confused with the vsync of a raster monitor). If the GPU hasn't finished a frame and the buffers have run out, due perhaps to too much detail, then the display waits on that frame, showing the same frame until the new one can be switched in.

Interesting information. So, in order to test, should we be flying in an area with a heavy hit on frames, and then what should we be looking for when following your two methods and comparing to unlimited in P3D and no limit in NI?

There may be an option to use adaptive VSync. Then VSync is enabled when you have more FPS than your monitor can display, but it is disabled when FPS drops below, in your case, 60. This is usually the best compromise, as VSync may hurt performance when it is enabled and your game runs at a lower FPS than your monitor.

Without adaptive sync it's usually best to enable vsync via the GW2 settings menu and run the game uncapped otherwise. That should get rid of the tearing. Just make sure you don't override these settings in the nVidia driver. If that somehow doesn't work for your hardware, you can always try to override the settings via the driver, but it's usually better to do it in-game for most game engines.

Since I never tried that combo in GW2 myself, I assume that it's usually best to disable vsync in the game settings and run uncapped, but force the framerate via the nVidia drivers. There should be an additional driver setting that lets you choose your desired framerate. If it doesn't work for your hardware without artifacts, you can always fall back to the static 60Hz method mentioned above.

What I found in the driver's program settings is "Max Frame Rate". I tested with a 120 cap and had vertical tearing during camera moves, but I do not see tearing at a 160 FPS cap. Is it OK to use this option that way?

When I teleport to Elon Riverlands and stay AFK near the teleports at 300-400 FPS, MSI Afterburner shows my GPU eating almost max wattage. What do you think about "Max Frame Rate" and "Vertical Sync - Fast" in the nVidia options?
