Is it possible to set a custom cut-off value for Adaptive V-Sync? (Enhanced Sync for AMD)

Discussion in 'Videocards - NVIDIA GeForce Drivers Section' started by BlindBison, Dec 11, 2019.

  1. BlindBison

    BlindBison Master Guru

    Messages:
    665
    Likes Received:
    120
    GPU:
    RTX 2080 Super
    Hi there guys,

Something I've been wondering for some time now, after doing some testing on PC and watching some Digital Foundry tests for the Xbox One (which typically uses double-buffered Adaptive V-Sync), is why on PC we currently cannot set a custom cutoff point for Adaptive V-Sync. Now, you might think this sounds like a worthless feature request, or that I don't understand how the feature works, so let me explain what the purpose of this would be.

On the Xbox One, for example, where double-buffered Adaptive V-Sync is used, games typically appear to properly cap their framerate at 30 fps in conjunction with it -- almost certainly because a proper cap is known to reduce input lag in conjunction with V-Sync (see the Blur Busters article on low-lag V-Sync for more on why this is done).

    However, currently on PC you cannot do this with Nvidia's implementation of Adaptive V-Sync!

This is because if you cap the framerate to 60 in-engine or with RTSS, for example, the cutoff Nvidia has set for Adaptive V-Sync is currently far too aggressive: V-Sync disengages unless your FPS is capped closer to 61, which discernibly undermines any effort to use V-Sync in a way that reduces input lag.

The usual workaround, of course, is to simply use traditional double-buffered V-Sync and then set an in-engine fps cap of 60 if available, or use RTSS to cap fps to 59.986, for example (the Blur Busters low-lag method, which requires knowing one's "true" monitor refresh rate).
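The arithmetic behind that cap is simple; here's a minimal sketch (my own illustration of the Blur Busters method, not an official tool -- the refresh values are examples):

```python
# Sketch of the Blur Busters low-lag V-Sync cap arithmetic (my own
# illustration; refresh figures here are examples, not measurements).

def low_lag_cap(true_refresh_hz: float, offset_hz: float = 0.01) -> float:
    """Cap slightly below the true refresh so frames are consumed a hair
    faster than they are produced and the buffer queue never backs up."""
    return true_refresh_hz - offset_hz

def frame_time_ms(fps: float) -> float:
    return 1000.0 / fps

cap = low_lag_cap(59.996)  # e.g. a "60 Hz" panel whose true refresh is 59.996 Hz
print(round(cap, 3))       # 59.986

# Each capped frame takes fractionally longer than one refresh cycle,
# which is what drains the queue instead of filling it:
print(round(frame_time_ms(cap) - frame_time_ms(59.996), 4))  # ~0.0028 ms
```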

However, this method simply does not handle the occasional fps dip well, and that's a problem. What would be optimal, I believe, is double-buffered Adaptive V-Sync with a custom cutoff value -- say, it toggles off only if the framerate dips beneath 59.9 or 59.5 (or at least a true 60.0). That way, one could cap the framerate in-engine (or via RTSS, in line with the Blur Busters guide) to bring input lag down as much as possible while still using Adaptive V-Sync to better handle the occasional fps dip.

    For me, Triple Buffering is simply out of the question (unless you're playing at very high framerates on a high refresh rate monitor at least) since even at 60 fps I find this adds a discernible amount of input lag.

Anyway, is it possible for me to submit this to Nvidia/AMD in some context? I think this would be an awesome feature to have, even if it's only available in Nvidia Inspector -- especially considering that using traditional half-refresh double-buffered V-Sync without Adaptive currently requires the Inspector anyway. Thanks for your time,
     
    Last edited: Dec 12, 2019
  2. jorimt

    jorimt Active Member

    Messages:
    58
    Likes Received:
    31
    GPU:
    EVGA 1080 Ti FTW3
    It doesn't "sound like a worthless feature," I'm just pretty sure it's not a possible one.

    Any form of V-SYNC relies on the VBLANK (the span between the previous and next frame scan) to time frame delivery to the beginning of each scanout cycle to prevent tearing.

Unfortunately, with fixed refresh rate V-SYNC, the VBLANK can't be manipulated; it simply occurs between every refresh cycle. So at 60Hz it occurs roughly every 16.6ms, at 144Hz every 6.9ms, and so on.
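Those intervals fall straight out of the refresh rate; a trivial sketch of the figures quoted above:

```python
# The fixed interval between scanout cycles (and thus between VBLANKs)
# at a given refresh rate -- the timing window V-SYNC has to work with.

def refresh_interval_ms(hz: float) -> float:
    return 1000.0 / hz

print(round(refresh_interval_ms(60), 2))   # 16.67
print(round(refresh_interval_ms(144), 2))  # 6.94
```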

What you're asking for would require VRR (variable refresh rate, aka G-SYNC/FreeSync), which is already available for this very purpose: it effectively "pads" the VBLANK duration as needed between scanout cycles to match the "refresh rate" to the framerate output by the system, and thus prevents tearing.
     
    BlindBison likes this.
  3. BlindBison

    @jorimt Thank you very much for explaining all of that, that's very helpful.

Fascinating stuff -- from the input lag tests I've seen, the Xbox One (which generally uses Adaptive V-Sync) has often had a bit less input lag than the PS4 (which normally uses triple buffering) at the same framerate. I expect that just comes down to double buffering vs. triple buffering, then.

Perhaps I'm mistaken, then, about how the devs are internally capping their games' fps -- on PC it seems you can't use the "low-lag" V-Sync trick with Adaptive, only with standard V-Sync, and that makes sense going off what you're saying (capping fps with RTSS to something like 59.986 on a 59.996 Hz monitor, or capping at 60 with an in-engine limiter, for example).

So, I guess standard double-buffered V-Sync with a proper fps limit and a reduced MPRF/flip queue value is the best one can do for input lag as a traditional V-Sync user at this point in time, eh? Thanks for explaining all of that, that's very helpful!
     
  4. jorimt

    With a sustained framerate above the refresh rate, that type of triple buffer V-SYNC will always have up to 1 frame more input lag than double buffer V-SYNC simply due to the extra buffer; the more buffers that are available to be overfilled in that scenario, the more input lag; 2 vs. 3 in this case.
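In back-of-the-envelope terms (my own arithmetic, not a measurement), each buffer that can sit full while the framerate outruns the refresh holds a frame for one refresh cycle:

```python
# Worst-case queue latency when the framerate saturates the buffer chain:
# each buffer that can sit full holds a finished frame for one refresh.

def worst_case_queue_lag_ms(refresh_hz: float, fillable_buffers: int) -> float:
    return fillable_buffers * 1000.0 / refresh_hz

double_buf = worst_case_queue_lag_ms(60, 2)
triple_buf = worst_case_queue_lag_ms(60, 3)
print(round(triple_buf - double_buf, 1))  # 16.7 -> one extra refresh of lag at 60 Hz
```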

    If double buffer is what's being used, pretty much.

    There are also other low lag V-SYNC solutions, including Fast Sync/Enhanced Sync (which require excessive framerates above the refresh rate to reduce input lag significantly, and still stutter due to dropped frames) or RTSS Scan Line Sync (which technically runs with V-SYNC OFF, and allows the user to "steer" the tearline offscreen, but it requires the framerate to remain above the refresh rate at all times to function properly).
     
    BlindBison likes this.

  5. BlindBison

    Sorry to bring this up after so much time has passed, but I had a question regarding your comment if you get a moment.

    So, I noticed you said, "With a sustained framerate above the refresh rate, that type of triple buffer v-sync will always have up to 1 frame more input lag than double buffer v-sync..."

So, here's my question -- suppose you do NOT have a sustained framerate above the refresh rate. Suppose you're using the Blur Busters low-lag V-Sync trick and capping to precisely monitor refresh - 0.01 (or, I suppose, using an in-engine limiter to cap precisely to 60 could work too).

Technically you don't have a framerate that's sustained above refresh, no? It's a hair lower, so occasionally you'll get one repeated frame if I understand correctly. Does triple buffer V-Sync still have 1 more frame of latency vs. double buffering?

If not, does it at least handle drops beneath monitor refresh more elegantly? For example, double buffering drops hard from 60 to 30 to 20 to 15, but triple buffering at least reports variable framerates, and Digital Foundry stated triple buffering would have lower input lag than double buffering if a drop were to occur, since you'd effectively be at, say, 50 fps instead of 30.

So, does the low-lag V-Sync trick in conjunction with triple buffering give the same latency as double buffering + the low-lag trick, since neither technically has a framerate sustained above refresh? In particular, assuming one will have occasional drops beneath their target, it seems like either triple buffering + framerate cap OR Adaptive V-Sync + framerate cap (set as low as it can go before tearing starts -- the low-lag trick resulted in constant tearing, as did even 0.5 above refresh in my tests) would be most "elegant," since double buffering stutters really hard when it drops. I did notice consoles pretty much always use either triple-buffered V-Sync or Adaptive V-Sync nowadays, going off the Digital Foundry tests.
     
  6. AsiJu

    AsiJu Ancient Guru

    Messages:
    7,234
    Likes Received:
    2,185
    GPU:
    MSI 6800XT GamingX
    I know you didn't ask me but jorimt doesn't seem to be around any more.

    My understanding is yes, triple buffer would handle a framerate drop below refresh rate better than double buffer and with a capped framerate shouldn't have any more input lag than double buffer.

    But let me ask: what game or scenario do you have in mind?

I don't think you can force triple buffering in anything but OpenGL games, and low-level API games (i.e. DX12 and Vulkan) handle frame delivery themselves.
     
    BlindBison likes this.
  7. BlindBison

Thanks, yeah, that’s a good question. A lot of modern games seem to triple buffer internally / only have one V-Sync option available. If you have to force it via the driver for half refresh rate or whatever, it seems like you might just have to go the Adaptive route, or flat double buffering if forcing it via the driver doesn’t work.
     
  8. AsiJu

To my knowledge modern games do not even do proper triple buffering anymore; what they call triple buffering is actually double buffering + a pre-render queue: 1 front buffer, 1 back buffer, and the pre-render queue.

I guess it could at least work like triple buffering if the most up-to-date frame were taken from the pre-render queue, but given that the longer the queue, the more input lag you get, I doubt it works like that.

    Proper triple buffering has 1 front and 2 back buffers.

    Goes something like this as an example (someone more knowledgeable correct if necessary):
60 Hz (= 16.67 ms per refresh) triple buffered with a 240 fps internal framerate = four new frames are actually created during one screen refresh (one every ~4.17 ms).

At 0.00 milliseconds:
1. frame n in front buffer

At ~4.17 ms:
2. first back buffer receives frame n+1

At ~8.33 ms:
3. second back buffer receives frame n+2

At ~12.50 ms:
4. first back buffer receives frame n+3, replacing n+1

At 16.67 ms:
5. second back buffer receives frame n+4, replacing n+2; buffers flip, n+4 is now in the front buffer (frames n+1 through n+3 were discarded)

    -> no tearing and low input lag as the latest frame is always used to refresh the screen.
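That buffer dance can be sketched as a toy simulation (my own model under idealized assumptions -- perfectly even frame times, no driver scheduling):

```python
# Toy model of "proper" triple buffering: two back buffers drawn to
# alternately; at each refresh the display flips to the most recently
# completed frame and any older pending frames are discarded.

def frames_displayed(render_fps: float, refresh_hz: float, duration_s: float = 1.0):
    """Return the frame numbers the display actually shows each refresh."""
    frame_interval = 1.0 / render_fps
    refresh_interval = 1.0 / refresh_hz
    shown = []
    t = refresh_interval
    while t <= duration_s + 1e-9:
        # Most recent frame fully rendered before this refresh deadline
        # (small epsilon guards against floating-point rounding).
        latest = int(t / frame_interval + 1e-9)
        shown.append(latest)
        t += refresh_interval
    return shown

shown = frames_displayed(render_fps=240, refresh_hz=60)
print(len(shown))   # 60 refreshes in one second
print(shown[:3])    # [4, 8, 12] -> 3 of every 4 rendered frames are discarded
```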

    Proper triple buffering is the best of both worlds. Why it's not really used anymore I do not know. Or if it ever indeed was.

Ofc VRR solves this problem as well and makes buffering somewhat moot, at least in the traditional V-Sync sense.
Actually VRR is even better, as it's basically the same as V-Sync off: instead of syncing frame output to the screen refresh, the screen refresh is synced to the frame output. Genius!

    Edited description of TB to be more accurate.
     
    Last edited: May 23, 2021
    BlindBison likes this.
  9. BlindBison

Thanks, that’s helpful! If they use the latest frame for lowest latency, doesn’t it act like Fast Sync then, where you’re not seeing linear motion since frames are “thrown out”/swapped for the most recent?

iirc in Digital Foundry’s V-Sync video they stated Fast Sync just renders frames constantly, then pulls the most recent when it’s time for the monitor to refresh, with the con of jittery-looking motion since frames are being rendered and then tossed out. Perhaps I misunderstood there.

I’m just a little confused about what’s different between triple buffering and Fast Sync -- and is there a third/different scenario as well?
     
  10. AsiJu

According to this, Fast Sync is indeed a driver-level implementation of original triple buffering, in practice:

    https://www.anandtech.com/show/10325/the-nvidia-geforce-gtx-1080-and-1070-founders-edition-review/13

Yeah, the discarded frames and variable frame times (if the framerate is uncapped or variable) cause that often jittery motion.

Also, I realized my previous description of TB was inaccurate:
software draws to both back buffers alternately and the most recent frame is used; buffer flipping happens between a back buffer and the front buffer (back sent to front).
Previous post edited.
     
    Last edited: May 23, 2021
    BlindBison likes this.
