Is it possible to set a custom cut-off value for Adaptive V-Sync? (Enhanced Sync for AMD)

Discussion in 'Videocards - NVIDIA GeForce Drivers Section' started by EerieEgg, Dec 11, 2019.

  1. EerieEgg

    EerieEgg Master Guru

    Messages:
    229
    Likes Received:
    23
    GPU:
    RTX 2080 Super
    Hi there guys,

    Something I've been wondering for some time now, after doing some testing on PC and watching some Digital Foundry tests for the Xbox One (which typically uses Double-Buffered Adaptive V-Sync), is why on PC we cannot currently set a custom cut-off point for Adaptive V-Sync. You might think this sounds like a worthless feature request, or that I don't understand how the feature works, so let me explain what the purpose of this would be.

    On the Xbox One, for example, where Double-Buffered Adaptive V-Sync is used, games typically appear to properly cap their framerate at 30 fps alongside it (almost certainly because a framerate cap is known to reduce input lag in conjunction with V-Sync - see the BlurBusters article on Low-Lag V-Sync for more on why this is done).

    However, currently on PC you cannot do this with Nvidia's implementation of Adaptive V-Sync!

    This is because if you cap the framerate to 60 in-engine or with RTSS, for example, the cut-off Nvidia has set for Adaptive V-Sync is currently far too aggressive: V-Sync disengages unless your FPS is capped closer to 61, which discernibly undermines any effort to V-Sync in a way that reduces input lag.

    The obvious workaround, then, is to use traditional Double-Buffered V-Sync and set an in-engine fps cap to 60 if available, or use RTSS to cap fps to 59.986 for example (the BlurBusters low-lag method, which requires knowing one's "true" monitor refresh rate).

    However, this method simply does not handle the occasional fps dip well, and that's a problem. What I believe would be optimal is Double-Buffered Adaptive V-Sync with a custom cut-off value -- say, it would only toggle off if the framerate dipped beneath 59.9 or 59.5 (or at least a true 60.0). That way, one could cap the framerate in-engine (or via RTSS, in line with the BlurBusters guide) to bring down input lag as much as possible while still using Adaptive V-Sync to better handle the occasional fps dip.
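
    For illustration, here's a minimal sketch of how that RTSS cap could be derived from a measured "true" refresh rate. The 0.010 fps offset simply mirrors the 59.986-on-59.996 example above; treat it as an assumption on my part and check the BlurBusters guide for the exact recommended margin:

        # Minimal sketch: derive an RTSS cap for the low-lag V-Sync method above.
        # The 0.010 fps offset mirrors the 59.986-on-59.996 example in this post;
        # it is an assumption - see the BlurBusters guide for the exact margin.
        def low_lag_cap(true_refresh_hz: float, offset_fps: float = 0.010) -> float:
            """Return an RTSS framerate limit just below the monitor's true refresh rate."""
            return round(true_refresh_hz - offset_fps, 3)

        print(low_lag_cap(59.996))   # -> 59.986
        print(low_lag_cap(143.993))  # hypothetical measured refresh of a "144 Hz" panel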

    For me, Triple Buffering is simply out of the question (unless you're playing at very high framerates on a high refresh rate monitor at least) since even at 60 fps I find this adds a discernible amount of input lag.

    Anyway, is it possible for me to submit this to Nvidia/AMD in some context? I think this would be an awesome feature to have myself, even if it's only available in the Nvidia Inspector. Especially considering that to use traditional Half-Refresh Double-Buffer V-Sync currently without Adaptive requires the Inspector. Thanks for your time,
     
    Last edited: Dec 12, 2019
  2. jorimt

    jorimt Active Member

    Messages:
    56
    Likes Received:
    19
    GPU:
    EVGA 1080 Ti FTW3
    It doesn't "sound like a worthless feature," I'm just pretty sure it's not a possible one.

    Any form of V-SYNC relies on the VBLANK (the span between the previous and next frame scan) to time frame delivery to the beginning of each scanout cycle to prevent tearing.

    Unfortunately, with fixed refresh rate V-SYNC, the VBLANK, which again occurs between every refresh cycle, can't be manipulated. So if you're at 60Hz, it occurs roughly every 16.6ms; at 144Hz, every 6.9ms; and so on.
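
    The arithmetic behind those figures, as a rough sketch:

        # Refresh interval (and thus VBLANK cadence) of a fixed refresh rate display:
        # one scanout cycle every 1000 / refresh-rate milliseconds.
        for hz in (60, 120, 144, 240):
            print(f"{hz} Hz -> one scanout cycle every {1000 / hz:.2f} ms")
        # 60 Hz -> ~16.67 ms, 144 Hz -> ~6.94 ms, and so on.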

    What you're asking for would require VRR (variable refresh rate, aka G-SYNC/FreeSync), which already exists for this very purpose: it effectively "pads" the VBLANK duration as needed between scanout cycles to match the "refresh rate" to the framerate output by the system, and thus prevents tearing.
     
    EerieEgg likes this.
  3. EerieEgg

    EerieEgg Master Guru

    Messages:
    229
    Likes Received:
    23
    GPU:
    RTX 2080 Super
    @jorimt Thank you very much for explaining all of that, that's very helpful.

    Fascinating stuff -- from the input lag tests I've seen, the Xbox One (which generally uses Adaptive V-Sync) has often had a bit less input lag than the PS4 (which normally uses triple buffering) at the same framerate. I expect that's simply down to double buffering vs. triple buffering then.

    Perhaps I'm mistaken about how the devs are internally capping the fps of their games then -- on PC, it seems you can't use the "Low Lag" V-Sync trick (capping fps with RTSS to something like 59.986 on a 59.996 Hz monitor, or capping at 60 with an in-engine limiter) with Adaptive V-Sync, only with standard V-Sync, and that makes sense going off what you're saying there.

    So, I guess standard double-buffer V-Sync with a proper fps limit and a reduced MPRF/flip queue value is the best one can do for input lag as a traditional V-Sync user at this point in time then, eh? Thanks for explaining all of that, that's very helpful!
     
  4. jorimt

    jorimt Active Member

    Messages:
    56
    Likes Received:
    19
    GPU:
    EVGA 1080 Ti FTW3
    With a sustained framerate above the refresh rate, that type of triple buffer V-SYNC will always have up to 1 frame more input lag than double buffer V-SYNC, simply due to the extra buffer; the more buffers available to be overfilled in that scenario, the more input lag (2 vs. 3 in this case).
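
    As a back-of-the-envelope illustration of what "up to 1 frame more" means at a fixed refresh rate (a simplified sketch of the idea, not a measurement):

        # Simplified worst-case model: with the framerate sustained above the
        # refresh rate, each buffer beyond the double-buffer baseline can hold
        # one extra already-rendered frame, i.e. up to one refresh cycle of lag.
        REFRESH_HZ = 60
        frame_ms = 1000 / REFRESH_HZ  # ~16.67 ms per refresh cycle at 60Hz

        for buffers in (2, 3):  # double vs. triple buffer V-SYNC
            extra_frames = buffers - 2
            print(f"{buffers} buffers: up to ~{extra_frames * frame_ms:.1f} ms more lag than double buffering")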

    If double buffer is what's being used, pretty much.

    There are also other low lag V-SYNC solutions, including Fast Sync/Enhanced Sync (which require excessive framerates above the refresh rate to reduce input lag significantly, and still stutter due to dropped frames) or RTSS Scan Line Sync (which technically runs with V-SYNC OFF, and allows the user to "steer" the tearline offscreen, but it requires the framerate to remain above the refresh rate at all times to function properly).
     
    EerieEgg likes this.
