Does the Low Lag V-Sync trick still work with Triple Buffering?

Discussion in 'Rivatuner Statistics Server (RTSS) Forum' started by BlindBison, Sep 25, 2020.

  1. BlindBison

    BlindBison Master Guru

    Messages:
    772
    Likes Received:
    156
    GPU:
    RTX 2080 Super
    Hi there guys,

    Sorry to bother you all, but I was wondering whether or not the "Low Lag V-Sync Trick" still works if a user is triple buffering rather than double buffering.

    So, I'm referring to this technique here: https://blurbusters.com/howto-low-lag-vsync-on/

    Or, the same thing as described here (where you cap FPS to something like ~0.01-0.02 beneath the monitor's "true" refresh rate, OR to ("true refresh" / 2) - 0.01 for Nvidia Inspector Half-Refresh V-Sync): https://medium.com/@petrakeas/vsync-with-low-input-lag-50ms-lower-2437118bfa5
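    For reference, the cap arithmetic from those guides can be sketched like this (a toy Python snippet; the 59.94 Hz example value and the function names are my own illustration, not from either article):

```python
# Hedged sketch of the low-lag cap values described in the linked guides.
# Assumes you've measured the monitor's "true" refresh rate (e.g. 59.94 Hz
# rather than a nominal 60 Hz); the 0.01 offset is the one the guides suggest.

def low_lag_cap(true_refresh_hz: float, offset: float = 0.01) -> float:
    """FPS cap slightly below the true refresh rate (full-refresh V-Sync)."""
    return round(true_refresh_hz - offset, 2)

def half_refresh_cap(true_refresh_hz: float, offset: float = 0.01) -> float:
    """FPS cap for half-refresh V-Sync (e.g. via Nvidia Inspector)."""
    return round(true_refresh_hz / 2 - offset, 2)

print(low_lag_cap(59.94))       # -> 59.93
print(half_refresh_cap(59.94))  # -> 29.96
```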

    I know this works with traditional double buffer V-Sync, but does it still reduce input delay when Triple Buffered V-Sync is enabled?

    If it does still reduce delay, I suppose I'm just a little confused about how that works -- since I imagine the input-lag reduction does NOT apply if you're at, say, 50-55 FPS triple buffered on a 60 Hz display, why would it work to reduce lag at 59.98 or some such? Am hoping to learn more about how this works.

    Thanks!
     
  2. BlindBison

    BlindBison Master Guru

    Messages:
    772
    Likes Received:
    156
    GPU:
    RTX 2080 Super
    @RealNC Sorry to bother you, but I figured I’d tag you since historically you’ve been a wealth of knowledge for this sort of thing. Thanks,
     
  3. liluglymane

    liluglymane New Member

    Messages:
    6
    Likes Received:
    3
    GPU:
    GTX 1070
    Hey BlindBison, I'm not familiar with that V-Sync trick, but I thought I'd chime in just to suggest tinkering with some of the alternative V-Sync modes in the Nvidia control panel, just to see if you find a worthwhile combination. There are like four different types of V-Sync listed in the description. Might be worthwhile to tinker with the Low Latency Mode setting too.

    Good luck!
     
  4. BlindBison

    BlindBison Master Guru

    Messages:
    772
    Likes Received:
    156
    GPU:
    RTX 2080 Super
    @liluglymane Thanks for your comment -- yup, there are several types available, each with their own pros and cons. Digital Foundry did a video covering these at some point in the past.

    1) Traditional double-buffered V-Sync (jumps from 60 to 30 to 20 with FPS drops -- in my experience drops usually just manifest as stuttering, though, so perhaps it only does the whole 60/30/20/15 swap if they're GPU-related drops or some such? Not sure)
    2) Triple-buffered V-Sync (supports variable framerates beneath monitor refresh, but has one extra frame of input delay while at monitor refresh; if you do end up dropping to something like half refresh often, you may get better results just using half-refresh double-buffered V-Sync)
    3) Adaptive V-Sync (same as double buffered, but it just toggles off V-Sync when framerate dips beneath refresh -- the issue is that it doesn't work with the low-lag trick as described in the OP)
    4) Fast Sync (lower latency, but very jittery)

    In my own local tests, I usually prefer either 1 or 2, since 3 doesn't work with the low-lag FPS-capping trick and 4 looks super jittery. Triple buffering can be nice to more elegantly handle drops beneath refresh, but I'm not certain whether the low-lag FPS-capping trick works with it. Half-Refresh V-Sync is also an option for those with Nvidia Inspector -- used to use that all the time on my laptop.
     

  5. AsiJu

    AsiJu Ancient Guru

    Messages:
    7,436
    Likes Received:
    2,342
    GPU:
    MSI 6800XT GamingX
    Give this a good read:

    https://www.anandtech.com/show/2794

    Properly implemented triple buffering would be the best of both worlds (no tearing, no FPS halving, least input lag).

    Edit:

    As far as I know, framelimiters help work around vsync's lag by keeping the back buffer(s) starved.
    Limiting pre-render queue also helps as the CPU won't attempt to send more frames to the GPU.

    If framerate is same as refresh rate and pre-render limit is 1, there can be at worst 1 frame in the back buffer and none are discarded.

    This should be just as beneficial with triple buffering.
    TB should be even a bit more lenient: if there happen to be more frames queued than buffers, TB can just flip the back buffers and use the most recent one.
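    The "starved back buffer" idea above can be sanity-checked with a toy simulation (hypothetical Python, not a real renderer; it just counts how many frames are sitting in the queue at each scanout):

```python
from collections import deque

def simulate(render_interval, refresh_interval, max_queue, n_frames=1000):
    """Average number of frames waiting in the queue at each scanout."""
    queue = deque()
    t_render = 0.0
    t = 0.0
    waiting = []
    for _ in range(n_frames):
        t += refresh_interval  # one display refresh passes
        # The renderer blocks once the queue (back buffers + pre-render
        # slots) is full, so at most max_queue frames can accumulate.
        while t_render <= t and len(queue) < max_queue:
            queue.append(t_render)
            t_render += render_interval
        waiting.append(len(queue))
        if queue:
            queue.popleft()  # scanout consumes the oldest frame
    return sum(waiting) / len(waiting)

# Uncapped renderer (300 FPS into a 60 Hz display): the queue sits full,
# i.e. maximum backpressure lag.
print(simulate(1 / 300, 1 / 60, max_queue=3))  # -> 3.0
# Capped just below refresh (59.99 FPS): the queue stays starved, ~1 frame.
print(simulate(1 / 59.99, 1 / 60, max_queue=3))
```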
     
    Last edited: Sep 26, 2020
  6. BlindBison

    BlindBison Master Guru

    Messages:
    772
    Likes Received:
    156
    GPU:
    RTX 2080 Super
    Thanks! That’s helpful to know -- for triple buffering I may be misremembering, but if I recall, in Digital Foundry’s V-Sync video they said there are two forms of triple buffering, where the con of the lower-lag version is that it can appear sort of jittery (something to do with displaying the most recent buffer rather than showing them in order?)

    Anyway, thanks for explaining all of that! In that case would the optimal setup then be capping framerate + ultra low latency mode + Vsync ON?

    My only concern there is that in BattleNonSense’s tests in Overwatch, reducing the flip queue via the control panel actually increased input lag slightly when he was not GPU bound (for some reason). Though he was using the in-engine FPS limiter, so maybe you’d see different results with RTSS, not sure.

    EDIT: leaving that DF video: I’m gonna rewatch this later lol -- hopefully I’m not misremembering anything.
     
  7. AsiJu

    AsiJu Ancient Guru

    Messages:
    7,436
    Likes Received:
    2,342
    GPU:
    MSI 6800XT GamingX
    I don't know the details of triple buffering itself.
    I remember there were at least 2 buffer flipping modes available for OGL in older tweaking software like RivaTuner.

    What mode is used and a bunch of other things probably affect how smooth the image is.

    As to why reducing the flip queue increases input lag: it's likely tied to the game's internal limiter, and maybe it overrides the driver setting.

    Or maybe the CPU couldn't keep up; increasing the flip queue helps the CPU, as it has "more time" to prepare frames for the GPU.

    In theory, fewer frames in the queue = less input lag, always, but in practice...
     
  8. BlindBison

    BlindBison Master Guru

    Messages:
    772
    Likes Received:
    156
    GPU:
    RTX 2080 Super
    @AsiJu Nice icon by the way -- DOOM 2016/Eternal are awesome games imo -- I wish every PC port/release could be so well optimized and multithreaded. One of the few games out there I know of that scales really efficiently with all the threads on my 3900X. Technical marvel of a game imo
     
  9. AsiJu

    AsiJu Ancient Guru

    Messages:
    7,436
    Likes Received:
    2,342
    GPU:
    MSI 6800XT GamingX
    Heh, thanks.

    I was thinking the other day what would be a good avatar and remembered the Icons from Doom Eternal.
    That rip and tear icon in the style of old posters is neat.

    (I've replayed Eternal 5 or 6 times already and am very much a fps guy in general so it fits.)
     
  10. RealNC

    RealNC Ancient Guru

    Messages:
    3,525
    Likes Received:
    1,699
    GPU:
    EVGA GTX 980 Ti FTW
    The cap reduces vsync backpressure lag. It should still do that regardless of triple vs double buffer vsync.

    Also note that triple buffering as described in the AnandTech article is what you get with Fast Sync (Nvidia) or Enhanced Sync (AMD). The "triple buffer vsync" offered in some games' options has nothing to do with it.
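    A toy illustration of that distinction (hypothetical Python, not how any driver actually works; it ignores renderer blocking and only tracks which rendered frames end up on screen when the game outruns the display):

```python
from collections import deque

def fifo_vsync(frames_by_interval):
    """'Triple buffering' as a deeper FIFO vsync queue: frames are shown
    strictly in order, so the image lags behind what was just rendered."""
    queue = deque()
    shown = []
    for new_frames in frames_by_interval:
        queue.extend(new_frames)       # completed frames pile up
        shown.append(queue.popleft())  # display shows the OLDEST one
    return shown

def fast_sync_style(frames_by_interval):
    """Fast Sync / Enhanced Sync-style triple buffering: the display always
    flips to the most recently completed frame; older ones are discarded."""
    latest = None
    shown = []
    for new_frames in frames_by_interval:
        if new_frames:
            latest = new_frames[-1]    # newest back buffer wins
        shown.append(latest)
    return shown

# Three frames rendered per refresh interval (game running at 3x refresh):
intervals = [[1, 2, 3], [4, 5, 6], [7, 8, 9]]
print(fifo_vsync(intervals))       # -> [1, 2, 3] (increasingly stale frames)
print(fast_sync_style(intervals))  # -> [3, 6, 9] (freshest frame each refresh)
```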
     

  12. AsiJu

    AsiJu Ancient Guru

    Messages:
    7,436
    Likes Received:
    2,342
    GPU:
    MSI 6800XT GamingX
    Yeah, what games call triple buffering is usually actually double buffering + a pre-render queue.

    Why native/proper triple buffering isn't implemented anymore, I don't know.
    It might have something to do with modern engines often using deferred rendering.
     
  13. Astyanax

    Astyanax Ancient Guru

    Messages:
    11,027
    Likes Received:
    4,084
    GPU:
    GTX 1080ti
    Deferred rendering has nothing to do with the swap chain; they're completely different stages of the pipeline.
     
