Low Latency Modes w/ DirectX 11 & GPUView

Discussion in 'Videocards - NVIDIA GeForce Drivers Section' started by rewt, Sep 20, 2019.

  1. rewt

    rewt Maha Guru

    Ok maybe I'm just a geek but I thought it would be fun to see the effects of low latency modes visually. I don't own a high speed camera so GPUView was the best I could come up with to produce some meaningful, accurate, reproducible results.

    Vsync was off for the test (did anyone tell you pre-rendering only affects vsync?)

    Context Queues (a shorter stack is better for input response):
    Essentially, the higher the packets are stacked, the more commands/frames are building up in the queue, and the greater the input latency.
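    As a rough back-of-the-envelope model (my own illustration, not something GPUView reports directly), the extra input latency from a full queue is roughly the queue depth multiplied by the frame time:

```python
def added_latency_ms(queue_depth: int, fps: float) -> float:
    """Approximate extra input latency contributed by queued frames.

    Simplifying assumption: every frame sitting in the context queue
    delays presentation of your input by one full frame time.
    """
    frame_time_ms = 1000.0 / fps
    return queue_depth * frame_time_ms

# At 60 fps, a queue of 3 (a typical "Application Controlled" depth)
# costs roughly 50 ms, versus roughly 17 ms with a queue of 1.
print(round(added_latency_ms(3, 60), 1))  # 50.0
print(round(added_latency_ms(1, 60), 1))  # 16.7
```

    This is only a first-order estimate (it ignores render-ahead overlap and display scanout), but it matches the intuition from the screenshots: shorter stacks mean fewer frame times between your input and the screen.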

    Low Latency Mode Off (FRL Low Latency = 0, Maximum pre-rendered frames = Application Controlled)
    [IMG] Does anything come to mind when you see three orange rows for "Application Controlled"?

    Low Latency Mode On (FRL Low Latency = 0, Maximum pre-rendered frames = 1)

    Low Latency Mode Ultra (FRL Low Latency = 1, Maximum pre-rendered frames = 1)

    Tools used for the test:
    • Unigine Heaven Benchmark 4.0 (Custom Extreme Preset DirectX 11 Fullscreen)
    • UI for ETW 1.51 w/ Windows Performance Toolkit & GPUView (included)
    • NVIDIA Driver 436.30
    • NVIDIA Profile Inspector
    • MSI Afterburner v4.6.2 beta 2 w/RTSS 7.2.3 beta 3

    I listed the tools I used above, and I'd be interested to see your results as well, in any of your favorite apps/games, at any settings you desire! (Note: low latency mode doesn't affect DirectX 12 or Vulkan)
    Last edited: Sep 20, 2019
  2. lime

    lime Member

  3. mbk1969

    mbk1969 Ancient Guru

    GeForce GTX 1070
    So V-Sync decreases input lag?
  4. janos666

    janos666 Master Guru

    MSI RTX2060 6Gb
    Well, the old "low-lag V-sync" theory (https://forums.guru3d.com/threads/the-truth-about-pre-rendering-0.365860/page-12#post-5380262) already argued that the pre-render limit is practically ineffective when you cap the framerate below the maximum the CPU can deliver, because the queues get starved (on both sides). The allowed queue length is uninteresting (it could be infinite) if it never fills above 1. (Although I wasn't completely sure that's entirely true.) This is practically a halfway reinvention of that theory.
    But I think the main point is that GPU utilization (and I guess the same goes for the CPU) should be kept below ~100% if you want the lowest possible lag, no matter the *Sync method or how you achieve it (a CPU-side framerate cap seems like the practical choice). The problem is that this method uses a static limit, which is fine for a static refresh rate with V-sync but not so great for VRR Adaptive-Sync. So Anti-Lag and Low Latency still have a use case: when you don't want to statically cap your framerate below its possible average (for the limit to be in effect most of the time) but still want to cut the lag as much as possible within the circumstances.
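    The starvation argument can be sketched with a toy producer/consumer model (an illustrative simulation of my own, not measured data): the CPU submits a frame every cpu_frame_ms, the GPU drains one every gpu_frame_ms, and the CPU blocks when the queue already holds max_prerendered frames. Capping the CPU below GPU throughput keeps the queue at depth 1 regardless of the allowed limit:

```python
def simulate_max_queue(cpu_frame_ms: float, gpu_frame_ms: float,
                       max_prerendered: int, frames: int = 1000) -> int:
    """Deepest context-queue depth reached in a toy CPU->GPU frame pipeline."""
    pending = []          # GPU completion times of frames still in the queue
    gpu_free_at = 0.0
    now = 0.0
    max_depth = 0
    for _ in range(frames):
        now += cpu_frame_ms                        # CPU finishes preparing a frame
        pending = [t for t in pending if t > now]  # GPU drained what it finished
        if len(pending) >= max_prerendered:        # queue full: CPU stalls...
            now = pending[0]                       # ...until the oldest frame completes
            pending = pending[1:]
        start = max(gpu_free_at, now)              # GPU begins when free and fed
        gpu_free_at = start + gpu_frame_ms
        pending.append(gpu_free_at)
        max_depth = max(max_depth, len(pending))
    return max_depth

# GPU-bound (CPU 5 ms/frame, GPU 16 ms/frame): the queue fills to the limit.
print(simulate_max_queue(5, 16, max_prerendered=3))   # 3
# Same workload, capped to 50 fps (20 ms/frame), below the GPU's ~62 fps:
# the queue never climbs past 1, so the allowed limit is irrelevant.
print(simulate_max_queue(20, 16, max_prerendered=3))  # 1
# A pre-render limit of 1 reaches the same depth without the static cap.
print(simulate_max_queue(5, 16, max_prerendered=1))   # 1
```

    It matches the point above: the cap starves the queue only while the CPU actually stays under the GPU's throughput, which is why a static cap fits static refresh rates better than VRR.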
    Last edited: Sep 20, 2019

  5. Cave Waverider

    Cave Waverider Master Guru

    GeForce RTX 2080 Ti
    This video might also be helpful for the discussion:
