Does lowering "Max Prerendered Frames" reduce input lag when using traditional V-Sync?

Discussion in 'Videocards - NVIDIA GeForce Drivers Section' started by BlindBison, Dec 2, 2019.

  1. BlindBison

    BlindBison Ancient Guru

    Messages:
    2,414
    Likes Received:
    1,139
    GPU:
    RTX 3070
    Hi there guys,

    I'd taken this as a no-brainer for years, but I ask because BattleNonSense recently conducted an input lag test on YouTube with results that surprised me and made me reconsider (link at the bottom of the post).

    For example, in those tests Chris found that in Overwatch and Battlefield V, while using G-Sync and the in-game frame rate limiter to cap fps to a consistently achievable value, forcing the new Ultra Low Latency mode actually increased input lag by a small amount.

    However, he did not test traditional V-Sync methods nor did he repeat these tests with Rivatuner (RTSS).

    Additionally, my understanding is that Overwatch and Battlefield V have proper in-engine fps limiters, which is not always the case (many games don't include one, and some that do are actually worse than RTSS, going off a Hardware Unboxed test of the Far Cry 5 limiter, for example).

    Also notably, the in-game Overwatch "Reduce Buffering" option seems to have been omitted from testing, presumably because it does something similar to reducing the size of the flip queue/prerendered frames/the new ultra low latency mode (my guess anyway), but it still would have been nice to see it tested against the new Ultra mode.

    Anyway, some of those results seem odd to me -- DisplayLag did a test a long while back, for example, showing that in Street Fighter IV setting Max Prerendered Frames to 1 (the minimum) actually reduced input lag (SFIV has a forced in-engine 60 fps cap, iirc), so it's strange that the same wasn't true in Chris's tests of the new Ultra Low Latency modes from AMD and Nvidia.

    I do understand that these low-lag modes only work if you're limited by your GPU (or so Chris from BattleNonSense explains in his video). However, if you cap your fps to a consistently achievable value, then you shouldn't be limited by either your GPU or CPU I would think.

    Anyway, I'm just trying to make sense of all this and determine what settings are best to use overall -- if the BattleNonSense test is accurate, then it seems the Default prerender/flip queue setting + an in-game/in-engine fps cap at a consistently achievable value is the way to go (though he did not test MPRF = 1, only Ultra, so perhaps it's Ultra that is the problem; I'm unsure).

    However, what about traditional V-Sync users and the old DisplayLag Street Fighter tests that showed MPRF = 1 reduced input lag? Notably, Street Fighter is capped to 60 fps in-engine, so it's already limiting its fps correctly, right? It just seems odd they'd get different results, though perhaps the difference comes down to G-Sync vs traditional V-Sync, the different fps limiting implementations of Street Fighter vs Overwatch/Battlefield, or the difference between the new Ultra low latency mode and the old MPRF = 1; I really don't know.

    Thanks for your help and time, I appreciate it.

    BattleNonSense test/video:
    Older DisplayLag tests (these may have been flawed, it seems -- it's been said, for example, that V-Sync (Smooth) shouldn't reduce input lag, though I've only seen this one test of it, and SLI from what I've read also adds a frame of input lag): https://displaylag.com/reduce-input-lag-in-pc-games-the-definitive-guide/

    P.S.

    Currently, for traditional V-Sync, my understanding is that the "optimal" setup is as follows (BlurBusters did a low-lag guide a while back, which is largely where I take this from):

    1) Cap FPS to 60 (assuming a 60 Hz monitor), using an in-engine fps limiter if possible -- if using RTSS, you can set a more exact decimal value in line with the BlurBusters guide, though proper in-engine limiters still seem to have lower input lag in the tests I've seen, even if their frame pacing can be discernibly worse (see the rough sketch after this list).
    2) Set Max Prerendered Frames/flip queue size to the minimum (1 before, but now perhaps Ultra is superior -- this is uncertain to me given the possibly conflicting tests listed above).
    3) Use double-buffered, single-GPU V-Sync (SLI adds a frame of lag and triple buffering adds a frame of lag in the tests I've seen).
    4) If at 30 fps, force half-refresh V-Sync via Nvidia Inspector.
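
    For anyone curious what a frame cap actually does under the hood, here's a rough sketch of a fixed-interval limiter loop (purely illustrative -- RTSS and real in-engine limiters are far more sophisticated about where in the frame they wait, and the engine hook names here are just placeholders):

    #include <chrono>
    #include <thread>

    // Toy fixed-interval frame limiter (illustrative only).
    void run_frame_loop(double target_fps) {
        using clock = std::chrono::steady_clock;
        const auto frame_interval = std::chrono::duration_cast<clock::duration>(
            std::chrono::duration<double>(1.0 / target_fps));
        auto next_frame = clock::now();

        for (;;) {
            // sample_input();        // hypothetical engine hooks: an in-engine limiter
            // simulate();            // can wait *before* sampling input, which is why it
            // render_and_present();  // tends to beat an external cap on latency
            next_frame += frame_interval;
            std::this_thread::sleep_until(next_frame);  // an external cap can only wait here
        }
    }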
     
    Last edited: Dec 2, 2019
  2. Astyanax

    Astyanax Ancient Guru

    Messages:
    17,011
    Likes Received:
    7,351
    GPU:
    GTX 1080ti
    Not any more.
     
  3. BlackNova92

    BlackNova92 Master Guru

    Messages:
    206
    Likes Received:
    13
    GPU:
    16gb
    I'm so confused -- should we enable both Reduce Buffering and pre-rendered frames = 1 at the same time?
    I thought Reduce Buffering already does the pre-rendered 1 trick, or am I wrong here?
    I was testing Ultra and currently pre-rendered 1 but didn't really see or feel a difference,
    so I'm actually wondering if Reduce Buffering is basically all we need in OW.
     
  4. BlindBison

    BlindBison Ancient Guru

    Messages:
    2,414
    Likes Received:
    1,139
    GPU:
    RTX 3070
    @Astyanax Could you elaborate on your comment there? It seems extremely odd that these results would suddenly change -- unless it has to do with a difference between MPRF = 1 and the new Ultra mode, or unless the difference is limited to certain games. Unfortunately, those BattleNonSense tests left out a lot of cases (RTSS, MPRF = 1 instead of Ultra, traditional V-Sync, etc.), so it's tough to make general heads or tails of it, considering it seems to conflict with the DisplayLag tests linked in the OP.

    @BlackNova92 Depending on the game in question, forcing the flip queue/max prerendered frames setting lower won't actually work from what I've read since it only changes the MPRF setting for certain APIs iirc.

    I'm not 100% certain, but my understanding is that those settings should do essentially the same thing, with the control panel setting normally overriding the in-game setting when it works.

    I don't expect it to hurt anything to have both enabled, but if there's an in-game setting as a rule of thumb, I typically use that instead personally.

    Still, you may not want either enabled if you're running uncapped framerates or using G-Sync, going off those BattleNonSense tests linked in the OP and CaptaPraelium's post on the flip queue from a ways back -- in fact, where I expected this setting to help was in cases where users capped their framerates, but those BattleNonSense tests seem to demonstrate the opposite. Unfortunately, he only tested the new "Ultra" low latency mode, not the old MPRF = 1 setting, and he didn't conduct the tests with traditional V-Sync. He also didn't test with RivaTuner's FPS limiter, so it could be that his results are exclusive to games like BFV and OW that have proper in-engine fps limiting; I really don't know.

    That DisplayLag link in the OP may be out of date at this point (or even contain errors, since nobody seems to understand how V-Sync (Smooth) for SLI could possibly reduce input lag at the same fps), but historically, lowering the flip queue size (MPRF) in conjunction with a proper fps cap and double-buffered, single-GPU V-Sync provided the lowest-lag V-Sync solution, as I understood it.
     
    Last edited: May 27, 2020

  5. Mda400

    Mda400 Maha Guru

    Messages:
    1,089
    Likes Received:
    200
    GPU:
    4070Ti 3GHz/24GHz
    Maximum Pre-Rendered Frames (now labeled Low Latency Mode) is independent of any screen synchronization setting.

    It only works on applications that use DirectX 11/OpenGL and older APIs. For DirectX 12 and Vulkan, the application controls the frame queueing.

    Even if you have V-Sync disabled and let the framerate run uncapped, there will still be a difference in input delay between each of the three settings.
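
    To illustrate the application side of that: on DX11 a game can shrink its own render-ahead queue through DXGI (which is roughly the queue the driver setting overrides), while DX12/Vulkan apps manage their frames in flight themselves with fences. A minimal sketch, not taken from any real game, with error handling omitted:

    #include <d3d11.h>
    #include <dxgi.h>
    #include <wrl/client.h>
    using Microsoft::WRL::ComPtr;

    // Ask DXGI to queue at most one frame ahead of the GPU -- similar in spirit
    // to the old "Max Pre-rendered Frames = 1" driver setting (the DXGI default is 3).
    void LimitRenderAhead(ID3D11Device* device)
    {
        ComPtr<IDXGIDevice1> dxgiDevice;
        if (SUCCEEDED(device->QueryInterface(IID_PPV_ARGS(&dxgiDevice))))
            dxgiDevice->SetMaximumFrameLatency(1);
    }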

    https://www.nvidia.com/en-us/geforce/news/gamescom-2019-game-ready-driver/

    https://forums.guru3d.com/threads/low-latency-modes-w-directx-11-gpuview.428628/#post-5712884
     
    Last edited: Dec 3, 2019
    BlindBison likes this.
  6. Astyanax

    Astyanax Ancient Guru

    Messages:
    17,011
    Likes Received:
    7,351
    GPU:
    GTX 1080ti
    Perceived input latency just doesn't show up any more; it used to on single-core systems, but not so much these days.

    Rather, the setting only has a perceivable effect on input latency when the GPU is taxed.
     
    BlindBison likes this.
  7. WillG

    WillG Active Member

    Messages:
    74
    Likes Received:
    11
    GPU:
    MSI RTX 2080
    I'm one of those people that is very easily able to see stuttering, and I despise it - yet I too simply use the V-Sync 'Force on' setting.
    I found the newer settings such as Ultra Low Latency etc. lead to stuff just not working and/or micro-stuttering - whatever its speed increase, it never feels as smooth as regular V-Sync. So I choose the KISS method, simply flip on V-Sync, and it's served me well with years of smooth gaming.

    Within the custom resolution settings in the driver control panel (or a tool such as CRU) you will see that the monitor's Hz range is 59-61 Hz for a 60 Hz setting. This of course is because it's not perfect; neither a monitor's frequency nor a computer's clocks are perfectly static. Also, in older UE3 config files (and other games) the internal engine smoothing would often cap itself at 62 fps.
    Therefore I set a frame cap of 62 fps in the driver (Nvidia Inspector) (122 fps for 120 Hz, etc.). It works well.

    Triple buffer = off, which is what I use - but triple does have nice smoothness, albeit at the expense of some latency/response.

    Max pre-rendered frames I leave on Auto, therefore letting the application set its own. (Every now and then (not often) a certain game feels not quite fast enough - then I'll set it to '2' - but most of the time 'Auto' works fine.)
    Edit: I see 'Mda400' has covered it nicely above.
     
  8. Mda400

    Mda400 Maha Guru

    Messages:
    1,089
    Likes Received:
    200
    GPU:
    4070Ti 3GHz/24GHz
    The Maximum Pre-Rendered Frames setting has always had an effect on input latency, regardless of whether the GPU is taxed or not.

    Depending on what it's set to, the GPU will wait for the CPU to prepare that many frames before rendering them.
    The more time the GPU spends near 100%, the more efficient the CPU can be considered at sending frames to the GPU when needed.

    If the value is set too low (On or Ultra Low Latency Mode), the framerate may drop if the CPU is too saturated to send the GPU frames as fast as it's ready for them.
    If the value is set too high, you may experience stutters or low performance if the GPU is holding back the CPU by waiting for it to prepare more frames than necessary.

    It is a buffer and it should be set to the lowest setting (Ultra in this case), unless you are not seeing the performance you expect due to an under-performing CPU.
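
    As a back-of-the-envelope example: at 60 fps each frame is ~16.7 ms, so if the queue is sitting full at a depth of 3 (and the GPU is the bottleneck), a frame can already be about 3 x 16.7 ≈ 50 ms old by the time it reaches the screen, versus ~17 ms with a depth of 1. That difference is the latency this setting trades against framerate stability.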

    https://forums.guru3d.com/threads/good-and-bad-with-max-prerender-frames-1-vs-8.420499/
     
    Last edited: Dec 3, 2019
    BlindBison likes this.
  9. Astyanax

    Astyanax Ancient Guru

    Messages:
    17,011
    Likes Received:
    7,351
    GPU:
    GTX 1080ti
    "Perceived" is the key word.
     
  10. HeavyHemi

    HeavyHemi Guest

    Messages:
    6,952
    Likes Received:
    960
    GPU:
    GTX1080Ti
    Facts is the key word. You're claiming it has zero effect, or only a 'perceived' one, when the facts say otherwise, depending upon many factors including, for example, deferred rendering. My opinion, based on the evidence, is that your perception is wrong.
     

  11. BlackNova92

    BlackNova92 Master Guru

    Messages:
    206
    Likes Received:
    13
    GPU:
    16gb
    I usually cap my fps with RTSS and S-Sync; I think Reduce Buffering is on and pre-render is at 1, but yeah, in Overwatch's case it's really hard to understand what you actually need to enable and what not.
     
  12. HeavyHemi

    HeavyHemi Guest

    Messages:
    6,952
    Likes Received:
    960
    GPU:
    GTX1080Ti
    This is true for quite a few games where vsync settings do not always work the way you would expect. Particularly when using NVCP versus in game settings. It also depends upon your FPS in game and how it deals with going over and under your screen sync rate. I've almost started a list of settings per game on what will tear and what will not. But driver and game updates can change that.
     
  13. Astyanax

    Astyanax Ancient Guru

    Messages:
    17,011
    Likes Received:
    7,351
    GPU:
    GTX 1080ti
    There is no perceived effect from changing this if your GPU isn't maxed out. That's a fact.
     
  14. BlindBison

    BlindBison Ancient Guru

    Messages:
    2,414
    Likes Received:
    1,139
    GPU:
    RTX 3070
    @Mda400 Thank you very much for your comments, that’s really very interesting.
     
    Last edited: Dec 4, 2019
  15. PQED

    PQED Active Member

    Messages:
    92
    Likes Received:
    32
    GPU:
    MSI 1070 Gaming X
    It is not. Learn to admit when you're wrong, it might serve you better in the future.
    Any comeback from you now is only going to sound petty to anyone reading this thread, so don't bother.
     

  16. Astyanax

    Astyanax Ancient Guru

    Messages:
    17,011
    Likes Received:
    7,351
    GPU:
    GTX 1080ti
    Learn to discern reality from placebo.


    When the GPU is not in full use, the CPU and GPU are just spitting frames out unrestrained, with the frametime fitting within the swap interval.

    You only perceive the pre-render setting having any effect when the GPU is not maintaining a persistent frametime that fits within the swap interval and "buffer bloat" takes place.
     
    Last edited: Dec 4, 2019
    BlindBison likes this.
  17. BlindBison

    BlindBison Ancient Guru

    Messages:
    2,414
    Likes Received:
    1,139
    GPU:
    RTX 3070
    @PQED @Astyanax For what it's worth, Chris from BattleNonSense did find in his recent tests of Overwatch and Battlefield V that having Ultra Low Latency enabled actually increased input lag by a little bit when FPS were capped using those games' in-engine fps limiters and the GPU wasn't maxed out.

    Now, I don't know if there was an error in his testing, or if this is limited to his rig, those games, or their limiters, etc. -- but it seemed to demonstrate that the low lag modes helped when you were GPU limited, and did nothing or even hurt if you were CPU limited. That's my understanding, and I could be wrong, so correct me if I'm misinterpreting the results of those tests.

    The results of those tests are somewhat unusual I think since I remember the DisplayLag website tests that found lowering MPRF to 1 did result in an input lag reduction even though the title they were testing had its fps capped in-engine (Street Fighter IV).
     
    Last edited: Dec 4, 2019
  18. BlindBison

    BlindBison Ancient Guru

    Messages:
    2,414
    Likes Received:
    1,139
    GPU:
    RTX 3070
    @Mda400 If you get the chance, could you clarify something for me? I'm trying to interpret what you're saying here correctly:

    > "Depending on what its set to, the GPU will wait for the CPU to prepare that many frames before rendering them.
    The more time the GPU is spent near 100%, the more efficient a CPU can be considered in sending frames to the GPU when needed. If the value is set too low, (On or Ultra Low Latency Mode) the framerate may drop if the CPU is too saturated to send more frames to the GPU that its ready for. If the value is set too high, you may experience stutters or low performance if the GPU is holding back the CPU by waiting for it to prepare more frames than necessary. It is a buffer and it should be set to the lowest setting (Ultra in this case), unless you are not seeing the performance you expect due to an under-performing CPU."

    If I'm understanding you correctly, you're saying that at least in some cases increasing the size of the flip queue/MPRF can actually result in MORE stutter/hitching. I wanted to double check and make sure I had you right there, since iirc AMD's note for the flip queue used to state that lowering it could cause stuttering, so I always thought higher values prevented stutter/hitching, but I definitely could be wrong here.

    @CaptaPraelium did a write-up here regarding Future Frame Rendering in Battlefield V for example (https://www.reddit.com/r/BattlefieldV/comments/9vte98/future_frame_rendering_an_explanation/) where he described how the flip queue works and I wonder if I am misunderstanding something here.

    Does the flip queue/MPRF work like A) or like B) here:

    A) The CPU prepares 3 frames, then sends all of them to the GPU at once and begins working on the next 3 frames.
    B) The CPU prepares 1 frame and sends it to the GPU; then, while the GPU is working on that frame, it begins work on the 2nd frame, and -- if it has time (i.e. the GPU is still on the 1st frame when the CPU finishes the 2nd) -- it begins working on the 3rd, passing its most recent frame to the GPU whenever the GPU is ready.

    Going off Praelium's post I'd thought it was B), but going off your note here, it sounds like you're saying it's A).
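
    To make sure I'm describing B) clearly, here's a toy sketch of what I mean (purely illustrative, obviously not how a real driver is written):

    #include <cstdio>
    #include <queue>

    struct Frame { int id; };

    int main()
    {
        const int kQueueDepth = 3;            // like MPRF/flip queue size = 3
        std::queue<Frame> flipQueue;
        int nextCpuFrame = 0;

        for (int tick = 0; tick < 6; ++tick)  // one tick ~= one GPU frame
        {
            // CPU side: keep preparing frames one at a time until the queue
            // is full, then stall (nothing is batched up and sent "3 at once").
            while ((int)flipQueue.size() < kQueueDepth)
                flipQueue.push(Frame{nextCpuFrame++});

            // GPU side: render the oldest queued frame. With the queue kept
            // full, what reaches the screen is always kQueueDepth frames behind
            // the frame the CPU (and my input) just produced.
            Frame f = flipQueue.front();
            flipQueue.pop();
            std::printf("tick %d: GPU renders frame %d while CPU is on frame %d\n",
                        tick, f.id, nextCpuFrame);
        }
    }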

    Thanks for your time, I really appreciate it -- I find it very hard to find good info regarding the flip queue/MPRF and what one should best set it to.

    EDIT: Here's a Google Doc with that AMD note on how lower values can cause hitching/stutter: https://docs.google.com/document/d/1ydvAKHF4NvybJIWmZEocoi0-gMYg4NhvYhWNLyohNAQ/edit?usp=sharing

    Really the debate should be with @CaptaPraelium's reddit post on FFR, since -- provided he got everything right about how FFR works -- I think he's made a very compelling case to leave MPRF at the default setting, or at least a value of 2 or 3 (where 3 is generally the default from what I've read).
     
    Last edited: Dec 4, 2019
  19. Mda400

    Mda400 Maha Guru

    Messages:
    1,089
    Likes Received:
    200
    GPU:
    4070Ti 3GHz/24GHz
    MPRF/flip queue size controls how many frames a CPU can prepare before sending them to the GPU to be painted/rendered.

    Higher values can allow a weak CPU more time to prepare frames for a powerful GPU, at the expense of additional latency.
    This would make sure framerates stay optimal with a CPU under stress.

    Lower values can allow a powerful CPU to prepare frames unconstrained and pass them to a powerful or weak GPU to render with little latency.

    If it's set too high and the framerate is capped, your CPU could be left waiting on the GPU to render frames, which may cause this queue to "bloat" and produce stutters/low performance.

    This is why it's best to set it globally, as low as possible for all applications (Ultra Low Latency Mode in Nvidia's case), and if you experience low performance in certain applications while knowing your CPU may be saturated, then this would be a setting to start changing when troubleshooting.

    Again, it depends on the hardware in use. But as an example for my setup, if I choose "Off" for Low Latency Mode and use the sys_MaxFPS command to cap my framerate in Crysis 2, my performance varies from ~40-60 fps, whereas with Ultra Low Latency Mode it will stay at 60 fps all the time.
     
    Last edited: Dec 4, 2019
    BlindBison likes this.
  20. BlindBison

    BlindBison Ancient Guru

    Messages:
    2,414
    Likes Received:
    1,139
    GPU:
    RTX 3070
    @Mda400 Thanks for explaining that, that's really fascinating -- glad you brought up Crysis 2, since I noticed anecdotally that my 8700K + 1080 Ti rig stuttered less frequently in Crysis 3 with MPRF set to the lowest value rather than default, but I figured it was just me or just that test run, since most things I've seen online say higher MPRF/flip queue values generally prevent stuttering.

    What I find odd with all this is the conflicting tests I've seen: in those BattleNonSense tests, a higher flip queue resulted in lower lag with the in-engine fps limiters, while in DisplayLag's tests a lower MPRF setting decreased input lag with an in-engine fps cap. Very odd results, but perhaps one of them was limited by the CPU and one by the GPU; I'm uncertain.

    It's sort of annoying to me that Nvidia removed the old MPRF setting, since the new one no longer permits a value of "2" -- only Default, 1, or Ultra -- where 2 could in some cases be a good compromise (since the default can be 3 or potentially even more depending on the game, as I understand it).

    So, have I got this correct then?

    1) Weak CPU + Powerful GPU = Default MPRF/FP settings
    2) Powerful CPU + Weak GPU = Ultra Low Lag Mode ON (or MPRF = the min)
    3) Powerful CPU + Powerful GPU = What exactly? Still the min? Or would you want a balanced setting like MPRF = 2 or ON rather than Ultra here perhaps?
    4) Weak CPU + Weak GPU = Same as #3?

    Thank you for your time, it's awesome getting to talk with people who understand these settings.
     
    Last edited: Dec 4, 2019
