Hi there guys. I'd taken this as a no-brainer for years, but I'm asking because BattleNonSense recently ran an input lag test on YouTube with results that surprised me and made me reconsider (links toward the bottom of the post). For example, in those tests Chris found that in Overwatch and Battlefield V, while using G-Sync with the in-game frame rate limiter capping fps to a value that is consistently achievable, forcing Max Prerendered Frames to the new Ultra Low Latency mode actually increased input lag by a small amount.

However, he did not test traditional V-Sync methods, nor did he repeat these tests with RivaTuner (RTSS). Additionally, Overwatch and Battlefield V have proper in-engine fps limiters, as I understand it, which is not always the case: many games don't include one, and some included limiters actually seem worse than RTSS (going off a Hardware Unboxed test of the Far Cry 5 limiter, for example). Notably, the in-game Overwatch "Reduce Buffering" option was also omitted from testing, presumably because it does something similar to reducing the size of the flip queue/prerendered frames/the new Ultra Low Latency mode (that's my bet, anyway), but it still would have been nice to see it tested for differences against the new Ultra mode.

Anyway, some of those results seem odd to me. DisplayLag did a test a long while back showing that in Street Fighter IV, setting Max Prerendered Frames to 1 (the minimum) actually reduced input lag (SFIV has a forced in-engine fps cap of 60, iirc), so it's strange that wasn't also the case in Chris's tests of the new Ultra Low Latency modes from AMD and Nvidia. I do understand that these low-lag modes only matter if you're limited by your GPU (or so Chris explains in his video). However, if you cap your fps to a consistently achievable value, then you shouldn't be limited by either your GPU or your CPU, I would think.

So I'm just trying to make sense of all this and determine which settings are best overall. If the BattleNonSense test is accurate, then it seems default prerender/flip queue settings + an in-game/in-engine fps cap at a consistently achievable value is the way to go (though he tested only Ultra, not MPRF = 1, so perhaps Ultra itself is the problem; I'm unsure). But what about traditional V-Sync users, and the old DisplayLag Street Fighter tests that showed MPRF = 1 reduced input lag? Street Fighter is capped to 60 fps in-engine, so it's already limiting its fps correctly, right? It just seems odd they'd get different test results, though perhaps the difference comes down to G-Sync vs. traditional V-Sync, the different fps-limiting implementations of Street Fighter vs. Overwatch/Battlefield, or the difference between the new Ultra Low Latency mode and the older MPRF = 1. I really don't know.

Thanks for your help and time, I appreciate it.

BattleNonSense test/video:

Older DisplayLag tests (these may have been flawed, it seems; it's been said, for example, that V-Sync (Smooth) wouldn't reduce input lag, though I've only seen this one test of it, and SLI from what I've read also adds a frame of input lag to boot): https://displaylag.com/reduce-input-lag-in-pc-games-the-definitive-guide/
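To make the "not GPU-limited" reasoning above concrete, here's a toy model I threw together of a render-ahead (flip) queue, where queue depth plays the role of Max Prerendered Frames. All the numbers (4 ms CPU time, 12/16 ms GPU time, the 60 fps cap) are made up for illustration, not measurements from any of the tests linked above:

```python
# Toy model of a render-ahead (flip) queue. Purely illustrative numbers --
# "queue_depth" plays the role of Max Prerendered Frames.

def simulate(n_frames, queue_depth, cpu_ms, gpu_ms, cap_fps=None):
    """Average steady-state submit-to-display latency (ms) for a CPU->GPU pipeline."""
    cap_ms = 1000.0 / cap_fps if cap_fps else 0.0
    submit = [0.0] * n_frames   # when the CPU hands frame i to the driver
    finish = [0.0] * n_frames   # when the GPU finishes rendering frame i
    for i in range(n_frames):
        earliest = submit[i - 1] + max(cpu_ms, cap_ms) if i else 0.0
        # The queue holds at most queue_depth in-flight frames, so frame i
        # can't be submitted until frame i - queue_depth has left the GPU.
        if i >= queue_depth:
            earliest = max(earliest, finish[i - queue_depth])
        submit[i] = earliest
        finish[i] = max(submit[i], finish[i - 1] if i else 0.0) + gpu_ms
    lat = [finish[i] - submit[i] for i in range(n_frames)]
    half = n_frames // 2
    return sum(lat[half:]) / (n_frames - half)  # ignore warm-up frames

# GPU-bound: the CPU (4 ms/frame) outruns the GPU (16 ms/frame), the queue
# fills up, and latency grows in proportion to queue depth.
for depth in (1, 3):
    print(f"uncapped,  depth={depth}: {simulate(500, depth, 4, 16):5.1f} ms")

# Capped below what the GPU can sustain (12 ms/frame => ~83 fps, capped at 60):
# the queue stays empty, so queue depth stops mattering.
for depth in (1, 3):
    print(f"60fps cap, depth={depth}: {simulate(500, depth, 4, 12, cap_fps=60):5.1f} ms")
```

In this toy model the uncapped case goes from ~16 ms to ~48 ms of latency as the queue deepens from 1 to 3, while the capped case sits at ~12 ms regardless of depth, which is exactly why I'd expect the prerender/Ultra setting to make no difference once fps is capped to an achievable value. Whether real drivers behave like this simple model is of course part of my question.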
P.S. Currently, for traditional V-Sync, my understanding is that the "optimal" setup is as follows (BlurBusters did a low-lag guide a while back, which is largely where I'm taking this from):
1) Cap fps to 60 (assuming a 60 Hz monitor), using an in-engine fps limiter if possible. If using RTSS, you can set this to a more exact decimal value in line with the BlurBusters guide, though proper in-engine limiters still seem to have lower input lag in the tests I've seen, even if their frame pacing can be discernibly worse. (Rough sketch of how such a limiter works below.)
2) Set Max Prerendered Frames/Flip Queue size to the minimum (1 before, though perhaps Ultra is now superior; this is uncertain to me given the possibly conflicting tests above).
3) Use double-buffered, single-GPU V-Sync (SLI adds a frame of lag, and triple buffering adds a frame of lag in the tests I've seen).
4) If at 30 fps, force half-refresh V-Sync via Nvidia Inspector.
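For anyone curious what an external limiter is actually doing, here's my own rough sketch of the hybrid sleep/spin-wait technique a precise limiter has to use (this is not RTSS's actual code; the 2 ms spin margin and the 59.94 cap are just illustrative):

```python
import time

def run_capped(render_frame, target_fps, n_frames):
    """Run render_frame at most target_fps times per second."""
    interval = 1.0 / target_fps
    deadline = time.perf_counter()
    for _ in range(n_frames):
        render_frame()
        deadline += interval
        # Coarse sleep covers most of the wait (cheap, but imprecise)...
        slack = deadline - time.perf_counter() - 0.002
        if slack > 0:
            time.sleep(slack)
        # ...then spin-wait the last ~2 ms for tight frame pacing.
        while time.perf_counter() < deadline:
            pass

# e.g. a fractional cap just below refresh, as the BlurBusters guide suggests
# for a 60 Hz display (exact value depends on your display's true refresh rate):
run_capped(lambda: None, 59.94, 120)
```

As I understand it, the reason in-engine limiters tend to have lower input lag than something like this is that the engine can insert the wait *before* it samples input for the frame, whereas an external limiter can only delay a frame that was already prepared with older input.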