Hello there guys, I recently came across CaptaPraelium's thread on how "Maximum Pre-rendered Frames" works and was wondering if anyone could help clarify certain points in the write-up for me: https://www.reddit.com/r/BattlefieldV/comments/9vte98/future_frame_rendering_an_explanation/

After reading that write-up regarding FFR (Future Frame Rendering), my understanding is that forcing the setting to 1 in the control panel "disables" FFR (since the CPU always has to pre-render at least 1 frame). So the CPU prepares the first frame, passes it to the GPU, and then idles until the GPU is done with that frame before starting on the next. Likewise, while the CPU is working on the next frame, the GPU sits idle until the CPU is done with its part (as I understand it based on the write-up).

A value of 1 is probably not recommended then -- at least not generally, or when you're likely to be CPU bound -- since you'll get worse hardware utilization: the CPU and GPU will both idle more often, resulting in worse frame pacing/frametimes and a lower overall framerate (correct me if I got anything wrong there, of course). Forcing the value to 2 should make more efficient use of one's hardware and seems to have several advantages as described in the write-up (less CPU/GPU idling, better frame pacing, higher overall fps).

My main question is about the "default" value of 3 set by Windows and "most" apps, as I understand it. I have a guess as to why this is the default, but I wanted to ask since I may have it wrong.

1) Suppose the value is forced to 2 -- the CPU prepares the first frame, passes it to the GPU, and begins work on the second frame, so the CPU and GPU are now both working on a frame at the same time. What happens if the GPU wins that race and finishes its work on the first frame before the CPU finishes its work on the second frame?
Well, then I expect the GPU would idle until the CPU finishes its work on the second frame, resulting in wasted hardware potential/an idling GPU for some amount of time. This is why I'd bet the default value is 3: so that the "race" between the CPU and GPU, where the GPU wins and then idles, is mitigated and probably won't occur often. I could be wrong there, of course, but that's my initial thought on why the default would be 3 instead of 2, based on how I've read that write-up.

2) If a game is GPU bound instead of CPU bound, then I expect the flip queue size / FFR setting wouldn't matter past 2, since the GPU won't be winning the "race" I described above -- at least not often. Based on the write-up, I would expect a value of 2 vs. 1 to ALWAYS result in some level of performance improvement, because with a value of 1 the CPU and GPU have idle time built in no matter what while the other component works on the current (and only) frame. Of course, correct me if I'm wrong there.

All that to say: going off CaptaPraelium's thread above, my expectation is that you'd generally want at least 2 for this setting (i.e. enable FFR) to improve hardware utilization/fps/frame pacing and reduce stutter, but 3 might be the best overall value, since I bet it helps mitigate the cases where the GPU sometimes finishes the first frame before the CPU has finished the second. Is there any case where disabling FFR WON'T negatively impact performance/frame pacing/etc.? As I understand it, disabling FFR completely would always reduce performance by at least some amount, since the CPU and GPU will both definitely idle (if I'm understanding that correctly).
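To sanity-check my own mental model here, I threw together a toy producer/consumer simulation. To be clear, the timings and the back-pressure rule (CPU may be at most N frames ahead of completed GPU work) are purely my assumptions about how the queue behaves per the write-up, not how the driver actually schedules anything:

```python
import random

def simulate(queue_depth, frames, cpu_ms, gpu_ms):
    """Toy model: the CPU may be at most `queue_depth` frames ahead of
    completed GPU work (my reading of the MPRF / flip-queue knob).
    cpu_ms / gpu_ms are callables returning per-frame work in ms.
    Returns (average frame time in ms, total GPU idle time in ms)."""
    cpu_done = 0.0                 # when the CPU finishes its current frame
    prev_gpu_done = 0.0            # when the GPU finished the previous frame
    gpu_done = [0.0] * frames      # completion time of each frame
    gpu_idle = 0.0
    for i in range(frames):
        # Back-pressure: with queue_depth frames outstanding, the CPU
        # stalls until frame i - queue_depth has finished on the GPU.
        wait_until = gpu_done[i - queue_depth] if i >= queue_depth else 0.0
        cpu_start = max(cpu_done, wait_until)
        cpu_done = cpu_start + cpu_ms()
        gpu_start = max(cpu_done, prev_gpu_done)  # GPU idles if the CPU is late
        gpu_idle += gpu_start - prev_gpu_done
        prev_gpu_done = gpu_done[i] = gpu_start + gpu_ms()
    return prev_gpu_done / frames, gpu_idle

# Constant 6 ms of CPU and 6 ms of GPU work per frame:
# depth 1 serializes (~12 ms/frame), depth 2 pipelines (~6 ms/frame).
print(round(simulate(1, 1000, lambda: 6.0, lambda: 6.0)[0], 2))  # 12.0
print(round(simulate(2, 1000, lambda: 6.0, lambda: 6.0)[0], 2))  # 6.01

# Jittery CPU (4-8 ms) vs a steady 6 ms GPU: depth 3 buffers the jitter,
# so the GPU starves less often than at depth 2.
rng = random.Random(42)
for depth in (1, 2, 3):
    rng.seed(42)  # same per-frame timings for a fair comparison
    avg, idle = simulate(depth, 1000, lambda: rng.uniform(4.0, 8.0), lambda: 6.0)
    print(depth, round(avg, 2), round(idle, 1))
```

In this crude model, at least, depth 2 vs. 1 always helps (that's just the pipelining), and depth 3 only helps over 2 when the CPU times are jittery enough that the GPU would otherwise win the race and starve -- which matches my guess above about why 3 is the default.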
3) CaptaPraelium’s thread above mentions that individual testing has to be done to find the "optimal" value, since it is so game- and hardware-dependent. My interpretation is that different games and hardware setups will be more or less CPU bound, and the more CPU bound you are, the more a higher FFR setting will help (I may be misunderstanding something there, though). I noticed that in some games, like The Witcher 3, setting this to anything less than 3 or 4 resulted in horrendous stutter/hitching/micro-stutter with uncapped framerates (1080 Ti + 8700K at stock clocks).

In light of all this -- and since I, like I expect many others, really just want a global value to force and forget -- I've gone ahead and defaulted to "Let the 3D Application Decide" for the time being. It seems unfortunate that finding more information on this setting has been so difficult; even now it seems to be an extremely controversial setting depending on who you ask and where you look. I hope AMD's new "Low Lag" feature is some sort of "dynamic" flip queue, and that Nvidia follows suit, because as it works now the setting seems pretty clunky and comes with notable cons no matter how you slice it. Thank you for your help and time, I really appreciate it (sorry for the long read).
@CaptaPraelium Tagging you here since I believe the reddit write-up in question is yours, in case you have any further input on this. Thanks
Very, very interesting here, thanks a lot friend. I knew some of these details for a very long time, but not in this depth or like the link shown. And yes, it can help with frametimes and stuttering, but every game is going to react differently, and the resolution you're playing at matters too. For me, I am basically ALL GPU bound at 4k(2400) -- well, at least on my 2 1080 Tis -- at least 4k(2400p) with max settings and/or some kind of SGSSAA+MSAA injection, or 5k, 6k, or even 8k(4800p). And yes, I have tried 8k on my 2 1080 Tis; it's incredible some of the games are even moving. 4k to 8k is 4 times the resolution -- that is tremendous.

If you're on an SLI system, tuning those bits within Nvidia Inspector can play a HUGE role in reducing frametimes and/or stuttering. Take for example Alan Wake: whenever you're near lights, man, the frametime (ms) goes from like 11 or so to a whopping 60+. It is so bad it ain't funny... But if you look into my 4-way thread, I described how to eliminate the issue 100% while increasing performance (SLI compatibility bits). In single-GPU mode it's very mild, nowhere near as bad as during SLI. When you're in the SLI realm, anything can go wrong -- whether it's in-game settings, tuning those bits, or pre-rendered frames ahead, anything could be the culprit -- and that is why many gamers choose 1 card over all. I do not disagree with them: a lot less headache, and surely no tuning involved, lol. But yes, thanks for that link, man, that is a lot of info on this topic. Thanks a lot.
Just a note to add: Blizzard support recommends enabling "Reduce Buffering" in Overwatch (my understanding is that this is the equivalent of Max Pre-rendered Frames = 1) if a player is consistently at or above their monitor's refresh rate. So one case where forcing a value of 1 might be preferred is when you cap your framerate. For example, if you're playing Overwatch on a 144 Hz G-Sync monitor with a high-end PC and are nearly always at your fps cap of 141, you could probably set this to 1 without issue, I expect: even if you're losing some potential fps, it doesn't matter, since you're capping at a value that is easily achievable even with FFR/MPRF at 1. If a user is frequently dipping beneath their cap, though, or getting hitching/stutter, it's probably best to turn Reduce Buffering OFF. If someone is playing a game with an uncapped framerate, I'd expect that's when higher MPRF values would mostly kick in, since then you'll be CPU limited more frequently. In any case -- and I hope somebody in the know corrects me if I'm wrong -- it seems to me there will always be some performance loss moving from 2 to 1, since according to the CaptaPraelium write-up it means built-in CPU/GPU idling, as I currently understand it.

@A M D BugBear Thanks for your comment there, that's good to know. Your SLI setup sounds like a treasure trove of useful information.
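To put some rough numbers on the capped-framerate case: the timings below are made up for illustration, and the latency sketch is just my assumption that each extra pre-rendered frame can sit in the queue for up to one frame-time before display -- not a claim about what the driver actually does:

```python
cap_hz = 141
frame_budget_ms = 1000.0 / cap_hz        # ~7.09 ms per frame at the cap
cpu_ms, gpu_ms = 2.5, 3.5                # hypothetical high-end rig timings

# MPRF = 1 fully serializes CPU and GPU work (per the write-up's model),
# so the worst-case frame time is simply their sum.
serialized_ms = cpu_ms + gpu_ms          # 6.0 ms -> ~167 fps still possible
print(serialized_ms <= frame_budget_ms)  # True: the 141 fps cap is still met

# Crude input-latency sketch: each extra queued frame the CPU runs ahead
# can add up to one frame-time of delay before its frame is displayed.
def worst_case_latency_ms(queue_depth):
    return cpu_ms + (queue_depth - 1) * frame_budget_ms + gpu_ms

print(round(worst_case_latency_ms(1), 1))  # 6.0 ms
print(round(worst_case_latency_ms(3), 1))  # 20.2 ms
```

So if the rig comfortably beats the cap even when fully serialized, MPRF = 1 costs no fps and (in this crude model) shaves off the queueing latency -- which would explain Blizzard's recommendation. And if serialized_ms exceeded the budget, you'd start dropping below the cap, which matches the "turn it off if you're dipping" caveat.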