Hello there guys, I recently came across CaptaPraelium's thread on how "Maximum Pre-rendered Frames" works and was hoping someone could help clarify a few points in the write-up for me: https://www.reddit.com/r/BattlefieldV/comments/9vte98/future_frame_rendering_an_explanation/

After reading that write-up on FFR (Future Frame Rendering), my understanding is that forcing the setting to 1 in the control panel effectively "disables" FFR (since the CPU always has to pre-render at least 1 frame). The CPU prepares the first frame, passes it to the GPU, and then idles until the GPU is done with that frame before starting on the next. Then, while the CPU works on the next frame, the GPU sits idle until the CPU finishes its part (as I understand it based on the write-up).

A value of 1 is probably not recommended then -- at least not generally, or when you're likely to be CPU bound -- since you'd get worse hardware utilization: the CPU and GPU would both idle more often, resulting in worse frame pacing/frametimes and a lower overall framerate (correct me if I got anything wrong there, of course). Forcing the value to 2 should make more efficient use of one's hardware and seems to have several advantages as described in the write-up (less CPU/GPU idling, better frame pacing, and higher overall FPS).

My main question is about the "default" value of 3 set by Windows and "most" apps, as I understand it. I have a guess as to why this is the default, but I wanted to ask since I may have it wrong.

1) Suppose the value is forced to 2: the CPU prepares the first frame, passes it to the GPU, and begins work on the second frame, so the CPU and GPU both start work on a frame at the same time. What happens if the GPU wins that race and finishes its work on the first frame before the CPU finishes its work on the second?
Well, then I'd expect the GPU to idle until the CPU finishes its work on the second frame, wasting hardware potential for some amount of time. This is why I bet the default value is 3: so that the "race" between the CPU and GPU -- where the GPU wins and then idles -- is mitigated and probably won't occur often. I could be wrong there, of course, but that's my initial thought on why the default would be 3 instead of 2, based on how I've read that write-up.

2) If a game is GPU bound instead of CPU bound, then I'd expect the flip queue size / FFR setting not to matter past 2, since the GPU won't be winning the "race" I described above -- at least not often. Based on the write-up, I'd expect a value of 2 vs. 1 to ALWAYS give some level of performance improvement, because with a value of 1 the CPU and GPU each have idle time built in no matter what while the other component works on the current (and only) frame. Again, correct me if I'm wrong there.

All that to say: going off CaptaPraelium's thread above, my expectation is that you'd generally want at least 2 for this setting (i.e. enable FFR) to improve hardware utilization/FPS/frame pacing and reduce stutter, but 3 might be the overall best value, since I bet it helps in the cases where the GPU sometimes finishes the first frame before the CPU has finished the second. Is there any case where disabling FFR WON'T negatively impact performance/frame pacing/etc.? As I understand it, disabling FFR completely would always reduce performance by at least some amount, since the CPU and GPU will both definitely idle (if I'm understanding that correctly).
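To sanity-check my intuition about that race, I threw together a toy Python sketch -- my own gross simplification, not how the driver actually schedules anything. The CPU may run at most `max_ahead` frames ahead of the GPU (my stand-in for the pre-rendered frames value), per-frame CPU and GPU costs are jittered so either side can win the race, and I measure the average frame time and total GPU idle time:

```python
import random

def simulate(max_ahead, n_frames=2000, seed=42):
    """Toy pipeline: CPU prepares frames, GPU renders them, and the CPU
    may be at most `max_ahead` frames ahead of the GPU. Returns
    (average frame time, total GPU idle time) in arbitrary "ms" units."""
    rng = random.Random(seed)
    cpu_done = [0.0] * n_frames   # time the CPU finishes preparing frame i
    gpu_done = [0.0] * n_frames   # time the GPU finishes rendering frame i
    gpu_idle = 0.0
    t_cpu = 0.0                   # the CPU's running clock

    for i in range(n_frames):
        # The CPU may not start frame i until frame i - max_ahead has
        # drained from the queue (i.e. the GPU finished it). With
        # max_ahead=1 this makes the pipeline fully serial, matching the
        # "FFR disabled" description above.
        if i >= max_ahead:
            t_cpu = max(t_cpu, gpu_done[i - max_ahead])
        t_cpu += rng.uniform(4.0, 8.0)          # jittered CPU cost per frame
        cpu_done[i] = t_cpu

        gpu_ready = gpu_done[i - 1] if i else 0.0
        start = max(cpu_done[i], gpu_ready)      # GPU waits if the CPU lost the race
        gpu_idle += start - gpu_ready            # time the GPU sat idle
        gpu_done[i] = start + rng.uniform(4.0, 8.0)  # jittered GPU cost per frame

    return gpu_done[-1] / n_frames, gpu_idle

for depth in (1, 2, 3):
    avg, idle = simulate(depth)
    print(f"max_ahead={depth}: avg frame time {avg:.2f} ms, GPU idle {idle:.0f} ms")
```

In this toy model, depth 1 is strictly serial (frame time is roughly CPU cost + GPU cost), depth 2 overlaps the two, and depth 3 buffers the occasional frame where the GPU wins the race -- which is exactly the intuition I'm asking about.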
3) CaptaPraelium's thread mentions that individual testing has to be done to find the "optimal" value, since it's so game and hardware dependent. My interpretation is that different games and hardware setups will be more or less CPU bound, and the more CPU bound you are, the more a higher FFR setting will help -- though I may be misunderstanding something there. I noticed that in some games, like The Witcher 3, setting this to anything less than 3 or 4 resulted in horrendous stutter/hitching/micro-stutter with uncapped framerates (1080 Ti + 8700K at stock clocks).

In light of all this -- and since I, and I expect many others, really just want a global value to force and forget -- I've just gone ahead and defaulted to "Let the 3D Application Decide" for the time being. It seems unfortunate that finding more information on this setting has been so difficult, and even now it seems to be an extremely controversial setting depending on who you ask/where you look. I hope AMD's new "Low Lag" feature is some sort of "dynamic" flip queue and that Nvidia follows suit, because as it works now the setting seems pretty clunky and comes with notable cons no matter how you slice it.

Thank you for your help and time, I really appreciate it (sorry for the long read).