Discussion in 'Videocards - NVIDIA GeForce Drivers Section' started by Tastic, Jul 16, 2012.
@CrunchyBiscuit Thanks! That's good to know
hi, just want to make sure... my laptop screen's native refresh rate was measured at 120.113 Hz on both sites... so, I just need to put 120.103 fps in the RTSS fps limiter and then enable v-sync, right?
And if my GPU only manages 60 fps, I should set the fps limiter to 60.046 fps and enable 1/2 v-sync in Nvidia Inspector, is that correct?
Or would it be better if I changed the refresh rate to 120.010 Hz and capped at 120 fps (or to 60.010 Hz with a 60 fps cap)?
Or maybe it doesn't matter in my case?
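For what it's worth, the arithmetic behind those numbers is just "cap slightly below the measured refresh". A sketch (the 0.01 margin is only what the numbers above imply; it's not an official value):

```python
# Sketch of the low-lag vsync cap arithmetic: cap a small margin below
# the measured refresh rate, and half that refresh for 1/2 vsync.
# The 0.01 margin is an assumption based on the numbers in this thread.

def vsync_caps(measured_refresh_hz, margin=0.01):
    """Return (full-rate cap, cap for 1/2 refresh vsync)."""
    return measured_refresh_hz - margin, measured_refresh_hz / 2 - margin

full, half = vsync_caps(120.113)
print(full, half)  # ~120.103 and ~60.0465
```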
Wasn't sure where to ask this or if it's already been asked and answered, but... does the max prerender setting have any effect if you choose to run Fast Sync?
Yep, it does. Fast Sync does not change anything here. MPRF will have a big effect if the GPU can't keep up with the CPU: the higher MPRF is, the further ahead of the GPU the CPU gets. Since Fast Sync seems to throttle the framerate to a multiple of the refresh rate, this might happen less often, but it can still happen when there's no throttling.
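A toy model of why that matters (not the driver's real scheduler, just the queueing idea): when the GPU is the bottleneck, the render-ahead queue fills to the MPRF limit and each queued frame adds one GPU frame time of latency.

```python
# Toy model (illustrative, not NVIDIA's implementation): the CPU may
# queue up to `mprf` prepared frames ahead of the GPU. When the GPU is
# slower than the CPU, the queue fills and each queued frame adds its
# GPU frame time to input latency.

def queue_latency_ms(cpu_ms, gpu_ms, mprf):
    """Steady-state extra latency from the render-ahead queue."""
    if gpu_ms <= cpu_ms:
        return 0.0            # GPU keeps up: queue stays (near) empty
    return mprf * gpu_ms      # GPU-bound: queue fills to the limit

print(queue_latency_ms(5, 20, 3))  # GPU-bound, MPRF 3: up to 60 ms queued
print(queue_latency_ms(5, 20, 1))  # same scenario, MPRF 1: up to 20 ms
print(queue_latency_ms(20, 5, 3))  # CPU-bound: 0.0, the setting is moot
```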
Thank you that's great to know
Just a heads-up since I know we talked about this in previous threads/earlier in this one, but regarding the "optimal" / "preferred" global value for the "Flip Queue" / "Max Prerendered Frames" setting, I found this recently which I wanted to verify here at Guru3D:
MPRF / Flip Queue explanation:
This guy seems to go pretty in depth with it and describes some of the problems with lower MPRF / FQ values, such as an idling CPU / GPU and worse frametimes.
A notable point made here seems to be this one:
> "It is also important to consider the meaning of the setting - it is a MAXIMUM pre-rendered frames, or a render ahead LIMIT. This means that if you set it to 3, that does not mean that the CPU will ALWAYS render a full three frames in advance of the GPU. It may only render one, and the GPU might be ready and then will take that frame to render, or it may render one and a small amount - meaning only that small amount is added to your input lag, or 2 and some fraction of a frame - meaning only one and a fraction is added to your lag.... Setting it to 3 does NOT mean it's always adding 3 full frames of input lag."
Going off of this post after reading it over a few times, it seems a global value of "Use the Default", or perhaps 2 or 3 (the Windows/AMD default, iirc), would be the overall preferred value for this. At 1, it seems one will usually lose some frames per second (I saw some Battlefield 5 benchmarks on YouTube where they swapped between MPRF 1 and 4 that seemed to confirm this, though the fps loss was relatively small), stutter/hitching will likely be more discernible/frequent, and your CPU and GPU are likely to idle more frequently.
Due to all that, a global value of at least 2 is probably recommended for general gameplay, and the Default value for this setting may actually be the best all-rounder if this write-up is correct that the CPU doesn't always prerender 3 frames ahead (rather, that's the MAX it could render ahead, going off the quote above).
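The quoted "maximum, not always" point can be illustrated with a toy pipeline simulation (made-up timings, not any real game): when the GPU usually keeps up, the queue rarely sits at the MPRF limit, so MPRF 3 does not mean a constant 3 frames of lag.

```python
import random

# Toy pipeline model (illustrative only, not the driver's real scheduler):
# the CPU may run at most `mprf` frames ahead of the GPU. With a mostly
# fast GPU the render-ahead queue rarely fills to the limit.

def simulate(n=5000, cpu_ms=7.0, mprf=3, seed=1):
    rng = random.Random(seed)
    cpu_done = [0.0] * n   # when the CPU finished preparing frame i
    gpu_done = [0.0] * n   # when the GPU finished rendering frame i
    depths = []
    for i in range(n):
        gpu_ms = rng.uniform(4.0, 9.0)  # usually faster than the 7 ms CPU
        # render-ahead limit: frame i can't start until frame i-mprf rendered
        wait = gpu_done[i - mprf] if i >= mprf else 0.0
        start = max(cpu_done[i - 1] if i else 0.0, wait)
        cpu_done[i] = start + cpu_ms
        gpu_start = max(gpu_done[i - 1] if i else 0.0, cpu_done[i])
        gpu_done[i] = gpu_start + gpu_ms
        # frames already prepared but not yet rendered when frame i is queued
        depth = sum(1 for j in range(max(0, i - mprf), i)
                    if gpu_done[j] > cpu_done[i])
        depths.append(depth)
    return sum(depths) / n, max(depths)

avg, peak = simulate()
print(f"average queue depth {avg:.2f}, peak {peak}")  # avg well below mprf
```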
Of course, all of this assumes everything in that write-up is correct -- it may not be (I don't have the technical knowledge for all of this, unfortunately, so I'm going off what I've read online about how this feature works). So I wanted to post it here, verify it with the Guru3D community, and make sure the write-up there got everything right / didn't miss anything, and I figured at least you two might have an interest in the subject/thread there.
Anyway, thanks for your time.
@Tastic You may also be interested in that thread, seeing as you began the OP for this thread and seem to have an interest in the matter.
EDIT: lol this is your post isn't it @CaptaPraelium -- just noticed that -- thanks for the detailed write-up there, it's extremely helpful. Did Chris from BattleNonsense ever get around to making tests for the MPRF size? I know he tried in OW, but only tested "Let the 3D App decide" with Reduce Buffering ON vs forced 1 in the control panel which didn't have any difference iirc (which is expected). Ideally he'd test forcing 1 in the control panel, then forcing 2, then 3, then 4 and do each with the in-game "Reduce Buffering" setting set to ON and OFF to check if there's any difference there while listing the input delay at each setting, but I don't know that he's put out a video like that yet (or if anyone has -- have yet to find one online as of the time of this writing unfortunately).
Yep, a low MPRF will prevent the CPU from running too far ahead of the GPU. If the GPU always keeps up (by using a frame cap, for example), then this never happens. But if it does happen, then a low setting increases the chance of getting so-called "CPU stutter". It can be a trade-off between low lag and a chance of stutter in some games. There is no best setting: it depends on whether you prefer low lag or a lower chance of stutter in GPU-limited situations, and also on how powerful your CPU is or, in other words, how CPU-intensive the game you're playing is.
For many games, 1 works perfectly. In others, it might reduce performance or increase the chance of stutter. By how much depends on your CPU and the game's settings ("low" vs "high" settings that affect CPU load.)
With that being said, in my experience, a higher MPRF will not actually prevent stutter. It will simply change the nature of the stutter. With MPRF 1, the stutter looks more fine-grained. Meaning you get duplicated frames at very short intervals. At higher MPRF, the stutter interval is longer. Both look bad, but I guess the longer intervals mean that if the GPU is only overloaded for a very short period of time (50ms or so), then the frame duplications can be kept to a minimum and you get less stutter.
Anyway, long story short: if MPRF 1 gives you stutter, then just try 2 and then default and see if that helps. So for me, MPRF 1 by default, unless you have stutter issues. As for the FPS cost, I haven't experienced that. (I don't count an FPS difference of 2 or 3 or something like that as performance loss. That's an acceptable price to pay for lower latency in my case.) And I cap my FPS on a per-game basis anyway, to a value the game can reach the majority (90%) of the time. That's why I got a g-sync monitor after all, so I can do exactly that: lagless vsync that looks smooth at any FPS cap I choose. The choice of MPRF then only affects the remaining 10% of the game where the FPS cap isn't reached.
@RealNC Yeah, most of the time the impact on FPS seems relatively small -- at least with good hardware. However, for "some" games, my understanding is the perf impact can be a decent amount if you're CPU limited, since the GPU sits idle waiting to be fed more frames (I could be misunderstanding something there). For example, in this test, Battlefield V actually shows a discernible difference in fps when the framerate is uncapped (see the test at around 1:20 in the video):
Seems to be something like a 10-15 fps drop in some cases. Of course, capping one's fps to what is consistently achievable would avoid this, I imagine, as you described.
I also tested Hunt Showdown with the driver forcing MPRF to 1, 2, then 3, where I saw about a 5-10 fps improvement on my rig moving from 1 to 3 (1080 Ti + 8700K at stock clocks with 16 GB of DDR4 RAM).
Just an update regarding all the above @RealNC @CrunchyBiscuit
It seems that forcing the "Power Management Mode" to "Prefer Max Performance" has largely resolved the awful stutter issues I was experiencing previously.
Higher values for MPRF / Flip Queue do still appear to help provide a smoother-looking presentation in some titles (especially with an uncapped framerate), and hitching/stutter may be reduced even further with higher values there, along with other apparent benefits such as at least slightly higher overall fps in some titles. Even so, I'm now able to globally force the value to 1 with hardly any discernible hitching or stutter in my tests.
What I did was force the games to use "High" CPU priority via a script (so they default to high CPU priority) and force the power management mode to "Prefer Max Performance".
Then I ran all the same tests that I did before -- G-Sync + V-Sync ON & G-Sync + V-Sync OFF using different values for MPRF / Flip Queue (done with 8700K + 1080 Ti at stock clocks).
I got the idea for the power mode setting in the control panel because I noticed that DOOM 2016 and Wolfenstein 2 did NOT exhibit stutter regardless of MPRF setting and both defaulted the power mode to "Prefer max perf".
So, not any definitive answer for what the overall "optimal" setting is or anything, but it seems that for several titles the issue was linked to "Optimal Power" being used as the default power option. I've tested this so far with Overwatch, Hunt Showdown, and Crysis 3, and Prefer Max Perf, along with forcing them to default to High CPU priority via script, seems to have almost entirely removed the frequent hitching/stutter I previously got playing those games.
Optimal Power is crap; I don't understand why it's still set as the default value after a clean driver installation. It always stuttered in most of my games, so I change it to Adaptive, which works OK. Btw, Doom and Wolf use OGL/Vulkan, so MPRF does not have any effect on them. OGL/Vulkan titles are affected by the "Maximum frames allowed" variable, which can be set through nvInspector. Or at least it could; I haven't updated my drivers in a long time (still on 399.24 + Win7).
Optimal is for laptops iirc, and pointless on desktop cards.
@jorimt Regarding capping fps at 141 (3 beneath refresh): is this also the correct value for in-game/in-engine limiters? My understanding is that the 3-beneath-refresh figure was determined using RTSS, but if in-game limiters are generally less accurate, I'd expect perhaps 4 frames would be preferred. I could be wrong, though. Thanks for your time.
@EerieEgg The input lag tests of CS:GO and Overwatch showed that -3 is enough for them. Other games might differ, of course. Test each game using the RTSS frame time graph and FPS indicator to see if the game is mostly not exceeding the cap. Frame time jitter is OK if it averages the -3 cap.
@RealNC Wolfenstein Youngblood's in-game fps limiter appears to only be accurate to within ±2 frames or so.
For example, when capped at 141, the fps actually seems to vary between 139 and 143 with an average of 141. In light of that, for this title I may drop the cap down to -4, though hopefully that's not overkill. I would think that provided the fps never reaches 144 (or 143.8, or whatever the exact refresh is), V-Sync wouldn't kick in, but I could be wrong there since I don't know if V-Sync kicks in a little early. So I'll probably just drop the cap down to 140 so the range will be 138-142.
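That reasoning boils down to picking the highest cap whose worst-case fps (cap plus limiter jitter) still stays some headroom below the refresh rate. A sketch of it (the ±2 jitter figure is just my Youngblood observation above; the headroom values are my assumptions):

```python
# Sketch: highest cap whose worst-case observed FPS (cap + limiter
# jitter) still stays `headroom` FPS below the refresh rate.
# Jitter and headroom values here are assumptions, not tested figures.

def safe_cap(refresh_hz, jitter, headroom):
    return refresh_hz - jitter - headroom

print(safe_cap(144, jitter=0, headroom=3))  # accurate limiter (RTSS): 141
print(safe_cap(144, jitter=2, headroom=2))  # in-game limiter with ~±2: 140
```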
So, after merely 7 years, pre-render 0 seems to have arrived with 436.02
It's not pre-render 0.
Well, it's the closest it practically gets, I guess. (I knew somebody would come and say something similar, even though I intended my post to be a little sarcastic.)
Don't worry about it janos, Astyanax just likes to pretend he works for Nvidia.
Like that time when he pretended to speak for Nvidia and stated that 30-bit color OpenGL app support is a Studio-only feature even though I've pointed out that Nvidia has stated otherwise?
Oh wait, that's on the other forum with another name.
Best way to deal with him is to be able to tell when he knows his stuff, and when he's just pulling things out of his behind & passing them off as "facts".
Sarcasm doesn't translate on the internet
If this were a true pre-render of 0, just about every OpenGL application would crash using it.
When I queried Andrew B about it, it was put to me that it would be segmented into the Studio driver and not provided to cards outside of it. The decision to add it to the GeForce package must be a recent thing.