The truth about PRE-RENDERING 0?

Discussion in 'Videocards - NVIDIA GeForce Drivers Section' started by Tastic, Jul 16, 2012.

  1. Finnen

    Finnen Member

    Messages:
    18
    Likes Received:
    0
    GPU:
    GTX 970 4GB
    Because keeping regular V-Sync ON + applying a 30 FPS cap in RTSS still produces micro stuttering, sadly. But it really IS micro; I actually played like that for years and only noticed it quite recently. If you play with keyboard and mouse, it's possible you won't notice it at all. It only really becomes visible if you pan your camera around at a constant rate - easy to see if you play with a controller and just slightly tilt the right analog stick so the camera moves at one steady speed. Then you will notice that, from time to time, there is a kind of "hiccup", a tiny little stutter, even though your FPS stays rock solid. It's not consistent, it just happens sometimes.

    If you use 1/2-Vsync from NV Inspector, these stutters do not appear (with or without RTSS), but input lag is much higher. The RTSS cap is used in that case to fight the much larger input lag that 1/2-Vsync adds. It helps a bit, but the lag is still bigger than with regular V-Sync + a 30 FPS cap. From what I see it's a trade-off. Want super, extra smooth frames at 30 FPS? Use 1/2-Vsync but suffer from really big input lag (which can be slightly reduced with RTSS). Want good enough frame pacing with lower input lag at 30 FPS? Keep regular V-Sync and only add a 30 FPS cap with RTSS. There is some tiny little bit of micro stuttering, but input lag is smaller and the game feels more responsive.

    Exactly as I thought, haha. But I live in Poland; there are very rarely any sales on G-Sync monitors here, and the base price of the cheapest G-Sync monitor (AOC G2460PG) is higher than, I don't know, a GTX 1060 6GB for example. That monitor currently goes for 441 USD here and it's the cheapest option. A GTX 1060 6GB can already be bought for around 350 USD. These damn G-Sync monitors are extremely expensive :(
     
  2. janos666

    janos666 Ancient Guru

    Messages:
    1,645
    Likes Received:
    405
    GPU:
    MSI RTX3080 10Gb
    I see. Indeed, I never used an analog controller. And yeah, I never used a 30 fps cap at 60Hz for long. I just quickly tested it out of curiosity when this question popped up.

    That's why I use FastSync pretty much exclusively nowadays. It's "jittery" as hell compared to V-sync or G/Free-Sync at similar average framerates, but the lag is pretty much as low as it gets with a constant refresh rate (and no tearing).
    I couldn't stand the random frame duplications on "pulsing" displays (I was a hardcore plasma fan, I still hold on to my old Pioneer 9G as a relic), but it's not nearly as bad on "sample and hold" displays (every LCD or OLED screen without [proper] BFI [which is very rare]). So I used V-sync for PDPs and now FastSync with the 2017 LG OLED (there is no BFI option on this model).

    G-sync never was and probably never will be an option for me since I hate the whole LCD tech like a bloody epidemic and I prefer big screen sizes (~50"). My ticket into the FreeSync era could only be HDMI VRR on OLED TVs.
     
    Last edited: Jun 18, 2018
  3. Kidoki

    Kidoki Guest

    Messages:
    1
    Likes Received:
    0
    GPU:
    GTX 970
    So, to have a properly smooth 30 FPS lock I should lock it at 30.000 - 0.01 while using 1/2 VSYNC? Or do I have to consider my refresh rate while doing this 30 FPS lock? (Reported as 60.001 on TestUFO.)

    edit: Nvm, just saw the other page where this was discussed... So, I should lock it to 60.001/2 - 0.01? I saw some people mentioning the -0.007 value too, so should I just keep trying between 0.007 and 0.015 to find the best spot?
     
    Last edited: Jun 19, 2018
  4. RealNC

    RealNC Ancient Guru

    Messages:
    4,893
    Likes Received:
    3,168
    GPU:
    RTX 4070 Ti Super
    You need to use your actual refresh rate (and remember to let the test run for at least 3 minutes; yes, it takes that long). If it's really 60.001, then you need to cap to 60.001 / 2 - 0.01, which comes out to about 29.99.

    You can just use -0.01 and forget about it. The whole 0.007..0.015 thing is only there in case you opt for a custom refresh rate instead of an RTSS cap; since you can't use any Hz value you want in a custom refresh rate, any offset between 0.007Hz and 0.015Hz will work fine.
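
    For clarity, here's a tiny sketch of that arithmetic (Python, purely to illustrate; the helper name is made up, and the 60.001 reading is just the example from above):

    # Cap just below half the measured refresh rate, so that RTSS
    # (not V-sync) ends up pacing the frames.
    def half_refresh_cap(measured_hz: float, offset: float = 0.01) -> float:
        """FPS value to enter in RTSS for a half-refresh cap."""
        return measured_hz / 2 - offset

    print(f"{half_refresh_cap(60.001):.2f}")  # -> 29.99 (exactly 29.9905)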
     
    Kidoki likes this.

  5. Smough

    Smough Master Guru

    Messages:
    983
    Likes Received:
    302
    GPU:
    GTX 1660
    Only on rare occasions should you use the nVidia Inspector frame rate limiter; it's pretty lousy and rarely limits the frame rate properly, except in a few games such as Ghost Recon: Wildlands. Always use RTSS, or simply let Windows 10 fullscreen optimizations work out the frame times for you; generally, that makes games smooth. Keep in mind that some games don't respond to them at all and others stutter because of them, so enabling and disabling the option on the game's .exe is something you should try on any game that is giving you issues. Sometimes RTSS is not needed.

    Now, as for 30 fps, my recommendation is 37.50 fps; it's much smoother than 30 fps on PC. Download Custom Resolution Utility, go to the "Add" option, type "75.000" at the bottom, then apply and save the changes. Run "restart64.exe" so the program applies the 75 Hz mode to your monitor. Then set that refresh rate on the desktop or in-game and use 1/2 refresh rate v-sync. Done. You'll be getting 37.50 fps, which is a world beyond 30 fps on 60 Hz in terms of smoothness.

    Now go to "C:\Program Files (x86)\RivaTuner Statistics Server\Profiles", open the profile of any game you play at less than 60 fps and do the following: find the [Framerate] section and add the line LimitDenominator=1000 below it (leave the existing "Limit=" line alone for now).

    Then set the frame rate cap to 37.5 fps.
    It should look like this:

    [Framerate]
    LimitDenominator=1000
    Limit=37498
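
    Just to make the numbers explicit: with a non-zero LimitDenominator, the effective cap works out to Limit / LimitDenominator, so 37498 / 1000 is 37.498 fps, a hair under half of 75 Hz. A tiny Python sketch of that mapping (my own illustration, assuming that Limit/LimitDenominator relationship, not an RTSS tool):

    # Encode a fractional fps cap as RTSS [Framerate] fields,
    # assuming effective cap = Limit / LimitDenominator.
    def rtss_framerate_fields(target_fps: float, denominator: int = 1000) -> dict:
        return {"LimitDenominator": denominator,
                "Limit": round(target_fps * denominator)}

    print(rtss_framerate_fields(37.498))  # -> {'LimitDenominator': 1000, 'Limit': 37498}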

    Try the games and see how they turn out.

    Remember: For all of this to work properly, the nVidia scaling must be set to "GPU" and not "Display".
     
  6. Finnen

    Finnen Member

    Messages:
    18
    Likes Received:
    0
    GPU:
    GTX 970 4GB
    Smough, thanks for the suggestion, it's a good one and I've actually tried it before, since I was using CRU for some time. Unfortunately, I have a really old monitor (Samsung 2333SW) and it doesn't work at 75Hz - the whole screen becomes blurry. Might try this on my TV, though. Maybe it will make my TV's natural input lag more bearable, since 37.5 FPS should be more responsive.
     
  7. Smough

    Smough Master Guru

    Messages:
    983
    Likes Received:
    302
    GPU:
    GTX 1660
    Sucks that your monitor can't do those 75 Hz; when halved, the result is much more bearable than 30 fps on 60 Hz. Your TV should be able to pull this off if it's modern enough, try it out. And yes, 37.5 fps is much more responsive, and not only that: thanks to the v-sync, it's way smoother and better than, for example, using RTSS to limit the frame rate to 40 fps or 50 fps in 60 Hz mode without v-sync.
     
    Last edited: Jun 20, 2018
  8. Finnen

    Finnen Member

    Messages:
    18
    Likes Received:
    0
    GPU:
    GTX 970 4GB
    Yeah, it sucks, but I took your advice and turned it around a bit.

    Here's the problem I have with games these days. I don't have a terrible PC, so I usually get 50-60 FPS in games with the right settings (mostly medium with textures on high), except for a few games that are borderline unoptimized. 50+ FPS is good, but these 50-60 fluctuations are annoying since they introduce a lot of stutters. And there are a lot of games where I average 52-55-57 FPS - something that, as a 60Hz monitor owner, I call "the worst FPS" (it's still quite high, so it seems like a real waste to chop it down to 30, but on the other hand the stuttering is annoying and I want to improve frame pacing without playing on ultra low).

    So I thought to myself... if I can get 50+ FPS easily, then what if I actually downclocked my monitor to 50Hz and limited the FPS to "refresh rate - 0.01"? Using CRU I added a new resolution at 50Hz, checked the refresh rate with VSyncTester.com (it turned out to be 49.99Hz) and limited my FPS to 49.98 with RTSS. The results? It's wonderful, actually! Since I can keep 50 FPS all the time, all the annoying fluctuations are gone, along with the stuttering. And 50Hz doesn't feel choppy or slow. Also, since the monitor is running at an actual 50Hz, the frame pacing is perfect with the -0.01 tweak. It's actually very smooth! Sure, in theory it has increased latency (20ms frames compared to 16.67ms at 60Hz), but that difference is barely noticeable. It's definitely a lot better than playing on a 60Hz monitor with FPS fluctuating between 50 and 60.

    I think it's a great alternative for people who are on a tighter budget and want to improve the smoothness of their games. This seems like the best compromise among all the options available to a poor gamer like me, but obviously it all depends on the hardware, the average FPS and so on. But 50Hz/50FPS gaming is definitely better than "60Hz/30FPS" or "60Hz/50-60FPS", at least to me. The only downside is that, of course, you'd rather have your normal desktop apps running at 60Hz, so you have to manually switch to 50Hz before running the game and then manually go back to 60Hz after you stop playing (and it's advisable to do it in the Windows monitor settings instead of the Nvidia CP, since I noticed that NVCP settings are usually overwritten by games, while Windows settings are respected). Maybe there's an app that sits in the tray area and lets you quickly switch the refresh rate by right-clicking it or something (edit: there is, HRC - HotKey Resolution Changer, which switches resolutions and refresh rates via hotkeys or a tray context menu).
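
    If anyone wants to script that refresh rate switch instead of clicking through the Windows dialog, here's a minimal sketch with pywin32 (my own rough illustration, not what HRC does internally; it assumes the 50Hz mode added through CRU already exists):

    import win32api
    import win32con

    # Change the primary display's refresh rate via the Windows display settings.
    def set_refresh_rate(hz):
        devmode = win32api.EnumDisplaySettings(None, win32con.ENUM_CURRENT_SETTINGS)
        devmode.DisplayFrequency = hz
        devmode.Fields = win32con.DM_DISPLAYFREQUENCY
        # DISP_CHANGE_SUCCESSFUL (0) is returned if the mode exists and was applied.
        return win32api.ChangeDisplaySettings(devmode, 0) == win32con.DISP_CHANGE_SUCCESSFUL

    set_refresh_rate(50)   # before launching the game
    set_refresh_rate(60)   # back to the desktop default afterwards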
     
    Last edited: Jun 19, 2018
  9. janos666

    janos666 Ancient Guru

    Messages:
    1,645
    Likes Received:
    405
    GPU:
    MSI RTX3080 10Gb
    A reasonable amount of money can't magically solve all the issues.

    Once you've bought a faster VGA card, you'll feel like you're wasting the horsepower and wish to buy a display with a higher resolution, a higher refresh rate, or preferably both (if still possible), and then you have the same quality/performance optimization problem all over again.
    My biggest quality issue was aliasing (since MSAA is long dead and 99% of post-process AA solutions are a bad joke --- they are much more efficient at destroying all the sharp details across the picture than at smoothing the actual jagged edges, which might still remain jagged, just a little more blurry), and a higher native resolution is much more effective than super-sampling (it provides additional natural sharpness and detail while efficiently removing most of the aliasing, rather than just preserving the details while removing some aliasing), so I went for higher resolution (2K->4K). But now I have to deal with <60fps again in many cases.

    And some games are just awful. They look like a disaster at default settings (I hate the overly popular forceful application of heavy DoF, aggressive motion blur and similar post-process effects --- often undefeatable, but certainly ON by default on PC and usually forced on consoles) and run like crap (you just won't get stable 60+ fps on any PC unless you drop the quality settings back to the previous century), no matter what (in most cases, most of the "advanced" settings do pretty much nothing, at least on the performance side on relatively modern systems, because the "chain" has 1 or 2 bottlenecks and the rest is practically irrelevant --- but fiddling with those bottlenecks is often painful, like losing 50% perceived quality to gain 25% performance, so not a good deal unless you absolutely need that performance to make the game playable/bearable).

    IMO, it's a total disaster these days. Hardware is ever more expensive and content is designed in the worst ways. For example, you pay much more money for a display with lower inherent motion blur, and then they put aggressive, undefeatable motion blur effects into games. You pay more money for higher resolution displays, but they apply heavy depth of field (and similar) effects in games to make everything look like a blurry dream. Or they dump a load of film grain / Perlin noise onto the image to make it look like a cinema movie from the '90s (before Hollywood had the kind of high dynamic range / low noise sensors or celluloid film stock they have today --- back then it wasn't a choice but a necessity, yet they often emulate it today with artificial processing "just because", and still you are supposed to buy the high resolution display...). And when you have low fps (partially thanks to the cacophony of various questionable post-processing steps), they offer you VRR-capable displays to cope with it (but that's not a real solution, since low average fps still yields low motion resolution because VRR is incompatible with BFI / impulse display driving, and low average fps still yields a lower level of "control/aim" -> VRR is just a clever tool, not the holy grail of everything...).

    So, if you are a maximalist and "purist", you simply can't get it right on all fronts. (Of course not *everything* is *only* bad, but I feel like it's certainly very "mixed".)
     
  10. Finnen

    Finnen Member

    Messages:
    18
    Likes Received:
    0
    GPU:
    GTX 970 4GB
    I totally agree, especially about the cost of the hardware. I remember back in the day when I could upgrade my PC without breaking the bank. I usually spent up to ~630-800 PLN (170-220 USD) on a GPU and that lasted for years without causing any serious problems. I used a Radeon HD 4890 for such a long time and I paid like 629 PLN for that card. For CPUs I usually picked AMD. Oh boy, the AMD Phenom II X2 550 was SUCH a good performer that lasted for sooo long! And it cost like... 60 USD? Those times are long gone and will probably never come back :(

    And yeah, the performance in games is absolutely horrible. It's even more problematic since I mostly play open-world games, and they are the ones causing most of the problems, with an insane amount of stuttering. Games like Kingdom Come: Deliverance and Mass Effect: Andromeda reminded me that I would have to shell out an insane amount of money to remove the horrible stuttering and FPS drops (upgrading the CPU from Haswell to the latest i5 or i7 means a new MOBO and DDR4 RAM). Sure, these games were badly designed in the first place, but in the end, who cares? It is what it is - you take it or you leave it. Andromeda can still stutter when you're walking around your ship (the Tempest), and Kingdom Come has terrible stuttering on my CPU (i5-4440) when walking around the towns (locking the game to 30 FPS fixes that). I'm really afraid of the next big open-world RPGs like Dragon Age or Cyberpunk 2077 - I don't even know if I'll be able to hold a stable 30 FPS without stuttering anymore. As for quality, don't get me started on AA - I absolutely hate both FXAA and TAA, which make a blurry mess out of the screen, especially in Fallout 4 (face details are completely lost with temporal AA).

    And, as you said, adjusting the settings manually isn't as effective as it used to be. Back in the day you could customize the settings, drop one or two of the biggest offenders to High or Medium, and suddenly the game ran so "lightly" and smoothly. Today it feels like my PC is choking, transitions into cutscenes usually cause temporary stutter (probably because of the aforementioned pile of post-processing that gets activated every time you start a conversation with an NPC), and there's always some camera angle where your FPS just drops for no reason whatsoever, causing stuttering. Ehhhhhhh. Life was easier in the CRT era.

    All in all, that only makes me happy that I found these forums and applied some of the tweaks here. Tweaks like "1/2 Refresh Rate Vsync" or a "50Hz refresh rate in CRU" along with the "-0.01 fps lock" really work like a miracle for improving the smoothness of games on cheaper hardware. If only I had known earlier that there were methods to improve frame pacing... that would have saved me and my girlfriend a lot of nerves (she was devastated by the stutterfest in Andromeda on her i5-4460/GTX970, and she mostly bought her PC for Bioware games). It's a terrible situation for less knowledgeable people. I noticed recently that I can barely run any game without doing either some .INI tweaks, RTSS caps or VSync changes. Either it stutters, has low FPS, or has terrible input lag from forced mouse smoothing. As you said, it's a total disaster these days.
     

  11. janos666

    janos666 Ancient Guru

    Messages:
    1,645
    Likes Received:
    405
    GPU:
    MSI RTX3080 10Gb
    Off-topic
    Andromeda is a good polypathological test case. It prompted me to do unscheduled hardware upgrades on an unusually liberal budget, and eventually to give up hope of experiencing a game like Bioware used to make (think ME1 and DAO, or even just ME2 and DAI) again in the not too distant future (before everything comes full circle, as things usually do).

    Hahh. I think you might be negatively surprised (even coming from an already somewhat pessimistic view). Those games could be several years away, and especially C77 looks like something which intends to push the limits on many fronts, PQ included. And I think this will be a general trend for some years: high budget games will push for the best possible PQ (even if it's mostly just "eye candy" rather than actual pure detail/quality, or so-called "artistic value") to starve the low budget (but ambitious) indie games (which seem to be on the rise in popularity) to death with their 2010-ish graphics (or really horrible PQ/performance).

    As an example, the Torment: Tides of Numenera game is great (I enjoyed it almost as much as I enjoyed DAO back in the day, though my standards/expectations have probably risen since DAO was my first RPG experience), but we wouldn't have considered it nice looking in 2006 (10 years before its release), let alone today. On the other hand, Dreamfall: Chapters looked very pretty around its release date but ran terribly, especially on AMD cards.

    Yes, a transition from gameplay to real-time rendered cutscenes (not videos), often followed by a (theoretically seamless) on-the-fly change to a bunch of parameters. I think the new habit of changing the render resolution is one of the most offensive of these nowadays (in terms of smoothness), but a LOD change can also be both visible (if the detail level gradually changes in front of your eyes, sometimes in multiple steps over several seconds) and cause fps drops (a sudden change in the parameters can promote a lot of tasks which are normally handled in the background to being immediately necessary -- or rather already overdue by that moment).
     
    CrazyBaldhead likes this.
  12. Monchis

    Monchis Guest

    Messages:
    1,303
    Likes Received:
    36
    GPU:
    GTX 950
    I hadn't found a game with micro-stuttering fixable with this trick in a while, until the new Wreckfest racing game a couple of days ago: perfect smoothness once I set the pre-rendered frames to 1.
     
  13. janos666

    janos666 Ancient Guru

    Messages:
    1,645
    Likes Received:
    405
    GPU:
    MSI RTX3080 10Gb
    Edited: Sorry, needs more testing.
     
    Last edited: Jul 12, 2018
  14. KneehighPark

    KneehighPark Guest

    Messages:
    2
    Likes Received:
    1
    GPU:
    GTX 1050 Ti
    The information in this thread has been super useful. I've been playing older games in QHD/UHD on my budget rig hooked up to my TV, and got the RTSS limits figured out. Tests reported my TV's refresh rate as 60.002, so I capped RTSS to 59.990.

    Just had a couple of questions regarding Vsync and MPRF.

    1) Is it best to use Nvidia Control Panel Vsync, or in-game?

    2) What should MPRF be set at? I tried using "1", but I ended up getting more stutters and frame drops, even when my GPU wasn't pegged at 99 or 100% (when playing Arkham City). Will this setting vary on a game-by-game basis?

    3) Most of the time, I am hitting a locked 60, but rarely, I dip below, for a second, if that. Is that "alright"? Or is keeping a constant 60 fps mandatory for all of this to work?
     
    BuildeR2 likes this.
  15. RealNC

    RealNC Ancient Guru

    Messages:
    4,893
    Likes Received:
    3,168
    GPU:
    RTX 4070 Ti Super
    1) Shouldn't matter, unless the game has a bad vsync setting (like the Bethesda RPGs).

    2) If you always reach the cap, MPRF shouldn't have made a difference. Are you sure your monitor really is 60.002Hz? Maybe it's a browser bug? Try CRU to see what the EDID timings say.

    3) When you reach the FPS cap, it works. When you don't, it doesn't.
     

  16. janos666

    janos666 Ancient Guru

    Messages:
    1,645
    Likes Received:
    405
    GPU:
    MSI RTX3080 10Gb
    I'm sure some games override this parameter. See Frostbite 3 with RenderDevice.RenderAheadLimit: the default value might suggest this is a no-op (the Windows default or VGA driver override remains in effect), but I think the game still sets it to 2 by default (it can be changed on-the-fly from the developer console).
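
    For reference, the usual way people pin that variable down in Frostbite games is a user.cfg in the game folder (exact support varies by title, so take this as an example rather than a guarantee):

    RenderDevice.RenderAheadLimit 1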
     
  17. RealNC

    RealNC Ancient Guru

    Messages:
    4,893
    Likes Received:
    3,168
    GPU:
    RTX 4070 Ti Super
    It could also be that the NVCP setting can decrease it, but not increase it. If a game sets it to 2, but you set it to 1 in the NVCP, it's gonna be 1. But if a game sets it to 1 and you set it to 2 in the NVCP, it's still gonna be 1.

    I don't have this game, but in Forza Horizon 3 there's an actual in-game setting for this and you can set it to 1, 2 or 3. This would make it rather easy to test whether what I wrote above is true by running the game at 60Hz with 1/3 vsync (20 FPS). A two-frame difference means 100ms of input lag, which should be easy to tell.
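
    The arithmetic behind that estimate, as a quick sketch (Python, purely illustrative):

    # At 1/3 of 60 Hz the game runs at 20 FPS, i.e. 50 ms per frame,
    # so two extra queued frames add two full frame times of lag.
    refresh_hz, divider = 60, 3
    frame_time_ms = 1000 / (refresh_hz / divider)   # 50.0 ms
    extra_frames = 3 - 1                            # setting 3 vs setting 1
    print(extra_frames * frame_time_ms)             # -> 100.0 ms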
     
  18. 2mg

    2mg Member

    Messages:
    38
    Likes Received:
    1
    GPU:
    760
    Sorry if I'm necroing a post, but:

    1.) If Vsync isn't limiting the engine "behind the scenes", then why does the FPS (with possibly uneven frametimes) lock to your refresh rate, and why does GPU/CPU usage in fact drop?

    2.) Why doesn't the "fake" (aka DirectX's pre-rendered frames) Triple Buffer Vsync halve the FPS like DB Vsync does? A missing frame is a missing frame, no matter the number of buffers, no?

    3.) If we could call Nvidia's FastSync "real" Triple Buffered Vsync, why does it not lock the FPS like "fake" Triple Buffer Vsync does, since both of them, like DB Vsync, actually work "behind the scenes"?

    4.) Since DWM/the Desktop Compositor actually has real Triple Buffer Vsync, why doesn't it operate like the above-mentioned FastSync? Run a game in Windowed Fullscreen and you will get locked to your refresh rate with it, why?

    5.) Does DWM in Win7, besides adding input lag because of Vsync, add additional lag because it has to pass frames from the game to the DWM and then to the display? Did Win10's compositor improve upon this?
     
  19. RealNC

    RealNC Ancient Guru

    Messages:
    4,893
    Likes Received:
    3,168
    GPU:
    RTX 4070 Ti Super
    https://forums.guru3d.com/threads/the-truth-about-pre-rendering-0.365860/page-19#post-5494650

    It's about the game not being able to render the next frame while the current one is being scanned out, because in double buffering both buffers are used and the game has to wait for one of them to become available again. That's what causes the FPS halving. We already explained this when you asked about it on the Blur Busters forum.
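
    To make the halving concrete, here's a toy model in Python (my own simplified sketch of double-buffered vsync, not an exact description of any driver):

    import math

    REFRESH_HZ = 60
    INTERVAL_MS = 1000 / REFRESH_HZ            # ~16.67 ms between vblanks

    # The game can only start the next frame once the back buffer is
    # released, which with double buffering happens at the flip (vblank).
    def effective_fps(render_ms, frames=600):
        t = 0.0                                # when rendering of a frame starts
        flip = 0.0
        for _ in range(frames):
            done = t + render_ms               # back buffer finished
            flip = math.ceil(done / INTERVAL_MS) * INTERVAL_MS  # next vblank
            t = flip                           # buffer only freed at the flip
        return frames / (flip / 1000)

    print(round(effective_fps(15.0)))          # fits in one interval -> 60
    print(round(effective_fps(18.0)))          # just misses it       -> 30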

    Note that pre-rendered frames are NOT a triple buffer mechanism! Pre-rendered frames are frames that have NOT been rendered yet. In a double buffer setup, you have two rendered frames (one in the front buffer, one in the back buffer.) If you add one more back buffer, you have what we today call "triple buffer vsync." All three buffers contain rendered frames. The pre-rendered frames are something else. These are buffers containing the data that the GPU will later use to render frames.

    How the pre-rendered frames interact with double buffer vsync FPS halving is not something I have a clear idea about. I suspect they sometimes can prevent FPS halving due to the asynchronous nature of frame presentation in modern DirectX versions, but really, I could be talking out of my butt here.

    It seems fast sync uses a frame limiter to try and pace the game's FPS in a way that it produces frames at intervals that result in less microstutter. "Real" triple buffering on its own doesn't actually have an effect on FPS.

    Are you sure? If you use vsync off, windowed mode will not lock your FPS.

    Edit: Or it might in the latest W10 version (1803.) I haven't upgraded to that. I think I've seen people complaining about windowed mode now enforcing vsync in W10 1803? I'm still on version 17-something.

    DWM does add 1 frame of lag because of the compositing step, and that's on top of what vsync already adds (if you enable vsync). If you play with vsync off, then you just get the 1 frame of DWM lag.

    W10 does not improve on that, but it can turn itself off for some windowed fullscreen modes (DX12 Windows App Store games) and for "redirected" exclusive fullscreen modes. By default, and if the game is supported, W10 will use "fullscreen optimizations" which will use windowed mode even if you've set exclusive fullscreen in the game's settings. It will then disable DWM. This works as well as exclusive fullscreen but has the benefit of fast alt+tabbing and being able to see DWM overlays (like the volume bar) because DWM is enabled temporarily to display the overlay, and then disabled again once no longer needed. The "disable fullscreen optimizations" compatibility setting disables that behavior and you get exclusive fullscreen back.
     
    Last edited: Jul 20, 2018
  20. 2mg

    2mg Member

    Messages:
    38
    Likes Received:
    1
    GPU:
    760
    I still don't understand it this way:
    Assuming the GPU/CPU can render above 60fps at all times:
    When you fill all buffers, the engine and/or GPU goes into a wait state (you stop producing cars).
    You're outputting buffers every 16.6ms.
    Every 16.6ms, a buffer is freed, the game/GPU wakes up, and renders into the buffer in less than 16.6ms.
    But each time it does, it goes back to the wait state.
    So why would there even be a bottleneck, since
    if buffer=full then wait
    else
    if buffer=empty then write + goto wait?
    What goes out of the buffers goes out every 16.6ms. What goes in takes less than or equal to 16.6ms. The rest of the time the GPU is sleeping.


    1. But isn't it known that the common "Triple Buffering" in DirectX titles just means using "pre-rendered frames X"?
    2. Why this doesn't halve the FPS like DB Vsync does is beyond me.
    3. What is actually real TB Vsync is either FastSync, OpenGL, or the DWM/Compositor, no?


    I only noticed microstuttering, and the frames were uncapped though...


    Windowed modes, at least on Win7, with Vsync OFF in the app will not lock your FPS, but they will enforce DWM's Vsync (what FastSync does).


    But DWM automatically enforces Vsyncing too, so there's additional, although lesser, input lag.

    So unless the app has "redirected exclusive fullscreen" that will turn DWM off, it's better to use "disable fullscreen optimizations" to reduce input lag?
     
    Last edited: Jul 21, 2018
