Discussion in 'Videocards - NVIDIA GeForce Drivers Section' started by Tastic, Jul 16, 2012.
Post that in every RTSS thread until we get some response
I can easily predict the response you'll get by posting the same stuff in each thread.
Yeah same, no response.
Wow, finally people are paying attention to this. I've been talking about it for years and nobody actually believed it; I eventually put together a guide that barely got anyone's interest.
There are a few flaws in using Monitor Sync, which I do not recommend. For that reason this method should only ever be used with V-Sync, to avoid weird issues like brightness flicker or refresh rate inconsistencies.
I've put up a very detailed guide on Steam that I'm sure many would like to read: http://steamcommunity.com/sharedfiles/filedetails/?id=668808553
BTW, I am very serious about paying a bounty for adding predictive frame-rate capping to RTSS.
Tests confirm that this is a very worthwhile, no-compromises lag-reducing feature for short-frametime engines (especially emulators, locked-framerate console ports and games, and perfect jitterless/stutterless ULMB without the VSYNC ON monster, for that arcade/Nintendo buttersmooth CRT effect), and it should be added to RTSS so we don't have to use GeDoSaTo for that sort of thing.
EDIT: Even if you prefer not to accept any funds, I'll help out to save time: I can privately beta test for you to confirm whether your predictive/adaptive change worked. You add unannounced config options in a quick-and-dirty way, and I'll do fluidity and high-speed camera tests (Blur Busters equipment) to confirm that the feature successfully reduces lag before you publicly announce its existence.
It would be great if NVIDIA themselves could do this as some sort of "Low Latency VSYNC". They already have APIs to define custom timing parameters for GPU output, which could be used instead of having to do EDID overrides; here they would be used solely for accurate timing readings and for adjusting the framerate cap on the fly, so the user wouldn't need to do it themselves. This would handle Steps 2 and 3 in the Blur Busters guide "Low-Lag VSYNC for Common Monitors".
Obviously they would need to greatly improve their framerate capping implementation, but I don't see why they couldn't, other than a lack of will. It could be an extra service that runs on demand once the "Low Latency VSYNC" setting is enabled in the NVIDIA Control Panel and the driver detects a 3D application polling the GPU, the same way they can already detect games' executables using profiles, or the way NVIDIA ShadowPlay can detect and hook into 3D applications.
Once "Low Latency VSYNC" is enabled, it would turn on normal VSYNC, cap the game framerate below the refresh rate using the information polled through those APIs, account for big clock variances, and adjust in real time.
That would be Steps 1, 4 and 5 in the Blurbuster guide.
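A rough sketch of the cap arithmetic such a feature would automate (a hypothetical illustration only; this is not a real NVIDIA API):

```python
# Hypothetical sketch of the adaptive cap a "Low Latency VSYNC"
# feature would compute. The measured refresh rate would come from
# the driver's timing APIs mentioned above; this is an illustration,
# not real NVIDIA code.
MARGIN_HZ = 0.007  # cap this far below the measured refresh rate

def compute_cap(measured_refresh_hz: float) -> float:
    """Frame-rate cap slightly below refresh, so the game never fills
    the VSYNC frame queue (the source of the extra latency)."""
    return measured_refresh_hz - MARGIN_HZ

# e.g. a display measured at 60.007 Hz gets capped to 60.000 FPS; the
# driver would re-measure and re-cap on the fly as the clock drifts.
print(round(compute_cap(60.007), 3))  # prints 60.0
```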
(Missed this post. Sorry.)
The most recent example I remember is Witcher 3. There was slight microstutter in the 75-90FPS range (1440p on a 980Ti.) RTSS fixes that.
Another game affected by this was Elite: Dangerous, but this seems to have been fixed.
There's also CS:GO, but I don't use RTSS for it; slight microstutter is OK here since latency is more important than perfect fluidity. Most people don't use g-sync at all and just play v-sync off anyway, which obviously gives the biggest amount of microstutter, but gets you the least latency.
There's been other games too, but unfortunately I never kept a list. But it's not hard to tell on your own. If you see microstutter with the in-game limiter, try RTSS. You'll pay a small amount of latency for better motion fluidity in most cases. Also, the microstutter is usually very small and most of the time, people won't see it.
And if you encounter high CPU usage with RTSS, use the in-game FPS limiter instead, like I do for BF1.
The high CPU usage you're seeing is a false alarm. This is merely due to using a busy wait loop in order to guarantee high precision. Windows is reporting that the CPU core is being used 100% of the time, but that does not mean it is being 100% utilized during that time - far from it. This is why you don't see temperatures rise.
Unwinder explains it here:
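To illustrate the point, here's a generic sketch of the hybrid sleep + busy-wait technique (my own illustration of the concept, not RTSS's actual code):

```python
import time

# Generic sketch of a hybrid sleep + busy-wait frame limiter, to show
# why a busy-wait reads as "100% CPU" without doing real work; this
# is NOT RTSS's actual code. The final spin loop just reads the clock
# repeatedly, which the OS counts as full core usage even though the
# core does no meaningful work (hence no temperature rise).
def wait_until(deadline: float) -> None:
    while True:
        remaining = deadline - time.perf_counter()
        if remaining <= 0:
            return
        if remaining > 0.02:
            # Coarse sleep for most of the wait: near-zero CPU usage,
            # but only millisecond-level precision.
            time.sleep(remaining - 0.02)
        # Last stretch: busy-wait for sub-millisecond precision.
        # This spin is the "100% usage" the task manager shows.

frame_time = 1.0 / 60.0  # target 60 FPS pacing
deadline = time.perf_counter() + frame_time
wait_until(deadline)
print(time.perf_counter() >= deadline)  # True
```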
Yes, I know; they have informed me of the situation. But I also get FPS drops in BF1 because of that.
Cool post RealNC, thank you.
But how can I achieve smooth, low-lag gameplay in games that don't have an Exclusive Fullscreen mode and only work in windowed/borderless windowed?
I mean, as far as I know I cannot set pre-rendered frames to 1 in the NV Control Panel for windowed mode.
I set my monitor's refresh to 60.007 using CRU. If I turn VSYNC OFF in a game, Cuphead for example, the gameplay is 90% smooth and 10% very stuttery. When VSYNC is ON the picture becomes very smooth, but what about lag? The game may be using pre-rendered frames = 3 by default, I guess.
Sorry for my English, and thank you!
I just tried the RTSS method of limiting, and it breaks the Epic Game Launcher.
Exclude the Epic Game Launcher, Shift + "Add" on RTSS.
Thanks, that did the trick
Must say that limiting FPS with RTSS didn't feel smoother to me.
My monitor is 59.95 Hz, so I set it to 59.943 in RTSS, which caused a hiccup every now and then.
Did you disable the in-game V-Sync and enable V-Sync through NvCpl?
Doesn't work with Unreal Engine 4 games sadly, they ignore the NVCP.
I've been using this trick for over half a decade now, and there have been multiple numbers mentioned and methods for achieving this goal.
One thing I would like to get some feedback on, is this method that seems particularly tailored for Source engine games: http://steamcommunity.com/sharedfiles/filedetails/?id=129836524
It lists approximate amounts of time that need to be budgeted for input polling in the engine at a given refresh/frame rate, which is a useful calculation when capping FPS with VSYNC on. For instance, it lists ~7 ms as the necessary input polling time at 120 fps/Hz, and ~19 ms at 60 fps/Hz. It then recommends a specific framerate to cap the game to internally, or a value rounded to the nearest whole number for external framerate cappers, since those were not as granular as recent RTSS versions/betas back when the guide was written.
Any of you Gurus have some input on this, or think it is relevant/useful compared to just capping at .007 below refresh as is suggested here?
Is what's suggested in that guide the best method for Source games, since more information is generally known about that engine given its age/lineage? We may not have equally accurate information about the required input polling time at specific fps/Hz rates in newer/different engines, which would make the general rule of capping 0.007 below the refresh rate the best known option for those.
There is some very solid information in this thread for people interested in this method. I'm glad I stumbled upon it, and look forward to seeing the evolution of things like this, whether implemented directly in RTSS or into future game engines by developers.
EDIT: In the past, I've used this frame cap/vsync method along with ULMB on my monitor to get maximum smoothness with the minimum possible input lag in various games. Is this good practice, or is there some reason I'm not aware of that it should not be used in conjunction with ULMB? In some games (like the indie game Ziggurat), it has been necessary to use methods like this to fix inherent stuttering in the engine when running at high refresh rates/fps.
It's especially useful with ULMB, actually.
As for your other questions, I don't know. I somehow doubt you can get <0.01 FPS precision with in-game limiters, though. I would really only trust RTSS here. I was never able to get a stable tear line with the in-game limiters I tried (I don't claim to have tested every in-game limiter out there, mind you.) That's how you can tell whether a frame limiter is accurate enough: you disable vsync and cap to exactly your refresh rate. If the tear line doesn't move up/down at random, but stays relatively stable at one position on the screen, you've got an accurate frame limiter. If you then reduce the cap by ~0.008 FPS, you should see the tear line moving downwards very slowly. This downwards-moving tear line is actually what gives you the latency reduction once you activate vsync. If you can't achieve that effect with a limiter, it's not going to work well at preventing vsync backpressure (downwards-moving tear line = no vsync backpressure and thus less input lag; upwards-moving tear line = vsync backpressure and thus more input lag.)
If you can get the in-game limiter to produce this effect though, then you can use it.
Note: while trying to test this and produce this effect, it's important that you not be moving around in the game; loading assets and such will interfere, since even the slightest FPS fluctuation will move the tear line around.
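To put numbers on how slowly that tear line should move (my own back-of-envelope arithmetic, assuming a 60 Hz display and the ~0.008 FPS reduction mentioned above):

```python
# Beat-frequency arithmetic for the tear-line test: the tear line
# scrolls at the difference between refresh rate and frame cap.
# These numbers are my own illustration, assuming a 60 Hz display.
refresh_hz = 60.0
cap_fps = refresh_hz - 0.008          # cap ~0.008 FPS below refresh

beat_hz = refresh_hz - cap_fps        # screen heights scrolled per second
seconds_per_full_scroll = 1.0 / beat_hz
print(round(seconds_per_full_scroll))  # ~125 s for one full screen height
```

That slow drift is why the test needs a perfectly steady framerate: a fluctuation of even a hundredth of an FPS swamps the 0.008 Hz beat you're trying to observe.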
For 30 FPS games at 60 Hz, would a 29.993 cap work (the monitor runs at 60.00041 Hz), or is there something else that needs to be done?