Discussion in 'Videocards - NVIDIA GeForce Drivers Section' started by BlindBison, Feb 16, 2022.
numbers and pictures please
Did you read this?
The catch is that it's globally enabled. Disable it and you get your normal performance back; enable it and set the appropriate resolution in-game when needed, then disable it when you're done.
To put it nicely, that implementation isn't user-friendly at all.
Astyanax posted a workaround that reverts to the old scaling:
Windows Registry Editor Version 5.00
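The body of that .reg file didn't survive the quote above. As a hedged sketch only: the EnableGR535 value name appears later in this thread, but the key path under nvlddmkm below is my assumption, not the confirmed path from Astyanax's post, so verify against the original before importing anything:

```
Windows Registry Editor Version 5.00

; Hypothetical sketch: the value name (EnableGR535) is taken from this thread,
; but the key path is an assumption and may not match the actual workaround.
[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\nvlddmkm\FTS]
"EnableGR535"=dword:00000000
```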
Given these issues, why does Nvidia's development team hold back on offering the option to choose between the two methods?
I mean, both options are there, so why not let people pick one over the other?
And if they can't figure out how to assign them per game profile, a global toggle would still be better than nothing, you get what I'm saying?
You seem to have forgotten that you are already aware of this:
Windows Registry Editor Version 5.00
Yes, I saw it. Still doesn't excuse nvidia making these GPU generations slower through a driver update. Who the heck is supposed to know about undocumented, arcane registry tweaks like that? Most people will instead consider buying a new GPU. Make your GPU slower so you go buy a new one.
People with older GPUs depend on sharpening for modern games to get good frame rates with lower resolutions and not have the image look like a blurry mess. Let's make that use case slower then, shall we?
How convenient for nvidia.
Let's get one thing straight, the driver does NOT make your older GPU slower.
Enabling scaling on a global level should make you think twice as well.
But as I tried to point out to Nvidia's community manager above, in the hope that he forwards it to development, the way it is implemented, with no option to toggle between the two, is far from ideal.
In the long run, bad decisions can harm future sales. I'm not saying the alternatives are any better, but they are still there.
So no, it's not so convenient for Nvidia.
There's a chance I'm witnessing something along these lines.
The issue I'm seeing with the 980 Ti roughly translates to a loss of CPU performance on the desktop and a chunk of score reduction in the CPU-Z benchmark and other applications.
The 2080 Ti is not affected by this unswitchable CPU performance hit past the 472.12 driver.
The EnableGR535 setting doesn't revert the CPU performance hit on the desktop.
So you are saying that there is a CPU performance difference if you switch between 472.12 and 511.79 today on the same windows installation with the exact same patches/updates?
Nope, the underlying hardware is different, same version of the OS. The cards cannot be swapped due to the water cooling setup.
By just switching drivers between 472.12 and 511.79 right now on the same Windows installation (currently 19044.1526) tied to the 980 Ti system, there is a CPU performance hit!
The performance hit seems to appear from 496.13 onward.
Reverting to an older driver removes the CPU performance loss!
I meant only the 980 Ti system, where the only thing changed between benchmark runs is the driver, nothing else.
Aside from that, how many runs did you do to rule out margin of error, and how large a difference are we talking about?
There is a system-wide CPU performance hit on the 980 Ti from changing just the driver to anything past 472.12.
The CPU performance loss is noticeable by eye and reliably reproducible in anything that runs on the CPU.
The loss on the 980 Ti system with anything newer than 472 is almost never less than 40 points on the multi-core score; to give you a sense of scale, having Afterburner enabled on that system reduces the score by about 10.
Sometimes, past 472, on those CPU-Z runs I saw the score dip as low as when the card is actually disabled in Device Manager.
The system is very consistent at around 460/2280 on pretty much every run with 472 and older drivers.
On the 2080 Ti I did just three CPU-Z runs after the system settled, before reverting to a previous system image anyway.
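To judge whether a ~40-point drop on a ~2280 multi-core baseline is outside run-to-run noise, a quick sketch of the kind of check that settles it. The score lists below are made-up illustrations chosen to mirror the magnitudes quoted in this thread, not anyone's actual runs:

```python
from statistics import mean, stdev

# Hypothetical CPU-Z multi-core scores across repeated runs.
# Made-up numbers mirroring the thread: ~2280 baseline on 472.12,
# roughly a 40-point drop on a newer driver.
scores_472 = [2281, 2278, 2283, 2280, 2279]   # driver 472.12
scores_511 = [2240, 2236, 2242, 2238, 2241]   # driver 511.79

drop = mean(scores_472) - mean(scores_511)
noise = max(stdev(scores_472), stdev(scores_511))

print(f"mean drop: {drop:.1f} points ({100 * drop / mean(scores_472):.2f}%)")
print(f"run-to-run stdev: {noise:.2f} points")
# If the drop is several times the run-to-run stdev, it is not margin of error.
print("outside noise" if drop > 3 * noise else "within noise")
```

With numbers like these, a 40-point drop is around 1.8% of the baseline but many times larger than the spread between repeated runs, which is why the per-run consistency matters more than the raw percentage.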
40 is quite a noticeable drop with 2280 as the multi-core baseline score; is that with an i7 4770/4790K?
If true, that sucks. I have no idea what's possible on Nvidia's end (I would "hope" changes like what we're suggesting would be possible), but I appreciate you providing your rationale in any case.
I really hope what you're saying isn't true since it sucks crashing up against CPU limits and getting stuttering as a result. If this video is to be believed:
And, assuming I've understood it right, AMD seems to be using a hardware scheduler and typically nabs superior CPU-side performance in Vulkan/DX12 games right now, while Nvidia uses a software scheduler and does worse/has a higher CPU overhead in DX12/Vulkan titles (while having a better driver for DX11 ones). Again, I'm not an expert, so correct me if I'm wrong, but Hardware Unboxed was talking about this some time ago when their CPU benchmarks came out painting Nvidia in a bad light.
All of this is pure BS on part of HUB.
That's a relief to me as I have an Nvidia GPU, but how do we know that? At least at the time of the video their testing did show Vulkan/DX12 games with worse CPU performance on Nvidia cards and "if we assume" the video I linked above is true (perhaps it's not) then I could see why if AMD uses a hardware scheduler while Nvidia only has a software scheduler. My understanding of this is pretty limited though so I could definitely be wrong/missing something.
Considering it's a semi-permanent reduction of CPU capability without any user-facing toggle or explanation, something is definitely off. I agree.
It's a Skylake 6700 system, by the way, and I don't know whether this issue is related to the new NIS or to something else changed or added between 472.12 and 496.13.
In the one game I've tested, with a simple run the performance impact was negligible and within the error margin, though even there the CPU scored on the low side of the error margin in percentage of total time.
So hopefully it's an overlooked bug in the driver, since it's observable right on the desktop.
If we could disable sharpening in the NVCP with a registry edit while leaving the NIS upscaler on, and use something like ReShade instead, I think that could solve the performance drop, because apparently the problem is that NIS sharpening is still active even at 0%. Or just enable the old sharpening but keep NIS on; that could work, but I don't think it can be done.
That means Nvidia's CPU driver overhead problem is even worse now. Amazing. I also had a similar problem with a GTX 1060: the newer the driver, the more my CPU would suffer. This was with all "mitigations" disabled (Spectre and Meltdown, via InSpectre plus the registry keys to remove them, so any CPU slowdown from those is taken out of the question). The only drivers that worked normally were 446.14 and earlier; anything after that gave me a CPU hit for no reason (15 to 20% more usage in games), and even after two days on any of those drivers the issue remained, so it's not some sort of cache being built up.
Obviously, the newer your CPU, the less noticeable this is. How convenient, don't you think? At first a lot of us never believed the Spectre/Meltdown fiasco, so we got our performance back by disabling the "security" stuff that gimped our CPUs, and we were happy with our older but good CPUs. Now we have Nvidia drivers that make those CPUs work much harder for no logical reason (which means, in a way, we are getting CPU performance with a penalty attached), probably pushing people to upgrade. If this isn't a possible planned-obsolescence example, then I don't know what is.
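For context on the registry keys mentioned above: the Spectre/Meltdown mitigation toggles are usually done through the FeatureSettingsOverride values Microsoft documented for servers. A sketch of the commonly cited "disable both" variant follows; changing these weakens security, and the values should be verified against Microsoft's own guidance before use:

```
Windows Registry Editor Version 5.00

; Commonly cited values to disable the Spectre v2 / Meltdown mitigations.
; This weakens security; verify against Microsoft's guidance before applying.
[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Session Manager\Memory Management]
"FeatureSettingsOverride"=dword:00000003
"FeatureSettingsOverrideMask"=dword:00000003
```

A reboot is required for the change to take effect; tools like InSpectre flip the same values through a GUI.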
Around 457, NVIDIA did change something about the rendering pipeline. Anyway, this being a device driver, performance changes right on the desktop are the giveaway that something is wrong with it.
To me, since it comes without any feature enhancement, on my 980 Ti it is, until fixed, a direct sign that the driver is unsafe to use.
For all-around performance on the working driver with the 980 Ti, I'll just stick with ReShade and FidelityFX, take advantage of FSR where the application supports it, and tweak the profile with NVProfileInspector, like it has always been. I wasn't going to use the DCH versioning on those systems anyway, so it's not a big deal.
Is there a guide you used to disable those CPU-performance-ruining security patches (Spectre/Meltdown, etc.)? — EDIT: never mind, found one by Googling around: https://www.howtogeek.com/339559/ho...pectre-patches-from-slowing-down-your-pc/amp/ will try this later
People seem to hate on HUB (perhaps deservedly, I really don't know) for bringing the Nvidia CPU overhead to light, but it does seem to be a real problem going off the tests I've seen.