Discussion in 'Videocards - NVIDIA GeForce Drivers Section' started by Bukkake, Sep 18, 2012.
It does not.
PS I can continue with Origin and UPlay.
Afaik 1ms is the finest resolution officially documented by Microsoft, though 0.5ms has always worked until now.
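For anyone who wants to reproduce that: the documented route to 1ms is timeBeginPeriod from winmm, while 0.5ms takes the undocumented NtSetTimerResolution export from ntdll (long-stable, but nothing Microsoft guarantees). A minimal sketch (MSVC):

    // Request 1 ms via the documented API, then 0.5 ms via the
    // undocumented ntdll export (resolution is in 100 ns units).
    #include <windows.h>
    #include <stdio.h>
    #pragma comment(lib, "winmm.lib")

    typedef LONG (NTAPI *NtSetTimerResolution_t)(ULONG Desired, BOOLEAN Set, PULONG Current);

    int main(void) {
        timeBeginPeriod(1);   // 1 ms: the smallest period MSDN documents

        NtSetTimerResolution_t NtSetTimerResolution =
            (NtSetTimerResolution_t)GetProcAddress(
                GetModuleHandleW(L"ntdll.dll"), "NtSetTimerResolution");
        ULONG cur = 0;
        if (NtSetTimerResolution)
            NtSetTimerResolution(5000, TRUE, &cur);   // 5000 * 100 ns = 0.5 ms
        printf("current resolution: %.4f ms\n", cur / 10000.0);

        Sleep(60000);         // hold the request so you can check with other tools
        timeEndPeriod(1);     // requests are released on process exit anyway
        return 0;
    }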
Indeed, newer Windows 10 versions set the timer resolution automatically; that was not the case in the past with Crysis 3 (Origin closed, it returns to 15ms after closing the game):
(Yes, the game has shitty frame time consistency in CPU bound grass scenes. It gets slightly better with increasing CPU core count/threads.)
To find out which process requested the current timer resolution, run "powercfg -energy" from an elevated prompt and then look in the generated report file (energy-report.html by default).
Something strange is going on in 20H1. Both ISLC and the old TimerTool report ~0.5ms when it's requested, but powercfg -energy says otherwise:
It's the same when Origin.exe requests 10000.
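If you don't trust either side, you can ask the kernel directly: NtQueryTimerResolution (another undocumented ntdll export, same 100 ns units, so Origin's 10000 is exactly 1ms) is presumably what ISLC and TimerTool read anyway. Quick sketch:

    // Print the kernel's own view of the timer resolution.
    // Values are 100 ns units: 156250 = 15.625 ms, 10000 = 1 ms, 5000 = 0.5 ms.
    #include <windows.h>
    #include <stdio.h>

    typedef LONG (NTAPI *NtQueryTimerResolution_t)(PULONG Min, PULONG Max, PULONG Cur);

    int main(void) {
        NtQueryTimerResolution_t NtQueryTimerResolution =
            (NtQueryTimerResolution_t)GetProcAddress(
                GetModuleHandleW(L"ntdll.dll"), "NtQueryTimerResolution");
        if (!NtQueryTimerResolution) return 1;

        ULONG min, max, cur;
        NtQueryTimerResolution(&min, &max, &cur);
        printf("coarsest %.4f ms, finest %.4f ms, current %.4f ms\n",
               min / 10000.0, max / 10000.0, cur / 10000.0);
        return 0;
    }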
I did that, and there really is no mention in the report of any process requesting a certain timer resolution (I started Crysis 3 without starting Origin first, via a workaround). So it must indeed be Windows itself (or the video driver?) that sets it.
The report is not correct, though, about the timer resolution supposedly in use after running the command; or else it samples it only once, before Crysis 3 gets started. It reports 15.6ms, which contradicts other tools and the actual performance.
Which means that at the time of the report there was no outstanding timer resolution request.
FWIW, for me the QPC frequency is the TSC divided by 1024.
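Easy to check yourself: print QueryPerformanceFrequency and estimate the TSC rate across a one-second Sleep; if the ratio lands right at 1024, you have the TSC/1024 QPC source. Rough sketch (MSVC, __rdtsc from intrin.h):

    #include <windows.h>
    #include <intrin.h>
    #include <stdio.h>

    int main(void) {
        LARGE_INTEGER qpf;
        QueryPerformanceFrequency(&qpf);

        // Count TSC ticks across roughly one second.
        unsigned __int64 t0 = __rdtsc();
        Sleep(1000);
        unsigned __int64 t1 = __rdtsc();
        double tsc_hz = (double)(t1 - t0);

        printf("QPC frequency: %lld Hz\n", (long long)qpf.QuadPart);
        printf("TSC estimate : %.0f Hz\n", tsc_hz);
        printf("TSC / QPF    : %.1f\n", tsc_hz / (double)qpf.QuadPart);
        return 0;
    }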
0.1ms works for me, but I'm not seeing any real-world advantages.
For me, "useplatformclock no" means the HPET counter is not used for QPC, but I still see HPET being used for timer and/or interrupt redirection, so the BIOS setting is still influential. Maybe it works differently on other hardware.
How are you setting a 0.1ms timer resolution, if I can ask?
I have set it to 1ms for a long time in ThrottleStop and everything is great. Recently I tried 0.5ms and in games there is no harm from it, while my graphics score in FS went up by around 50-100 points on my laptop. You say margin of error, but when I switched back to 1ms I lost those few points, and I can reproduce that any time. So for now I set it to 0.5ms via ISLC.
If it is possible to set it lower, like 0.1, I just want to know how, as ISLC's lowest is 0.5.
Just for the sake of making a few tests.
By changing the hard-coded value in ntoskrnl, saving it under a different name in the system32 folder, and using bcdedit /copy with the kernel option set on the new entry. Note, however, that even with self-signing and test mode I still need to use the Windows boot startup settings to disable driver signature enforcement in order to boot. That's fine for testing purposes, though.
I am not so good at doing this on my own.
But if you could write me a detailed guide, I could repeat it on my side on a spare SSD with W10 20H1 on it, which I made for testing different tweaks first, so I am safe from breaking my main W10 installation.
Do not do that.
So I finally wasted my time and watched the video by that FR33THY guy, of whom Smough seems to be some kind of advocate (at least this is my conclusion, given he seems to uncritically believe and repeat everything he states).
He claims that setting the tick rate to 0.5ms would be largely beneficial over 1ms, which is totally ridiculous. Of course he doesn't provide any actual examples or numbers (btw, this too reminds me an awful lot of Smough). FFS, just benchmark any actual game in CPU-bound scenarios, or ones close to it - zero difference vs. 1ms.
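If anyone actually wants numbers for what the tick rate changes at the OS level, timing Sleep(1) with QPC shows it directly - and also why 0.5ms vs. 1ms is irrelevant for games (the figures in the comment are ballpark assumptions, measure on your own box):

    // Average cost of Sleep(1) under the current timer resolution.
    // Ballpark: ~15.6 ms at the 15.625 ms default, ~2 ms at 1 ms, ~1.5 ms at
    // 0.5 ms - the 1 ms -> 0.5 ms step saves well under a millisecond per wait.
    #include <windows.h>
    #include <stdio.h>

    int main(void) {
        LARGE_INTEGER f, t0, t1;
        QueryPerformanceFrequency(&f);
        QueryPerformanceCounter(&t0);
        for (int i = 0; i < 100; i++)
            Sleep(1);
        QueryPerformanceCounter(&t1);
        printf("avg Sleep(1): %.3f ms\n",
               (t1.QuadPart - t0.QuadPart) * 1000.0 / f.QuadPart / 100.0);
        return 0;
    }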
Then that mouse responsiveness/precision claim regarding system timers: just another stupid joke. Try some online input lag tester and you will see that none of the proclaimed tweaking parameters improves either reaction time or precision at all. To see if there are other issues with mouse input, you can also just drag some windows around with mouse acceleration turned off - it should never stutter or jump even without any tweaks (best to choose a ModernUI window like the new system settings, as lots of other windows show minor stuttering when dragged across screen borders). The conclusion is that there are plenty of input events available to sufficiently feed your display. I really wonder what kind of crap hardware one needs to end up with improperly working USB input devices (or is it incompetence when configuring a PC?).
Well, the best part of course is that I need to completely turn off C-states when setting useplatformtick yes, or else I get stutter at vsynctester.com in Firefox and also noticeably worse graphs in Chrome (just in case: HPET enabled in BIOS only). Of course no game runs any better either.
10W more idle consumption for zero benefit vs. a perfect result without any retarded Windows "tweaks" to get "even" timer numbers. Amazing!
This thread is more like a running gag. I've tried out all these different tweaks. Some do cause some performance differences, whether good or bad depends upon your own system and apps/games you are running.
I've come to the conclusion from personal experience and reading literally thousands of conflicting claims from everyone who claims to be right: the default OS and motherboard settings are the best for 99% of users unless you have a specific use case otherwise.
How about VT-d? It's usually disabled by default, even on workstation chipsets, and I couldn't figure out whether it has any effect while not using any hypervisors/virtual machines. I read somewhere that Linux uses it (while running natively, without VMs) and that it is faster than the software alternative (tied to some default security functions), but I am not sure about Windows 10.
Run benchmarks and see that it isn't beneficial at all, apart from its dedicated use case.
How about: I'm talking specifically about the topic of this thread, not other settings? Of course there may be settings you need to make in your BIOS specific to your use. You know, like IDE to AHCI, for example. We are not discussing that, nor would I be so absurd as to claim the GLOBAL default BIOS settings are the best. Wow...
Ok, you guys seem like you're the most on top of latency and how to optimize it (even more than FR33THY). I'll get right to the point: what are the most significant tweaks I can do in Windows 10 1909 to lower my latency, and do I have to revert to an older version like 1709 to get an even better response time than the one I have today?
Nothing specific to any Windows version. The one-size-fits-all answer is things you likely already know about: tried-and-true methods.
Higher, more consistent FPS; a fast monitor refresh rate and response time. Not using Vsync (in competitive games), or ideally using Gsync/Freesync with an FPS limit ~3 FPS below your refresh rate. Try using NULL or Anti-Lag. And a good mouse with 1000Hz polling to top it off. That's pretty much in order of importance.
I'm sure I'm missing something there, but if you're on this thread and know who "FR33THY" is, then you're already deep in the rabbit hole. You know these things. If you still have issues, it's probably more appropriate to blame the specific scenario, blame the game, and Google around for fixes. It's easy to look in the wrong place on this stuff. There's someone just like you who probably blames their latency issue on the graphics driver version instead of the Windows version.
Now, that's the one-size-fits-all answer. Sometimes there are issues with Windows that require a fix for stutter or latency, like ISLC.
But when that's the case, it's pretty easy to find consensus on what works and what doesn't. Point is, you won't find your salvation from some random soul on YouTube. Same goes for a single voice on a tech forum.
Well, if you wanna go really deep, make sure you also read this here...
What do you guys think of this tool?
You can get it on that site, but I'm pretty sure this tool was already mentioned here...
I do find that tool informative; it also has a graph now for easier comparing, which is really nice...
It's a cool tool, but ultimately it was made to help with a question we already know the answer to: leave HPET off (the default).