Guru3D.com Forums

Guru3D.com Forums > Videocards > Videocards - NVIDIA GeForce Drivers Section
Videocards - NVIDIA GeForce Drivers Section: In this section you can discuss everything GeForce driver related. GeForce ForceWare and GeForce Experience drivers are for NVIDIA Quadro and all GeForce-based videocards.



Old
  (#76)
rewt
Maha Guru
 
Videocard: √
Processor: √
Mainboard: √
Memory: √
Soundcard: √
PSU: √
Default 07-20-2012, 15:09 | posts: 1,245 | Location: Americas

Quote:
Originally Posted by Raiga View Post
It's advisable to play with pre-render limit values of 1-2, because CPU-intensive games will otherwise spoil your consistent frame rate.
Yeah, and given that a value of 0 can harm FPS by as much as 25% in games like Skyrim, that's probably another reason Nvidia chose to remove it.
   
Reply With Quote
 
Old
  (#77)
Raiga
Maha Guru
 
Videocard: GPU
Processor: CPU
Mainboard: Chipset
Memory: RAM
Soundcard:
PSU: PSU
Default 07-20-2012, 17:18 | posts: 1,099

Quote:
Originally Posted by Prophet View Post
Quote:
Originally Posted by Raiga
It's advisable to play with pre-render limit values of 1-2, because CPU-intensive games will otherwise spoil your consistent frame rate.

Also, there isn't much loss with 1 or 2 frames ahead if the game is running at more than 30 FPS; that would translate into roughly 33.3 or 66.7 ms of added render latency, which is perfectly acceptable for most uses.
I have 140 fps consistently in the game of my choice atm. Nothing spoiled there.

Also, I cannot tell you how much I disagree with 34-70 ms of latency being acceptable. I can accept some lag in games where instant reactions don't matter a lot (MMORPGs, for example), but even there 70 ms would be a lot. In FPS games, which is what I play mostly, anything more than what's absolutely necessary is simply unacceptable. Also, I'd like you to confirm the numbers if possible.
Oh, sorry... I was mentioning 33 ms (for pre-render = 1) and 66 ms (for pre-render = 2) for anything running at 30 FPS.

If the frame rate is higher, then the delay from 1 or 2 pre-rendered frames is also a lot less: at 140 FPS, pre-render 1 and 2 would be around 7 ms and 14 ms,

because ~7 ms (round figure) is the cost to render each frame on your GPU for it to be 140 FPS.
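To put those numbers in one place, here is a minimal sketch of the arithmetic (the helper name is illustrative, not from any driver API):

[code]
#include <cstdio>

// Extra latency added by a render-ahead queue of N frames at a steady frame
// rate is roughly N frame-times (1000 / FPS milliseconds each).
static double queue_latency_ms(int prerender_limit, double fps)
{
    return prerender_limit * 1000.0 / fps;
}

int main()
{
    std::printf("30 FPS,  limit 1: %.1f ms\n", queue_latency_ms(1, 30.0));   // ~33.3 ms
    std::printf("30 FPS,  limit 2: %.1f ms\n", queue_latency_ms(2, 30.0));   // ~66.7 ms
    std::printf("140 FPS, limit 1: %.1f ms\n", queue_latency_ms(1, 140.0));  // ~7.1 ms
    std::printf("140 FPS, limit 2: %.1f ms\n", queue_latency_ms(2, 140.0));  // ~14.3 ms
    return 0;
}
[/code]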

-----
Edit
Anyway... again, it's actually counterproductive to completely remove the pre-render limit.

By any standard it should be at least 1 frame, because the CPU-side calculations for games are not clamped to a certain rate. If they were, you would never need a pre-render limit.

Edit 2
But there could be another way... if the game calculation and the render push to the GPU are decoupled (I don't know the actual terminology).

Even if the game state (physics, engine, etc.) isn't updated in a frame, the CPU should still push the frame to the GPU and re-render the un-updated game state on screen.

So in a nutshell, you could have:

(game state/calculation) -> updated whenever possible -> (game render pool) -> push every frame (e.g. 60/s) -> GPU renders

In this case, GPU utilization will also be high.

Or is it already like that? <- welp, I am completely unfamiliar with this.

Edit 3
With the above, the inputs related to the camera view could be tied to that GPU rendering with priority (with or without the un-updated state), which would give you smooth camera motion even if the game (worldspace) calculations aren't updated.

So you don't get dodgy camera movements.
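A minimal sketch of what Edits 2 and 3 describe - a fixed-timestep simulation with the camera sampled on every rendered frame. All names are illustrative; this is not taken from any particular engine:

[code]
#include <chrono>

struct GameState { /* physics, AI, world data ... */ };
struct Camera    { /* view angles driven by mouse input ... */ };

// Stubs standing in for real engine code.
static void simulate(GameState&, double /*dt*/)     { /* advance the world */ }
static void pollCameraInput(Camera&)                { /* read the mouse, update view angles */ }
static void render(const GameState&, const Camera&) { /* push draw calls to the GPU */ }

int main()
{
    const double dt = 1.0 / 60.0;   // fixed simulation step
    double accumulator = 0.0;
    GameState state;
    Camera camera;
    auto previous = std::chrono::steady_clock::now();

    for (int frame = 0; frame < 100000; ++frame)   // main loop (bounded for the sketch)
    {
        const auto now = std::chrono::steady_clock::now();
        accumulator += std::chrono::duration<double>(now - previous).count();
        previous = now;

        while (accumulator >= dt) {   // catch the world up; may run zero times this frame
            simulate(state, dt);
            accumulator -= dt;
        }

        pollCameraInput(camera);      // camera stays responsive every rendered frame,
        render(state, camera);        // even when the world state was not updated
    }
    return 0;
}
[/code]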

(I apologize for the many edits "-_- )

Last edited by Raiga; 07-20-2012 at 17:44.
   
Reply With Quote
Old
  (#78)
rewt
Maha Guru
 
Videocard: √
Processor: √
Mainboard: √
Memory: √
Soundcard: √
PSU: √
Default 07-20-2012, 17:31 | posts: 1,245 | Location: Americas

The command queue also serves to minimize those costly user/kernel mode transitions. It's generally just a bad idea to disable it, even if the system is able to pump out a huge number of frames. Like Raiga said, the higher the FPS, the lower the latency caused by pre-rendering.
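For what it's worth, D3D9Ex and DXGI also expose an application-side cap on the same queue; a minimal sketch, assuming a device that was already created elsewhere as IDirect3DDevice9Ex (the control panel setting is the driver-side counterpart of this per-application call):

[code]
#include <d3d9.h>

// Sketch only: caps the application's own render-ahead queue at one frame.
void capRenderQueue(IDirect3DDevice9Ex* device)
{
    // The driver still batches command buffers internally, so user/kernel
    // mode transitions stay amortized even with a short queue.
    device->SetMaximumFrameLatency(1);
}
[/code]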
   
Reply With Quote
Old
  (#79)
Prophet
Master Guru
 
Prophet's Avatar
 
Videocard: Msi 680 Gtx Twin Frozr
Processor: Intel Sb@4.7
Mainboard: Asus P8Z68V Progen3
Memory: 12 Gb Kingston
Soundcard: Asus Essence STX|Akg k701
PSU: Corsair 1200w
Default 07-20-2012, 18:28 | posts: 776 | Location: Heaven

Quote:
Originally Posted by Raiga View Post
Oh, sorry... I was mentioning 33 ms (for pre-render = 1) and 66 ms (for pre-render = 2) for anything running at 30 FPS. If the frame rate is higher, then the delay from 1 or 2 pre-rendered frames is also a lot less: at 140 FPS, pre-render 1 and 2 would be around 7 ms and 14 ms, because ~7 ms (round figure) is the cost to render each frame on your GPU for it to be 140 FPS. [...]
Do you have anything to support your claims of how long it takes to calculate a frame? Not saying you are wrong; that's pretty consistent with how I understand it and with my experience in gaming. But still, do you have some reliable source?

My CPU/GPU utilization is pretty high, about as much as I expect from a subpar game engine, the CPU utilization included. What I mean by this is that my hardware completely outpaces the game engine / settings I'm currently using (BFBC2, i.e. Frostbite 1.5?, with pretty much all the lowest settings and the lowest possible screen resolution before it breaks out of the 'schweden' borders).

I'll see if my DPC and CPU/GPU utilization change when changing pre-rendered frames as well. I suspect the know-it-all rewt is yet again just using big words.

The camera doesn't feel or react dodgy when I set it to 0. It just feels laggy when I set it to 1 or higher. I have only tested this thoroughly with BFBC2 though; I don't know how other games would react.

I guess most would say 'OK, 7 ms is nothing to worry about', but I feel it and it affects my aim. Heck, I have even rather extensively tested pretty much all the Intel INFs because they give me different DPCs (and different stability in mouse rate).

So, how do we get John Carmack in here to pitch in?

Last edited by Prophet; 07-20-2012 at 18:32.
   
Reply With Quote
 
Old
  (#80)
Raiga
Maha Guru
 
Videocard: GPU
Processor: CPU
Mainboard: Chipset
Memory: RAM
Soundcard:
PSU: PSU
Default 07-20-2012, 18:56 | posts: 1,099

Quote:
Originally Posted by Prophet View Post
The camera doesn't feel or react dodgy when I set it to 0. It just feels laggy when I set it to 1 or higher. I have only tested this thoroughly with BFBC2 though; I don't know how other games would react.
By "dodgy" I meant the whole game's frames when your FPS is 30, which means the camera turning is also only updated at 30 ticks per second.

Quote:
Originally Posted by Prophet View Post
I guess most would say 'OK, 7 ms is nothing to worry about', but I feel it and it affects my aim. Heck, I have even rather extensively tested pretty much all the Intel INFs because they give me different DPCs (and different stability in mouse rate).
Lol... how absurd. Are you absolutely sure about 7 ms affecting your aim?

Try -> http://www.humanbenchmark.com/tests/...time/index.php

You might just be under the false impression that you can predict target movements in the game and react accordingly, rather than actually being fast. But in a real sense, you are not.

Last edited by Raiga; 07-20-2012 at 19:19.
   
Reply With Quote
Old
  (#81)
tweakpower
Banned
 
Videocard: MSI HD 6770
Processor: FX-4100 4.0Ghz
Mainboard: MSI 970A-G46
Memory: HuperX 2x4GB PC12800
Soundcard: Realtek Onboard
PSU: LC-Power 600W
Default 07-20-2012, 20:02 | posts: 932 | Location: Serbia

Quote:
Originally Posted by snowdweller View Post
Well, when you're playing games like Solitaire, then yes, 300+ FPS probably won't give tearing, as the game is sending out a lot more frames than the refresh rate, hence you will see a complete picture 99% of the time. Compare that to, say, Crysis at 150 FPS on a 120 Hz screen. Did I ever use a CRT? The answer is yes; unfortunately I had a 21" which was not fun to carry around to LANs.
It does not matter what game you play, as long as you have a high refresh rate. Other than that, you are right, it's not easy to carry. Also, when you get some insane FPS in some games, like 1000, there is still no tearing, so that's good enough for me.

Quote:
Originally Posted by Brendruis View Post
There is tearing on CRTs; it isn't unique to LCDs. It's just that on a CRT you can raise the refresh rate higher, so it is much less noticeable. You can get the same effect by buying a 120 Hz LCD these days.
Well, the tearing problem first happened on CRTs, and that's logical. But as long as you have a high refresh rate it is not a problem; it should not matter whether it is an LCD or a CRT.
   
Reply With Quote
Old
  (#82)
visine
Master Guru
 
Videocard: Gigabyte GTX 970 G1
Processor: i7-2600K @ 4.5GHz
Mainboard: Gigabyte GA-P67A-UD4-B3
Memory: Corsair Vengeance 16GB
Soundcard: Realtek ALC889
PSU: Corsair HX 850W
Default 07-25-2012, 23:30 | posts: 410 | Location: Norway

So can someone clear things up? Is it best to have 1 if you want as little "flickering" as possible? I haven't really bothered to change anything in the NVIDIA Control Panel except the texture quality, from "quality" to "high quality".
   
Reply With Quote
Old
  (#83)
rewt
Maha Guru
 
Videocard: √
Processor: √
Mainboard: √
Memory: √
Soundcard: √
PSU: √
Default 07-26-2012, 01:28 | posts: 1,245 | Location: Americas

What is left to clear up, and what is this "flickering" you speak of?

BTW, there is next to zero visual difference between Q and HQ. High Quality mode was introduced for purists long ago when older hardware had some inherent flaws in texture filtering.[/offtopic]

Last edited by rewt; 07-26-2012 at 02:01.
   
Reply With Quote
Old
  (#84)
-Tj-
Ancient Guru
 
-Tj-'s Avatar
 
Videocard: ZOTAC GTX980Ti Amp!Omega
Processor: Intel i7 4770K OC 4.7GHz
Mainboard: ASUS Z87 Deluxe 2103
Memory: Gskill TridX 16GB 2400MHz
Soundcard: X-Fi Titanium HD @Bose A5
PSU: Nitro88+ 650W 52A
Default 07-26-2012, 02:32 | posts: 13,795 | Location: Proxima \/82

^
It's frame tearing.


Quote:
Originally Posted by visine View Post
So can someone clear things up? Is it best to have 1 if you want as little "flickering" as possible? I haven't really bothered to change anything in the NVIDIA Control Panel except the texture quality, from "quality" to "high quality".

Depends on the game and the frame rate cap; sometimes 1-2 (both look the same), sometimes the default 3.

For example, in COD4 (125 fps cap) I see it less with 3 frames vs. 1-2, but then there is this slight lag - barely noticeable, though, at a higher mouse DPI.

Last edited by -Tj-; 07-26-2012 at 02:35.
   
Reply With Quote
Old
  (#85)
IcE
Don Snow
 
IcE's Avatar
 
Videocard: Gigabyte 970 G1
Processor: Intel i7 4790K
Mainboard: Maximus VII Hero
Memory: 16GB Vengeance Pro 2400
Soundcard: Aune T1 + CAL
PSU: EVGA SuperNova 750 G2
Default 07-26-2012, 02:33 | posts: 10,640 | Location: Toledo

Quote:
Originally Posted by rewt View Post
What is left to clear up, and what is this "flickering" you speak of?

BTW, there is next to zero visual difference between Q and HQ. High Quality mode was introduced for purists long ago when older hardware had some inherent flaws in texture filtering.[/offtopic]
I think what he means is what is the "best" render ahead setting. Application controlled, 2, 1, etc.
   
Reply With Quote
Old
  (#86)
visine
Master Guru
 
Videocard: Gigabyte GTX 970 G1
Processor: i7-2600K @ 4.5GHz
Mainboard: Gigabyte GA-P67A-UD4-B3
Memory: Corsair Vengeance 16GB
Soundcard: Realtek ALC889
PSU: Corsair HX 850W
Default 07-26-2012, 12:15 | posts: 410 | Location: Norway

That's pretty much what I meant, yes. I usually just keep it simple and leave it at the default; that's what I do with pretty much all the settings in the control panel. Anyway, will setting pre-render to "1" give less tearing?
   
Reply With Quote
Old
  (#87)
Sajittarius
Master Guru
 
Videocard: Gigabyte GTX980Ti
Processor: Core I7 4790k
Mainboard: ASUS MAXIMUS VII HERO
Memory: 16GB Crucial Ballistix
Soundcard: X-Fi Titanium HD
PSU: 850W Corsair
Default 07-26-2012, 13:08 | posts: 281 | Location: New York, USA

Quote:
Originally Posted by Raiga View Post
By "dodgy" I meant the whole game's frames when your FPS is 30, which means the camera turning is also only updated at 30 ticks per second.

Lol... how absurd. Are you absolutely sure about 7 ms affecting your aim?

Try -> http://www.humanbenchmark.com/tests/...time/index.php

You might just be under the false impression that you can predict target movements in the game and react accordingly, rather than actually being fast. But in a real sense, you are not.
Some people are just more sensitive to the lag, lol. The EVO fighting game tournament (Street Fighter etc.) chooses its monitors based on their lag. Also, if you practice that humanbenchmark test you get better at predicting (which is what happens to people who play the same game regularly).

Which is another thing: say the monitor has an input lag of 10-15 ms and you add ~7 ms for the pre-rendered frame; that pushes you over 16 ms, which is one frame at 60 fps. A lot of people would notice a frame of input lag. I definitely would. One of my friends plays Street Fighter 4 on a 60" HDTV and the lag doesn't bother him, but a bunch of us hate the thing, lol.
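Putting the post's own figures into a quick back-of-the-envelope calculation (the numbers are the ones quoted above, not measurements):

[code]
#include <cstdio>

int main()
{
    const double display_lag_ms   = 12.5;           // "10-15 ms" monitor input lag (midpoint)
    const double prerender_ms     = 7.0;            // one queued frame at ~140 FPS
    const double frame_at_60hz_ms = 1000.0 / 60.0;  // ~16.7 ms, i.e. one 60 Hz frame

    const double total_ms = display_lag_ms + prerender_ms;
    std::printf("%.1f ms total = %.2f frames at 60 Hz\n",
                total_ms, total_ms / frame_at_60hz_ms);   // ~19.5 ms, just over one frame
    return 0;
}
[/code]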
   
Reply With Quote
Old
  (#88)
rewt
Maha Guru
 
Videocard: √
Processor: √
Mainboard: √
Memory: √
Soundcard: √
PSU: √
Default 07-26-2012, 20:13 | posts: 1,245 | Location: Americas

Quote:
Originally Posted by visine View Post
Anyway, will setting pre-render to "1" give less tearing?
Yes, no, maybe. Tearing is a function of frame rate and refresh rate, and pre-rendering can have an effect on frame rate.

Tearing (or "flickering") is not caused by pre-rendering. Tearing is caused by the display updating the screen out of sync with the video card.
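A rough illustration of that relationship - a simplification that ignores the vblank interval, not a measurement:

[code]
#include <cstdio>

int main()
{
    // With vsync off, nearly every buffer swap lands somewhere inside a scanout,
    // so on average roughly fps / refresh tear lines appear per displayed refresh.
    const double refresh_hz = 120.0;
    const double frame_rates[] = { 60.0, 150.0, 300.0, 1000.0 };
    for (double fps : frame_rates)
        std::printf("%6.0f FPS @ %.0f Hz -> ~%.1f tears per refresh\n",
                    fps, refresh_hz, fps / refresh_hz);
    return 0;
}
[/code]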

Quote:
Originally Posted by IcE View Post
I think what he means is what is the "best" render ahead setting. Application controlled, 2, 1, etc.
There is no "best" setting in my opinion. The value I prefer often differs by application.

Last edited by rewt; 07-26-2012 at 20:26.
   
Reply With Quote
Old
  (#89)
visine
Master Guru
 
Videocard: Gigabyte GTX 970 G1
Processor: i7-2600K @ 4.5GHz
Mainboard: Gigabyte GA-P67A-UD4-B3
Memory: Corsair Vengeance 16GB
Soundcard: Realtek ALC889
PSU: Corsair HX 850W
Default 07-27-2012, 12:38 | posts: 410 | Location: Norway

I usually play at 120 Hz in every single game, and most of the time I have vsync off. I have also never bothered to touch the settings in the control panel. I tried the new FXAA option, but in games the text can get "blurry" or uglier than it usually is. But will a pre-render value of "1" reduce input lag more than, for example, 2, 3 or the default?
   
Reply With Quote
Old
  (#90)
Prophet
Master Guru
 
Prophet's Avatar
 
Videocard: Msi 680 Gtx Twin Frozr
Processor: Intel Sb@4.7
Mainboard: Asus P8Z68V Progen3
Memory: 12 Gb Kingston
Soundcard: Asus Essence STX|Akg k701
PSU: Corsair 1200w
Default 07-27-2012, 13:02 | posts: 776 | Location: Heaven

Quote:
Originally Posted by visine View Post
I usually play at 120 Hz in every single game, and most of the time I have vsync off. I have also never bothered to touch the settings in the control panel. I tried the new FXAA option, but in games the text can get "blurry" or uglier than it usually is. But will a pre-render value of "1" reduce input lag more than, for example, 2, 3 or the default?
Yes. You can try smaa instead of fxaa. http://mrhaandi.blogspot.nl/p/injectsmaa.html
   
Reply With Quote
Old
  (#91)
ManuelG
NVIDIA Rep
 
ManuelG's Avatar
 
Videocard: Geforce GTX TitanX Pascal
Processor: Intel Core i7 5960X
Mainboard: Alienware Area 51 R2
Memory: 32GB DDR4
Soundcard: Creative Sound Blaster Zx
PSU: Alienware 1200W
Default 07-28-2012, 01:29 | posts: 541 | Location: Santa Clara, California

Just want to follow up on my previous comment. The pre-render limit of 0 was previously supported on XP only. We changed the NVIDIA Control Panel setting so that it starts at 1 rather than 0, but a setting of 1 should behave the same as 0 did before. The 0 value was removed because it was redundant.
   
Reply With Quote
Old
  (#92)
rewt
Maha Guru
 
Videocard: √
Processor: √
Mainboard: √
Memory: √
Soundcard: √
PSU: √
Default 07-28-2012, 02:51 | posts: 1,245 | Location: Americas

I always told people that XP does not suffer as much input lag. However, your driver team has removed zero, which was indeed a valid setting for Direct3D. That's what some people are complaining about.

Quote:
Originally Posted by ManuelG View Post
but a setting of 1 should behave the same as 0 did before.
But it doesn't. A setting of zero behaved more like tools such as D3D Antilag, flushing the render queue for every frame. This is not something like 5 milliseconds that people who think they're superhuman claim to detect; this is something that can be proven and documented (using FPS/performance graphs, for example, which I will leave up to the complainers to provide if it behooves them).

Thanks for stopping by.

Last edited by rewt; 07-28-2012 at 05:12.
   
Reply With Quote
Old
  (#93)
visine
Master Guru
 
Videocard: Gigabyte GTX 970 G1
Processor: i7-2600K @ 4.5GHz
Mainboard: Gigabyte GA-P67A-UD4-B3
Memory: Corsair Vengeance 16GB
Soundcard: Realtek ALC889
PSU: Corsair HX 850W
Default 07-28-2012, 16:39 | posts: 410 | Location: Norway

Quote:
Originally Posted by ManuelG View Post
Just want to follow up on my previous comment. The pre-render limit of 0 was previously supported on XP only. We changed the NVIDIA Control Panel setting so that it starts at 1 rather than 0, but a setting of 1 should behave the same as 0 did before. The 0 value was removed because it was redundant.


So can you explain the difference between 1, 2 and 3, and what it does? Will it reduce input lag, and will it decrease FPS? What does it really do?
   
Reply With Quote
Old
  (#94)
rewt
Maha Guru
 
Videocard: √
Processor: √
Mainboard: √
Memory: √
Soundcard: √
PSU: √
Default 07-28-2012, 16:56 | posts: 1,245 | Location: Americas

Lower values help decrease input lag. Higher values help increase FPS and "smoothness". That's all there is to it.

For more info hover your mouse cursor over the setting in the control panel and read the help tip.

Last edited by rewt; 07-28-2012 at 16:59.
   
Reply With Quote
Old
  (#95)
visine
Master Guru
 
Videocard: Gigabyte GTX 970 G1
Processor: i7-2600K @ 4.5GHz
Mainboard: Gigabyte GA-P67A-UD4-B3
Memory: Corsair Vengeance 16GB
Soundcard: Realtek ALC889
PSU: Corsair HX 850W
Default 07-30-2012, 16:29 | posts: 410 | Location: Norway

Thanks for clearing that up. I guess I'd rather have more FPS and smoothness, considering I own a BenQ 120 Hz, 2 ms monitor with reduced input lag.
   
Reply With Quote
Old
  (#96)
gx-x
Maha Guru
 
gx-x's Avatar
 
Videocard: MSI 780Ti Gaming
Processor: intel i5 3570K
Mainboard: ASRock Extreme4 Gen3
Memory: patriot 4x2GB @1333 ddr3
Soundcard: Yamaha RX-V550 w/!JBL
PSU: tT SmartSE 530W
Default 08-04-2012, 17:15 | posts: 946 | Location: Serbia

Quote:
Originally Posted by rewt View Post
I always told people that XP does not suffer as much input lag. However, your driver team has removed zero, which was indeed a valid setting for Direct3D. That's what some people are complaining about.



But it doesn't. A setting of zero behaved more like tools such as D3D Antilag, flushing the render queue for every frame. This is not something like 5 milliseconds that people who think they're superhuman claim to detect; this is something that can be proven and documented (using FPS/performance graphs, for example, which I will leave up to the complainers to provide if it behooves them).

Thanks for stopping by.

So, basically you are saying that you know better than nV engineers do. Kudos to you mate...
   
Reply With Quote
Old
  (#97)
rewt
Maha Guru
 
Videocard: √
Processor: √
Mainboard: √
Memory: √
Soundcard: √
PSU: √
Default 08-04-2012, 19:44 | posts: 1,245 | Location: Americas

Kudos

All they need to do is compare the r296 and r300 drivers with a pre-render limit of 1, then compare r296 again with a pre-render limit of 0. In my trials there is a notable performance difference between 0 and 1, which proves the settings are not redundant.
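For anyone who wants to produce those graphs, here is a minimal per-frame timer sketch; FrameTimeLog and measureFrame() are hypothetical names, not part of any API. Call it once per presented frame for each driver/setting combination and plot the resulting CSVs:

[code]
#include <chrono>
#include <fstream>

// Hypothetical helper: logs the elapsed time between calls in milliseconds.
class FrameTimeLog
{
public:
    void measureFrame()
    {
        const auto now = std::chrono::steady_clock::now();
        out_ << std::chrono::duration<double, std::milli>(now - last_).count() << '\n';
        last_ = now;
    }

private:
    std::ofstream out_{"frametimes_ms.csv"};
    std::chrono::steady_clock::time_point last_ = std::chrono::steady_clock::now();
};
[/code]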

Besides, if they were really concerned about removing redundant and inapplicable settings, why didn't they remove those anisotropic filtering optimizations and the corresponding profile bugs long ago?

Last edited by rewt; 08-12-2012 at 06:13.
   
Reply With Quote
Old
  (#98)
gx-x
Maha Guru
 
gx-x's Avatar
 
Videocard: MSI 780Ti Gaming
Processor: intel i5 3570K
Mainboard: ASRock Extreme4 Gen3
Memory: patriot 4x2GB @1333 ddr3
Soundcard: Yamaha RX-V550 w/!JBL
PSU: tT SmartSE 530W
Default 08-04-2012, 20:05 | posts: 946 | Location: Serbia

Because they know exactly what is behind each option, because they write the driver and have the source. All of you here, and elsewhere, just speculate. The differences are so marginal in some cases that they are most likely statistical errors.

"why haven't they removed those anisotropic filtering optimizations"

Because they work? You can check it yourself; you just need to know WHEN and HOW they work. There are articles on it though, so you can probably just google some of those. OR run 3DMark06 with and without them and see the difference in score.

PS. There used to be a setting of -1 for pre-rendered frames in nibitor if I am not mistaken... go figure that one out, lol... weird... Besides, as has been said, it is CPU related - how many frames the CPU prepares for sending to the GPU, not how many frames the GPU holds in "stock" - because in modern games some GPUs struggle to render even one frame per 1/60 of a second...

Last edited by gx-x; 08-04-2012 at 20:09.
   
Reply With Quote
Old
  (#99)
Falkentyne
Master Guru
 
Videocard: Sapphire HD 7970 Ghz Ed.
Processor: Core i7 2600k
Mainboard:
Memory:
Soundcard:
PSU: Seasonic Platinum 1000W
Default 08-04-2012, 20:39 | posts: 409

Quote:
Originally Posted by gx-x View Post
So, basically you are saying that you know better than nV engineers do. Kudos to you mate...
Actually, he does.
I don't even own an Nvidia card (besides a GeForce 4), but I can tell you directly that the pre-render limit functioned better, and PERFECTLY, on Windows XP, while it acts differently on Windows 7 (at least on AMD cards, but I'm betting my buns this also applies to Nvidia). On XP, a few old games (Drakan: Order of the Flame comes to mind) would crash on startup with a pre-render limit of 0 (this might have been a driver bug back then on the Detonator drivers; I forget if it was fixed), but setting a pre-render limit of 0 would almost completely remove any sort of mouse lag if you were at 60 FPS or higher with vsync on.

However, Windows 7 has significantly higher mouse lag with the same pre-render setting as XP. And setting a pre-render limit of 1 causes strange things to happen that did NOT happen in XP.

To see the true effect of a pre-render limit of 0 in XP, you NEED a CRT monitor. Sorry, LCD guys, but you simply won't be able to tell the smoothness when comparing it against W7 on a 120 Hz LCD.

The best test:
Run UT2004. Enable vsync and use a 60 Hz refresh rate with a CRT. With a pre-render limit of 0, you will have a very slight feeling of lag, but the game will be fully 100% playable and the turning will be completely smooth. (Turn the mouse slowly - you will notice the turning is GLASS SMOOTH and looks exactly the same as turning your head in real life.) With the default pre-render limit (3) and 60 Hz, you will have lag that makes the game feel as if you are playing in molasses. If you then force the refresh rate to 100 Hz, you will see the lag get much lower. Basically, the lag at a 100 Hz refresh rate with limit = 3 feels about the same as a 60 Hz refresh rate with a pre-render limit of 0.

Now, just for kicks, set a limit of 15 (you may have to edit the registry for this; at least it works with AMD cards - the FlipQueueSize=15 string value). Still in XP. Now, in UT, you will have about HALF a second of mouse input lag at a 60 Hz refresh rate. And you will notice it ALL the time.

Now going back to windows 7.
Pre-render limit of 1 in UT, 60 Hz refresh rate:
The first thing you will notice is that there is more input lag than there was in XP. Also, the game does NOT feel anywhere near as smooth; it will seem as if the game is 'jumping' from pixel to pixel instead of turning smoothly (you will only notice this on a CRT screen! LCDs are NOT fast enough!). And the mouse lag will be much more noticeable and annoying. Also, if you do this in CS:GO (except use a 100 Hz refresh rate now), the mouse movement of the pointer in the main menu will be jittery instead of smooth (AGAIN, you need a CRT to notice this!). Also, in CS:GO you will get horrible frame jittering in many areas when close to a wall (MOST noticeable by the fences at T spawn on Train).

Pre-render limit of 2, 60 Hz refresh rate:
CS:GO: the mouse pointer is smooth in the main menu, and the jittering is gone on Train by the fences at T spawn. UT is STILL not smooth (but it's smoother). Mouse lag now makes 60 Hz with vsync on just unplayable.

Pre-render limit of 3 (aka the default), 60 Hz refresh rate:
UT is smooth now. No frame skipping. But mouse lag makes this unplayable.

Limit 15:
OK, now we see where W7 and XP differ for sure.
At 15, you will see only SLIGHTLY higher mouse lag than at 3 (the default), but it seems like some areas of the game (CS:GO) will suddenly cause a HUGE increase in mouse lag while other areas will be fine. UT will have slightly higher mouse lag, but not the half-second lag of XP.

So it definitely is a big difference compared to XP. The jitteriness of the mouse cursor in CS:GO (at a 100 Hz refresh rate, mind you) with a pre-render limit of 1, as well as the UT2004 panning jitteriness at 60 Hz, is a giveaway that something different is going on.

However, if you use a 100 Hz refresh rate in UT instead of 60 Hz (in W7), a pre-render limit of 1 is glass smooth with no noticeable input lag, while XP was glass smooth at 60 Hz.

TL;DR: Basically, in XP: set a pre-render limit (or Flip Queue Size) of 0, leave it there, and have NO drawbacks. In W7, setting "1" (the lowest value) has drawbacks that were NOT present in XP at 0 OR 1.

If you guys want to test that in 7 on your Nvidia cards, go ahead.
Remember that vsync must be enabled, otherwise you will hardly notice anything. But people without 120 Hz screens, who are capped at 60 FPS, are DEFINITELY getting the short end of the stick here in Windows 7.
   
Reply With Quote
Old
  (#100)
rewt
Maha Guru
 
Videocard: √
Processor: √
Mainboard: √
Memory: √
Soundcard: √
PSU: √
Default 08-04-2012, 21:02 | posts: 1,245 | Location: Americas

Quote:
Originally Posted by gx-x View Post
"why haven't they removed those anisotropic filtering optimizations"

because they work?
All optimizations besides trilinear have no effect on current hardware. Trilinear optimization is also a redundant setting since HQ & Q modes toggle it automatically.
   
Reply With Quote

Tags
flip-queue, input lag, pre-rendered frames, prerendered
