Discussion in 'RivaTuner Advanced Discussion forum' started by Hilbert Hagedoorn, Sep 30, 2007.
Thx for the heads-up. But one question emerges: is this available only for 8xxx NVIDIA cards, or for the older series as well?
I've tried this on my 7950GT 512 PCI-E and it was a no-no.
From release notes:
Correct: only DX10 8x00 cards have a unified shader domain.
Aye. Forgot about that. Sry!
I too was wondering if it was possible to overclock all three domains in GeForce 7 series GPUs separately, rather than just the core clock and memory clock. The geometric domain can obviously be overclocked separately by changing "Geometric Delta" in the BIOS with NiBiTor, then flashing the resulting ROM file. So it's probably a matter of register settings, and should be possible by directly messing with the PLL registers using the /WR switch, coupled with the "Force constant performance level" flag. I was just curious if there is any GUI or any power user setting that would allow separate clock adjustments for each domain; to date I haven't found any. But I probably wasn't looking hard enough, or possibly did not read the manual thoroughly.
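For context, the "messing with the PLL registers" idea mostly boils down to repacking divider values. A minimal sketch, assuming the common refclk * N / (M * 2^P) PLL scheme with a 27 MHz crystal; the bit layout used here is purely illustrative, not the documented NV4x one:

```python
# Hedged sketch: decoding a hypothetical NV4x-style PLL coefficient register.
# The refclk * N / (M * 2**P) relationship is a common PLL scheme; the exact
# bit layout below (P in bits 16-18, N in bits 8-15, M in bits 0-7) is an
# illustrative assumption, not a documented NV4x register format.

REFCLK_MHZ = 27.0  # typical crystal frequency on these boards

def decode_pll(coeff: int) -> float:
    """Return the output frequency in MHz for a packed M/N/P coefficient."""
    m = coeff & 0xFF
    n = (coeff >> 8) & 0xFF
    p = (coeff >> 16) & 0x7
    return REFCLK_MHZ * n / (m * (1 << p))

# Example: M=3, N=50, P=1  ->  27 * 50 / (3 * 2) = 225 MHz
coeff = (1 << 16) | (50 << 8) | 3
print(decode_pll(coeff))  # 225.0
```

Changing a domain's clock would then mean finding M/N/P values whose decoded frequency lands near the target, and writing the repacked coefficient back.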
There is no such functionality implemented for NV4x family in the ForceWare.
In other words, it cannot be done entirely at the driver level. I thought so too -- one would have to first fix the performance level somehow so that the driver doesn't mess with the affected registers, and then write registers directly. Even then, the "Test" button would be no good anymore because (1) the associated driver call probably changes the frequencies briefly, destroying register values and (2) the internal test really only tests the shader domain, from what I understand. It does not test the geometric domain, at least not extensively (as my experiments with geometric delta seem to confirm), and I'm not sure if it tests the rasterizer either. Which is OK for general purposes (overclock all three with the same frequency), because normally the shading units are what starts flaking out first (someone correct me if I'm wrong).
In short, a "user-friendly" implementation of such functionality would be a mess... especially since overclocking different domains separately is more trouble than it's worth -- with the possible exception of the geometric domain, which can already be done via NiBiTor if one really wants to squeeze every last volt and megahertz out of their card.
Do you realize that low-level overclocking is not just a single trivial direct GPU register write operation? Normally it involves a few dozen reads/writes/delays.
I do know that there are usually a few thousand stabilization cycles needed to let various circuits recalibrate themselves, yes -- at least, that's the way it is on motherboards that can change CPU clock frequencies "in software"... How many writes need to be performed and where, I would not know, and probably would not be able to figure out without a data sheet by just comparing register dumps. But probably yes, one would need to do something to shut down all I/O operations currently in progress, etc. before changing clock frequencies, or we would just get a hang. RC will probably reset our adapter, but in doing so will reset the performance level as well.
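The general shape of such a sequence (program the new dividers, wait for things to stabilize, verify) can be sketched like this. Every offset here is a made-up placeholder, and the dict-backed read_reg/write_reg stand in for real low-level MMIO access, which the real driver-side sequence would wrap in far more reads, writes and delays:

```python
import time

# Hedged sketch of a clock-change sequence in the spirit described above:
# program the PLL coefficient, wait for the circuits to stabilize, verify.
# The 0x4010 offset is a made-up placeholder, and the dict-backed
# read_reg/write_reg stand in for real low-level register access.

regs = {0x4010: 0x00013203}  # hypothetical PLL coefficient register

def write_reg(offset: int, value: int) -> None:
    regs[offset] = value

def read_reg(offset: int) -> int:
    return regs[offset]

def set_pll(coeff: int, settle_us: int = 100) -> bool:
    write_reg(0x4010, coeff)          # program new M/N/P dividers
    time.sleep(settle_us / 1e6)       # allow the PLL circuits to re-lock
    return read_reg(0x4010) == coeff  # verify the write took effect

print(set_pll(0x00013503))  # True on this fake register file
```

On real hardware one would also need to quiesce in-flight I/O first and pin the performance level, as discussed above, or risk a hang.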
My guess is, you figured out how to do low level ATi overclocking by stepping through relevant driver code with a kernel debugger? (Perhaps even more involved, as newer ATi boards seem to handle overclocking partially at BIOS level?)
Cool; I've been running my XFX 8800GTX at XXX speeds, but left the shader alone (which means it was beyond XXX speeds.) The XFX XXX is 621/1400/1000 according to Hilbert's Article.
Since I went with some aggressive fan settings, I decided to "go for it" and try the "Ultra" settings of 612/1500/1080. I played the entire COD4 demo max'd out and no artifacts or lockups.
I was going to go for more, but I think I'll play the demo one more time and see what my temps end up. No sense cookin' it for little payback.
And a no-no on my 7950GX2 with the 169.04 driver.
After a bit of experimentation with Test Drive Unlimited "TDU" last night (a very demanding game on the gfx card), I've noticed something odd that I can't quite explain.
Before I enabled the shader control, I had my standard XFX 8800GTX running at 621/1000. This runs the shader faster than a stock XXX card, because I learned from previous postings that the shader is tied to the GPU speed. I ran my card at 621/1000 for months with stock fan speeds; no problems.
So, I unlocked the shader control and set it to 612/1500/1080 like the Ultra is set. When I went to play TDU, after a few minutes I get the "pink screen of death". (Graphics card lockup.)
I figure OK, 1500/1080 is too fast for my shader or RAM, so I'll back it off to the real XXX speed of 621/1400/1000. TDU gives me the pink screen again after a short period of time.
So, I undo the shader unlock in RivaTuner and go back to 621/1000; no more pink screens. Now I'm a little puzzled: either my particular card doesn't like the shader being unlocked from the GPU, or perhaps the frequencies aren't being calculated correctly in this mode? This isn't something I suggest Unwinder spend hours pondering, as very few people unlock the shader controls and my card is perfectly happy at 621/1000. I just thought I'd mention it in case someone else has similar issues.
Or perhaps you failed to use search correctly?
Slowly reread the last sentence.
Thanks for your reply Unwinder.
I think I found what it was. After my nephew played the Crysis demo for a while I was still getting the pink screen of death, so I installed ATITool (in order to see if I was still stable at 621/1000).
The pink screen returned after only ~2 minutes, and I started to think "OK, so either my card is getting wimpy or it's because I changed something else." Yep, in this thread:
I started to play with the card's internal fan speed settings, because a "cooler card is better", right? Well, with some custom settings in there it crashed after only 2 minutes at just 68C. When I went back to the default fan settings, sure, the card went up to 80C after half an hour, but it just kept going like that bunny with the battery.
I was debating about trying the shader control again, but then again...if it works fine at 621/1458/1000 (according to the hardware monitor), it's probably time to call it a day. As you've said above, I could start having more problems again.
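As a sanity check on that 621/1458 reading: if one assumes the driver keeps the linked shader domain at the stock 8800 GTX ratio (1350/575) and snaps to 54 MHz steps, both of which are assumptions rather than documented facts, the numbers line up:

```python
# Hedged sketch: estimating the linked shader clock from the core clock.
# Assumptions: the driver scales the shader domain by the stock 8800 GTX
# ratio (1350/575) and snaps to 54 MHz steps. This matches the 621 -> 1458
# hardware monitor reading mentioned above, but neither detail is documented.

STOCK_CORE, STOCK_SHADER, STEP = 575, 1350, 54

def linked_shader_clock(core_mhz: float) -> int:
    raw = core_mhz * STOCK_SHADER / STOCK_CORE
    return round(raw / STEP) * STEP  # snap to the assumed 54 MHz grid

print(linked_shader_clock(621))  # 1458
print(linked_shader_clock(575))  # 1350 (stock)
```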
I'll respond separately in that other thread, but for now, time for more games.
Do try a shader clock speed of 1674 MHz (with some hardware/driver combinations, the next step of 1728 MHz is also stable).
Every little bit helps. Especially with all these new games that use loads of shader stuff. TimeShift's loading phases are 70% "shader processing", only the rest is everything else. Things can only get more complicated from there (the authors said "our shaders are as advanced as they come" and I can second that).
Luckily, your card won't really need memory overclocking thanks to the bandwidth you have on it. The core speed is superb too, even if you could keep on going a bit (if you can). But those pesky shaders... set them at max.
I did the things below, but I still can't see the shader overclocking option!
I'm using the 162.18 driver. (I know it's a little old; is this the problem?)
And another question:
Is the default core & memory clock of this card OK?
I'd read that the memory clock must be 700 for the 8600 GT.
my system info:
CPU: 6550 Core 2 Duo
Main: Asus P5K
Graphic: XFX 8600 GT
ram: 2x1 GB Ballistix CL4
EDIT: here's the screenshot link
Mate, yes, the problem is your driver. It clearly states in the link: "This new functions is actually enabled within Vista drivers only in combo with NVIDIA ForceWare 163.67 and newer (163.71 works fine also)."
Quick question regarding the shader clock maximum: is it limited to 1782 MHz? When unlinked from the core clock, if I set it to, say, 1800, the monitor reads 1782 MHz.
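A possible explanation, assuming the shader PLL can only produce multiples of 54 MHz (the 1674, 1728 and 1782 values mentioned earlier in the thread all fit that pattern): the requested value gets snapped to the nearest achievable step, so 1800 lands on 1782. A tiny sketch of that assumption:

```python
# Hedged sketch: the shader-clock values seen in this thread (1674, 1728,
# 1782) are all multiples of 54 MHz, suggesting (but not proving) that the
# shader PLL has 54 MHz granularity and requests are snapped to it.

STEP = 54  # assumed shader-clock granularity in MHz

def nearest_step(requested_mhz: int) -> int:
    return round(requested_mhz / STEP) * STEP

print(nearest_step(1800))  # 1782 -- what the monitor reports
print(nearest_step(1728))  # 1728 -- already on the grid
```

On that assumption, 1782 would simply be the last step below 1800, not necessarily a hard maximum.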