Discussion in 'Videocards - NVIDIA GeForce' started by Veteran, Sep 19, 2014.
Take me out of the GTX 980 club.... I switched....
SLI GTX 970's...Gigabyte G1 Gaming Windforce X 2
Heaven is poor as a stability test (even 3DMark 06 is better). Get yourself GRID 2 or GRID Autosport. If it passes a 40-minute bench loop on either one, it will endure anything.
i'll just leave this here:
GPU on PCI-E x1
Well, old GPUs don't benefit much...
Check this out and see what boost it can get from PCIe 2.0 vs PCIe 3.0.
There is no difference between PCI-E 2.0 and PCI-E 3.0 unless you have 4 beast cards in SLI.
Well, after being annoyed by the screen flickering, I can't say I'm entirely confident about that, for several reasons. The comments I heard were... prejudiced(?)
AnandTech has a series of articles testing several platforms (1366, 1155, AM3+) with multi-GPU setups (CrossFire/SLI). The differences in benchmarks between PCIe 2.0 and 3.0 are well within the margin of error.
Sorry, no offense, but an AnandTech series > a video "review", IMO.
Excuse me if I leave this on hold for further discussion (if you care to); it's been a long day.
Then what's causing my bottleneck if OCing my CPU makes no performance difference and there's no difference between PCI-E 2.0 and 3.0?
Granted, going from a single GTX 680 to dual GTX 980s is a huge step up, even if my 980s are only getting 970 levels of performance, but I don't like the fact that I'm losing $200 of value per card.
What makes you think they're performing like 970s?
Matching Hilbert's settings in Tomb Raider, with single GPU and SLI tested, my system is pulling frame rates identical to the two GTX 970s Hilbert recently benched in the SLI GTX 970 article. Also, matching the settings in hothardware.com's Unigine Heaven benchmark, a single GTX 980 is pulling about 62 FPS, which is 3 FPS more than the GTX 970 they benched and 6 FPS less than the GTX 980 they benched.
Edit - Also, I just ran a 3DMark Fire Strike benchmark. A single GTX 980 in my system scored 10,001. The GTX 970 and GTX 980 from Hilbert's review scored 9,568 and 11,168 respectively. So, in my system, a GTX 980 is scoring less than 500 points more than a GTX 970 in a gen 3 PCI-E lane, and more than 1,000 points less than a GTX 980.
So what's the clock on the 2500K?
What about memory speed and timings?
I tested both stock and 4.5GHz. The benchmark results I posted of stock vs OC back on page 14 of this thread show absolutely no improvement with the OC enabled. I left my memory modules at the default mobo speed of 1333MHz (though their stock speed is rated at 1600MHz).
Be careful with 3DMark; you're talking about the overall score here, so obviously Hilbert's i7 3960X @ 4.6 GHz will destroy your 2500K.
Look at the GPU scores: 12,704 for the 980 and 10,667 for the 970.
You got 12,458 with your 980, so you're far from a 970 here. Basically no issue with your score.
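To put those GPU scores in relative terms, here's a quick sketch (the numbers are the Fire Strike GPU scores quoted above; the percentages are just simple ratios):

```python
# Fire Strike GPU scores from the posts above.
scores = {"Hilbert 980": 12704, "Hilbert 970": 10667, "your 980": 12458}

def pct_of(a: float, b: float) -> float:
    """Return a as a percentage of b."""
    return 100 * a / b

print(f"{pct_of(scores['your 980'], scores['Hilbert 980']):.1f}% of Hilbert's 980")  # ~98.1%
print(f"{pct_of(scores['your 980'], scores['Hilbert 970']):.1f}% of Hilbert's 970")  # ~116.8%
```

So the card is within about 2% of Hilbert's 980 and roughly 17% ahead of his 970, which backs up the "far from a 970" point.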
About Tomb Raider and Heaven, it may have to do with CPU/RAM (1333 looks awful for a system like yours) or a PCIe bottleneck, or just lower turbo because of hotter cards or a hotter room than the testers had.
Though, looking at your results page 14, the OC'd CPU benches are always lower than stock (if you didn't swap them). Looks like an unstable OC to me, and that may very well be your problem.
BTW, drivers are buggy for now, as my 680 SLI is getting a much higher GPU score than Hilbert's (and a few other reviewers') 970 SLI, even overclocked. That's just not normal.
EDIT: After flashing a beta BIOS on my motherboard, I loaded optimized defaults, rebooted, and fixed some settings. I forgot to set the XMP profile for my RAM, lol. I fixed that, removed the 4.6GHz overclock on my 4790K to restore stock clocks, and ran Fire Strike again. Here is my updated score for the EVGA GTX 980 SC:
NVIDIA GTX 980
2048 CUDA Cores
1241 MHz Base Clock
1342 MHz Boost Clock
159 GT/s Texture Fill Rate
4096 MB, 256 bit GDDR5
7010 MHz (effective)
224.3 GB/s Memory Bandwidth
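The two derived numbers on that spec sheet follow directly from the clocks and widths. A quick sanity check (the 128-TMU count for the GTX 980's GM204 chip is from public specs, not from this thread):

```python
# Memory bandwidth = effective memory clock * bus width / 8 bits-per-byte.
def memory_bandwidth_gb_s(effective_mhz: float, bus_bits: int) -> float:
    return effective_mhz * 1e6 * bus_bits / 8 / 1e9

# Texture fill rate = core clock * number of texture units (TMUs).
def texture_fill_gt_s(core_mhz: float, tmus: int) -> float:
    return core_mhz * 1e6 * tmus / 1e9

print(memory_bandwidth_gb_s(7010, 256))  # ~224.3 GB/s, matching the listed bandwidth
print(texture_fill_gt_s(1241, 128))      # ~158.8 GT/s, matching the listed 159 GT/s
```

Note the fill rate here uses the 1241 MHz base clock; at boost clocks the effective rate is higher.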
If there's no bottleneck, then I'd imagine it's some sort of driver issue. I'm using Windows 7 ATM, so driver/OS differences may account for some discrepancies.
I went back and set the DRAM speed to 1600MHz and tested the OC stability with IBT. The results came back as stable at ~115 GFlops, which is what I recall it being originally, though it's been a while since I tested my processor's computational speed. Regardless, I reran a single-GPU benchmark in Tomb Raider and the frame rate increased to nearly 90 FPS, which is an improvement but still lower than the 100 FPS Hilbert scored on a reference GTX 980. However, overclocking my GTX 980 boosted the average to 95 FPS, so I'm beginning to suspect it might not be bottlenecked and there may be other factors limiting my cards, such as my OS and driver versions as mentioned earlier.
Edit - Now that I think about it, there may be some driver issue. In PlanetSide 2 I get quite a bit of stuttering when even a single GPU is loaded to 100%. Also, in FurMark, I get very similar stuttering, so much that my card fails to exceed 80% TDP during a burn-in test. Other benchmarks like Tomb Raider, Hitman, Skyrim, Heaven, and 3DMark have no stuttering at all.
GTX 980 5K GRID Autosport benchmark test.
Nice work ludwig
Very minor bottleneck with two cards in SLI at x8/x8 2.0; the biggest difference would be with high-end tri/quad SLI and resolutions higher than 1440p, i.e. 4K or a triple-monitor 5760x1080 setup.
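To put the x8/x8 discussion in rough numbers, here's a back-of-envelope sketch of theoretical one-way PCIe link bandwidth. The per-lane rates and line encodings (8b/10b for 2.0, 128b/130b for 3.0) are from the PCIe specs; real-world throughput is lower due to protocol overhead:

```python
# (GT/s per lane, encoding efficiency) for each PCIe generation.
GENS = {
    "2.0": (5.0, 8 / 10),     # 8b/10b encoding: 20% overhead
    "3.0": (8.0, 128 / 130),  # 128b/130b encoding: ~1.5% overhead
}

def link_bandwidth_gb_s(gen: str, lanes: int) -> float:
    """Theoretical usable one-direction bandwidth in GB/s for a PCIe link."""
    gt_s, efficiency = GENS[gen]
    return gt_s * efficiency * lanes / 8  # divide by 8 bits per byte

for gen, lanes in [("2.0", 8), ("2.0", 16), ("3.0", 8), ("3.0", 16)]:
    print(f"PCIe {gen} x{lanes}: {link_bandwidth_gb_s(gen, lanes):.2f} GB/s")
```

So an x8 2.0 slot gives about 4 GB/s, half of an x16 2.0 slot and roughly a quarter of x16 3.0, which is why the gap only starts to show with tri/quad SLI at very high resolutions.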
OC your RAM to 2133 along with your max CPU OC. I have seen large boosts in some games with a high memory OC and tight timings.
Yeah, it doubles your fps in games.
Going from 1333 (which is what the guy has) to higher speeds gives the largest boost. From 1600 to 2133 I noticed zero difference in FPS.