You can't play 3DMark. Synthetic tests don't tell you how games play. What comparisons do you need? I'll ask again: what will it achieve? The benchmark tests throughput under different scenarios; changing the resolution won't change how fast a card is. 3DMark doesn't use enough video RAM to stress high-end cards either, so it's not a be-all, end-all test suite in many ways. It does NOT accurately simulate how a game will play, which is why we have game benchmarks. Games vary wildly in their fps; a single benchmark result cannot possibly reflect that.
Please READ my posts before "complaining". I never said I'm playing 3DMark. I said: no one plays at 1280x1024 with a GTX 680, correct? As long as "you" don't test all the games on the market, how do you know that "I/the user" can compare cards based on those results? A synthetic benchmark lets me do just that: compare different cards. But at lower than 1920x1080 the CPU limits the graphics card's performance, correct? That's why I want tests in Full HD, not lower.
He's not complaining, you are, and with quite a bit of attitude as well. The reason P mode is used is simple: it's the default test, and it covers the entire spectrum from low-end to high-end. We test graphics cards across that same spectrum, hence 1280x1024 is included in our tests as well; it wouldn't be much of a comparison if we tested an R7770 at Full HD or higher. So for all cards we test four common single-monitor resolutions. Pick whichever one you deem right/closest for you. The exception is the 3DMark series, for which we have chosen the P mode, as it gives the most respectable and understandable number with the most used and tested preset. Don't like that? Sorry...
I'll spell it out for you. 3DMark does not represent gameplay, therefore it doesn't represent games at 1280 or at 1920. So it makes no difference which resolution is used, unless you just want to see a higher resolution on your screen. A synthetic benchmark generally tests maximum throughput, so it will tell you the maximum throughput under ideal conditions, not what games can use. And there's no CPU limit in this benchmark, except maybe at ultra-low resolutions; it doesn't use much CPU in the graphics tests by design. That's why it's irrelevant.
Anno 2070 is such an excellent benchmark, really able to show the shortcomings of a card. For instance, the EVGA GTX 670 SC is superclocked and yet it's almost 10 fps behind the GTX 680. This is a good indication of things to come: some future games may truly underperform when played on a 670 compared to a 680, regardless of core and memory clock speeds. Something to keep in mind.
Future games, especially after the new consoles come out, are going to be way heavier on shader/geometry performance than on memory. That's where you'll start to see the 680's extra shaders/geometry clusters pull away from the rest. I guarantee games coming out two years from now will show a much larger performance gap between a 580 and a 680 than current games do. Don't get me wrong, the 670 will still perform similarly to a 680, but the way current-generation games are designed doesn't even come close to utilizing newer architectures properly. That won't happen until DX11 is standard across the gaming industry. I mean, a 680 is 16x faster than an Xbox 360 and 2.6x faster than the rumored next-generation consoles. Hopefully these consoles will use a DX11-based architecture to provide consistency between PC games and console games.
How many people, in two years' time, are going to be bothered about a larger performance gap between the 680 and 670, though, when both will be old hat?
My point is that the design of GPUs is changing. Right now, memory bottlenecking in consoles is a huge factor: every developer and their mother is trying to find clever little ways to reduce texture bandwidth via texture streaming, new compression algorithms, etc. Current-generation PC technology isn't limited in this regard anymore, whether it's the general system hardware or the architecture of the GPU itself. Both AMD and Nvidia have made strides in balancing geometry/shader/memory output when designing new chips, and the memory bandwidth itself is metric-****tons better than what you have on, say, a 360 or PS3. Anyway, that's the reason some games better show the difference between the 670 and 680. Developers are no longer focusing on memory bandwidth issues, because for current-generation cards it simply isn't an issue. Instead they'll improve quality through things like tessellation and more technically advanced, realistic shaders. I guess I just appreciate the development of the architecture behind these chips more than the actual performance. I rarely play games that require these chips anyway, mostly just StarCraft II nowadays, which can be played on just about anything.
Metro is as old as Bad Company 2, in which the 7970 loses by 28 fps under DX11: it got stomped. BF3 is much newer than Metro, and under DX11 the 7970 again got stomped. In the DX11 3DMark 11 test the 7970 gets humiliated by this MSI 680, crushed by 2000 points. Should I go on? Or should we nitpick a 2-4 fps difference in Metro and forget about the ones that really matter?
The comments link for the new review of the Corsair Vengeance 2000 is mistakenly pointing to this thread. Hope a mod will see this and correct the link for those who want to comment on that review.
Nice review. Too bad you didn't include the 7970 GHz Edition in the comparison. Doing so myself by hand, I see the Radeon is the winner by 5-10% and will cost only about 10 dollars extra. Multi-monitor performance shows even bigger gaps. Loved the read though, thanks!
Yes, but these cards have 2 or 3 GB of VRAM for running resolutions like 5760x1080 with 8x MSAA, and on that front the 7970 smiles friendly at the greens one moment and leaves them behind the next. The troubles of the GeForce cards are caused by their ever-cheaper memory solutions getting stomped in the end. And then... the mighty Mr. GHz Edition shows up and makes clear once and for all how to run three displays properly, hehe... BTW, I play Sniper Elite V2 on three displays with only one HD 6xxx card at 45 fps most of the time (without AA, but that's not a problem at this resolution). It's really better than at 1080p. The biggest difference in gaming experience since the introduction of the Pentium processor or the Commodore 64, lol. Thank you, AMD.