wtf is all this blabbering in this thread? When the ATI 7970 launched a lot of people were disappointed, now the 680 launches and the same people are having a boner :puke2:. The GTX 680 is a good card, yes, but not breathtaking. The 7970 is still a very good card, and if I had to choose I'd take the 7970 for its larger memory and better implementation of it. I run Eyefinity, and I'm quite certain that in 2 months 2GB of RAM won't be enough. At 500 EUR (here in Germany both are priced at the same level) I'm not gonna pay for either of these cards. If the price drops toward 350 EUR then maybe I'll consider it :3eyes:.
As the review states, there is a 4GB version of the card. Also, the reason a lot of people are hyped up is that this may actually be the mid-range card marketed as high end (because it's faster than the 7970), so if this was supposed to be Nvidia's mid-range, imagine what the true high end will be like :banana:.
I just read reviews from Guru3D, AnandTech, TechPowerUp and Hardware Canucks. It seems the 680 surpasses the 7970 in most recent games and matches it in 2-3. The only games where it consistently loses are Anno 1404, Metro 2033 and Aliens vs. Predator. Pretty solid considering it's $50 cheaper and consumes less power. P.S. I'm comparing single-display results here.
Yeah, it'll be the same in New Zealand. We always get shafted. You'd have thought that with our current exchange rate prices would go down, but it hasn't been the case.
I've got BIG doubts that this is mid-range. ATI will of course hit back hard with the 7990, and this is where Nvidia would want to counter. Whether they'll succeed is still an open question. Again, the biggest issue with these cards is the PRICE. I'd get drunk and puke 5 times in a row before paying 500 EUR for one of these cards :puke2:. If Nvidia had launched the card in the 300 EUR range then YES, kudos to them, but that's not the case.
Thx Hilbert and crew. Great review as always. Would've been nice to see some benchmarks with the 7970 running at the same speed the 680 reaches in boost mode. I read that the boost even went to 1100MHz on the core? So it's not really a true comparison. I know it's new tech, but it does give nV an advantage with the non-overclocker/noob out there. Food for thought. Great card btw. Hope to see prices on both sides drop soon.
Why do people want to see clock-for-clock comparisons? It makes no sense when the two architectures do things in different ways. It's like testing two performance cars, adding a test of how fast they are at 2,500 rpm in 4th gear, and proclaiming that the faster one is doing something better.
Here's some tri-monitor benches @ 5760x1080. Surprised that a card with a 256-bit bus can do this. http://www.bit-tech.net/hardware/2012/03/22/nvidia-geforce-gtx-680-2gb-review/10 http://www.bit-tech.net/hardware/2012/03/22/nvidia-geforce-gtx-680-2gb-review/11
Nvidia won this high-end round. I, however, will not bother with 200W cards; it needs to be 120W max or something like that. GPGPU is better on AMD though?
Hilbert touches on the mid-range issue on the last page. http://www.guru3d.com/article/geforce-gtx-680-review/26 I don't think anything is going to beat the full Kepler GK110.
It is relevant. When ATi ran higher clocks, the nVidiots screamed "It's only winning because it's running higher clocks". The 7970 easily clocks to 1250MHz. So yes, it's relevant. I think 28nm yields need to improve before we see price drops.
Cards are reviewed in their out-of-the-box configuration. I'm pretty sure everyone looking to overclock will read the overclocking articles that Hilbert does with each major release and make a decision based on those. IMO, 28nm is going to have to improve greatly if we're to see the GK110 any time this year; the thing is huge.
It's relevant to test at stock speeds, as that indicates what the cards do out of the box and what most users will experience. It might also be relevant to test at max overclock, as that shows the full potential of the cards (even though different chips of the same type most likely won't overclock exactly the same). It might also be relevant to test performance per watt, to see which architecture is the most power efficient. It might be relevant to test at a level where the cards are equally loud, or at the same temperature, to see which cooler is preferable when it comes to noise levels. It is, however, not relevant to test at the same clock speeds when the cards do not work the same way.
We'll probably find ourselves in the strange situation that it'll run Metro 2033 at 60 fps and Portal 2 at 500 fps at 1080p (using that 7 GHz i7 3770 lol) and wonder about the state of PC graphics.
Excellent power consumption chart: it's at ultra-high resolutions that the Nvidia card is mostly more efficient. Source: http://www.nordichardware.se/test-l...s-snabbaste-grafikkrets.html?start=22#content The new vsync is what interests me most, along with FXAA.
No, it's not relevant, because the Nvidia people who said that then and the ATI people who want it now are both wrong.