It's nostalgia, as games like Zelda/Worms/Final Fantasy were all 2D and low-res, and games weren't even 3D until the late SNES/PlayStation days. PC games like Doom were pixelated to hell and back (pun intended). Another revolutionary 3D game was GoldenEye, and just looking at the blocky polygons makes my eyes hurt.
Tried playing GoldenEye just last week. Remote mines were always fun for me. Back on topic: can't wait to see what both teams bring out, just wish the high-end cards weren't marked for Q2 :nerd:
No, because discounting dual-GPU cards it is the best AMD has. It can't compete with the 580, and that's why AMD had to be careful how they priced the 6970. Cost is so close it's splitting hairs. Performance-wise in games, if you didn't know what card was in the system you wouldn't have a clue either way. The power usage of the two cards is so close you would never be able to tell the difference. As for VRAM, how many people play at 2560x1600 or higher? And even then 2GB isn't really enough anyway, so it's a moot point. VRAM is a selling point, always has been and always will be. Some people just want more because they think it makes the card better regardless, and AMD capitalizes on this. 6970 CF performance-wise, again it depends on the game. If you didn't know what cards were installed you wouldn't be able to tell a difference, so all that about "much better performance" is crap.

-edit- 1GB of VRAM is low for HD+ resolutions, but that doesn't mean you need 2GB either.
All your points are moot; the various reviews and charts in my last post show that. In fact, at multi-monitor resolutions 570 SLI can't even touch HD 6970 CFX, which competes with GTX 580 SLI (HardOCP review), and the ~50 watts of extra power usage for 570 SLI is a huge difference, as any reasonable person would see. As for discounting dual-GPU cards, why exactly should that be the case when you clearly posted that ATi can't compete with Nvidia in terms of performance, when in fact they have the fastest single card... Anyone running high-res/multi-monitor would be able to tell the difference between 570 SLI and HD 6970 CFX, as 6970 CFX is far superior :stewpid:
Superior, ROFL. Most light bulbs use more than 50 watts. If I gave you two otherwise identical systems, one with a 590 and one with a 6990, you wouldn't be able to figure out which system had which card.
Your light bulb seems to have gone out... First you talked about ATI not competing, and that was shown to be wrong; now you're trying to show that Nvidia can compete. How the tables have turned :stewpid:
I think you guys aren't getting the news. A 45% improvement in transistor performance over the previous 40nm node != a 45% increase in GPU performance, and given how sucky TSMC's 40nm node was, I wouldn't expect much. Also, the increase in performance is mainly due to the high-k (high dielectric constant) gate dielectric, which Intel has been using for a while. At the end of the day, TSMC is doing nothing special here.
I expected 20nm, and going for the 3rd dimension (i.e. building upwards) to double or triple the performance (not really sure if they use that 3rd dimension now or not). 45% is very disappointing.
Because back then games had thick/wide simple geometry that hid it better: just a bunch of joined-up square/rectangular rooms with squarish objects strewn about. It was all about textures rather than complex geometry. As games get more advanced, geometry gets more complex, with thinner, sharper edges everywhere, and that shows up far more jaggies. So do the larger maps and longer viewing distances. Larger screens also show it up much more: before, you had a 17" 4:3 screen running at 1600x1200 or even 1920x1200, which eliminated jaggies almost completely. Now you have a 24" widescreen with a much larger viewable area at only 1920x1200, or even worse 1920x1080. Back then I never used AA, as 1600x1200 pretty much eliminated the need; now with 1920x1200 24" screens I need 2x AA minimum, preferably 4x.
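The pixel-density argument above can be put in numbers with a quick sketch (using the screen sizes and resolutions from the post): the old small screen actually packed in noticeably more pixels per inch, so each jaggy was physically smaller.

```python
import math

def ppi(width_px, height_px, diagonal_in):
    """Pixels per inch, from resolution and diagonal screen size."""
    return math.hypot(width_px, height_px) / diagonal_in

old = ppi(1600, 1200, 17)   # 17" 4:3 screen at 1600x1200
new = ppi(1920, 1200, 24)   # 24" 16:10 widescreen at 1920x1200
print(round(old), round(new))  # prints: 118 94
```

Roughly 118 PPI then versus 94 PPI now, so pixels (and stair-step edges) are about 25% coarser on the newer screen, which is why AA matters more.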
Why? Don't be all surprised when it does... I have no doubt it will be a monster GPU and much, much better than Kepler. OK, except when it comes to drivers, lol. But this new next-gen core will be friendlier than VLIW, so no more very long instruction words.
Some of you guys need to read the article again: it's saying a 45% improvement in speed, as in they can get 45% higher clock speeds on the 28nm process. They aren't saying next-generation cards will only be 45% faster, just that the process allows for 45% better clock speeds.

Imagine a 680GTX that consumes 250W and has 768 CUDA cores on a 384-bit or 512-bit memory bus with 3GB or 4GB of GDDR5, running at a core speed of 1.1GHz and a shader speed of 2.2GHz.

Compare that to the 580GTX: consumes 250W, 512 CUDA cores, 384-bit memory bus with 1.5GB GDDR5, running at a core speed of 772MHz and a shader speed of 1.54GHz.

That's easily 100% better fps for the 680GTX.
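A quick sanity check on the ~100% claim (note: the 680GTX numbers in the post are speculation, not announced specs), assuming fps scales roughly with cores × shader clock:

```python
def shader_throughput(cores, shader_ghz):
    """Relative shader throughput ~ core count x shader clock (GHz)."""
    return cores * shader_ghz

gtx580 = shader_throughput(512, 1.54)  # 580GTX: 512 cores at 1.54GHz shader clock
gtx680 = shader_throughput(768, 2.2)   # speculated 680GTX specs from the post
print(round(gtx680 / gtx580, 2))       # prints: 2.14
```

So under those assumed specs the raw shader throughput would be about 2.1x, consistent with the "100% better fps" estimate, though real-world fps also depends on memory bandwidth and the rest of the pipeline.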
TSMC is still running on planar tech. Also, again, we are talking about TRANSISTOR PERFORMANCE, which is not computational power.
If the article is true, performance could be much higher than 45%: a potentially 45% higher clock speed plus many more shaders can lead to a dramatic performance improvement.