Those are commas, not periods, so gigaflops stands. I don't know about that; 1024 CCs would be around ~700 mm². I'm pretty sure it will shock everyone when GK104 doesn't turn out to be a 1536 SP part like Theo and the nameless German* guy think. *Or was he Dutch?
Me too! But there are links to his info: http://www.semiaccurate.com/forums/showpost.php?p=155081&postcount=1437
seronx, dunno if you noticed, but your 650M looks weaker than the 540M, yet it beats the crap out of the 550M... what gives?
LucidLogix:
http://www.samsung.com/cn/consumer/computers-office/ultra-mobile-pc/q-series/NP-Q470-JT02CN
http://www.youtube.com/watch?v=Qdkem0RpUT0 <- video
I think the best realistic outcome would be for the highest-end mainstream Kepler part to perform between a 7950 and a 7970 while costing around $400 (US). That would force AMD to lower prices on their excellent cards, and we consumers win.

The only thing I don't understand is why Nvidia would hamstring their cards with slower and less memory. I'm already exceeding 2GB of VRAM usage in Skyrim, which makes me extremely glad I opted for my 6970 instead of the GTX 570. 2GB is adequate for 99% of PC gamers, but I always mod my games out the ass if possible. It's the only thing that makes my investments worth it, considering the stagnation in PC gaming tech due to the utter obsolescence of console hardware.

And please, I am no fanboy of either company. My 6970 replaced two GTX 260s, and I ALWAYS go with the best price/performance value. I couldn't give a crap about brand loyalty.
Simple error. I read too much into the 384 CC number.
http://www.abload.de/img/221138yniyzzoykyy5iky6cjs3.jpg
http://img210.imageshack.us/img210/9239/221138yniyzzoykyy5iky6c.jpg <- GT 650M 2GB
http://3dvision-blog.com/wp-content/uploads/2011/11/toshiba-540m-gpu-z.jpg <- GT 540M 2GB
The only issue for the 650M is the clock rate.
OK, but GK110 is supposed to be more like 550 mm²:

GTX680: GK110, 550 mm², core TBD (~850 MHz), shader TBD (~1.7 GHz), 1024 / 32 / 64, 5.5 GHz GDDR5, 512-bit, 352 GB/s
GTX670: GK110, 550 mm², core TBD (~850 MHz), shader TBD (~1.7 GHz), 896 / 28 / 56, 5 GHz GDDR5, 448-bit, 280 GB/s

http://lenzfire.com/2012/02/entire-nvidia-kepler-series-specifications-price-release-date-43823/

Lol, you mean that Wurst picture leak ("Wurst" is German for sausage)? I think it showed 596(?) cores first, then 768 cores, and now it's 1536 with no hot clocks.
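For what it's worth, the bandwidth figures in that leak are at least internally consistent. A quick sanity check (assuming the GDDR5 speeds quoted are effective data rates):

```python
# Sanity-check the leaked memory bandwidth numbers:
# bandwidth (GB/s) = bus width (bits) / 8 * effective data rate (GT/s)
def bandwidth_gb_s(bus_bits: int, data_rate_gt_s: float) -> float:
    return bus_bits / 8 * data_rate_gt_s

print(bandwidth_gb_s(512, 5.5))  # leaked "GTX680": 352.0 GB/s
print(bandwidth_gb_s(448, 5.0))  # leaked "GTX670": 280.0 GB/s
```

So whoever made the table at least did the bus math right; that says nothing about whether the core counts are real.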
ONLY? I wish. Isn't it kinda obvious that such a lowly 405 MHz clock is all wrong, and was in fact calculated by mistakenly dividing the shader clock by 2 (810 / 2 = 405 MHz)? An unnecessary move, because shader clock = GPU clock.
(GTX 580) 550 mm² × 2 × (28/40)² ≈ 539 mm²

But realistically the equation goes like this:

(GTX 580) 550 mm² × 2 × (28/40) ≈ 770 mm²

The reason is that width is the only factor that changes; the length and height of a node stay the same. That's why UTBB FD-SOI is so important for AMD and their CPUs: UTBB effectively shortens the length of a node.

http://www.3dcenter.org/news/die-aktuellen-spezifikationen-zum-gk104-kepler-performance-chip
http://www.brightsideofnews.com/new...k1042c-geforce-gtx-670680-specs-leak-out.aspx
Talking about these guys. :bang:

You could be right; that would explain a lot. Though it would only apply to these rebadged Fermi parts:

16 pixel ops × 810 MHz => 12.96 Gpixel/s
16 texture ops × 810 MHz => 12.96 Gtexel/s

http://img402.imageshack.us/img402/6776/597562jpg.png
http://www.abload.de/img/2211389hlhr3zl0ulz85epyjgn.jpg

The only issue then is that ^
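To make the two scaling assumptions above concrete (area shrinking in both dimensions vs. only in one), plus the fillrate math, here's the arithmetic spelled out; the 550 mm² GF110 baseline and the 810 MHz clock are the numbers from the posts above:

```python
# Die-area scaling for a doubled GF110-class chip going 40 nm -> 28 nm.
base_area = 550.0        # GTX 580 (GF110) die area, mm^2
shrink = 28 / 40         # linear shrink factor (0.7)

both_dims = base_area * 2 * shrink ** 2  # shrink applies to width AND length
one_dim = base_area * 2 * shrink         # shrink applies to width only

print(round(both_dims))  # 539 mm^2
print(round(one_dim))    # 770 mm^2

# Fillrates for the rebadged part: 16 pixel/texture units at 810 MHz
print(16 * 810 / 1000)   # 12.96 Gpixel/s (and Gtexel/s)
```

The gap between 539 and 770 mm² is exactly why which assumption you pick matters so much for guessing GK110's size.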
Well, color me blown away. I mean, 1.2x more power!? That means 2x by the year 2016. It also explains how Mr. Eric Demers could not adapt to Rory's ubiquitous pace and had to leave. Spoiler: "enabling higher performance"
Looks like AMD has their own cards to launch at about the same time, ruining Nvidia's party a little. Taken from here: http://forums.guru3d.com/showthread.php?t=359547