Discussion in 'Videocards - NVIDIA GeForce' started by Shadowdane, Mar 17, 2016.
GP100 supports HBM 2 as well as GDDR5 ??? Highly Unlikely
Even the name, X80, is totally un-nvidia. That's just not happening.
HBM would allow for something like this without having to make 2 different chips.
The memory controller for GDDR5 takes up a massive amount of die space; HBM's doesn't, and you would need both to support this configuration. That means wasted transistors on both chips (added cost), higher leakage on both chips, lower yields on both chips, and/or potentially lost performance on both chips.
I don't think it makes sense to have both
TDP is lower, yet twice the processing power - 6 TFLOPS vs 12.
And again, it's just TDP, not actual energy consumption. Let's wait for proper tests. So far AMD has much higher actual energy consumption.
Obviously it's fake. High TDP, GDDR5, a 512-bit bus with GDDR5 on an Nvidia card??? Pure joke.
Or it's just a PR stunt to let gamers simmer in their own juice.
Just wait for the official statement.
You can't have the same GP100 chip with two different memory controllers, way too expensive.
Sign me up for a X80, as long as the price is similar to 970 around $300.
My grain of salt.
Actually going from 28nm down to 16nm is a huge decrease in size, even more than the 57% of the size you talked about. It's because you have to think about process nodes in terms of area (transistors are effectively 2D structures), so the nm figure gets squared. This is the calculation showing theoretically how small 16nm is compared to 28nm:
(16 × 16) / (28 × 28) = 0.33
Therefore 16nm transistors only take up 33% of the space of their 28nm brothers. (Another way of saying it is that 28nm is 3 times the size (100/33) of 16nm.) They skipped the 20nm node entirely, that's why they're so much smaller.
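The area math above can be sketched in a few lines, assuming (as the post does, as a simplification) that the node name scales linearly in both dimensions:

```python
# Rough sketch: fraction of area a feature on a new node takes
# vs. the old node, assuming linear scaling in both dimensions.
# (Real node names don't map exactly to physical dimensions.)
def area_ratio(new_nm: float, old_nm: float) -> float:
    """Area fraction = (new/old) squared, since transistors are ~2D."""
    return (new_nm ** 2) / (old_nm ** 2)

print(round(area_ratio(16, 28), 2))  # 0.33, matching the calculation above
```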
Anyway, I'm not sure I believe this table showing the X80, etc, as me & some others were speculating a couple of days ago with names for the next Pascal architecture, and you can see from Post #1516 on the following page (http://forum.notebookreview.com/thr...ews-updates-1000m-series-gpus.763032/page-152) that we came up with that naming scheme. I reckon someone nicked that idea & just fabbed a spreadsheet.
GDDR5 would make sense on an NVIDIA product releasing this year. There is no way that GDDR5X production would ramp up fast enough to cover millions of cards in sales. The speed and width of the GDDR5 make sense too. NVIDIA has had a very effective memory controller on Maxwell, if they translate that to a card with actual 400-500GB/sec bandwidth, they will be fine.
By the way, "X" is the latin numeral for "10". So the naming scheme does make a lot of sense. These might be fake, but they do make sense.
It's not Latin, it's a Roman numeral.
Well, your math is all wrong for one. You didn't convert bits to bytes (divide by 8), then convert megabytes to gigabytes.
Let's take the 980 Ti for example:
3505MHz Memory Clock
384 * (3505 * 2) / 8 / 1000 = 336.48GB/s
And here is your example:
512 * (4000 * 2) / 8 / 1000 = 512GB/s
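The same arithmetic as a small helper, for anyone wanting to check other cards (a sketch of the formula in the posts above, nothing more):

```python
# Memory bandwidth = bus width (bits) * effective transfer rate,
# divided by 8 (bits -> bytes) and 1000 (MB/s -> GB/s).
def bandwidth_gbs(bus_width_bits: int, mem_clock_mhz: float) -> float:
    effective_rate = mem_clock_mhz * 2  # GDDR5 is double data rate
    return bus_width_bits * effective_rate / 8 / 1000

print(bandwidth_gbs(384, 3505))  # 980 Ti: 336.48 GB/s
print(bandwidth_gbs(512, 4000))  # rumored 512-bit card: 512.0 GB/s
```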
if anyone releases a consumer graphics card with a 600mm^2 die size on 16/14nm finfet this year I'll eat my hat.
Also, remind me to buy a hat
Who were the Romans, oh wait, the Latins. :infinity:
Roman numerals use letters from the Latin alphabet, and they are alternatively known as Latin numerals.
If the Titan specs are true, it means it will be released in 2017.
The Romans spoke Latin; they were not the Latins.
This is neither the time nor the place
Fixed that for you my friend
Also, the 16nm process uses a FinFET design, compared to planar on 28nm, which has the added benefits of better performance and lower power draw in a direct transistor-for-transistor comparison. A lot of users reckon the next generation of cards can't be that much better than the 900 series; many will be surprised.
Early test results from Samsung comparing 28nm planar to 16/14nm FinFET.
Well it's looking better & better for Pascal!