Nvidia Pascal Specs?

Discussion in 'Videocards - NVIDIA GeForce' started by Shadowdane, Mar 17, 2016.

  1. poornaprakash

    poornaprakash Active Member

    Messages:
    93
    Likes Received:
    15
    GPU:
    AMD/Nvidia
    GP100 supports HBM2 as well as GDDR5??? Highly unlikely.
     
  2. spine

    spine Member Guru

    Messages:
    152
    Likes Received:
    16
    GPU:
    Titan Xp CE + EK WB
    Bull****!

    Even the name, X80, is totally un-nvidia. That's just not happening.
     
  3. semantics

    semantics Member

    Messages:
    39
    Likes Received:
    4
    GPU:
    N/A
    HBM would allow for something like this without having to make 2 different chips.
     
  4. Denial

    Denial Ancient Guru

    Messages:
    13,527
    Likes Received:
    3,073
    GPU:
    EVGA RTX 3080
    The memory controller for GDDR5 takes up a massive amount of die space. HBM's doesn't, and you would need both controllers to support this configuration. That means wasted transistors on both chips (added cost), higher leakage on both chips, lower yields on both chips, and/or potentially lost performance on both chips.

    I don't think it makes sense to have both.
     

  5. coth

    coth Master Guru

    Messages:
    512
    Likes Received:
    61
    GPU:
    KFA2 2060 Super EX
    TDP is lower, yet there's twice the processing power - 6 TFLOPS vs 12.

    And again, it's just TDP, not actual energy consumption. Let's wait for real-world tests of production cards. So far AMD has had much higher actual energy consumption.
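    For context, TFLOPS figures like these come from the standard theoretical-throughput formula: 2 FLOPs (one fused multiply-add) per CUDA core per clock. A minimal Python sketch, using purely illustrative core counts and clocks rather than anything from the leaked table:

        # Theoretical FP32 throughput: 2 FLOPs (one FMA) per CUDA core per clock.
        # Core counts and clocks below are illustrative placeholders, not leaked specs.
        def fp32_tflops(cuda_cores, clock_mhz):
            return 2 * cuda_cores * clock_mhz / 1e6

        print(fp32_tflops(3072, 1000))  # ~6.1 TFLOPS, roughly today's "6 TFLOPS" class
        print(fp32_tflops(4096, 1500))  # ~12.3 TFLOPS, roughly the rumoured "12 TFLOPS"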
     
  6. Turanis

    Turanis Ancient Guru

    Messages:
    1,779
    Likes Received:
    475
    GPU:
    Gigabyte RX500
    Obviously it's fake. High TDP, GDDR5, 512-bit with GDDR5 on an Nvidia card??? Pure joke.
    Or it's just a PR joke to let gamers simmer in their own juices. :D

    Just wait for the official statement.
     
  7. YetYhunter

    YetYhunter Maha Guru

    Messages:
    1,231
    Likes Received:
    6
    GPU:
    Jetstream GTX1080
    You can't have the same GP100 chip with two different memory controllers; it would be way too expensive.
     
  8. trentbg

    trentbg Active Member

    Messages:
    56
    Likes Received:
    0
    GPU:
    nVidia
    Sign me up for an X80, as long as the price is similar to the 970, around $300.
     
  9. Solfaur

    Solfaur Ancient Guru

    Messages:
    7,579
    Likes Received:
    1,043
    GPU:
    GB 3080Ti Gaming OC
    My grain of salt.

    [IMG]
     
  10. Robbo9999

    Robbo9999 Ancient Guru

    Messages:
    1,556
    Likes Received:
    293
    GPU:
    GTX1070 @2050Mhz
    Actually, going from 28nm down to 16nm is a huge decrease in size, even more than the 57% of the size you talked about. That's because you have to think of transistor sizes in terms of area (they're effectively 2D structures), so the nm figure gets squared. This is the calculation showing theoretically how small 16nm is compared to 28nm:
    (16 * 16) / (28 * 28) = 0.33
    Therefore 16nm transistors only take up about 33% of the space of their 28nm brothers. (Another way of saying it is that 28nm is 3 times the size (100/33) of 16nm.) They skipped a node - the 20nm node - which is why the new transistors are so much smaller.
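    A quick sketch of that area scaling, under the idealised assumption that the node name is a linear feature size (real node names don't map exactly to physical dimensions):

        # Idealised area scaling between process nodes: area shrinks with the
        # square of the linear feature size.
        def relative_area(new_nm, old_nm):
            return (new_nm ** 2) / (old_nm ** 2)

        print(relative_area(16, 28))  # ~0.33 -> 16nm takes ~33% of the 28nm area
        print(relative_area(20, 28))  # ~0.51 -> the skipped 20nm node would only have halved it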

    Anyway, I'm not sure I believe this table showing the X80, etc. Me & some others were speculating a couple of days ago about names for the next Pascal architecture, and you can see from Post #1516 on the following page (http://forum.notebookreview.com/thr...ews-updates-1000m-series-gpus.763032/page-152) that we came up with that naming scheme. I reckon someone nicked that idea & just fabbed a spreadsheet.
     
    Last edited: Mar 17, 2016

  11. PrMinisterGR

    PrMinisterGR Ancient Guru

    Messages:
    7,888
    Likes Received:
    754
    GPU:
    Inno3D RTX 3090
    GDDR5 would make sense on an NVIDIA product releasing this year. There is no way that GDDR5X production could ramp up fast enough to cover millions of cards in sales. The speed and width of the GDDR5 make sense too. NVIDIA has had a very effective memory controller on Maxwell; if they translate that to a card with an actual 400-500GB/sec of bandwidth, they will be fine.

    By the way, "X" is the Latin numeral for "10", so the naming scheme does make a lot of sense. These might be fake, but they do make sense.
     
  12. TirolokoRD

    TirolokoRD Ancient Guru

    Messages:
    1,934
    Likes Received:
    0
    GPU:
    EVGA 980TI ACX 2
    It's not Latin, it's a Roman numeral.
     
  13. Shadowdane

    Shadowdane Maha Guru

    Messages:
    1,420
    Likes Received:
    70
    GPU:
    MSI RTX 3080 Suprim

    Well, your math is all wrong for one. You didn't convert bits to bytes (divide by 8), then convert megabytes to gigabytes.

    Let's take the 980Ti for example:
    384bit bus
    3505Mhz Memory Clock

    384 * (3505 * 2) / 8 / 1000 = 336.48GB/s


    So here is your example:
    512 * (4000 * 2) / 8 / 1000 = 512GB/s
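    The same arithmetic as a small Python sketch (the function name is just for illustration):

        # GDDR5 is double data rate, so the effective rate is 2x the memory clock;
        # divide by 8 for bits -> bytes, then by 1000 for MB/s -> GB/s.
        def gddr5_bandwidth_gbs(bus_width_bits, memory_clock_mhz):
            return bus_width_bits * (memory_clock_mhz * 2) / 8 / 1000

        print(gddr5_bandwidth_gbs(384, 3505))  # 980 Ti: ~336.48 GB/s
        print(gddr5_bandwidth_gbs(512, 4000))  # rumoured 512-bit card: 512.0 GB/s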
     
  14. Ieldra

    Ieldra Banned

    Messages:
    3,490
    Likes Received:
    0
    GPU:
    GTX 980Ti G1 1500/8000
    If anyone releases a consumer graphics card with a 600mm^2 die on 16/14nm FinFET this year, I'll eat my hat.

    Also, remind me to buy a hat.
     
  15. PrMinisterGR

    PrMinisterGR Ancient Guru

    Messages:
    7,888
    Likes Received:
    754
    GPU:
    Inno3D RTX 3090
    Who were the Romans? Oh wait, the Latins. :infinity:

    Roman numerals use letters from the Latin alphabet, and they are alternatively known as Latin numerals.

    If the Titan specs are true, it means it will be released in 2017.
     

  16. Ieldra

    Ieldra Banned

    Messages:
    3,490
    Likes Received:
    0
    GPU:
    GTX 980Ti G1 1500/8000
    The Romans spoke Latin; they were not the Latins.
     
  17. PrMinisterGR

    PrMinisterGR Ancient Guru

    Messages:
    7,888
    Likes Received:
    754
    GPU:
    Inno3D RTX 3090
    This is neither the time nor the place :D
     
  18. fantaskarsef

    fantaskarsef Ancient Guru

    Messages:
    12,469
    Likes Received:
    4,755
    GPU:
    2080Ti @h2o
    Fixed that for you my friend :D
     
  19. k3vst3r

    k3vst3r Ancient Guru

    Messages:
    3,501
    Likes Received:
    76
    GPU:
    KP3090 G9 240Hz
    Also, 16nm uses a FinFET design compared to planar at 28nm, which brings the added benefits of better performance and lower power draw in a direct transistor-for-transistor comparison. A lot of users reckon the next generation of cards can't be that much better than the 900 series; many will be surprised.

    Early test results from Samsung comparing 28nm planar to 16/14nm FinFET.
     
    Last edited: Mar 17, 2016
  20. Robbo9999

    Robbo9999 Ancient Guru

    Messages:
    1,556
    Likes Received:
    293
    GPU:
    GTX1070 @2050Mhz
    Well it's looking better & better for Pascal!
     
