New AMD roadmap gives more insight into Polaris 10 and 11

Discussion in 'Frontpage news' started by Hilbert Hagedoorn, Apr 21, 2016.

  1. Hilbert Hagedoorn

    Hilbert Hagedoorn Don Vito Corleone Staff Member

    Messages:
    48,546
    Likes Received:
    18,860
    GPU:
    AMD | NVIDIA
  2. Fox2232

    Fox2232 Guest

    Messages:
    11,808
    Likes Received:
    3,371
    GPU:
    6900XT+AW@240Hz
    DP 1.3 => 3840x2160 @ 120Hz = 1920x1080 @ 480Hz.
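The equivalence quoted here is plain pixel-rate arithmetic (blanking intervals ignored); a quick sanity check:

```python
# Pixel throughput comparison: 4K @ 120Hz vs 1080p @ 480Hz.
# 1080p has exactly 1/4 the pixels of 2160p, so 4x the refresh
# rate gives the same pixels-per-second figure.
uhd_rate = 3840 * 2160 * 120   # pixels per second at 4K 120Hz
fhd_rate = 1920 * 1080 * 480   # pixels per second at 1080p 480Hz

print(uhd_rate == fhd_rate)  # True: both are 995,328,000 px/s
```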
     
  3. fantaskarsef

    fantaskarsef Ancient Guru

    Messages:
    15,760
    Likes Received:
    9,656
    GPU:
    4090@H2O
    Does look pretty powerful. But just reading Async Compute on that slide makes me cringe...
     
  4. CrisanT

    CrisanT Guest

I am still confused about the core clocks. I mean, the green team manages 1.5GHz now (yes, in cherry-picked chips). Why is the red team still around the 1GHz mark?
     

  5. Bogeyx

    Bogeyx Active Member

    Messages:
    72
    Likes Received:
    0
    GPU:
    2070 RTX @ 2050Mhz
So the new enthusiast cards (including a jump from 28 to 14nm) can do 8.2 TFLOPS while the current cards (Fury) can do 8.6 TFLOPS?
What did I miss?
     
    Last edited: Apr 21, 2016
  6. fantaskarsef

    fantaskarsef Ancient Guru

    Messages:
    15,760
    Likes Received:
    9,656
    GPU:
    4090@H2O
Well, reaching 1500MHz with Maxwell through overclocking wasn't that difficult for most; no actual cherry-picking needed.
But my simple guess for why they don't really surpass 1GHz: they might not need the extra performance. :)
     
  7. JonasBeckman

    JonasBeckman Ancient Guru

    Messages:
    17,564
    Likes Received:
    2,962
    GPU:
    XFX 7900XTX M'310
I was under the impression that "Polaris 11" was a bit of a low-end GPU and "Polaris 10" mid-range; "Vega 10", coming in 2017, should be the enthusiast model. Still, the 14nm die shrink might give some nice performance gains for the Polaris 10/11 GPUs even if they're not quite as "equipped" as the current Fury models.
(No idea what it looks like for Nvidia; the X70 is apparently the initial low/mid-end model, with the X80 a bit above that, and then possibly an X80 Ti later as a high/enthusiast-end model, but I have no idea beyond what some rumors hinted at.)
     
    Last edited: Apr 21, 2016
  8. Ryu5uzaku

    Ryu5uzaku Ancient Guru

    Messages:
    7,552
    Likes Received:
    609
    GPU:
    6800 XT
    Hmmh I guess I will wait for Vega then for sure :)
     
  9. Fox2232

    Fox2232 Guest

    Messages:
    11,808
    Likes Received:
    3,371
    GPU:
    6900XT+AW@240Hz
I think this tells the entire story of this particular step from 28 to 14/16nm.
https://devblogs.nvidia.com/parallelforall/inside-pascal/
nVidia built GP100 (610mm^2) as a bigger chip than GM200 (601mm^2).
On 28nm, GM200 reached the 250W limit with 8 billion transistors ticking at 948/1114MHz base/boost while equipped with power-hungry GDDR5.
On 16nm, GP100 reached the 300W limit with 15.3 billion transistors ticking at 1328/1480MHz base/boost while equipped with power-efficient HBM2.

In other words, TDP was the limit, not achievable clock. The only other reason to clock lower than TDP allows is that the GPU makes calculation errors at higher clocks (unacceptable for business-grade HW), and that threshold may be much higher for consumer cards. I guess many people here OC their cards by the rule: "I see an artifact, so -20MHz and keep it there."
But many of those high OCs end up failing in compute tests like GPUPI.

Taking power efficiency into account: 1.9125x the transistors, plus a 40% higher base clock / 33% higher boost clock, at an increased total GPU TDP (around 210 vs 270W).
Even under the conditions described, 16nm delivers roughly a 2.03x higher (transistors * clock) to power consumption ratio.

So take a GPU like the GTX 980 has and put it on 16nm: it will eat around half the original power. How high an OC can we expect before GPU power consumption matches the last generation? 40%? 60%?

The same goes for those announced R9 480(X) chips: unless AMD breaks their design in some way, they'll clock them much higher than the guesstimated ~1GHz floating around the net.
And if they were not able to clock them higher, power consumption would be ridiculously low.
Basically, an R9 480X with 2560 SPs, 160 TMUs and 64 ROPs at the proclaimed 800MHz base would fall into the sub-100W notebook category (more like under 80W), because the Nano is a 175W TDP card which ticks at around 950~1000MHz depending on airflow and has more transistors than the R9 480X.
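The back-of-the-envelope ratio above can be checked in a few lines, using the GM200/GP100 figures quoted in the post; note the 210W/270W pair is the post's own estimate, not an official board TDP:

```python
# Rough (transistors * clock) per watt scaling check for
# GM200 (28nm) vs GP100 (16nm), figures as quoted in the post.
gm200_transistors = 8.0e9
gp100_transistors = 15.3e9
gm200_base_mhz, gp100_base_mhz = 948, 1328

transistor_ratio = gp100_transistors / gm200_transistors  # ~1.9125x
clock_ratio = gp100_base_mhz / gm200_base_mhz             # ~1.40x

# Estimated GPU-only power draw from the post (not official TDPs)
tdp_ratio = 270 / 210

efficiency_gain = transistor_ratio * clock_ratio / tdp_ratio

print(f"transistor ratio:  {transistor_ratio:.4f}")
print(f"base clock ratio:  {clock_ratio:.2f}")
print(f"(trans*clock)/W:   {efficiency_gain:.2f}x")
```

With base clocks and these power estimates the result lands slightly above 2x, in the same ballpark as the ~2.03x claimed; using boost clocks or the official 250W/300W TDPs shifts it a little either way.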
     
    Last edited: Apr 21, 2016
  10. Kaarme

    Kaarme Ancient Guru

    Messages:
    3,518
    Likes Received:
    2,361
    GPU:
    Nvidia 4070 FE
    Yeah. There's no way that ~800MHz is for real. If it is, it's a failure of a chip design.
     

  11. slyphnier

    slyphnier Guest

    Messages:
    813
    Likes Received:
    71
    GPU:
    GTX1070
cmiiw, but from what I know, the logic when designing a chip is not how to make it run at a higher speed (more GHz doesn't always mean better),
but how to make it run more efficiently.

If I can use CPUs as an example: a Pentium 4 @ 4GHz vs. a Skylake @ 2GHz... Skylake should win because it has more cores and more instructions per cycle.
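That CPU comparison is the usual throughput = cores x clock x IPC argument; a toy sketch, with purely illustrative IPC and core-count numbers (not benchmark data):

```python
# Toy throughput model: cores * clock (GHz) * instructions per cycle.
# The IPC values below are illustrative guesses, not measurements.
def throughput(cores, clock_ghz, ipc):
    """Rough billions of instructions per second."""
    return cores * clock_ghz * ipc

p4 = throughput(cores=1, clock_ghz=4.0, ipc=1.0)       # Pentium 4-ish
skylake = throughput(cores=4, clock_ghz=2.0, ipc=3.0)  # Skylake-ish

print(p4, skylake)  # the lower-clocked chip wins on total throughput
```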

Anyway, what matters is real performance...
They can write specs double or even triple the current lineup's, but if performance only increases by ~10%, it means nothing.
     
  12. Fox2232

    Fox2232 Guest

    Messages:
    11,808
    Likes Received:
    3,371
    GPU:
    6900XT+AW@240Hz
And that's the thing. While GCN is more efficient than Maxwell if we take performance / (transistor * clock),

Maxwell clocks higher, ultimately allowing a smaller (cheaper to make) chip to perform competitively with or even better than its AMD competitor.

As people are used to 150~250W cards, nVidia will target this range. And if the power efficiency from 16nm allows them to clock above 1.5GHz... then they can deliver adequate performance through a higher clock on a smaller (cheaper to make) chip.

But if AMD can't clock GCN that high even on 14nm, then you get power-efficient chips which will require more transistors to compete with Pascal's higher clock. (GCN will be more expensive to make.)

And that means either much higher profits for nVidia, as AMD can't undercut them, or very low profits for AMD. Either way, as you get more than double the power efficiency and the PCIe standard tops out at 300W, we should expect cards to be clocked as high as TDP and chip design allow.
     
  13. Humanoid_1

    Humanoid_1 Guest

    Messages:
    959
    Likes Received:
    66
    GPU:
    MSI RTX 2080 X Trio
    I was also expecting much higher clock speeds this time out.

    Perhaps AMD want to make good on that Overclockers Dream thing they mentioned a while back lol
     
  14. Denial

    Denial Ancient Guru

    Messages:
    14,207
    Likes Received:
    4,121
    GPU:
    EVGA RTX 3080
Both companies should just release a card at 1MHz. "THE BEST OVERCLOCKER EVER!!!11!1"

    Fox2232, you think it's an architecture design choice by AMD, or a limit of the 14nm node?
     
  15. GeniusPr0

    GeniusPr0 Maha Guru

    Messages:
    1,440
    Likes Received:
    109
    GPU:
    Surpim LiquidX 4090
That core clock has to be so it can fit in SFFs like the X51, the AMD Quantum thingy, etc...

Kinda annoyed that Pascal GP104 will surpass Polaris 10, but whatever.
     
    Last edited: Apr 21, 2016

  16. Noisiv

    Noisiv Ancient Guru

    Messages:
    8,230
    Likes Received:
    1,494
    GPU:
    2070 Super
Not certain, but yeah... it probably will, hopefully not by much.

AMD can still win if they have a truly competitive product.
Small dies and perf/mm² have always been their bread and butter.

One thing they clearly did well this gen is Polaris 11 and the focus on mobile graphics. JChrist... finally!
The GCN/7970 (paper) launch, with no mobile in sight, was a disaster.
But Nvidia now late with mobile? What's going on? Waiting on... Intel, or what?
     
  17. Noisiv

    Noisiv Ancient Guru

    Messages:
    8,230
    Likes Received:
    1,494
    GPU:
    2070 Super
And that's why performance / (transistor*clock) is... a nonsense metric :nerd:

Look, performance is a tangible metric, and so is transistor count.
Clock? No one gives a **** about clock, because its significance is already accounted for in performance.

What the performance / (transistor*clock) metric does is punish high-clocking architectures/designs. Makes no sense.
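The objection can be made concrete with two hypothetical chips that have the same transistor budget and identical measured performance, differing only in clock; the metric ranks them differently anyway:

```python
# Two hypothetical chips: equal benchmark score, equal transistor
# count, different clocks. The metric penalizes the faster clock
# even though nothing tangible (cost, performance) differs.
def metric(perf, transistors, clock_mhz):
    return perf / (transistors * clock_mhz)

perf = 100.0          # arbitrary benchmark score, same for both
transistors = 8e9     # same die budget for both

a = metric(perf, transistors, clock_mhz=1000)  # low-clocked design
b = metric(perf, transistors, clock_mhz=1500)  # high-clocked design

print(a > b)  # True: same cost, same performance, worse "score"
```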
     
  18. Valken

    Valken Ancient Guru

    Messages:
    2,924
    Likes Received:
    901
    GPU:
    Forsa 1060 3GB Temp GPU
I am so ready to buy if I can find a good 4K TV to go with it! Or should I wait for Vega 10?! Sigh...
     
  19. waltc3

    waltc3 Maha Guru

    Messages:
    1,445
    Likes Received:
    562
    GPU:
    AMD 50th Ann 5700XT
    It's interesting to see that even today people confuse MHz with performance...;)
     
  20. Koniakki

    Koniakki Guest

    Messages:
    2,843
    Likes Received:
    452
    GPU:
    ZOTAC GTX 1080Ti FE
    The infamous "more is better" assumption. ;)
     
