Rumor: Radeon RX 6000 Series as fast as a standard RTX 2080 Ti?

Discussion in 'Frontpage news' started by Hilbert Hagedoorn, Sep 14, 2020.

  1. itpro

    itpro Master Guru

    Messages:
    551
    Likes Received:
    285
    GPU:
    Radeon Technologies
    I really want a phone with Ryzen C7 RDNA2 capabilities. Even if it would cost an arm and a leg, it should be a huge leap ahead of current Snapdragons. They have become far too common nowadays. Enough.
     
    Fox2232 likes this.
  2. Fox2232

    Fox2232 Ancient Guru

    Messages:
    10,788
    Likes Received:
    2,721
    GPU:
    5700XT+AW@240Hz
    Let's hope that "costing an arm" will not gain new meaning with nVidia getting it :D
     
    itpro and Maddness like this.
  3. Aura89

    Aura89 Ancient Guru

    Messages:
    8,021
    Likes Received:
    1,182
    GPU:
    -
    It'd be sweet if your rumor were true, but I don't hold out any hope for it.

    RDNA1 on 7nm wasn't more efficient than Nvidia on 12nm. RDNA2 is also on 7nm, though perhaps an improved 7nm, and probably has efficiency improvements in the design too; one can hope, at least. But to expect AMD to become more efficient than Nvidia on 8nm (10nm?) in one generation, when they couldn't best them at 12nm, I don't buy it.

    Again, it would be sweet if your rumor were true, but, sorry, logic states otherwise.
     
  4. kings

    kings Member Guru

    Messages:
    157
    Likes Received:
    131
    GPU:
    GTX 980Ti / RX 580
    You guys are very optimistic; 16GB of GDDR6 alone costs close to $200, depending on the speed.
     
    Ricepudding and itpro like this.

  5. CPC_RedDawn

    CPC_RedDawn Ancient Guru

    Messages:
    8,197
    Likes Received:
    593
    GPU:
    Zotac GTX1080Ti AMP
    [Attached image: Capture.PNG]

    Buildzoid pointed this out in the Fortnite in-game reveal. The layout on the back of the card points more towards an HBM design: GDDR6 requires the memory chips to sit a lot closer to the GPU substrate, so why have screw holes right next to the GPU core where the memory modules would normally be? It resembles the design of the Radeon VII, which used HBM2 memory.

    SOURCE:

     
    Maddness likes this.
  6. itpro

    itpro Master Guru

    Messages:
    551
    Likes Received:
    285
    GPU:
    Radeon Technologies
    History states otherwise. When ATI wanted to fight Nvidia, they did win in the end. Now it's time for AMD to prove they didn't ruin ATI forever. It's about time to prove whether they can still provide enthusiast hardware or not. All other talk is rubbish and extra leather coating for Nvidia's CEO.
     
  7. Fox2232

    Fox2232 Ancient Guru

    Messages:
    10,788
    Likes Received:
    2,721
    GPU:
    5700XT+AW@240Hz
    I heard of some mysterious AMD card with a 7nm GPU that had 28% more transistors than Navi10 and ate 295W while performing the same in games as a 225W Navi10-based card.
    And that mysterious thing even had the advantage of using HBM to conserve energy.

    Do you by chance have some info about it? It may be interesting to actually factor in VRM efficiency and subtract the other components on the PCB/interposer to get as close as possible to the power draw of that mysterious GPU, and then do the same for Navi10.
    I wonder if you can find a GPU power efficiency improvement close to 50%, as I did.

    Then comes one little realization (which applies to nV too):
    When the GPU actually eats only a fraction of the 225W total board power, increasing the board limit to 300W (a 33% increase) means that the GPU's power budget could go up by as much as 60%, depending on the power draw of the surrounding components.
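    As a quick sketch of that arithmetic (the fixed non-GPU overhead for VRM losses, memory and fans is an assumed number here, not a measured one):

```python
# Sketch of the board-power argument above. The fixed non-GPU overhead
# (VRM losses, memory, fan, etc.) is an assumption, not a measurement.

def gpu_budget_increase(old_board_w, new_board_w, fixed_overhead_w):
    """Percentage increase of the GPU-only power budget when the total
    board limit rises and the non-GPU overhead stays constant."""
    old_gpu = old_board_w - fixed_overhead_w
    new_gpu = new_board_w - fixed_overhead_w
    return (new_gpu / old_gpu - 1.0) * 100.0

for overhead in (50, 75, 100):  # hypothetical non-GPU watts
    print(f"{overhead:3d} W overhead: GPU budget +{gpu_budget_increase(225, 300, overhead):.0f}%")
#  50 W overhead: GPU budget +43%
#  75 W overhead: GPU budget +50%
# 100 W overhead: GPU budget +60%
```

    The bigger the fixed overhead you assume, the more the GPU-only budget grows for the same +75W on the board limit.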
     
    itpro and PrMinisterGR like this.
  8. Aura89

    Aura89 Ancient Guru

    Messages:
    8,021
    Likes Received:
    1,182
    GPU:
    -
    What kind of logic is this even? I see none....


    I was more commenting on his statement that the 3000 series isn't efficient, simply because he says so. That disregards the fact that the 3070 should, in theory, offer similar performance to the 2080 Ti, give or take, while drawing 50 watts less. And there's the implication that because the 3000 series isn't efficient, RDNA 2 will therefore be more efficient. That's really what I'm refuting: no, sorry, I don't believe it. Again, it would be great, but I don't believe it. There's no logic that says this will be the case.
     
    Last edited: Sep 15, 2020
  9. holeindalip

    holeindalip Member

    Messages:
    23
    Likes Received:
    3
    GPU:
    Sapphire Vega64
    I just want to point out that AMD has stated they will not do ray tracing until the whole product stack is capable of it with good performance. Hence, the leaked card that is said to be around 2080 Ti performance is probably the lower end or mid tier of the product stack, and that also puts it around 3070 performance. 12 TFLOPS out of an APU in the Xbox Series X is nothing to sneeze at either, given the power budget they are working with. All of this adds up to a very powerful, very power efficient card, and I absolutely can't wait for AMD to finally push NVIDIA back after being bullied into the corner for so long...
     
  10. PrMinisterGR

    PrMinisterGR Ancient Guru

    Messages:
    7,381
    Likes Received:
    403
    GPU:
    Sapphire 7970 Quadrobake
    I was thinking that we actually do know what RDNA 2.0 can do, mainly from the PS5 and Xbox information we have.

    AMD can probably stuff ~80 CUs in a package of ~600mm², if not smaller. Keep in mind from the Ratchet & Clank demo for the PS5 that a 36 CU RDNA 2.0 GPU can do a locked 1440p60 with ray tracing.

    If you project this to a linearly scaled 80 CU GPU with slightly lower clocks, or the same clocks, the card would have 20-24 TFLOPs of FP32 without any "conditions" like Ampere has. If it's paired with something like 16GB of HBM2 (don't forget that AMD gets that for a lower price, as they own patents on it), then it has the possibility of being faster than the 3090.
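    For reference, the 20-24 TFLOPs figure falls straight out of the usual RDNA arithmetic (64 FP32 ALUs per CU, 2 FLOPs per clock via FMA); a quick sketch, with the clock speeds being guesses rather than confirmed numbers:

```python
# Rough FP32 throughput for the hypothetical 80 CU part discussed above,
# assuming the usual RDNA layout of 64 FP32 ALUs per CU doing 2 FLOPs per
# clock (FMA). Clock speeds are guesses, not confirmed numbers.

def fp32_tflops(cus, clock_ghz, alus_per_cu=64, flops_per_clock=2):
    return cus * alus_per_cu * flops_per_clock * clock_ghz / 1000.0

print(f"36 CU @ 2.23 GHz (PS5-like): {fp32_tflops(36, 2.23):.1f} TFLOPs")  # ~10.3
print(f"80 CU @ 2.00 GHz:            {fp32_tflops(80, 2.00):.1f} TFLOPs")  # ~20.5
print(f"80 CU @ 2.30 GHz:            {fp32_tflops(80, 2.30):.1f} TFLOPs")  # ~23.6
```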

    HBM would also help the total package to stay ~320-350W, which is the ballpark that Nvidia plays with.

    Nvidia got too greedy and gimped themselves with Samsung this time around I think.

    None of this is in any dreamland; we have some performance figures from actual, working hardware. I find it interesting that literally one day after AMD showed their PCB, the first Nvidia "leak" was about the "Titan" and how it would have a faster memory subsystem with 23Gbps GDDR6X. To me this sounds like NVIDIA is worried that AMD might actually surpass them on raw specs. The move to artificially "double" the CUDA cores, while they will never be fully utilised that way, also speaks volumes about being afraid of some crazy raw-power comparisons (i.e. a 22 TFLOPs Big Navi vs an 18 TFLOPs 3090), especially with the AMD cards having closed the gaming performance-per-TFLOP gap.
     

  11. Fox2232

    Fox2232 Ancient Guru

    Messages:
    10,788
    Likes Received:
    2,721
    GPU:
    5700XT+AW@240Hz
    nVidia moved to Samsung 8nm. It has the transistor density, and it certainly is more power efficient than that "12nm". That's why we see 28B transistors within one chip at the given power draw, with more transistors per SM flop due to the better design and, as it seems, GPUs pushed to their clock limits.
    We have no clue how much better TSMC 7nm is than Samsung 8nm, but it will not be a big difference. If anything is really important, it is AMD's "breakthrough", which can mean anything (or nothing of importance), or it can mean smaller CUs that do more work with fewer flops than RDNA1 did, and therefore extra energy efficiency. (RDNA1 improved over GCN/Vega in a similar way.)
    And AMD's patents, which I considered not viable for the small RDNA1 GPUs (and no big one came), may finally come to fruition with RDNA2.
     
  12. Astyanax

    Astyanax Ancient Guru

    Messages:
    7,699
    Likes Received:
    2,576
    GPU:
    GTX 1080ti
    We have no clue IF TSMC 7nm is better than Samsung 8nm.


    Tech node (density in MTr/mm²):

    Intel 7nm (2??)
    TSMC 5nm EUV 171.3
    TSMC 7nm+ EUV 115.8
    Intel 10nm 100.8
    TSMC 7nm Mobile 96.5
    Samsung 7nm EUV 95.3
    TSMC 7nm HPC 66.7
    Samsung 8nm 61.2
    TSMC 10nm 60.3
    Samsung 10nm 51.8
    Intel 14nm 43.5
    GF 12nm 36.7
    TSMC 12nm 33.8
    Samsung/GF 14nm 32.5
    TSMC 16nm 28.2

    By this, I'd say "barely".
     
  13. PrMinisterGR

    PrMinisterGR Ancient Guru

    Messages:
    7,381
    Likes Received:
    403
    GPU:
    Sapphire 7970 Quadrobake
    nikobellic, Evildead666 and itpro like this.
  14. Astyanax

    Astyanax Ancient Guru

    Messages:
    7,699
    Likes Received:
    2,576
    GPU:
    GTX 1080ti
  15. Denial

    Denial Ancient Guru

    Messages:
    13,002
    Likes Received:
    2,404
    GPU:
    EVGA 1080Ti
    The theoretical node performance is basically irrelevant. Different libraries will perform differently on varying nodes, not to mention that the design of the chip heavily affects the transistor density. The 5700XT, for example, is only 41.04 MT/mm² vs the 3080's 44.56 MT/mm².
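    Those density numbers check out against the commonly cited transistor counts and die areas; a quick back-of-the-envelope sketch (the inputs are approximate public figures, so treat the output as illustrative):

```python
# Back-of-the-envelope check of the densities quoted above, using commonly
# cited (approximate) transistor counts and die areas.

def density_mt_per_mm2(transistors_millions, die_area_mm2):
    return transistors_millions / die_area_mm2

print(f"Navi 10 / 5700 XT (TSMC 7nm): {density_mt_per_mm2(10_300, 251.0):.2f} MT/mm^2")  # ~41.0
print(f"GA102 / 3080 (Samsung 8nm):   {density_mt_per_mm2(28_000, 628.4):.2f} MT/mm^2")  # ~44.6
```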

    TSMC's 7nm probably allows for a "better" chip, given an optimal design, but it didn't help 5700XT or Vega VII compete against Nvidia on what was essentially 16nm TSMC.

    That being said, I fully believe AMD will have a 3080 competitor with RDNA2; the question is in the specifics. Will its RT performance be competitive? Will it have feature parity? Can AMD manufacture it for a price that's worth it to them?

    And then the most important question: does it even matter? It seems to me like AMD would not only have to match a 3080 but handily beat it at a given price point to really get a "win". We've seen AMD GPUs be competitive with, or in some cases slightly beat, Nvidia's at specific price points before, and it just doesn't seem to matter.
     
    Last edited: Sep 15, 2020

  16. AuerX

    AuerX Active Member

    Messages:
    64
    Likes Received:
    19
    GPU:
    PNY RTX2070 OC
    Yeah, being "Equal" is not good enough for the Radeon cards.
    A really good DLSS equivalent and some killer drivers would help a lot.
     
  17. itpro

    itpro Master Guru

    Messages:
    551
    Likes Received:
    285
    GPU:
    Radeon Technologies
    I will die if AMD manages to beat the 3090; all JHH hardcore fans are gonna burn their leather jackets under a rain of tears.
     
    moo100times and PrMinisterGR like this.
  18. theoneofgod

    theoneofgod Ancient Guru

    Messages:
    4,345
    Likes Received:
    147
    GPU:
    RX 580 8GB
    We would all die. That's why it won't happen. There would be no one left to buy their upcoming CPUs and GPUs.
     
    PrMinisterGR likes this.
  19. asturur

    asturur Master Guru

    Messages:
    824
    Likes Received:
    213
    GPU:
    Geforce Gtx 1080TI
    Do we agree at least that a 3070, if it performs like a 2080 Ti, is still high-end gaming, and that if AMD makes a card that is better than the 3070, AMD is competitive in high-end gaming?

    What is high-end gaming?
    1080p at 360fps? 4K at 100fps?
    Explain it to me.
     
  20. PrMinisterGR

    PrMinisterGR Ancient Guru

    Messages:
    7,381
    Likes Received:
    403
    GPU:
    Sapphire 7970 Quadrobake
    AKHUALLY it is.

    This is correct, but as you say below, the TSMC process does allow for a better chip. For the same density you get better thermals and less power consumption, so it's not irrelevant at all.

    This only reinforces the idea of the first Navi cards as "pipe cleaners". The 7nm Vega was necessary because it was promised to investors, and it most likely gave them experience with large chips on the process, while Navi 1 gave them experience with the specific libraries for mass production on 7nm. The real testament to what they can do is the Series X chip, which is impressive: very small, high-frequency CPU parts, and a fairly large high-frequency GPU running on a complex bus alongside very fast I/O, all in under 360mm².

    The only feature they might not have is a DLSS equivalent, but since they still need to do denoise passes for RT to work (which they're obviously already doing in hardware), it sounds to me like they have all the parts. It looks like DirectML is a thing, and it has already been demoed on AMD and Nvidia hardware, so that answers that question. It will also be the way that Xbox games do "DLSS", and it is already part of Windows. Look here:
    https://github.com/microsoft/DirectML

    They even have a demo and a nice docx README.
    This is out since 2019.
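    DirectML itself is a low-level DirectX 12 API, but the quickest way to poke at it from Python is through ONNX Runtime's DirectML execution provider (the onnxruntime-directml package); a minimal sketch, where "upscaler.onnx" and its input shape are hypothetical placeholders rather than anything from the linked repo:

```python
# Minimal sketch: running an ONNX model on the GPU through DirectML via
# ONNX Runtime (pip install onnxruntime-directml). "upscaler.onnx" and its
# 1x3x1080x1920 input are hypothetical placeholders, not from the linked repo.
import numpy as np
import onnxruntime as ort

session = ort.InferenceSession(
    "upscaler.onnx",                     # hypothetical ML upscaling model
    providers=["DmlExecutionProvider"],  # DirectML-backed execution provider
)

input_name = session.get_inputs()[0].name
frame = np.random.rand(1, 3, 1080, 1920).astype(np.float32)  # dummy 1080p RGB frame
(upscaled,) = session.run(None, {input_name: frame})         # assumes a single output
print(upscaled.shape)
```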

    This is what people were saying about Intel. AMD needs to have a persistently competitive product, like Zen is, and it will turn around. They need to start from somewhere.
     
    moo100times likes this.
