Tensor Core Equivalent Likely to Get Embedded in AMD RDNA3

Discussion in 'Frontpage news' started by Hilbert Hagedoorn, Jun 30, 2022.

  1. Horus-Anhur

    Horus-Anhur Ancient Guru

    Messages:
    8,712
    Likes Received:
    10,793
    GPU:
    RX 6800 XT
    A tensor core is just a unit that does matrix-to-matrix multiplication and addition.
    The issue is doing it so many times, for each element of each matrix. This is very computationally and memory intensive.

    Some time ago I asked a dev on another forum to use Nsight to measure tensor core utilization while using DLSS 2.x.
    He booted Cyberpunk and reported that it was using around 50%, on a 3090 at 4K. So for the frame rate and resolution he was getting, it was using around 140 TOPs.
    I expect that at lower resolutions and lower frame rates, the amount of TOPs needed for DLSS would be lower.
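    For context, the matrix multiply-accumulate the post describes can be sketched in NumPy. This is purely illustrative: real tensor cores execute the fused operation D = A·B + C in hardware on small fixed-size tiles (e.g. FP16 inputs with FP32 accumulation on Turing/Ampere); the tile size and function name below are assumptions for the sketch.

    ```python
    import numpy as np

    def tensor_core_mma(a, b, c):
        """One fused multiply-accumulate step: D = A @ B + C.

        FP16 inputs are widened and accumulated in FP32, mirroring the
        mixed-precision mode used for DLSS-style inference workloads.
        """
        return a.astype(np.float32) @ b.astype(np.float32) + c

    rng = np.random.default_rng(0)
    A = rng.standard_normal((4, 4)).astype(np.float16)  # FP16 input tile
    B = rng.standard_normal((4, 4)).astype(np.float16)  # FP16 input tile
    C = np.zeros((4, 4), dtype=np.float32)              # FP32 accumulator
    D = tensor_core_mma(A, B, C)
    print(D.shape, D.dtype)  # (4, 4) float32
    ```

    The cost the post refers to comes from repeating this tile-level operation across every element of large matrices, which is why utilization and memory bandwidth both matter.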
     
  2. AuerX

    AuerX Ancient Guru

    Messages:
    2,616
    Likes Received:
    2,469
    GPU:
    Militech Apogee
    So AMD GPUs will be like Nvidia GPUs then. Cool.
     
  3. Airbud

    Airbud Ancient Guru

    Messages:
    2,595
    Likes Received:
    4,140
    GPU:
    XFX RX 5600XT
    AMD has always been good at gaming.

    I would dare to say the best budget card in history was the HD 5770... that card sold like hotcakes with extra syrup!
     
    Undying and pegasus1 like this.
  4. NiColaoS

    NiColaoS Master Guru

    Messages:
    720
    Likes Received:
    76
    GPU:
    3060 12GB
    Have you ever wondered what the equivalent phrase is for us Greeks? No? I'll tell you anyway. "It's all Chinese to me" hehe!
     
    Venix and pegasus1 like this.

  5. pegasus1

    pegasus1 Ancient Guru

    Messages:
    5,182
    Likes Received:
    3,574
    GPU:
    TUF 4090
    Ha ha ha, I'll remember that, I'll be in Cyprus tomorrow.
     
    NiColaoS likes this.
  6. Venix

    Venix Ancient Guru

    Messages:
    3,471
    Likes Received:
    1,971
    GPU:
    Rtx 4070 super
    So Vega 56 and 64? You are aware those came out a bit before the RTX 2xxx series, right?
     
  7. fantaskarsef

    fantaskarsef Ancient Guru

    Messages:
    15,742
    Likes Received:
    9,635
    GPU:
    4090@H2O
    I had two of them... until I learned that CrossFire doesn't combine memory pools :D
     
    Airbud and pegasus1 like this.
  8. Rich_Guy

    Rich_Guy Ancient Guru

    Messages:
    13,146
    Likes Received:
    1,096
    GPU:
    MSI 2070S X-Trio
    So they've killed off FSR 2.0 already.
     
  9. Astyanax

    Astyanax Ancient Guru

    Messages:
    17,035
    Likes Received:
    7,378
    GPU:
    GTX 1080ti
    Hmm, the 5770 was in an interesting space: a feature level 11 card at a point in time when Nvidia had none, so it had basically no competition feature-wise at the time.

    When competition did arrive, it sat between the GTS 450 and GTX 460. The former was 10 dollars cheaper but offered only 84% of the 5770's performance (where not using tessellation). The latter was a fair bit faster but came at additional cost, since it had twice as many memory modules.
     
    Last edited: Jul 2, 2022
    Airbud likes this.
  10. pegasus1

    pegasus1 Ancient Guru

    Messages:
    5,182
    Likes Received:
    3,574
    GPU:
    TUF 4090
    I skipped the 4** series, went from an 8800 GTX to a 280 GTX to the 580 GTX.
     

  11. Astyanax

    Astyanax Ancient Guru

    Messages:
    17,035
    Likes Received:
    7,378
    GPU:
    GTX 1080ti
    Technically you didn't skip the 4** series, since the 500 series was just a tweak to it.
     
    Airbud, Undying and carnivore like this.
  12. dampflokfreund

    dampflokfreund Master Guru

    Messages:
    203
    Likes Received:
    31
    GPU:
    8600/8700M Series
    I am really not a fan of this.

    Ultimately, this approach means AMD cards would offer much less value to the customer compared to Nvidia.
    With Nvidia, you basically get the same architecture that supercomputers use, in your home, at a similar price point, and you can use it for all kinds of different purposes: training ML models, DLSS, really handy AI features in content-creation software, accelerating Blender, noise cancellation, AI effects in video conferencing, and many more.

    You're not getting any of that with an AMD GPU, because it lacks ML acceleration; it's not a supercomputer architecture but one dumbed down for gaming. So with AMD you basically get a GPU that is good at gaming but nothing else, while with Nvidia you get a GPU that you can do anything with, at high performance.

    Until AMD steps up its game by adding proper ML acceleration and becomes competitive in software as well, I will always choose Nvidia, because I get much more value for my money there.
     
  13. pegasus1

    pegasus1 Ancient Guru

    Messages:
    5,182
    Likes Received:
    3,574
    GPU:
    TUF 4090
    You know what I mean: the 580 was a serious leap from the 480. OK, maybe not the same leap that the 8800 GTX was over the 7800 GTX, but a leap nonetheless.
     
  14. Undying

    Undying Ancient Guru

    Messages:
    25,471
    Likes Received:
    12,876
    GPU:
    XFX RX6800XT 16GB
    It was on the same arch and it had the same 1.5GB VRAM. Nvidia didn't actually have a leap in performance until Kepler, and AMD had the legendary Tahiti. :)
     
  15. Astyanax

    Astyanax Ancient Guru

    Messages:
    17,035
    Likes Received:
    7,378
    GPU:
    GTX 1080ti
    We can argue the differences between GF110 and GF100 if we want; I think it was the texture samplers ported from GF104, and full-rate FP16?
     

  16. pegasus1

    pegasus1 Ancient Guru

    Messages:
    5,182
    Likes Received:
    3,574
    GPU:
    TUF 4090
    Many of you guys are very technically knowledgeable; I'm not. I just judge things by heat, performance, overclockability, etc. I don't get into the weeds on the technical side, to be honest. I don't even know what names any particular CPU or GPU core has, but I've built dozens of custom water-cooled rigs, fabricated panels, and modded cases. I was an aircraft engineer before I decided joining the Army was way sexier.
     
    Airbud likes this.
  17. aufkrawall2

    aufkrawall2 Ancient Guru

    Messages:
    4,484
    Likes Received:
    1,870
    GPU:
    7800 XT Hellhound
    The question is whether this would work out with the computational complexity of e.g. DLSS. AFAIR it puts quite some load on Turing's/Ampere's tensor cores.
     
    AuerX likes this.
  18. Exodite

    Exodite Guest

    Messages:
    2,087
    Likes Received:
    276
    GPU:
    Sapphire Vega 56
    AMD did use that approach, up until RDNA - arguably RDNA 2.

    GCN had excellent compute performance and did better than Nvidia's (gaming) offerings in many specialized tasks due to being essentially the same cards as their pro lines.

    Unfortunately that also meant the cards weren't as ideally suited for actual gaming as they could be, considering heat/power and die size.

    So we got RDNA and CDNA.

    I, too, like the idea of 'fully enabled' gaming graphics cards - in theory.

    In practice, I'll gladly sacrifice features I don't have much use for (I primarily watch YouTube and argue with strangers on the web with my card) to get more performance in my primary usage scenario.

    And here we are.

    Once we've established what new features are actually, long-term, usable for gaming those will trickle down regardless. A good bet is to keep a look out for what gets introduced on the consoles.
     
    Kaleid likes this.
  19. Stormyandcold

    Stormyandcold Ancient Guru

    Messages:
    5,872
    Likes Received:
    446
    GPU:
    RTX3080ti Founders
    Once again Nvidia paves the way and AMD follows. It's all good for us consumers as in the long run we will have parity.
     
  20. pegasus1

    pegasus1 Ancient Guru

    Messages:
    5,182
    Likes Received:
    3,574
    GPU:
    TUF 4090
    Let's wait until the new cards are out before we decide if NV paved or led the way.
     