AMD Radeon 5800 XT (big NAVI) to get 80 Compute units?

Discussion in 'Frontpage news' started by Hilbert Hagedoorn, Jan 21, 2020.

  1. nizzen

    nizzen Maha Guru

    Messages:
    1,149
    Likes Received:
    271
    GPU:
    3x2080ti/5700x/1060
    It can easily be 20% slower than 2080ti too :p

    "Fury X the Titan X killer"
    -was ~20% slower than most 980ti's

    Ps: I had Fury X as one of the few in Europe :p
     
    warlord likes this.
  2. Denial

    Denial Ancient Guru

    Messages:
    13,003
    Likes Received:
    2,404
    GPU:
    EVGA 1080Ti
I mean, they gained ~35% efficiency by putting Vega on 7nm with the VII. RDNA on top of the VII added another 10-15% or so. Honestly, if anything it sounds like the Vega cores benefited from the RDNA work; I don't think the Vega improvements in the APUs are going to swing back around.
     
    Evildead666 likes this.
  3. Evildead666

    Evildead666 Maha Guru

    Messages:
    1,296
    Likes Received:
    272
    GPU:
    Vega64/EKWB/Noctua
Yeah, that's what was stated at the APU launch: the Vega cores in there benefited greatly from what was engineered first for Navi/RDNA.
You're quite right that it won't feed back into RDNA2. (Infinite feedback loop, lol :))

I mean, AMD/TSMC might have found a bit more, but nowhere near 50%, more like 5-10%, and even then it would be impressive to squeeze even that much out of an already efficient 7nm RDNA core.

There's still a good chance Big Navi/RDNA2 is on 7nm+, which might give it that 10-15% efficiency edge over 7nm.
Also, the Xbox and PS5 are both going to want to prioritize heat and power consumption, and any gains there come straight to us ;)
     
  4. Middleman

    Middleman New Member

    Messages:
    9
    Likes Received:
    5
    GPU:
    1070TI
    "They should reduce the price of 5700/xt cards also. They are overpriced"


    "I see everywhere 5700XT priced as 2060 super, it does not look overpriced with the current trend."

They are both overpriced by $100.
     
    wavetrex, Maddness and Loophole35 like this.

  5. Loophole35

    Loophole35 Ancient Guru

    Messages:
    9,717
    Likes Received:
    1,062
    GPU:
    EVGA 1080ti SC
QFT... I'd say all GPUs are overpriced right now.
     
    Maddness likes this.
  6. mohiuddin

    mohiuddin Master Guru

    Messages:
    836
    Likes Received:
    76
    GPU:
    GTX670 4gb ll RX480 8gb
The RTX series is way overpriced. That doesn't necessarily justify the RX 5700/XT pricing. It may be competitively priced, but that's not the same as fairly priced. You have to think differently: in 2019, what range do these cards fit in? Mid range? Upper mid range? Then justify the price.
     
  7. Yogi

    Yogi Master Guru

    Messages:
    253
    Likes Received:
    58
    GPU:
    Sapphire R9 290X Vapour X
Twice the size of the 5700 XT chip? Wouldn't that still make it ~33% smaller than the 2080 Ti and about the same size as a 2080?
     
  8. Gomez Addams

    Gomez Addams Member Guru

    Messages:
    135
    Likes Received:
    84
    GPU:
    Titan RTX, 24GB
It's just like how things work with server CPUs. Nvidia's high-end computational cards, Tesla V100s with 32GB and no display outputs, cost $9K each. I have Titan RTXs, which have outputs, and those are $2,500 each. Those are high-margin devices, and Nvidia has a large lead over everyone else in them. They also have a head-and-shoulders lead over everyone else in software infrastructure and support, so they are really, really tough to compete with in this domain.

    As an aside, if the Google-Oracle lawsuit is settled with Google winning that will open the door for someone to replicate the CUDA API and that could be HUGE. It could be a massive game-changer that completely disrupts the status quo in computational GPUs. I really hope that happens and I am certain that Nvidia's stockholders do not.
     
    jura11 likes this.
  9. JamesSneed

    JamesSneed Maha Guru

    Messages:
    1,002
    Likes Received:
    403
    GPU:
    GTX 1070
Yes, thereabouts. It would be about 10% smaller than the 2080 die if we assume the big Navi chip is 2x the 5700 XT chip. I expect the actual die size for Big Navi's 80 CU chip to come in closer to 1.85x than 2x: not everything in the chip must be doubled to scale out the CUs, and the 7nm+ process is about 17% denser than TSMC's first-generation 7nm, which will probably work out to 10-12% in practice. If I had to bet, I'd guess Big Navi lands around 425 mm2, which is about 15% smaller than 2x the 5700 XT chip.
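The estimate above can be sanity-checked with a quick sketch. The die sizes are assumptions from publicly quoted specs (Navi 10 ~251 mm2, TU104/RTX 2080 ~545 mm2); the scaling and density figures are the ones from the post:

```python
# Rough die-size estimate for an 80 CU "Big Navi", following the reasoning above.
# Assumed inputs: Navi 10 (5700 XT) ~251 mm^2, TU104 (RTX 2080) ~545 mm^2.
navi10_mm2 = 251.0
tu104_mm2 = 545.0

naive_2x = 2 * navi10_mm2       # simple doubling of the 5700 XT die
scale = 1.85                    # not everything doubles (I/O, display, media blocks)
density_gain = 0.11             # ~10-12% area saved on 7nm+ (from ~17% higher density)

big_navi_mm2 = navi10_mm2 * scale * (1 - density_gain)

print(f"naive 2x:  {naive_2x:.0f} mm^2")
print(f"estimate:  {big_navi_mm2:.0f} mm^2")           # lands in the low 400s
print(f"vs TU104:  {big_navi_mm2 / tu104_mm2:.0%}")    # fraction of the 2080 die
```

With an 11% density saving this comes out just above 410 mm2, in the same ballpark as the ~425 mm2 guess.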
     
  10. angelgraves13

    angelgraves13 Ancient Guru

    Messages:
    2,097
    Likes Received:
    575
    GPU:
    RTX 2080 Ti FE
    Navi can be very efficient with lower level APIs. Sometimes a lot more than Turing.

    The card is pretty much DOA unless it's going to be priced around $800.
     

  11. Eastcoasthandle

    Eastcoasthandle Ancient Guru

    Messages:
    2,664
    Likes Received:
    380
    GPU:
    Nitro 5700 XT
There are two prevailing rumors out in the wild for this NV killer. Whatever this card is, it will be either:
- a larger monolithic design
- a chiplet design
It's also rumored that Intel GPU SKUs will include chiplet designs. If true, is that a coincidence?
     
  12. JonasBeckman

    JonasBeckman Ancient Guru

    Messages:
    16,887
    Likes Received:
    2,439
    GPU:
    AMD S. 5700XT Pulse
A GPU is a very different design from a CPU, so I can see AMD testing and experimenting with chiplets (I think there was a patent involved here), but not for a release product anywhere near soon. I'm sure there are benefits, but they'd need to address drawbacks like latency first, and work out how a GPU would even be split into chiplets and what other plans would be involved, before something like this becomes beneficial rather than just complicated.

Arcturus perhaps, but probably not Navi 20; we'll see. The new stepping of the Navi 10 core for the 5600 shows AMD is still tuning and tweaking things, and I think the earlier 5500 had a hardware bugfix too. Some of these can be solved via driver updates, but that doesn't really apply to whatever major changes go into the next core, whether that's Navi 20 or whatever the 5800 actually turns out to be.
     
    Undying, Embra and Maddness like this.
  13. Maddness

    Maddness Maha Guru

    Messages:
    1,292
    Likes Received:
    496
    GPU:
    Asus Strix RX 480
    Yeah I'd love it to be a chiplet design, but everything I have read points to that still being quite a while away yet. As long as the performance is there, I'll be happy.
     
    Embra likes this.
  14. JonasBeckman

    JonasBeckman Ancient Guru

    Messages:
    16,887
    Likes Received:
    2,439
    GPU:
    AMD S. 5700XT Pulse
That, and as far as I'm aware there are a few years of research and development involved, so while GCN went through its revisions and updates, Navi was probably nearing finalization. Changing the design to that extent sounds very unlikely in such a short time frame, though Navi 20 could still bring several improvements over Navi 10, plus possibly 7nm+ and whatever that adds.

I'm not a hardware expert or very knowledgeable on the subject, but yes, it would be an interesting design, though not without its problems. Given how long these things take from idea to finalization, when engineering begins, and all the research, potential setbacks, and cost involved, I'm probably being too optimistic pointing at "next" (Arcturus?); it might be the generation after that.

EDIT: Think of the TMUs, the ROPs, the overall cores and the clusters they're built into, how it scales, the cache, the bus width, the memory (HBM or something more integrated, shared or separate), and then how it all connects. We're talking upwards of 4,000 cores here, split into 2-4 small parts, possibly with non-perfect dies disabled for lower-end GPU models, same as on the CPU side. It won't be an easy thing to solve.

And that's greatly simplified, covering just part of the complexity involved.
There's also the delay and latency as data moves between the core chips and whatever node they sit on. Something else for HBM to excel at, perhaps?
(Simplified yet again; it's a bit like RAM, Infinity Fabric speeds, and latency on the CPU side, though not quite comparable.)
     
    Evildead666 likes this.
  15. Evildead666

    Evildead666 Maha Guru

    Messages:
    1,296
    Likes Received:
    272
    GPU:
    Vega64/EKWB/Noctua
    Going "Chiplets" shouldn't be that hard.
    Its just, what do you put in the IO die ?
    For CPU's, its actually a lot easier.

    For GPU's, what goes on the IO die, and what stays with the GPU's ?
    I can see the IO die getting the memory controllers for sure, but what about Anti-Aliasing off-gpu* ? or the TMU's, ROP's ?
    The Display engine would have to be on the IO die I would expect, or you could have one or two per GPU die, but how you would route that through the IO die, I don't know.

    The GPU chiplet could end up being very small, compared to the IO die, much like Zen.

    Also, if this GPU chiplet were to be added to a future Zen architecture, what would have to be added to a hypothetical CPU IO die for the GPU to be added into the mix ?

    The IO die actually seems to be the hardest part to configure.

* IIRC the Xbox 360's Xenos design actually had anti-aliasing logic in the eDRAM die.
    https://en.wikipedia.org/wiki/Xenos_(graphics_chip)
     
    Last edited: Jan 24, 2020
    JonasBeckman likes this.

  16. Fox2232

    Fox2232 Ancient Guru

    Messages:
    10,788
    Likes Received:
    2,721
    GPU:
    5700XT+AW@240Hz
If they sell, they are not overpriced.

Do you believe a $100 price reduction would result in 100% higher sales? Right now they make something like $200 profit on each card. Reduce the price by $100 and the profit is only $100. Would sales double to compensate?

And if so, is that a good strategy: saturating the GPU market twice as fast with no extra profit?
Each extra GPU sold this year means one fewer sale next year.
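A quick back-of-envelope sketch of that margin argument, using the post's hypothetical $200-per-card profit figure:

```python
# Break-even volume after a price cut, per the margin argument above.
# Hypothetical figures from the post: ~$200 profit per card at today's price.
margin_now = 200.0                  # assumed current profit per card, USD
price_cut = 100.0                   # proposed price reduction
margin_after = margin_now - price_cut

# Unit sales must scale by this factor just to keep total profit flat:
breakeven_multiplier = margin_now / margin_after
print(breakeven_multiplier)  # 2.0 -> sales would have to double
```

Anything less than a doubling of sales leaves AMD with less total profit than before the cut.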
     
  17. -Tj-

    -Tj- Ancient Guru

    Messages:
    16,948
    Likes Received:
    1,825
    GPU:
    Zotac GTX980Ti OC
80 CU... is that still a 64-ROP design?
     
  18. JonasBeckman

    JonasBeckman Ancient Guru

    Messages:
    16,887
    Likes Received:
    2,439
    GPU:
    AMD S. 5700XT Pulse
The only thing I've heard is that the design is more flexible and thus scales better, allowing for more than ~4096 cores. Under the old setup, each cluster had to come with a fixed number of ROPs and TMUs, so if those are decoupled, AMD could adjust the mix of components more dynamically as the core count increases. :)

Add more of one component to handle certain bottlenecks better, or keep the configuration the same and balance cost against what the GPU needs. For a high-end card they'd probably have to scale everything up rather than strike a balance, but it's an interesting change, assuming it's accurate; it's something I vaguely recall about Navi and RDNA versus GCN, about what's changed or what could be changed for a high-end model, at least if they're moving past the 4096 cores of the old layout.
     
    Undying and Embra like this.
  19. Embra

    Embra Maha Guru

    Messages:
    1,071
    Likes Received:
    292
    GPU:
    Vega 64 Nitro+LE
    You sure got a lot in that last sentence JonasBeckman. :)

    I do enjoy your posts. You are very informative. ;)
     
    Undying likes this.
  20. JonasBeckman

    JonasBeckman Ancient Guru

    Messages:
    16,887
    Likes Received:
    2,439
    GPU:
    AMD S. 5700XT Pulse
Lots of free time, and I try to understand these things at least somewhat, though it often ends up a bit of a mishmash of info from all over. I've tried to improve source quality and discard less credible info over time, but between software and hardware, I'll never understand tech as well as I'd like; it's just that huge, complex, and in-depth a subject. Still, it can be useful knowing a few things here and there. :D

Well, occasionally at least. I'm still bouncing around on this Navi black-screen error: maybe hardware, maybe software, maybe even a hardware issue AMD is trying to resolve through software or microcode patches in the driver. I'm not really getting anywhere, though I wasn't expecting to completely crack it either; some knowledge and understanding here would be helpful. :)


And yeah, I tend to write pretty lengthy posts with a mix of grammar and sentence structures, plus a mixture of EN-US and EN-GB, as if the rest wasn't problematic enough, ha ha.
Well, what would one expect from primarily self-taught English: some outright terrible dubs, schools here favoring the British spellings with the extra O's and U's, while shows and other media were primarily US-based. :p
(Some things don't change though; that particular four-letter word is still among the first ones learned, and the rest of the fun words follow pretty quickly after. :D)

I'm actually trying to find a source for this info, but since the 5600 just released, searches related to Navi are a mix of its rumors and launch details, which makes finding anything older a bit of a problem.
(Promotions and shopping sites take most of the hits, pretty much the entire first results page.)

I'd like to confirm whether it's from AMD, or at least some whitepaper or design doc, even though Navi 10 follows a similar system with 160 TMUs and 64 ROPs, which I believe hasn't changed since the 290X doubled up from the initial GCN 1.0 and the 7000 series.
(Well, GCN1, Gen1, or 1.0; I think AMD uses GCN1 through GCN5, while some tech sites used the other names and it got a bit mixed up as a result.)


EDIT: It's partly interest, but as a regular user of average skill I'm of course not going to figure out the more complicated matters; the driver and software side is a huge mix of five or so different APIs and numerous underlying systems somehow working together.
(Lots of hacky stuff, and who knows what else.)

And hardware-wise it's a complex system of all kinds of fancy stuff, from the instruction set itself (GCN to RDNA) to the overall architecture (the various Islands families to Navi) and everything else.

It did stand out, though, just how much RDNA can improve over GCN, at least for a gaming-oriented GPU. GCN can still excel at compute, though stronger Navi/RDNA hardware and driver maturity could change even that over time. :)

NVIDIA's warps and AMD's wavefronts, and the threading requirements and improvements around them, aren't the full story, but they're certainly one piece of how "little" Navi 10 almost reaches the Radeon VII and pulls ahead of the Vega 64 in many tests.


EDIT: And now we might actually get to see how the big one scales up and how well AMD has done in that regard. It's going to be fun, and hopefully we'll see more competition in the mid and upper hardware segments.

I don't have high expectations for the enthusiast end myself, but then the 3080, let alone the 3080 Ti, is a pretty niche product for a smaller segment of the user base, so reaching that performance level might be a bit unrealistic for AMD yet.

Then there's what a competitive Navi 20 GPU would require and what it would cost, which will be a factor as usual on the price/performance scale, much as mindshare and the general preference for NVIDIA GPUs is a big thing among consumers too. (Something like that; features will be equally important, and this ray-tracing "arms race" could be big as well for which titles support what: AMD, NVIDIA, or maybe even both.)


    Well I certainly turned this into a lengthy post now. :p
     
    Last edited: Jan 24, 2020
    Embra and Maddness like this.
