AMD NAVI 12 chip intended for RX 5800?

Discussion in 'Frontpage news' started by Hilbert Hagedoorn, Jul 30, 2019.

  1. Denial

    Denial Ancient Guru

    Messages:
    14,206
    Likes Received:
    4,118
    GPU:
    EVGA RTX 3080
    Radeon 7 is a rebranded Instinct card. Its cost of development was subsidized and it's still outrageously expensive. Bringing out a large Navi chip requires AMD to fund the engineering, QA, software development, channel inventory and marketing all PRIOR to selling a single card. That's roughly a ~$100M cost they have to carry and may not see a return on for months. For a company with AMD's financials, which is also launching CPUs, motherboard chipsets and midrange GPUs (which arguably require more inventory), it's just not attainable.

    Especially when they can just launch it next year on 7nm EUV for way cheaper.
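    To put that carrying cost in perspective, here is a minimal back-of-the-envelope sketch; only the ~$100M upfront figure comes from the argument above, while the per-card margin and monthly volume are hypothetical placeholders chosen purely for illustration:

```python
# Back-of-the-envelope payback estimate for a big-GPU launch.
# Only the ~$100M upfront figure comes from the post above; the margin and
# monthly volume below are hypothetical placeholders.

UPFRONT_COST = 100_000_000   # engineering, QA, software, inventory, marketing (USD)
MARGIN_PER_CARD = 150        # assumed gross margin per card sold (USD)
CARDS_PER_MONTH = 100_000    # assumed monthly sell-through

cards_to_break_even = UPFRONT_COST / MARGIN_PER_CARD
months_to_break_even = cards_to_break_even / CARDS_PER_MONTH

print(f"Cards needed to recover the upfront spend: {cards_to_break_even:,.0f}")
print(f"Months of sales before break-even:         {months_to_break_even:.1f}")
```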
     
  2. vbetts

    vbetts Don Vincenzo Staff Member

    Messages:
    15,140
    Likes Received:
    1,743
    GPU:
    GTX 1080 Ti
    The 2060 is far from the bottom; the x60 parts have always been considered the mid-range series (7600, 8600, 760, 960, 1060, 1660, 2060). The x70 series has been in the middle ground between mid-range and high-end, sort of the entry point. That being said, the technology costs more to make, which is why we're seeing more entries in the mid-range. Again, this isn't just from Nvidia either.
     
    Stormyandcold likes this.
  3. Denial

    Denial Ancient Guru

    Messages:
    14,206
    Likes Received:
    4,118
    GPU:
    EVGA RTX 3080
    Yep


    For years, node shrinks brought manufacturing cost savings, but this is no longer the case.
     
    Stormyandcold likes this.
  4. Michal Turlik 21

    Michal Turlik 21 Active Member

    Messages:
    97
    Likes Received:
    32
    GPU:
    Geforce RTX 2080Ti
    Good point.
    However, we should keep in mind that a low/mid-range entry card ages quickly, and today that happens even faster.
    As someone pointed out on a different thread, the times when AMD/ATi was targeting real high-end performance (read: Radeon HD 7970) have ended.
    Maybe I'm getting it wrong, but this 5700 XT seems to me the best compromise AMD could afford... whether it's down to some hardware design boundaries (who knows, maybe the next GPU will be a chiplet design just like their CPUs, and the 2560 shader unit count is the magic number they need to stay at) or down to the best performance/TDP ratio, that GPU will not seal the deal here.
    As for me, and fortunately this is only my opinion, they've missed the goal.
    To conclude: in the past years we've seen plenty of hype around Vega, we were given slides, videos and ads, and we know what we actually received in the end - Radeon VII was a gymnastic exercise, and this whole new Navi architecture has started out with the wrong candidate, imho.
     

  5. Lordhawkwind

    Lordhawkwind Guest

    Messages:
    23
    Likes Received:
    10
    GPU:
    Palit Jetstream GTX970
    I can't see another GPU from AMD until late 2020, as EPYC 2 is next on the way and that will be followed in 2020 by Zen 3. Also bear in mind AMD will be gearing up for the new Xbox and PlayStation next year, so there's not too much time/cash for another GPU launch. Just my 2p's worth.
     
  6. CPC_RedDawn

    CPC_RedDawn Ancient Guru

    Messages:
    10,413
    Likes Received:
    3,078
    GPU:
    PNY RTX4090
    I remember reading that 7nm yields were actually really good. Remember the chips are small, so you get more chips per wafer. This is one reason why AMD's approach to both CPU and GPU is such a good idea and puts them in a much better place than Nvidia.
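    As a rough illustration of the small-die advantage, here is a minimal sketch using the classic dies-per-wafer approximation and a simple Poisson yield model; the die sizes are only approximate (Navi-10-sized vs. TU102-sized) and the defect density is an assumed placeholder, not a TSMC figure:

```python
import math

def dies_per_wafer(die_area_mm2: float, wafer_diameter_mm: float = 300.0) -> int:
    """Classic gross dies-per-wafer approximation (ignores scribe lines and edge exclusion)."""
    r = wafer_diameter_mm / 2
    return int(math.pi * r**2 / die_area_mm2
               - math.pi * wafer_diameter_mm / math.sqrt(2 * die_area_mm2))

def poisson_yield(die_area_mm2: float, defect_density_per_cm2: float) -> float:
    """Simple Poisson yield model: Y = exp(-A * D0), with A in cm^2."""
    return math.exp(-(die_area_mm2 / 100.0) * defect_density_per_cm2)

# Assumed numbers for illustration: ~250 mm^2 (Navi-10-sized) vs ~750 mm^2 (TU102-sized),
# with a hypothetical defect density of 0.2 defects/cm^2.
for area in (250, 750):
    gross = dies_per_wafer(area)
    y = poisson_yield(area, 0.2)
    print(f"{area} mm^2: ~{gross} gross dies/wafer, ~{y:.0%} yield, ~{gross * y:.0f} good dies")
```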

    I suspect as well that Nvidia's 3000 series will just be a refresh of the 2000 series with minimal increases to actual shader/compute performance, Nvidia want to push RTX as much as they can. So I suspect the increase on the 3000 series cards will be more ray tracing performance with more RT and Tensor cores.

    This actually plays into AMD's favour as well; most people simply don't care about RTX. I highly doubt the 3000 series will be able to push 4K 60fps with RTX features, so all AMD need to do is push more shader/compute performance and they can catch up to Nvidia's performance. Then also remember AMD will be cheaper than Nvidia as well, as they didn't pump billions into R&D for RTX.
     
  7. Stormyandcold

    Stormyandcold Ancient Guru

    Messages:
    5,872
    Likes Received:
    446
    GPU:
    RTX3080ti Founders
    There's a big market out there (myself included) that would like to see 1440p 60fps+ with RTX on. The games available with RTX will help drive demand for the 3000 series going into 2020 and beyond.
     
  8. CPC_RedDawn

    CPC_RedDawn Ancient Guru

    Messages:
    10,413
    Likes Received:
    3,078
    GPU:
    PNY RTX4090
    Sure there is a market, but I really don't think it's as big as you think. Not right now anyway, and I don't see it getting much bigger in the next few years even with increased performance. Until it becomes viable to do within current shader hardware, it won't catch on. The games out now don't look much different, and I don't see companies adding it into the more popular games such as Fortnite, Dota, CS:GO, Apex, etc. Just look at HDR content: there were around 10 different variations of it a few years back, until most companies decided to use one standard, and now it's finally catching on. I don't see AMD biting the bullet and accepting RTX as their standard, and I don't see Nvidia letting them have it either. We need AMD to come up with a new open-source standard so both companies can use it. Then Nvidia can put RTX to bed.
     
  9. Denial

    Denial Ancient Guru

    Messages:
    14,206
    Likes Received:
    4,118
    GPU:
    EVGA RTX 3080
    7nm yields are good for how long the process has been out, but TSMC said they won't hit 16nm parity until the end of this year. So while it's good, it's still worse than 16nm and its marketing offshoots (14/12). Couple that with what I posted above (that the cost per transistor on 7nm is higher than on 16/14/12) and it's arguable who made the right node choice. I think Navi in the consoles was a big consideration and push for AMD to launch it on 7nm, and Nvidia doesn't really have a design that needs 7nm to work.

    I disagree about them adding more RT and Tensor cores. Firstly, Tensor cores aren't used at all for RTX. None of the current games do denoising in INT8/4 - it's all FP16 - so Nvidia could easily cut Tensor/DLSS and just do denoising on smaller, dedicated FP16 cores, like the ones found in little Turing. Secondly, the bottleneck with RTX, based on posts by various devs, doesn't even seem to be in the intersect calculations but in sharing the data between the BVH representation and the raster pipeline. I'm not convinced that simply adding more RT cores without adding shader performance would even yield an improvement in RT performance. Third, Nvidia has shown a lot of interest lately in moving to a chiplet-based GPU design with NVLink interconnects - we've already seen it demonstrated in their 32-chiplet inference chip and they have published multiple research papers about it. It wouldn't surprise me if they did something along this route with their next generation of chips - potentially allowing them to scale up the tensor core count while reducing the overall chip cost - if they wanted to keep tensors.
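    To make the FP16 point concrete, below is a toy CPU-side sketch of a half-precision filtering pass of the kind real-time ray-traced effects rely on; it is purely illustrative, keeps everything in float16, and has nothing to do with Nvidia's actual denoisers:

```python
import numpy as np

def toy_fp16_denoise(noisy: np.ndarray, radius: int = 2) -> np.ndarray:
    """Naive box-filter 'denoise' kept entirely in float16 -- a stand-in for the
    kind of FP16 filtering pass described above, illustrative only."""
    img = noisy.astype(np.float16)
    h, w = img.shape
    padded = np.pad(img, radius, mode="edge")
    acc = np.zeros_like(img)
    taps = np.float16((2 * radius + 1) ** 2)
    for dy in range(2 * radius + 1):
        for dx in range(2 * radius + 1):
            acc += padded[dy:dy + h, dx:dx + w]
    return acc / taps

# 64x64 noisy single-channel buffer, e.g. a 1-sample-per-pixel shadow term.
noisy = np.random.rand(64, 64).astype(np.float16)
print(toy_fp16_denoise(noisy).dtype)  # float16 end to end
```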

    Also, Nvidia didn't spend billions on R&D for RTX, and regardless of whether you think RTX is successful or not in games, it's being implemented into basically every single major production raytracer. So commercially it's successful.

    I don't think anyone wants a 40-50% performance loss in the competitive games you listed.

    Also, RTX is just a library of effects built on DXR. DXR is the RT standard - most of the major engines implementing RT (for example Unreal/Unity) use DXR as the base, and it functions on both AMD and Nvidia. It's just accelerated on Nvidia via RTX hardware.
     
    Last edited: Aug 1, 2019
    Stormyandcold likes this.
  10. Fox2232

    Fox2232 Guest

    Messages:
    11,808
    Likes Received:
    3,371
    GPU:
    6900XT+AW@240Hz
    On the other hand, take the peak clock on each of those nodes and translate it into a performance-per-$ uplift.
    In the case of ATi and 90nm we are talking about the 650MHz range (X1950, 650MHz).
    For 65nm it is the 750MHz range (HD 3870, 777MHz - actually 55nm).
    For 45/40nm it is the 850MHz range (HD 5870, 850MHz - 40nm; 6870, 900MHz - 40nm).
    For AMD, 28nm finally reached 1GHz and went only a bit above that with big GPUs (HD 7970 & Fury X at 1050MHz, the first and last big GPUs on 28nm).
    14nm GPUs managed to go to the 1400~1500MHz range (both Polaris and Vega).
    And now 7nm reaches 2GHz (Navi).

    Then there is time:
    X1950 : 2006Q4
    HD 3870 : 2007Q4
    HD 5870 & 6870 : 2009Q3 & 2010Q4
    HD 7970 & Fury X : 2012Q2 & 2015Q2
    Polaris & Vega : 2016Q2 & 2017Q2
    Navi : 2019Q3

    I am pretty sure that if we collected high-end GPUs from each generation and noted the manufacturing node, date and clock, we would see a trend over time showing that cost per 100M gates at a normalized 1GHz clock continues to improve.
    One can say that from 2006Q4 to 2012~2015 (where cost per gate stopped improving) we got only some 65% clock uplift.
    While from 2015 to 2019 (where cost per gate goes up slightly) we already got an 85% clock uplift.
    - - - -
    On nVidia's side it would be different, but it would still show a similar trend.
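    For reference, those uplifts fall straight out of the peak clocks listed above (a quick sketch; the 14nm entry uses the midpoint of the quoted 1400~1500MHz range and 7nm uses the round 2GHz figure, so the results land in the same ballpark as the ~65% and ~85% quoted):

```python
# Peak clocks (MHz) from the list above.
peak_clock_mhz = {
    "90nm (X1950, 2006Q4)": 650,
    "65/55nm (HD 3870, 2007Q4)": 777,
    "40nm (HD 5870, 2009Q3)": 850,
    "28nm (HD 7970/Fury X, 2012-2015)": 1050,
    "14nm (Polaris/Vega, 2016-2017)": 1450,
    "7nm (Navi, 2019Q3)": 2000,
}

def uplift(later: str, earlier: str) -> float:
    """Relative clock uplift between two of the nodes listed above."""
    return peak_clock_mhz[later] / peak_clock_mhz[earlier] - 1

print(f"2006 -> 28nm era: {uplift('28nm (HD 7970/Fury X, 2012-2015)', '90nm (X1950, 2006Q4)'):.0%}")
print(f"28nm era -> 2019: {uplift('7nm (Navi, 2019Q3)', '28nm (HD 7970/Fury X, 2012-2015)'):.0%}")
```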
     
    carnivore likes this.

  11. Stormyandcold

    Stormyandcold Ancient Guru

    Messages:
    5,872
    Likes Received:
    446
    GPU:
    RTX3080ti Founders
    It's easy to list what hasn't got RTX support (I could almost as easily list games with no DX12 support). However, most of the games that do support RTX are big franchises. Also, RTX is supported in Unreal Engine, so Fortnite could support RTX right now. This is unlikely though, due to the backlash from multiplayer gamers who prefer frame rate over eye candy.

    Also refer to the end of Denial's post #29. DXR isn't Nvidia-exclusive and is one standard. I've also mentioned in another post that when it suits AMD, they will champion an API (like they did with DX12). As soon as they start losing their edge they go and work on something that only benefits them, which is exactly what they're doing with their own RT solution.

    Everyone was loving the Crytek RT demo running on AMD... until they found out it was 1080p30. With that kind of performance, is the vendor-agnostic approach really the best way forward?

    In the grand scheme of things, what we're actually seeing is that every major company has jumped on the RT bandwagon and is ready to deploy RT in their software/games soon, if not already. It's only a matter of time.
     
  12. Fox2232

    Fox2232 Guest

    Messages:
    11,808
    Likes Received:
    3,371
    GPU:
    6900XT+AW@240Hz
    Actually, taking just that cost per 100M gates and dividing it by the achievable clock in GHz, to get the cost per unit of performance if they were all used to build the same chip:
    90nm: $6.15 per unit of performance
    65nm: $3.76 per unit of performance
    45/40nm: $2.28 per unit of performance
    28nm: $1.24 per unit of performance
    14nm: $0.95 per unit of performance
    7nm: $0.8 per unit of performance
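    Taking those per-node figures at face value, the node-to-node improvement factor is easy to pull out (a quick sketch using only the numbers listed above); it shows the gains shrinking from roughly 1.6-1.8x per node down to 1.2-1.3x:

```python
# Cost per unit of performance (USD) from the list above, oldest node first.
cost_per_perf = {
    "90nm": 6.15,
    "65nm": 3.76,
    "45/40nm": 2.28,
    "28nm": 1.24,
    "14nm": 0.95,
    "7nm": 0.80,
}

nodes = list(cost_per_perf)
for prev, cur in zip(nodes, nodes[1:]):
    gain = cost_per_perf[prev] / cost_per_perf[cur]
    print(f"{prev:>7} -> {cur:<7}: {gain:.2f}x more performance per dollar")
```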
     
  13. Loophole35

    Loophole35 Guest

    Messages:
    9,797
    Likes Received:
    1,161
    GPU:
    EVGA 1080ti SC
    You may have just proven Moore’s law is dead. lol
     
  14. Denial

    Denial Ancient Guru

    Messages:
    14,206
    Likes Received:
    4,118
    GPU:
    EVGA RTX 3080
    Does the frequency uplift come because the new node allows it, or because these new nodes no longer offer the benefits of true half-size shrinks, so increasing the frequency, employing custom cell libraries and optimizing the critical path are now the real path forward? There have been 3GHz, 400mm2+ chips on 28-90nm nodes.

    Regardless, the question isn't whether 7nm is better or not - it obviously is overall. The question is whether it was a better decision for Nvidia to also pursue 7nm for Turing (presumably sans RTX) or to stay on 12nm and just build a giant chip because they can.
     
  15. D3M1G0D

    D3M1G0D Guest

    Messages:
    2,068
    Likes Received:
    1,341
    GPU:
    2 x GeForce 1080 Ti
    What the... did you forget that Nvidia staggered their Turing release? They released the high-end cards first (2080 Ti, 2080, 2070) and only released the mid-tier cards later on. They're even staggering the Super series, releasing the 2080 Super later than the others. All their new cards at the same time? BS!
     

  16. Michal Turlik 21

    Michal Turlik 21 Active Member

    Messages:
    97
    Likes Received:
    32
    GPU:
    Geforce RTX 2080Ti
    The Super cards are something that I still have to understand.
    At launch all the cards showed up - 2070, 2080, 2080 Ti - and later we got the 2060.
    Three cards (the enthusiast one included) vs only one (5700 XT).
    Are you kidding?
     
  17. D3M1G0D

    D3M1G0D Guest

    Messages:
    2,068
    Likes Received:
    1,341
    GPU:
    2 x GeForce 1080 Ti
    You must have a serious case of amnesia since you once again forgot that AMD released two GPUs - the 5700 and 5700 XT. You also seem to have (conveniently) forgotten that the GTX 1660, 1660 Ti and 1650 are also part of the Turing family.
     
  18. Aura89

    Aura89 Ancient Guru

    Messages:
    8,413
    Likes Received:
    1,483
    GPU:
    -
    Titan RTX was released December 18th 2018
    RTX 2080 ti was released September 27th 2018
    RTX 2080 was released September 20th 2018
    RTX 2070 was released October 18th 2018
    RTX 2060 was released January 15th 2019
    GTX 1660 Ti was released February 22nd 2019
    GTX 1660 was released March 14th 2019
    GTX 1650 was released April 23rd 2019.


    Zero non-Super cards were released at the same time. The 2080 and 2080 Ti were intended to be released at the same time, so you could be expected to say those were released together, but the 2080 Ti was delayed by a week, and that's historically how it happened.

    And realistically I'm going to go against what @D3M1G0D stated and say 3 AMD GPUs were released at the same time: although the 5700 XT and the 5700 XT 50th Anniversary Edition are "effectively" the same card under the hood, the Anniversary Edition does perform better out of the box than the normal 5700 XT. The point here is you had 3 choices of graphics card - base models and performance - to choose from.
     
    Last edited: Aug 2, 2019
