AMD's Lisa Su hints that high-end 7nm Navi GPUs are on the way

Discussion in 'Frontpage news' started by Hilbert Hagedoorn, Aug 1, 2019.

  1. schmidtbag

    schmidtbag Ancient Guru

    Messages:
    8,015
    Likes Received:
    4,390
    GPU:
    Asrock 7700XT
    @ttnaugmada
No, I don't understand. Specifically, I don't see what the Radeon VII or Vega 64 have to do with this. You're comparing apples to oranges here. Nvidia has already had a clock speed advantage for a while, so they're not going to get the same 15% increase. I can't say that with confidence, because again, this isn't a good comparison and we don't have sufficient data.
    Nvidia could squeeze better performance per watt out of the die shrink than what R7 got over V64, while having a minimal clock boost. It could be the exact opposite. Maybe both will be better. Maybe worse. We don't know, because the architectures are so drastically different. So your comparison is moot.
Ryzen is made in the same facilities, and it achieves very different clock speeds from the R7. From another perspective, Zen 2 didn't have much of a clock boost over Zen+. Sure, Ryzen is on a smaller die, but if die size correlates with max frequency, then this 600mm² die you propose ought to clock even worse than you predict.

Do you see the pattern here? There are WAY too many variables involved for you to be making a prediction on clock speed, and considering how 7nm overall seems to be worse at overclocking than larger nodes, I'm pretty sure Nvidia is not going to squeeze a 15% improvement there.

    EDIT:
Also, V64 to R7 was 14nm to 7nm. Nvidia is going from 12nm to 7nm. Not a big difference, but shifting down a node doesn't appear to be a simple job, and it means the R7's gains won't be as drastic for Nvidia as you think.


For the record, I never said that Nvidia can't achieve a sub-300W TDP with a 50% die increase over the TU102. Of course they can do that. What I don't agree with is your claim of a 50% performance improvement. If you meant a 50% increase in performance per watt, I would totally agree with that. But if that's what you meant, you didn't do a good job of clarifying it.
     
    Last edited: Aug 2, 2019
  2. Aura89

    Aura89 Ancient Guru

    Messages:
    8,413
    Likes Received:
    1,483
    GPU:
    -
Haven't seen where you back up the 5-year thing; it seems like you've shortened it to 3 years, since you keep talking about Pascal, which is 3 years old, rather than Maxwell.

As for being on par with Pascal in performance per watt, let's go with Pascal then. The 1080 Ti and the RX 5700 XT go head to head more often than not, with the 5700 XT typically leading at 1440p and up.

RX 5700 XT, 10.3 billion transistors, 225 W TDP (closer to 200 W in real usage; see diagram below)
GTX 1080 Ti, 12 billion transistors, 250 W TDP (closer to 280 W in real usage; see diagram below)

[Image: measured power consumption comparison chart]

So we have an RX 5700 XT trading blows with a 1080 Ti, 3-year-old tech, while drawing roughly 75 fewer watts. So how does that match your words: "Navi is about on par with Pascal in perf/watt... while needing to be on 7nm to do it. Do the math."?

I'm doing the math, and I'm not seeing your reasoning yet to state it's 5 years behind Nvidia, or 3 years. Now, I get that it's 7nm vs 12nm; in fact you get that too, as you stated it in what I quoted, and you could say that Navi isn't as good as the 1080 Ti's architecture, since at 16nm it would require more power. You might be right, but there's no way to realistically say you are, as we can't test this. We don't know how much less power the 1080 Ti would use at 7nm, and we don't know how much more the RX 5700 XT would use at 12/16nm. Sure, you could say "well, TSMC states that 7nm uses (insert percentage here) less power," but that's a blanket number dependent on how the company using their fabs constructs its dies. (See Intel's 10nm CPU release last year, which used more power and performed worse than its 14nm counterparts.)

But that's not what you said either way; you stated it was on par with Pascal in performance per watt while on 7nm, and that's clearly wrong.

    Here are the facts:

The RX 5700 XT gets performance similar to the 1080 Ti while needing 1.7 billion fewer transistors and 75 fewer watts to do it. This does not say that "AMD is 3 years behind Nvidia", let alone 5.

Now we come to today, where the RX 5700 XT trades blows with the 2070 Super.

RX 5700 XT, 10.3 billion transistors, 225 W TDP (closer to 200 W in real usage; see diagram above)
RTX 2070 Super, 13.6 billion transistors, 215 W TDP (closer to 210 W in real usage; see diagram above)

So now the RX 5700 XT uses 5 watts less than the 2070 Super, and uses 2.3 billion fewer transistors to do it (more difficult to quantify, since RTX has dedicated ray-tracing cores, etc.). If you apply your same statement as before, which is what it realistically is, your statement would be: "Navi is about on par with Turing in perf/watt... while needing to be on 7nm to do it. Do the math." And I'd agree, 100%; if Nvidia were on 7nm, you could easily expect them to have better performance per watt than Navi.
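To make that concrete, here's a minimal sketch of the perf-per-watt math in Python, assuming the cards deliver roughly equal average performance (they "trade blows") and using the measured power figures quoted above. The numbers are rough approximations, not benchmark data.

```python
# Approximate measured power draws quoted in this post.
MEASURED_POWER_W = {
    "GTX 1080 Ti": 280,     # 250 W TDP, ~280 W measured
    "RTX 2070 Super": 210,  # 215 W TDP, ~210 W measured
}
NAVI_POWER_W = 200          # RX 5700 XT: 225 W TDP, ~200 W measured

for card, watts in MEASURED_POWER_W.items():
    # At equal performance, relative perf/watt is just the inverse power ratio.
    advantage = (watts / NAVI_POWER_W - 1) * 100
    print(f"RX 5700 XT perf/watt lead over {card}: {advantage:+.0f}%")
```

That works out to roughly a 40% perf/watt lead over the 1080 Ti but only about 5% over the 2070 Super, which is exactly the gap between "clearly ahead of Pascal" and "roughly level with Turing".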

    So is Navi 5 years behind nvidia? Turing? No. Is it 3 years behind turing? Getting closer, but can't say for certain. Are they right on the money with nvidia? I'd say not.

Using performance per watt as the basis for judging architectural advancement, and taking into consideration 7nm vs 12/16nm, Navi appears to be somewhere between Pascal and Turing.

    Not 5 years, not 3 years.
     
    carnivore likes this.
  3. schmidtbag

    schmidtbag Ancient Guru

    Messages:
    8,015
    Likes Received:
    4,390
    GPU:
    Asrock 7700XT
Last I heard, TSMC themselves claim better gains than what AMD achieved. Y'know why AMD didn't see those gains? Because not every architecture is the same.
lol um... no. We don't know that. It's just your opinion to expect that.
    How dense are you? What you said there is the whole reason I brought them up: different architectures don't clock the same way. So for you to compare to what AMD did going to 7nm on their GPUs is moot. Nvidia does things pretty drastically different, hence, in your words, being "5 years ahead".
    Actually, you're the one who keeps moving the goalpost. You're the one who brought up frequency, not me. That's your variable that you introduced.
    Well then, you're most likely going to be wrong.
The size of the die isn't that important; it's the transistors themselves. Do you seriously think it's going to be affordable to get a 680mm² die on 7nm? Nobody is going to buy that (see the sketch at the end of this post).
    I never said there wasn't going to be a clock increase. I'm saying a 15% increase is, like everything else you claim, highly optimistic and deliberately ignorant of other variables.
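To illustrate the die-size point above, here's a rough sketch using a simple Poisson yield model of why a huge 7nm die gets expensive fast. The defect density and wafer cost below are made-up illustrative values, not real TSMC figures; only the trend matters.

```python
import math

# Illustrative assumptions, not real TSMC figures.
WAFER_AREA_MM2 = math.pi * (300 / 2) ** 2  # 300 mm wafer, edge loss ignored
DEFECTS_PER_MM2 = 0.001                    # assumed defect density
WAFER_COST_USD = 10_000                    # assumed 7nm wafer cost

def cost_per_good_die(die_mm2: float) -> float:
    dies_per_wafer = WAFER_AREA_MM2 // die_mm2
    yield_rate = math.exp(-DEFECTS_PER_MM2 * die_mm2)  # Poisson yield model
    return WAFER_COST_USD / (dies_per_wafer * yield_rate)

for size in (250, 450, 680):
    print(f"{size} mm^2 die -> ~${cost_per_good_die(size):,.0f} per good die")
```

Under these assumptions the cost per good die roughly quadruples going from 250mm² to 680mm², because fewer dies fit on a wafer and each one is more likely to catch a defect.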
     
  4. Aura89

    Aura89 Ancient Guru

    Messages:
    8,413
    Likes Received:
    1,483
    GPU:
    -
Sorry, but no it's not. This is as far as I need to read into your reply; something as nonsensical as this is just bad. I can only imagine what the rest of the nonsense in your post states.
     

  5. Alienwarez567

    Alienwarez567 Active Member

    Messages:
    65
    Likes Received:
    12
    GPU:
    Gigabyte GTX 1080 G1
    Would be nice to have options to challenge Nvidia
     
  6. Jawnys

    Jawnys Master Guru

    Messages:
    225
    Likes Received:
    55
    GPU:
    asus tuf oc 3090
The 1080 Ti was so good that there is no worthwhile upgrade for it at the moment, unless you're willing to pay the RTX tax for the 2080 Ti, because at this point RTX is really wasted money, since there are basically no games worth playing that take advantage of it. So I'm hoping that AMD comes up with a card around the 2080 Ti's performance for half its price. Nvidia came out too early with the RTX tech; if they brought out a GTX 2080 Ti I would buy it day one and replace my 1080 Ti with it, but they can't do that now, as they would kill the 2070-2080 market doing so.
     
    MonstroMart likes this.
  7. Andrew LB

    Andrew LB Maha Guru

    Messages:
    1,251
    Likes Received:
    232
    GPU:
    EVGA GTX 1080@2,025
And as usual, AMD kills sales of their current cards by blabbing about even faster cards that won't be out for many months.
     
  8. anticupidon

    anticupidon Ancient Guru

    Messages:
    7,898
    Likes Received:
    4,149
    GPU:
    Polaris/Vega/Navi
As usual, these threads are the ground where some gurus vent their own little egos and flex their "information sources".
If you really have something constructive to say, have your say.
But please, do everyone a big favour and keep it to yourself if you have nothing productive to say.
I am no mod here; I just wanted to remind everyone that this forum has an etiquette and some rules.
Let's keep it civil, folks.
     
    airbud7, carnivore and Loophole35 like this.
  9. vdelvec

    vdelvec Member Guru

    Messages:
    157
    Likes Received:
    16
    GPU:
    Nvidia RTX 3090
I'm sorry... did you say "Nvidia came out too early with RTX"? Nothing is ever "too early". It's right on time. If developers of any form of entertainment or media had the mindset that "it's too early for this", then new ideas and forward thinking would be stifled and we'd never get anything new. Nvidia pushed the boundaries and took chances. Do these gambles always pay off? Absolutely not, but without forward thinking and people pushing for what was previously unheard of, unthinkable, or unobtainable (even if the final implementation is not ideal), we'd still be on dual- and quad-core mainstream CPUs, MIDI sound files, ISA slots on motherboards, and AGP video card slots, and HDR and 4K wouldn't be a thing. I could go on and on.

Yeah, RTX isn't optimal, but Nvidia started an (r)evolution with RTX, and now consoles are adopting hardware-based ray tracing, more GPUs are coming out with it, and the adoption rate is steadily increasing. I don't know about you, but I'm glad I'm not still playing games on my Nintendo Game Boy.
     
    Maddness likes this.
  10. schmidtbag

    schmidtbag Ancient Guru

    Messages:
    8,015
    Likes Received:
    4,390
    GPU:
    Asrock 7700XT
    A processor is just a processor. Transistors are transistors. It doesn't matter whether it's a CPU or a GPU. What matters is the architecture itself. Nvidia has a very different architecture from AMD, which is why they get different performance numbers and different clock rates.
    How do you not understand that?
    This is why I'm arguing against you regarding clock speeds. You can't compare what Nvidia will accomplish based on what AMD did with their GPUs. They could do better and they could also do worse. You have no evidence to know what they'll do.
    Yes, you have. Every time I mention a reason why your 50% performance improvement isn't doable, you introduce another variable.
    Your "real world examples for Vega", as I have repeatedly told you are irrelevant and are not a good metric as to what Nvidia can do.
Um... have you not heard of the issues with transistors at this size? Why do you think Intel's 10nm node is taking so long, despite actually being functional 2 years ago? Why do you think none of AMD's products clock very high?
    We're reaching the physical limits of silicon. Just a few years ago, people predicted 7nm wasn't even possible due to quantum tunneling. A higher frequency means more voltage, and more voltage increases the probability of leaking electrons.
Nvidia is already getting decently high clock speeds for such a large die and complex architecture. It really is unrealistic if you honestly think they're going to get a 15% speed boost by going to 7nm.
    Y'know why AMD got that much speed when going from V64 to R7? It's because V64 was inefficient. With proper cooling and power delivery, it could go higher, but it was reaching TDP limits. By going to 7nm, they alleviated some of that overhead, giving them a healthy frequency boost. Nvidia doesn't have this problem.
    Name 1 variable I mentioned that doesn't exist.
I never said a node shrink can't bring clock speed increases; I said it won't be the generous 15% you claim. I never said it won't allow for higher transistor counts; I said it won't be the generous 50% increase that you claim. But even if Nvidia hypothetically did all of these things, you still have an unfounded optimism about a 50% performance increase, despite the data we already have saying otherwise. Like I said, a 50% performance-per-watt increase is totally doable and believable, and is something to be commended. But you're predicting something that just isn't going to happen because of how absurdly expensive it would be.
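To spell out the distinction being argued here, a small hypothetical in Python: a 50% perf-per-watt gain only becomes a 50% performance gain if the new part spends the same power budget. All numbers below are made up purely for illustration.

```python
# Hypothetical baseline: 100 performance units at 280 W (made-up figures).
old_perf, old_power = 100.0, 280.0
new_ppw = 1.5 * (old_perf / old_power)  # a 50% perf-per-watt improvement

# At the same 280 W budget, the perf/watt gain translates 1:1 into performance.
print(new_ppw * 280)  # 150.0 -> +50% performance

# At a reduced 200 W budget, the same perf/watt gain yields far less.
print(new_ppw * 200)  # ~107.1 -> only about +7% performance
```

Which is why agreeing with a 50% performance-per-watt improvement is not the same thing as agreeing with a 50% performance improvement.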
     

  11. Astyanax

    Astyanax Ancient Guru

    Messages:
    17,035
    Likes Received:
    7,378
    GPU:
    GTX 1080ti
    no it isn't.
     
  12. D3M1G0D

    D3M1G0D Guest

    Messages:
    2,068
    Likes Received:
    1,341
    GPU:
    2 x GeForce 1080 Ti
By that logic, Nvidia also killed sales of their 2060 and 2070 Super cards by releasing a faster card, the 2080 Super, later on. The RX 5800/5900 will come with a higher price tag and will appeal to a different market segment, selling alongside the current 5700 cards.
     
  13. MegaFalloutFan

    MegaFalloutFan Maha Guru

    Messages:
    1,048
    Likes Received:
    203
    GPU:
    RTX4090 24Gb
LOL, that will be you with your RTX hate. Stay in your dark cave with 1990s graphics.
    I moved to next gen.
     
  14. Witcher29

    Witcher29 Ancient Guru

    Messages:
    1,708
    Likes Received:
    341
    GPU:
    3080 Gaming X Trio
    https://tweakers.net/categorie/49/videokaarten/producten/
     
  15. Aura89

    Aura89 Ancient Guru

    Messages:
    8,413
    Likes Received:
    1,483
    GPU:
    -

  16. Eastcoasthandle

    Eastcoasthandle Guest

    Messages:
    3,365
    Likes Received:
    727
    GPU:
    Nitro 5700 XT
The 5800 XT is supposed to be the "Wagyu" of beef.
     
    Last edited: Aug 2, 2019
  17. Eastcoasthandle

    Eastcoasthandle Guest

    Messages:
    3,365
    Likes Received:
    727
    GPU:
    Nitro 5700 XT
How are some going to take it if the 5800 XT is as fast as a 2080 Ti but at 2080 pricing?
RDNA 2.0 is supposed to be used. Not sure what that means yet.
But if it needs close to 300 watts to come close to a 2080 Ti, will people still complain? Are you kidding me...
     
  18. Eastcoasthandle

    Eastcoasthandle Guest

    Messages:
    3,365
    Likes Received:
    727
    GPU:
    Nitro 5700 XT
According to the latest rumors, AMD is prepared to release the 5800 series this year.
Nvidia, not so much. If we're lucky, 4th quarter 2020. Hmm, isn't this the first time Nvidia is working with Samsung for 7nm? I've not heard of them using Samsung before for gaming GPUs. This will be interesting :p
     
  19. Goiur

    Goiur Maha Guru

    Messages:
    1,341
    Likes Received:
    632
    GPU:
    ASUS TUF RTX 4080
Can't hate ray tracing when it's non-existent... there are 2 games using it, and not even fully.

So my guess is... you are feeling pretty lonely in that next gen of yours and you need to do some weird flexing with that 2080 Ti of yours. Stay strong; in 2-3 years, we will be there with you, with plenty of games and 400€/$ GPUs.
     
  20. Aura89

    Aura89 Ancient Guru

    Messages:
    8,413
    Likes Received:
    1,483
    GPU:
    -
States non-existent, then immediately states 2 games exist. Hmm....

1+1=2, right? And 0 is non-existent, right? So 2=0?
    1. Battlefield V
    2. Metro Exodus
    3. Shadow of the Tomb Raider
4. Quake II RTX (Yes, I will add this; it's an old game, who cares, it has ray tracing, and it is a game)
    5. Stay in the Light (This game requires ray-tracing)
6. Assetto Corsa Competizione (I won't count this one, as I personally can't find much information about its ray tracing; it says it uses RT cores, but who knows)
    So 5=2=0? Hm....

Not to mention the games confirmed to be coming out.
     
    Maddness and MegaFalloutFan like this.
