
AMD’s Lisa Su Hints that high-end 7nm NAVI GPUs are on the way

Discussion in 'Frontpage news' started by Hilbert Hagedoorn, Aug 1, 2019.

  1. sykozis

    sykozis Ancient Guru

    Messages:
    20,990
    Likes Received:
    638
    GPU:
    MSI RX5700
    I read a "fairly in-depth and specific discussion" that contained a lot of conjecture....

    As to that claim of me being an AMD fanboy..... I'm the world's worst fanboy....lol
    NVidia: MX200 (x2), MX400 (x2), FX5700XT, GF6200, GF6800, GF7300, GF7600GT, GF8600GT (x2), GF9600GT, GT210, GT220, GTS240, GTS250, GTX275, GTX460, GTX560Ti, GT640, GTX660 (x2), GTX970, GTX1660Ti
    AMD/ATi: Rage Pro, 9200SE (x2), 9600XT, x700 Pro, HD2400, HD4850, HD7870, HD7950, R5 240, RX470, RX5700

    You, on the other hand, have spent this entire thread claiming AMD is "5 years behind" NVidia... however, the performance shows the contrary. Actual facts say you're wrong. Your response to those facts seems to be to compare AMD's current products to NVidia's future (unreleased) products. If someone needs a graphics card TODAY, they are going to be looking at cards that are currently available. They aren't going to be looking at cards that are still months away at best.
     
    airbud7, carnivore and Loophole35 like this.
  2. schmidtbag

    schmidtbag Ancient Guru

    Messages:
    4,341
    Likes Received:
    1,310
    GPU:
    HIS R9 290
    @ttnaugmada
    No, I don't understand what the Radeon VII or Vega 64 have to do with this, specifically. You're comparing apples to oranges here. Nvidia already had a clock speed advantage for a while, so they're not going to get the same 15% increase. Even that I can't say with confidence, because again, this isn't a good comparison and we don't have sufficient data.
    Nvidia could squeeze better performance per watt out of the die shrink than what R7 got over V64, while having a minimal clock boost. It could be the exact opposite. Maybe both will be better. Maybe worse. We don't know, because the architectures are so drastically different. So your comparison is moot.
    Ryzen is made in the same facilities, and it too achieves very different clock speeds than the R7. From another perspective, Zen 2 didn't get much of a clock boost over Zen+. Sure, Ryzen is a smaller die, but if die size correlated with max frequency, then this 600mm^2 die you propose ought to clock even worse than you predict.

    Do you see the pattern here? There are WAY too many variables involved for you to be making a prediction on clock speed, but considering how 7nm seems to overall be worse at overclocking than larger nodes, I'm pretty sure Nvidia is not going to squeeze a 15% improvement there.

    EDIT:
    Also, V64 to R7 was 14nm to 7nm; Nvidia would be going from 12nm to 7nm. Not a huge difference, but shrinking a node doesn't appear to be a simple job, and it means the R7's gains won't be as drastic as you think they'll be for Nvidia.


    For the record, I never said that Nvidia can't achieve a sub-300W TDP on a 50% die increase of the TU102. Of course they can do that. What I don't agree with is your claims of a 50% performance improvement. If you meant a 50% increase in performance-per-watt, that I would totally agree with. But if that's what you meant, you didn't do a good job clarifying that.
     
    Last edited: Aug 2, 2019
  3. Aura89

    Aura89 Ancient Guru

    Messages:
    7,581
    Likes Received:
    846
    GPU:
    -
    Haven't seen where you back up the 5-year thing. It seems like you've shortened it to 3 years, since you keep talking about Pascal, which is 3 years old, rather than Maxwell.

    As to being on par with Pascal in performance/watt, let's go with Pascal then. The 1080 Ti and RX 5700 XT go head to head more often than not, with the 5700 XT typically leading at 1440p and up.

    RX 5700 XT: 10.3 billion transistors, 225 W TDP (closer to 200 W in real usage, see diagram below)
    GTX 1080 Ti: 12 billion transistors, 250 W TDP (closer to 280 W in real usage, see diagram below)

    [image: real-world power consumption chart]

    So we have an RX 5700 XT trading blows with a 1080 Ti, 3-year-old tech, while drawing roughly 75 fewer watts. So how does that match your words: "Navi is about on par with Pascal in perf/watt... while needing to be on 7nm to do it. Do the math."?

    I'm doing the math, and I'm not seeing your reasoning yet for stating it's 5 years behind Nvidia, or 3 years. Now, I get that it's 7nm vs 12nm; in fact, you get that too, as you stated it in what I quoted, and you could say that Navi isn't as good as the 1080 Ti's architecture, since at 16nm it would require more power. You might be right, but there's no way to realistically say so, as we can't test it. We don't know how much less the 1080 Ti would use at 7nm, and we don't know how much more the RX 5700 XT would use at 12/16nm. Sure, you could say "well, TSMC states that 7nm uses (insert percentage here) less power", but that's a blanket number dependent on how the company using their fabs constructs its dies. (See Intel's 10nm CPU release last year, which used more power and performed worse than its 14nm counterparts.)

    But that's not what you said either way; you stated it was on par with Pascal in performance per watt while on 7nm, and that's clearly wrong.

    Here are the facts:

    The RX 5700 XT gets similar performance to the 1080 Ti while needing 1.7 billion fewer transistors and roughly 75 fewer watts to do it. That does not say "AMD is 3 years behind Nvidia", let alone 5.

    Now fast-forward to today, where the RX 5700 XT trades blows with the 2070 Super:

    RX 5700 XT: 10.3 billion transistors, 225 W TDP (closer to 200 W in real usage, see diagram above)
    RTX 2070 Super: 13.6 billion transistors, 215 W TDP (closer to 210 W in real usage, see diagram above)

    So now the RX 5700 XT uses 5 watts less than the 2070 Super, and uses 3.3 billion fewer transistors to do it (harder to quantify, since RTX has dedicated ray-tracing cores, etc.). If you apply your same statement as before, which is what it realistically is, it would read: "Navi is about on par with Turing in perf/watt... while needing to be on 7nm to do it. Do the math." And I'd agree, 100%; if Nvidia were on 7nm, you could easily expect them to be better in performance per watt than Navi is.

    So is Navi 5 years behind Nvidia? Turing? No. Is it 3 years behind Turing? Getting closer, but I can't say for certain. Are they right on the money with Nvidia? I'd say not.

    Using performance per watt as the basis for judging architectural advancement, and taking 7nm vs 12/16nm into consideration, Navi appears to land somewhere between Pascal and Turing.

    Not 5 years, not 3 years.
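The back-of-the-envelope numbers above can be put into a quick sketch. This assumes roughly equal gaming performance in each pairing (as the post argues), so the perf/watt ratio reduces to the inverse ratio of the estimated real-world power draws quoted above; the figures are the poster's estimates, not official specs.

```python
# Estimated real-world draws quoted in the post (not official TDPs).
cards = {
    "RX 5700 XT":     {"transistors_b": 10.3, "watts": 200},
    "GTX 1080 Ti":    {"transistors_b": 12.0, "watts": 280},
    "RTX 2070 Super": {"transistors_b": 13.6, "watts": 210},
}

def advantage(a, b):
    """If performance is ~equal, the perf/watt ratio of a over b
    is just the inverse ratio of their power draws."""
    return cards[b]["watts"] / cards[a]["watts"]

print(f"5700 XT vs 1080 Ti:    {advantage('RX 5700 XT', 'GTX 1080 Ti'):.2f}x perf/watt")
print(f"5700 XT vs 2070 Super: {advantage('RX 5700 XT', 'RTX 2070 Super'):.2f}x perf/watt")
```

On these assumptions the 5700 XT's perf/watt lead over the 1080 Ti (~1.4x) is much larger than its lead over the 2070 Super (~1.05x), which is the "between Pascal and Turing" argument in numeric form.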
     
    carnivore likes this.
  4. schmidtbag

    schmidtbag Ancient Guru

    Messages:
    4,341
    Likes Received:
    1,310
    GPU:
    HIS R9 290
    Last I heard, TSMC themselves claim better gains than what AMD achieved. Y'know why AMD didn't see those gains? Because not every architecture is the same.
    lol um... no. We don't know that. It's just your opinion to expect that.
    How dense are you? What you said there is the whole reason I brought them up: different architectures don't clock the same way. So for you to compare to what AMD did going to 7nm on their GPUs is moot. Nvidia does things pretty drastically different, hence, in your words, being "5 years ahead".
    Actually, you're the one who keeps moving the goalpost. You're the one who brought up frequency, not me. That's your variable that you introduced.
    Well then, you're most likely going to be wrong.
    The size of the die isn't what matters most; it's the transistors themselves. Do you seriously think a 680mm^2 die on 7nm is going to be affordable? Nobody is going to buy that.
    I never said there wasn't going to be a clock increase. I'm saying a 15% increase is, like everything else you claim, highly optimistic and deliberately ignorant of other variables.
     

  5. Aura89

    Aura89 Ancient Guru

    Messages:
    7,581
    Likes Received:
    846
    GPU:
    -
    Sorry, but no, it's not. This is as far as I need to read into your reply, as something this nonsensical is just bad. I can only imagine what the rest of the nonsense in your post states.
     
  6. Alienwarez567

    Alienwarez567 Active Member

    Messages:
    55
    Likes Received:
    8
    GPU:
    Gigabyte GTX 1080 G1
    Would be nice to have options to challenge Nvidia
     
  7. Jawnys

    Jawnys Member Guru

    Messages:
    112
    Likes Received:
    14
    GPU:
    zotac amp extreme 1080ti
    The 1080 Ti was so good that there is no worthwhile upgrade for it at the moment, unless you're willing to pay the RTX tax for the 2080 Ti. At this point RTX is really wasted money, since there are basically no games worth playing that take advantage of it. So I'm hoping AMD comes up with a card around 2080 Ti performance for half the price. Nvidia came out too early with the RTX tech. If they brought out a GTX version of the 2080 Ti, I would buy it day one and replace my 1080 Ti with it, but they can't do that now; they would kill the 2070/2080 market by doing so.
     
    MonstroMart likes this.
  8. Andrew LB

    Andrew LB Maha Guru

    Messages:
    1,046
    Likes Received:
    133
    GPU:
    EVGA GTX 1080@2,025
    And as usual, AMD kills sales of their current cards by blabbing about even faster cards that won't be out for many months.
     
  9. anticupidon

    anticupidon Ancient Guru

    Messages:
    3,905
    Likes Received:
    693
    GPU:
    integrated
    As usual, these threads are the ground where some gurus vent their little egos and flex their "information sources".
    If you really have something constructive to say, have your say.
    But please, do everyone a big favour and keep it to yourself if you have nothing productive to add.
    I'm no mod here; I just wanted to remind everyone that this forum has an etiquette and some rules.
    Let's keep it civil, folks.
     
    airbud7, carnivore and Loophole35 like this.
  10. vdelvec

    vdelvec Member Guru

    Messages:
    148
    Likes Received:
    13
    GPU:
    Nvidia RTX TITAN
    I'm sorry... did you say "Nvidia came out too early with RTX"? Nothing is ever "too early". It's right on time. If developers of any form of entertainment or media had the mindset that "it's too early for this", then new ideas and forward thinking would be stifled and we'd never get anything new. Nvidia pushed the boundaries and took chances. Do these gambles always pay off? Absolutely not, but without forward thinking and people pushing for what was previously unheard of, unthinkable, or unobtainable (even if the final implementation is not ideal), we'd still be on dual- and quad-core mainstream CPUs, MIDI sound files, ISA slots on motherboards, and AGP video card slots, and HDR and 4K wouldn't be a thing. I could go on and on.

    Yeah, RTX isn't optimal, but Nvidia started an (r)evolution with RTX, and now consoles are adopting hardware-based ray tracing, more GPUs are coming out with it, and the adoption rate is steadily increasing. I don't know about you, but I'm glad I'm not still playing games on my Nintendo GameBoy.
     
    Maddness likes this.

  11. schmidtbag

    schmidtbag Ancient Guru

    Messages:
    4,341
    Likes Received:
    1,310
    GPU:
    HIS R9 290
    A processor is just a processor. Transistors are transistors. It doesn't matter whether it's a CPU or a GPU. What matters is the architecture itself. Nvidia has a very different architecture from AMD, which is why they get different performance numbers and different clock rates.
    How do you not understand that?
    This is why I'm arguing against you regarding clock speeds. You can't predict what Nvidia will accomplish based on what AMD did with their GPUs. They could do better, and they could also do worse. You have no evidence of what they'll do.
    Yes, you have. Every time I mention a reason why your 50% performance improvement isn't doable, you introduce another variable.
    Your "real world examples for Vega", as I have repeatedly told you are irrelevant and are not a good metric as to what Nvidia can do.
    Um... have you not heard the issues regarding transistors this size? Why do you think Intel's 10nm node is taking so long, despite actually being functional 2 years ago? Why do you think none of AMD's products clock very high?
    We're reaching the physical limits of silicon. Just a few years ago, people predicted 7nm wasn't even possible due to quantum tunneling. A higher frequency means more voltage, and more voltage increases the probability of leaking electrons.
    Nvidia is already getting decently high clock speeds for such a large die and complex architecture. It really is unrealistic if you honestly think they're going to get a 15% speed boost by going to 7nm.
    Y'know why AMD got that much speed when going from V64 to R7? It's because V64 was inefficient. With proper cooling and power delivery, it could go higher, but it was reaching TDP limits. By going to 7nm, they alleviated some of that overhead, giving them a healthy frequency boost. Nvidia doesn't have this problem.
    Name 1 variable I mentioned that doesn't exist.
    I never said a node shrink can't bring clock speed increases; I said it won't be the generous 15% you claim. I never said it won't allow for higher transistor counts; I said it won't be the generous 50% increase you claim. But even if Nvidia hypothetically did all of these things, you still have an unfounded optimism about a 50% performance increase, despite the data we already have saying otherwise. Like I said, a 50% performance-per-watt increase is totally doable and believable, and would be something to commend. But you're predicting something that just isn't going to happen, because of how absurdly expensive it would be.
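The distinction between those two claims is easy to show with toy numbers (hypothetical, purely to illustrate): a part can gain 50% in performance per watt without gaining any absolute performance at all, because the entire gain can come from reduced power.

```python
# Hypothetical numbers: same performance, one third less power.
old_perf, old_watts = 100, 300
new_perf, new_watts = 100, 200   # equal performance, lower draw

old_ppw = old_perf / old_watts
new_ppw = new_perf / new_watts

ppw_gain  = new_ppw / old_ppw - 1    # +50% perf/watt
perf_gain = new_perf / old_perf - 1  # no performance gain at all

print(f"perf/watt gain: {ppw_gain:+.0%}, performance gain: {perf_gain:+.0%}")
```

So "50% better perf/watt" and "50% more performance" are independent claims; conflating them is exactly the ambiguity being called out here.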
     
  12. Astyanax

    Astyanax Ancient Guru

    Messages:
    3,061
    Likes Received:
    789
    GPU:
    GTX 1080ti
    no it isn't.
     
  13. D3M1G0D

    D3M1G0D Ancient Guru

    Messages:
    1,867
    Likes Received:
    1,197
    GPU:
    2 x GeForce 1080 Ti
    By that logic, Nvidia also killed sales of their 2060 and 2070 Super cards by releasing a faster card, the 2080 Super, later on. The RX 5800/5900 will come with a higher price tag and appeal to a different market segment, selling alongside the current 5700 cards.
     
  14. MegaFalloutFan

    MegaFalloutFan Master Guru

    Messages:
    618
    Likes Received:
    73
    GPU:
    RTX 2080Ti 11Gb
    LOL, that will be you with your RTX hate. Stay in your dark cave with 1990s graphics.
    I moved to next gen.
     
  15. angelgraves13

    angelgraves13 Maha Guru

    Messages:
    1,299
    Likes Received:
    278
    GPU:
    RTX 2080 Ti FE
    AMD will compete with the 2080 Ti, likely at $799.

    Nvidia won’t have 7nm until the middle of next year.

    I still consider RTX useless for 4K unless Nvidia can double up the RT cores or improve IPC for 7nm.
     

  16. Witcher29

    Witcher29 Maha Guru

    Messages:
    1,094
    Likes Received:
    80
    GPU:
    1080 Ti Gaming X
    https://tweakers.net/categorie/49/videokaarten/producten/
     
  17. Aura89

    Aura89 Ancient Guru

    Messages:
    7,581
    Likes Received:
    846
    GPU:
    -
  18. Eastcoasthandle

    Eastcoasthandle Ancient Guru

    Messages:
    2,118
    Likes Received:
    201
    GPU:
    R9 Fury
    The 5800 XT is supposed to be the "Wagyu" of beef.
     
    Last edited: Aug 2, 2019
  19. Eastcoasthandle

    Eastcoasthandle Ancient Guru

    Messages:
    2,118
    Likes Received:
    201
    GPU:
    R9 Fury
    How are some going to take it if the 5800 XT is as fast as a 2080 Ti but at 2080 price?
    RDNA 2.0 is supposed to be used; not sure what that means yet.
    But if it needs close to 300 watts to come close to a 2080 Ti, will people still complain? Are you kidding me...
     
  20. Eastcoasthandle

    Eastcoasthandle Ancient Guru

    Messages:
    2,118
    Likes Received:
    201
    GPU:
    R9 Fury
    From the latest rumors, AMD is prepared to release the 5800 series this year.
    Nvidia, not so much; if lucky, 4th quarter 2020. Hmm, isn't this the first time Nvidia is working with Samsung for 7nm? I've not heard of them using Samsung before for gaming GPUs. This will be interesting :p
     
