
Lisa Su confirms Q3 launch for Ryzen, Epyc and NAVI

Discussion in 'Frontpage news' started by Hilbert Hagedoorn, May 16, 2019.

  1. MonstroMart

    MonstroMart Master Guru

    Messages:
    416
    Likes Received:
    107
    GPU:
    ASUS GTX 1070 Strix
    Why do we absolutely need to be disappointed beyond belief if it doesn't perform better than Vega 56 and 64? Why would expecting it to be a rebrand of Vega 56 and Vega 64 make it any less of a disappointment? Can't we expect AMD to deliver and then, if they don't, just shrug and say it's more of the same?

    I dunno, I just don't see the point of not expecting a company to improve upon its past products just to avoid being disappointed. I've had enough disappointment for 2018-2019 with RTX and Radeon VII. More won't change my overall level of disappointment. So yeah, I'll wait and see and hope AMD delivers. I don't expect them to release a 2080 Ti equivalent, but if they release a 2070 Ti for a fair price it will make my day.
     
  2. nevcairiel

    nevcairiel Master Guru

    Messages:
    559
    Likes Received:
    172
    GPU:
    MSI 1080 Gaming X
    That's when it'll probably be announced, at Computex. Actual release a few weeks later, hopefully in July.
     
  3. ttnuagmada

    ttnuagmada Active Member

    Messages:
    99
    Likes Received:
    25
    GPU:
    1080 Ti SLI @2101
    Sorry mate, guess I don't have Lisa Su on stage using the exact words "Navi is still GCN". If you want to ignore all of the evidence out there, then be my guest (two 8-pin connectors on a mid-range part, etc.). When it does launch, and it's literally Polaris with GDDR6 and higher clocks, I'm sure you'll tell us how great it still actually is because you can undervolt it to Nvidia 12nm perf/watt.

    Now let me fill you in. Most chips being made on it are sub-100mm² low-power parts. It's further along than it was a whopping 4 months ago when Radeon VII launched, but it's definitely light-years from being cheap enough to put a 10+ billion transistor GPU into a 300 dollar product. All of those phone parts are going into $1000+ phones.

    I understand that you Fine Wine™ enthusiasts will deny it till the very end, but you're setting yourself up for disappointment.
     
    Last edited: May 17, 2019
  4. kings

    kings Active Member

    Messages:
    60
    Likes Received:
    44
    GPU:
    GTX 980Ti / RX 580
    What's the point of talking about GPUs to be released somewhere in 2020? Surely in 2020 Nvidia will also have new GPUs out there...

    Since we are in this futurology thing, in 2020 Intel may also have interesting GPUs out there, we don't know...

    Vega 64 performance is plausible and would not disappoint anyone, at least not those who have a little common sense and are not deluded... It would be a perfectly good upgrade from Polaris.

    The disappointment comes from the people who are creating so much hype for Navi, that it will be difficult for the cards to meet these high expectations.
     
    Last edited: May 17, 2019

  5. sykozis

    sykozis Ancient Guru

    Messages:
    20,869
    Likes Received:
    538
    GPU:
    MSI GTX1660Ti
    GCN itself is not a uArch. Polaris is a uArch. Navi is a uArch. Navi can be GCN and still be a completely different uArch from Polaris. It helps to understand what GCN actually is before making comments about it.

    https://en.wikipedia.org/wiki/Graphics_Core_Next
     
  6. ttnuagmada

    ttnuagmada Active Member

    Messages:
    99
    Likes Received:
    25
    GPU:
    1080 Ti SLI @2101
    Right, let's go ahead and ignore the last 7 years of what AMD's GPUs have been, based on semantics. There's still hope! If you twist enough words around and stick your head deep enough in the sand, maybe you can will Navi into something other than a power-hungry underperformer! After a dozen iterations, they're finally going to deliver all of these massive efficiency and performance boosts, right before moving on to their next-gen architecture!
     
  7. sykozis

    sykozis Ancient Guru

    Messages:
    20,869
    Likes Received:
    538
    GPU:
    MSI GTX1660Ti
    Again, understanding WHAT you're talking about is rather important. It's quite clear that you have no idea what you're talking about. An instruction set and a "uArch family" name are both completely different from a uArch itself.

    Let's see if I can explain this in a way you will actually understand. "GCN" (Graphics Core Next) is as much a uArch as a submarine is a space ship.

    AMD uses GCN as a "uArch family" name. Same as Intel does with "Core" and AMD does with "Zen". Under the "Core Family" from Intel, you have more than a dozen different uArchs. Just off hand... Conroe, Allendale, Wolfdale, Kentsfield, Yorkfield, Clarkdale, Lynnfield, Arrandale, Bloomfield, Gulftown, Sandy Bridge, Ivy Bridge, Haswell, Broadwell, Skylake, Kaby Lake, Coffee Lake.... All of them fall under the "Core Family" of Intel processors.

    AMD also uses GCN as an instruction set name, just like "x86", "x87", "3DNow!", "MMX", "IA-64", "x86-64", "SSE", "AVX"...

    AMD chose to name their uArch family after the instruction set that is used. The AMD/ATI graphics processors prior to the HD4000 series were all part of the "TeraScale" family of GPUs. Conveniently, "TeraScale" was also the name of the instruction set that those cards all used. "TeraScale" did not refer to a specific uArch either.

    Yes, Navi will be "GCN". It will be part of the GCN uArch family and use the GCN instruction set. The "Navi" uArch itself, will be quite different from Polaris though.
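The family-vs-uArch distinction sykozis draws can be sketched as a simple mapping. This is purely illustrative — the member lists are a small sample based on the names mentioned in this thread, not an exhaustive or authoritative taxonomy:

```python
# Illustrative sketch: a "family" name (GCN, Core) groups many distinct
# microarchitectures; the family name is not itself a microarchitecture.
uarch_families = {
    "Core": ["Conroe", "Sandy Bridge", "Haswell", "Skylake", "Coffee Lake"],
    "GCN":  ["Tahiti", "Hawaii", "Polaris", "Vega", "Navi"],
}

# "Navi is GCN" and "Navi is a different uArch from Polaris" can both
# be true at once: same family, different members.
assert "Navi" in uarch_families["GCN"]
assert "Polaris" in uarch_families["GCN"]
```

On this reading, the two posters are arguing past each other: one uses "GCN" to mean the family/ISA label, the other to mean the base design shared by its members.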
     
  8. ttnuagmada

    ttnuagmada Active Member

    Messages:
    99
    Likes Received:
    25
    GPU:
    1080 Ti SLI @2101
    Like I said. If semantics makes you feel better about the disappointment you're about to experience, then by all means, go ahead and pretend that the specific definition of a micro-architecture (I literally never even used the term) will change the fact that the base design is the same as it was in 2012 (wow, it's almost like a complete change to an instruction set coincides with a hardware change), and coincidentally happens to line up with what everyone everywhere has referred to as "GCN" for the last 7 years.

    Maybe if you nitpick enough people about how they've been using terminology wrong this entire time, it will give Navi the power to overcome the fact that it's going to be a power-hungry underperformer that just happens to look an awful lot like all of the previous GPUs that used the "GCN instruction set", which the entire internet has been calling "GCN" this whole time.

    Thanks for clearing all of that up! I'm sure all of those Wikipedia copy-pastes make you feel much more intelligent now that you've won an argument no one else was having! Now people won't be confused about what I was referring to, despite already knowing what I was referring to.

    From now on, I'll specifically refer to all of the AMD GPUs made since 2012 as "that unchanging GPU design that also happens to use the GCN instruction set", so as not to cause any confusion.

    Oh wow! It's almost like that's how I and everyone else have been using the term in this thread the entire time, and your post was completely pointless and tryhard!
     
    Last edited: May 17, 2019
  9. Fox2232

    Fox2232 Ancient Guru

    Messages:
    8,988
    Likes Received:
    1,794
    GPU:
    -NDA +AW@240Hz
    I bet that not most, but all chips being made are sub-100mm² as you wrote. Radeon 7 is just an older Pro GPU given to the consumer market.
    Your $300 product statement is wrong again, since you do not really know how much Vega on 7nm costs to make... And funnily enough, I could see you and others claiming that it is not the GPU itself, but the 4x 4GB of HBM2 that makes up a large portion of Radeon 7's price tag :D How those things twist like a snake when needed...

    And your phone statement is false again. Next time, make bad claims with a plausible-deniability statement. Sorry, not "all" of those parts go into $1000 phones.
    I have a lovely new phone with a Snapdragon 855 for $440. (No discount from an operator or anything like that. It is from a regular shop.)
    So tell me how much of that price is the chip made on 7nm. How much is the 6GB of RAM. The three cameras (48Mpix, another with optical zoom, and a last one with a wide-angle lens). How much is the display with a built-in fingerprint scanner. And so on.

    Then tell me how much money the manufacturer, supply chain, and shops made on that phone. Once you reverse the end cost down to that 73mm² chip made on 7nm, you'll realize that it is not very expensive to make, as everyone in the process of making such a phone does it to make money.
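The "reverse the end cost" exercise reads as back-of-envelope arithmetic. A minimal sketch, with the caveat that every figure below except the $440 retail price is a hypothetical placeholder, not a real bill-of-materials number:

```python
# Hypothetical breakdown of a $440 phone's retail price; only the
# retail price comes from the post above, the rest are placeholders.
retail_price = 440.0
margin_share = 0.35   # assumed retailer/distribution/brand share of retail
other_bom = 180.0     # assumed display, cameras, RAM, battery, assembly

# What remains as an upper bound for the 73 mm^2 7nm SoC:
soc_ceiling = retail_price * (1 - margin_share) - other_bom
print(round(soc_ceiling, 2))  # -> 106.0, under a quarter of the retail price
```

The exact split is unknowable from outside, but the point of the exercise survives any reasonable choice of placeholders: the SoC can only be a modest fraction of a $440 phone.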

    Navi is not a behemoth; it is made to be cost-efficient the same way as AMD's CPUs. You know what's expensive to make? nVidia's top-tier GPUs on that 12nm you mentioned. They made the biggest GPU they could on a given manufacturing process, the same way they did 20 years ago when they almost went out of business.
    (That time, it saved them; this time they get flak for it.)
    = = = =

    And if you want to throw ad hominems at people like "you Fine Wine™ enthusiasts": I actually spent a lot of time explaining to people here and elsewhere that nVidia is not overpricing those new GPUs. And the same can be said about my view of the new things built into Turing.

    I look at things for what they are and judge that. I am not saying that Navi can't be another GCN. It is possible, but there is no clear evidence. And known things hint that the amount of work put into Navi is worth around 4 generations of GCN, plus details from AMD's GPU-related patents... Maybe it is still GCN, but then it went through such a big redesign that you can as well stop calling it that. Or AMD sat on their hands for all those years. (But that would mean their statement that even CPU division people went to help with Navi was false.)
    = = = =
    So we are at square one again. Your statements were mostly false, a few misdirected at best. Do a better job at whatever your intent is.
     
  10. ttnuagmada

    ttnuagmada Active Member

    Messages:
    99
    Likes Received:
    25
    GPU:
    1080 Ti SLI @2101
    I'm glad you so freely admit that you literally need to put words in my mouth to make an argument.

    fair enough

    If Navi is literally nothing but a 7nm Polaris, then it will still be twice that size. You've mostly been insisting that we should expect efficiency improvements, the whole purpose of which would be to allow them to make a bigger chip. So which is it? Is AMD making a shrunken Polaris-sized chip that they're going to blast with voltage and crank as high as it will go, allowing them to avoid how expensive a 7nm chip is to make? Or did they improve the efficiency of the design, allowing them to make something bigger, which would be cost-prohibitive?


    This comment shows a complete and fundamental lack of understanding of why Nvidia is so far ahead of AMD. Nvidia's design is so efficient that they were able to scale it up to the physical limits of the 12nm process itself without it being too power-hungry. They didn't have to do it at all; they did it because they could. They certainly weren't feeling any pressure from AMD to do so, that's for sure. AMD can't even match that performance on a smaller node, because they couldn't have made Vega 20 bigger even if they wanted to.


    fair enough

    My statements are based on history and the large amount of information that's out there. We saw this exact denial with both Polaris and Vega prior to their release. We have a Navi PCB with two 8-pin connectors. We have AMD admitting that the Radeon VII will still be their top-tier product. We have 7 years of history of small iterations. We have slides from AMD that don't even attempt to make Navi look like it's going to be anything special. The writing is on the wall. Do these things individually prove anything? Of course not, but taken together they say a whole heck of a lot.
     

  11. Fox2232

    Fox2232 Ancient Guru

    Messages:
    8,988
    Likes Received:
    1,794
    GPU:
    -NDA +AW@240Hz
    Did I? Can you do a cost analysis of the Radeon 7 GPU? And then approximate the cost of a GPU with half the transistor count? With 3/4 the transistor count? You'll be as surprised as with the cellphone example.
    Please do not go into that land. It is pointless to even fantasize that AMD really sat on their hands doing nothing since they finalized the Polaris design.
    Is it that I do not understand? Or is it you who missed that nVidia was ahead of AMD by not delivering features that weren't needed? Because now, with Turing, nVidia has the same gaming performance per transistor at the same clock as Vega. They added too many transistors for the sake of compute, which they did not deliver properly before.
    And once normalized, nVidia's power efficiency advantage is exactly the same as AMD's compute advantage. Think about it for a moment. (Yes, it is 7nm vs 12nm, but let's not go back to 7nm Polaris=Navi ideas.)
    You could have seen it; I may have seen it. I had a Fury X and always stated that my next upgrade would not be from Polaris, but from Navi. Some of us knew that Polaris was mainstream-oriented.
    Yes, it has been known for years that Navi will first come as mainstream and may not come as high-end. But remind yourself of what's on that slide: "Q3".
    It means that Navi is not replacing Radeon 7 in Q3. And why would it? Radeon 7 is a 16GB-equipped card meant for people who need it.

    From the pure kindness of my heart, I'll remind you and some others with a few images. It may light some bulbs:
    (image: AMD GCN/Navi roadmap slide)
    1st Gen: 2011Q1
    2nd Gen: 2013Q1
    3rd Gen: 2014Q3
    4th Gen: 2016Q2 - release matches planned time frame on roadmap
    5th Gen: 2017Q1 - release matches planned time frame on roadmap
    Navi: 2019Q3 - release 6 quarters after the planned time frame on the roadmap; in the works since the 4th Gen design was complete.
     
  12. ttnuagmada

    ttnuagmada Active Member

    Messages:
    99
    Likes Received:
    25
    GPU:
    1080 Ti SLI @2101
    Oh really? You extrapolate this from a single phone using a tiny die?


    They've done very little since they finalized Hawaii. I'm not sure where your confidence comes from.

    Only by ignoring the RT/Tensor cores can you make this claim.

    Bold is what you need to think about.

    Radeon VII had its compute performance gimped. It's a gaming card. It being their top gaming card means that Navi sure as crap isn't going to be matching a 2080. Mark my words. We'll get Vega 64 performance at Nvidia 12nm perf/watt.


    fair enough
     
  13. Fox2232

    Fox2232 Ancient Guru

    Messages:
    8,988
    Likes Received:
    1,794
    GPU:
    -NDA +AW@240Hz
    Why not? That tiny die has about the same transistor count as the RX 580.
    That "very little" is well known to me, since I compiled a list of the changes and the magnitude of improvement those changes brought to GCN. That's why "very little" is not exactly appropriate. Sure, AMD could have done more, but back then they did not have a large budget to do so. And later they needed those resources to do a good Zen design. Those obstacles were gone once the first Zen design was completed.
    Those are part of the SM. I originally thought they would be a separate thing. But...
    Really, it is 7nm Vega, a design released over 2 years ago. Enough time for AMD to drop another 2 GCN generations, as visible from the list. That's why I mentioned that expecting Navi to be some kind of no-work-done shrink is not going to get you anywhere in this discussion. (Or you can as well claim such a thing directly, since you point your fingers in that direction as often as you can.)
    Radeon 7 is still 30% faster in compute than the equally priced RTX 2080 in FP16/32 and 10 times as fast in FP64. Radeon 7's FP16/32 compute matches the RTX 2080 Ti, and it is still 8 times as fast in FP64, while the RTX 2080 Ti has 41% more transistors. By all means, AMD invested more into compute capability and nVidia into gaming. But since nVidia is finally delivering reasonable compute performance, gaming is going to benefit from compute power one way or another.
    And mind that I explicitly wrote: "It means that Navi is not replacing Radeon 7 in Q3."
    Whether there is a bigger Navi in Q2 2020, I do not know. I would expect it. But it may be another generation (code name), another bold design (iteration). That's because Navi should have been completed on the design side quite some time ago, and the teams should have moved on to the next "big" thing.

    I'll ask you this: Why do you think Navi was delayed by 6 quarters? Is it because AMD did nothing? Is it because AMD did something, but it never worked? Is it because they waited for 7nm to be economical?
    (Try to exclude yourself from the routine about AMD being incompetent; Zen spoke to that a long time ago.)
     
  14. ttnuagmada

    ttnuagmada Active Member

    Messages:
    99
    Likes Received:
    25
    GPU:
    1080 Ti SLI @2101
    Because you have Vega 20, with not even twice as many transistors, yet it is 4x larger.


    I obviously know it's not going to be exactly the same, but it's just going to be another 5-10% architectural efficiency gain, which is just a drop in the bucket toward catching up with Nvidia.

    You need look no further than V100 to see that Nvidia could easily pour on the compute performance in the gaming sector the second it thought it needed to without losing a step in gaming performance efficiency.

    7nm becoming economically viable is why it was delayed. They clearly couldn't do much more with 14nm, and since they make such tiny architectural efficiency gains with every release, they have to rely on a node shrink to release something that makes any sense. It's that simple.
     
  15. vbetts

    vbetts Don Vincenzo Staff Member

    Messages:
    14,401
    Likes Received:
    928
    GPU:
    Nvidia Geforce GTX 960M
    I've got no issues if you want to argue here, but keep it civil please, guys. There have been a couple of posts here that are borderline rude.
     
    fantaskarsef and MonstroMart like this.

  16. Fox2232

    Fox2232 Ancient Guru

    Messages:
    8,988
    Likes Received:
    1,794
    GPU:
    -NDA +AW@240Hz
    We were already there, weren't we? You just skipped the cost analysis of the chip, as even a rough approximation would make Navi quite a bit cheaper.
    Reference your bottom quote...
    Like Turing again? So many more transistors for compute, little gaming improvement. Compute is not free. Just because there is a chip that has even more transistors does not mean that the 2080Ti could be that strong without that transistor investment.
    So, they did a 10% efficiency improvement a year and a half ago, and then called it a day waiting for 7nm, right? So we are back in the "incompetent AMD" category. Spin class 101.

    Edit: And just a side note: I would love to have been in AMD's room when they decided:
    "This is going to be the last GCN iteration. Let's spend a lot of time and resources to make it as good as GCN can be before we drop it, for no further benefit."
    Then, two and a half years later, another meeting room: "Well, we made this last GCN, hurray. What about spending another year on it to fine-tune everything?"

    I would be laughing at that moment right in their faces. Because such an investment into something that has no future should make everyone in that room laugh too.
     
    Last edited: May 17, 2019
  17. mohiuddin

    mohiuddin Master Guru

    Messages:
    720
    Likes Received:
    39
    GPU:
    GTX670 4gb ll RX480 8gb
    I am gonna need some popcorn here....:p:p
     
    MonstroMart likes this.
  18. ttnuagmada

    ttnuagmada Active Member

    Messages:
    99
    Likes Received:
    25
    GPU:
    1080 Ti SLI @2101
    I'm honestly not even following you on this one.

    Now you're just being disingenuous. The added hardware of Turing is obviously very specialized. And V100 is a perfect example of how adding compute hardware isn't an excuse for why AMD's GPUs are so inefficient. That hardware is there, and it's still a more efficient gaming GPU than Vega 20, while being on a larger node.

    AMD's GPU division has been in incompetent territory for years now.

    Right, because missed goals and unpredictable fab schedules aren't a thing.

    We'll all see soon enough. I'll enjoy reminding you how this turns out.
     
  19. Fox2232

    Fox2232 Ancient Guru

    Messages:
    8,988
    Likes Received:
    1,794
    GPU:
    -NDA +AW@240Hz
    V100 has 59.5% more transistors than Radeon 7, to have 5% higher FP16/32 and just double the FP64. Want to bring Pro cards into this? Then let's take into account that Radeon 7 is a cut-down, partially disabled MI60. AMD's MI60 actually uses all of its 13.23 billion transistors and is faster in FP16/32/64 than the V100, which has 21.1 billion transistors. Bringing V100 into this made no sense from the start, as there was no reason to contest that AMD simply invested more into compute.
    The same could have been said about their CPU division. But hey, how incompetent are they from the MI60-vs-V100 perspective now? You just want them to be incompetent, even after they have shown for a very long time that they are not.
    There is nothing to remind me about. Unless you forgot already that I had to remind you of the "Maybe it is still GCN" I wrote multiple times.

    Think about the way you treat officially unconfirmed information, which is often extrapolated from something only vaguely related.
    While most people recognize it as maybe, maybe not, you present one side of the maybe-coin as the only existing option, which you defend very strongly. Is that even rational? Or is that a borderline belief system?

    I do not know many related things, as nobody outside AMD's inner circle knows. And I recognize that. Try to do the same. Then add some basic logic marks to your statements. Or you can go a bit further and put the probability of being true on the source information you used to make such statements.
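The transistor-count ratios in this exchange are easy to sanity-check. The Radeon VII/MI60 and V100 counts come from the post above; the RTX 2080 Ti count is TU102's publicly stated figure, added here as an assumption to check the earlier "41% more" claim:

```python
radeon7 = 13.23e9   # Radeon VII / MI60 transistor count (from the post)
v100 = 21.1e9       # V100 transistor count (from the post)
ti2080 = 18.6e9     # RTX 2080 Ti (TU102) public figure, not stated in-thread

extra_v100 = (v100 / radeon7 - 1) * 100
print(round(extra_v100, 1))  # -> 59.5, matching the "59.5% more" figure

extra_ti = (ti2080 / radeon7 - 1) * 100
print(round(extra_ti, 1))    # -> 40.6, roughly the "41% more" claimed earlier
```

So both percentage claims in the thread are internally consistent with the raw counts being quoted.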
     
  20. Denial

    Denial Ancient Guru

    Messages:
    12,110
    Likes Received:
    1,251
    GPU:
    EVGA 1080Ti
    Yeah, but V100 has Tensor cores that are dedicated solely to INT8/INT4, and it more than doubles the performance of both operations compared to the MI60. The MI60 has 59 TOPS of INT8 and 118 TOPS of INT4; the V100 has 130 and 260 respectively. Now, I doubt those make up the entire transistor difference, but it's probably a fair percentage of it.

    That being said, I think AMD's GPU division is incredibly competent given the budget/financial restrictions they are operating under. GCN has scaled extremely well, and their effort to "steer" the industry with consoles toward their benefit was a pretty good call on their part. I'm excited to see what Navi brings, honestly.
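The "more than doubles" figure checks out arithmetically, using the TOPS values quoted in the post:

```python
mi60 = {"INT8": 59, "INT4": 118}    # TOPS, from the post above
v100 = {"INT8": 130, "INT4": 260}   # TOPS, from the post above

# V100-to-MI60 throughput ratio for each integer precision
ratios = {op: v100[op] / mi60[op] for op in mi60}
print(ratios)  # both just over 2.2x, i.e. "more than doubles"
```

Both precisions land at the same ~2.2x, which supports Denial's point that a nontrivial slice of V100's extra transistors is buying dedicated low-precision throughput rather than gaming performance.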
     
    fantaskarsef and Fox2232 like this.
