AMD: 7nm Navi and Epyc 2 launch in third quarter

Discussion in 'Frontpage news' started by Hilbert Hagedoorn, May 1, 2019.

  1. Denial

    Denial Ancient Guru

    Messages:
    14,217
    Likes Received:
    4,133
    GPU:
    EVGA RTX 3080
    Last edited: May 2, 2019
  2. JamesSneed

    JamesSneed Ancient Guru

    Messages:
    1,692
    Likes Received:
    962
    GPU:
    GTX 1070
    You neglected to account for the fact that AMD also moved all the IO onto its own chip, so they really didn't get that big of a shrink. The 7nm node is about 1.6x denser, and 7nm+ is about a 15% improvement on top of that. You won't achieve best-case scaling with a high-performance part like a GPU or desktop CPU. I would ballpark about 60% smaller for a high-performance part like Turing, so somewhere around 330-350 mm2, and that's on TSMC's 7nm+.
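    For a rough sense of the numbers, here is a back-of-the-envelope sketch in Python of the density arithmetic above (the 1.6x and 15% multipliers are the figures quoted in the post; the 12nm Turing die sizes are the public ones; ideal logic-density scaling is assumed, and in practice analog, IO, and SRAM don't shrink at the logic rate):

    ```python
    # Back-of-the-envelope shrink estimate for a hypothetical 7nm+ Turing.
    # Assumes ideal logic-density scaling, which real parts won't hit.

    turing_dies_mm2 = {"TU102": 754, "TU104": 545, "TU106": 445}  # 12nm sizes

    density_7nm = 1.6                      # TSMC 7nm vs 12/16nm, per the post
    density_7nm_plus = density_7nm * 1.15  # 7nm+ adds ~15%, ~1.84x overall

    for name, area in turing_dies_mm2.items():
        shrunk = area / density_7nm_plus
        print(f"{name}: {area} mm^2 -> ~{shrunk:.0f} mm^2 on 7nm+ (ideal)")
    ```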
     
    Evildead666 likes this.
  3. Aura89

    Aura89 Ancient Guru

    Messages:
    8,413
    Likes Received:
    1,483
    GPU:
    -
    This is true, I did forget about that.

    And yes, in truth I would expect it to be in the 300-400mm2 ballpark as well, if it were just a die shrink.

    Some people are under the assumption that AMD parts get better over time whereas Nvidia parts do not. Numerous tests have proven this incorrect: over time, both companies' products improve by about the same amount, so the performance gap between them today is roughly the same as it was at release. But some people never want to believe that.
     
    AuerX likes this.
  4. Fox2232

    Fox2232 Guest

    Messages:
    11,808
    Likes Received:
    3,371
    GPU:
    6900XT+AW@240Hz
    It is about angular speed and the distance objects on screen travel between each frame.

    Imagine you have a 90° FOV in an FPS game. You slowly turn around (180°) in 0.5 seconds. At 60fps that takes 30 frames, i.e. 6° per frame (and 1/15th of the screen width).
    Back in the days of 14'' 4:3 (28x22cm) screens, objects in the scenario above would move 1.87cm per frame.
    Now, on a 24'' 16:9 (53x30cm) screen, the same movement results in objects moving 3.53cm per frame (and you are sitting at the same distance from the screen as in the past).
    How big is your 4K screen again? Something like 32'' (71x39cm)? In the situation above, objects would move 4.73cm per frame. That's 2.5 times the distance we had in the era of CRT screens. And as those progressed towards larger sizes, we got 75/85/90Hz...

    Now imagine that you play with a 75° FOV. From frame to frame, objects would move 1/12.5th of the screen width. On 32'', that's 5.7cm per frame. And far more if you actually turned in 0.1 second, as people usually do. It's pretty hard to track objects moving that far between refreshes.

    Doubling the fps makes the situation half as bad. Hell, 120Hz/fps on a 24'' screen gets you back to the object-tracking comfort of a 14'' 60Hz screen.
    = = = =
    And then, none of the games you listed above are really games where one moves very fast. One could comfortably play all of them on a gamepad if they supported it (slow viewport movement required).
    So take the actually relevant FPS games, with combat coming from all directions... which happens to be the case for Metro and Battlefield. Those really need sufficient fps, or one is going to be a bottom feeder/corner camper.

    I do remember one tolerable game that ran at a locked 30fps: Split/Second. It was a racing game without many sharp turns, and it had damn well done motion blur and a fluidly following camera. But I played it on a 15'' notebook, so I would probably not enjoy it on a 24'' screen.
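    For anyone who wants to check those figures, here is a minimal Python sketch of the same arithmetic (it uses the post's linear approximation that degrees-per-frame divided by FOV gives the fraction of screen width covered):

    ```python
    import math

    def screen_width_cm(diagonal_inch, aspect_w, aspect_h):
        """Physical width of a screen from its diagonal and aspect ratio."""
        diag_cm = diagonal_inch * 2.54
        return diag_cm * aspect_w / math.hypot(aspect_w, aspect_h)

    def cm_per_frame(turn_deg, turn_s, fps, fov_deg, width_cm):
        """On-screen distance an object moves per frame during a turn."""
        deg_per_frame = turn_deg / (turn_s * fps)
        return width_cm * deg_per_frame / fov_deg

    # The post's scenario: a 180-degree turn in 0.5s, 90-degree FOV, 60fps.
    for name, diag, aw, ah in [("14in 4:3", 14, 4, 3),
                               ("24in 16:9", 24, 16, 9),
                               ("32in 16:9", 32, 16, 9)]:
        width = screen_width_cm(diag, aw, ah)
        print(f"{name}: {cm_per_frame(180, 0.5, 60, 90, width):.2f} cm/frame")

    # Doubling the fps halves the per-frame distance:
    w24 = screen_width_cm(24, 16, 9)
    print(f"24in at 120fps: {cm_per_frame(180, 0.5, 120, 90, w24):.2f} cm/frame")
    ```

    The outputs land within a few hundredths of the post's 1.87/3.53/4.73cm figures; the small differences come from rounding the screen dimensions.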
     
    Last edited: May 2, 2019

  5. MonstroMart

    MonstroMart Maha Guru

    Messages:
    1,397
    Likes Received:
    878
    GPU:
    RX 6800 Red Dragon
    But that's pretty much what the RTX lineup was at launch. Outside of the 2080, it matched the previous generation for maybe a little less money. And the "less money" part is debatable when it comes to the 2070; from what I can remember, at launch in Canada the 2070 was pretty much a rebranded 1080 with RTX added to it.

    I honestly don't think AMD has to do all that much if they release it before the successor to the RTX series. If they can match the RTX 2060/2070 and offer better prices, they'll have a winner.
     
  6. AuerX

    AuerX Ancient Guru

    Messages:
    2,739
    Likes Received:
    2,641
    GPU:
    Militech Apogee
    It would be pretty sad if they didn't.
     
  7. sykozis

    sykozis Ancient Guru

    Messages:
    22,492
    Likes Received:
    1,537
    GPU:
    Asus RX6700XT
    Given how long it's taking for AMD to get Navi out, I'm reconsidering holding off until Intel releases Xe... 3rd quarter is still a few months away... Hopefully Intel will manage to pull off a miracle with their first launch.
     
    AuerX likes this.
  8. waltc3

    waltc3 Maha Guru

    Messages:
    1,445
    Likes Received:
    562
    GPU:
    AMD 50th Ann 5700XT
    Good points--nice sentiment, and I agree. I'm frankly amazed at how well my 4k @ 60Hz games are running right now (kind of an oxymoron, because I usually run my games at 4k with vsync off--only 1 or 2 of my older games dislike 4k enough that I have to drop back several notches on the resolution scale--so there's no hard frame-rate cap even with a 60Hz monitor at 4k). I've mentioned this before, but I have found the visual difference between 4k and 2560x1440--my former native res, now gracing the wife's desk--to be *stunning* at times. It isn't a *slight* difference--it's a major, easily observable difference--not anything that requires pixel magnification or that sort of minute, nit-picking attention to see--Gawd, no...;) 4k is not just some arbitrary, symbolic resolution, at least for me.

    And objectively speaking, where my peepers and sensibilities are concerned anyway, the difference between a game running at 2560x1440 and the same game running at 3840x2160 is not *slight*! It's more like a watershed--a real marker of substance in terms of resolution--much like when we hit 640x480x24 for the first time in a playable game. What a difference from 320x200/400 at 8/16 bits! A turning point. Grim Dawn, for instance, runs a treat at 4K with vsync off on just the RX-590 8GB (as CrossFire isn't invoked in that game's D3D11 graphics mode). I have no games that stutter and drag at 4k--not one that is 'unplayable' with just the 590. And for those games that could use the horsepower, D3D12 Multi-GPU (for Shadow of the Tomb Raider and Rise of the Tomb Raider) works nicely with the RX-590/480 8GB and Windows 10 x64 v1903. And then there's CrossFire for D3D11 and below, as well.

    But there are variables--eyesight, differences between 4k monitors, individual preferences, etc.--that serve to cloud the situation more than it should be clouded, in my opinion. I remember to this day how baffled I was after 3dfx introduced FSAA through the splendid V5 models and then through the 2d/3d 3dfx card, the V3 line. I was always amazed at people who claimed to prefer the literally stair-stepped pixellation of nVidia's non-FSAA GPUs. But they existed, sure enough.

    Anyway, I expect that what AMD will be introducing later this year is the evolved descendant of the Polaris GPU breed--and that it will be unchallenged in the performance and image quality it introduces at the ~$300 price point, once again, a la the RX-480 debut. It's the card I bought the RX-590 to stopgap--the one I've been waiting on since the RX-480! I think AMD has been crystal clear all year that while the Radeon VII was meant to occupy a "prosumer" market segment, so to speak, the AMD GPUs aimed at the broader consumer market in 2019 have yet to launch. It's been pretty obvious all year that AMD, especially, would not consider the launch of a $700 GPU to be its consumer launch of the year. nVidia has "shot its load" for 2019, imo--it will be fun to see what AMD does in the second half! During the RVII launch, I thought Su was as clear as she could be that AMD's GPU positioning for 2019 was far from complete and that "the best" was yet to come...;) That was my take, anyway--odd, I thought, that some did not agree.
     
  9. MonstroMart

    MonstroMart Maha Guru

    Messages:
    1,397
    Likes Received:
    878
    GPU:
    RX 6800 Red Dragon
    I'm definitely waiting until next fall before buying anything. The only reason I plan to upgrade is that I got a new 2K 144Hz monitor. I'll likely buy Ryzen 2, because going with an Intel CPU would pretty much require building a new computer. For the GPU, though, I'll wait to see what everyone is releasing this year. I don't mind waiting a few months. Is Intel really planning to release their dedicated GPU next fall? That would be cool, especially if they can deliver. The GPU market really needs competition.
     
  10. Undying

    Undying Ancient Guru

    Messages:
    25,657
    Likes Received:
    13,057
    GPU:
    XFX RX6800XT 16GB
    I told you not to buy that 1660 Ti, and from what I read you are clearly not happy with it. You could have waited for Navi with that RX 470 instead.
     

  11. AuerX

    AuerX Ancient Guru

    Messages:
    2,739
    Likes Received:
    2,641
    GPU:
    Militech Apogee
    Nope, still like 4K better.
     
  12. ttnuagmada

    ttnuagmada Master Guru

    Messages:
    271
    Likes Received:
    145
    GPU:
    3090 Strix
    I don't expect it to be much, if any, more efficient than the Radeon VII, which doesn't even match Nvidia on 12nm. AMD's current roadmap literally makes no mention of efficiency improvements until the "Next-Gen" architecture.

    https://images.anandtech.com/doci/14286/gpuroadmap.png
     
  13. sykozis

    sykozis Ancient Guru

    Messages:
    22,492
    Likes Received:
    1,537
    GPU:
    Asus RX6700XT
    The RX470 is still here. In a few weeks I'll be ordering new RAM and a case for my i5 6600K. Either my 970 or the 1660Ti will be paired with that; it just depends on whether or not I decide to sell the 1660Ti. But, no, I'm not happy with the 1660Ti. In most of the games I own, which are quite old, its performance is subpar. I'm not sure if that has to do with the game engines or the system itself. Navi is just taking too long to get released. The fans failed on the RX470, forcing a replacement. Unfortunately, I can't monitor the fan speeds on the Twin Turbo II cooler. I am contemplating throwing the RX470 back in and just living with being unable to monitor fan speeds, since the card does run quite cool now. I've had the 1660Ti since about 3 days after launch; I was just unable to game to any extent until a couple of weeks ago. The 1660Ti was never going to stop me from going with Navi. It was just a stopgap measure to ensure I could still use the system until Navi's launch. I'm still hoping Intel releases a competitive product next year, at a reasonable price. Maybe that will push NVidia to lower prices and AMD to take the graphics market more seriously.
     
    Undying likes this.
  14. kings

    kings Member Guru

    Messages:
    158
    Likes Received:
    131
    GPU:
    GTX 980Ti / RX 580
    I don't expect the same Polaris performance; I never said that. I expect what I said: RTX 2060/Vega 64 performance at best. Of course I may be wrong, but looking at the track record of AMD's mid-range GPUs, reaching Vega 64 level would already be good.

    A Navi card at the level of the RTX 2060/Vega 64 would already be about 60% faster than the RX 580. That's a perfectly fine job for a Polaris successor. Expecting more than this from a mid-range GPU is, honestly, wishful thinking to me!

    Navi chips are meant to be small and inexpensive, since they are also being designed for consoles. I think some people are putting so much hype on Navi (as always happens with each new AMD GPU) that many will be disappointed!
     
    Last edited: May 3, 2019
    Evildead666, Maddness and tunejunky like this.
  15. Evildead666

    Evildead666 Guest

    Messages:
    1,309
    Likes Received:
    277
    GPU:
    Vega64/EKWB/Noctua
    This would be Intel's second discrete GPU launch, iirc.
    They have managed to fail at it once already lol :)

    The Intel i740, I think it was.
     

  16. tunejunky

    tunejunky Ancient Guru

    Messages:
    4,565
    Likes Received:
    3,193
    GPU:
    7900xtx/7900xt
    Excellent point, one that was the basis of my thinking when I stated that Turing was designed for a smaller process and that Nvidia was edged out of fab capacity at the prelaunch/launch of Turing.
    However, the point about yields is still apropos, as a 7nm Turing will still be a relatively large chip. The node shrink of Turing will still bring the production benefit of more chips per wafer (along with the performance benefits) at a lower price. But complex dies are still more difficult to produce, so there's going to be a fair amount of binning for any Turing node shrink - excepting the most excellent 16 series w/o RT.
     
  17. tunejunky

    tunejunky Ancient Guru

    Messages:
    4,565
    Likes Received:
    3,193
    GPU:
    7900xtx/7900xt

    Pretty good thinking, except for the fact that Navi is scalable. There will be a high-end Navi, but not until 6-8 months after the release of the mid-range "vanilla" Navi.
    The vanilla part is going to be very good; imho the bar is going to be set at RTX 2070 performance in the $400-ish price range, with the cut-down Navi going for $100-ish less and offering 2060 performance.
    As this is a "chiplet"/SoC GPU, like Ryzen is as a CPU, the only thing holding AMD back from doing a "Threadripper"-style GPU is working out the latencies of the Infinity Fabric... which they are working on like dogs at this moment as we "speak".
     
  18. Fox2232

    Fox2232 Guest

    Messages:
    11,808
    Likes Received:
    3,371
    GPU:
    6900XT+AW@240Hz
    Well, GPUs have certain building blocks which need to be fed by their control blocks. So the main question is: how do you cut the following diagram?
    [IMG: GPU block diagram, with the Command Processor, ACEs, and Global Data Share feeding Shader Engines of CUs backed by L2 cache]
    Separating just the CUs would lead to bad latency to L2, which would be worse than having the ACEs far away.
    Having chiplets made of Shader Engines (20 CUs + a smaller L2 and 2~3 ACEs) would work. Memory would be handled by an I/O die, which would hold the Command Processor and handle the Global Data Share.
    But it would require some L3 cache in the I/O die to compensate for the smaller L2s.
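    As a toy illustration of why that L3 would matter, here is a crude average-memory-access-time comparison of the two cuts in Python; every hit rate and latency below is a made-up placeholder, not a measurement of any real part:

    ```python
    # Toy average-memory-access-time (AMAT) model for the two chiplet cuts.
    # All hit rates and latencies are illustrative placeholders.

    def amat(levels):
        """levels = [(hit_rate, latency_ns), ...]; last level must hit (1.0)."""
        total, reach = 0.0, 1.0      # reach = fraction of requests arriving
        for hit_rate, latency_ns in levels:
            total += reach * hit_rate * latency_ns
            reach *= 1.0 - hit_rate
        return total

    # Cut 1: bare CU chiplets, so every L2 access crosses the package.
    cu_only = amat([(0.9, 2), (0.7, 40), (1.0, 120)])  # L1, remote L2, DRAM

    # Cut 2: Shader Engine chiplets with a local (smaller, lower hit rate)
    # L2, plus an L3 on the I/O die absorbing the extra L2 misses.
    se_plus_l3 = amat([(0.9, 2), (0.6, 10), (0.7, 45), (1.0, 120)])

    print(f"CU-only chiplets:         {cu_only:.1f} ns average")
    print(f"SE chiplets + I/O-die L3: {se_plus_l3:.1f} ns average")
    ```

    With these placeholder numbers the SE-chiplet cut comes out well ahead, because the L3 keeps most of the extra L2 misses off the slow cross-package path.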
     
  19. kings

    kings Member Guru

    Messages:
    158
    Likes Received:
    131
    GPU:
    GTX 980Ti / RX 580
    I'm talking about Navi going out in Q3, hence having said "this year" in my first post.

    Also, it's a little irrelevant to talk about what AMD may or may not be releasing sometime next year, since by then it will most likely be competing with Nvidia's 7nm cards and not the current series.
     
    Evildead666 likes this.
  20. Denial

    Denial Ancient Guru

    Messages:
    14,217
    Likes Received:
    4,133
    GPU:
    EVGA RTX 3080
    Nvidia doesn't seem to think an L3 cache is required, just good NUMA-aware L2 scheduling. Their new research shows you don't even need that much inter-GPU bandwidth either (relatively speaking); their testing shows good scaling at 128GB/s.

    Seems like these multi-GPU designs are going to arrive faster than anticipated.
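    To make the scheduling point concrete, here is a toy bandwidth-bound scaling model; the 128GB/s link figure is the one cited above, while the local bandwidth and traffic splits are made-up assumptions:

    ```python
    # Toy model: speedup of 4 GPU modules over 1 for a bandwidth-bound load.
    # NUMA-aware scheduling lowers the fraction of traffic crossing the link.
    # Local bandwidth and traffic fractions are illustrative assumptions.

    def speedup(n_modules, link_gbs, local_gbs=500.0, remote_frac=0.2):
        """remote_frac of each module's memory traffic crosses the link."""
        single = 1.0 / local_gbs               # time per unit, one module
        multi = (1.0 - remote_frac) / local_gbs + remote_frac / link_gbs
        return n_modules * single / multi

    for frac in (0.30, 0.10):  # naive placement vs. NUMA-aware placement
        print(f"{frac:.0%} remote: {speedup(4, 128, remote_frac=frac):.2f}x")
    ```

    Even this crude model shows the idea: keeping traffic local (the NUMA-aware case) recovers most of the 4x, so a modest link like 128GB/s can still scale well.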
     
    tunejunky and Fox2232 like this.
