AMD Vega to get 20K units released at launch, and new Zen+ chatter

Discussion in 'Frontpage news' started by Hilbert Hagedoorn, May 9, 2017.

  1. Hilbert Hagedoorn

    Hilbert Hagedoorn Don Vito Corleone Staff Member

    Messages:
    48,544
    Likes Received:
    18,856
    GPU:
    AMD | NVIDIA
  2. Romulus_ut3

    Romulus_ut3 Master Guru

    Messages:
    780
    Likes Received:
    252
    GPU:
    NITRO+ RX5700 XT 8G
    The same happened during the launch of the Fury X, IIRC. It's not surprising. Given the anticipation and the hype surrounding this launch, I expect all Vega units to be sold out/booked within 6 hours of launch/pre-order. Guess reviewers may need to share review samples like they did during the Fury X launch.
     
  3. icedman

    icedman Maha Guru

    Messages:
    1,300
    Likes Received:
    269
    GPU:
    MSI MECH RX 6750XT
    Looks like I'll have to start saving for the refresh. Hopefully memory prices will have come down by then and compatibility will be better.
    As for Vega, I expect the launch to be a lot like the Fury X's was.
     
  4. Ryu5uzaku

    Ryu5uzaku Ancient Guru

    Messages:
    7,552
    Likes Received:
    609
    GPU:
    6800 XT
    I just might upgrade my 1800X come the Ryzen refresh XD, just because the platform supports it and I can. If they get +15% IPC and higher clocks, for sure.
     

  5. Kaarme

    Kaarme Ancient Guru

    Messages:
    3,518
    Likes Received:
    2,361
    GPU:
    Nvidia 4070 FE
    I don't understand why HBM2 would be such a bottleneck. It's not new technology anymore, per se; it's just a development of the original HBM. Didn't they learn anything from the first one?

    If this is indeed true, it's far too easy to say that Nvidia once again made the more sensible decision by using GDDR5X, which apparently has no supply problems despite being nothing but a further development of GDDR5, just like HBM->HBM2.
     
  6. thatguy91

    thatguy91 Guest

    The notes mention that Zen+ is expected to have a 15 percent IPC increase. IPC properly stands for "Instructions Per Clock", although Intel seem to have changed it to "Indicative Performance per Core". They can claim a 10 percent IPC increase just by having a 10 percent higher clock, but going by the proper definition the IPC is the same. The term IPC is therefore meaningless; they should just say 10 percent performance improvement, because that is what it is! From what I understand of AMD's use of the term, they don't simply mean a 15 percent clock increase (giving the 15 percent boost); they actually mean 15 percent at the same clock. Or maybe the clocks increase 10 percent and the remaining 5 percent comes from architectural improvements?

    What do people think: are AMD using the same misleading description that Intel uses, or is it based on actual improvement?
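
    To make the distinction concrete, here's a minimal sketch of the arithmetic (the percentages are hypothetical, chosen only to show how true IPC gains and clock gains combine):

    def perf_gain(ipc_gain, clock_gain):
        # Performance = IPC x clock, so independent gains multiply rather than add.
        return (1 + ipc_gain) * (1 + clock_gain) - 1

    # A true 15% IPC gain at the same clock -> 15% more performance.
    print(f"{perf_gain(0.15, 0.00):.1%}")  # 15.0%

    # A 5% true IPC gain plus a 10% clock bump -> ~15.5%, not a flat 15%.
    print(f"{perf_gain(0.05, 0.10):.1%}")  # 15.5%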
     
  7. TDurden

    TDurden Guest

    Messages:
    1,981
    Likes Received:
    3
    GPU:
    Sapphire R9 390 Nitro
    Can anyone explain why AMD insists on HBM? Is, say, 512-bit GDDR5X not enough? Fury did not seem to benefit from HBM much.
     
  8. Chillin

    Chillin Ancient Guru

    Messages:
    6,814
    Likes Received:
    1
    GPU:
    -
    I have a feeling it slots into their long-term heterogeneous computing concept somewhere down the line, and for now it's in the introduction phase on the high end, which can bear the costs in the first place.

    Question is if AMD will last that long...
     
  9. Turanis

    Turanis Guest

    Messages:
    1,779
    Likes Received:
    489
    GPU:
    Gigabyte RX500
    Fury X had a problem with too few ROPs (only 64), not with the 4 GB of HBM. HBM releases/refreshes data faster than GDDR5/X.

    Vega again needs many ROPs, over 96-128, and it needs the high-bandwidth memory (which obviously means high power for the chip).
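
    For a rough sense of the bandwidth side, here's a back-of-the-envelope sketch (the bus widths and per-pin rates are the commonly cited reference figures for these cards, assumed here for illustration):

    def bandwidth_gb_s(bus_width_bits, data_rate_gbps):
        # Peak bandwidth = bus width (bits) * per-pin data rate (Gbps) / 8 bits per byte.
        return bus_width_bits * data_rate_gbps / 8

    # Fury X: 4096-bit HBM at ~1 Gbps per pin -> 512 GB/s.
    print(bandwidth_gb_s(4096, 1.0))
    # GTX 1080: 256-bit GDDR5X at 10 Gbps per pin -> 320 GB/s.
    print(bandwidth_gb_s(256, 10.0))

    HBM gets there through sheer bus width at low clocks, which is also where its power advantage over GDDR5/X comes from.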
     
  10. Solfaur

    Solfaur Ancient Guru

    Messages:
    8,013
    Likes Received:
    1,533
    GPU:
    GB 3080Ti Gaming OC
    I hope this is not true, because if it is, then 20K is basically nothing...
     

  11. MorganX

    MorganX Member Guru

    Messages:
    142
    Likes Received:
    15
    GPU:
    Nvidia 3080 Ti
    You gotta start somewhere when you innovate. Without AMD taking risks, NVIDIA and INTEL would still be bending everyone over.

    If they can deliver 1080 Ti performance without a water cooler for less than $650, I'll pre-order one of the first 20K. I don't even need it; there are no games that take advantage of it now, but I'm tired of the NVIDIA/INTEL duopoly, the inflated prices, and the built-in obsolescence. Their greed has stifled innovation for years.
     
  12. Undying

    Undying Ancient Guru

    Messages:
    25,478
    Likes Received:
    12,884
    GPU:
    XFX RX6800XT 16GB
  13. Denial

    Denial Ancient Guru

    Messages:
    14,207
    Likes Received:
    4,121
    GPU:
    EVGA RTX 3080
    What exactly is AMD doing that's so innovative with Vega? And how has Nvidia stifled innovation for years?
     
  14. vbetts

    vbetts Don Vincenzo Staff Member

    Messages:
    15,140
    Likes Received:
    1,743
    GPU:
    GTX 1080 Ti
    This is cool and all, but I just want an AM4 APU.
     
  15. rm082e

    rm082e Master Guru

    Messages:
    717
    Likes Received:
    259
    GPU:
    3080 - QHD@165hz
    Right? They've had the top-performing products for years now, but somehow that's stifling innovation? Meanwhile, AMD has only managed to tread water in a couple of specific product lines. But somehow AMD are the innovators? :wanker:

    Zen is pretty great though. It seems like they are pushing Intel to step up their game. I just wish they could do the same on the GPU side.
     

  16. Denial

    Denial Ancient Guru

    Messages:
    14,207
    Likes Received:
    4,121
    GPU:
    EVGA RTX 3080
    Well AMD does innovate, I just don't consider HBM2 "innovative" at this point. For starters Nvidia has been shipping cards with it for 6 months. Putting it on consumer cards isn't innovative, it's arguably just a bad decision but one AMD doesn't seem to have a choice in.

    None of the other innovative things on Vega would limit supply. So I'm not sure what the point of his first sentence is.

    The idea that Nvidia is stifling innovation is just insane. Their innovation might be self-serving, but it pushes the entire industry forward. FreeSync is awesome, but AMD had no plan for it until G-Sync came. AMD's open software library is a response to GameWorks. Their ShadowPlay equivalent (I forget the name) is a response to ShadowPlay. The Vega cache controller is nice and might be faster, but Pascal already supports unified memory through CUDA. Nvidia already has packed math. They already have tiled rasterization.

    Then you have all the stuff they push with virtualization, AI, Deep Learning, Ray Tracing, etc. Granted AMD has efforts in all this too - but Nvidia is still developing stuff there and pushing the ball forward.

    Idk which company has more "impact" in terms of innovation, but saying that Nvidia stifles it is ridiculous.
     
  17. Kaarme

    Kaarme Ancient Guru

    Messages:
    3,518
    Likes Received:
    2,361
    GPU:
    Nvidia 4070 FE
    The 480 has half the ROPs of my card, but it seems to be doing just fine, regularly beating the 390.

    That being said, I actually do think it could use more.
     
  18. Ryu5uzaku

    Ryu5uzaku Ancient Guru

    Messages:
    7,552
    Likes Received:
    609
    GPU:
    6800 XT
  19. TieSKey

    TieSKey Master Guru

    Messages:
    226
    Likes Received:
    85
    GPU:
    Gtx870m 3Gb
    Saying that Nvidia stifles innovation might seem like a bit much, but when all the things you list are put behind an inflated price tag, it doesn't sound that far-fetched.

    Another thing that might contribute to that uneasiness about Nvidia is the fact that they lagged behind in async compute, DX12, and Vulkan while being the "bigger" company, instead (ab)using that power to manually tune drivers for each game (which works, but is not innovative at all).

    I don't hate Nvidia, but I personally dislike it when they put effort into software or "secondary" things as a way to "lock" gamers/academia/etc. into their "not cheap at all" hardware, instead of focusing on the GPU itself.
     
    Last edited: May 9, 2017
  20. Valken

    Valken Ancient Guru

    Messages:
    2,924
    Likes Received:
    901
    GPU:
    Forsa 1060 3GB Temp GPU
    In low-shader but high-geometry loads, more ROPs are better at pushing raw pixels to the screen, especially at 1080p or 1440p. If we compare just the Fury X vs. the 980 Ti, you can see those trends: in shader-heavy games the Fury takes the lead; otherwise the 980 Ti kills it in pure pixel pushing.

    At 4K, if you lower shader performance or features, ROPs will help a lot, together with high frequency.

    I have yet to see a lower-ROP card beat out a higher-ROP card (of the same performance segment) at the same resolution unless the workload is shader (compute) heavy.
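
    As a quick illustration of that gap, a minimal fill-rate sketch (assuming the usual one pixel per ROP per clock; the ROP counts and reference clocks are the published figures for these cards):

    def fill_rate_gpixels(rops, clock_mhz):
        # Peak pixel fill rate = ROPs * core clock, one pixel per ROP per clock.
        return rops * clock_mhz / 1000

    # Fury X: 64 ROPs at 1050 MHz -> 67.2 Gpixels/s.
    print(fill_rate_gpixels(64, 1050))
    # 980 Ti: 96 ROPs at 1000 MHz base -> 96.0 Gpixels/s.
    print(fill_rate_gpixels(96, 1000))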
     
