Event / Tech Coverage: AMD Capsaicin 2017 - Vega - Threadripper

Discussion in 'Frontpage news' started by Hilbert Hagedoorn, Jul 30, 2017.

  1. Denial

    Denial Ancient Guru

    Messages:
    13,004
    Likes Received:
    2,409
    GPU:
    EVGA 1080Ti
    120 and 80? I could probably tell in a game like CS/Quake, but generally no, not much of a difference - but when a new game comes out and the gap is the same ~50%, only between 60 and 40, it becomes immediately apparent.

    And honestly, from experience: I went from a 1080 to a 1080Ti on a QHD monitor and the difference was pretty significant. Too many games at max settings sit on the cusp of 60fps; the 1080Ti makes it there, while the ~30% slower 1080 just doesn't cut it without sacrificing settings.
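
    To put rough numbers on it (just frametime arithmetic, nothing measured), the same relative gap costs roughly twice as many milliseconds per frame down at the lower framerates:

        # Rough frametime arithmetic (illustrative only): the same relative fps gap
        # costs far more milliseconds per frame at low framerates than at high ones.
        def frametime_ms(fps):
            return 1000.0 / fps

        for low, high in [(80, 120), (40, 60)]:
            gap = frametime_ms(low) - frametime_ms(high)
            print(f"{low} vs {high} fps: {frametime_ms(low):.1f} ms vs "
                  f"{frametime_ms(high):.1f} ms (gap {gap:.1f} ms/frame)")

        # 80 vs 120 fps: 12.5 ms vs 8.3 ms (gap 4.2 ms/frame)
        # 40 vs 60 fps:  25.0 ms vs 16.7 ms (gap 8.3 ms/frame)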
     
  2. Solfaur

    Solfaur Ancient Guru

    Messages:
    7,382
    Likes Received:
    866
    GPU:
    MSI GTX 1080Ti Ga.X
    Same here, except I went from a 1070. At least with the 1080Ti I can stay safe for probably a year.

    Anyway, I'm still looking forward to some actual reviews of Vega 64, especially regarding the power draw/performance ratio.
     
  3. coth

    coth Master Guru

    Messages:
    443
    Likes Received:
    41
    GPU:
    KFA2 2060 Super EX
    Why can't they make 96 or 128 CUs instead of 56 and 64, and run them at lower frequencies and lower voltage? Wouldn't that solve the excessive power consumption problem?
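
    The intuition I have in mind is the first-order CMOS dynamic power relation, P ~ units x V^2 x f. A minimal sketch with made-up voltages and clocks (not Vega figures) shows why "wide and slow" looks attractive on paper:

        # First-order dynamic power model, P ~ units * V^2 * f (per-CU capacitance
        # folded into the constant). All numbers are made up for illustration.
        def relative_power(cu_count, voltage, clock_ghz):
            return cu_count * voltage**2 * clock_ghz

        def relative_throughput(cu_count, clock_ghz):
            return cu_count * clock_ghz   # assumes perfect scaling with CU count

        narrow_fast = relative_power(64, 1.20, 1.6), relative_throughput(64, 1.6)
        wide_slow = relative_power(128, 0.95, 0.8), relative_throughput(128, 0.8)

        print("64 CU @ 1.6 GHz, 1.20 V -> power %.0f, throughput %.0f" % narrow_fast)
        print("128 CU @ 0.8 GHz, 0.95 V -> power %.0f, throughput %.0f" % wide_slow)
        # Same theoretical throughput (64*1.6 == 128*0.8), but the wide/slow chip
        # draws noticeably less dynamic power because voltage enters squared,
        # provided the rest of the chip can actually keep all 128 CUs busy.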
     
  4. Denial

    Denial Ancient Guru

    Messages:
    13,004
    Likes Received:
    2,409
    GPU:
    EVGA 1080Ti
    Because the larger the die, the more expensive the chip. Plus the 64 CU chip is already ~480mm2, and ~600mm2 is the reticle limit for the 14nm process AMD is using, so I don't know if they'd even be able to double it.

    Also, it potentially wouldn't solve the power consumption problem. Doubling the CU count could force other changes in the front end to feed it, so the chip may scale up by more than you'd think.
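
    To give a feel for how quickly area eats into cost, here's a rough sketch using the classic dies-per-wafer approximation and a Poisson yield model; the wafer cost and defect density are placeholder numbers, not actual GlobalFoundries figures:

        import math

        # Rough cost-per-good-die sketch (placeholder numbers, not foundry data).
        WAFER_DIAMETER_MM = 300.0
        WAFER_COST = 6000.0            # assumed cost per wafer, purely illustrative
        DEFECT_DENSITY = 0.2 / 100.0   # assumed: 0.2 defects per cm^2, in defects/mm^2

        def dies_per_wafer(die_area_mm2):
            d = WAFER_DIAMETER_MM
            # Classic approximation: usable wafer area minus an edge-loss term.
            return int(math.pi * (d / 2) ** 2 / die_area_mm2
                       - math.pi * d / math.sqrt(2 * die_area_mm2))

        def die_yield(die_area_mm2):
            return math.exp(-DEFECT_DENSITY * die_area_mm2)   # Poisson yield model

        for area in (480, 600):
            good_dies = dies_per_wafer(area) * die_yield(area)
            print(f"{area} mm^2: {dies_per_wafer(area)} candidates/wafer, "
                  f"yield {die_yield(area):.0%}, ~${WAFER_COST / good_dies:,.0f} per good die")

    With those placeholder numbers the 600mm2 die comes out roughly 60% more expensive per good die, on top of getting fewer candidates per wafer in the first place.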
     

  5. coth

    coth Master Guru

    Messages:
    443
    Likes Received:
    41
    GPU:
    KFA2 2060 Super EX
    Fury X is ~600 mm2, so a die that size is clearly feasible - they could have afforded the die area. Instead they chose to save on CU count and push frequencies beyond the efficient range again.
     
    Last edited: Aug 1, 2017
  6. Denial

    Denial Ancient Guru

    Messages:
    13,004
    Likes Received:
    2,409
    GPU:
    EVGA 1080Ti
    Yeah... because of the reasons I just stated. The cost of manufacturing a chip is heavily weighted by the number of transistors. A 600mm2 chip is more expensive to manufacture than a 480mm2 chip - but the performance would be roughly the same if you adjust clock speeds. So all you do is eat into your own margins.

    And then like I said, you don't know what changes they'd have to make elsewhere to accommodate the increased number of CUs. The entire front end would need to be beefier to feed it, and AMD already has problems feeding their architectures.
     
  7. coth

    coth Master Guru

    Messages:
    443
    Likes Received:
    41
    GPU:
    KFA2 2060 Super EX
    Performance would be the same, but power consumption would be lower. So we get to the main question: why are they cutting costs here? Do they get sales bonuses tied to cost efficiency or what? Are managers ready to kill the company for their own bonuses?
     
  8. Denial

    Denial Ancient Guru

    Messages:
    13,004
    Likes Received:
    2,409
    GPU:
    EVGA 1080Ti
    How do you know performance would be the same? AMD's entire problem with GCN is feeding it. It's why they've redesigned the front end nearly three times now. It's why DX12 brings such a huge performance increase - because it reduces idle bubbles in the pipeline. Increasing the width of the machine isn't going to magically increase performance to the same degree if you drop the frequency; it's just going to create more idleness.

    The other problem is that AMD would have had to make this decision about three years ago. Everyone in these threads sits here and says "why didn't AMD do this", "why didn't AMD do that" - it's all easy to say now, after the card has launched. But three years ago, when your engineers are telling you "Hey, we can build this Vega at 4096 cores, clock it to 1.6, and as long as we feed it we'll get roughly GP102 performance by 2017, we should be good", why would you say no to that?
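
    To illustrate the feeding argument, a toy bottleneck model (the issue rates are arbitrary units, not real GCN numbers):

        # Toy bottleneck model (assumed numbers): effective throughput is the
        # minimum of what the front end can issue and what the CUs can execute.
        def effective_throughput(cu_count, clock_ghz, frontend_rate):
            shader_rate = cu_count * clock_ghz   # arbitrary work units per second
            return min(shader_rate, frontend_rate)

        FRONTEND = 100.0   # assumed fixed front-end issue rate

        print(effective_throughput(64, 1.6, FRONTEND))       # 100.0 -> already front-end bound
        print(effective_throughput(128, 0.8, FRONTEND))      # 100.0 -> twice the CUs, same result
        print(effective_throughput(128, 0.8, FRONTEND * 2))  # 102.4 -> only a wider front end helps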
     
  9. JamesSneed

    JamesSneed Maha Guru

    Messages:
    1,006
    Likes Received:
    403
    GPU:
    GTX 1070
    I still contend they had Navi in mind when building Vega, so there are some sacrifices in Vega so they don't have to start over for Navi. Shrink the Vega die with minimal changes onto the 7nm process, glue two of them together via their Infinity Fabric, and some of it makes sense: the die size, the frequencies, and why HBM2 was used. I'm sure they didn't want to do it this way, but as small as AMD is, I would bet someone a Coke this is the approach, even if I didn't nail the specifics.
     
  10. Arbie

    Arbie Member Guru

    Messages:
    169
    Likes Received:
    58
    GPU:
    GTX 1060 6GB
    Why is it that no matter what a manufacturer does, someone tries to make it sound like an evil plan? AMD announced Ryzen and later announced ThreadRipper. Do you really think they gave up months of selling the latter in order to sell more of the former?? If you don't really think so, then why say it?*

    I'm a very happy 1800X owner. I couldn't use more cores, RAM, or PCIe lanes if I had them. Thank you, AMD, and good luck.


    *Edit: Sorry; just noticed you're a Trump supporter. That explains the behavior.
     
    Last edited: Aug 5, 2017

  11. msroadkill612

    msroadkill612 Active Member

    Messages:
    58
    Likes Received:
    1
    GPU:
    igp
    Well done Hilbert.

    The most exciting thing about AMD is the fabric, and Vega is the first fabric GPU.

    Good to see some coverage of HBCC, but I seem to be the only one suitably excited about using NVMe as cache.

    A quad, striped Samsung 960 Pro array has bandwidth similar to what a 16-lane discrete GPU card gets from system memory.
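
    Rough numbers for that comparison (spec-sheet ballpark figures, not measurements):

        # Back-of-the-envelope bandwidth comparison using ballpark spec-sheet numbers.
        NVME_960_PRO_SEQ_READ_GBPS = 3.5   # per drive, sequential read
        PCIE3_LANE_GBPS = 0.985            # ~985 MB/s usable per PCIe 3.0 lane

        nvme_array = 4 * NVME_960_PRO_SEQ_READ_GBPS   # quad striped array
        pcie_x16 = 16 * PCIE3_LANE_GBPS                # x16 slot to system memory

        print(f"4x 960 Pro (striped): ~{nvme_array:.1f} GB/s")
        print(f"PCIe 3.0 x16 link:    ~{pcie_x16:.1f} GB/s")
        # ~14 GB/s vs ~15.8 GB/s: the same ballpark, which is the point here.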

    Vega's specs even include ports for dedicated NVMe arrays on Vega's local fabric, which bypass the system bus limitations completely, as seen on the new ~$7k Vega Pro SSG.

    Doesn't anyone think the prospect of ~unlimited GPU cache/memory/workspace has interesting possibilities, even if not right away (it's too new a concept for apps as yet)?
     
