AMD EPYC CPUs, AMD Radeon Instinct GPUs to power Cray Supercomputer

Discussion in 'Frontpage news' started by Hilbert Hagedoorn, May 7, 2019.

  1. Hilbert Hagedoorn

    Hilbert Hagedoorn Don Vito Corleone Staff Member

    Messages:
    48,523
    Likes Received:
    18,828
    GPU:
    AMD | NVIDIA
  2. Kaarme

    Kaarme Ancient Guru

    Messages:
    3,516
    Likes Received:
    2,361
    GPU:
    Nvidia 4070 FE
    Interesting, first with Intel, now with AMD. It's a good thing the US government is doing its part to make sure some competition remains, at least.
     
    Mesab67 and maddog55 like this.
  3. maddog55

    maddog55 Active Member

    Messages:
    61
    Likes Received:
    13
    GPU:
    MSI GTX 1080 G PLUS
    Aye, but can you/it play Crysis 3 in 4K......?? ;))
     
  4. Gomez Addams

    Gomez Addams Master Guru

    Messages:
    255
    Likes Received:
    164
    GPU:
    RTX 3090
    No CUDA support. I'll pass.
     

  5. JamesSneed

    JamesSneed Ancient Guru

    Messages:
    1,691
    Likes Received:
    962
    GPU:
    GTX 1070
    LOL, I would have thought the $600M needed to build your own would be a bigger issue than the lack of CUDA support. Gamers commenting on server/supercomputer news always amuses me. There's always the one guy making a Crysis joke (it hasn't been funny for a decade), and then plenty of others commenting as if they're somehow going to own a supercomputer themselves. ;)

    Anyhow, 1.5 exaflops is an insane amount of compute power. I realize it's like saying someone is worth $100B: it just doesn't register for most of us. Having supercomputers with this kind of capability will certainly push science further along.
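    For scale, here's a quick back-of-envelope comparison. The ~10 TFLOPS figure for a consumer GPU is my own ballpark assumption for illustration, not a number from the article:

    ```python
    # Rough scale comparison: 1.5 exaflops vs. a typical high-end consumer GPU.
    # The 10 TFLOPS gaming-GPU figure is an assumption, not from the article.
    frontier_flops = 1.5e18    # 1.5 exaflops = 1.5 * 10^18 FLOPS
    gaming_gpu_flops = 10e12   # ~10 TFLOPS, ballpark for a high-end consumer card

    ratio = frontier_flops / gaming_gpu_flops
    print(f"Roughly {ratio:,.0f}x a single gaming GPU")  # Roughly 150,000x a single gaming GPU
    ```

    So even under generous assumptions about a desktop card, you'd need on the order of a hundred thousand of them to match the quoted figure, which is why the number "doesn't compute" intuitively.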
     
    BlackZero, sykozis, Yakk and 4 others like this.
  6. Gomez Addams

    Gomez Addams Master Guru

    Messages:
    255
    Likes Received:
    164
    GPU:
    RTX 3090
    My post was not made in the context of gaming or being a gamer.
     
  7. Aura89

    Aura89 Ancient Guru

    Messages:
    8,413
    Likes Received:
    1,483
    GPU:
    -
    So you're saying you're a low-key billionaire looking forward to buying one of these but will now pass due to no CUDA support?

     
    cowie, BlackZero and sykozis like this.
  8. Venix

    Venix Ancient Guru

    Messages:
    3,472
    Likes Received:
    1,972
    GPU:
    Rtx 4070 super
    Do you care to enlighten us about your reasoning, then?!
     
  9. HWgeek

    HWgeek Guest

    Messages:
    441
    Likes Received:
    315
    GPU:
    Gigabyte 6200 Turbo Force @500/600 8x1p
    Look how the tables have turned: the two most powerful supercomputers will be based on AMD (CPU/GPU) and Intel (CPU/GPU). Don't you wonder why there are no Nvidia GPUs? IMO he's gonna sell his leather jacket soon.

    https://i.**********/Yq4dwc5b/frontier.jpg
    *Image from AnandTech.
     
  10. Kaarme

    Kaarme Ancient Guru

    Messages:
    3,516
    Likes Received:
    2,361
    GPU:
    Nvidia 4070 FE
    Perhaps it would be interesting to see an all-Nvidia one as well. But can Tegra handle running multiple powerful GPUs?
     

  11. HWgeek

    HWgeek Guest

    Messages:
    441
    Likes Received:
    315
    GPU:
    Gigabyte 6200 Turbo Force @500/600 8x1p
    Nvidia's CEO after seeing that neither of the two most powerful supercomputers in the world will use their GPUs :).
    https://i.**********/8PHpcbbJ/giphy.gif
     
  12. schmidtbag

    schmidtbag Ancient Guru

    Messages:
    8,010
    Likes Received:
    4,383
    GPU:
    Asrock 7700XT
    As much as I'm happy for AMD's success and repeatedly remind people that AMD's GPUs are actually very good for compute workloads, it seems people here are getting a little bit carried away with their success in this situation. Supercomputers are a game of leapfrog. There's always one country, university, or corporation that releases the next best thing, using some company's latest-gen hardware, and then a few months later someone else does the same thing on a competing platform.

    Trust me, soon enough there will be a server with Nvidia hardware ranking #1. And rest assured, they won't retain that position.
     
    Aura89 and HWgeek like this.
  13. Texter

    Texter Guest

    Messages:
    3,275
    Likes Received:
    332
    GPU:
    Club3d GF6800GT 256MB AGP
    Ah, but a short decade ago... the good ole days.

    Mind you... Nvidia's Fermi supercomputer at ORNL already had AMD CPUs in it... :)
     
  14. Aura89

    Aura89 Ancient Guru

    Messages:
    8,413
    Likes Received:
    1,483
    GPU:
    -
    You're comparing unreleased supercomputers that aren't due for at least another two years against Nvidia's latest and greatest (by Nvidia I mean a supercomputer that uses Nvidia hardware) from 2018, which is currently the fastest supercomputer out there, with no regard for the fact that the two others aren't released yet, could face competition in 2021 from other, unannounced systems, could themselves use Nvidia, etc.

    Sorry, but your point seems to be a lack of a point. Come back in 2021/22 when those supercomputers are up and running, and then see whether nobody beyond Summit is using or planning to use Nvidia. Until then, it's a premature "victory" based on a lack of evidence rather than actual historical evidence.

    Just for clarification, I'm happy that supercomputers are starting to use AMD products, even both CPU and GPU in the same system. Very happy. But this notion that, because the only two KNOWN upcoming supercomputers don't have Nvidia in them, Nvidia somehow has any issue whatsoever (financial, process, performance, or otherwise) is just nonsense.

    We know that EuroHPC JU is planning a 2022/23 exascale supercomputer, and we know they have at least $1.12 billion to do so (whereas the one in this article is at $600 million), and we don't know what it'll be using.

    We know there's at least one more exascale supercomputer in the USA planned for the same 2021/22 slot, El Capitan, and we don't know what it'll be using.

    There are articles out there claiming there will be ten exascale supercomputers by 2023, and given the information we have, that doesn't sound terribly unlikely. So again, this whole "well, 2 of the 10 don't use Nvidia, therefore Nvidia has a problem!" is pure and utter nonsense. If none of the ten have Nvidia, then you can say there's an issue, and then you can claim Nvidia lost out big time.
     
  15. HWgeek

    HWgeek Guest

    Messages:
    441
    Likes Received:
    315
    GPU:
    Gigabyte 6200 Turbo Force @500/600 8x1p
    You didn't get my point. For a long time, Nvidia's GPUs were the obvious choice for these supercomputers, and now you see AMD and Intel starting to take their place. In the self-driving vehicle sector, everyone saw there's an option to develop better ASICs for their own needs with "little" money and 2-3 years, so it seems NV is gonna have a harder life in the near future.
     

  16. Astyanax

    Astyanax Ancient Guru

    Messages:
    17,035
    Likes Received:
    7,378
    GPU:
    GTX 1080ti
    The Cray supercomputer is built for a specific type of task that AMD GPUs are better suited for.

    It's only the most powerful in terms of that task.
     
  17. chispy

    chispy Ancient Guru

    Messages:
    9,988
    Likes Received:
    2,715
    GPU:
    RTX 4090
    Very interesting build. Nice to see AMD chosen for such an "Epyc" (pun intended) supercomputer ;)
     
  18. HWgeek

    HWgeek Guest

    Messages:
    441
    Likes Received:
    315
    GPU:
    Gigabyte 6200 Turbo Force @500/600 8x1p

    After watching this about Milan and the possibility that it will include 15 chiplets and maybe SMT4, I think [my imagination] I have an idea where AMD is going with its future (maybe custom-design?) HPC EPYC on 7nm+:
    1) Each CPU chiplet will be 6C/24T to save space/power while giving similar or better than 8C/16T performance.
    2) Add 4 custom Instinct GPU chiplets.
    3) Add 2 custom AI accelerator [ASIC] chiplets.
    4) Add 1 I/O chiplet with an HBM memory stack.

    So the final EPYC Milan(?) could be an HPC beast with:
    • 48C/192T Zen CPU cores.
    • 4 custom Instinct GPUs.
    • 2 AI accelerator ASICs.
    • 1 I/O chiplet with HBM 3D stacking.
    https://i.**********/NFRJKbYM/Possible-Future-AMD-EPYC.png
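    A quick sanity check on my own speculated numbers above (8 CPU chiplets is implied by the 15-chiplet total minus the GPU, AI, and I/O chiplets; all figures are speculation, not anything AMD has announced):

    ```python
    # Tally the speculated Milan HPC package: CPU chiplets at 6C/24T (SMT4),
    # plus GPU, AI-accelerator, and I/O chiplets. All counts are speculation.
    cpu_chiplets, cores_per_chiplet, threads_per_core = 8, 6, 4
    gpu_chiplets, ai_chiplets, io_chiplets = 4, 2, 1

    cores = cpu_chiplets * cores_per_chiplet
    threads = cores * threads_per_core
    total_chiplets = cpu_chiplets + gpu_chiplets + ai_chiplets + io_chiplets
    print(cores, threads, total_chiplets)  # 48 192 15
    ```

    So the 48C/192T headline figure and the 15-chiplet total are at least internally consistent.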


    EDIT: I see there was already a great article on such an HPC APU design:
    https://www.overclock.net/forum/225-...lops-200w.html
    So after reading some of it, I changed my illustration:
    https://i.**********/Qtt93ZWk/Possible-Future-AMD-EPYC-New.png

    And 8 Milans could be installed in a Cray Shasta 1U with direct liquid cooling:

    https://www.anandtech.com/show/13616/managing-16-rome-cpus-in-1u-crays-shasta-direct-liquid-cooling
    Do you think that such chiplet design could be beneficial for HPC clients?
     
    Last edited: May 12, 2019
  19. Noisiv

    Noisiv Ancient Guru

    Messages:
    8,230
    Likes Received:
    1,494
    GPU:
    2070 Super
    No, bro. It doesn't work like that.

    According to the USA exascale project strategy, the DOE has put in writing the requirement that the project MUST NOT rely on a single best-and-greatest piece of hardware; instead, it has to be built on several different architectures.
    That's why you're seeing AMD GPUs there, heavily subsidized by what seems to be a significant budget ramp-up for exascale. Undoubtedly fueled by China's rising aspirations in the field and the whole brouhaha between the two countries.

    That's why all of them (NV, Intel, and AMD) are getting a piece of the cake, regardless of who has the best hardware. Intel already screwed up once on delivery; that's why the need for redundancy and several contractors delivering different architectures.
     
    Aura89 and HWgeek like this.

Share This Page