First AMD Radeon MCM GPU later this year as Instinct MI200 (multi-chip module)

Discussion in 'Frontpage news' started by Hilbert Hagedoorn, Aug 17, 2021.

  1. Hilbert Hagedoorn

    Hilbert Hagedoorn Don Vito Corleone Staff Member

    Messages:
    48,531
    Likes Received:
    18,841
    GPU:
    AMD | NVIDIA
  2. Maddness

    Maddness Ancient Guru

    Messages:
    2,440
    Likes Received:
    1,739
    GPU:
    3080 Aorus Xtreme
    It's coming. Now we just need a gaming version.
     
    Noisiv, Embra and Undying like this.
  3. Silva

    Silva Ancient Guru

    Messages:
    2,051
    Likes Received:
    1,200
    GPU:
    Asus Dual RX580 O4G
AMD keeps being first and making tech exciting again; can't wait for RDNA3 to bring MCM to consumers next year!
This might help bring prices down across the range. We could use that, as €1,000+ for a mid-range GPU is asking too much when a console costs €300-400.
     
    Maddness and Undying like this.
  4. fantaskarsef

    fantaskarsef Ancient Guru

    Messages:
    15,754
    Likes Received:
    9,647
    GPU:
    4090@H2O
I too am curious to see how this performs in gaming tasks. I think the first generation won't be much to game on, but they'll eventually iron out the bugs in the second iteration.
I hope somebody can hack that driver and see what happens in gaming benches.
     
    Maddness likes this.

  5. icedman

    icedman Maha Guru

    Messages:
    1,300
    Likes Received:
    269
    GPU:
    MSI MECH RX 6750XT
This should greatly improve yields across the board, and hopefully prices too. I don't care how they do it, but prices have to drop or PC gaming is done, for me at least.
     
    Maddness and Silva like this.
  6. schmidtbag

    schmidtbag Ancient Guru

    Messages:
    8,016
    Likes Received:
    4,395
    GPU:
    Asrock 7700XT
Worst-case scenario, if MCM doesn't work well for desktop/gaming use, it is still valuable for servers. Servers don't care as much about sharing resources between processors. In a lot of servers, a bridge between GPUs isn't necessary, since they're typically not working on the exact same workload the way they would in a game.

That's the part I'm most excited about. Not only is MCM cheaper because they don't have to make such gigantic, costly dies, but the number of usable dies per wafer increases, which will help keep up with demand and therefore lower prices. This is a big deal.

Depending on how MCM is approached, part of me wonders if we might start seeing multi-GPU setups again, except you can mix and match whatever you want and not face a major performance penalty.
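Just to put rough numbers behind the yield argument, here's a back-of-the-envelope sketch. Everything in it is an illustrative assumption (a 300 mm wafer, a made-up defect density, a simple Poisson yield model, made-up die areas), not AMD figures:

```python
import math

WAFER_DIAMETER_MM = 300           # assumed 300 mm wafer
DEFECT_DENSITY_PER_CM2 = 0.1      # assumed defect density, illustration only

def dies_per_wafer(die_area_mm2: float) -> int:
    """Crude candidate-die count: wafer area over die area, minus a
    simple correction for partial dies lost at the wafer edge."""
    wafer_area = math.pi * (WAFER_DIAMETER_MM / 2) ** 2
    edge_loss = math.pi * WAFER_DIAMETER_MM / math.sqrt(die_area_mm2)
    return int(wafer_area / die_area_mm2 - edge_loss)

def poisson_yield(die_area_mm2: float) -> float:
    """Fraction of defect-free dies under a simple Poisson model:
    yield = exp(-defect_density * die_area)."""
    return math.exp(-DEFECT_DENSITY_PER_CM2 * die_area_mm2 / 100)

# One big monolithic die vs. one small chiplet (areas are made up).
for die_area in (600, 150):
    candidates = dies_per_wafer(die_area)
    good = candidates * poisson_yield(die_area)
    print(f"{die_area} mm^2: {candidates} candidates/wafer, "
          f"{poisson_yield(die_area):.0%} yield, ~{good:.0f} good dies")
```

With these made-up numbers, a wafer of 150 mm² chiplets ends up with roughly twice as many "good GPUs' worth" of silicon as a wafer of 600 mm² monolithic dies, which is the whole economic case for MCM.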
     
    Silva likes this.
  7. Silva

    Silva Ancient Guru

    Messages:
    2,051
    Likes Received:
    1,200
    GPU:
    Asus Dual RX580 O4G
I think that is dead: the basis of MCM is that each chip talks directly to the others; no bridge chip is used, and the path is so tiny that the latency penalty is small.
Adding another card would bring back the same problems as in the past, so I believe both SLI and CrossFire are dead.
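To give a feel for the scale of that latency difference, here's a quick sketch. Every figure in it is an assumed, order-of-magnitude illustration, not a measured AMD or NVIDIA number:

```python
# All latency figures below are assumed, order-of-magnitude numbers
# for illustration only, not measured AMD/NVIDIA values.
ON_PACKAGE_HOP_NS = 50       # assumed die-to-die link latency
CARD_TO_CARD_NS = 1_000      # assumed PCIe/bridge round trip
SYNCS_PER_FRAME = 10_000     # assumed fine-grained sync points

FRAME_BUDGET_NS = 1e9 / 60   # ~16.7 ms frame budget at 60 fps

for name, hop_ns in (("on-package", ON_PACKAGE_HOP_NS),
                     ("card-to-card", CARD_TO_CARD_NS)):
    overhead_ns = SYNCS_PER_FRAME * hop_ns
    print(f"{name}: {overhead_ns / 1e6:.1f} ms of sync overhead "
          f"({overhead_ns / FRAME_BUDGET_NS:.0%} of a 60 fps frame)")
```

Under these assumptions, fine-grained synchronisation is nearly free on-package but eats most of the frame budget across cards, which is exactly why SLI/CrossFire had to fall back on alternate-frame tricks.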
     
    Venix, Maddness and tunejunky like this.
  8. tunejunky

    tunejunky Ancient Guru

    Messages:
    4,451
    Likes Received:
    3,071
    GPU:
    7900xtx/7900xt
Absolutely.
At first it will come in at familiar price points, since the easiest product to make and sell for gaming will be the basic "chiplet" on its own card. That answers any question of availability, and the familiar disabled-or-not shader counts give you at least two versions at the lower end.
For the enthusiast level they will use MCM; it will easily surpass Xfire/SLI without any of the problems, drivers included.
For the really high end, you can have multiple chiplets on a large package and radically dial up performance to whatever level you want Nvidia to struggle to hit for a few years.
     
    Silva and Venix like this.
  9. Death_Lord

    Death_Lord Guest

    Messages:
    722
    Likes Received:
    9
    GPU:
    Aorus 1080 GTX 8GB
We need a new platform; motherboards with add-in cards the way we have them now is too old school. Imagine a computer that's built like a car engine, in 3D, where you can integrate cooling systems right inside the hardware and add GPU or CPU modules like cylinder heads on a gas engine.
     
  10. tunejunky

    tunejunky Ancient Guru

    Messages:
    4,451
    Likes Received:
    3,071
    GPU:
    7900xtx/7900xt
It can be done, except for the IP issues; plus, the more complex a system, the greater the chance of breakdown.
There is no way this can be done legally because of patent law. And if someone does license the patents, great: now you have a console, trapped like a fly in amber.
     
    Venix likes this.

  11. Venix

    Venix Ancient Guru

    Messages:
    3,472
    Likes Received:
    1,972
    GPU:
    Rtx 4070 super
    as far the "bridge chip " is the one pretending to be the gpu and schedules the 2 or more mcms to do rendering task with out the graphics engines being aware there are 2 gpus working on it ...as far it does a good job ... it will be the holly grail of multichip gpus ! I am still waiting to see results of it since a lot of things sound awesome in paper and on execution some times are underwhelming , that said this looks very promising !
     
    tunejunky likes this.
  12. tunejunky

    tunejunky Ancient Guru

    Messages:
    4,451
    Likes Received:
    3,071
    GPU:
    7900xtx/7900xt
Five years ago I was lucky enough to be in the audience for an AMD industry press conference (Epyc/Instinct). They were still working out the scheduling issues, but were excited about gen 2 Infinity Fabric (IF). They said then that compute performance was off the charts on their MCM prototypes for Instinct, and that the "last mile of the road" was the most difficult. They said gen 3 would see the MCM on the market, and "shortly" after it would reach the gaming market. Well, we all know that "shortly" at a road-map conference can end up being a long time for us, but I was as stoked as a teenage boy after a sports victory.
For all the folks crazy about RT: each chiplet is small enough that the real estate on the physical chip would easily allow for RT cores, which would increase overall performance without the need to spread the task around like they do now (RX 6xxx). But that all depends on how they work out the scheduling.
I wouldn't be surprised if gen 1 used a specific chiplet to control it all; or rather, I'd be surprised if they didn't.
     
    Venix likes this.
