TSMC to use gate-all-around transistors for its 2nm node

Discussion in 'Frontpage news' started by Hilbert Hagedoorn, Sep 22, 2020.

  1. Hilbert Hagedoorn

    Hilbert Hagedoorn Don Vito Corleone Staff Member

    Messages:
    39,185
    Likes Received:
    7,829
    GPU:
    AMD | NVIDIA
    fantaskarsef likes this.
  2. Kaarme

    Kaarme Ancient Guru

    Messages:
    2,068
    Likes Received:
    729
    GPU:
    Sapphire 390
    They certainly look more complex, based on the illustration.
     
  3. Herem

    Herem Active Member

    Messages:
    70
    Likes Received:
    17
    GPU:
    Nvidia 1080
    I wonder how many pluses Intel will need to compete with this?
     
    angelgraves13 and Undying like this.
  4. nevcairiel

    nevcairiel Master Guru

    Messages:
    735
    Likes Received:
    281
    GPU:
    MSI 1080 Gaming X
    Intel is also developing GAAFET technology for use in what's presumed to be called its 5nm node (note: those sizes are marketing names at this point and not representative of any feature sizes on the chips).

    The funny part is that the "source" link in the above article actually points to the AnandTech article about Intel's GAAFET plans, and not to anything related to TSMC. :D
     

  5. mrkuro

    mrkuro New Member

    Messages:
    3
    Likes Received:
    1
    GPU:
    MSI 1080 TI LIGHTNING Z
    Comparing TSMC nm vs. Intel nm or Samsung nm is pointless and always has been. It's just pure marketing and speculation.
     
    HandR likes this.
  6. JamesSneed

    JamesSneed Maha Guru

    Messages:
    1,035
    Likes Received:
    427
    GPU:
    GTX 1070
    So around 2024 we should see Ryzen CPUs on TSMC's 2nm process. That is crazy. The transistor density is going to be off the charts. I have a feeling the APU is going to take over when we get down around this density. We are talking about an expected 3.5x density improvement over 7nm. The largest Navi, at a rumored 505mm², would be about 144mm², which would make a nice little chiplet to go along with a small 16-core CPU chiplet. Things are going to get weird.
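    The shrink math above can be sketched quickly (assuming TSMC's claimed ~3.5x density figure and idealized, uniform area scaling — real designs scale logic, SRAM, and I/O differently, so this is an upper bound):

    ```python
    # Back-of-envelope die shrink under a claimed density gain.
    # Assumes every structure scales perfectly with density,
    # which real chips never quite achieve.
    def shrunk_area(area_mm2: float, density_gain: float) -> float:
        """Area a design would occupy if all structures scaled with density."""
        return area_mm2 / density_gain

    navi_n7_mm2 = 505.0  # rumored big-Navi die size on 7nm
    print(round(shrunk_area(navi_n7_mm2, 3.5)))  # -> 144
    ```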
     
  7. Kaarme

    Kaarme Ancient Guru

    Messages:
    2,068
    Likes Received:
    729
    GPU:
    Sapphire 390
    Seeing how AMD hasn't been in any hurry even to bother upgrading the APUs from the ancient GCN architecture, I don't think you are looking at it realistically. Even more so with Intel now flexing its muscles in the discrete video card market. It's simply much more profitable for a company to sell a CPU and a GPU separately than both in the same package. Furthermore, people like to upgrade the video card more often than the CPU+mobo, so it's a big business on its own, for both the GPU chip manufacturer and the video card manufacturer partners. As far as games go, CPUs are not the limit currently. GPUs, however, are. That reveals how difficult it is to make powerful enough GPUs, even if you make the chip gargantuan and allow it to guzzle electricity like there's no tomorrow. So, no iGPU is going to be enough any time soon.
     
  8. JamesSneed

    JamesSneed Maha Guru

    Messages:
    1,035
    Likes Received:
    427
    GPU:
    GTX 1070
    You will see AMD pull up the APU roadmap in 2021 :) Anyhow, we need a friendly wager, because with 3.5x density even 3090-class GPUs are a possible reality for APUs.
     
  9. TheSissyOfFremont

    TheSissyOfFremont Member Guru

    Messages:
    120
    Likes Received:
    49
    GPU:
    2070 Super
    Schoolboy question here, and I'm going to assume any problems with what I'm suggesting are going to be down to latency/bandwidth:

    Is there any potential future where AMD or Intel (or even Nvidia, if they start making CPUs) can improve gaming performance through an architecture that takes advantage of both an APU and a dGPU?
    So some part of the game rendering is taken care of by hardware that is fundamentally more effective/efficient when placed on the GPU portion of the APU rather than on the dGPU?
     
  10. Denial

    Denial Ancient Guru

    Messages:
    13,081
    Likes Received:
    2,511
    GPU:
    EVGA 1080Ti
    It's exactly what you said - latency and bandwidth.

    What you're asking is basically the direction chips are going though. I highly suggest you read this whitepaper:

    https://research.nvidia.com/sites/default/files/publications/ISCA_2017_MCMGPU.pdf

    Similar to how AMD started doing multi-chip modules with their CPUs, eventually GPUs will be the same (I actually predict HPC GPUs of the next generation will be semi-MCM), which isn't too far off what you're asking. If you follow the whitepaper, GPUs are way more sensitive to latency: even if you have 3-4x the current bandwidth of interconnects like Infinity Fabric, it's not enough to overcome the penalties from MCM latency (nanoseconds), let alone APU/dGPU latency (microseconds). There may be some workloads worth sharing, but there is a ton of engineering, specifically software scheduling, required to do that correctly for potentially not much of a performance increase. It would also probably have to be tuned for every generation of APU/dGPU, which just complicates it further.
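    To put those latency numbers in perspective, here's a rough conversion into GPU clock cycles. The clock speed and both latency figures are illustrative assumptions, not measured values:

    ```python
    # Convert an interconnect round-trip latency into GPU clock cycles,
    # to show why microsecond-scale hops dwarf nanosecond-scale ones.
    def latency_cycles(latency_s: float, clock_hz: float) -> float:
        return latency_s * clock_hz

    gpu_clock = 1.5e9        # assumed ~1.5 GHz GPU clock
    mcm_link = 100e-9        # assumed on-package (MCM-style) hop, ~100 ns
    apu_dgpu = 1e-6          # assumed APU<->dGPU hop over PCIe, ~1 us

    print(round(latency_cycles(mcm_link, gpu_clock)))  # -> 150
    print(round(latency_cycles(apu_dgpu, gpu_clock)))  # -> 1500
    ```

    So with these assumed numbers, every round trip to a dGPU costs an order of magnitude more idle cycles than an on-package hop, which is why scheduling work across the split is so hard to make pay off.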
     
    TheSissyOfFremont likes this.

  11. JamesSneed

    JamesSneed Maha Guru

    Messages:
    1,035
    Likes Received:
    427
    GPU:
    GTX 1070
    Yes. You already see the initial designs from AMD in the HPC space, but it's large and costs too much. It is very likely that as HBM gets cheaper and made in higher volumes, you will see APUs with a CPU, a GPU, and, say, a shared 64GB of HBM memory. This is why HBM was invented in the first place: for when companies move to chiplet-like approaches using 3D stacking.
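    As a rough sketch of why shared HBM is the enabler here: peak bandwidth is just bus width times transfer rate. The figures below are illustrative, loosely based on published DDR4 and HBM2 numbers:

    ```python
    # Peak memory bandwidth in GB/s from bus width (bits) and
    # transfer rate (gigatransfers per second).
    def bandwidth_gbs(bus_bits: int, gigatransfers: float) -> float:
        return bus_bits / 8 * gigatransfers

    # dual-channel DDR4-3200: 128-bit bus at 3.2 GT/s
    print(bandwidth_gbs(128, 3.2))   # -> 51.2
    # a single HBM2 stack: 1024-bit bus at 2.0 GT/s
    print(bandwidth_gbs(1024, 2.0))  # -> 256.0
    ```

    One stack of HBM next to the APU die gives several times the bandwidth of the dual-channel DDR4 an iGPU normally has to share with the CPU, which is the whole point of stacking it on-package.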
     
  12. Kaarme

    Kaarme Ancient Guru

    Messages:
    2,068
    Likes Received:
    729
    GPU:
    Sapphire 390
    How would you handle the memory for that 3090 class iGPU? At least 16GB of it, and with high bandwidth?

    In any case, things that tax the GPU are extremely easy to add to games. Ultimately you need nothing but to add more stuff on a higher resolution screen quicker (or a VR headset). That's why no GPU is ever enough. It's so easy to demand more.
     
  13. angelgraves13

    angelgraves13 Ancient Guru

    Messages:
    2,124
    Likes Received:
    585
    GPU:
    RTX 2080 Ti FE
    Probably 2026, I'd say. We still haven't seen 5nm.
     
  14. JamesSneed

    JamesSneed Maha Guru

    Messages:
    1,035
    Likes Received:
    427
    GPU:
    GTX 1070
    I wouldn't doubt it moves out; nodes are getting very complicated. TSMC is mass-producing on 5nm for Apple as we speak, so it's not that TSMC is behind today. I suspect we will see APUs start to heat up on TSMC's 5nm and Intel's 7nm, which is when both move to using EUV. It should bring the needed power improvements to have a more powerful GPU under the hood. These would likely compete with the lowest-end dedicated GPU SKUs, which has not happened yet for integrated GPUs. You can already see this a bit with the latest generation of consoles that are coming out shortly.
     
    Last edited: Sep 23, 2020
    angelgraves13 likes this.
