Samsung Talks About Chip Fab Production Roadmap up to 3 nanometers

Discussion in 'Frontpage news' started by Hilbert Hagedoorn, May 28, 2018.

  1. Hilbert Hagedoorn

    Hilbert Hagedoorn Don Vito Corleone Staff Member

    Messages:
    48,398
    Likes Received:
    18,573
    GPU:
    AMD | NVIDIA
  2. wavetrex

    wavetrex Ancient Guru

    Messages:
    2,450
    Likes Received:
    2,547
    GPU:
    TUF 6800XT OC
    Sheesh...
    Many of us are still running 32nm Sandy Bridge, 22nm Ivy Bridge/Haswell, and the occasional 14nm part, and these guys are already discussing 3nm.

    I wonder what kind of processing power will be possible with mature "3nm" tech. Trillions of transistors?
     
  3. FrostNixon

    FrostNixon Master Guru

    Messages:
    275
    Likes Received:
    57
    GPU:
    RX 5700 XT
    Well, Intel has been dropping to smaller nodes almost every year, yet the performance increase has been almost unnoticeable, so lithography is not what matters most; architecture is.
     
  4. HardwareCaps

    HardwareCaps Guest

    Messages:
    452
    Likes Received:
    154
    GPU:
    x
    Seems too ambitious IMO. It's a shame, though, that a company like Samsung focuses mainly on mobile/enterprise...
     

  5. Kaarme

    Kaarme Ancient Guru

    Messages:
    3,513
    Likes Received:
    2,355
    GPU:
    Nvidia 4070 FE
    What matters most for Intel is profit (actually, that goes for any company that plans to stay alive). The smaller the process technology, the more units they get out of a single wafer. Of equal importance is the improved energy efficiency, which is a decisive selling factor in many market sectors. So, yeah, while Intel had no interest in developing their architecture, they did try to develop the process technology.
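
    To put rough numbers on the "more units per wafer" point, here is a minimal sketch using the standard dies-per-wafer approximation, with made-up die sizes (not figures from the article):

        import math

        def dies_per_wafer(die_area_mm2, wafer_diameter_mm=300):
            # Classic approximation: gross wafer area divided by die area,
            # minus an edge-loss term for partial dies around the rim.
            radius = wafer_diameter_mm / 2
            return math.floor(math.pi * radius**2 / die_area_mm2
                              - math.pi * wafer_diameter_mm / math.sqrt(2 * die_area_mm2))

        # Hypothetical shrink of the same design from 100 mm^2 to 70 mm^2:
        print(dies_per_wafer(100))  # ~640 candidate dies per 300 mm wafer
        print(dies_per_wafer(70))   # ~930 candidate dies -> more units (and revenue) per wafer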
     
    HardwareCaps likes this.
  6. xIcarus

    xIcarus Guest

    Messages:
    990
    Likes Received:
    142
    GPU:
    RTX 4080 Gamerock
    Don't discount the importance of lithography. Nvidia/AMD gained a ton of performance by switching to a smaller node just because it made higher frequencies possible; the architectural differences were minimal.
    It's worth mentioning that the node jump was pretty substantial, however.

    On the other hand, Intel have been leaders when it comes to lithography, yet the performance wasn't there because they were sitting on their asses collecting laurels instead of actually improving their CPUs.

    Ryzen was a big comeback for AMD, that is absolutely true. But if Intel had properly worked on their CPUs during these past years, Ryzen would have simply been a competitor.
    Instead, Ryzen is stepping on Intel's face over and over again, and I think it's going to get even more brutal next generation.
     
  7. HardwareCaps

    HardwareCaps Guest

    Messages:
    452
    Likes Received:
    154
    GPU:
    x
    The process node is the backbone of any processor. While the gains now are not as massive as before, they are still the most significant improvement we can get.
    You get not just more components in the same die area but also efficiency and power-delivery improvements, which affect performance directly (and of course frequency should improve too).
     
  8. cryohellinc

    cryohellinc Ancient Guru

    Messages:
    3,535
    Likes Received:
    2,974
    GPU:
    RX 6750XT/ MAC M1
    I'm more interested in what happens past 1nm. Will there be a nano-centimetre? Quantum computing? Or something radically different?
     
  9. Fox2232

    Fox2232 Guest

    Messages:
    11,808
    Likes Received:
    3,371
    GPU:
    6900XT+AW@240Hz
    You have to have future plans. Otherwise your stock value goes to hell the closer you get to the end of your current business plans.
    Imagine being the technology leader, then standing still and letting everyone catch up to you.
     
    cryohellinc likes this.
  10. tunejunky

    tunejunky Ancient Guru

    Messages:
    4,346
    Likes Received:
    2,988
    GPU:
    7900xtx/7900xt
    To me this is Samsung responding to TSMC, essentially saying "we're bigger, badder, and all-around better," even if it's not true.

    And folks... you are ignoring the elephant in the room... Apple.
    Apple has been paying incentives for process shrinkage ever since they went A8 (over $2 billion to date), and they might ditch Intel sooner than thought for regular computing. Both their own (future) designs and AMD's are testing faster at lower power as SoCs.

    And if you haven't noticed, microprocessors are becoming more and more SoC-like (esp. Ryzen-based).
     

  11. schmidtbag

    schmidtbag Ancient Guru

    Messages:
    7,975
    Likes Received:
    4,342
    GPU:
    Asrock 7700XT
    Nano-centimeter doesn't make sense. Picometer is the next step down.
    Quantum computers are a very different "species" of computer. They don't use traditional transistors or binary calculations, so their development doesn't really have much in common with conventional silicon chips. I personally don't see quantum computers being available for home use in the foreseeable future, at least not as they're used now. They're ideal for science-based calculations with massive and complex numbers, but not a whole lot else. Much like a CPU vs a GPU, quantum computers are good at some things and worse at others.

    I would actually argue Ryzen is the least SoC-ish of mainstream processors, whereas ARM is the most. Almost all of your phone's capabilities are pinned down to one chip. Everything else is just power regulation, connectors, and sensors that need to be positioned elsewhere. Intel also has some SoCs that don't have an external chipset.
     
    Last edited: May 29, 2018
  12. Venix

    Venix Ancient Guru

    Messages:
    3,440
    Likes Received:
    1,944
    GPU:
    Rtx 4070 super
    Well, 3nm talk... I'm not sure how much harder they can push it after that... they'll have to change the material to something more efficient than sand. I guess the ultimate shrink is down to 1 atom of thickness? Good luck going that thin though :p
     
  13. schmidtbag

    schmidtbag Ancient Guru

    Messages:
    7,975
    Likes Received:
    4,342
    GPU:
    Asrock 7700XT
    Single-atom transistors or switches are a real thing. The tricky part is figuring out how to make use of them, let alone on a mass-produced scale. Just because you can go smaller, that doesn't mean you'll benefit from it. This is why I think Intel has been taking so long with 10nm - I'm sure they had it working well over a year ago, but it resulted in lower clocks. The advantages of such a die shrink are outweighed by the cons of slowing down the CPU.
     
    Venix likes this.
  14. Fox2232

    Fox2232 Guest

    Messages:
    11,808
    Likes Received:
    3,371
    GPU:
    6900XT+AW@240Hz
    How do you shape a transistor and insulate it if it has to be the size of one atom? You have just that one atom to do all of that with.

    As for a 3nm transistor, that's only about 11 silicon atoms in length. Luckily for the technology, silicon takes up a lot of space relative to its weight, so many other elements, lighter or heavier, can fit more atoms into the same area.
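
    For reference, the back-of-envelope arithmetic behind that count, assuming an effective silicon atom diameter of roughly 0.27 nm (the exact number depends on which spacing you use):

        si_diameter_nm = 0.27   # rough effective diameter of a silicon atom (assumption)
        feature_nm = 3.0        # taking the "3nm" node label literally
        print(feature_nm / si_diameter_nm)  # ~11 atoms across a 3 nm feature
        # Using the Si-Si covalent bond length (~0.235 nm) instead gives ~13 atoms;
        # either way, only about a dozen atoms span such a feature.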
     
    Venix likes this.
  15. StewieTech

    StewieTech Chuck Norris

    Messages:
    2,537
    Likes Received:
    934
    GPU:
    MSI gtx 960 Gaming
    Still bigger than my penis. Nah, I keed! But really, they're gonna need a new material soon. Exciting times!
     

  16. Venix

    Venix Ancient Guru

    Messages:
    3,440
    Likes Received:
    1,944
    GPU:
    Rtx 4070 super
    Fox and schmidtbag, fair points. What I wanted to say is that we really need to find a new material, and then again, how long would it take to reach the nm limits? I believe we are approaching an era where fabs won't be able to shrink any further... except... if Ant-Man helps us? :p
     
  17. schmidtbag

    schmidtbag Ancient Guru

    Messages:
    7,975
    Likes Received:
    4,342
    GPU:
    Asrock 7700XT
    Finding a new material is easier said than done. Keep in mind that it is no coincidence that silicon is used for transistors. It has all of the right properties to make for a good one: it's abundant and cheap, it's a semiconductor, it has relatively small atoms, it's tetravalent (this is important), and the other elements it can be doped with have been well-researched at this point. So, take a look at all of the other potential candidates:
    * Tin - too expensive, too low of a melting point, and too conductive
    * Lead - biohazard, large atoms, and too conductive
    * Flerovium - Synthetic, and therefore utterly useless
    * Germanium - Can and has actually been used in transistors (in fact, it was used for the first ever transistor), but it's expensive and more picky about the manufacturing process (and in case you're not aware, silicon is pretty damn picky). It might be good for special-use cases, but not for mass production.
    * Looking beyond "group 14", there are potential candidates like gallium arsenide, but I get the impression those seem to only be suitable for proof-of-concepts rather than practical approaches. They definitely wouldn't help in terms of reducing transistor size (besides, gallium is relatively expensive and arsenic is a biohazard, so that doesn't help).

    So, that just leaves us with carbon. Carbon is being investigated for use in transistors, and it is thought to be a possible successor to silicon. The problem with carbon is figuring out a cost-effective way to manufacture transistors from it; otherwise, the element itself is very cheap and abundant.


    Anyway, I don't think we really need to ditch silicon any time soon. I think one of the reasons so many companies are investing in AI lately is that they're trying to use AI to create new processor architectures. An AI could notice something humans may have never thought of before and get us a lot more efficiency and speed in our designs. Besides, look at a CPU architecture vs a GPU architecture with the same number of transistors: depending on the task, one will decimate the other. But who says it has to be that way? It may be possible to create a design that obsoletes both CPUs and GPUs (as we know them). Such a design could have a lot of potential benefits, like having everything in shared memory (integrated GPUs still work relatively independently of the CPU), and a lot of time saved by not needing to communicate over PCIe. Maybe such a CPU could be modular, where you could basically add more cores over PCIe if you really needed to.
     
    Last edited: May 29, 2018
    wavetrex and Venix like this.
  18. tunejunky

    tunejunky Ancient Guru

    Messages:
    4,346
    Likes Received:
    2,988
    GPU:
    7900xtx/7900xt
    ...actually, Ryzen is the fruition of decades of SoC design, as is "Infinity Fabric".
    The entire scalable design was done simply because SoCs were unwieldy and less efficient than thought, and were (previously) the largest single market for AMD.
    Ryzen facilitated much more advanced SoCs (see Xbox/PlayStation) and industry-specific SoCs, but the design of the CPU itself was driven by SoC designers.
     
  19. schmidtbag

    schmidtbag Ancient Guru

    Messages:
    7,975
    Likes Received:
    4,342
    GPU:
    Asrock 7700XT
    Infinity Fabric doesn't make a product an SoC. IF is just a feature that makes SoCs much easier to make. So AMD's design has the potential to be the "most SoC-ish product ever made" but currently, AMD does not hold that title.
    What makes an SoC is how many components you integrate into a single chip, hence the name. A Threadripper CPU with a discrete chipset, a discrete GPU, discrete RAM, etc is barely an SoC, because you have so many core components in separate chips. Compare that to some ARM processors, where the CPU, GPU, USB controller, sensor controllers, storage controller, PCIe lanes, and even the RAM are all integrated into 1 unified package. That is what makes an SoC; it's literally the entire system on a chip.
     
  20. Venix

    Venix Ancient Guru

    Messages:
    3,440
    Likes Received:
    1,944
    GPU:
    Rtx 4070 super
    Even if carbon is viable, there is still the matter of production. To my understanding, silicon fabs cost billions to build, and building a new fab for carbon transistors is not something Samsung or GloFo or TSMC or Intel will invest in unless the result is vastly superior, and they'd have to be sure it will work!
     
