Samsung Talks About Chip Fab Production Roadmap up to 3 nanometers

Discussion in 'Frontpage news' started by Hilbert Hagedoorn, May 28, 2018.

  1. Hilbert Hagedoorn

    Hilbert Hagedoorn Don Vito Corleone Staff Member

    Messages:
    32,391
    Likes Received:
    1,581
    GPU:
    AMD | NVIDIA
  2. wavetrex

    wavetrex Master Guru

    Messages:
    299
    Likes Received:
    72
    GPU:
    Zotac GTX1080 AMP!
    Sheesh...
    Many of us are still running 32nm Sandy Bridge, 22nm Ivy Bridge/Haswell, and the occasional 14nm part, and these guys are already discussing 3nm.

    I wonder what kind of processing power will be possible with mature "3nm" tech. Trillions of transistors?
     
  3. FrostNixon

    FrostNixon Member Guru

    Messages:
    115
    Likes Received:
    6
    GPU:
    GT 555M 1GB
    Well, Intel has been shrinking its node almost every year, yet the performance increase has been almost unnoticeable, so lithography is not the main factor; architecture is.
     
  4. HardwareCaps

    HardwareCaps Active Member

    Messages:
    68
    Likes Received:
    11
    GPU:
    Intel UHD 630
    Seems too ambitious IMO. It is a shame, though, that a company like Samsung focuses mainly on mobile/enterprise...
     

  5. Kaarme

    Kaarme Maha Guru

    Messages:
    1,123
    Likes Received:
    156
    GPU:
    Sapphire 390
    What matters most to Intel is profit (as it is for any company that plans to stay alive). The smaller the process technology, the more units they get out of a single manufactured wafer. Of equal importance is the improved energy efficiency, which is a decisive selling factor in many market sectors. So, yeah, while Intel had no interest in developing their architecture, they did try to develop the process technology.
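    As a rough illustration of the units-per-wafer point, here is a minimal sketch using the classic dies-per-wafer approximation; the 300mm wafer and the 150mm²/100mm² die sizes below are made-up example values, not figures from the article:

import math

def dies_per_wafer(die_area_mm2, wafer_diameter_mm=300):
    # Classic approximation: gross dies on a round wafer minus the
    # partial dies lost around the edge.
    radius = wafer_diameter_mm / 2
    return int(math.pi * radius ** 2 / die_area_mm2
               - math.pi * wafer_diameter_mm / math.sqrt(2 * die_area_mm2))

print(dies_per_wafer(150))  # ~416 candidate dies per 300mm wafer
print(dies_per_wafer(100))  # ~640 -- same wafer, smaller die, more units to sell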
     
    HardwareCaps likes this.
  6. xIcarus

    xIcarus Master Guru

    Messages:
    913
    Likes Received:
    70
    GPU:
    1080 Ti AORUS
    Don't discount the importance of lithography. Nvidia/AMD gained a ton of performance by switching to a smaller node just because it made higher frequencies possible; the architectural differences were minimal.
    It's worth mentioning, however, that the node jump was pretty substantial.

    On the other hand, Intel has been the leader in lithography, yet the performance wasn't there because they were sitting on their asses collecting laurels instead of actually improving their CPUs.

    Ryzen was a big comeback for AMD, that is absolutely true. But if Intel had properly worked on their CPUs during these past years, Ryzen would simply have been a competitor.
    Instead, Ryzen is stepping on Intel's face over and over again, and I think it's going to get even more brutal next generation.
     
  7. HardwareCaps

    HardwareCaps Active Member

    Messages:
    68
    Likes Received:
    11
    GPU:
    Intel UHD 630
    The process node is the backbone of any processor. While the gains are not as massive as before, they are still the most significant improvement we can get.
    You get not just more components in the same area, but also efficiency and power-delivery improvements, which affect performance directly (and of course frequency should improve too).
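    A minimal sketch of the idealized area scaling behind that point; the 14nm and 10nm node names are just example inputs, and real process nodes do not shrink exactly by their marketing numbers:

# Idealized geometric scaling: a full node shrink reduces each dimension,
# so the area per transistor falls with the square of the linear shrink.
old_node_nm = 14
new_node_nm = 10

linear_shrink = new_node_nm / old_node_nm      # ~0.71x per dimension
area_per_transistor = linear_shrink ** 2       # ~0.51x area per transistor
print(f"Ideal density gain: {1 / area_per_transistor:.1f}x")  # ~2.0x transistors per mm^2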
     
  8. cryohellinc

    cryohellinc Maha Guru

    Messages:
    1,393
    Likes Received:
    408
    GPU:
    1080Ti SeaHawkX@2k+PG348Q
    I'm more interested in what happens past 1nm. Will there be a nano-centimetre? Quantum computing? Or something radically different?
     
  9. Fox2232

    Fox2232 Ancient Guru

    Messages:
    6,564
    Likes Received:
    425
    GPU:
    Fury X +AW@240Hz
    You have to have future plans. Otherwise your stock value goes to hell the closer you get to the end of your current business plans.
    Imagine you are the technology leader, and then you stop in place and let everyone catch up to you.
     
    cryohellinc likes this.
  10. tunejunky

    tunejunky Master Guru

    Messages:
    243
    Likes Received:
    42
    GPU:
    gtx 1070 / gtx 1080ti
    To me this is Samsung responding to TSMC, essentially saying "we're bigger, badder, and all-around better", even if it's not true.

    And folks... you are ignoring the elephant in the room: Apple.
    Apple has been paying incentives for process shrinkage ever since they went A8, over $2 billion to date, and they might ditch Intel for regular computing sooner than thought. Both their own (future) designs and AMD's are testing faster at lower power as SoCs.

    And if you haven't noticed, microprocessors are becoming more and more SoC-like (especially the Ryzen-based ones).
     

  11. schmidtbag

    schmidtbag Ancient Guru

    Messages:
    2,860
    Likes Received:
    311
    GPU:
    HIS R9 290
    Nano-centimeter doesn't make sense. Picometer is the next step down.
    Quantum computers are a very different "species" of computer. They don't use traditional transistors or binary calculations, so their development doesn't really have much in common with conventional chips. I personally don't see quantum computers being available for home use in the foreseeable future, at least not as they're used now. They're ideal for science-based calculations with massive and complex numbers, but not a whole lot else. Much like a CPU vs a GPU, quantum computers are good at some things and worse at others.

    I would actually argue Ryzen is the least SoC-ish of mainstream processors, whereas ARM is the most: almost all of your phone's capabilities are packed into one chip. Everything else is just power regulation, connectors, and sensors that need to be positioned elsewhere. Intel also has some SoCs that don't have an external chipset.
     
    Last edited: May 29, 2018
  12. Venix

    Venix Master Guru

    Messages:
    418
    Likes Received:
    64
    GPU:
    Palit 1060 6gb
    Well, 3nm talk... I'm not sure how hard they can push it. After that they have to change the material to something more efficient than sand. I guess the ultimate shrink is down to 1-atom thickness? Good luck going that thin though :p
     
  13. schmidtbag

    schmidtbag Ancient Guru

    Messages:
    2,860
    Likes Received:
    311
    GPU:
    HIS R9 290
    Single-atom transistors or switches are a real thing. The tricky part is figuring out how to make use of them, let alone on a mass-produced scale. Just because you can go smaller, that doesn't mean you'll benefit from it. This is why I think Intel has been taking so long with 10nm - I'm sure they had it working well over a year ago, but it resulted in lower clocks. The advantages of such a die shrink are outweighed by the cons of slowing down the CPU.
     
    Venix likes this.
  14. Fox2232

    Fox2232 Ancient Guru

    Messages:
    6,564
    Likes Received:
    425
    GPU:
    Fury X +AW@240Hz
    How do you want to shape a transistor and insulate it if it has to be the size of one atom? You have just one atom to do all of that.

    As for a 3nm transistor, that's just ~11 silicon atoms of length. Luckily for technology, silicon takes up a lot of space relative to its weight, so most other elements, lighter or heavier, can fit more atoms in the same area.
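    A quick back-of-the-envelope check of that atom count; the 0.235nm figure assumed below is the approximate Si-Si bond length in crystalline silicon, so treat the result as an order-of-magnitude estimate only:

# Rough sanity check: how many silicon bond lengths fit across a 3nm feature?
si_bond_length_nm = 0.235   # approximate Si-Si bond length
feature_size_nm = 3.0

atoms_across = feature_size_nm / si_bond_length_nm
print(f"~{atoms_across:.0f} atoms across")  # roughly a dozen, same ballpark as the ~11 above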
     
    Venix likes this.
  15. StewieTech

    StewieTech Chuck Norris

    Messages:
    2,270
    Likes Received:
    262
    GPU:
    MSI gtx 960 Gaming
    Still bigger than my penis. Nah, I kid! But really, they're going to need a new material soon. Exciting times!
     

  16. Venix

    Venix Master Guru

    Messages:
    418
    Likes Received:
    64
    GPU:
    Palit 1060 6gb
    Fox and schmidtbag, fair points. What I wanted to say is that we really need to find a new material. Then again, how long would it take to reach the nanometre limits? I believe we are approaching an era where fabs will not be able to shrink any more... except... if Ant-Man helps us? :p
     
  17. schmidtbag

    schmidtbag Ancient Guru

    Messages:
    2,860
    Likes Received:
    311
    GPU:
    HIS R9 290
    Finding a new material is easier said than done. Keep in mind that it is no coincidence that silicon is used for transistors. It has all of the right properties to make for a good one: it's abundant and cheap, it's a semiconductor, it has relatively small atoms, it's tetravalent (this is important), and the other elements it can be doped with have been well researched at this point. So, take a look at all of the other potential candidates:
    * Tin - too expensive, too low of a melting point, and too conductive
    * Lead - biohazard, large atoms, and too conductive
    * Flerovium - Synthetic, and therefore utterly useless
    * Germanium - Can and has actually been used in transistors (in fact, it was used for the first ever transistor), but it's expensive and more picky about the manufacturing process (and in case you're not aware, silicon is pretty damn picky). It might be good for special-use cases, but not for mass production.
    * Looking beyond "group 14", there are potential candidates like gallium arsenide, but I get the impression those are only suitable for proofs of concept rather than practical approaches. They definitely wouldn't help in terms of reducing transistor size (besides, gallium is relatively expensive and arsenic is a biohazard, so that doesn't help).

    So, that just leaves us with carbon. Carbon is being investigated for use in transistors, and it is thought to perhaps be the successor to silicon. The problem with carbon is figuring out a cost-effective way to manufacture transistors from it; otherwise, the element itself is very cheap and abundant.


    Anyway, I don't think we really need to ditch silicon any time soon. I think one of the reasons so many companies are investing in AI lately is that they're trying to use AI to create new processor architectures. An AI could notice something humans may never have thought of before and get us a lot more efficiency and speed in our designs. Besides, look at a CPU architecture vs a GPU architecture with the same number of transistors - depending on the task, one will decimate the other. But who says it has to be that way? It may be possible to create a design that obsoletes both CPUs and GPUs (as we know them). Such a design could have a lot of potential benefits, like having everything in shared memory (integrated GPUs still work relatively independently of the CPU), and a lot of time saved by not needing to communicate over PCIe. Maybe such a CPU could be modular, where you could basically add more cores over PCIe if you really needed to.
     
    Last edited: May 29, 2018
    wavetrex and Venix like this.
  18. tunejunky

    tunejunky Master Guru

    Messages:
    243
    Likes Received:
    42
    GPU:
    gtx 1070 / gtx 1080ti
    ...actually, Ryzen is the fruition of decades of SoC design, as is Infinity Fabric.
    The entire scalable design was done because SoCs turned out to be unwieldy and less efficient than expected, and they were (previously) the largest single market for AMD.
    Ryzen facilitated much more advanced SoCs (see Xbox/PlayStation) and industry-specific SoCs, but the design of the CPU itself was driven by SoC designers.
     
  19. schmidtbag

    schmidtbag Ancient Guru

    Messages:
    2,860
    Likes Received:
    311
    GPU:
    HIS R9 290
    Infinity Fabric doesn't make a product an SoC. IF is just a feature that makes SoCs much easier to make. So AMD's design has the potential to become the "most SoC-ish product ever made", but currently AMD does not hold that title.
    What makes an SoC is how many components you integrate into a single chip, hence the name. A Threadripper CPU with a discrete chipset, a discrete GPU, discrete RAM, etc. is barely an SoC, because so many core components sit in separate chips. Compare that to some ARM processors, where the CPU, GPU, USB controller, sensor controllers, storage controller, PCIe lanes, and even the RAM are all integrated into one unified package. That is what makes an SoC; it's literally the entire system on a chip.
     
  20. Venix

    Venix Master Guru

    Messages:
    418
    Likes Received:
    64
    GPU:
    Palit 1060 6gb
    Even if carbon is viable, there is still the matter of production. To my understanding, silicon fabs cost billions to build; building a new fab for carbon transistors is not something Samsung, GloFo, TSMC, or Intel will invest in unless the result is vastly superior, and they have to be sure it will work!
     
