TSMC expects 5nm production in 2020

Discussion in 'Frontpage news' started by Hilbert Hagedoorn, Jan 19, 2016.

  1. Hilbert Hagedoorn

    Hilbert Hagedoorn Don Vito Corleone Staff Member

    Messages:
    37,006
    Likes Received:
    6,082
    GPU:
    AMD | NVIDIA
    So the story goes like this: we'll see 16, 15 and 14nm silicon this year. Then in two years we'll see 7nm... and two years later, in 2020, we'll be at 5nm! Man, we are running out of scale, so after...

    TSMC expects 5nm production in 2020
     
  2. labidas

    labidas Master Guru

    Messages:
    231
    Likes Received:
    38
    GPU:
    HD7870
  3. Kaarme

    Kaarme Ancient Guru

    Messages:
    1,783
    Likes Received:
    522
    GPU:
    Sapphire 390
    They don't seem concerned at all, happily talking already about 7 and 5nm, despite the pioneer Intel encountering problems with 10nm.
     
  4. FerCam™

    FerCam™ Master Guru

    Messages:
    241
    Likes Received:
    4
    GPU:
    MSI Gaming GTX980
    But wasn't there a quantum barrier somewhere at these scales (below 10nm), because there are too few atoms to dope the semiconductor material as N-type or P-type?
    Will this be the new TSMC 20nm node?
     

  5. fantaskarsef

    fantaskarsef Ancient Guru

    Messages:
    11,258
    Likes Received:
    3,314
    GPU:
    2080Ti @h2o
    Uhm, maybe it's just me, but do you guys really expect them to have it ready in 2020? Maybe they want to have it, but with low yields we're more likely to see anything on such a small scale available only in small numbers, and then they'll probably sell it to mobile manufacturers. That's just what I think will happen, so don't panic.
     
  6. labidas

    labidas Master Guru

    Messages:
    231
    Likes Received:
    38
    GPU:
    HD7870
    http://www.kitguru.net/components/a...m-chip-production-in-2016-shows-first-wafers/

    Guess it's been solved for 10nm. Bet afterwards they'll transition from SiO2 to SiGe. 7nm should give 50% perf over 10nm.
    Who knows... there are even working 1.8nm carbon transistors. And beyond...
    http://www.wired.co.uk/news/archive/2015-07/22/tiniest-processor-moores-law
     
  7. Backstabak

    Backstabak Master Guru

    Messages:
    514
    Likes Received:
    194
    GPU:
    Gigabyte Rx 5700xt
    Seems more like PR or wishful thinking than anything. There are already quantum effects below ~8 nm. But hey, if they'll make it work for large-scale production, I'll only be happy.
     
  8. SSD_PRO

    SSD_PRO Member Guru

    Messages:
    172
    Likes Received:
    22
    GPU:
    EVGA GTX 1070
    Now we just need to see actual tangible benefits of the silicon shrinks. Comparing the 32nm i7-2600K and the 14nm i7-6700K doesn't show much progress. Both are awesome CPUs but differ little: four cores, eight threads on each. The CPUs are basically the same in performance, with the only significant improvements coming in the iGPU, instruction set, PCIe revision, and memory controller of the 6700K.

    Well, I guess I kind of thought about it and those are a lot of changes...
     
  9. Lowice

    Lowice Member

    Messages:
    45
    Likes Received:
    0
    GPU:
    MSI 980ti OC
    It's not that they can't produce it now; it's only that they want to make as much funny paper (money) as possible. Hence money slows down the state of technology.

    ****ing sad.
     
  10. Denial

    Denial Ancient Guru

    Messages:
    12,563
    Likes Received:
    1,792
    GPU:
    EVGA 1080Ti
    I'm sorry but you have absolutely no idea what you're talking about.

    I would slightly, somewhat, kind of agree with you with nearly every other industry. But chip fabrication? No.

    Are these companies making money? Yes, obviously; it is a business after all and they have people to feed. Are they also at the complete forefront of the boundaries of physics? Absolutely. To the point where nearly all the friends who graduated with me from RIT (Rochester Institute of Technology) with MME degrees are pursuing post-grad theoretical physics/particle physics, because none of the companies doing this stuff will hire them without it.

    Fabbing is probably the only mass-scale industry where the application of quantum mechanics actually plays a role. Quantum effects don't start at 8nm; they started years ago. They've been using Fowler-Nordheim tunneling in practical fab processes like NAND memory for years.

    This isn't even to mention that companies like Intel, with $5B R&D budgets, trying hard as hell to compete in the mobile sector, LITERALLY have the best engineers on the face of the planet working the problem. And even their stuff gets delayed for years at a time. Or the fact that TSMC has lost several major contracts in the last few years, so why in god's name would they be slowing down when Samsung is already surpassing them?

    The idea that they are milking money doesn't make sense from a technical perspective, nor does it make sense from a business perspective. It's probably one of the most competitive sectors in computing, with companies like ARM/Intel/Samsung/TSMC/GF/Nvidia/AMD/Huawei/Mediatek and countless others all trying to one-up each other.
     
    Last edited: Jan 19, 2016

  11. Reardan

    Reardan Master Guru

    Messages:
    317
    Likes Received:
    26
    GPU:
    GTX 2080
    TSMC, Samsung, Intel, Global Foundries, AMD all have the absolute best minds in their fields, the absolute best technology and resources and none of them are sure, or can agree, how much further down past 10nm we can even go. 5nm, if you allow no space between them, is 25 silicon atoms across. And you need billions of those to make a working chip.
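    That atom count can be sanity-checked with quick arithmetic (the ~0.2 nm atomic spacing below is an assumed round figure for illustration, not an exact lattice dimension):

```python
# Rough estimate: how many silicon atoms span a 5 nm feature?
# Assumes an atomic spacing of ~0.2 nm, a round illustrative figure.
feature_nm = 5.0
atom_spacing_nm = 0.2

atoms_across = feature_nm / atom_spacing_nm
print(f"A {feature_nm} nm feature is only ~{atoms_across:.0f} atoms across")
```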

    This is not some regular day at the office, where you just fill in some spreadsheets and get your answer. This is: You fill out your spreadsheet, but pieces of ****ing Jimmy in Accounting's spreadsheet just show up in random cells on your screen because, today, Jimmy's spreadsheet decided they didn't like being cells and instead wanted to be waves.

    These companies currently can't even make features small enough to be a true 16 or 14nm transistor.
     
  12. Turanis

    Turanis Maha Guru

    Messages:
    1,478
    Likes Received:
    218
    GPU:
    Gigabyte RX500
    Mark my words TSMC: Keep Dreaming!

    After Intel has 5nm, maybe you, TSMC, are allowed to have 10nm. Until then, stay with 16nm. ;)
     
  13. umeng2002

    umeng2002 Master Guru

    Messages:
    949
    Likes Received:
    33
    GPU:
    eVGA GTX 970 SC ACX 2.0
    4-5nm then new materials to get higher frequencies?

    What a world we live in when atoms are getting too big for us.
     
  14. Toss3

    Toss3 Member Guru

    Messages:
    184
    Likes Received:
    6
    GPU:
    WC Inno3D GTX 980 TI
    Gallium nitride.
     
  15. labidas

    labidas Master Guru

    Messages:
    231
    Likes Received:
    38
    GPU:
    HD7870
    Any sources for these ludicrous claims, or did you just make all that up?
     

  16. BLEH!

    BLEH! Ancient Guru

    Messages:
    5,919
    Likes Received:
    91
    GPU:
    Sapphire Fury
    Wishful thinking, I'm thinking.
     
  17. sykozis

    sykozis Ancient Guru

    Messages:
    21,233
    Likes Received:
    740
    GPU:
    MSI RX5700
    TSMC isn't claiming they'll have 5nm ready for volume production in 2020. They are simply expecting to be able to fab a chip using 5nm transistors by 2020. None of us have any idea wtf is really going on inside TSMC or what they're really doing. We simply know what their PR says. Unless you actually work in the field just STFU, sit back and expect to be disappointed when it gets delayed for technical reasons like the rest of us.
     
  18. thatguy91

    thatguy91 Ancient Guru

    Messages:
    6,648
    Likes Received:
    98
    GPU:
    XFX RX 480 RS 4 GB
    They have almost 5 years to achieve this, seeing as it is January 2016 and this may happen as late as December 2020. The claim is quite bold, but it's possible that they have it working in principle at the moment... remembering of course that there is a significant difference between making a few hundred transistors (or whatever they do in the early stages of R&D) and a few billion. Keep in mind that, as happens currently, it will probably be the low-power designs first, then the high-performance versions once the technology is tweaked.
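    The gap between a few hundred transistors and a few billion is mostly a yield problem, which the classic Poisson defect-density model illustrates (the defect density and die areas below are made-up illustrative numbers, not actual TSMC figures):

```python
import math

# Classic Poisson yield model: Y = exp(-D * A), where D is the defect
# density (defects per cm^2) and A is the die area (cm^2).
def poisson_yield(defect_density_per_cm2: float, die_area_cm2: float) -> float:
    return math.exp(-defect_density_per_cm2 * die_area_cm2)

# The same defect density hurts a big die far more than a small one,
# which is why small low-power dies ship first on a new node.
small_die = poisson_yield(0.5, 0.5)   # ~50 mm^2 mobile-class die
large_die = poisson_yield(0.5, 6.0)   # ~600 mm^2 big-GPU die
print(f"small die yield: {small_die:.1%}, large die yield: {large_die:.1%}")
```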

    Money in R&D certainly is a huge advantage, but it doesn't necessarily mean they will get there before another company with a much smaller R&D budget. The interesting thing with Intel is how the 14 nm processors compare to the earlier 32 nm Sandy Bridge CPUs. Sure, the Skylake processors are certainly faster, but I would have expected more after 4-5 processor generations (Sandy Bridge --> Ivy Bridge --> Haswell --> Haswell Refresh (because Broadwell wasn't ready) --> Broadwell --> Skylake) and a drop from 32 nm to 14 nm. I guess you could say it's really 4 generations, since Haswell Refresh was a stopgap CPU due to Broadwell having issues, but the principle is still valid. In Intel's case, despite the large R&D budget, I feel the lack of competition probably hasn't helped. A large proportion of that R&D has probably gone towards the integrated graphics, seeing as that is the area that has shown the most improvement, and also the area with the strongest competition (against AMD's APUs).
     
  19. rhysiam

    rhysiam Member

    Messages:
    18
    Likes Received:
    2
    GPU:
    7950
    As well as increased power efficiency, the main benefit from a smaller process node is being able to cram more transistors into the same space. The problem with CPUs at present is that it's not easy to translate those extra transistors into better performance. One simple way is to add more cores. Intel have 18 core 36 thread Xeon CPUs and I'm sure we'll see higher core counts as die shrinks continue. But, of course, gaming and even surprisingly lots of productivity tasks aren't able to use those extra cores effectively.
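    Why extra cores are hard to exploit can be sketched with Amdahl's law (the parallel fractions below are assumed figures for illustration, not measurements of any real game):

```python
# Amdahl's law: speedup = 1 / ((1 - p) + p / n), where p is the
# parallelizable fraction of the work and n is the core count.
def amdahl_speedup(parallel_fraction: float, cores: int) -> float:
    return 1.0 / ((1.0 - parallel_fraction) + parallel_fraction / cores)

# Even a well-threaded workload (say 80% parallel) plateaus quickly:
for n in (4, 8, 18, 36):
    print(f"{n:2d} cores -> {amdahl_speedup(0.8, n):.2f}x speedup")
```

    With 20% of the work stuck on one thread, even 36 cores top out at a 4.5x speedup, which is why an 18-core Xeon does little for gaming.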

    For GPUs on the other hand, it's much easier to put the extra transistors to good use. Case in point: A Titan X has roughly 50% more transistors than a GTX 980, which Nvidia has used to give it 50% more CUDA cores, texture units, ROPs and a 50% wider memory bus. If clocked the same, you'll find that a Titan X is roughly 50% faster than the 980... those extra transistors are translating into extra performance.
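    Those ratios line up with the commonly cited specs for the two chips (the figures below are the usual published numbers; treat them as approximate):

```python
# Commonly cited specs for the Titan X (GM200) vs GTX 980 (GM204),
# used here only to check the "~50% more" claim above.
specs = {
    "transistors_billion": (8.0, 5.2),
    "cuda_cores":          (3072, 2048),
    "memory_bus_bits":     (384, 256),
}

for name, (titan_x, gtx_980) in specs.items():
    increase = (titan_x / gtx_980 - 1) * 100
    print(f"{name}: +{increase:.0f}%")
```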

    I think the success (or failure!) of future die shrinks will have a bigger impact on GPU performance than consumer CPUs, at least for the next few years.
     
  20. elkosith

    elkosith Maha Guru

    Messages:
    1,414
    Likes Received:
    12
    GPU:
    Nvidia 840M
    I still remember when 0.13 micrometers was the feature size
     
