6-Core Intel processors going mainstream in 2018 with Coffee Lake

Discussion in 'Frontpage news' started by Hilbert Hagedoorn, Jul 21, 2016.

  1. Backstabak

    Backstabak Master Guru

    Messages:
    714
    Likes Received:
    301
    GPU:
    Gigabyte Rx 5700xt
    Although you are of course right about Intel milking its customers, the technological limitations aren't just an excuse. The main problem is the low hole mobility in silicon, which is a physical limitation and can't be solved in silicon transistors. That's also the reason there isn't a 10 GHz CPU; well, that and the terrible efficiency.

    Another thing is that we can't really shrink the CPUs anymore. The distance between two atomic layers is about 1.5 nm (which would be a hard limit for any shrinking), and there are severe problems way before that, so I doubt we'll go anywhere below 10 or 8 nm.

    The only thing to look forward to is a shift from silicon to some other material, like GaAs, where it could be possible to clock CPUs effectively higher (into the tens of GHz).
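    As a rough illustration of the efficiency wall mentioned above, here is a minimal sketch using the standard dynamic-power relation P ≈ C·V²·f. The baseline clock, voltage, and wattage are hypothetical round numbers, and the assumption that voltage must rise linearly with frequency is a deliberate simplification:

```python
# Dynamic CPU power scales as P = C * V^2 * f.
# Hypothetical baseline: 4 GHz at 1.2 V drawing 95 W.
base_f, base_v, base_p = 4.0, 1.2, 95.0  # GHz, volts, watts

def dynamic_power(f_ghz, v):
    """Scale the baseline power by (V/V0)^2 * (f/f0)."""
    return base_p * (v / base_v) ** 2 * (f_ghz / base_f)

# If voltage had to rise linearly with frequency, a 10 GHz part
# would need roughly 3.0 V (hypothetical) and would dissipate:
f = 10.0
v = base_v * (f / base_f)
print(round(dynamic_power(f, v)))  # 1484 (watts)
```

    Even under these generous assumptions the projected power lands in kilowatt territory, which is the "terrible efficiency" point in numbers.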
     
  2. angelgraves13

    angelgraves13 Ancient Guru

    Messages:
    2,274
    Likes Received:
    707
    GPU:
    RTX 2080 Ti FE
    Don't worry, Tea Lake is coming up after this lol

    I guess a 12c/24t part will be the next best thing in 2018.
     
    Last edited: Nov 20, 2016
  3. robintson

    robintson Master Guru

    Messages:
    423
    Likes Received:
    113
    GPU:
    Asus_Strix2080Ti_OC
    All thanks to "Zen", we are starting to see movement and news from Intel about their new line of CPUs made on a 10 nm process. Until now, Intel was sleeping and didn't really care at all about investing $$$ in a new die-shrink silicon process. Let's hope that AMD will outperform Intel's i7 CPUs; they deserve it.
     
  4. chispy

    chispy Ancient Guru

    Messages:
    8,930
    Likes Received:
    1,124
    GPU:
    RX 6900xt / RTX3090
    Q2 2018 is far, far away... In my opinion, those like me looking for an upgrade to a CPU that is not a 4-core won't wait that long. If Zen is as good as it sounds, it makes perfect sense to me to upgrade to a Zen SR7. In my case, if AMD fails to deliver, I will have no choice other than to go with an Intel 8-core CPU. Rumor has it that Intel will have a 6-core desktop CPU with Coffee Lake in Q2 2018, along with the 6-core mobile CPUs.
     

  5. Reqruiz

    Reqruiz Active Member

    Messages:
    70
    Likes Received:
    26
    GPU:
    Vega 56 Pulse
    Well... hello there. I'm using the IGP of my i7 for my second monitor. :eyes:
     
  6. JOHN30011887

    JOHN30011887 Member Guru

    Messages:
    147
    Likes Received:
    29
    GPU:
    MSI RTX 2080
    Ah well, I'll be keeping my i7 4790K for longer then, as I'll need a new motherboard and RAM for the new CPU, which had better be 4 GHz or higher with the six cores.
     
  7. Mufflore

    Mufflore Ancient Guru

    Messages:
    12,494
    Likes Received:
    1,154
    GPU:
    Aorus 3090 Xtreme
    These are labelled mainstream CPUs; the TDPs are much too low for high-clocked versions.
    This isn't about the K series.
    We might have to wait a bit longer.
     
  8. Neo Cyrus

    Neo Cyrus Ancient Guru

    Messages:
    9,607
    Likes Received:
    511
    GPU:
    Asus TUF 3080 OC
    A year after AMD makes mainstream 8-core/16-thread CPUs that are close clock for clock... This isn't something to be happy about. By the time these Intel 6-core CPUs roll around, AMD will have refined Zen and released Zen+ or whatever they're going to call it.
    Why does anyone consider a 6700K? The price difference between a 6700K and a 6600K is bigger than the difference between the 6700K and the 5820K.

    If you're going to blow that much cash on Hyper-Threading (which doesn't do much), you might as well go for the 5820K, considering that, unlike previous X-series boards, there are affordable X99 boards.

    Even then, why? Zen is 2 months away.
     
    Last edited: Nov 20, 2016
  9. GeniusPr0

    GeniusPr0 Maha Guru

    Messages:
    1,252
    Likes Received:
    15
    GPU:
    RTX 3060 Ti FE
    Good for notebooks; a yawn for desktops. I'll stick with my Xeons. A 2690 v4 is a 3 GHz+ 14-core. Why bother with anything else?
     
  10. schmidtbag

    schmidtbag Ancient Guru

    Messages:
    5,586
    Likes Received:
    2,089
    GPU:
    HIS R9 290
    Considering your profile actually states you have this CPU, I can't tell if you're kidding or serious yet woefully clueless.
     

  11. Corbus

    Corbus Ancient Guru

    Messages:
    2,446
    Likes Received:
    60
    GPU:
    Reference 6900 XT
    Smart if you do that.
     
  12. PrMinisterGR

    PrMinisterGR Ancient Guru

    Messages:
    7,465
    Likes Received:
    495
    GPU:
    Sapphire 7970 Quadrobake
    Reading the thread, I would say that all of us are correct in saying that Intel will be limited by physics very soon.

    There is also the issue of the designs themselves. No matter the canvas (32, 28, 14 nm, etc.), the design has remained largely the same. Everything out of Intel has really been a Sandy Bridge iteration. My guess is that the clock-speed limit has much more to do with the design itself than with the node it's made on. Within reasonable voltage and temperature parameters, 32 nm Sandys and 14 nm Skylakes all seem to top out at around 4.4-4.8 GHz for daily use. If you remove the faster DDR4 from the equation and maintain similar clocks, an equivalent Sandy is not that much slower than a Skylake. All things considered, it seems like the pathways themselves are more of an issue than the process node.

    Intel's fab people are ahead of everyone else, but things seem to be quite stale in the CPU department.
     
  13. schmidtbag

    schmidtbag Ancient Guru

    Messages:
    5,586
    Likes Received:
    2,089
    GPU:
    HIS R9 290
    Yes, this has been proven several times in many different ways. Take GPUs, for example: when you compare transistor counts with an Intel CPU, the GPU may end up having more FLOPS. An HD 6950 has roughly the same transistor count as an 8-core i7-5960X (somewhere around 2.5 billion, more or less). The i7 operates at 354 GFLOPS. The 6950... 2,252.8 GFLOPS. The 6950 is much older, has a lower frequency, and a crappier design overall, yet it can handle over 6x the calculations.
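    The peak-throughput figures above can be reconstructed from published specs. A sketch using the standard theoretical-peak formulas; the CPU estimate assumes double precision with AVX2 FMA at the 3.0 GHz base clock, which lands near, though not exactly on, the ~354 GFLOPS quoted above:

```python
# Theoretical peak GFLOPS = execution units * FLOPs per clock per unit * clock in GHz.

def gpu_peak_gflops(shaders, clock_ghz, ops_per_clock=2):
    # Each stream processor can issue a fused multiply-add = 2 FLOPs per clock.
    return shaders * ops_per_clock * clock_ghz

def cpu_peak_gflops(cores, clock_ghz, flops_per_cycle):
    # Haswell-E with AVX2+FMA: 16 double-precision FLOPs per cycle per core.
    return cores * clock_ghz * flops_per_cycle

# HD 6950: 1408 stream processors at 0.8 GHz
print(round(gpu_peak_gflops(1408, 0.8), 1))  # 2252.8 -- matches the quoted figure

# i7-5960X: 8 cores at 3.0 GHz base, 16 DP FLOPs/cycle
print(cpu_peak_gflops(8, 3.0, 16))  # 384.0 -- in the ballpark of the quoted 354
```

    The roughly 6x gap holds either way: a sea of simple, wide execution units beats a handful of complex cores on raw arithmetic throughput.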

    Obviously, a GPU and a CPU can't be compared directly, hence the differentiation. But the point of bringing this up is that there is always a way to handle more calculations more efficiently. The ultimate issue really comes down to x86 itself. So much software is heavily dependent on this relatively obsolete architecture. AMD's Bulldozer was a pretty solid architecture, but it failed because it required software to behave in a specific way. There isn't much headroom to improve x86 without demanding that developers change their ways. But if we were to change x86 that much, we might as well move to a completely new architecture. So instead, developers focus on offloading the work that x86 CPUs can't do onto GPUs. GPUs are getting insanely complex; they now have triple the transistor count of the average i7.
     
    Last edited: Nov 21, 2016
  14. Undying

    Undying Ancient Guru

    Messages:
    14,588
    Likes Received:
    3,758
    GPU:
    Aorus RX580 XTR 8GB
    What does all this mean for the i5s? Could it be that we'll finally see a six-core i5 in 2018?
     
  15. PrMinisterGR

    PrMinisterGR Ancient Guru

    Messages:
    7,465
    Likes Received:
    495
    GPU:
    Sapphire 7970 Quadrobake
    Unless Zen really stirs up Intel, I can't see it happening, to be honest.
     

  16. Mufflore

    Mufflore Ancient Guru

    Messages:
    12,494
    Likes Received:
    1,154
    GPU:
    Aorus 3090 Xtreme
    That is what it spells out: they are saying mainstream CPUs will be 6-core.
    I can't see the i3 being 6-core, so it will be i5 and i7 (unless they are very low-clocked 6-core i3s).
    They won't be performance chips; those are not mainstream, and the quoted TDPs are too low.
     
  17. schmidtbag

    schmidtbag Ancient Guru

    Messages:
    5,586
    Likes Received:
    2,089
    GPU:
    HIS R9 290
    To be fair, I'd say the term "mainstream" for an i7 is a bit clickbaity. To my recollection, the i7 has always been high-end or enthusiast grade, the i5 mainstream, and the i3 low-end.

    Even a quad core i7 isn't mainstream. Hell, even a mobile dual core i7 technically isn't mainstream.

    Regardless, I don't suspect Intel will offer a 6-core i5 any time soon. They have 6-core Xeons without HT that are reasonably priced (but unfortunately don't offer overclocking, to my knowledge).

    I'd have probably bought an Intel CPU years ago if they offered an overclockable 6-core i5.
     
  18. Neo Cyrus

    Neo Cyrus Ancient Guru

    Messages:
    9,607
    Likes Received:
    511
    GPU:
    Asus TUF 3080 OC
    You're going to enjoy any upgrade, even at 4.5GHz your CPU is a bottleneck in some situations. Yeah, even to a 290. Just 2 more months.
     
  19. VENGEANCE

    VENGEANCE Member Guru

    Messages:
    172
    Likes Received:
    0
    GPU:
    GALAX GTX 1070 HOF @2164
    A little too late, Intel... you were greedy all these years... don't pretend to care about people now.
     
  20. Dburgo

    Dburgo Guest

    Setting aside that guy's thread about endgame scenarios (which involve a kind of processing far more exotic than what consumers really need), until we get out of x86 and into something completely different, Intel has NO REASON to dump money into R&D to do that. Powerful computing like Deep Blue and so on is not affordable yet. So Intel will continue to give us changes like Coffee Lake (which in my imagination would look like a big lake full of ****).

    We all know there won't be any switch to RISC or EPIC computing at a large enough consumer scale. So what else could there even be? Even so, recompiling the code for software already available would be a great big mess, if not expensive for the companies making it. Obviously nothing imaginable has been able to make the change; EPIC tried and failed. I think nothing probably ever will. But I am not a physicist or anyone with the imagination to think of something like that anyway. Whatever could change things would have to be powerful enough to be fast, use the same instructions with backward compatibility for the high-end demands of CISC processing, and also prove to be faster, with some new way of processing that software developers could use to handle instructions far more quickly, powerfully, and robustly than CISC. How else could we even get to some new architecture if everything has to be recompiled for it?

    I will say, my 6-core 6800K is okay speed-wise, but I just want more PCIe lanes without the premium cost of what is basically a server-type chip. Watching the software I use daily and watching my CPU usage at the thread level, the CPU seems to be ahead of the majority of the software I use, so no complaints there. My disappointment after all these years of computing is that AMD is in a position where it barely competes and won't beat Intel financially or technologically now. And yes, Intel is pocketing money for other ventures, milking its lead over AMD with close-to-marginal spec bumps on their CPUs, and selling on brand recognition and technologies that are probably gimped versions of the full spectrum of what they could advance. Intel will definitely keep doing so until something alien and consumer-demanded changes that course for PCs. What would you do if you were Intel?

    IBM is working on something, and their approach is discussed here: http://www.nytimes.com/2014/08/08/s...-is-designed-to-work-like-the-brain.html?_r=0 but I haven't heard anything about it on any tech hardware review site yet.
     
    Last edited by a moderator: Nov 21, 2016
