Did Intel get lazy after Sandy Bridge? How fast would CPUs be today if they hadn't?

Discussion in 'Processors and motherboards Intel' started by Eastcoasthandle, Sep 26, 2018.

  1. Eastcoasthandle

    Eastcoasthandle Guest

    Messages:
    3,365
    Likes Received:
    727
    GPU:
    Nitro 5700 XT
  2. That title cracked me up. I went and grabbed a 7800X the other day to replace a 3930K rig, benchmarked it in Cinebench, and it barely scored higher, so I returned it. I'm not looking to get into the restricted PCI-E lanes either, plus the hack that allowed PCI-E 3.0 on the 3930K, so I switched to a 2700 / X470. I lost quad-channel memory, but I'm still getting higher scores in the synthetic / content-creation tests I was running against the 7800X. I might swap for a 2700X if I can't get the 2700 to OC as high, but so far it's holding, despite the lower TDP and all.

    Couldn't be happier. The last "great" Intel build I had was a 6900K / X99 rig, and even though the cost didn't justify it, it was great: I overclocked the crap out of that system and it ran like butter until a bunk PSU fried the whole thing. So now I'm happy with the 2700. The cost justifies the setup: it handles my 3DS / Maya / Adobe needs, I do some light gaming on it, and I can't say my old X99 platform's cost was really justified by comparison. I do miss that platform though; Intel was king then.

    Honestly, aside from what I said above, the only other option is the 8700K, except it's pretty much sold out everywhere, which says enough about that subject. Too bad it isn't an 8-core for content creators; that's the problem with Intel. That, and their HT/ME security risks, which have me standoffish for the time being.

    Rant-out...
     
  3. nhlkoho

    nhlkoho Guest

    Messages:
    7,755
    Likes Received:
    366
    GPU:
    RTX 2080ti FE
    Where do you live that the 8700K is sold out everywhere? Every online retailer I've seen has plenty in stock, and my local Microcenter has hundreds of them.
     
  4. RzrTrek

    RzrTrek Guest

    Messages:
    2,548
    Likes Received:
    741
    GPU:
    -
    They didn't get lazy, just overconfident (relying on aging standards), and then they got surpassed by AMD in the budget and mainstream segments.

    Despite the hefty price tags attached to their CPUs, Intel is still ahead when it comes to high-refresh-rate gaming.

    With all that said, however, I will not buy something that will be superseded by annual socket changes just to satisfy their shareholders.
     
    Last edited: Oct 4, 2018

  5. mbk1969

    mbk1969 Ancient Guru

    Messages:
    15,536
    Likes Received:
    13,556
    GPU:
    GF RTX 4070
    Well, I switched from a 3820 to a 4930K back then (better PCI-E, better memory controller, better temperatures, ...) and I still have no problems with games, whether Intel was lazy or not.
     
    The1 likes this.
  6. In my local area it's been hard to find at brick & mortar stores up until today; Fry's has been out of it, etc. I wanted to buy it at a local B&M store instead of online to have the in-person return option, as opposed to dealing with shipping to an e-tailer. I just checked today and my local Fry's got some in stock! I'm thinking about swapping my Ryzen 7 2700 for an 8700K, but I'm not sure with that whole Management Engine exploit story looming. Sure, AMD may have some undocumented issue we're all unaware of too, but it just has me unnerved...

    EDIT: Man, I totally miss living in NYC. I loved my local Microcenter in Brooklyn. Fry's is kinda similar, but IMO Microcenter is totally king...
     
    Last edited by a moderator: Oct 4, 2018
  7. Gripen90

    Gripen90 Guest

    Messages:
    869
    Likes Received:
    21
    GPU:
    2x RTX 2080Ti SLi
    I have a Core i7 5960X, Core i7 6900K, Core i7 6950X and a Ryzen 7 2700X, and the latter has surprised me a lot with its performance compared to the 5960X and 6900K in multitasking. Even running stock, the 2700X matches both 8-core Intel CPUs when they're running at 4.2 GHz, and even the single-core performance is almost 1:1 with them.
     
    jura11 likes this.
  8. user1

    user1 Ancient Guru

    Messages:
    2,746
    Likes Received:
    1,279
    GPU:
    Mi25/IGP
    while "laziness" is a factor , I wouldn't necessarily say it would have turned out any different , few things, after netburst intel decided to focus on power efficiency vs raw performance , by doing so they managed core 2 probably the most impressive jump in recent history, they aimed for lower clocks ( and thus lower power)with higher ipc vs maximum frequency, and it paid off, after sandy bridge the focus shifted, rather than build a better core ( which at the time they already had really high performance) they began focusing more on parallelism, focusing on more threads, you see this with ivybridge ,haswell ,broadwell bringing 15 , 18 and 22 cores per socket instead of the slower jump from 6 to 10 cores from dunnington to westmere and actually a step back to 8 with sandybridge, aswell as the introduction of tsx ( which helps with heavy multithreading overhead and first shipped with haswell) ,the xeon phi, and improved smt on skylake.

    In fact this is not only an Intel thing, it's industry-wide: AMD with the lackluster many-core Bulldozer chips, and IBM with their four-way and later eight-way SMT (8 threads per core) POWER8 chips with increased core counts (96 threads per CPU).

    On the desktop side there was poor competition in the mainstream; the only market where AMD actually had competitive products was the low end with integrated graphics. You can see this shift in focus in how much die area the graphics take versus the CPU cores with each generation after Sandy Bridge in desktop products.

    As for the recent stagnation from 2015 onward, that is more down to incompetent management and node delays. The designs for Cannon Lake were probably finished by the time Skylake launched and will never see the light of day (other than the token mobile chip), and Ice Lake's by the end of 2016. As soon as Intel started having problems with 10nm they should have had a backup plan, like porting Ice Lake back to 14nm. Instead they put all their bets on 10nm and on AMD being unable to recover, and now they find themselves up a river without a paddle, only able to push their existing, now 4-5-year-old tech to stave off AMD's very fast advance. They needed to launch Ice Lake this week instead of "Skylake 3.0, more cores edition", but instead they will play catch-up, one step behind. AMD has got them off cadence: when AMD launches a new product it takes Intel 6-7 months to respond. Not a good place to be.
     
  9. Eastcoasthandle

    Eastcoasthandle Guest

    Messages:
    3,365
    Likes Received:
    727
    GPU:
    Nitro 5700 XT
    At the end of the day Intel is still outperforming AMD in the desktop CPU segment.
    I do agree they rested on their laurels and AMD has caught up. But until AMD produces a CPU that's more efficient and has an IPC greater than or equal to Intel's, it's all academic IMO.
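
    Back-of-the-envelope on why: single-thread performance is roughly IPC times clock, so an IPC deficit can be masked or exposed by frequency. A toy sketch in C with made-up numbers (nothing here is a measured figure):

        #include <stdio.h>

        /* Toy model: single-thread performance ~ IPC x clock (GHz). */
        int main(void)
        {
            double chip_a = 1.00 * 4.7;   /* assumed relative IPC 1.00 at 4.7 GHz */
            double chip_b = 0.97 * 4.3;   /* assumed relative IPC 0.97 at 4.3 GHz */
            printf("single-thread ratio: %.2f\n", chip_a / chip_b);   /* roughly 1.13 */
            return 0;
        }

    Close the IPC gap and the clock advantage is all that's left, which is why those two issues are the whole ballgame.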

    We've yet to see what 7nm will bring to AMD's Rome uarch. If AMD has figured those two issues out, it will usher in the Athlon days of old.

    No one in their right mind will recommend Intel when an AMD CPU/motherboard (490?) is cheaper than Intel overall yet performs the same or better, while offering more, like a higher PCIe lane count, which lets some people run dual GPUs and 2-3 M.2 NVMe drives, among other things you can't do on an Intel setup.
     
    Last edited: Oct 12, 2018
  10. mbk1969

    mbk1969 Ancient Guru

    Messages:
    15,536
    Likes Received:
    13,556
    GPU:
    GF RTX 4070
    Do you think multithreading hasn't been available for the last two decades? Programmers don't need more cores in order to use multithreading. It is not an easy thing to implement scalable multithreaded code.
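
    To illustrate, here's Amdahl's law in a few lines of C (my own toy sketch, assuming a 10% serial fraction just for the sake of the example):

        #include <stdio.h>

        /* Amdahl's law: speedup on n threads is capped by the serial fraction s. */
        static double speedup(double s, int n)
        {
            return 1.0 / (s + (1.0 - s) / n);
        }

        int main(void)
        {
            double s = 0.10;   /* assume 10% of the work cannot be parallelised */
            for (int n = 2; n <= 32; n *= 2)
                printf("%2d threads -> %.1fx speedup\n", n, speedup(s, n));
            return 0;
        }

    Even with 32 threads that 10% serial part limits you to roughly an 8x speedup, which is why throwing cores at code that doesn't scale buys so little.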
     
    HandR, Maddness and tunejunky like this.

  11. tunejunky

    tunejunky Ancient Guru

    Messages:
    4,345
    Likes Received:
    2,988
    GPU:
    7900xtx/7900xt
    Very true.
    But there's also the point that programmers tend to go with the first workable solution even if it isn't the "best", which keeps programmers employed, since they spend a large percentage of their time correcting bad or lazy code.
    And software corporations (until the "cloud") prized backwards compatibility, since the majority of computers in business run old OSes, so there's a real incentive to favor single- or dual-thread performance.
    But it all ends up keeping them old-fashioned until their competition does something better (in the way of SMT/HT).
     
