Alleged Intel Core i9-12900K beats AMD Ryzen 9 5950X with Cinebench R20 (bigtime)

Discussion in 'Frontpage news' started by Hilbert Hagedoorn, Jul 21, 2021.

  1. Airbud

    Airbud Master Guru

    Messages:
    769
    Likes Received:
    1,249
    GPU:
    PNY GTX 1060 XLR8
    Hey, that's me without coffee in the morning

    [IMG]

    :D
     
    Venix, Maddness and DannyD like this.
  2. mackintosh

    mackintosh Master Guru

    Messages:
    483
    Likes Received:
    225
    GPU:
    Aorus 2080Ti
    Thing is, Conroe wasn't all that much better than Pressler in most synthetic and real-world productivity benchmarks. It excelled at gaming. But day to day things? 3DMark06? 10-15%.

    Bunch of nonsense. Time to start taking brain supplements.
     
    Last edited: Jul 22, 2021
  3. Denial

    Denial Ancient Guru

    Messages:
    13,511
    Likes Received:
    3,045
    GPU:
    EVGA RTX 3080
    You're definitely misremembering.

    https://www.anandtech.com/show/2045/8
     
    Airbud and mackintosh like this.
  4. mackintosh

    mackintosh Master Guru

    Messages:
    483
    Likes Received:
    225
    GPU:
    Aorus 2080Ti
    Well, what do you know? Weird, maybe it was the early engineering sample benchmarks that I remembered. It's been a while and I'm old. Thanks for correcting me :)
     
    Airbud likes this.

  5. alanm

    alanm Ancient Guru

    Messages:
    10,483
    Likes Received:
    2,592
    GPU:
    Asus 2080 Dual OC
    Prescott, not Pressler. I remember the Conroe days very clearly. When the initial performance leaks appeared a few months before release, people were saying there's no way it's that good. The clincher was that a $200-300 chip was destroying AMD's $700 parts (the FX chips).

    AMD's equivalent, in terms of performance gains over the previous generation, was Zen 3.
     
  6. mackintosh

    mackintosh Master Guru

    Messages:
    483
    Likes Received:
    225
    GPU:
    Aorus 2080Ti
    Presler. I was referring to the 65nm Pentium D. Prescott was single core/thread and much older (2004?). Smithfield/Presler were dual core; Presler was released in early 2006, just months before Conroe. I had all of them, which is probably why it's all becoming a blur. Though since history repeating itself is a thing, perhaps Alder Lake will be just that: the next Conroe. We can hope.
     
    Last edited: Jul 22, 2021
  7. alanm

    alanm Ancient Guru

    Messages:
    10,483
    Likes Received:
    2,592
    GPU:
    Asus 2080 Dual OC
    OK, now I remember it. Presler was short-lived; that's why it's not at the top of many people's minds. Yes, it was the first dual core, a hot and hungry chip that wasn't that good to begin with. Conroe set things right with a proper dual core.
     
    Airbud likes this.
  8. Airbud

    Airbud Master Guru

    Messages:
    769
    Likes Received:
    1,249
    GPU:
    PNY GTX 1060 XLR8
    You are correct, except it's Presler with one L.

    I upgraded from a Pentium D to an E6850 and it was like night and day!
     
    DannyD likes this.
  9. user1

    user1 Ancient Guru

    Messages:
    1,695
    Likes Received:
    582
    GPU:
    hd 6870
    Possibly because Conroe was clocked pretty low for most chips; a <2.4 GHz Conroe vs a 3.73 GHz Smithfield/Presler would be pretty close.
    Presler is Cedar Mill (which is a die shrink of Prescott); Smithfield and Presler are just dual-die packaged versions of Prescott and Cedar Mill respectively.
     
    Last edited: Jul 22, 2021
    mackintosh likes this.
  10. Öhr

    Öhr Master Guru

    Messages:
    311
    Likes Received:
    43
    GPU:
    AMD RX 5700XT @ H₂O
    I doubt these numbers as well. I do believe, and hope, that it'll outperform the 5950X in single-threaded performance by a noticeable margin, but this seems a bit much... And beating the 16-core monster 5950X in multithreaded workloads with its P+E setup seems even less believable.

    However, if it is true, it's good news for everyone, including fanboys from either camp: price drops for AMD chips, and finally a noteworthy new architecture after what feels like forever of incremental updates that slap additional cores onto essentially the same design.

    Fair competition is always good and accelerates progress!
     

  11. Richard Nutman

    Richard Nutman Master Guru

    Messages:
    222
    Likes Received:
    89
    GPU:
    Sapphire 5700 XT
    I don't really think that equates to running better.
    You're talking about differences of nanoseconds between cores; that isn't going to translate into many more frames in the millisecond realm.
    And even if the latency were better on the little cores, the big cores will still execute the game engine faster anyway, as they run at higher speeds and have higher IPC.

    Yes, but the Golden Cove IPC improvements come mostly from architectural changes and being able to retire more instructions than the Skylake architecture. There is no additional programming required to take advantage of this.
     
  12. schmidtbag

    schmidtbag Ancient Guru

    Messages:
    6,083
    Likes Received:
    2,420
    GPU:
    HIS R9 290
    It makes more of a difference than you think, considering Ryzen's latency issues are what really held it back in benchmarks for the first two generations.
    For people who are anal about framerate and care about getting into the hundreds, it makes a difference. If you just want a game to have a good framerate, even an FX-series CPU will get the job done.
    The whole point of my post was to say that the little cores aren't slower just because they're simpler.
    As I've stated multiple times already:
    What I said assumes the little cores can be pushed faster.
    IPC won't improve performance if the task at hand doesn't utilize all the instructions.
    You're taking this a bit too seriously for what is mostly just theory. I'm sure the little cores are missing enough instructions that modern AAA titles either simply won't run on them, or will have to use up more cycles to compensate for the missing instructions, thereby negating any advantage they had. These cores are not built to run complex foreground tasks.
    And I'm sure the little cores share many of the same changes, in which case that point is moot. It wouldn't make sense for the core architecture of the little cores to be different.
     
  13. BlindBison

    BlindBison Master Guru

    Messages:
    862
    Likes Received:
    180
    GPU:
    RTX 2080 Super
    Zen 2/Zen 3's high core/thread count CPUs do run surprisingly hot, but yeah, from what I've read they're just designed that way to some extent.
     
  14. BlindBison

    BlindBison Master Guru

    Messages:
    862
    Likes Received:
    180
    GPU:
    RTX 2080 Super
    It's the same type of design we've seen in the smartphone market where they do big little. Yes, it's a power saving feature. The idea is you have the power efficient lower performance cores handle mundane tasks like web browsing/video streaming, etc then when you load up a game the big boy cores kick in.

    One thing I'm wondering is whether both the little and big cores can be active simultaneously for demanding workloads. My confusion is this: for a laptop, tablet, or phone I understand this type of design, but for a desktop tower plugged into the wall it seems an odd choice, since at that point wouldn't you really only care about the big performant cores, as power draw/battery life is less of a pressing issue? That's what Ryzen is doing currently and what Intel has traditionally done, so it's interesting to see this switch.

    Perhaps there's something I'm missing there though. I do hope that in demanding tasks both the big and little cores can be used/active simultaneously.
     
  15. fantaskarsef

    fantaskarsef Ancient Guru

    Messages:
    12,432
    Likes Received:
    4,705
    GPU:
    2080Ti @h2o
    Power saving also means less heat. Imagine this CPU with a full chip of big cores running at their intended power... you couldn't cool that chip. This one is already water-cooled to get those benchmark numbers (see OP).

    An all-big-core version probably needs a chiller or a really good custom loop to even run without throttling... the truth behind it? They probably couldn't have made a chip with only big cores that they could sell, as it would never run reasonably well under air cooling.

    They simply had to use lower-power cores to even make that thing usable under air, I guess. :rolleyes:
     

  16. user1

    user1 Ancient Guru

    Messages:
    1,695
    Likes Received:
    582
    GPU:
    hd 6870
    10nm isn't that bad on power at lower frequencies (<4.5 GHz), so a 12-core chip is doable. The problem comes from the fact that Intel will also be putting this silicon in laptops, and 10nm SuperFin is simply inferior for mobile parts. Tiger Lake-H vs Cezanne (the Zen 3 APU) shows this: at 35W the Ryzen chips are able to compete with 45W TGL chips. The only way for Intel to get power consumption low enough to compete is to use a more efficient CPU core; that's why the Atom cores are there. However, it's a pretty tough sell even with Alder Lake, since the iGPU performance is likely to be inferior, like Tiger Lake-H, unless they increase the total die size significantly.
     
  17. TheDigitalJedi

    TheDigitalJedi Ancient Guru

    Messages:
    1,856
    Likes Received:
    165
    GPU:
    RTX 3090 + CX OLED
    Hey Denial. You are a blast from the past. :D I'm glad to see you're still around and doing your thing. I was about to jump in but the points are already covered.

    And by the way to all of you.....Core 2 Duo FTW! :p
     
    barbacot likes this.
  18. barbacot

    barbacot Master Guru

    Messages:
    557
    Likes Received:
    478
    GPU:
    Asus 3080 Strix OC
    It was one of the greatest CPUs I ever owned... that's why it stayed in my memory. The E6600...
    After the initial tests on my PC, it felt like the angels had sounded their trumpets...
     
    TheDigitalJedi likes this.
  19. sykozis

    sykozis Ancient Guru

    Messages:
    21,934
    Likes Received:
    1,124
    GPU:
    MSI RX5700
    For those saying that the "leaked" gains aren't possible or are improbable... Is this supposed to be another "optimization" plus small cores, or an entirely new architecture? That makes a big difference as to what's possible.

    There's no way to do an "apples to apples" comparison between an M1-based product and a product using an Intel or AMD CPU, which makes any direct comparison impossible.

    We must have been watching different forums. This forum has had an Intel/NVidia bias for years, from the announcement of Conroe up until Zen 3...

    As for contributions, I see plenty. Maybe if you try to be a little less negative... or a little less Intel-shill-like, you'd notice it too.

    Core 2 Duo, aka Conroe, was also a completely different architecture from its predecessor... It's pretty common to see large gains from new architectures. It's not really common to see the same gains from architectural optimizations.
     
  20. mackintosh

    mackintosh Master Guru

    Messages:
    483
    Likes Received:
    225
    GPU:
    Aorus 2080Ti
    People have a tough time believing these gains because neither Sandy Bridge nor Haswell nor Skylake blew anyone away. Intel hasn't made these kinds of gains since... Core and Nehalem. To quote a meme, "it's been 84 years".
     

Share This Page