Alleged Intel Core i9-12900K beats AMD Ryzen 9 5950X in Cinebench R20 (big time)

Discussion in 'Frontpage news' started by Hilbert Hagedoorn, Jul 21, 2021.

  1. mackintosh

    mackintosh Maha Guru

    Messages:
    1,162
    Likes Received:
    1,066
    GPU:
    .
    Thing is, Conroe wasn't all that much better than Pressler in most synthetic and real-world productivity benchmarks. It excelled at gaming, but day-to-day things? 3DMark06? 10-15%.

    Bunch of nonsense. Time to start taking brain supplements.
     
    Last edited: Jul 22, 2021
  2. Denial

    Denial Ancient Guru

    Messages:
    14,206
    Likes Received:
    4,118
    GPU:
    EVGA RTX 3080
    You're definitely misremembering.

    https://www.anandtech.com/show/2045/8
     
    Airbud and mackintosh like this.
  3. mackintosh

    mackintosh Maha Guru

    Messages:
    1,162
    Likes Received:
    1,066
    GPU:
    .
    Well, what do you know? Weird, maybe it was the early engineering sample benchmarks that I remembered. It's been a while and I'm old. Thanks for correcting me :)
     
    Airbud likes this.
  4. alanm

    alanm Ancient Guru

    Messages:
    12,234
    Likes Received:
    4,436
    GPU:
    RTX 4080
    Prescott, not Pressler. I remember the Conroe days very clearly. When the initial performance leaks appeared a few months before release, people were saying there was no way it was that good. The clincher was that a $200-300 chip was destroying AMD's $700 parts (the FX chips).

    AMD's equivalent, in terms of performance gains over the previous gen, was Zen 3.
     

  5. mackintosh

    mackintosh Maha Guru

    Messages:
    1,162
    Likes Received:
    1,066
    GPU:
    .
    Presler. I was referring to the 65nm Pentium D. Prescott was single core/thread and much older (2004). Smithfield and Presler were the dual cores; Presler was released in early 2006, just months before Conroe. I had all of them, which is probably why it's all becoming a blur. Though since history repeating itself is a thing, perhaps Alder Lake will be just that - the next Conroe. We can hope.
     
    Last edited: Jul 22, 2021
  6. alanm

    alanm Ancient Guru

    Messages:
    12,234
    Likes Received:
    4,436
    GPU:
    RTX 4080
    OK, now I remember it. Presler was short-lived, that's why it's not at the top of many people's minds. It was a hot and hungry chip that wasn't that good to begin with. Conroe set things right with a proper dual core.
     
    Airbud likes this.
  7. Airbud

    Airbud Ancient Guru

    Messages:
    2,575
    Likes Received:
    4,080
    GPU:
    XFX RX 5600XT
    You are correct, except it's Presler with one S.

    I upgraded from a Pentium D to an E6850 and it was like night and day!
     
    DannyD likes this.
  8. user1

    user1 Ancient Guru

    Messages:
    2,746
    Likes Received:
    1,279
    GPU:
    Mi25/IGP
    It's possible; Conroe was clocked pretty low for most chips, so a <2.4 GHz Conroe vs a 3.73 GHz Smithfield/Presler would be pretty close.
    Cedar Mill is a die shrink of Prescott; Smithfield and Presler are just dual-die packaged versions of Prescott and Cedar Mill, respectively.
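    A quick sanity check on the clock-vs-IPC point, treating performance as roughly IPC x clock. This is a sketch only: the 1.5x IPC figure for Conroe over NetBurst is an assumed ballpark, not a sourced number.

```c
/*
 * Rough sanity check, treating performance as roughly IPC x clock.
 * The 1.5x IPC advantage for Conroe over NetBurst is an assumed
 * ballpark figure for illustration, not a sourced number.
 */
#include <stdio.h>

int main(void) {
    double conroe_ghz  = 2.4;
    double presler_ghz = 3.73;
    double ipc_ratio   = 1.5;   /* assumed Conroe vs NetBurst IPC */

    /* express Conroe as a NetBurst-equivalent clock */
    double equivalent_ghz = conroe_ghz * ipc_ratio;
    printf("2.4 GHz Conroe ~ %.2f GHz NetBurst-equivalent (Presler: %.2f GHz)\n",
           equivalent_ghz, presler_ghz);
    return 0;
}
```

    With those assumed numbers, a 2.4 GHz Conroe lands at roughly a 3.6 GHz NetBurst equivalent, which is indeed in the same neighborhood as a 3.73 GHz Presler.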
     
    Last edited: Jul 22, 2021
    mackintosh likes this.
  9. Öhr

    Öhr Master Guru

    Messages:
    324
    Likes Received:
    65
    GPU:
    AMD RX 5700XT @ H₂O
    I doubt these numbers as well. I do believe, and hope, that it'll beat the 5950X in single-threaded performance by a noticeable margin, but this seems a bit much... And multithreaded, with its P+E setup, beating the 16-core monster 5950X seems even less believable.

    However, if it is true, it's good news for everyone, including fanboys from either camp: price drops for AMD chips, and finally a noteworthy new architecture after years of incremental updates that slapped additional cores onto essentially the same design.

    Fair competition is always good and accelerates progress!
     
  10. Richard Nutman

    Richard Nutman Master Guru

    Messages:
    268
    Likes Received:
    121
    GPU:
    Sapphire 7800XT
    I don't really think that equates to running better.
    You're talking differences of nanoseconds between cores; that isn't going to translate into many more frames in the millisecond realm.
    And even if latency were better on the little cores, the big cores will still execute the game engine faster anyway, since they run at higher clocks and have higher IPC.
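    Quick arithmetic makes the ns-vs-ms point concrete. The figures below are assumed round numbers, not measurements:

```c
/*
 * Back-of-the-envelope check on the ns-vs-ms point above. The 70 ns
 * core-to-core latency and 5,000 cross-core hops per frame are assumed
 * round numbers for illustration, not measurements.
 */
#include <stdio.h>

int main(void) {
    double frame_ms   = 1000.0 / 60.0;        /* 60 fps -> ~16.7 ms budget  */
    double hop_ns     = 70.0;                 /* assumed inter-core latency */
    double hops       = 5000.0;               /* assumed syncs per frame    */
    double latency_ms = hops * hop_ns / 1e6;  /* ns -> ms                   */

    printf("frame budget: %.2f ms\n", frame_ms);
    printf("latency cost: %.3f ms (%.2f%% of the frame)\n",
           latency_ms, 100.0 * latency_ms / frame_ms);
    return 0;
}
```

    Even thousands of cross-core hops per frame eat only a few percent of a 60 fps frame budget under these assumptions.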

    Yes, but the Golden Cove IPC improvements come mostly from architectural changes and from being able to retire more instructions than the Skylake architecture could. No additional programming is required to take advantage of this.
     

  11. schmidtbag

    schmidtbag Ancient Guru

    Messages:
    7,973
    Likes Received:
    4,341
    GPU:
    HIS R9 290
    It makes more of a difference than you think, considering Ryzen's latency issues are what really held it back in benchmarks for the first two generations.
    For people who are anal about framerate and care about getting into the hundreds, it makes a difference. If you just want a game to have a good framerate, even an FX-series CPU will get the job done.
    The whole point of my post was to say that the little cores aren't slower just because they're simpler.
    As I stated multiple times already:
    - What I said assumes the little cores can be pushed faster.
    - IPC won't improve performance if the task at hand doesn't utilize all the instructions.
    You're taking this a bit too seriously for what is mostly just theory. I'm sure the little cores are missing enough instructions that modern AAA titles either simply won't run on them, or will have to use more cycles to compensate for the missing instructions, thereby negating any advantage they had. These cores are not built to run complex foreground tasks.
    And I'm sure the little cores share many of the same changes, in which case that point is moot. It wouldn't make sense for the core architecture of the little cores to be different.
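    For what it's worth, missing instructions usually degrade gracefully rather than stopping a game outright: well-built software dispatches at runtime to a fallback path, which costs extra cycles exactly as described above. A minimal C sketch, assuming GCC or Clang on x86; the scenario of a little core without AVX2 is purely hypothetical here, not a claim about what Alder Lake's E-cores support:

```c
/*
 * Hedged sketch: how software usually copes with a core that lacks an
 * instruction set -- runtime dispatch to a fallback path, which costs
 * extra cycles rather than refusing to run. Assumes GCC/Clang on x86;
 * the "little core without AVX2" scenario is illustrative only.
 */
#include <stdio.h>

/* plain loop: runs on any core */
static void sum_scalar(const float *a, const float *b, float *out, int n) {
    for (int i = 0; i < n; i++)
        out[i] = a[i] + b[i];
}

#if defined(__x86_64__) || defined(__i386__)
#include <immintrin.h>

/* vector path: only legal on cores that support AVX2 */
__attribute__((target("avx2")))
static void sum_avx2(const float *a, const float *b, float *out, int n) {
    int i = 0;
    for (; i + 8 <= n; i += 8) {              /* 8 floats per 256-bit op */
        __m256 va = _mm256_loadu_ps(a + i);
        __m256 vb = _mm256_loadu_ps(b + i);
        _mm256_storeu_ps(out + i, _mm256_add_ps(va, vb));
    }
    for (; i < n; i++)                        /* scalar tail */
        out[i] = a[i] + b[i];
}
#endif

int main(void) {
    float a[16], b[16], out[16];
    for (int i = 0; i < 16; i++) { a[i] = (float)i; b[i] = 2.0f * i; }

#if defined(__x86_64__) || defined(__i386__)
    if (__builtin_cpu_supports("avx2")) {     /* checked once at runtime */
        sum_avx2(a, b, out, 16);
        puts("used AVX2 path");
    } else
#endif
    {
        sum_scalar(a, b, out, 16);            /* fallback: more cycles, same result */
        puts("used scalar path");
    }
    printf("out[15] = %.1f\n", out[15]);
    return 0;
}
```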
     
  12. BlindBison

    BlindBison Ancient Guru

    Messages:
    2,414
    Likes Received:
    1,139
    GPU:
    RTX 3070
    Zen 2/Zen 3's high core/thread count CPUs do run surprisingly hot, but yeah, from what I've read they're just designed that way to some extent.
     
  13. BlindBison

    BlindBison Ancient Guru

    Messages:
    2,414
    Likes Received:
    1,139
    GPU:
    RTX 3070
    It's the same type of design we've seen in the smartphone market with big.LITTLE. Yes, it's a power-saving feature. The idea is that the power-efficient, lower-performance cores handle mundane tasks like web browsing and video streaming, and when you load up a game, the big cores kick in.

    One thing I'm wondering is whether both the little and big cores can be active simultaneously for demanding workloads. My confusion is this: for a laptop, tablet, or phone I understand this type of design, but for a desktop tower plugged into the wall it seems an odd choice, since at that point wouldn't you really only care about the big, performant cores, power draw and battery life being less of a pressing issue? An all-big-core design is what Ryzen uses currently and what Intel has done traditionally, so it's interesting to see this switch.

    Perhaps there's something I'm missing there, though. I do hope that in demanding tasks both the big and little cores can be used simultaneously.
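    For what it's worth, to an operating system both clusters are just logical CPUs, so nothing prevents big and little cores from running threads at the same time; the scheduler only decides placement. A minimal sketch, assuming Linux; the CPU numbers used for the P- and E-cores are invented for illustration, not Alder Lake's real topology:

```c
/*
 * Hedged sketch: to the OS, P-cores and E-cores are just logical CPUs,
 * so both clusters can run threads at the same time; the scheduler only
 * decides placement. Assumes Linux (build with: cc -pthread affinity.c).
 * The CPU numbers below (0 = P-core, 8 = E-core) are invented for
 * illustration, not Alder Lake's real topology.
 */
#define _GNU_SOURCE
#include <pthread.h>
#include <sched.h>
#include <stdio.h>

/* some busy work so both threads overlap in time */
static void *spin(void *arg) {
    (void)arg;
    volatile unsigned long x = 0;
    for (unsigned long i = 0; i < 100000000UL; i++)
        x += i;
    return NULL;
}

/* start a thread and restrict it to a single CPU */
static pthread_t start_pinned(int cpu) {
    pthread_t t;
    cpu_set_t set;
    CPU_ZERO(&set);
    CPU_SET(cpu, &set);
    pthread_create(&t, NULL, spin, NULL);
    pthread_setaffinity_np(t, sizeof(set), &set);
    return t;
}

int main(void) {
    pthread_t big    = start_pinned(0);   /* assumed P-core */
    pthread_t little = start_pinned(8);   /* assumed E-core */
    pthread_join(big, NULL);
    pthread_join(little, NULL);
    puts("both core types ran concurrently");
    return 0;
}
```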
     
  14. fantaskarsef

    fantaskarsef Ancient Guru

    Messages:
    15,693
    Likes Received:
    9,572
    GPU:
    4090@H2O
    Power saving also means less heat. Imagine this CPU as a full chip of big cores, all running at their intended power... you couldn't cool that chip. This one is already water cooled to get those benchmark numbers (see OP).

    An all-big-core version would probably need a chiller or a really good custom loop just to run without throttling... the truth behind it? They probably couldn't have made an all-big-core chip they could sell, as it would never run reasonably well under air cooling.

    They simply had to use lower-power cores to even make the thing usable under air, I guess. :rolleyes:
     
  15. user1

    user1 Ancient Guru

    Messages:
    2,746
    Likes Received:
    1,279
    GPU:
    Mi25/IGP
    10nm isn't that bad on power at lower frequencies (<4.5 GHz), so a 12-core chip is doable. The problem comes from the fact that Intel will also be putting this silicon in laptops, and 10nm SuperFin is simply inferior for mobile parts. Tiger Lake-H vs Cezanne (the Zen 3 APU) shows this: at 35 W the Ryzen chips are able to compete with 45 W TGL chips. The only way for Intel to get power consumption low enough to compete is to use a more efficient CPU core; that's why the Atom cores are there. It's still a pretty tough sell, even with Alder Lake, since the iGPU performance is likely to be inferior, like Tiger Lake-H, unless they increase the total die size significantly.
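    A rough sketch of the scaling behind this: dynamic power goes roughly as C*V^2*f, and voltage has to rise with frequency, so power climbs much faster than the clock does. The two operating points below are assumed, illustrative numbers, not measured figures for any real chip:

```c
/*
 * Hedged sketch of the scaling behind this: dynamic power goes roughly
 * as C*V^2*f, and voltage must rise with frequency, so power climbs much
 * faster than the clock does. Both operating points below are assumed,
 * illustrative numbers, not measured Alder Lake figures.
 */
#include <stdio.h>

/* relative dynamic power, capacitance folded into the units */
static double rel_power(double freq_ghz, double volts) {
    return volts * volts * freq_ghz;
}

int main(void) {
    double hi = rel_power(4.5, 1.25);   /* assumed high-clock point      */
    double lo = rel_power(3.0, 0.90);   /* assumed efficiency-sweet-spot */

    printf("4.5 GHz point burns %.1fx the power of the 3.0 GHz point\n",
           hi / lo);
    printf("for only %.1fx the clock speed\n", 4.5 / 3.0);
    return 0;
}
```

    Under these assumptions, the high-clock point burns nearly three times the power for 1.5x the frequency, which is why lower-clocked efficient cores pay off so heavily in a 35-45 W envelope.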
     

  16. TheDigitalJedi

    TheDigitalJedi Ancient Guru

    Messages:
    3,958
    Likes Received:
    3,092
    GPU:
    2X ASUS TUF 4090 OC
    Hey Denial. You are a blast from the past. :D I'm glad to see you're still around and doing your thing. I was about to jump in but the points are already covered.

    And by the way to all of you.....Core 2 Duo FTW! :p
     
    barbacot likes this.
  17. barbacot

    barbacot Master Guru

    Messages:
    996
    Likes Received:
    981
    GPU:
    MSI 4090 SuprimX
    It was one of the greatest CPUs I ever owned... that's why it stayed in my memory. The E6600...
    After the initial tests on my PC, it felt like the angels had sounded their trumpets...
     
    TheDigitalJedi likes this.
  18. sykozis

    sykozis Ancient Guru

    Messages:
    22,492
    Likes Received:
    1,537
    GPU:
    Asus RX6700XT
    For those saying the "leaked" gains aren't possible or are improbable... is this supposed to be another "optimization" plus small cores, or an entirely new architecture? That makes a big difference as to what's possible.

    There's no way to do an "apples to apples" comparison between an M1-based product and a product using an Intel or AMD CPU, which makes any direct comparison impossible.

    We must have been watching different forums. This forum has had an Intel/NVidia bias for years, from the announcement of Conroe up until Zen 3...

    As for contributions, I see plenty. Maybe if you tried to be a little less negative, or a little less Intel-shill-like, you'd notice it too.

    Core 2 Duo, aka Conroe, was also a completely different architecture from its predecessor... It's pretty common to see large gains from new architectures. It's not really common to see the same gains from architectural optimizations.
     
  19. mackintosh

    mackintosh Maha Guru

    Messages:
    1,162
    Likes Received:
    1,066
    GPU:
    .
    People have a tough time believing these gains because neither Sandy Bridge, Haswell, nor Skylake blew anyone away. Intel hasn't made these kinds of gains since... Core and Nehalem. To quote a meme, "it's been 84 years".
     
  20. cucaulay malkin

    cucaulay malkin Ancient Guru

    Messages:
    9,236
    Likes Received:
    5,208
    GPU:
    AD102/Navi21
    IMO it's as normal for Intel shills to say a forum is AMD-biased as it is for AMD shills to say it's Intel/Nvidia-biased.
    That's how it always looks from my perspective.
    Each one thinks he's smarter than the other, while both are the exact same idiot, just on opposite sides of the fence.
    There are 168,000 members, but yeah, one person can unilaterally decide the forum is biased one way or another.
     
    Last edited: Jul 25, 2021
