Review: Core i5 10600K and Core i9 10900K processors

Discussion in 'Frontpage news' started by Hilbert Hagedoorn, May 20, 2020.

  1. metagamer

    metagamer Ancient Guru

    Messages:
    2,596
    Likes Received:
    1,165
    GPU:
    Asus Dual 4070 OC
I will still pass on Zen3, mate, because it'll be the last one on the socket. Even if Zen4 only brings DDR5 on a new socket, it'll be worth it. PCIe 5.0 isn't really needed, and USB 4.0 would be nice, but again, not something I need to have. Plus the money not spent on Zen3 can be used to buy a PS5. That's what I'll be doing. Zen4 will be too nice to pass up, I think.
     
  2. PrMinisterGR

    PrMinisterGR Ancient Guru

    Messages:
    8,125
    Likes Received:
    969
    GPU:
    Inno3D RTX 3090
My only fear is that my old mobo and the 2600k might just stop working at some point :p
There is also the issue that every single new console has problems in its first revision, either the core console or the controller.
     
  3. Fox2232

    Fox2232 Guest

    Messages:
    11,808
    Likes Received:
    3,371
    GPU:
    6900XT+AW@240Hz
Exactly. If the chip's IPC is within margin of error of the last generation, and power efficiency is practically the same, then the OC limitation, which last generation was temperature, will be much the same.

As someone with an older chip that does 5.2 GHz and whose OC is limited by cooling, there is no point in getting a new generation that will do the same 5.2 GHz due to the same cooling limitation.
One may as well buy better cooling instead, or save the money by waiting for a CPU that meaningfully improves the situation.
     
  4. RavenMaster

    RavenMaster Maha Guru

    Messages:
    1,356
    Likes Received:
    250
    GPU:
    1x RTX 3080 FE
How come there are no 4K gaming benchmarks? We have 1080p and 1440p but no 4K.
     

  5. metagamer

    metagamer Ancient Guru

    Messages:
    2,596
    Likes Received:
    1,165
    GPU:
    Asus Dual 4070 OC
At 4K the charts would look pretty much identical.
Fingers crossed it won't. If it does, you could still grab a cheap B450 and a 3600 in the meantime. Plus you'd need RAM, but that could be carried over to a new rig.
     
  6. SpajdrEX

    SpajdrEX Ancient Guru

    Messages:
    3,399
    Likes Received:
    1,653
    GPU:
    Gainward RTX 4070
Ok, so about i5 10400 overclocking... I at least tried setting all cores to 4.3 GHz; nothing works. Perhaps FSB to 102 MHz, but that's all!
This is with a Gigabyte Z490M Gaming X mobo.
     
  7. PrMinisterGR

    PrMinisterGR Ancient Guru

    Messages:
    8,125
    Likes Received:
    969
    GPU:
    Inno3D RTX 3090
By default a non-K part can't be overclocked, right?
     
  8. metagamer

    metagamer Ancient Guru

    Messages:
    2,596
    Likes Received:
    1,165
    GPU:
    Asus Dual 4070 OC
Not via multiplier, no. Best to just stick to tweaking RAM. Not sure if the cache can be overclocked; that would also help.
     
  9. user1

    user1 Ancient Guru

    Messages:
    2,748
    Likes Received:
    1,279
    GPU:
    Mi25/IGP
No. Though some boards do still include an external BCLK generator, it cannot be used to OC non-K chips because of the soft lock Intel imposed after the Skylake debacle.
     
  10. jwb1

    jwb1 Guest

    Messages:
    725
    Likes Received:
    157
    GPU:
    MSI GTX 2080 Ti
It's when you say things like this that no one takes you seriously. :rolleyes:

Both of these points are wrong. Most people agree that how much Intel has gotten out of 14nm is impressive. And if Intel were on 7nm, they would have pretty much the same power usage. You can't give AMD all the credit for power usage when it's the process that is helping the most in that area, not AMD.

And for the people who seem to think the difference in gaming is only a couple of FPS: 1440p, btw.

[IMG]
     

  11. alanm

    alanm Ancient Guru

    Messages:
    12,236
    Likes Received:
    4,437
    GPU:
    RTX 4080
I don't believe this. For such a massively GPU-intensive game to show that much difference between the 9900K and 10900K at 1440p is fishy as hell. Even worse is the near-double performance vs the 3900X, which is ridiculous. Source link?
     
  12. sverek

    sverek Guest

    Messages:
    6,069
    Likes Received:
    2,975
    GPU:
    NOVIDIA -0.5GB
    Last edited: Jun 1, 2020
  13. NCC1701D

    NCC1701D Master Guru

    Messages:
    269
    Likes Received:
    172
    GPU:
    RTX 4090 FE
It is from the Digital Foundry video that you can find on YouTube under the title below. That's a cherry-picked screenshot that lasts for about a second or two until things even out. The reviewer even states that early in the test all CPUs are pretty even, until you get to that one section of the benchmark near the end. You can also find second-long sections where the 3700X is beating the 10900K, so I'm not sure how relevant that single anecdotal example is for CPU buyers. Trying a little too hard to justify that purchase, if you ask me :)
    Intel Core i9 10900K Review: The King of Gaming Performance - But Should You Buy It?
     
  14. sverek

    sverek Guest

    Messages:
    6,069
    Likes Received:
    2,975
    GPU:
    NOVIDIA -0.5GB
That's even sadder. I bet I could sell an Illuminati story in the Intel CPU fan club Discord; just add a single fact as the cherry on top of the story and they'd start wearing black robes right away :D

That's some cruel tunnel vision and echo chamber effect going on.
     
  15. Fox2232

    Fox2232 Guest

    Messages:
    11,808
    Likes Received:
    3,371
    GPU:
    6900XT+AW@240Hz
You did not disprove either of @PrMinisterGR 's statements.

Zen2 has higher IPC than Intel in most cases; you just clearly do not understand the meaning of the term.
And his claim that AMD's CPUs tend to draw less energy is true too. Clock for clock it is not that big a difference, but since Intel's chips actually run at quite a bit higher clocks in most scenarios, the difference is far from small.

Now to the image. Where did you get that from? I want to know, because it shows that person is not exactly doing his job right. Or is he?
The image shows a much bigger difference than what Hilbert measured. (And that's with a 2700X, while the image has a 3700X.)
Then the image claims that going from a 9900K to a 10900K will increase FPS by 26.5% at 4K. Do you really believe that?

Edit: Thanks @NCC1701D for the source. As far as "CPU bound" goes, DF got it wrong. (As is usual for them.)
While they avoid showing an actual benchmark of the game as much as possible, they did show the same area at 1080p, and they did show other areas too.
For example, as the character turns to look shortly before the train hits the barricade, at 1080p there is an FPS dip, at 1440p there is a smaller FPS dip, and at 4K there is an FPS gain.
How does that happen? Well, the engine does certain things per frame, one of them being asset loading/pre-processing.
When a game that usually pulls 140 fps enters an area where it can momentarily do 200 fps at 1080p, it takes 5 ms to make a frame. Add some 4 ms per frame for another not-well-optimized thing like loading, and when that happens the fps dips to 111, a 9 ms frametime. But at a higher resolution the average areas will have lower fps and higher frametimes, so the impact of that extra 4 ms of processing may even end up looking inverted, as it does in that particular spot.
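The frametime arithmetic above can be sketched in a few lines. The 200 fps and 4 ms figures are the hypothetical numbers from the post, and the 60 fps 4K baseline is an assumption added for illustration:

```python
def frametime_ms(fps: float) -> float:
    """Convert frames per second to milliseconds per frame."""
    return 1000.0 / fps

def fps(frametime: float) -> float:
    """Convert milliseconds per frame back to frames per second."""
    return 1000.0 / frametime

base = frametime_ms(200)        # 5 ms per frame at a momentary 200 fps
with_loading = base + 4.0       # add 4 ms of asset loading per frame
dipped = fps(with_loading)      # ~111 fps, i.e. a 9 ms frametime

# At 4K the baseline frametime is already longer, so the same fixed
# 4 ms costs a much smaller fraction of the frame budget:
base_4k = frametime_ms(60)      # ~16.7 ms per frame
dip_4k = fps(base_4k + 4.0)     # ~48 fps: a smaller relative drop
```

The point is that a fixed per-frame cost shrinks as a percentage of the frame budget when the baseline frametime grows, which is why the same hitch can look dramatic at 1080p and nearly vanish at 4K.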

And a little fun at the end: depending on how the engine is coded, faster storage may lead to bigger fps drops. Let's say the engine gives one of its threads 2 ms per frame to load data. Then another thread has to process it. But if that processing thread interferes with the main game thread, that's a problem.

Basically, when someone makes claims about something being CPU bound, they should at least provide basic CPU utilization data, because there are many other things which may differ from system to system.
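The basic per-core utilization asked for here can be computed from two snapshots of a core's jiffy counters (on Linux, one row of `/proc/stat`). A minimal sketch; the snapshot arrays below are made-up numbers, not real readings:

```python
def core_utilization(prev, curr):
    """Fraction of time a core was busy between two /proc/stat samples.

    prev/curr are lists of jiffy counters for one core, in /proc/stat
    order: user, nice, system, idle, iowait, irq, softirq, ...
    Index 3 is the idle counter.
    """
    total = sum(curr) - sum(prev)
    idle = curr[3] - prev[3]
    return (total - idle) / total if total else 0.0

# Made-up snapshots: 1000 total jiffies elapsed, 150 of them busy.
before = [100, 0, 50, 850]
after = [200, 0, 100, 1700]
print(core_utilization(before, after))  # 0.15
```

In a real capture you would read `/proc/stat` twice with a sleep in between; logging one such number per core alongside an fps trace is enough to show whether a dip is actually CPU bound.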
     
    Last edited: Jun 1, 2020

  16. alanm

    alanm Ancient Guru

    Messages:
    12,236
    Likes Received:
    4,437
    GPU:
    RTX 4080
Same video at a different point in the benchmark. The 3700X, 9900K and 3900X beat the 10900K :D.
[IMG]

Source: Digital Foundry review on YT.
     
  17. Astyanax

    Astyanax Ancient Guru

    Messages:
    17,016
    Likes Received:
    7,353
    GPU:
    GTX 1080ti
  18. Fox2232

    Fox2232 Guest

    Messages:
    11,808
    Likes Received:
    3,371
    GPU:
    6900XT+AW@240Hz
Sorry, the guy is an amateur...
First, he assumes that the people who measure these things are doing it wrong. That means he did not even look properly at the data from all those people who do clock-to-clock comparisons.
Second, he mentions frequency as if it mattered in the testing, which hints that he did exactly the same thing he blames others for.
Third, there is a total lack of information about the test setups.

He gave only hints. Is he comparing some monolithic CPU to a Rome-based EPYC that has an I/O die plus 8 CPU chiplets? What are the memory clocks and timings?
Is he comparing desktop Skylake? Is he actually attempting to measure Zen2 core IPC, or the performance of a specific Zen2-based product that has increased latency due to its particular configuration?

But one thing is clear: the article is lazy and half-baked, and does not apply to the AM4 socket and its CPU+memory configurations.
And it's not the only time he changed the values there.
     
    Last edited: Jun 1, 2020
  19. Astyanax

    Astyanax Ancient Guru

    Messages:
    17,016
    Likes Received:
    7,353
    GPU:
    GTX 1080ti
I thought the same: he's cherry-picking results and building a sample that is intentionally unable to take advantage of the µop and smart caches, then claiming IPC is less than Intel's.

    - Agner Fog
     
  20. Fox2232

    Fox2232 Guest

    Messages:
    11,808
    Likes Received:
    3,371
    GPU:
    6900XT+AW@240Hz
This is basically his method. This kind of "observation" is by itself disruptive to performance, and it bets on the assumption that the things which make it vastly inaccurate are mostly absent.
(That is an assumption on his side, like many other things.)

And while I do not think he is cherry-picking his results, he is limiting the comparison to his specific use case and uses a far-from-optimal method of measurement.
It is like having some black box and measuring it with add & xor instructions only, while ignoring all the others it can execute.
I am sure that Zen2 is not better than Comet Lake at every type of data processing, and that different data structures, sizes, and instruction combinations will benefit each architecture differently.
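The add-&-xor point can be illustrated with made-up perf-counter readings: the same two chips can rank differently depending on the instruction mix, so an IPC figure from a single mix says little. All numbers below are hypothetical, and "cpu_a"/"cpu_b" are placeholders, not real parts:

```python
def ipc(instructions: float, cycles: float) -> float:
    """Retired instructions per clock cycle."""
    return instructions / cycles

# Hypothetical (instructions retired, cycles taken) for the same work
# on two unnamed CPUs, under two different instruction mixes.
runs = {
    "add_xor_only": {"cpu_a": (1e9, 2.5e8), "cpu_b": (1e9, 3.0e8)},
    "mixed_real_code": {"cpu_a": (1e9, 5.0e8), "cpu_b": (1e9, 4.0e8)},
}

for workload, cpus in runs.items():
    winner = max(cpus, key=lambda c: ipc(*cpus[c]))
    scores = {c: round(ipc(*v), 2) for c, v in cpus.items()}
    print(workload, scores, "->", winner)
# add_xor_only favours cpu_a (IPC 4.0 vs ~3.33);
# mixed_real_code favours cpu_b (IPC 2.0 vs 2.5).
```

With these invented numbers the add/xor microbenchmark crowns one chip while the mixed workload crowns the other, which is exactly why a comparison restricted to two instruction types cannot support a general IPC claim.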

I even dare to say: right tool for the job.
And that's where even a university professor can easily fail. What he found is that a Zen2-based Rome server chip is likely not best suited for his workload.
(I used "likely", because if he had used real-world testing methods on his workload and measured time to finish against something meaningful like power draw, Zen2 might still win, being faster while using less energy per unit of work. It is hard to say, because he did his best to obscure the situation, preventing constructive criticism.)
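The "energy per work done" comparison reduces to average power multiplied by time to finish. A tiny sketch with hypothetical wattages and run times (chosen to show that a slower, lower-power chip can still use less energy per completed task):

```python
def task_energy_joules(avg_power_w: float, seconds_to_finish: float) -> float:
    """Energy consumed to complete one task: average power x run time."""
    return avg_power_w * seconds_to_finish

# Hypothetical chips: A is slower but draws less power under load.
chip_a = task_energy_joules(65.0, 100.0)   # 6500 J for the task
chip_b = task_energy_joules(125.0, 80.0)   # 10000 J, despite finishing sooner
print(chip_a < chip_b)  # True: A uses less energy per task here
```

So "faster" and "more efficient per task" are separate questions, which is why time-to-finish needs to be reported together with power draw.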
     
