Core i9-10900K can boost to 5.3 GHz, more specifications of 10th Gen Core Comet Lake-S leak

Discussion in 'Frontpage news' started by Hilbert Hagedoorn, Dec 28, 2019.

  1. oxidized

    oxidized Master Guru

    Messages:
    234
    Likes Received:
    35
    GPU:
    GTX 1060 6G
    Tell your friend to get his sh** together and check again because there's probably something wrong with his tests

    ROFLMAO
     
    Last edited: Dec 28, 2019
  2. bobblunderton

    bobblunderton Master Guru

    Messages:
    420
    Likes Received:
    199
    GPU:
    EVGA 2070 Super 8gb
    That actually shocks me - but I won't argue that it's wrong (I've seen some of this occasionally in benchmarks).
    Once someone has that much invested, you'd think they'd go 4K so they could actually put the GPU to use. Being that CPU-bound (likely a DX11 title limited by draw calls) sounds like the game developer's modeler(s) didn't connect all the UVs they could have, and did a slop job. Some games lose 10~12 fps between the two processors, but it's not every game. In a perfect world, the CPU would only have to do AI, physics, and mild servings of draw calls (fed to the GPU by the CPU to render UVs, or textured faces, on 3D models - one texture at a time, one model at a time).
    i9 9900K-anything + 2070 vs R7 3700X + 2080 = roughly the same cash investment*, and close to the same fps on average. I would say "trashed" is a strong word for that - not necessarily the wrong word, though. I'd describe it more as "it all works out in the end if you don't use the GPU fully in some titles but do in others". As in my case, the build and the use of it differ by person; it just depends on how good they are at judging their needs and what the budget is.
    I actually almost purchased a 2080, but the 2070 Super was so close in performance versus the price jump to the 2080. It really depends on how the game engine is optimized and programmed: how many state changes there are between frames, how much AI and physics are going on, whether the game is script-heavy, and whether it's DX11 (or lower) or DX12/Vulkan (less likely to be CPU-bound thanks to multi-threaded draw-call submission, which REALLY helps open-world scene creation - see the sketch after the note below).
    *Note: does not take into account the following:
    Differences in TDP or cooling requirements,
    case upgrades required for additional airflow or radiator fitment,
    RAM speed increases accruing additional cost - though in optimal scenarios both machines would have 3600 MHz CL14~CL16 RAM, etc.
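
    To put rough numbers on that draw-call point - a minimal back-of-the-envelope sketch, where the draw-call count, per-call CPU cost, and thread count are all assumed values for illustration, not measurements from any real engine or API:

    Code:
    # Illustrative sketch only - assumed numbers, not a real graphics API.
    # Models why a draw-call-heavy DX11 title goes CPU-bound, and why
    # DX12/Vulkan-style multi-threaded command recording raises the ceiling.

    DRAW_CALLS = 12_000     # assumed open-world scene
    US_PER_CALL = 5         # assumed CPU cost per draw call, microseconds
    RENDER_THREADS = 6      # threads recording command lists in parallel

    def cpu_frame_ms(draw_calls: int, threads: int = 1) -> float:
        """CPU-side submission time per frame, in milliseconds."""
        return draw_calls * US_PER_CALL / threads / 1000

    dx11 = cpu_frame_ms(DRAW_CALLS)                  # one submission thread
    dx12 = cpu_frame_ms(DRAW_CALLS, RENDER_THREADS)  # parallel recording

    print(f"DX11-style: {dx11:.0f} ms/frame -> capped near {1000 / dx11:.0f} fps")
    print(f"DX12-style: {dx12:.0f} ms/frame -> capped near {1000 / dx12:.0f} fps")

    With those assumed numbers, the single-threaded path caps out well under 60 fps no matter what GPU is installed, while spreading submission across threads lifts the ceiling - which is the whole DX12/Vulkan pitch for open worlds.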

    I'd reckon nobody lost out on either of those machines; one just poured more into the GPU and the other poured more into the CPU. If the CPU is holding back your GPU, ask more of the GPU by increasing the resolution. Conversely, if the GPU is holding things back, turn down or turn off a setting or two.

    That being said, there's always a market for 'the fastest game machine' as there is a market for 'the rendering powerhouse' or 'well priced content creation configuration' as I use here.
    One thing's for sure: my days of overclocking to get an additional 10% out of the machine, spending $100+ on cooling and an extra $50~100 on a fancier motherboard, are over - but that doesn't hold true for everyone.
    Whoever mentioned the D-stepping 920 OCs above is entirely right. That, the Wolfdale Pentium dual-cores (late Socket 775 budget processors), and the Westmere (?) Xeon 5xxx series (much of it) were some SCREAMING overclockers back in the day, 10 years ago. I miss those days, but not enough to re-live them.
     
  3. JOHN30011887

    JOHN30011887 Master Guru

    Messages:
    213
    Likes Received:
    62
    GPU:
    MSI RTX 4070Ti
    I'll wait till Intel are on 10 nm and hopefully have a decent 8 or 10 core CPU.
     
  4. nz3777

    nz3777 Ancient Guru

    Messages:
    2,504
    Likes Received:
    215
    GPU:
    Gtx 980 Radeon 5500
    I will buy when I see a 6 GHz clock speed and 20 cores / 40 threads. It has to turbo on all cores, not just one. If I wanted a dual core, I'd buy Intel's G3258!

    Hope you guys know I am just messing around. Those are some impressive clock speeds; I'd like to see AMD possibly match them in that field.
     

  5. rl66

    rl66 Ancient Guru

    Messages:
    3,934
    Likes Received:
    841
    GPU:
    Sapphire RX 6700 XT
    It's a good summary of the situation ;)
    Anyway, one day it is Intel, another day it is AMD, etc., etc.;
    it's been like that since the AMD 5x86 (1995/96, lol).
     
  6. Fox2232

    Fox2232 Guest

    Messages:
    11,808
    Likes Received:
    3,371
    GPU:
    6900XT+AW@240Hz
    That was related to the 4GB version, due to the test exceeding its VRAM. The 8GB version had practically double the performance, even on PCIe 3.0 x8.
    Since the user would not see the difference between those maximum and high textures at the resolution used (1080p), he would reduce them and be OK even on the 4GB version.

    Not that I blame AMD for making a low-end card with only PCIe x8, or Intel for not bringing PCIe 4.0 - each has their own reasons.
    But it is nice that we finally have a good example of 4GB of VRAM not being enough, and of the effect of PCIe bandwidth in that situation.
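
    As a rough back-of-the-envelope on the bandwidth side - a minimal sketch, where the 1 GB "overflow" figure is an assumed example rather than anything measured in the test:

    Code:
    # Sketch: why PCIe link width matters once textures spill out of VRAM.
    # PCIe 3.0 runs at 8 GT/s per lane with 128b/130b line encoding.
    GTS_PER_LANE = 8.0
    ENCODING = 128 / 130    # usable fraction after line encoding

    def pcie3_gbs(lanes: int) -> float:
        """Theoretical one-way PCIe 3.0 bandwidth in GB/s."""
        return GTS_PER_LANE * ENCODING * lanes / 8  # Gb/s -> GB/s

    OVERFLOW_GB = 1.0  # assumed texture data spilling past a 4GB card

    for lanes in (8, 16):
        bw = pcie3_gbs(lanes)
        ms = OVERFLOW_GB / bw * 1000
        print(f"x{lanes}: {bw:.1f} GB/s -> ~{ms:.0f} ms to re-stream {OVERFLOW_GB} GB")

    That works out to roughly 127 ms over x8 versus 63 ms over x16 - either way many frames' worth of stalling at 60 fps (16.7 ms per frame), which is why the 4GB card falls apart at maximum textures and why the narrower link roughly doubles the pain.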
    I have said it many times before: AMD's production capacity would not make a sufficient dent in Intel's sales even if AMD immediately sold every CPU they make.
    And AMD's chips are available worldwide in good quantities.

    Intel will have reason to be afraid when AMD makes the same number of CPUs as today, or more, yet their chips are out of stock - because that would signal an unknown level of demand for AMD, and potentially a loss of sales for Intel of dozens of percent, from people waiting for AMD's chips instead.
     
    Last edited: Dec 28, 2019
  7. Alessio1989

    Alessio1989 Ancient Guru

    Messages:
    2,959
    Likes Received:
    1,246
    GPU:
    .
    5.3 GHz on a single core, under 50°C, and under max TDP... for how many milliseconds could this happen, even on a high-end cooling system?
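
    For what it's worth, by the usual power-limit bookkeeping the answer is seconds rather than milliseconds. Here's a minimal sketch of Intel's moving-average turbo budget; the PL1/PL2/tau values are assumed/rumoured figures, since the final 10900K limits weren't public at the time:

    Code:
    import math

    # Assumed limits - not confirmed for the i9-10900K at time of writing.
    PL1, PL2, TAU = 125.0, 250.0, 56.0  # sustained W, boost W, time constant s

    def boost_seconds(pl1: float, pl2: float, tau: float) -> float:
        """Time at PL2, starting from idle, before the exponentially
        weighted average package power reaches PL1 and turbo backs off."""
        return -tau * math.log(1 - pl1 / pl2)

    print(f"~{boost_seconds(PL1, PL2, TAU):.0f} s at PL2 before dropping to PL1")

    With those assumptions, roughly 39 seconds at PL2. The single-core 5.3 GHz bin additionally needs the temperature condition you mention, and boards that ignore the limits entirely will behave differently.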
     
  8. Dazz

    Dazz Maha Guru

    Messages:
    1,010
    Likes Received:
    131
    GPU:
    ASUS STRIX RTX 2080
    I think Intel's marketing team has already been playing this game, judging by the cluster-fk that is the 10th Gen naming scheme. It's a total mess: you have i3s with and without hyper-threading, i5s with and without hyper-threading, i7s with and without hyper-threading, all with varying core counts, and the naming is all over the place. It's like it was created by 3-year-olds.
     
  9. EspHack

    EspHack Ancient Guru

    Messages:
    2,799
    Likes Received:
    188
    GPU:
    ATI/HD5770/1GB
    Nah, more like someone pulled the rug out from under them and they're still picking up the mess.
     
  10. fry178

    fry178 Ancient Guru

    Messages:
    2,078
    Likes Received:
    379
    GPU:
    Aorus 2080S WB
    @squalles
    Not above 1080p.
    And above 120 fps it will only make a difference for (online) PvP (not interested), nor do I see any difference in performance for my 2080.
    When compared in 3DMark with a 9900K + 2080, I'm getting into the top 30 (out of the 220+ listed).
    So my 2080 works as a 2080, even with an R5 3600.
     

  11. cryohellinc

    cryohellinc Ancient Guru

    Messages:
    3,536
    Likes Received:
    2,978
    GPU:
    RX 6750XT/ MAC M1
    The problem here isn't the marketing team; they work with what they are given. The issue is at the core - their development team and the decision makers.

    I am very curious to see how Intel will manage this. With the market share they have, AMD hurts them, but barely. The only thing that can change that is if AMD continues for another 1-2 years to offer a far better product, not only in the consumer market but especially in OEM and enterprise - that is where Intel gets most of its income. Epyc right now is a fantastic product, and whatever comes after it will most likely continue the trend.

    Admittedly, it is impressive what they have managed to squeeze out of their 14 nm process, but it is at its limit. Hence why they try their best to expand the product stack with basically the same product. Hilarious, but here they are.
     
  12. squalles

    squalles Maha Guru

    Messages:
    1,006
    Likes Received:
    106
    GPU:
    Galax RTX 4070ti
    Sure, because you're doing a synthetic, GPU-dedicated test; in real-world games your RTX 2080 performs like an RTX 2070.
     
  13. -Tj-

    -Tj- Ancient Guru

    Messages:
    18,107
    Likes Received:
    2,611
    GPU:
    3080TI iChill Black
    AMD: improving and gaining steadily.

    Intel: let's OC it further and sell it as new again.


    xD
     
    fry178 likes this.
  14. beedoo

    beedoo Member Guru

    Messages:
    149
    Likes Received:
    126
    GPU:
    6900XT Liquid Devil
    Where do you draw this information from, and which domain are we talking about?

    In the home or small office, the general information I found was that the performance dial often leaned in AMD's favour, due to their processors often having more cores.
     
  15. Alessio1989

    Alessio1989 Ancient Guru

    Messages:
    2,959
    Likes Received:
    1,246
    GPU:
    .
    Intel EPT is just a copy of AMD NPT/RVI for SLAT. AMD supports interrupt virtualization as well (even on Windows, though KB4490481 came a little too late). You are just spreading FUD.
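
    If anyone wants to check what their own CPU actually advertises, here's a minimal sketch - Linux-only assumption, since it just parses /proc/cpuinfo:

    Code:
    # Check hardware virtualization + SLAT flags as exposed by the kernel.
    # Intel advertises vmx + ept; AMD advertises svm + npt.
    def cpu_flags() -> set:
        with open("/proc/cpuinfo") as f:
            for line in f:
                if line.startswith("flags"):
                    return set(line.split(":", 1)[1].split())
        return set()

    flags = cpu_flags()
    virt = "vmx (Intel)" if "vmx" in flags else "svm (AMD)" if "svm" in flags else "none"
    slat = "ept" if "ept" in flags else "npt" if "npt" in flags else "not advertised"
    print("Virtualization:", virt)
    print("SLAT:", slat)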
     

  16. squalles

    squalles Maha Guru

    Messages:
    1,006
    Likes Received:
    106
    GPU:
    Galax RTX 4070ti
    Try running any test on even a simple virtualization platform like VMware or VirtualBox, and then I'd like to see you say that again.
     
  17. fry178

    fry178 Ancient Guru

    Messages:
    2,078
    Likes Received:
    379
    GPU:
    Aorus 2080S WB
    @squalles
    Because it takes away the impact of different hardware (RAM etc.) and settings,
    and it isn't easy to recreate the same scene in an actual game,
    so you don't see fps fluctuations just because of an additional explosion that wasn't there in a different run.

    But hey, I guess Siege isn't a real game (at 1440p with vsync it gets 75 Hz (screen max) with maxed settings incl. TAA x4, and with Fast Sync I get a steady 120+ fps).

    Nor have I seen any review on Guru3D that shows more than a 10 fps difference (if that) when going past 1080p on AMD vs Intel,
    and most of the time the fps is already past what's needed.

    At least the people I know that own anything above an NV xx70 (and a supporting ecosystem)
    play shooters at 1440p and up while doing 120/144 on Ryzen without problems.
    And that includes a few who either make six-figure incomes or have no problem spending whatever they want on hardware,
    and they all could have gotten the 9900, yet no one did,
    because they realised they wouldn't pay Intel 30-50% more for the big gains of 5-10 fps (at lower resolutions).
     
  18. squalles

    squalles Maha Guru

    Messages:
    1,006
    Likes Received:
    106
    GPU:
    Galax RTX 4070ti
    When you compensate with more cores, AMD has the advantage, of course.
     
  19. fry178

    fry178 Ancient Guru

    Messages:
    2,078
    Likes Received:
    379
    GPU:
    Aorus 2080S WB
    Which is a given when looking at the same price point - not AMD's fault.
     
  20. Netherwind

    Netherwind Ancient Guru

    Messages:
    8,841
    Likes Received:
    2,417
    GPU:
    GB 4090 Gaming OC
    Requirements: 360 mm AIO?
     
