Core i9-10900K can boost to 5.3 GHz, more specifications of 10th Gen Core Comet Lake-S leak

Discussion in 'Frontpage news' started by Hilbert Hagedoorn, Dec 28, 2019.

  1. patteSatan

    patteSatan Member Guru

    Messages:
    119
    Likes Received:
    27
    GPU:
    Sapphire R9 390
    Intel is a dead horse.
     
    angelgraves13 likes this.
  2. oxidized

    oxidized Master Guru

    Messages:
    219
    Likes Received:
    33
    GPU:
    GTX 1060 6G
Tell your friend to get his sh** together and check again, because there's probably something wrong with his tests.

    ROFLMAO
     
    Last edited: Dec 28, 2019
  3. bobblunderton

    bobblunderton Member Guru

    Messages:
    108
    Likes Received:
    44
    GPU:
    EVGA 2070 Super 8gb
That actually shocks me - but I won't argue that it's wrong (I've seen some of this occasionally in benchmarks).
Maybe once they have that much invested, you'd think they'd go 4K, so they could actually put the GPU to use? Being that CPU-bound (likely a DX11 title limited by draw calls) sounds like the game developer's modeler(s) didn't connect all the UVs they could have and did a slop job. Some games lose 10~12 fps between the two processors, but it's not every game. In a perfect world, the CPU would only have to handle AI, physics, and mild servings of draw calls (fed to the GPU by the CPU to render UVs or textured faces on 3D models, one texture at a time, one model at a time).
i9 9900K-anything + 2070 vs. R7 3700X + 2080 = roughly the same cash investment*, and close to the same fps on average. I'd say "trashed" is a strong word for that, though not necessarily the wrong one; I'd describe it more as "it all works out in the end if you don't fully use the GPU in some titles but do in others". As in my case, the build and the use of it differ by person; it just depends on how good they are at judging needs and what the budget is.
I actually almost purchased a 2080, but the 2070 Super was so close in performance versus the price jump to the 2080. It really depends on how the game engine is optimized and programmed: how many state changes between frames, how much AI and physics are going on, whether the game is script-heavy, and whether it's DX11 (or lower) or DX12/Vulkan (less likely to be CPU-bound thanks to multi-threaded draw-call submission, which REALLY helps open-world scene creation; see the sketch after the note below).
    *Note: does not take into account the following:
    Differences in TDP or cooling requirements,
    case upgrades required for additional airflow or radiator fitment,
RAM speed increases accruing additional cost - though in optimal scenarios both machines would have 3600 MHz CL14~CL16 RAM, etc.
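To illustrate that multi-threaded draw-call point, here's a minimal, API-agnostic C++ sketch (the CommandList/draw names are illustrative stand-ins, not real D3D12/Vulkan calls) of how DX12/Vulkan-style engines record command lists on several cores and submit them from one place, instead of funneling every draw through a single render thread the way DX11 effectively does:

```cpp
// Illustrative sketch, NOT a real graphics API: each worker thread records
// draw commands into its own command list, so draw-call recording cost
// scales across cores instead of bottlenecking one render thread.
#include <cstdio>
#include <string>
#include <thread>
#include <vector>

struct CommandList {                  // stand-in for ID3D12GraphicsCommandList / VkCommandBuffer
    std::vector<std::string> cmds;
    void draw(int meshId) { cmds.push_back("draw mesh " + std::to_string(meshId)); }
};

int main() {
    const int numThreads = 4;         // workers recording in parallel
    const int meshesPerThread = 1000;
    std::vector<CommandList> lists(numThreads);   // one list per worker: no locking needed
    std::vector<std::thread> workers;

    for (int t = 0; t < numThreads; ++t)
        workers.emplace_back([&lists, t, meshesPerThread] {
            for (int m = 0; m < meshesPerThread; ++m)
                lists[t].draw(t * meshesPerThread + m);   // record on this core
        });
    for (auto& w : workers) w.join();

    // Single submission point: the "GPU queue" consumes all lists in order,
    // mirroring ExecuteCommandLists / vkQueueSubmit.
    std::size_t total = 0;
    for (const auto& l : lists) total += l.cmds.size();
    std::printf("recorded %zu draw calls across %d threads\n", total, numThreads);
}
```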

    I'd reckon nobody lost out on either of those machines; one just poured more into the GPU and one poured more into the CPU. If the CPU holds back your GPU, ask more from it by increasing the resolution. Conversely if the GPU is holding things back, turn down / turn off a setting or two.

That being said, there's always a market for 'the fastest game machine', just as there is a market for 'the rendering powerhouse' or the 'well-priced content creation configuration' I use here.
One thing's for sure: my days of overclocking to squeeze an additional 10% out of the machine, spending $100+ on cooling and an extra $50~100 on a fancier motherboard, are over - but this doesn't hold true for everyone.
Whoever mentioned the D-stepping 920 OCs above is entirely right. That, the Wolfdale Pentium dual-cores (late Socket 775 budget processors), and the Westmere (?) Xeon 5xxx series (much of it) were some SCREAMING overclockers back in the day, 10 years ago. I miss those days, but not enough to re-live them.
     
  4. JOHN30011887

    JOHN30011887 Active Member

    Messages:
    97
    Likes Received:
    10
    GPU:
    MSI RTX 2080
I'll wait till Intel is on 10 nm and hopefully has a decent 8 or 10 core CPU.
     

  5. nz3777

    nz3777 Ancient Guru

    Messages:
    2,385
    Likes Received:
    177
    GPU:
    Gtx 980 Strix
I will buy when I see a 6 GHz clock speed and 20 cores / 40 threads. It has to turbo on all cores, not just one. If I wanted a dual core I'd buy the Intel G3258!

Hope you guys know I'm just messing around. Those are some impressive clock speeds; I'd like to see AMD match them in that field.
     
  6. rl66

    rl66 Ancient Guru

    Messages:
    2,288
    Likes Received:
    163
    GPU:
    quadro K6000+Tesla M2090
It's a good summary of the situation ;)
Anyway, one day it is Intel, another day it is AMD, etc.
It's been like that since the AMD 5x86 (1995/96 lol)
     
  7. Fox2232

    Fox2232 Ancient Guru

    Messages:
    9,939
    Likes Received:
    2,292
    GPU:
    5700XT+AW@240Hz
That was related to the 4GB version, due to the test exceeding it. The 8GB version had practically double the performance, even on PCIe 3.0 x8.
Since the user would not see a difference between those maximum and high textures at the resolution used (1080p), he would reduce them and be OK even on the 4GB version.

Not that I blame AMD for making a low-end card with only PCIe x8, or Intel for not bringing PCIe 4.0. Each has their own reasons.
But it is nice that we finally have a good example of 4GB VRAM not being enough, and of the effect of PCIe bandwidth in such a situation.
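As a back-of-the-envelope check of that bandwidth gap (standard PCIe 3.0 figures: 8 GT/s per lane with 128b/130b encoding), per direction:

```latex
8\,\mathrm{GT/s} \times \tfrac{128}{130} \approx 7.88\,\mathrm{Gbit/s} \approx 0.985\,\mathrm{GB/s}\ \text{per lane}
\Rightarrow\ \mathrm{x8} \approx 7.9\,\mathrm{GB/s}, \qquad \mathrm{x16} \approx 15.8\,\mathrm{GB/s}
```

So once textures spill out of 4GB of VRAM and have to stream over the bus, an x8 link has half the headroom of x16, which is exactly where the gap shows up.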
I've said it many times before: AMD's production capacity would not make a sufficient dent in Intel's sales even if AMD immediately sold every CPU they make.
And AMD's chips are available worldwide in good quantities.

Intel will have reason to be afraid when AMD makes the same number of CPUs as today or more, but their chips are out of stock. That would signal an unknown level of demand for AMD, and potentially a loss of sales for Intel of dozens of percent due to people waiting for AMD's chips instead.
     
    Last edited: Dec 28, 2019
  8. Alessio1989

    Alessio1989 Maha Guru

    Messages:
    1,483
    Likes Received:
    262
    GPU:
    .
5.3 GHz on a single core, under 50°C and under max TDP... for how many milliseconds could this happen, even on a high-end cooling system?
     
  9. Dazz

    Dazz Master Guru

    Messages:
    866
    Likes Received:
    89
    GPU:
    ASUS STRIX RTX 2080
I think Intel's marketing team has already been playing this game, judging from the cluster-fk that is the 10th Gen naming scheme. It's a total mess: you have i3s with and without hyper-threading, i5s with and without hyper-threading, i7s with and without hyper-threading, all with varying core counts, and the naming schemes are all over the place. It's like it was created by 3-year-olds.
     
  10. EspHack

    EspHack Ancient Guru

    Messages:
    2,472
    Likes Received:
    44
    GPU:
    ATI/HD5770/1GB
nah, more like someone pulled the rug out from under them and they're still picking up the mess
     

  11. fry178

    fry178 Maha Guru

    Messages:
    1,367
    Likes Received:
    167
    GPU:
    EVGA FTW Hybrid2080
    @squalles
Not above 1080p.
And above 120 fps will only make a difference for (online) PvP (not interested), nor do I see any difference in performance for my 2080.
When compared in 3DMark against a 9900K + 2080, I'm getting into the top 30 (out of the 220+ listed).
So my 2080 works as a 2080, even with an R5 3600.
     
  12. cryohellinc

    cryohellinc Ancient Guru

    Messages:
    2,906
    Likes Received:
    2,062
    GPU:
    RX 5700 XT/GTX 1060
The problem here isn't the marketing team; they work with what they are given. The issue is at the core: their development team and the decision makers.

I am very curious to see how Intel will manage this. With the market share they have, AMD hurts them, but barely. The only thing that can change that is if AMD continues to hurt them for another 1-2 years while offering a far better product, not only in the consumer market but especially in OEM and enterprise. That is where Intel gets most of its income. Epyc right now is a fantastic product, and whatever comes after it will most likely continue the trend.

Admittedly, it is impressive what they have managed to squeeze out of their 14 nm process, but it is at its limit. Hence why they try their best to expand the product stack with basically the same product. Hilarious, but here they are.
     
  13. squalles

    squalles Master Guru

    Messages:
    754
    Likes Received:
    32
    GPU:
    Galax GTX 1080 EXOC OC
Sure, because you're doing a synthetic GPU-dedicated test; in real-world games your RTX 2080 performs like an RTX 2070.
     
  14. -Tj-

    -Tj- Ancient Guru

    Messages:
    16,568
    Likes Received:
    1,572
    GPU:
    Zotac GTX980Ti OC
AMD: improving and gaining steadily.

Intel: let's OC further and sell it as new again.


    xD
     
    fry178 likes this.
  15. beedoo

    beedoo Member

    Messages:
    17
    Likes Received:
    8
    GPU:
    Asus Strix 2080 OC
Where do you draw this information from, and which domain are we talking about?

In the home or small office, the general information I found was that the performance dial often leaned in AMD's favour, due to their processors often having more cores.
     

  16. Alessio1989

    Alessio1989 Maha Guru

    Messages:
    1,483
    Likes Received:
    262
    GPU:
    .
Intel EPT is just a copy of AMD NPT/RVI for SLAT. AMD supports interrupt virtualization as well (even on Windows, though KB4490481 came a little too late). You are just spreading FUD.
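For anyone who wants to check those feature bits themselves, here's a minimal sketch (assumes GCC/Clang on x86; Intel's actual EPT flag lives in the VMX capability MSRs, which only ring 0 can read, so from user space only the base VMX/SVM/NPT bits are visible):

```cpp
// Hedged sketch: query CPUID for the virtualization features under discussion.
// Assumes GCC/Clang on x86 (<cpuid.h>). Intel EPT is reported via the
// IA32_VMX_PROCBASED_CTLS2 MSR and needs kernel privileges, so it is omitted.
#include <cpuid.h>
#include <cstdio>

int main() {
    unsigned eax, ebx, ecx, edx;

    if (__get_cpuid(1, &eax, &ebx, &ecx, &edx))                 // standard leaf 1
        std::printf("VMX (Intel virtualization): %s\n", (ecx >> 5) & 1 ? "yes" : "no");

    if (__get_cpuid(0x80000001, &eax, &ebx, &ecx, &edx))        // AMD extended leaf
        std::printf("SVM (AMD virtualization):   %s\n", (ecx >> 2) & 1 ? "yes" : "no");

    if (__get_cpuid(0x8000000A, &eax, &ebx, &ecx, &edx))        // SVM features leaf
        std::printf("NPT (AMD nested paging):    %s\n", (edx >> 0) & 1 ? "yes" : "no");
}
```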
     
  17. squalles

    squalles Master Guru

    Messages:
    754
    Likes Received:
    32
    GPU:
    Galax GTX 1080 EXOC OC
Try doing any test, even on a simple virtualization platform like VMware or VirtualBox, and I want to see you say that again.
     
  18. fry178

    fry178 Maha Guru

    Messages:
    1,367
    Likes Received:
    167
    GPU:
    EVGA FTW Hybrid2080
    @squalles
Because it takes away the impact of different hardware (RAM etc.) and settings,
and it isn't easy to recreate the same scene in an actual game,
so you don't see fps fluctuations just because of an additional explosion that wasn't there in a different run.

But hey, I guess Siege isn't a real game (@1440p with vsync I get 75 Hz (screen max) with maxed settings incl. TAA x4, and with fast sync I get a steady 120+ fps).

Nor have I seen any review on Guru3D that shows more than a 10 fps difference (if that) when going past 1080p on AMD vs Intel,
and most of the time the fps is already past what's needed.

At least the people I know who own anything above an NV xx70 (and a supporting ecosystem)
play shooters at 1440p and up while doing 120/144 on Ryzen without problems.
And that includes a few who either make a six-figure income or have no problem spending whatever they want on hardware,
and they all could have gotten the 9900, yet no one did,
because they realised they won't pay Intel 30-50% more for the big gains of 5-10 fps (at lower res).
     
  19. squalles

    squalles Master Guru

    Messages:
    754
    Likes Received:
    32
    GPU:
    Galax GTX 1080 EXOC OC
When you compensate with more cores, AMD has the advantage, of course.
     
  20. fry178

    fry178 Maha Guru

    Messages:
    1,367
    Likes Received:
    167
    GPU:
    EVGA FTW Hybrid2080
Which is a given when looking at the same price point; not AMD's fault.
     
