Intel 10th Gen Core X Cascade Lake HEDT Processors Launch October 7th at ~55 USD per core

Discussion in 'Frontpage news' started by Hilbert Hagedoorn, Oct 2, 2019.

  1. fantaskarsef

    fantaskarsef Ancient Guru

    Messages:
    15,693
    Likes Received:
    9,573
    GPU:
    4090@H2O
    hehe dat wall of text... have a like for effort :D

    Still @bobblunderton, I'd go so far as to say that beyond 1080p not everybody needs to upgrade from Haswell. Maybe if you're running a Haswell 4C, but 6C is still doing okay / fine-ish at 4.5GHz. That is, in gaming; no idea about workloads (there it always gets better with upgrades like Zen 2).
     
    Deleted member 213629 likes this.
  2. user1

    user1 Ancient Guru

    Messages:
    2,746
    Likes Received:
    1,279
    GPU:
    Mi25/IGP
    Pretty sure these will be soldered, which unfortunately means delidding isn't a great option. Hopefully they improved the solder job, or it will end up like the 9980XE.
     
  3. Exodite

    Exodite Guest

    Messages:
    2,087
    Likes Received:
    276
    GPU:
    Sapphire Vega 56
    Not going to lie, this really surprised me!

    I was expecting Intel to pull out some new instructions, run a targeted benchmark and go "see, look at how much value we are!" rather than actually lowering prices. So this is a pleasant surprise.

    That said, thinking about the situation, it's really quite brilliant from a marketing standpoint.

    Now, before I go into details and someone's head explodes, let me preface this by saying it's clearly just speculation on my part. And while there are reasons to prefer Intel's HEDT platform - certain AVX workloads or latency-sensitive applications come to mind - those cases are very few compared to the ones better handled by the mainstream options (whether that's the i9 9900K or the AMD offerings) or Threadripper. And so, with no further ado...

    Even with the price cuts I'd say X299 is a write-off. Pricing per core isn't competitive with mainstream platforms, and even compared to X570 the connectivity is only marginally better. It's competitive with 2nd-generation Threadripper for tasks that favor Intel's architecture, but if you benefit more from cores or bandwidth then Threadripper 2000 is still the better option. And I very much doubt people will flock to buy into Cascade Lake-X before seeing how it stacks up against Threadripper 3000.
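
    To put rough numbers on that (list prices from around the announcement, quoted purely for illustration): the 18-core 10980XE at $979 works out to about $54 per core, while the 12-core Ryzen 9 3900X at $499 is roughly $42 per core and the announced 16-core 3950X at $749 about $47. So even post-cut, X299 doesn't undercut the mainstream on dollars per core.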

    On top of that, Intel is having further supply issues, and a new set of huge, monolithic dies won't help in that regard. Especially as demand for previous-generation parts still sitting in stores is going to be non-existent after this announcement, given the price difference.

    Essentially my argument is this: Intel is making a big show out of lowering prices for parts that
    • a) won't be released for two months,
    • b) they can't deliver due to their supply issues, and
    • c) very few people would even want.
    They're generating surprise, delight and goodwill they sorely need, at essentially no cost - because they won't be asked to deliver on these promises to any measurable extent.

    I can't speak to how good Threadripper 3000 is going to be, but this announcement puts some pressure on AMD's top-end AM4 chips and current Threadripper lineup with what's essentially vaporware.

    Regardless of what actually happens with X299 and Cascade Lake-X, it's absolutely brilliant marketing on Intel's part!
     
  4. ngoni615

    ngoni615 Active Member

    Messages:
    56
    Likes Received:
    18
    GPU:
    GTX 1080ti 11gb
    Sorry bud, but I am going to correct you there about the move from a 980 Ti to a 1080 Ti. The difference is huge: roughly double the performance, game- and resolution-dependent. It is indeed night and day, and this is coming from someone who used both.
     

  5. Exodite

    Exodite Guest

    Messages:
    2,087
    Likes Received:
    276
    GPU:
    Sapphire Vega 56
    You may want to correct your quoting. :p
     
  6. Loophole35

    Loophole35 Guest

    Messages:
    9,797
    Likes Received:
    1,161
    GPU:
    EVGA 1080ti SC
    6 cores with SMT, be it Intel or AMD, should be the minimum moving forward for a gaming rig. However, with the new consoles housing a custom low-power 8-core SMT-enabled Zen 2 CPU, we may have to start rethinking that recommendation as well. I'm curious, though, whether devs will have low-level access to the CPU to the point that some may want to disable SMT, optimizing the hardware to the game engine instead of optimizing the game engine to properly leverage what are essentially pseudo-threads.
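
    As a rough illustration of that thread-count question (a minimal standard-C++ sketch, not code from any engine or console SDK):

        #include <algorithm>
        #include <thread>

        int main() {
            // hardware_concurrency() reports *logical* processors, so an
            // SMT-enabled 8-core chip typically reports 16 (0 if unknown).
            unsigned logical = std::thread::hardware_concurrency();

            // An engine that considers SMT siblings counterproductive might
            // assume two threads per core and halve the count; finding the
            // true physical core count needs OS-specific queries.
            unsigned physical_guess = std::max(1u, logical / 2);

            // Leave a core for the OS/driver, as console titles often do.
            unsigned workers = std::max(1u, physical_guess - 1);
            (void)workers; // worker-pool creation omitted in this sketch
        }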
     
    airbud7 likes this.
  7. fantaskarsef

    fantaskarsef Ancient Guru

    Messages:
    15,693
    Likes Received:
    9,573
    GPU:
    4090@H2O
    Well, I agree about the cores, but of those 8 console cores, a thread or maybe a whole core will probably remain reserved for the system, like back in the day when the first 4-core consoles arrived, right? So we're effectively at 6 with the next generation... which isn't here yet. Until game programming picks that up it'll probably take another year or two, or more, so by the time the console generation after the upcoming one hits the shelves, we can think about buying more than 8 cores. Which is available on mainstream platforms right now.

    You are right, but I have seen and heard this too often... "consoles gain more cores, watch PC games make use of them as well", or "low-level programming will make devs use more CPU cores"... yeah, it's not going that fast, unfortunately. Or I would have made use of my 6 cores already... not sure I couldn't do just as well with 4 cores. That said, I didn't test. But after all, we agree, since like I already said, 6 cores are doing okay right now. And tbh I'm fairly sure that 6C at the right frequency + IPC will still be fine in 2020, 2021... no need for panic buys of 24C CPUs right now :D
     
  8. Loophole35

    Loophole35 Guest

    Messages:
    9,797
    Likes Received:
    1,161
    GPU:
    EVGA 1080ti SC
    Oh, try to run any newer Frostbite game on a 4-core. The new Madden game is a stuttering mess with audio dropouts on 4 fast cores. I still get hitching on my 4c/8t laptop (this is a Kaby Lake CPU too), but my 8700K and now 9900KF run it flawlessly.
     
    nizzen likes this.
  9. - crap
     
  10. Loophole35

    Loophole35 Guest

    Messages:
    9,797
    Likes Received:
    1,161
    GPU:
    EVGA 1080ti SC
    ? Care to elaborate on your statement?
     

    Sure! What you wrote reminded me of something I was looking at a day or so ago: a comparison of how relatively close the 9900K and 9700K were in benchmarks. I thought maybe you're right, hence my ..... - crap. (I hope more development goes into multi-threading in console markets; that in turn can spill over into the PC sector, whether OS-side or gaming.)

    Side bar - those were just gaming benchmarks. In the case of the 9700K, it has no HT and still tests near the 9900K, leading me to believe there's much optimization to be desired for simultaneous threads overall. Not just in games, but OS-side etc. The concept could be flawed; I doubt it. The upcoming Zen 3 is rumored to be built around quad-thread SMT - I'd find it odd for the future to be invested so heavily in threaded cores if they were a moot point.
     
    Loophole35 and Embra like this.
  12. Embra

    Embra Ancient Guru

    Messages:
    1,601
    Likes Received:
    956
    GPU:
    Red Devil 6950 XT
    Seems clock speeds per core have hit a wall. More cores and better threading should be the next focus for software, etc.
     
  13. D3M1G0D

    D3M1G0D Guest

    Messages:
    2,068
    Likes Received:
    1,341
    GPU:
    2 x GeForce 1080 Ti
    Time to bring back NetBurst :p
     
    fantaskarsef likes this.
  14. user1

    user1 Ancient Guru

    Messages:
    2,746
    Likes Received:
    1,279
    GPU:
    Mi25/IGP
    Imo the reason for the lack of improvement from SMT in games is the nature of the workload.
    SMT doesn't really reduce the amount of time a task takes, it merely lets you run more tasks simultaneously, which can help reduce the time it takes to render a frame, but you still cannot present a frame until the slowest task is done.
    Games with a lot of units on screen can benefit more from SMT, but for your avg shooter, not so much.
    This is why even something like a Phenom can still achieve 60fps in modern games despite being many times slower than modern CPUs; the latency isn't that much better today than it was 10 years ago.

    Games are fundamentally sequential workloads; you can't render complete frames asynchronously (i.e. frame 1 must come before frame 2). Queuing and threading will only get you so far. It's just not something that's going to scale to 16-32 threads very well.
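
    That join-before-present bottleneck is easy to sketch (a toy C++ example; the job names are made up for illustration):

        #include <future>

        void update_ai()      { /* ... */ }
        void update_physics() { /* suppose this is the slow job */ }
        void update_audio()   { /* ... */ }
        void present_frame()  { /* swap buffers */ }

        int main() {
            for (int frame = 0; frame < 3; ++frame) {
                auto ai      = std::async(std::launch::async, update_ai);
                auto physics = std::async(std::launch::async, update_physics);
                auto audio   = std::async(std::launch::async, update_audio);

                // The join: frame time is the *max* of the job times, not the
                // average, so extra threads stop helping once one job dominates.
                ai.get(); physics.get(); audio.get();

                present_frame(); // frame N must finish before frame N+1 starts
            }
        }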
     
  15. D3M1G0D

    D3M1G0D Guest

    Messages:
    2,068
    Likes Received:
    1,341
    GPU:
    2 x GeForce 1080 Ti
    I dunno. As a software developer, I don't see why games cannot be done asynchronously. GPUs display frames in sequence and yet they are massively parallel chips - the power of GPUs doesn't come from high clock speeds but from thousands of little cores running at relatively slow speeds. There's no reason to think the CPU is fundamentally different - it might require some new approaches to game design, but I think it can be done.

    I think the main reason for the lack of SMT optimization is that such chips weren't available to mainstream consumers. What's the point in optimizing your game for 6+ cores if most mainstream gamers are using dual- or quad-core CPUs? Most game companies aren't swimming in cash (and even those that are will cut budgets and timelines to meet holiday deadlines), so it's important not to waste resources on features that will yield minimal benefits.
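
    For what it's worth, the data-parallel slice of a frame does already scale this way. A minimal sketch using C++17 parallel algorithms (assuming a standard library that actually parallelizes std::execution::par; the entity layout is invented for the example):

        #include <algorithm>
        #include <execution>
        #include <vector>

        struct Entity { float x = 0.0f, vx = 1.0f; };

        int main() {
            std::vector<Entity> entities(100000);
            const float dt = 0.016f; // roughly one 60Hz frame

            // Each entity update is independent of the others, so the work
            // can spread across however many cores the machine has.
            std::for_each(std::execution::par,
                          entities.begin(), entities.end(),
                          [dt](Entity& e) { e.x += e.vx * dt; });
        }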
     
    Deleted member 213629 likes this.

  16. user1

    user1 Ancient Guru

    Messages:
    2,746
    Likes Received:
    1,279
    GPU:
    Mi25/IGP
    When it comes to rendering something like a movie, or a single frame, multi-threading works fine and scales really well (hence why GPUs are massively multi-threaded). But framerate in relation to the CPU is a different ball of wax: you can't present more than one frame at a time to the monitor. The game logic can be threaded to a degree, but you run into diminishing returns, because you still have to wait for the completion of the slowest task before you can continue, or you risk getting too far ahead (we don't like input lag, do we). You end up stuck waiting (some types of work can't be multi-threaded) before the frame can be rendered and sent to the monitor. It's mainly an issue because of how short the time-frame is to complete all of the tasks (<16ms).
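
    For scale (simple arithmetic, not from the post): at 60 Hz the whole budget is 1000 ms / 60 ≈ 16.7 ms per frame, and at 144 Hz it shrinks to 1000 / 144 ≈ 6.9 ms, so the slowest job on the critical path has to land inside that window every single frame.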


    What I'd like to see is new monitor technology where you don't have to send the monitor a complete frame, so you could update different parts of the image at different rates, though that brings its own problems.


    edited for clarity
     
    Deleted member 213629 likes this.
    It's costly and takes a lot of time to design a new engine for every single game; not every studio is Remedy. @D3M1G0D I see where you're going with this. SPUs working alongside GPUs (nothing new; the tech goes as far back as pre-2007 even). The multi-thread wars, as it were, are relatively new on the consumer-grade side of things. HEDT was a niche market for a while; non-HEDT i7s had been quad-core for as far back as I can recall.

    @user1 Well, with the advent of DX12 we have async compute as a big feature-level offering. (source) Until recently, post-release engine overhauls / heavy patching had often been required to integrate multi-core/threaded support. Anyone recall GTA IV & Crysis 1/Warhead? For over a decade most people had 2-4 physical cores on average, with the exception of HEDT platforms offering 6-10 for consumers (excluding the Phenom II X6). Server markets are something else entirely.
     
    Last edited by a moderator: Oct 4, 2019
  18. SuperAverage

    SuperAverage Guest

    Messages:
    247
    Likes Received:
    2
    GPU:
    Gigabyte xtreme 1080
    It's because the instructions the GPU handles are vastly different from the ones the CPU handles.
     
  19. Neo Cyrus

    Neo Cyrus Ancient Guru

    Messages:
    10,780
    Likes Received:
    1,393
    GPU:
    黃仁勳 stole my 4090
    That's tray pricing per 1,000 units... in USD. And outside the US, in the EU and Canada, taxes make a huge difference as well. This will come out significantly more expensive than AMD per core, even in the USA. Then there's the whole "frack you pay us" MO of Intel, where they artificially lock motherboards so you have to buy a new one for every new "generation" even though their last 9002 generations are all the same crap.

    Intel will end up getting 1 frame more performance in 1337 Sh00t3r 420 XTREME (or whatever the frack degenerates play these days), which is designed to run like crap using AMD hardware, at 320x240 resolution. Then all the fanboys will jizz in their pants and run out to spend their life savings. So the usual.
     
  20. sverek

    sverek Guest

    Messages:
    6,069
    Likes Received:
    2,975
    GPU:
    NOVIDIA -0.5GB
    Well, that's escalated quickly.
     
