12-core Intel Core i9 7920X will get a 400 MHz slower base-clock

Discussion in 'Frontpage news' started by Hilbert Hagedoorn, Jul 19, 2017.

  1. Hilbert Hagedoorn

    Hilbert Hagedoorn Don Vito Corleone Staff Member

    Messages:
    48,325
    Likes Received:
    18,405
    GPU:
    AMD | NVIDIA
  2. Kaill

    Kaill Member Guru

    Messages:
    121
    Likes Received:
    15
    GPU:
    EVGA GTX 1080 FTW
    Going to be interesting to see the performance of those two compared side by side, at stock and overclocked.
     
  3. D3M1G0D

    D3M1G0D Guest

    Messages:
    2,068
    Likes Received:
    1,341
    GPU:
    2 x GeForce 1080 Ti
    400 MHz is a pretty big drop, but it's not completely surprising. Judging by the thermals and power draw of the 7900X, it was very doubtful that they would be able to clock the 12-18 core parts anywhere near the 10-core one.
     
  4. Fox2232

    Fox2232 Guest

    Messages:
    11,808
    Likes Received:
    3,371
    GPU:
    6900XT+AW@240Hz
    Even worse than I thought - and I had already redrawn my bad picture into a pretty bad one.
    Worst case scenario:
    10 * 3.3 GHz = 33 GHz of relative performance
    12 * 2.9 GHz = 34.8 GHz of relative performance

    That's a 5.5% increase in performance for a 20.2% increase in price ($200).
    What's left for 14-core chips and beyond? Or is this chip just a placeholder, so Intel can say 12C parts are available, and then there will be a properly clocked 14C with a huge price bump?

    Considering how close this is to Ryzen in IPC, and that the 12C Threadripper has a 20% base-clock advantage... this is yet another weird chip I would like Intel to explain.
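
    For anyone who wants to poke at those numbers, here's a minimal Python sketch of the cores x base-clock estimate above. The $989 / $1189 figures are the launch list prices implied by the $200 gap mentioned here; the whole thing is a naive upper bound, not a benchmark.

    [CODE]
    # Naive "cores x base clock" comparison of the i9-7900X and i9-7920X.
    # Prices assumed from the ~$200 gap mentioned above; illustrative only.

    chips = {
        "i9-7900X": {"cores": 10, "base_ghz": 3.3, "price_usd": 989},
        "i9-7920X": {"cores": 12, "base_ghz": 2.9, "price_usd": 1189},
    }

    def relative_throughput(chip):
        """Aggregate base-clock throughput: cores * GHz (ignores turbo, memory, scaling)."""
        return chip["cores"] * chip["base_ghz"]

    old, new = chips["i9-7900X"], chips["i9-7920X"]
    perf_gain = relative_throughput(new) / relative_throughput(old) - 1
    price_gain = new["price_usd"] / old["price_usd"] - 1

    print(f"throughput gain: {perf_gain:.1%}")   # ~5.5%
    print(f"price increase:  {price_gain:.1%}")  # ~20.2%
    [/CODE]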
     

  5. wavetrex

    wavetrex Ancient Guru

    Messages:
    2,445
    Likes Received:
    2,539
    GPU:
    TUF 6800XT OC
    This will be really fun once AMD releases Threadripper and reviews start to pop up showing it soundly beats Intel's 12- and 10-core parts while being cheaper than both.

    I think of this when I imagine Intel's board of directors:

    https://www.youtube.com/watch?v=ghzEd4WLUz4
     
  6. Aura89

    Aura89 Ancient Guru

    Messages:
    8,413
    Likes Received:
    1,483
    GPU:
    -
    So...

    20% more cores - 13% per core performance = 20% more expensive.

    Or, in other words, assuming linear scaling, roughly 7% more performance for 20% more cost.
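
    A minimal sketch of the two ways to combine those percentages, assuming the 3.3 GHz and 2.9 GHz base clocks: subtracting the percentages (as above) lands around 7-8%, while multiplying the factors, which is what linear scaling actually implies, lands around 5-6%.

    [CODE]
    # i9-7900X: 10 cores @ 3.3 GHz base; i9-7920X: 12 cores @ 2.9 GHz base.
    # Purely illustrative back-of-the-envelope scaling.

    more_cores = 12 / 10 - 1        # +20.0% cores
    lower_clock = 1 - 2.9 / 3.3     # ~12.1% lower base clock

    additive = more_cores - lower_clock                        # the "20% - 13%" shortcut
    multiplicative = (1 + more_cores) * (1 - lower_clock) - 1  # linear-scaling estimate

    print(f"additive shortcut:       {additive:+.1%}")        # ~ +7.9%
    print(f"multiplicative estimate: {multiplicative:+.1%}")  # ~ +5.5%
    [/CODE]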
     
  7. Robbo9999

    Robbo9999 Ancient Guru

    Messages:
    1,855
    Likes Received:
    442
    GPU:
    RTX 3080
    I like your analysis there. To write down some of my thoughts on one of your points: you ask "what's left for 14 core chips & more", and my take is that I don't think consumers are gonna want 14-core chips now - it's just overkill for almost everyone. Even if you argue that there are people who need enough extra processing power to justify 14 cores or more, I think the answer to more processing power is to increase IPC with architectural changes, which would be more usable performance for applications; I imagine it's hard to code software to use more and more cores efficiently when it comes to CPUs. I can't see there being an arms race between Intel & AMD in the near/mid future beyond 12 cores/24 threads - I think they gotta increase IPC.
     
    Last edited: Jul 19, 2017
  8. nizzen

    nizzen Ancient Guru

    Messages:
    2,414
    Likes Received:
    1,149
    GPU:
    3x3090/3060ti/2080t
    12 cores at 4500 MHz, easy, on custom water. Confirmed :D
     
  9. Agent-A01

    Agent-A01 Ancient Guru

    Messages:
    11,628
    Likes Received:
    1,119
    GPU:
    4090 FE H20
    That's not a correct way to assess performance differences.
     
  10. wavetrex

    wavetrex Ancient Guru

    Messages:
    2,445
    Likes Received:
    2,539
    GPU:
    TUF 6800XT OC
    You are forgetting that the number of "casual" content creators has increased dramatically in recent years.

    - Many gamers now stream or record/upload their gaming sessions
    - More and more YouTubers record whatever they like
    - People have many multimedia-capable devices and might want to re-encode video to play on the go
    - Services like PLEX let your family members and friends watch your media from their devices anytime. Guess what that one uses? Cores! Lots of them
    - Games themselves are becoming more and more threaded, because of the way the new generation of consoles works (8 cores)
    - New software sorts your photo collection automatically (content recognition)

    ... and I could continue for a very long post.

    These many-core CPUs are needed more than ever, and the ONLY REASON people are not using them is that Intel's prices for its HEDT platform have been so damn high, making it inaccessible for most.

    Thanks again, AMD, for propelling us into the new age of many cores!
     

  11. Robbo9999

    Robbo9999 Ancient Guru

    Messages:
    1,855
    Likes Received:
    442
    GPU:
    RTX 3080
    Do you think the 'cores race' will ever stop then? I'm thinking it will just get to a point where more cores won't equal more performance, which is why I was talking about IPC increases being more important as a focus for future processing power increases.
     
  12. Denial

    Denial Ancient Guru

    Messages:
    14,201
    Likes Received:
    4,105
    GPU:
    EVGA RTX 3080
    Depends on the workload. VMs and the like obviously want more cores per socket, rendering probably the same, etc. Office apps and gaming are more difficult to thread across 8+ cores; there is just no advantage, since you have to wait for the slowest operation, which may not be threadable, before the frame can be rendered.

    Problem is that IPC increases are extremely difficult to find. Zen seems like a nice boost, but in reality it's mostly just copying the best parts of what Intel already had. Zen+ might also get a nice boost (10% IPC, and probably more from higher clocks on 7nm), but going forward I think you're going to see AMD hit the same problems Intel has - marginal gains year over year in actual performance.

    I think the future will have to come from a combination of materials-science breakthroughs for transistor scaling and/or a paradigm shift in computing to something optical or biological. I actually feel like all the recent MCM designs would be a good test bed for optical interconnects for package-to-package communication. The transit latency would be much lower, power requirements lower, and bandwidth significantly increased.

    The next 10 years or so is going to be either really boring for computing or really interesting.
     
  13. Robbo9999

    Robbo9999 Ancient Guru

    Messages:
    1,855
    Likes Received:
    442
    GPU:
    RTX 3080
    Very interesting read. Yeah, it does seem that, with Moore's Law slowing down and silicon chips beginning to reach the smallest possible nanometer scales, a new technology beyond silicon is required. I'd agree, those are gonna be the next big massive step changes in performance & potential. If it happens in the next 10 yrs that would be super interesting & not long to wait, although I'm thinking that while there is still room to shrink silicon they're still gonna do that. How many more shrink nodes have they got left? Let's say they shrink once every 3 yrs as it becomes more & more difficult: 14nm-7nm-4nm-2nm-1nm (4 more nodes?) - 3 yrs per node = 12 years of future silicon? By the way, I don't know what the future pipeline for shrinkage actually is, so I made those nodes up!
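
    Just to make that back-of-the-envelope explicit (the node list and 3-year cadence are made up, as stated above), here's the trivial arithmetic as a sketch:

    [CODE]
    # Hypothetical shrink roadmap after 14nm, taken from the guess above.
    future_nodes = ["7nm", "4nm", "2nm", "1nm"]
    years_per_node = 3  # assumed cadence as shrinks get harder

    years_left = len(future_nodes) * years_per_node
    print(f"{len(future_nodes)} nodes x {years_per_node} yrs = {years_left} years of future silicon")
    [/CODE]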

    EDIT: and after doing a bit of research I found this quote: "In fact, industry groups such as the IEEE International Roadmap of Devices and Systems (IRDS) initiative have reported it will be nearly impossible to shrink transistors further by 2023."
    from this article, which is an interesting read on the subject (March 2017): https://cacm.acm.org/magazines/2017/3/213818-the-future-of-semiconductors/fulltext

    Wow, it sounds like silicon will effectively have reached the end of the road within the next 6 years - that's like us today looking back to 2011, not far!
     
    Last edited: Jul 19, 2017
  14. H83

    H83 Ancient Guru

    Messages:
    5,443
    Likes Received:
    2,982
    GPU:
    XFX Black 6950XT
    Well, at some point the "cores race" is going to stop, because Intel and AMD are going to hit a physical limit that prevents them from adding more cores to the same package, the same way they reached a core frequency limit that stops them from going over 5.0 GHz.
    Someone correct me if I'm wrong, please.

    As for the 7920X being clocked slower than the 7900X, that was expected after seeing the thermal problems exhibited by the latter...
     
  15. GhostXL

    GhostXL Guest

    Messages:
    6,081
    Likes Received:
    54
    GPU:
    PNY EPIC-X RTX 4090
    I understand Intel's rush to match AMD's core counts. I honestly don't think dropping the clocks this far on this particular chip is a good way to do it.

    Intel is fine. I refuse to budge from my i7 5775C, as it still eats up 4K together with the rest of my current rig.

    If someone is fully focused on gaming they are still going to buy Intel. I think Intel shoulda left this generation alone and put full focus into the next one, an i7 9700 etc., to match AMD's core counts.


    This core war means nothing for the regular consumer, or even for gamers, once more than 4 cores come into play.

    We need more games to even utilize this stuff.

    Although I don't blame a single person for jumping on Ryzen for gaming, Intel still has the lead there. Ryzen seems to be a great all-rounder though, if you want a bit of everything and good price/performance.

    But I think Intel is trying to push what they can with their current chips. I don't see this as necessary - they're doing it because they can.

    I would not be quick to judge Intel and say they don't know what they are doing. They are just doing something rather than nothing. The i7 8700K seems like a sweet spot to me: six cores, 12 threads - I personally like that idea.


    I personally don't know why this war even started, when the real war is in cache size, chip size, memory, and power consumption - and OC potential. More cores... is more heat. I really think that, in order for Intel and AMD to make more money on their chips, they need to work with game devs A LOT more on how to properly code for CPUs so games utilize all the cores in a PC, or at least push for it. There are tons of ways to offload different parts of a game onto separate cores and use them.

    Only when more games and apps take advantage of more than 4 cores on a regular basis is this "core war" even worth fighting, imho.

    There has to be a way to use these cores in games that don't, even if the GPU eats up most of the game these days. What can the extra cores do to help games? Offload tons of background tasks to the extra cores not being used while in game?

    Why not, I say? Use those cores to do things in the background. If Intel and/or AMD need to write a specific driver for it... do it, I say. Then these cores would be worth upgrading for, and even people like me could say, okay... 6 cores is actually better than 4, etc. Less interference in the game, by unloading most if not all background tasks to unused cores and threads without interrupting the game, sounds great to me.
     
    Last edited: Jul 19, 2017

  16. Exascale

    Exascale Guest

    Messages:
    390
    Likes Received:
    8
    GPU:
    Gigabyte G1 1070
    We have had a bunch of interesting, useful, and BETTER technologies for years already. However, they have not been widely used in consumer or even commercial products because it's been too easy to stay with x86 and DIMM-based RAM.

    The last few years have gotten interesting. 2015 saw HMC used for the first time on a CPU (Fujitsu SPARC XIfx), then the AMD Fury with HBM, GP100 with HBM2, and now V100.

    The trend now is fixing the rest of the system. Doing math (a FLOP) is cheap in terms of energy. Moving data is expensive. Reducing that cost is what everyone is trying to do now.

    You will see 2.5D ICs become more common first, as we have seen with HMC, HBM and 3DS DIMMs. RAM of all kinds will be integrated in new ways, and exotic (but not really new) memory technologies like XPoint (PCM), Re-RAM, nanotube RAM, and STT-MRAM will actually be used in products.

    Essentially everything but the cores has become the bottleneck in many cases. Since cores are often starved for memory bandwidth, with very low byte/FLOP ratios, you can expect more memory-focused architectural enhancements. Eventually architectures will have specialized built-in accelerators, like Intel's FPGA Xeons. Then true 3D mixed-process ICs, with pretty much everything integrated into a single semiconductor brick that includes logic and memory with photonic interconnects.
     
    Last edited: Jul 19, 2017
  17. b101uk

    b101uk Guest

    Messages:
    222
    Likes Received:
    5
    GPU:
    Gigabyte GTX1070
    If that were the case, then AMD is visibly losing that battle, given the size of their chip packages of late.

    But then you still have multiple sockets after that, and that's without assuming there will be further size reductions along the way. There is a point at which even the most avid home user who needs a powerful PC needs no more cores; if they were that much of a power user, running lots of apps or doing lots of encoding, they would do better with two PCs anyway, or switching to multiple sockets, or going to business-oriented product lines with much better scalability.
     
  18. Fox2232

    Fox2232 Guest

    Messages:
    11,808
    Likes Received:
    3,371
    GPU:
    6900XT+AW@240Hz
    You are right: more cores => more concurrent attempts to access memory => lower performance.

    But maybe you meant something else. Please elaborate. Maybe you can create something complex and very accurate.
     
  19. Amx85

    Amx85 Master Guru

    Messages:
    335
    Likes Received:
    10
    GPU:
    MSI R7-260X2GD5/OC
  20. user1

    user1 Ancient Guru

    Messages:
    2,746
    Likes Received:
    1,279
    GPU:
    Mi25/IGP
    Adding more logs to the fire, I see... :backfire:
     
