
Review: Intel Core i9 7900X processor

Discussion in 'Frontpage news' started by Hilbert Hagedoorn, Jun 19, 2017.

  1. TieSKey

    TieSKey Member Guru

    Messages:
    113
    Likes Received:
    24
    GPU:
    Gtx870m 3Gb
    Such changes should be done at the compiler and OS level, not by every single app. GPU instruction sets are different from and more limited than CPUs'; taking your everyday application and rewriting it for the GPU might not even be possible, or might yield no benefit at all.

    Parallel is the way to go anyway; our brains work at the millisecond scale yet achieve more complex tasks than our silicon working at the nanosecond scale.
    (And no, it's not a matter of software, since we "start" with an empty HDD and have to learn almost everything :p)

    I'd go one step further on your tangent ideas. The future is "meta cores" (or whatever name you want), where we have a set of physical cores whose logic can be "loaded/programmed" in real time.
    So say you need 1 full ARM core, 2 FPUs, and some "tensor" cores: no problem, the OS will convert 5 physical cores by loading the required instruction sets, modifying their circuit paths in real time :p
    I wouldn't expect anything like this sooner than 50 years (unless AMD dies and we live in an Intel monopolistic dystopia, in which case it will never happen :p)


    That aside, comparing CPUs by game FPS is super dumb from the start. We all know this, yet we keep falling into the same hole. XD
    "if you are not GPU bound, your build is broken"
     
    Last edited: Jun 19, 2017
  2. schmidtbag

    schmidtbag Ancient Guru

    Messages:
    4,424
    Likes Received:
    1,353
    GPU:
    HIS R9 290
    I strongly agree. However, you don't need high clock speeds for multitasking. This is why there are x86 server chips out there with 20+ cores, or specialized CPUs with over 100 cores. This is also why GPUs are found in servers. There is great need for multitasking and parallelization, but you have to keep in mind most people buying an i9 or TR, despite their claims, do not need one. I have absolutely no doubt there are some people who can and will take full advantage of these CPUs, but a lot are just doing it "becuz i haz money 'n i want moar corez!"


    Should be, sure. But practically, that will likely never occur, at least not on x86. It's debatably doable on platforms like ARM.

    Yes and no. CPUs were never built to be parallelized; GPUs were. Even CPUs with multiple cores and threads aren't meant to do things in parallel, they're meant to multitask. HyperThreading and the Windows scheduler are proof enough that CPUs are bad at parallelization but make for EXCELLENT multitasking.

    That would be cool and is technically doable (for example FPGAs) but in order to have a single processor dynamically take the role of a CPU, northbridge, GPU, and maybe memory, the die would have to be even larger than socket TR4. The yields for such a product would be terrible due to how large it needs to be.
    But, cool idea nonetheless.

    I totally agree. As long as I get my 60FPS, I don't care how much better I could go. The problem is, people want the best numbers regardless of how good the hardware is at other tasks. Again, Amdahl's law - if we really want all these cores, we're going to have to sacrifice a few FPS. That's a big turnoff to many people, despite how superficial it is.
     
    Last edited: Jun 19, 2017
  3. Loophole35

    Loophole35 Ancient Guru

    Messages:
    9,350
    Likes Received:
    825
    GPU:
    EVGA 1080ti SC
    This is why I've seriously started looking at the 7800X and 7820X, as I know I want more than 4 cores, but more than 8 would just be a waste for me. I like Ryzen, but I fear the 1080 bottleneck may start to show up at 1440p. TR may fix that with the increased memory bandwidth, but again, I don't need more than 8c/16t. Hell, the 7800X is going for $390, which is around the same price as the 1700X, though yes, the motherboard will cost more. It does, however, give me quad-channel DDR4 and 10 more PCIe lanes from the CPU.

    I honestly just don't know what to do. I'm more conflicted now than I was before the Skylake-X launch.

    I'm still set on doing a 1600+B350 combo for a 4K gaming "console", though. Just waiting on good mini-ITX motherboards.
     
    Last edited: Jun 20, 2017
  4. schmidtbag

    schmidtbag Ancient Guru

    Messages:
    4,424
    Likes Received:
    1,353
    GPU:
    HIS R9 290
    If you're going to get X299, really anything below the 7900X is a bad choice, and even the 7900X itself is a bit difficult to justify compared to Intel's own alternatives.

    You do seem to be blowing Ryzen's bottleneck way out of proportion. On a fully up-to-date system, I am not aware of a single game Ryzen struggles to hit 60FPS in at stock clocks (where an Intel equivalent doesn't also struggle). When overclocked, it seems to regularly manage a minimum of 90FPS (again, where an Intel equivalent doesn't also struggle). The performance gap shrinks as you increase resolution, since the GPU increasingly becomes the bottleneck.

    If gaming is all you care about, sure, Ryzen isn't the best choice, but it's certainly very capable; I'm personally using it in a gaming rig. On the other hand, quad-channel memory is also pretty much useless for gaming. If you don't intend to use multi-GPU, neither choice makes any difference.
     
    Last edited: Jun 20, 2017

  5. Loophole35

    Loophole35 Ancient Guru

    Messages:
    9,350
    Likes Received:
    825
    GPU:
    EVGA 1080ti SC
    Thing is, I'm not against going multi-GPU, and if I did, it would be to get a high refresh rate on, say, a 144Hz WQHD screen. So, as you say, not the best case for Ryzen, which sucks, as I would love to support AMD and really do like this new CPU.

    I'm also really waiting for Vega to see how Ryzen performs with it.
     
  6. PrMinisterGR

    PrMinisterGR Ancient Guru

    Messages:
    7,004
    Likes Received:
    137
    GPU:
    Sapphire 7970 Quadrobake
    I'm curious to see how the 8/16 TR performs.
     
  7. schmidtbag

    schmidtbag Ancient Guru

    Messages:
    4,424
    Likes Received:
    1,353
    GPU:
    HIS R9 290
    I guess that also depends on which GPUs you pick. TechPowerUp showed that many high-end GPUs suffer little to no performance loss on PCIe 3.0 at x8 lanes.

    If you still feel uncomfortable with the idea of lost bandwidth from X370, there is a possibility the X390 chipset will be available on AM4 (doesn't seem like X399 will be available, though). The X390 is, in my opinion, what should exist on full ATX boards. The X370 seems ideal for high-end micro ATX, and to me the B350 doesn't belong on full ATX.

    Regardless, if 144Hz at WQHD is what you want, AMD definitely isn't for you. Ryzen is perfectly fine for gaming, but not with refresh rates that high.
     
  8. sverek

    sverek Ancient Guru

    Messages:
    5,301
    Likes Received:
    2,158
    GPU:
    NOVIDIA -0.5GB
    Delidding a $1000 CPU :stewpid:
     
  9. Exascale

    Exascale Banned

    Messages:
    397
    Likes Received:
    8
    GPU:
    Gigabyte G1 1070
    Realtime-reconfigurable FPGAs have existed for years, as have x86/FPGA hybrid chips.
     
  10. Fox2232

    Fox2232 Ancient Guru

    Messages:
    9,738
    Likes Received:
    2,199
    GPU:
    5700XT+AW@240Hz
    Yep, Intel will not lose the entire chip. But at the theoretical defect rate at which Intel no longer harvests any full chips, AMD can still make something like 50% of full ones.

    @All: BTW, what's this buzz about AVX being disabled/enabled? Years ago when it arrived, people were all over it and performance mattered; now people should disable it? I see no point in doing so. And if it eats too much energy while being used, that's a result of Intel's engineering.

    Should we ban AVX benchmarks? Or is it actually one more reason not to get Intel's solution?
     

  11. PrMinisterGR

    PrMinisterGR Ancient Guru

    Messages:
    7,004
    Likes Received:
    137
    GPU:
    Sapphire 7970 Quadrobake
    Funny thing is that with AVX disabled Intel practically loses any IPC advantage they might have had.
     
  12. MadGizmo

    MadGizmo Maha Guru

    Messages:
    1,396
    Likes Received:
    0
    GPU:
    MSI R9 290X 8GB 2560*1440
    If programmers use AVX-512, they need to optimize the software in such a way that AVX operations are fed to the processor in bulk; there's no point using AVX for a single instruction. That bulk usage alone creates a lot of heat. In the past, however, this wasn't much of a problem; it's this generation that cannot take the heat.

    BTW: there is no single AVX-512 standard. There are 12 variations, and in the name of product differentiation, no processor supports all of them. Some variations require permission and/or firmware activation from Intel; I doubt they'll activate it for free.
     
  13. Exascale

    Exascale Banned

    Messages:
    397
    Likes Received:
    8
    GPU:
    Gigabyte G1 1070
    Seriously. Intel is making themselves look like rip-off artists by not using solder on these supposedly high-end chips. In other news, I'm literally going to buy AMD stock.
     
  14. nevcairiel

    nevcairiel Master Guru

    Messages:
    597
    Likes Received:
    187
    GPU:
    MSI 1080 Gaming X
    This was basically true for every SIMD instruction set before it. You should optimize an entire algorithm and process the whole thing in SSE/AVX/AVX2/... instead of throwing random single instructions into places. Loading data from the general-purpose registers into SIMD registers and back for single instructions is just inefficient and likely defeats any performance advantage.

    The bigger the SIMD registers get, the worse this problem becomes, and you need to be careful about how you use them. But any developer can easily tell from benchmarks whether it's worth doing.

    Eh, that's really nonsense.
    There are basically two sets of AVX-512 instructions: one for Knights Landing and one for "normal" CPUs (although with a bit of overlap between them). The Knights Landing variations are rather specialized, and we won't miss them on the desktop.

    The main thing they did here is make AVX512 extensible, as well as use the same base layer for Knights Landing and ordinary CPUs.

    SKL-X and Xeon "Purley" (i.e. Skylake-SP) support all of those in the normal-CPU set, and Cannon Lake will as well (and will extend it with a few more on top).
     
    Last edited: Jun 20, 2017
  15. kapu

    kapu Ancient Guru

    Messages:
    3,743
    Likes Received:
    6
    GPU:
    MSI Geforce 1060 6gb
    How is the performance/$ ratio vs the Ryzen 1800? Because in the graphs it doesn't look like a big leap.
     

  16. Silva

    Silva Master Guru

    Messages:
    938
    Likes Received:
    312
    GPU:
    Asus RX560 4G
    The problem ATM is the transition from frequency-bound software to multi-threaded software. People are stuck in the old GHz war and think more speed is better. We've hit a wall on clock speed, and in the short run we need to shift the focus to more cores to see any meaningful evolution. Yes, more cores will soon hit a wall too, and then AMD and Intel will need to get creative again (there's only so much you can do with the limited space; 14nm is already a big achievement, with features only dozens of atoms wide).

    229W without overclocking; after OC it's almost 300W.
    The biggest problem isn't even the power consumption but the temperatures; this is not your typical household chip, but more of a server and workstation part.

    If you can afford to, wait a few months for the Intel platform bugs to be ironed out and more reviews to come out (of the other processors).

    It's probably the same as the R7, but with more PCIe lanes and quad-channel memory.

    Some people like to live on the edge!
    I would never risk such thing myself.

    Anyone who did in January 2016 now has 5 times the money they invested.
    I wish I could see the future.
     
  17. Loophole35

    Loophole35 Ancient Guru

    Messages:
    9,350
    Likes Received:
    825
    GPU:
    EVGA 1080ti SC
    There is no 8c/16t TR. Are you making up nonexistent CPUs on purpose?

    The post you quoted clearly states, "I don't need more than 8c/16t."

    @Silva

    I don't think that will really change how conflicted I am. It's more that I want to support AMD, as I like the work they did with Ryzen, coupled with not wanting to give Intel any money for what is essentially a Sandy Bridge-E refresh.
     
    Last edited: Jun 20, 2017
  18. MadGizmo

    MadGizmo Maha Guru

    Messages:
    1,396
    Likes Received:
    0
    GPU:
    MSI R9 290X 8GB 2560*1440
    If I look at those 3 then I see this:

    Knight's Landing: AVX512CD_512, AVX512ER_512, AVX512ER_SCALAR, AVX512F_128N, AVX512F_512, AVX512F_KOP, AVX512F_SCALAR, AVX512PF_512

    Cannon Lake: AVX512BW_128, AVX512BW_128N, AVX512BW_256, AVX512BW_512, AVX512BW_KOP, AVX512CD_128, AVX512CD_256, AVX512CD_512, AVX512DQ_128, AVX512DQ_128N, AVX512DQ_256, AVX512DQ_512, AVX512DQ_KOP, AVX512DQ_SCALAR, AVX512F_128, AVX512F_128N, AVX512F_256, AVX512F_512, AVX512F_KOP, AVX512F_SCALAR, AVX512_IFMA_128, AVX512_IFMA_256, AVX512_IFMA_512, AVX512_VBMI_128, AVX512_VBMI_256, AVX512_VBMI_512

    Skylake Server: AVX512BW_128, AVX512BW_128N, AVX512BW_256, AVX512BW_512, AVX512BW_KOP, AVX512CD_128, AVX512CD_256, AVX512CD_512, AVX512DQ_128, AVX512DQ_128N, AVX512DQ_256, AVX512DQ_512, AVX512DQ_KOP, AVX512DQ_SCALAR, AVX512F_128, AVX512F_128N, AVX512F_256, AVX512F_512, AVX512F_KOP, AVX512F_SCALAR
     
    Last edited: Jun 20, 2017
  19. Silva

    Silva Master Guru

    Messages:
    938
    Likes Received:
    312
    GPU:
    Asus RX560 4G
    1. Analyse what it is that you need (workload-wise) and your budget for it.
    2. See what options AMD/Intel have to offer.
    3. Read reviews from at least 2 different sites for all of those options.
    4. Choose the best price/performance, without limiting yourself to your first choice (example: if Ryzen doesn't have enough PCIe lanes for you, go TR or Intel).

    Make a rational decision based on your research.
     
  20. Denial

    Denial Ancient Guru

    Messages:
    12,342
    Likes Received:
    1,529
    GPU:
    EVGA 1080Ti
    I'm not saying disable it. I'm saying the reason there is such a power discrepancy in applications like Prime95 is that Intel's AVX implementation is twice the width of AMD's, and AVX just happens to be one of the most power-intensive units on the CPU. In a test like Prime95 you're comparing only temperatures and not any kind of benchmark, so half the picture is missing: yeah, it's using more power, but it would also be iterating through that prime search significantly faster.

    AVX isn't used in much, and Intel's implementation is overkill for it. The 7820X has half the number of FMA units per core, and its IPC per core is roughly the same.
     
