Review: Intel Core i7 8700K processor

Discussion in 'Frontpage news' started by Hilbert Hagedoorn, Oct 5, 2017.

  1. Robbo9999

    Robbo9999 Ancient Guru

    Messages:
    1,858
    Likes Received:
    442
    GPU:
    RTX 3080
    I haven't specifically measured it, but when I was gradually increasing my CPU overclock & running CB15 it seemed linear at least.
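
    (If anyone wants to sanity-check that kind of "looks linear" impression, dividing the score by the clock at each step should give a roughly flat points-per-GHz figure. A minimal sketch; the clocks and scores below are placeholders, not my actual runs.)

    Code:
# Placeholder clocks and CB15 multi-thread scores (illustrative only,
# NOT measurements from this thread).
samples = [
    (4.3, 1390),  # GHz, cb points
    (4.7, 1515),
    (5.0, 1610),
]

for ghz, score in samples:
    print(f"{ghz:.1f} GHz -> {score} cb, {score / ghz:.0f} points per GHz")
# A roughly constant points-per-GHz figure at every step is what "linear"
# scaling looks like (assuming memory or thermals never become the limit).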
     
    geogan likes this.
  2. Andrew LB

    Andrew LB Maha Guru

    Messages:
    1,251
    Likes Received:
    232
    GPU:
    EVGA GTX 1080@2,025
    Too expensive, based on a subjective price/performance comparison which uses a single synthetic benchmark (Cinebench) that greatly favors AMD's high core count.

    What I don't understand is: if HH is going to take the time to put together that chart, why not make a second price/performance chart using a synthetic gaming benchmark like Firestrike? I'd wager the vast majority of potential buyers of this processor, especially those of us who frequent this website, will be using it primarily for gaming. Not doing so could easily be seen as bias toward one particular brand of processor over another, when objectivity should be a primary concern when reviewing such products.

    Just for the heck of it I went back to take a look at previous CPU reviews, like those of the i5-7600K, A10-7800, and even the Ryzen 5, and it seems HH only just started using this Cinebench price/performance comparison. I'm not one to make accusations without knowing all the facts, but it does lead one to wonder if this was added just to give AMD a "win". Much like how so many people came up with similar things back in the day of ATi vs. nVidia, where nVidia would dominate in performance yet they'd throw in a performance/watt chart, something nobody had ever used previously.
     
  3. schmidtbag

    schmidtbag Ancient Guru

    Messages:
    8,016
    Likes Received:
    4,396
    GPU:
    Asrock 7700XT
    You say that as though Intel does poorly (and they certainly don't), or that Cinebench was bribed by AMD. I do agree that a synthetic test isn't exactly the best way to measure performance, but it would just be too much work for anyone to get a performance-per-dollar value for every CPU and every test. Gotta pick something, and Cinebench seems to have a healthy mix of many modern hardware demands. Meanwhile to counter your point, HH also uses just TimeSpy for the GPU shootout, which seems to greatly favor Nvidia.
    Seeing as you seem to favor Intel and want tests that are more unbiased, why would you want that? Firestrike favors AMD (and cheaper Intel products, for that matter) even more than Cinebench for PP$. For example, the 1600X (vs the 8700K) is only 11% slower in Firestrike, but is at least 33% cheaper.

    To clarify, I agree that it would be great to have more performance-per-dollar tests, at least of varying categories. For example, maybe there could be one set of numbers for "productivity", another for "gaming", another for "synthetics", and a last one for "overall". But Firestrike doesn't bode well with PP$ for the 8700K.

    I would also like to see performance-per-watt tests.
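
    Just to make concrete what such charts boil down to: performance-per-dollar is simply a score divided by a street price, computed per category. A minimal sketch; the scores and prices below are placeholders, not figures from the review.

    Code:
# Placeholder scores and street prices (NOT figures from the review).
cpus = {
    "CPU A": {"price": 370.0, "gaming": 100.0, "productivity": 100.0},
    "CPU B": {"price": 250.0, "gaming": 90.0, "productivity": 96.0},
}

for name, data in cpus.items():
    for category in ("gaming", "productivity"):
        per_dollar = data[category] / data["price"]  # relative points per dollar
        print(f"{name} {category}: {per_dollar:.3f} points/$")
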
    He started using it because it's a useful metric that other websites are starting to use. Again, I agree that a different test may be better. The fact of the matter is, even in tests where AMD doesn't fare that well, they're still going to be better than or on par with Intel in terms of PP$, so I'm not sure what your point is. The only times Intel's PP$ is [probably] better are if you're getting low-end stuff (like an i3 or worse), running AVX benchmarks, or overclocking. If you look at Linux benchmarks, Intel also tends to win out for things like Java.

    The fact of the matter is, plenty of people aren't petty and don't care about buying "the best". Plenty of people want what is the most practical and suitable to their budget, so things like PP$ or PPW are very useful.
     
    Last edited: Oct 10, 2017
  4. kapu

    kapu Ancient Guru

    Messages:
    5,418
    Likes Received:
    802
    GPU:
    Radeon 7800XT
    i5 8400 review coming up? Looks like a monster for the buck. Currently cheaper than the 7600K, with 2 more cores....
     
    airbud7 likes this.

  5. airbud7

    airbud7 Guest

    Messages:
    7,833
    Likes Received:
    4,797
    GPU:
    pny gtx 1060 xlr8
    I found this....
    [image]


    Agree, the price of the 8400 looks good.
     
  6. Hilbert Hagedoorn

    Hilbert Hagedoorn Don Vito Corleone Staff Member

    Messages:
    48,541
    Likes Received:
    18,853
    GPU:
    AMD | NVIDIA
    Actually that chart was requested by you guys a while ago, forum readers.

    So yeah, sorry for listening to you guys. I decided to insert it so that people have a bit of insight into how performance relates to the money spent (while clearly indicating in the articles that it is a bit of a subjective measurement), much like the 720p results I've been running all weekend in my free time, which will be inserted in future reviews. Not because they are needed or would provide a more objective overview (on the contrary), but because a small group of people really wants to check them and continuously gives me total crap about it.

    So maybe you are over-analyzing things? By implication, you willingly made a subtle accusation. It is so easy to throw crap at me like, "if he does that, then he must be biased toward brand A or B"; again, soooo easy. Your claim that Cinebench works out better for AMD is just laughable, really.

    After all these years, it still amazes me how personal people make this. So ungrateful. Some of the remarks in this thread are just downright shameful, including yours.
     
    Last edited: Oct 10, 2017
    Aura89, schmidtbag, Denial and 2 others like this.
  7. yasamoka

    yasamoka Ancient Guru

    Messages:
    4,875
    Likes Received:
    259
    GPU:
    Zotac RTX 3090
    It astounds me as to what degree some (long-time) members here will go.

    Cinebench is a benchmark that has been a staple for as long as I can remember, but it recently became more popular among users reading reviews as AMD released their capable Ryzen processors and demonstrated that they, too, can pull off compute workloads like Intel.

    You're literally on Hilbert's forums, "suspecting" Hilbert of adding a benchmark to add a "win" for AMD. What sort of audacity would that require, I wonder?

    If I were you, I wouldn't comment at all with that sort of nonsense.
     
  8. schmidtbag

    schmidtbag Ancient Guru

    Messages:
    8,016
    Likes Received:
    4,396
    GPU:
    Asrock 7700XT
    The i5 8400 is a great value, for a "layman". It's the kind of CPU you'd pick for a PC you'd build for someone who wants to do PC gaming but doesn't have the knowledge, budget, or interest to OC. For such a person, it is a better choice than the Ryzen 1500X or 1600. Otherwise, I think people who are amazed by it need to be aware of why it's so much cheaper than the 7600K.
     
  9. schmidtbag

    schmidtbag Ancient Guru

    Messages:
    8,016
    Likes Received:
    4,396
    GPU:
    Asrock 7700XT
    The fact you're doing SLI could very well be why you sometimes struggle to reach 144FPS. The maximum frame rate takes a pretty big hit when doing multi-GPU (meanwhile the minimum frame rate increases).
     
  10. Aura89

    Aura89 Ancient Guru

    Messages:
    8,413
    Likes Received:
    1,483
    GPU:
    -
    No thanks. I have a 1440p 144Hz monitor, a 1080 Ti, and a Ryzen processor, and do just fine getting 144fps+.

    So I'm not sure where this "bottom line" is that you speak of.
     

  11. RavenMaster

    RavenMaster Maha Guru

    Messages:
    1,359
    Likes Received:
    253
    GPU:
    1x RTX 3080 FE
    Well, after seeing the results I guess I will not be upgrading until the next CPUs are released. My i7 6850K still holds its own against the 8700K and has more cache to boot. Try harder, Intel.
     
  12. Agent-A01

    Agent-A01 Ancient Guru

    Messages:
    11,640
    Likes Received:
    1,143
    GPU:
    4090 FE H20
    In what? Tetris?

    Lol jk

    It's just a fact: in games where high FPS is easily attainable, Ryzen falls behind.

    I like to have minimum FPS >100
     
  13. Loophole35

    Loophole35 Guest

    Messages:
    9,797
    Likes Received:
    1,161
    GPU:
    EVGA 1080ti SC
    LOL wut?!?!?!?! Sorry bro, but you got that all kinds of wrong. I can tell you have never used multi-GPU. With SLI, most of the time you get about an extra 60% of the base card's performance in max frame rate, sometimes more, sometimes less (depends on the SLI profile). A lot of the time the minimum (or 99th percentile) does not change compared to a single GPU (especially when it's a CPU-caused dip).
     
  14. S V S

    S V S Member

    Messages:
    41
    Likes Received:
    13
    GPU:
    Nvidia GTX 1080 Ti
    Hi Hilbert,

    I wanted to thank you for adding the 720p results. It is another important data point that I find valuable for judging the relative performance of a CPU at gaming. I realize these results don't magically appear from nothing and that collecting this additional data does equate to real time (a lot of it, especially up front). This is just another example of why your site is always my first stop for reviews/benchmarking.

    I hope you didn't feel I was being ungrateful in my earlier posts. I made it a point to thank you for the time you put into your reviews. I definitely don't think you exhibit any extraordinary bias. I didn't for a second think you were intentionally running the gaming benchmarks using a methodology that supported one "brand" over another. I just wanted you to know that I'd find the extra information helpful for my decision making purposes, and the current charts were not helpful (for me).

    Again, thank you!
     
  15. schmidtbag

    schmidtbag Ancient Guru

    Messages:
    8,016
    Likes Received:
    4,396
    GPU:
    Asrock 7700XT
    Actually, bro, I have done multi-GPU rigs multiple times, and I currently have a rig involving 4 GPUs. Apparently, you don't understand what I meant by maximum frame rate. If you're playing a game where the GPU is not the bottleneck, it is easily possible that a 2nd GPU can actually slow you down, because you're increasing latency by communicating with both GPUs, and you waste time having the GPUs synchronize once they're done rendering their frames. We're talking 144FPS+ here, where the CPU and GPUs are working as fast as they can and communication between all the hardware becomes the new bottleneck.

    Frankly, I shouldn't have to explain this to you if you really knew better. You've got a lot of learning to do, including how not to jump to conclusions and talk crap as if you know better. Look up Amdahl's Law before you respond.
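
    For reference, this is the formula I mean; a minimal sketch, and the serial fractions below are made-up numbers to show the shape of the curve, not measurements of any SLI setup.

    Code:
# Amdahl's Law: speedup = 1 / (s + (1 - s) / n)
# s = serial (non-parallelizable) fraction, n = number of processors/GPUs.
def amdahl_speedup(serial_fraction: float, n: int) -> float:
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / n)

# Illustrative serial fractions (e.g. synchronization/communication overhead).
for s in (0.0, 0.02, 0.10):
    print(f"s = {s:.2f}: 2 GPUs -> {amdahl_speedup(s, 2):.2f}x speedup")
# s = 0.00 gives the ideal 2.00x; even a 10% serial portion caps it at ~1.82x.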
     

  16. yasamoka

    yasamoka Ancient Guru

    Messages:
    4,875
    Likes Received:
    259
    GPU:
    Zotac RTX 3090
    You have no idea what you're talking about. What you just described is negative scaling, and no, Amdahl's Law isn't relevant here when the entire workload related to the framerate (frame rendering) is parallelizable thanks to Alternate Frame Rendering. The CPU only has to be fast enough to dish out the frames, and you don't increase latency communicating with both GPUs, since you're never sending out the same frame to be rendered by both GPUs at the same time. That would generate runt frames and defeat the purpose of any multi-GPU setup, since you'd have pretty much identical sets of frames being displayed by the GPUs, in succession, on a single display surface (no additional motion information conveyed).
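
    To spell out the AFR scheme in the simplest possible terms (a toy sketch of the idea, not how any actual driver is implemented): frames are dealt out round-robin, so no single frame is ever split across both GPUs.

    Code:
# Toy model of Alternate Frame Rendering: each frame is handed to exactly
# one GPU, round-robin, so no frame is ever split across both GPUs.
NUM_GPUS = 2

def gpu_for_frame(frame_index: int) -> int:
    return frame_index % NUM_GPUS

for frame in range(6):
    print(f"frame {frame} -> GPU {gpu_for_frame(frame)}")
# As long as the CPU can issue frames fast enough, each GPU effectively gets
# NUM_GPUS frame intervals of time to finish its own frame.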
     
    Loophole35 likes this.
  17. schmidtbag

    schmidtbag Ancient Guru

    Messages:
    8,016
    Likes Received:
    4,396
    GPU:
    Asrock 7700XT
    You do realize more than just AFR exists, right? Considering just AFR, then yes, you're pretty much right about what you said. But there is still an issue regarding communication. Do you think the rendered image from the 2nd GPU just magically teleports to the display? If one GPU renders quicker than the other, do you really think no synchronization is involved to keep you from getting stuttering issues? In a situation where a single GPU would not bottleneck, the 2nd GPU will be wasting time; an increase in latency. It's probably not much, but enough to see a handful of FPS lost.

    Meanwhile, I was primarily focusing on SFR, where Amdahl's Law does apply. And in that situation, I'm still not wrong. If a single GPU won't bottleneck a game, 2 GPUs will most likely slow down the game, because both of them have to synchronize with the parent process, or at least the parent GPU.

    I'm aware the CPU has to be fast enough to dish out frames. I never suggested otherwise. But there is more to how a game works than how quickly a CPU and GPU can process data. There are situations where the CPU and one of the GPUs aren't under full load, but the 2nd GPU is. Considering tight spaces and thermal throttling, this isn't unheard of.
     
    Last edited: Oct 11, 2017
  18. yasamoka

    yasamoka Ancient Guru

    Messages:
    4,875
    Likes Received:
    259
    GPU:
    Zotac RTX 3090
    Really. How do you think the rendered image from the second GPU actually moves, then? On Nvidia's SLI, it's done through the SLI bridge: literally zero impact on anything external to the GPUs. On AMD CrossFire, it's done over PCI Express. Pretty much zero impact on anything else as well, unless you consider load on the PCI Express lanes to constitute CPU load...

    You're digging yourself deeper. Name the games from the past decade that rely on SFR other than, say, Civilization: Beyond Earth. Also, no one in their right mind would run multi-GPU when a single GPU is not bottlenecking (read: CPU bottleneck) as that would, indeed, leave the framerate at or slightly worse than with a single GPU (due to the minimal overhead that no one speaks of in a properly functioning, non-bottlenecked multi-GPU setup).

    No one uses SFR anymore. What sort of split would you need in order to balance GPU workloads across each frame? Each GPU also has to do the geometry calculations, which, while not strictly an Amdahl's Law matter, has a similar effect, since each GPU has to perform the same calculations locally.

    The reason Civilization: Beyond Earth used SFR was that the developers preferred the lower latency of SFR over the framerate advantages of AFR; multi-GPU setups were already powerful enough to accept limited scaling (yet still high framerates) in pursuit of a lower-latency experience. One can agree or disagree with this approach (AFR exploits frame buffers already available for a game, rather than necessarily creating new ones).

    No functional multi-GPU system using identical cards at identical clock speeds would have imbalanced load across the GPUs, as that would cause highly variable frametimes and thus microstutter, even if paced well. At best, pacing would adjust based on the difference in GPU load, but then we'd be back to minimal or no scaling.
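
    To illustrate with made-up numbers what imbalanced load would do to frametimes (a toy sketch, not data from any real setup):

    Code:
# Toy frametime sequences in milliseconds (made-up numbers).
matched = [11.0, 11.0, 11.0, 11.0, 11.0, 11.0]  # identical GPUs, even load
mismatched = [8.0, 14.0, 8.0, 14.0, 8.0, 14.0]  # imbalanced load

for label, times in (("matched", matched), ("mismatched", mismatched)):
    avg_fps = 1000.0 / (sum(times) / len(times))
    swing = max(times) - min(times)
    print(f"{label}: ~{avg_fps:.0f} fps average, {swing:.1f} ms frame-to-frame swing")
# Both land at roughly the same average fps, but the alternating 8/14 ms
# pattern in the second case is what shows up as microstutter unless the
# frame pacing algorithm smooths it out (at the cost of scaling).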

    Under AFR, the dominant mode in which 99.99% of multi-GPU games run, there is a reason Nvidia strictly requires identical GPU models, while AMD is slightly more flexible, requiring only cards "of the same family" (at which point you'd see GPU load distributed in a way that underutilizes the stronger card, as if it were the weaker card).
     
    Last edited: Oct 11, 2017
  19. schmidtbag

    schmidtbag Ancient Guru

    Messages:
    8,016
    Likes Received:
    4,396
    GPU:
    Asrock 7700XT
    No, it isn't literally zero. As I mentioned (in my edited post), what about synchronizing with the parent process, which has to be informed that the frame is done rendering? It's probably not a lot of data, but in the grand scheme it builds up latency. But despite all of that, do you really think the data from the 2nd GPU just instantaneously shows up ready to display on the primary GPU? They need to make sure they're operating in the correct order, and the primary GPU needs to be ready to display that image when it completes its own. And yes, this applies to AFR too. Synchronization to some degree does happen, and when it does, it builds up latency.

    Past decade? There are dozens of games that support SFR, or at least where you can use it. I don't know of any game that relies on it (I don't think even C:BE relies on it), but no game relies on AFR either. Back when I did multi-GPU gaming, SFR seemed to be the default option, but I usually opted for AFR. I haven't done multi-GPU gaming in a while, but from what I've heard, BF4 explicitly supports SFR. SFR has some major performance advantages, which enthusiasts care about.
    Where did I ever imply someone would intentionally do that? Can you not be antagonistic just for the sake of it? I don't really understand what you're gaining by behaving this way.

    We don't know whether or not the cards were identical (and besides, there is evidence that "identical" cards from the same manufacturer don't always perform identically). And have you considered thermal throttling or DPM? I don't know if both GPUs can dynamically match their clocks to the slower part, but doing so would require them to constantly communicate with each other...

    Nvidia required identical models before AFR was popularized. In fact I'd say they made this requirement specifically because of how much more picky SFR is. AFR actually allows for some flexibility, to the point that DX12 and Vulkan theoretically can allow mixed-brand multi-GPU arrangements.


    Anyway, you and Loophole really need to take a step back sometimes. I made a simple proposition of what might have been a problem, and yet I get laughed at and accused of knowing nothing. Is that not excessive? Then you come along and nitpick everything I say. As you're probably aware, multi-GPU setups don't always run the way you expect them to, so is it really that hard to believe that the SLI config is the source of the problem (even for reasons beyond what I described)? Do you really think you're helping your point by being so hostile (not just you specifically)?
     
  20. yasamoka

    yasamoka Ancient Guru

    Messages:
    4,875
    Likes Received:
    259
    GPU:
    Zotac RTX 3090
    Make sure what is operating in the correct order?...

    What are you trying to say here? The mechanism by which a single GPU, on its own, presents a frame to the monitor on each refresh interval is not even relevant to the discussion and already happens regardless of whether you're running 1, 2, 3, or 4 GPUs ...

    Please explain to me what sort of synchronization you speak of that builds up latency. The sort where a secondary GPU pipes its frame to the primary GPU? Take 3840x2160 at 60 Hz. The secondary GPU is rendering 30 frames per second, practically. That's 3840 x 2160 x 30 = ~249 MP/s. You know how much that is, without compression? ~747 MB/s. That is nothing. We're talking about GPUs shifting 512 GB/s back and forth from VRAM, and PCI Express slots running at 16 GB/s, and you tell me about the delays of a synchronization process handled by dedicated on-board hardware and requiring less bandwidth than a single PCI-E lane? Or frame pacing, the mechanism that actually ensures frames are delivered at the correct pace? Tell me more.
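
    Spelling that arithmetic out (a quick sketch; the 3 bytes per pixel assumes uncompressed 24-bit colour, which is what the ~747 MB/s figure corresponds to):

    Code:
# Secondary GPU's share of 3840x2160 @ 60 Hz under AFR: 30 frames per second.
width, height, fps_per_gpu = 3840, 2160, 30
bytes_per_pixel = 3  # uncompressed 24-bit colour (assumption)

pixels_per_second = width * height * fps_per_gpu
print(f"{pixels_per_second / 1e6:.0f} MP/s")                    # ~249 MP/s
print(f"{pixels_per_second * bytes_per_pixel / 1e6:.0f} MB/s")  # ~746 MB/s
# Tiny next to ~16 GB/s for a PCIe 3.0 x16 slot or hundreds of GB/s of VRAM
# bandwidth, which is the point being made above.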

    Name a single major game that works well over SFR and is even relevant nowadays. No game relies on AFR either? Pretty much ALL modern games that support multi-GPU rely on AFR as the one and ONLY method of multi-GPU support. DX12 brings new modes that nobody except Oxide has implemented yet with Ashes of the Singularity.

    No one is behaving in any way. It's all in your head. You chose to pick a subject you know nothing about and now you're throwing around terms and concepts you have a very weak understanding of. Why would you even mention being bottlenecked or not by a single GPU in the context of a functional multi-GPU setup?

    They don't have to perform "identically", because each frame requires a different frametime to fully render... the point is having almost "identical" power to render each frame, so that the frame pacing algorithm does not have to compensate for unnatural discrepancies in frametimes that wouldn't have been there on a single GPU, where each frame is rendered in a consistent fashion relative to the next...

    Please tell me about the sort of complex communication required by one card to tell the other to change its clock speed or utilization percentage from value x to value y. Please.

    I've Crossfired a 7970 and a 7950 and I've seen what happens. The stronger card sees lower usage in order to balance with the weaker card. There's no magic, and there's nothing even alarming going on. There's nothing perceptible about the mechanism either; it is all transparent.

    That's because when SFR was popular, it was also common to split the screen in half, so you ended up with almost the same graphical workload for the top half as for the bottom half. That's no longer the case with modern games.

    DX12 and Vulkan multi-adapter modes do not employ AFR...

    https://developer.nvidia.com/explicit-multi-gpu-programming-directx-12

    Here are Hilbert's own benchmarks for AotS DX12 multi-adapter:

    [image: Ashes of the Singularity DX12 multi-adapter benchmarks]

    This is without AFR too. Not looking good at all.

    You need to stop acting like you know what you're talking about.
     
    Last edited: Oct 11, 2017
