Review: Intel Core i7 8700K processor

Discussion in 'Frontpage news' started by Hilbert Hagedoorn, Oct 5, 2017.

  1. schmidtbag

    schmidtbag Ancient Guru

    Messages:
    5,393
    Likes Received:
    1,946
    GPU:
    HIS R9 290
    Great, now you're ignoring points made by Fox2232 too. I can't keep spoonfeeding you every time...
    The goalpost the entire time was GPUs that aren't bottlenecked. Focus *snap* *snap*.
    Yes, it is correct. But just because something is intended to function a certain way doesn't mean that's how it works 100% of the time. And before you hyperbolize yet again, I'm not saying it happens 0% of the time either.
    Remember how many times I told you that my claim wasn't specific to latency? Remember how I kept saying the latency loss was in regard to non-bottlenecked GPUs? Yeah... you're still moving the goalpost. Better luck next time.
    It's not that I failed to understand; it's that it isn't relevant, and not something I said was indisputably happening; I merely suggested it could be. My only point is that communication adds latency. More GPUs mean more communication; therefore, more GPUs can add latency (when not bottlenecked). It's simple.
    Also, you still have yet to explain how the work processed by the 2nd GPU ends up on the display connected to the primary GPU. All caps and bold text don't make that situation go away lol.
    Doesn't change my point.
    Let's look at this situation from a 3rd-party perspective:
    Me: 1+1=3
    You: No, 1+1=4
    Me: But <reason>
    You: But <reason>
    Me: <counterpoint>
    You: You're wrong and know nothing
    Whether we're both right or both wrong, your tirades against my intelligence contribute nothing here. Say something meaningful.
     
  2. yasamoka

    yasamoka Ancient Guru

    Messages:
    4,836
    Likes Received:
    235
    GPU:
    EVGA GTX 1080Ti SC
    You're wasting my time.
     
    typhon6657 likes this.
  3. Fox2232

    Fox2232 Ancient Guru

    Messages:
    10,864
    Likes Received:
    2,782
    GPU:
    5700XT+AW@240Hz
    As I wrote, giving workloads to the GPUs in a fixed order introduces situations where the in-order GPU is not ready to accept a new workload while an out-of-order GPU is done and waiting.
    As demonstrated, doubling the number of GPUs while keeping the same detail settings does "double" the fps (the fluidity of the image sequence), but the time to render each frame remains the same, therefore latency stays the same.
    In most cases, doubling the number of GPUs leads the user to increase details, resolution, and so on, so fps does not double. In such situations the game feels smoother due to higher fps, but latency is higher due to the longer rendering time per frame.
    (That's if you call "latency" the same thing I do. For me it is the time between a peripheral input from the human and the moment the image affected by that input is shown on screen.)
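
    To put numbers on that claim, here is a minimal illustrative sketch (not from the thread; the 30 fps figure is made up) of the AFR timing described above: with N GPUs each taking f seconds per frame, frame starts are staggered by f/N, so fps scales with N while the render latency of each individual frame stays at f.

    Code:
    # AFR timing sketch: fps scales with GPU count, per-frame latency does not.
    def afr_frames(f, n_gpus, n_frames=8):
        # Frame i starts f/n_gpus after the previous one and takes f to render.
        return [(i, i * f / n_gpus, i * f / n_gpus + f) for i in range(n_frames)]

    for n in (1, 2):
        f = 1 / 30  # 33.3 ms render time per frame on each GPU
        frames = afr_frames(f, n)
        fps = n / f  # inverse of the interval between consecutive frame starts
        latency_ms = (frames[0][2] - frames[0][1]) * 1000  # still f, whatever n is
        print(f"{n} GPU(s): {fps:.0f} fps, per-frame latency {latency_ms:.1f} ms")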
     
  4. yasamoka

    yasamoka Ancient Guru

    Messages:
    4,836
    Likes Received:
    235
    GPU:
    EVGA GTX 1080Ti SC
    No, the point was that the workload was given to the GPUs all at the same time. As the GPUs rendered in parallel, some got done before others. It's very easy to display them in order - just have the program wait for frame 1 to finish, then frame 2, then frame 3, then frame 4 (then back to 1). That way, regardless of whether frame 2, 3, or 4 finishes before frame 1, frame 1 will be displayed first.

    It's sort of the same behavior when you have spawned threads from a parent thread and you wait for all child threads to finish in order to proceed, which is called a join. In Python:

    Code:
    import threading

    # join() is a method on each Thread object; calling it blocks the
    # parent until that child thread has finished.
    for child_thread in child_threads:
        child_thread.join()
    
    This would terminate when all child threads have finished - regardless of the order in which they finish.
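
    (For reference, a self-contained runnable version of that pattern; the render_frame worker and its timings are made up purely so the threads finish out of order:)

    Code:
    import threading
    import time

    def render_frame(i):
        time.sleep(0.05 * (4 - i))  # later frames happen to finish sooner
        print(f"frame {i} rendered")

    child_threads = [threading.Thread(target=render_frame, args=(i,)) for i in range(1, 5)]
    for t in child_threads:
        t.start()

    for t in child_threads:
        t.join()  # blocks until this thread is done, whatever the finish order
    print("all frames done")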

    That is true. Frametime latency does not decrease. However, total chain latency does not depend on frametime alone.

    Take this example:


    Consider the time right when frame n starts rendering to be t0, the time at which you provide user input.

    Let's consider all chain latency factors other than frametime to be a constant "c". Frametime latency is "f".

    Best case scenario: your input is sampled shortly before frame n starts rendering (t0-), frame n is rendered then displayed. Total chain latency: c + f

    Worst case scenario: your input is sampled shortly after frame n starts rendering (t0+), frame n misses the input, but frame n+1 picks it up. With two GPUs in AFR, frame n+1 starts about f/2 after frame n and still takes f to render, so it is displayed roughly 1.5f after t0. Total chain latency is approximately c + 1.5f.

    So 1-1.5 frames of latency due to frametimes.

    Single GPU best-case scenario: same as multi-GPU --> c + f
    Single GPU worst-case scenario: input will have to wait for the next frame --> c + 2f

    So 1-2 frames of latency with single GPU vs. 1-1.5 frames of latency with dual GPU.

    Best case latency is not improved, but worst case latency is.
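
    Plugging illustrative numbers into that model (c = 20 ms and f = 16.7 ms are made up, just to show the spread):

    Code:
    c = 20.0  # fixed chain latency in ms (input sampling, scanout, ...)
    f = 16.7  # frametime in ms (~60 fps)

    for name, latency in [
        ("dual GPU best    (c + f)",    c + f),
        ("dual GPU worst   (c + 1.5f)", c + 1.5 * f),
        ("single GPU best  (c + f)",    c + f),
        ("single GPU worst (c + 2f)",   c + 2.0 * f),
    ]:
        print(f"{name}: {latency:.1f} ms")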
     

  5. Fox2232

    Fox2232 Ancient Guru

    Messages:
    10,864
    Likes Received:
    2,782
    GPU:
    5700XT+AW@240Hz
    @yasamoka: Yes, your code waits on the GPUs' work in order; it does not disconnect the buffers from the GPUs. Then it is OK.

    As far as latency goes, input is always practically current. The OS keeps a live value and updates it in real time the moment a change is delivered from I/O. Today's mice have a 1000 Hz polling rate, i.e. up to 1 ms (rather small compared to rendering times). And when the engine asks for the input value, it gets the newest value available.
    Then it purely depends on the engine's internal logic.

    The iteration rate of the engine's logic, whether or not the logic is synced to drawing, predicting the approximate time of the next free buffer...
    But either way, the input part of the latency is rather small compared to the latency from the CPU (engine) calculating data for the GPU, the GPU rendering time, and screen lag.
     
  6. Loophole35

    Loophole35 Ancient Guru

    Messages:
    9,738
    Likes Received:
    1,089
    GPU:
    EVGA 1080ti SC
    Yay, another biased piece by AdoredTV. Will not click.
     
    airbud7 likes this.
  7. Fox2232

    Fox2232 Ancient Guru

    Messages:
    10,864
    Likes Received:
    2,782
    GPU:
    5700XT+AW@240Hz
    Actually, he just pointed out a few things we discussed here in the 'leak' thread and in this thread, summarizing articles from many large sites like G3D.
    His theories are similar to mine. They remain theories until those chips hit the market in volume. It is not biased (weird).
    But it does have little content considering how long it is.
     
  8. Jagman

    Jagman Ancient Guru

    Messages:
    2,240
    Likes Received:
    310
    GPU:
    5700XT Pulse
    I also watched the AdoredTV Coffee Lake vid yesterday and I didn't think it was that biased either.
     
  9. Robbo9999

    Robbo9999 Maha Guru

    Messages:
    1,498
    Likes Received:
    262
    GPU:
    GTX1070 @2050Mhz
    I HAD been reading this thread, but it's become too verbose to keep track of! Ha!
     
  10. D3M1G0D

    D3M1G0D Ancient Guru

    Messages:
    2,125
    Likes Received:
    1,367
    GPU:
    2 x GeForce 1080 Ti
    His analysis was spot-on (it usually is), and I found it interesting how he called out specific reviewers who had inflated scores. I actually watched Jay's follow-up video before this, where he talked about MCE and how it's enabled by default (but shouldn't be), so I felt it was a bit unfair to him; at the same time, he should have known it was on when doing his tests (the same goes for Linus and the other channels that had MCE on). As a professional hardware reviewer, he should have been checking temps and clock speeds during those tests, but it looks like he didn't - that's his fault and no one else's.

    Also found it interesting how the sites that weren't supplied chips by Intel, including Guru3D, had lower scores. It's likely that the chips with higher scores were pre-binned and actual consumer chips won't perform as well. The good thing is that the Guru3D review is more accurate, so it's a win for us :D
     

  11. Loophole35

    Loophole35 Ancient Guru

    Messages:
    9,738
    Likes Received:
    1,089
    GPU:
    EVGA 1080ti SC
    I didn't watch the video, as stated in my post. I've avoided his videos since his Vega coverage and early Ryzen coverage were very biased.

    You do realize that the 8700K Hilbert used is an engineering sample provided by a motherboard manufacturer, likely sourced from Intel directly. In other words, not a retail sample.
     
  12. Fox2232

    Fox2232 Ancient Guru

    Messages:
    10,864
    Likes Received:
    2,782
    GPU:
    5700XT+AW@240Hz
    @Loophole35: As stated, I was surprised that it was not biased. The guy is slowly becoming a commentator summarizing tech sites' findings, adding less and less of his own opinion. But maybe it only looks that way because this time he simply matches the educated guesses of many.
    As far as Hilbert's sample goes, it quite likely has exactly the same IPC as the final chip. Maybe slightly different leakage, power limits, or multiplier rules.
    There are many variables with this launch, so I want to see how the chips in stores will perform. (Not that I intend to get one, as I am waiting for 2nd-gen Ryzen.)
     
  13. tfarber77

    tfarber77 Member

    Messages:
    10
    Likes Received:
    3
    GPU:
    Radeon RX 580 8GB
    My bro works for Intel. I'll share this with him. Maybe he can help you out.
     
  14. airbud7

    airbud7 Ancient Guru

    Messages:
    7,835
    Likes Received:
    4,739
    GPU:
    pny gtx 1060 xlr8
    Can he even get an 8700K?... Do they even have any?... Ask him how many they produced for this so-called launch.
     
  15. -Tj-

    -Tj- Ancient Guru

    Messages:
    16,981
    Likes Received:
    1,843
    GPU:
    Zotac GTX980Ti OC
    Yosef019 has one and he's from Israel
     

  16. Loophole35

    Loophole35 Ancient Guru

    Messages:
    9,738
    Likes Received:
    1,089
    GPU:
    EVGA 1080ti SC
    I want to wait and see the retail versions as well. I mean, it's not like my 2600K is a bad CPU even now.
     
  17. sverek

    sverek Ancient Guru

    Messages:
    6,099
    Likes Received:
    2,950
    GPU:
    NOVIDIA -0.5GB
    I am not sure that "working for Intel" gives any leverage, unless he is really in a manager position and can freely acquire any sample.
    99+% of Intel workers probably don't have access to their products. They're just consumers, like us.
     
  18. Hilbert Hagedoorn

    Hilbert Hagedoorn Don Vito Corleone Staff Member

    Messages:
    39,209
    Likes Received:
    7,848
    GPU:
    AMD | NVIDIA
    No it isn't. The guy is a conspiracy theorist, gaining popularity and views by doing just that: feeding off a bit of confusion, throwing in many arguments that add to the confusion, and all of a sudden narrowing it down to an answer that sounds plausible. He's doing it in an intelligent way, I'll give him that. The scores aren't because of the proc sample; trust me, all procs are the same aside from ASIC quality vs. tweaking. Nope, it's simply because of the motherboard firmware. We had access to Coffee Lake three weeks prior to the launch, and over the course of two weeks our loaner sample saw multiple new mobo BIOSes released, gradually increasing performance on most motherboards. If you check the reference review and compare it to the later MSI review, for example, what do you notice?

    Check here:
    http://www.guru3d.com/articles_pages/msi_z370_godlike_gaming_review,10.html

    Same processor. It's the mobo manufacturers who tweak performance by fiddling with the Turbo bins. ASUS, for example, has a feature that 'optimizes' performance and enables it by default. Great stuff for the novice user, but not representative of stock reference proc results, as it sets the turbo bin to 4.7 GHz on all cores for the 8700K. The problem is that most reviewers do not even look at such settings to disable them (which I did for the reference proc review). Basically, my 8700K results are spot on as to what the 8700K really is. We did update to 1400CB after some BIOS updates, though. The rest of the performance differential is the result of motherboard manufacturers tweaking for the best performance and best results: every manufacturer wants to show that their board is the fastest in the reviews, so they enable that stuff, as they do not want to be slower than the competition. It is as simple as that and has nothing to do with the procs; these are all the same, including ES samples.

    Just because Adored is talking and taking causality for granted doesn't mean he's right. Again, he is a conspiracy theorist, and while there's nothing wrong with that or with him (love how he pronounces Guru3D), it ain't the facts, that's for sure.
     
    lucidus, Noisiv, yasamoka and 3 others like this.
  19. alanm

    alanm Ancient Guru

    Messages:
    9,760
    Likes Received:
    1,944
    GPU:
    Asus 2080 Dual OC
    ^ heh heh.. that was about as refreshing a takedown of Adored's 'investigative' journalism as I've seen. :D
     
    Loophole35 likes this.
  20. D3M1G0D

    D3M1G0D Ancient Guru

    Messages:
    2,125
    Likes Received:
    1,367
    GPU:
    2 x GeForce 1080 Ti
    Yes, I think that was covered in the Adored video as well. Jay redid his tests in light of this to correct for the difference. He said he also contacted Asus, who denied that MCE was on by default - so either Asus doesn't know how their own motherboards are configured, or they are hiding the fact that their boards overclock out of the box.

    You may not be a fan of Adored, but he was right to point out the differences in scores (at least partially). He even got one of the reviewers to acknowledge the mistake and fix it in a subsequent review. I know he speculates quite a bit and may come off as a bit of a kook, but he actually got results here.
     
    Adored likes this.
