Discussion in 'Frontpage news' started by Hilbert Hagedoorn, Oct 5, 2017.
You're wasting my time.
As I wrote: handing workload to GPUs in a fixed order introduces situations where the next GPU in order is not ready to accept new work while a GPU later in the order is already done and waiting.
As demonstrated, doubling the number of GPUs while keeping the same detail settings does "double" fps (fluidity of the image sequence), but the time to render each frame remains the same, therefore latency is the same.
In most cases, doubling the number of GPUs leads the user to increase details, resolution, and so on, so fps does not double. In such situations the game feels smoother due to higher fps, but latency is higher due to longer rendering times per frame.
(That's if you call "latency" the same thing as I do. For me it is the time between peripheral input from the human and the moment an image affected by that input is shown on screen.)
No, the point was that the workload was given to the GPUs all at the same time. As the GPUs rendered in parallel, some got done before others. It's very easy to display them in order - just have the program wait for frame 1 to finish, then frame 2, then frame 3, then frame 4 (then back to 1). That way, regardless of whether frame 2, 3, or 4 finishes before frame 1, frame 1 will be displayed first.
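That "wait for frame 1, then frame 2" pattern can be sketched in Python with futures. Everything here is invented for illustration (render times, worker count); the only point is that consuming the futures in submission order displays frames in order even when they finish out of order:

```python
import concurrent.futures
import random
import time

def render(frame_id):
    # Simulate a GPU taking a variable amount of time per frame,
    # so frames can finish out of order.
    time.sleep(random.uniform(0.0, 0.02))
    return frame_id

with concurrent.futures.ThreadPoolExecutor(max_workers=4) as pool:
    futures = [pool.submit(render, n) for n in range(4)]
    # Waiting on the futures in submission order (not as_completed)
    # guarantees frame 0 is "displayed" before frame 1, and so on.
    displayed = [f.result() for f in futures]

print(displayed)  # always [0, 1, 2, 3], regardless of finish order
```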
It's sort of the same behavior when you have spawned threads from a parent thread and you wait for all child threads to finish in order to proceed, which is called a join. In Python:
for child_thread in child_threads:
    child_thread.join()
This would terminate when all child threads have finished - regardless of the order in which they finish.
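A fuller, runnable sketch of the same join pattern (the worker count and sleep times are invented; they just force the threads to finish in reverse order while the join loop still waits for all of them):

```python
import threading
import time

finish_order = []

def worker(n):
    # Invented timings: higher-numbered threads finish first.
    time.sleep(0.01 * (4 - n))
    finish_order.append(n)

child_threads = [threading.Thread(target=worker, args=(n,)) for n in range(4)]
for t in child_threads:
    t.start()

for child_thread in child_threads:
    child_thread.join()  # blocks until this particular thread has finished

# After the loop, all four workers are guaranteed done,
# even though they finished out of order.
print(len(finish_order))  # 4
```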
That is true. Frametime latency does not decrease. However, the total chain latency is not merely dependent on the unit of frametime latency.
Take this example:
Consider the time right when frame n starts rendering to be t0, the time at which you provide user input.
Let's consider all chain latency factors other than frametime to be a constant "c". Frametime latency is "f".
Best case scenario: your input is sampled shortly before frame n starts rendering (t0-), frame n is rendered then displayed. Total chain latency: c + f
Worst case scenario: your input is sampled shortly after frame n starts rendering (t0+), frame n misses the input, but frame n+1 picks up the input. Frame n+1 is rendered then displayed. Total chain latency is approximately c + 1.5f.
So 1-1.5 frames of latency due to frametimes.
Single GPU best-case scenario: same as multi-GPU --> c + f
Single GPU worst-case scenario: input will have to wait for the next frame --> c + 2f
So 1-2 frames of latency with single GPU vs. 1-1.5 frames of latency with dual GPU.
Best case latency is not improved, but worst case latency is.
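The best/worst-case figures above can be worked through numerically. The numbers below are illustrative, not measurements; the only assumption carried over is that with two GPUs in AFR, fps doubles, so a new frame starts every f/2 while each frame still takes f to render:

```python
# f = frametime, c = all other latency in the chain (both assumed values).
f = 16.7  # ms per frame (~60 fps)
c = 10.0  # ms of fixed chain latency

single_best = c + f        # input sampled just before the frame starts
single_worst = c + 2 * f   # miss the frame: wait f for the next one, then render f
dual_best = c + f          # best case is unchanged
dual_worst = c + 1.5 * f   # miss the frame: the next one starts only f/2 later

print(single_best, single_worst, dual_best, dual_worst)
```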
for child_thread in child_threads:
    child_thread.join()
@yasamoka: Yes, your code waits on the GPUs in order rather than detaching buffers from them as they finish. Then it is OK.
As far as latency goes, the input value is always practically current. The OS keeps a live value and updates it in real time, the moment a change is delivered from I/O. Today's mice poll at 1000 Hz, which means up to 1 ms of delay (rather small compared to rendering times). And when the engine asks for the input value, it gets the newest value available.
Then it purely depends on internal engine logic.
The iteration rate of the engine's logic, whether the logic is synced to drawing, predicting the approximate time of the next free buffer...
But either way, the input part of the latency is rather small compared to the latency from the CPU (engine) preparing data for the GPU, the GPU's rendering time, and screen lag.
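To put that in perspective, here is a rough latency budget. Every number is invented purely for illustration (the post gives only the 1 ms polling figure); it just shows how small the input-polling share is next to the rest of the chain:

```python
mouse_poll = 1.0    # ms, 1000 Hz USB polling (from the post)
cpu_frame = 8.0     # ms, engine/CPU work per frame (assumed)
gpu_render = 16.7   # ms, GPU frametime at ~60 fps (assumed)
display_lag = 10.0  # ms, monitor processing + scanout (assumed)

total = mouse_poll + cpu_frame + gpu_render + display_lag
share = mouse_poll / total
print(f"input polling is {share:.1%} of a {total:.1f} ms chain")
```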
Yay another biased piece by AdoredTV. Will not click.
Actually he just pointed out a few things we discussed here in the 'leak' thread and in this thread, summarizing articles from many large sites like G3D.
His theories are similar to mine. They remain theories until those chips hit the market in volume. It is not biased (weirdly enough).
But it does have little content considering how long it is.
I also watched the AdoredTV Coffee Lake vid yesterday and I didn't think it was that biased either.
I HAD been reading this thread, but it's become too verbose to keep track of! Ha!
His analysis was spot-on (it usually is), and I found it interesting how he called out specific reviewers who had inflated scores. I actually watched Jay's follow-up video before this, where he talked about MCE and how it's enabled by default (but shouldn't be), so I felt it was a bit unfair to him. At the same time, he should have known it was on when doing his tests (the same goes for Linus and the other channels that had MCE on). As a professional hardware reviewer, he should have been checking temps and clock speeds during those tests, but it looks like he didn't - that's his fault and no one else's.
Also found it interesting how the sites that weren't supplied chips by Intel, including Guru3D, had lower scores. It's likely that the chips with higher scores were pre-binned and actual consumer chips won't perform as well. The good thing is that the Guru3D review is more accurate, so it's a win for us.
I didn't watch the video, as stated in my post. I've avoided his videos, as his Vega coverage and early Ryzen coverage were very biased.
You do realize that the 8700k that Hilbert used is an engineering sample provided by a motherboard manufacturer, likely sourced from Intel directly. In other words, not a retail sample.
@Loophole35: As stated, I was surprised that it was not biased. The guy is slowly becoming a commentator who summarizes tech sites' findings, adding less and less of his own opinion. But maybe it only looks that way because this time he simply matches the educated guesses of many.
As far as Hilbert's sample goes, it quite likely has exactly the same IPC as the final chip. Maybe slightly different leakage, power limits, or multiplier rules.
There are many variables with this launch, so I want to see how the chips in stores will perform. (Not that I intend to get one, as I am waiting for 2nd-gen Ryzen.)
My bro works for Intel. I'll share this with him. Maybe he can help you out.
Can he even get an 8700k?... Do they even have any?... Ask him how many they produced for this so-called launch.
Yosef019 has one and he's from Israel
I too want to wait and see retail versions as well. I mean it's not like my 2600k is a bad CPU even now.
I am not sure "working for Intel" gives any leverage, unless he is really in a manager position and can freely acquire any sample.
99+% of Intel workers probably don't have access to their products. They're just consumers, like us.
No it isn't. The guy is a conspiracy theorist, gaining popularity and views by just that: feeding off a bit of confusion, throwing in many arguments for confusion, and all of a sudden narrowing that down to an answer that sounds plausible. He's doing it in an intelligent way, I'll give him that. The scores aren't because of the proc sample; trust me, all procs are the same aside from ASIC quality and tweaking. Nope, it's simply the motherboard firmware. We had access to Coffee Lake 3 weeks prior to the launch, and over the course of two weeks our loaner sample saw multiple new mobo BIOSes released, gradually increasing performance on most motherboards. If you check the reference review and compare it to the later MSI review, for example, what do you notice?
Same processor. It's the mobo manufacturers who tweak performance by fiddling with the Turbo bins. ASUS, for example, has a feature that 'optimizes' performance and enables it by default. Great stuff for the novice user, but not representative of stock reference proc results, as it sets the turbo bin to 4.7 GHz on all cores for the 8700k. The problem is that most reviewers do not even look at such settings to disable them (which I did for the reference proc review). Basically, my 8700k results are spot on as to what the 8700k really is. We did update to 1400CB after some BIOS updates, though. The rest of the performance differential is the result of motherboard manufacturers tweaking for best performance and best results: all motherboard manufacturers want to show that their board is the fastest in reviews, and thus they enable that stuff, as they do not want to be slower than the competition. It is as simple as that and has nothing to do with the procs; these are all the same, including ES samples.
Just because Adored is talking and taking causality for granted doesn't mean it's right. Again, he is a conspiracy theorist, and while there's nothing wrong with that or with him (love how he pronounces Guru3D), it ain't the facts, that's for sure.
^ heh heh.. that was about as refreshing a takedown of Adored's 'investigative' journalism as I've seen.
Yes, I think that was covered as well in the Adored video. Jay redid his tests in light of this to correct for the difference. He said he also contacted Asus, who denied that MCE was on by default - so either Asus doesn't know how their own motherboards are configured or they are hiding the fact that their boards overclock out of the box.
You may not be a fan of Adored but he was right in pointing out the differences in scores (at least partially). He even got one of the reviewers to recognize their mistake and fix it in a subsequent review. I know that he speculates quite a bit and may come off as a bit of a kook but he actually got results here.
Yeah, multicore enhancement is set to auto/enabled by default. It was the same with my Asus Z87 & Haswell chip. I think it's no different now.