Review: Intel Core i7 8700K processor

Discussion in 'Frontpage news' started by Hilbert Hagedoorn, Oct 5, 2017.

  1. schmidtbag

    schmidtbag Ancient Guru

    Messages:
    5,642
    Likes Received:
    2,123
    GPU:
    HIS R9 290
    The rendered frames.
    I never mentioned anything how a single GPU presents a frame to the monitor... but it only takes 1 GPU in an SLI configuration to display something. If you've got a 2nd GPU, it needs to send it's display data to the primary GPU. Not a hard concept to grasp.
    Have you ever coded anything that is multi-threaded, networked, or communicates with another piece of hardware? Because if you had, you'd know it doesn't matter how fast the data rate is: the act of communication alone takes up time. This could be for a number of reasons, such as (but not limited to) sending acknowledgement packets, sending data on specific time intervals, filling up a buffer, or waiting for the receiving end to say it's ready (and no, I'm not saying all of these things apply to GPUs). Whether I deliver a single piece of paper or a 10kg package to you, if I use the same shipment method, the duration of the delivery won't change. The amount of data doesn't matter in this situation because, as you pointed out, it's very minimal.
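    To put that letter-vs-package analogy into code, here's a minimal C sketch. The 1us fixed cost and 4 GB/s bandwidth are made-up numbers for illustration, not measurements:

    Code:
    #include <stdio.h>
    
    /* Toy model: every message pays a fixed handshake cost plus
     * payload / bandwidth. For tiny payloads the fixed cost dominates,
     * so the amount of data barely matters. All numbers are assumed. */
    int main(void) {
      const double fixed_cost_us = 1.0;   /* assumed per-message overhead */
      const double bandwidth_Bps = 4e9;   /* assumed 4 GB/s link */
      const double payload_bytes[] = { 64.0, 4096.0, 8.3e6 };
    
      for (int i = 0; i < 3; i++) {
        double transfer_us = payload_bytes[i] / bandwidth_Bps * 1e6;
        printf("%10.0f B payload: %.3f us fixed + %.3f us transfer\n",
               payload_bytes[i], fixed_cost_us, transfer_us);
      }
      return 0;
    }
    
    For the 64-byte message, the assumed fixed cost is roughly 60x the transfer time; only the ~8 MB frame makes the payload size matter.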
    How about you stop moving the goalposts, eh? And yes, games don't rely on AFR. Back when I did multi-GPU, many games, despite working with AFR, opted not to use it. Multi-GPU is optional. Anyway, you're leaning way too hard on opinion here, because despite the fact that I personally prefer AFR, there are many who don't. SFR has inherent advantages, and I used it whenever it gave me an acceptable experience (which was pretty much any AAA game that wasn't a crappy console port). Sure, I'll admit it was weird, if not a bit wrong, to assume SFR was being used. But ultimately that doesn't change my point; SFR is simply a stronger example. I don't understand why you're so focused on SFR - the only relevance it has to this whole discussion was me bringing up Amdahl's Law, which at this point I regret bringing up.
    Uh... I never suggested anyone was behaving in any way, so what exactly is all in my head, and not yours? Clearly, if I knew nothing about this subject, you'd have given up like your post originally claimed. But you can't handle the concept that maybe you're actually wrong, or maybe you're blowing this whole situation way out of proportion for no good reason. Seriously, I didn't claim that I had the indisputable truth; it was just a thought.
    Mind explaining how consistency is achievable if the GPUs aren't identical and don't synchronize? Even if the GPUs were completely identical in every regard, the data they're processing isn't identical, so they can't depend purely on timing; they must communicate.

    I never said it was complex... Stop jumping to conclusions. Seriously, you don't have to act like this - you're choosing to see the worst in me. Regardless, refer back to my statement earlier about delivering letters and packages.

    I'm well aware... I don't see how that's relevant to this? You seem to be mashing together a bunch of words I said from separate, unrelated points.

    That I am fine with admitting I was mistaken about.

    Likewise. And you still need to learn to calm down. You could've approached this politely.
     
  2. yasamoka

    yasamoka Ancient Guru

    Messages:
    4,847
    Likes Received:
    242
    GPU:
    EVGA GTX 1080Ti SC
    So the frames magically get messed up when, in a multi-GPU system, frames are rendered in parallel all the time and then displayed in the order 1-2-1-2 or 1-2-3-1-2-3 or 1-2-3-4-1-2-3-4?

    Ordering 2-4 rendered frames to be displayed to a monitor causes latency?

    Not even a worthwhile concept to discuss, yet you seem to imply this process is complex enough to add latency.

    Funny of you to say that, as this is my major. What you speak of is encoding delay, propagation delay, and decoding delay. And I'd like you to know that this propagation delay does not even exceed the order of nanoseconds or, at most, microseconds. As for encoding and decoding, you're talking about GPUs whose sole purpose is to push pixels. The act of sending a frame to or receiving a frame from another GPU amounts to nothing. For the rest of the path, it's the transmission rate that matters in moving data within a specific timeframe.

    How high do you think these delays are? On the order of milliseconds? For a GPU that takes less than ten milliseconds to render an entire frame from scratch, sending a frame is a significant source of latency? Really?
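    For scale, a back-of-the-envelope C sketch. The 16 GB/s link rate is an assumed PCIe-class figure, not a measurement, and in AFR this copy overlaps with rendering on both GPUs anyway:

    Code:
    #include <stdio.h>
    
    /* How long does shipping one finished frame take, and what fraction
     * of a 10 ms frametime is that? The link rate is an assumption. */
    int main(void) {
      const double link_Bps = 16e9;       /* assumed ~PCIe 3.0 x16 class */
      const double frametime_us = 10000;  /* 10 ms, i.e. ~100 FPS */
      const struct { const char *name; double bytes; } frames[] = {
        { "1080p (32bpp)", 1920.0 * 1080 * 4 },
        { "4K    (24bpp)", 3840.0 * 2160 * 3 },
      };
    
      for (int i = 0; i < 2; i++) {
        double t_us = frames[i].bytes / link_Bps * 1e6;
        printf("%s: %.0f us (%.1f%% of a 10 ms frametime)\n",
               frames[i].name, t_us, t_us / frametime_us * 100.0);
      }
      return 0;
    }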

    Give. Examples. Of modern games. That can actually use SFR. Without throwing around the absolute nonsense of whether or not games rely on AFR (for multi-GPU, they absolutely do).

    You have absolutely no damn clue what you are talking about.

    Mind understanding what the intent and purpose of using identical GPU models is for frametime consistency? Or are you deliberately misunderstanding this extremely simple point?

    GPUs DO communicate. But not with enough complexity for it to manifest as any measurable latency. Get out of here.

    You have contributed absolutely nothing so far other than claiming that when two pieces of silicon communicate, they introduce latency. Duh.

    You seem to forget the very words you said at first, then wonder at the sort of response you got. You started by mentioning bottlenecks relevant to single GPUs when we are all on the same page that, in a discussion regarding a multi-GPU system, the obvious reason multiple GPUs are being used is because One Is Not The Bottleneck.

    If you stop with the nonsense, we'll all be fine. Please stop with the nonsense; my brains are starting to hurt.
     
  3. schmidtbag

    schmidtbag Ancient Guru

    Messages:
    5,642
    Likes Received:
    2,123
    GPU:
    HIS R9 290
    No... but synchronization is necessary in the event it happens, which it can. Again, have you actually developed anything involving communication or multiprocessing? You can't just have 2 devices operate in parallel, totally blind to each other's activities, and expect good results every single time.
    Yes. I never said it was a lot, but considering there are frames coming from an external piece of hardware, that will contribute toward latency. Once you're accounting for well over 100FPS, every delay in every frame makes a difference.
    You have to be delusional if you think there is 0ns of latency involved in two GPUs communicating with each other, especially with HD renders.
    Microseconds are not insignificant. Regardless, sure, pumping frames from one GPU to the other isn't that slow. But do you really think that's strictly all that's happening? There is more logic involved than that. And besides, there is more being communicated than just the frames, and there is more communication going on than just between the GPUs.
    The delays themselves are short. I'd personally guess on the order of nanoseconds per set of data transmitted - maybe 50ns by the time the data is sent, fully received, and interpreted? So do the math. You've got over 100FPS, some time spent on the CPU communicating with each GPU, potentially some more communication from the GPUs back to the CPU, then some packets that need to be sent from the secondary GPU to the primary, likely some synchronization between the two GPUs, and a signal going back to the CPU telling it the GPUs are ready for the next frame. You'll likely end up with on the order of microseconds of latency by the time the second is over. That's still small, but enough to validate my point. You need to understand that communication is more than just transferring a signal from point A to point B.
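    Here's that chain as a napkin-math C sketch - every number in it is my own guess, per the 50ns figure above:

    Code:
    #include <stdio.h>
    
    /* Guessed per-message cost times a guessed number of messages per
     * frame, accumulated over one second at 120 FPS. Pure napkin math. */
    int main(void) {
      const double per_msg_ns = 50.0;  /* guess: sent, received, interpreted */
      const int msgs_per_frame = 6;    /* guess: CPU->GPU0, CPU->GPU1,
                                          GPU1->GPU0 frame copy, sync,
                                          ready signals back to the CPU */
      const int fps = 120;
    
      double per_frame_ns = per_msg_ns * msgs_per_frame;
      printf("%.0f ns per frame -> %.1f us of communication per second\n",
             per_frame_ns, per_frame_ns * fps / 1000.0);
      return 0;
    }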

    Says the person who is making up statements I never said, and pointing fingers at me for saying them...

    I'm aware of what the intent and purpose is, but it's irrelevant to this discussion. I know what you're trying to get at, my point is GPUs still will not perform identically regardless of how identical you make them. One frame could have 50 particle effects while the next could have 1000. That's inconsistent.

    Proof? For someone so adamant about how wrong I am, you should be able to back that claim up with solid evidence. Note how the only time you really got me to back down was when you provided proof. I'll keep this up as long as you will, until you actually prove me wrong or apologize for your unnecessary behavior.

    My point was pretty straightforward, but you tried to make it as incoherent as possible for your own gain. It's simple:
    If a GPU is not a bottleneck and you get worse performance when using 2 GPUs, then something related to the SLI config could be the cause, such as (but not limited to, *sigh*) latency added by the total back-and-forth communication between the CPU and GPUs. Why is that so obscenely wrong to you? That's pretty much the gist of my whole statement, which you decided to derail into this. Do you not realize how ridiculous you've made this?
    Anyway, how exactly do you know one GPU wasn't bottlenecked? typhon6657 never mentioned the game that was played. So now you're making up facts based on rash assumptions. You sure aren't helping your credibility there.
    The "nonsense" continued because you brought it here. My original statement could very well have been wrong, but it didn't deserve this. At this point it isn't up to me when it stops. Not my problem your brains (plural?) hurt.
     
    Last edited: Oct 11, 2017
  4. Fox2232

    Fox2232 Ancient Guru

    Messages:
    11,528
    Likes Received:
    3,204
    GPU:
    6900XT+AW@240Hz
    The Bad:
    - New MB with every generation.
    - Intel's marketing.

    The Good:
    - Intel is providing 6C/12T on its regular consumer platform at greatly reduced prices compared to previous generations.
    - OCed, it catches up to the R7 1800X in MT workloads. (But with higher power consumption => MB cost.)

    Highlights:
    - iGPU chart => Raven Ridge is supposedly three times as fast as Carrizo per Watt.
    - HH noting how W10 requires more cores if you want it to be quick.
     

  5. yasamoka

    yasamoka Ancient Guru

    Messages:
    4,847
    Likes Received:
    242
    GPU:
    EVGA GTX 1080Ti SC
    Again, this is my major, so you're not getting away with this. Tell me about the event where frame order gets confused so as to require a reordering that would cause additional latency.

    Ordering latency after the frames were sent to the primary GPU. We covered frame transmission latency elsewhere.

    The frames are now in the primary GPU's VRAM - here's some pseudocode in C style:

    Code:
    #define NUM_OF_GPUS 4
    
    framebuffer fb[NUM_OF_GPUS];
    int j = 0;
    while(True) {
      if(fb[j].ready()) {
        fb[j].present();
        if (i == (NUM_OF_GPUS - 1))
          i = 0;
        else
          i++;
      }
    }
    
    This waits until the frame is ready to present. It achieves order by a simple 1-2-3-4-1-2-3-4 counter. Please explain the latencies inherent in this system so as to limit one's maximum framerate.

    Quote where I mentioned 0ns of latency for inter-GPU communication and you win a candy bar. You run out of arguments so you resort to what I never said.

    Again, nonsense in order to sound smart, like you know what you're talking about. Blanket general statements that prove nothing. And yes, keep jumping between one part of the chain and another in order to confuse things and lose the point.

    I'm going to humor you. Let's say 100 microseconds of delay per frame. On a system where mGPU is not scaling at all - zero scaling, so mGPU is harming performance.

    100FPS --> 1/100 s = 10ms frametimes. 10ms + 100us = 10.1ms --> ~99FPS.

    Okay. You just discovered The Americas with that one frame lost due to mGPU.

    That's with your own assumptions, which are based on nothing but napkin calculations arriving at the "microsecond" order of magnitude.
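    Spelling that arithmetic out in C (100us being the figure humored above, not a measurement):

    Code:
    #include <stdio.h>
    
    /* Effective FPS when a fixed per-frame overhead is added on top of
     * the base frametime. */
    int main(void) {
      const double base_fps = 100.0;
      const double overhead_s = 100e-6;  /* the humored 100 us */
    
      double frametime_s = 1.0 / base_fps + overhead_s;  /* 10.1 ms */
      printf("%.0f FPS + %.0f us/frame -> %.1f FPS\n",
             base_fps, overhead_s * 1e6, 1.0 / frametime_s);
      return 0;
    }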

    Started with nonsense, continued with nonsense, and ending with nonsense. Your initial statement claimed a pretty big hit; then you speak of microseconds. Face it and stop trying to cover up for what you don't know.

    Here are two imbalanced GPUs rendering frames with a consistent particle count (among other things):
    10ms (1) - 20ms (2) - 10.2ms (1) - 19.7ms (2) - 10.4ms (1) - 20.3ms (2)
    Microstutter unless you do frame pacing that adjusts for these differences. The differences are not necessarily consistent when you mix and match GPUs from different architectures - as there are particular strengths and weaknesses.

    Two balanced GPUs rendering frames with a consistent particle count (among other things):
    10ms (1) - 10.5ms (2) - 10.2ms (1) - 10.1ms (2) - 10.4ms (1) - 10.3ms (2)
    No microstutter.

    One GPU rendering frames with varying particle count:
    10ms (50) - 12ms (200) - 15ms (500) - 20ms (1000) - 17ms (800) - 14ms (400) - 12ms (200) - 11ms (100) - 10ms (50).

    There is no game on earth where some effect is so heavy on the GPU yet lasts one frame only - such that you would get a single frame where the frametime is so high as to cause a hitch or stutter.

    On the other hand, imbalanced GPUs would have frametime inconsistencies every other frame. Get this into your head.
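    Run the sequences above through a frame-to-frame delta check and the difference is plain - a quick C sketch using exactly those numbers:

    Code:
    #include <stdio.h>
    
    /* Frame-to-frame deltas for the two alternating-GPU sequences above.
     * Large alternating deltas = microstutter; small deltas = smooth. */
    static void deltas(const char *label, const double *ft, int n) {
      printf("%s:", label);
      for (int i = 1; i < n; i++)
        printf(" %+.1f", ft[i] - ft[i - 1]);
      printf(" (ms)\n");
    }
    
    int main(void) {
      const double imbalanced[] = { 10.0, 20.0, 10.2, 19.7, 10.4, 20.3 };
      const double balanced[]   = { 10.0, 10.5, 10.2, 10.1, 10.4, 10.3 };
      deltas("imbalanced", imbalanced, 6);
      deltas("balanced  ", balanced, 6);
      return 0;
    }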

    You have the nerve to ask for proof when it was you who claimed this nonsense of "the maximum frame rate takes a pretty big hit when doing multi-GPU" while still not providing any sort of evidence for it. You also want to go against the experience of others on this forum who have run multi-GPU for years and have never seen what you're talking about. Meanwhile, the last time you ran multi-GPU, SFR was popular. Right. You seem to know what you're saying.

    Your claim of getting a pretty big hit to the maximum framerate on a multi-GPU setup is dead wrong and no amount of going through hoops can fix this.

    Your original statement is wrong, and your subsequent explanations are not only wrong but also attempts to prove that you know what you're talking about - that you're being clever:

    "oh but this isn't all what happens"
    "oh but the CPU also communicates with the GPUs"
    "oh but there are poll and response packets"
    "oh but the GPUs also communicate"
    "oh but frames have to be reordered"
    "oh but AFR is not the only method"
    "oh but games don't rely on AFR"
    "oh but the primary GPU has to present the frames"

    All sorts of statements that indicate that this isn't even your field of expertise, but you're trying to make it sound like such.

    Had you stopped sooner, you wouldn't have entered this whole swamp where you don't know how to navigate. Don't make statements regarding concepts you have no knowledge of. That's all.
     
    Loophole35 likes this.
  6. schmidtbag

    schmidtbag Ancient Guru

    Messages:
    5,642
    Likes Received:
    2,123
    GPU:
    HIS R9 290
    I have told you, but you seem to have this tendency to ignore explicit details you find inconvenient. In this case, it's when one GPU takes longer to render a more complex frame.
    *sigh* I never said anything about there being significant latency when ordering frames. Can you stop trying to disagree with me? The increase in latency I'm referring to is, for example, when the primary GPU renders its frame before the secondary and has to wait for the 2nd to finish. When the 2nd does finish, the rendered frame then needs to be transmitted to the primary GPU. This is a lot of lossless data being sent over just a few wires. There is latency involved in this transaction. I'm not saying there's a lot, but there is some. You are trying REALLY hard to avoid admitting this. I never said this is the cause of that much of a performance loss, so I really don't get why you're trying so hard to disagree.

    You didn't say that, and I never implied you did... Anyway, I made a typo in my statement and you're treating it way too literally. You're getting way too petty about this. You said inter-GPU communication doesn't add latency (but despite what you think, yes, it does):
    "Not even a worthwhile concept to discuss, as you seem to indicate as if this process is complex enough to add latency."
    There's nothing that complicated in what I said. You send a message; it takes time. The message needs to be processed, in turn using up clock cycles. That takes time. The message may need to be responded to, taking up more time. That response then needs to be processed, and so on.
    Interesting how whenever I make a straight-forward statement that you can't argue against, you just throw out insults. How about focus on the topic, say something useful, and not be a jackass?
    You do realize you just agreed with me, right? You're not supposed to do that.
    Anyway, the delay adds up as the frame rate goes higher. Keep in mind the 1FPS loss is strictly due to communication and the interpretation of (but not the reaction to) that communication. So there are even more FPS lost just in the response logic. Not much - depending on the scenario, that too might only be a few FPS.
    But to reiterate:
    I never said latency was the cause of performance loss in multi-GPU. I never said it was the most significant form of loss, either. But it is a loss that definitely happens. So - now that you agreed there is some performance loss due to communication, at this point any further arguing you do is just nit-picking.
    Where in that quote did I mention anything about latency? That's right - I didn't. I was saying that multi-GPU in general may be the problem. As I said before, there are other factors that can cause performance losses. Multi-GPU does not universally give better performance. Do a quick google search and you'll find posts of people whining that they're losing performance. It's usually a simple setup problem, but why would this scenario be any different?
    Face it, you're getting frustrated and are grasping at straws to argue against me.
    You're joking, right? You literally just made up a bunch of numbers off the top of your head and used that as evidence against my point? It boggles my mind how you think I'm the one struggling...
    Rather than focus on completely made up data, here's real data that backs up my point:
    http://www.guru3d.com/articles-pages/hitman-2016-pc-graphics-performance-benchmark-review,8.html
    Hmm, interesting, AFR doesn't magically make things perfectly smooth, despite there being 2 identical GPUs being used. Notice how in the SLI graph, the greatest microstutter occurs during a sudden change in the scene. So yes, there are in fact games where the effect is too heavy for just one frame.
    Except as you pointed out, the GPUs aren't operated with imbalanced specs. So, how about focusing on real data?
    There you go again, blowing things out of proportion. I never claimed latency caused a "pretty big hit". Remember, you're the one who is in a fit of rage over such a petty detail. You're the one who thinks you're doing the lord's work by "proving me wrong" (even though you already admitted there is in fact a frame drop). Maybe if you and Loophole approached this like normal people, you'd understand that from the very beginning I never claimed that what I said was in fact the problem.
    As of right now, it's just you and loophole against me. I know you've got a big ego, but don't act like you represent the whole community. And frankly, your years of experience and whatever you majored in (which you are conveniently vague about) contributes nothing toward this discussion.

    So yes - prove me wrong and cite sources if you're so sure of yourself. No cherry-picking results or anecdotes. I don't need to cite anything, because you have already proved my point for me, and I never said that latency was the only possible reason for multi-GPU performance loss. Again - you blew this out of proportion, so if this is so serious to you, you need to prove yourself.

    Nope, never claimed that. We should make a drinking game of things you think I said or implied but never did.

    Yeah... you're not the right person to say that sort of thing. Seeing as you're just as stubborn as I am, I know for a fact you would not back down from someone who laughs at a harmless generic statement (remember, my original post wasn't specific to latency) and questions your intelligence.
    The swamp is unfriendly to all who aren't native apex predators in it. Do you really think what you're doing here reflects positively on your image? You've had your fair share of gross exaggerations and false assumptions; you certainly aren't apex. And no, I'm not saying I am, either.
     
  7. yasamoka

    yasamoka Ancient Guru

    Messages:
    4,847
    Likes Received:
    242
    GPU:
    EVGA GTX 1080Ti SC
    Again, absolutely no information to justify the statement.

    [Nvidia SLI slide]
    You genius, you! No card waits for the other to finish rendering its frame... that defeats the whole point of multi-GPU AFR ...

    Keep going. Stick to your SFR lol. That alone should have made me stop reading instantly.

     
    Last edited: Oct 11, 2017
  8. schmidtbag

    schmidtbag Ancient Guru

    Messages:
    5,642
    Likes Received:
    2,123
    GPU:
    HIS R9 290
    Gee, great counter-points! With such compelling evidence, I can't believe I was so blind to the truth! </sarcasm>
    Shows how little attention you paid. I said myself that I preferred AFR pretty early on:
    "Back when I did multi-GPU gaming, SFR seemed to be the default option, but I usually opted for AFR"
    But, if you prefer to believe I never said this in order to help make you feel like a winner, then by all means, go ahead. I'm not the one who cares about credibility.
     
  9. yasamoka

    yasamoka Ancient Guru

    Messages:
    4,847
    Likes Received:
    242
    GPU:
    EVGA GTX 1080Ti SC
    Amazing. You still insist. Tell me more about SFR and the current state of multi-GPU.

    You're like someone telling us about the Gutenberg press as we print on our latest laser printers.

    Frankly, nothing needs countering, as none of them are actual points. The only thing you mentioned at the beginning is the primary GPU waiting for the secondary one to complete its frame, which is complete hogwash - and proves you know nothing about how multi-GPU works. The point of multi-GPU is for one GPU to never have to wait for the other to finish, as that would push you back to single-GPU performance. Find the relevant Nvidia slide attached above...

    That you so vehemently insist this frame transmission incurs additional latency, when it's done without halting any rendering on either GPU, shows how limited your knowledge is of how latency calculations in parallel workloads work.

    Then you go on to ramble about AFR microstutter when, had you read and understood what you were actually linking to, you'd see it isn't about increased total latency but rather frame pacing - one frame too early, one frame too late - a problem that was identified about a decade ago and that NO ONE except you links to increased total latency (and thus decreased maximum frame rate). Hell, the concern when frame pacing mechanisms started being used by AMD was that framerate could go down - not up...

    Keep going, this is incredibly amusing.
     
    Last edited: Oct 11, 2017
  10. schmidtbag

    schmidtbag Ancient Guru

    Messages:
    5,642
    Likes Received:
    2,123
    GPU:
    HIS R9 290
    Not that amazing. Again, if you paid attention, I told you that I'm not the one who will decide when this stops.
    Then you should have no problem proving me wrong. Sources, please.
    So micro-stuttering just happens for unknown reasons and every frame will always complete in sequential order every single time guaranteed? (that's a rhetorical question).
    Note that I never said the primary GPU always waits for the 2nd one every time, I merely said it can happen. The inverse situation can also happen, in which case if there's a buffer, I imagine there would be no performance loss due to communication, because the primary GPU would already have the data from the secondary GPU.
    How naive of you. The point of going to the hospital is to ensure you leave in better condition than you arrived in, but that doesn't always happen. If multi-GPU (specifically, AFR) worked as flawlessly as you want to believe, it would be a lot more popular. But ultimately, multi-GPU isn't flawless; as a result, it is unpopular, and both AMD and Nvidia have scaled it back to 2 GPUs because of the complications involved. I'm not implying latency and synchronization are specifically the reason for this, BTW.
     

  11. yasamoka

    yasamoka Ancient Guru

    Messages:
    4,847
    Likes Received:
    242
    GPU:
    EVGA GTX 1080Ti SC
    Sources for what, LOL. Please enlighten me as to what sort of sources your nonsense would require to be disproven. You ask for sources when ALL your claims have not been substantiated by a single source, except that Hitman article you brought into the discussion for some strange reason, in order to "educate" me on the hows and whys of microstutter - a phenomenon I have dealt with, dissected, gone through, and solved over the years after AMD came out with their frame pacing. All in order to inform me that multi-GPU "doesn't work perfectly". DUH? Who said it did?

    Stop inventing facts, will you? The primary GPU never, ever waits for the secondary in the way you envisage. Even when a frame takes too long on one GPU and frame pacing is employed, the overall framerate is higher and the overall latency is lower. How that helps your argument of reduced maximum FPS and higher overall latency baffles me indeed.

    HAHAHAHA...are you suggesting that microstutter, or any of its causes, could cause a re-ordering of frames? Incredible. You should work at Nvidia / AMD.

    Yeah yeah, keep inventing stuff I never said. I left multi-GPU because of incompatibility with many games. But I never encountered this "lower maximum FPS" myth you speak of. Nor was the experience anything other than pretty much flawless when CF worked properly with a game.

    Again, what was your initial point? That mGPU lowers maximum FPS due to increased latency. That is 100% false, and whatever you keep adding to your argument won't make it any truer. Nor does your mention of system latencies - which are more than obvious - add anything to your claim that these latencies ever add up to something that supports your initial argument: that maximum FPS is lowered.

    Get done with it.
     
    Last edited: Oct 11, 2017
  12. schmidtbag

    schmidtbag Ancient Guru

    Messages:
    5,642
    Likes Received:
    2,123
    GPU:
    HIS R9 290
    Good question - you're the one who insists on disagreeing with me for the sake of it, even though you've already proven my point. So really, what nonsense of mine needs to be disproven? Why are you still here?
    Proof, please. You can whine all you want that I'm making stuff up, but I can just as easily say the same about your claims. You already agreed with me that communication results in some loss in framerate when there isn't a GPU bottleneck. You were so adamant that timing is not an issue, and then you say "even when a frame takes too long on one GPU". I mean really, you contradicted yourself in the same quote. Once again, you're tarnishing your credibility.
    If the GPUs are not bottlenecked, there is an increase in latency compared to a single non-bottlenecked GPU. I never said latency increases when the GPUs are bottlenecked. As the graphs I linked to suggest, bottlenecked GPUs do seem to reduce latency compared to a single GPU, which is totally understandable.
    I find it funny how much you think I disagree with such things, when I actually don't.
    You literally said "The point of multi-GPU is for one GPU to never have to wait for the other to finish". I'm not inventing anything, the quote is right there. The fact of the matter is, that's a naive perspective.
    Sure, in most cases you won't ever encounter the lower maximum FPS, because very rarely will a single GPU be able to compete with 2 GPUs where none of them are bottlenecked, so making such a comparison is rare. We don't know what game was played, but what we do know is that the framerate was worse on a Ryzen, suggesting the CPU was potentially the bottleneck, not the GPU(s).
    Wow you are dense. How many times do I have to tell you that latency wasn't my main focus? The only reason it seems that way is because you just can't seem to let it go. How much proof do you need that my initial comment never mentioned latency?
    Also, stop backpedaling. You already agreed communication causes up to a 1FPS loss; that's a reduction in the theoretical maximum framerate.
    I'm a gentleman: I say "ladies first" and I reply when spoken to. If you want this to "get done" then that's your call. But, it would be rude of you to leave me on such short notice!
     
  13. yasamoka

    yasamoka Ancient Guru

    Messages:
    4,847
    Likes Received:
    242
    GPU:
    EVGA GTX 1080Ti SC
    http://download.nvidia.com/developer/presentations/2005/GDC/Sponsored_Day/SLI.pdf
    http://developer.download.nvidia.com/assets/events/GDC15/GEFORCE/SLI_GDC15.pdf

    [Nvidia slide]
    "All work is parallelized". Where do you see one GPU waiting for the other to finish its frame?

    [Nvidia slide]
    Look at all these frames waiting for one another!

    [Nvidia slide]
    BAD APPROACH. Causing one GPU to wait for another is a BAD APPROACH. Using the same render target makes SLi have ZERO benefit.

    [Nvidia slide]
    SLi REDUCES latency, even with only 66% scaling.

    [Nvidia slide]
    Communications overhead. Where do you see "frame transmission from secondary GPU to primary GPU" mentioned here? Look at the relevant resources. Where do you see "frames"? Nvidia doesn't even count that transmission as a communications overhead. Because it isn't.

    [Nvidia slide]
    SLi basics. Learn the basics. Where do you see one GPU waiting for another? They're entirely decoupled. Entirely. Even in worst case scenarios - they won't magically couple...

    [Nvidia slide]
    Interframe dependencies. When a frame is dependent on what happened in the previous one. Again, previous frame results, temporal feedback, compute, maps and textures.

    Where do you see "frame transmission from secondary GPU to primary GPU for display" here?

    Proof above...

    I can't believe you're trying to spin your way out of this. A 1FPS loss - even sticking to your own unsubstantiated calculations - is now a pretty big hit...?

    Again, total nonsense. Keep going.
     
  14. Fox2232

    Fox2232 Ancient Guru

    Messages:
    11,528
    Likes Received:
    3,204
    GPU:
    6900XT+AW@240Hz
    That's quite piggy code...
    while(True) => deadlock
    Surely you meant the 'j's to be 'i's. As written, it will always wait for fb[0].ready() and just cycle the 'i' variable between 0, 1, 2, 3 to no additional effect. In other words, only the 1st fb is ever used.

    But the code mess aside: there is no reason to assign workloads to GPUs in a cyclic way as described above (0,1,2,3,0,1,2,3,...), as that actually introduces latency due to variance in the time required to render. In other words: "Waiting for a specific GPU's buffer to be free while another is already done..."

    Ordering GPUs actually just needs a simple FIFO buffer of 4 values telling the driver the succession of frames, while workload is assigned to whichever GPU is available. Easiest thing, if you think about it - see the sketch below.
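    A minimal C sketch of that FIFO idea - a toy, not driver code: frames go to whichever GPU is idle, while a small queue preserves presentation order:

    Code:
    #include <stdio.h>
    
    #define NUM_GPUS 4
    
    /* Submission order is remembered in a small ring-buffer FIFO, so
     * frames can be handed to ANY idle GPU yet still be presented in
     * the order they were submitted. */
    typedef struct { int gpu[NUM_GPUS]; int head, tail, count; } fifo;
    
    static void push(fifo *q, int gpu) {
      q->gpu[q->tail] = gpu;
      q->tail = (q->tail + 1) % NUM_GPUS;
      q->count++;
    }
    
    static int pop(fifo *q) {
      int gpu = q->gpu[q->head];
      q->head = (q->head + 1) % NUM_GPUS;
      q->count--;
      return gpu;
    }
    
    int main(void) {
      fifo order = { {0}, 0, 0, 0 };
      int busy[NUM_GPUS] = {0};
    
      for (int frame = 0; frame < 8; frame++) {
        /* hand the frame to the first idle GPU, whichever that is */
        for (int g = 0; g < NUM_GPUS; g++) {
          if (!busy[g]) {
            busy[g] = 1;
            push(&order, g);
            printf("frame %d -> GPU %d\n", frame, g);
            break;
          }
        }
        /* present strictly in submission order once the queue is full */
        if (order.count == NUM_GPUS)
          busy[pop(&order)] = 0;  /* oldest frame presented, its GPU freed */
      }
      return 0;
    }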

    As far as multi-core/multi-threaded CPU-introduced latency goes: on the macroscopic level of entire frames, only a dummy could create a situation chaotic enough to add measurable latency from ordering. But from the standpoint of preparing data for draw calls within one frame, there may be things which need to be processed 1st because other things depend on them. The same goes for shader code and post-processing. Some things will end up different when done out of order.

    Most rendering work can be executed out of order. But stuff like multi-pass ambient lighting for more accurate shadows/light reflections can't.
    Or the use of some dynamic textures. Each pixel of such a texture can be processed in parallel, but the textures may need to be regenerated in a certain order. (Not that it adds latency, since the number of pixels in a texture is usually significantly higher than the number of CPUs in the system.)

    Off-topic:
    But in the end, having 4 GPUs in a setup where each renders a separate frame either adds latency or increases the framerate beyond a reasonable value.
    Situation 1 where game details are kept same:
    1 GPU renders 50fps, rendering time 20ms per frame, added latency 20ms
    2 GPUs render 100fps, rendering time 20ms per frame, added latency 20ms
    4 GPUs render 200fps, rendering time 20ms per frame, added latency 20ms

    Situation 2 where game details are increased with higher processing power as user has only 100Hz screen:
    1 GPU renders 50fps, rendering time 20ms per frame, added latency 20ms
    2 GPUs render 100fps, rendering time 20ms per frame, added latency 20ms
    4 GPUs render 100fps, rendering time 40ms per frame, added latency 40ms (consequence of increased game details)

    Explanation:
    - rendering time 20ms per frame => the time from dispatching data to the GPU until it is ready to present the frame
    - added latency 20ms => the time the GPU adds between input from KB/mouse/gamepad and the image affected by those actions appearing on screen
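    Situation 1 as a throwaway C loop (the 20ms render time is the number from above):

    Code:
    #include <stdio.h>
    
    /* AFR in a nutshell: N GPUs multiply throughput, but each frame
     * still takes the full render time, so the added latency stays. */
    int main(void) {
      const double render_ms = 20.0;  /* per-frame render time, from above */
      for (int gpus = 1; gpus <= 4; gpus *= 2)
        printf("%d GPU(s): %3.0f fps, %2.0f ms render, %2.0f ms added latency\n",
               gpus, gpus * 1000.0 / render_ms, render_ms, render_ms);
      return 0;
    }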
     
    schmidtbag likes this.
  15. schmidtbag

    schmidtbag Ancient Guru

    Messages:
    5,642
    Likes Received:
    2,123
    GPU:
    HIS R9 290
    Geez way to spam the whole page... A simple link would've been good enough, you didn't need to embed the images...
    You're the one who "majored" in this - you should know that parallel workloads focused on the same objective need to synchronize with the parent process. It's both implied and an unnecessary detail for a slideshow.
    Look at you moving the goalpost again!
    I totally agree.
    Again, a moved goalpost. Remember, this latency discussion is about when the GPUs aren't bottlenecked. I already pointed out myself that when the GPUs are under full load, latency is reduced; you haven't brought anything new or valuable to the table.
    Funny - you explicitly ignore the statements regarding "transfer costs time and bandwidth" - something that is very relevant to this discussion. Not only that, but how do you think the data reaches the primary GPU? Seriously, it doesn't just magically teleport - the primary GPU is the one with the display(s) connected to it, so the rendered frame must be transmitted from the secondary GPU to the primary. It's common sense.
    All you're doing is getting nitpicky about terminology and trying desperately to use it against me. The gist of what I said still applies, even if I used the wrong word. Get a grip. Your pedantry isn't helping your point, and you have once again failed to prove me wrong. Next slide.
    Uh... that slide proves me right, not you. It explicitly mentions transmission is slow (something you were VERY adamant about disagreeing with). It also says in plain sight that "other transfers are necessary". Remember, this is interframe, meaning between frames. Where exactly do you think these transfers are occurring? Whether or not the GPUs are talking to each other at this stage is irrelevant - communication is still occurring to make sure the GPUs are doing what they're supposed to. That ultimately supports my point - communication adds latency, added latency lowers framerate, and the communication is necessary.
    I never said it was.
     

  16. yasamoka

    yasamoka Ancient Guru

    Messages:
    4,847
    Likes Received:
    242
    GPU:
    EVGA GTX 1080Ti SC
    The while(True) is meant to signify a continuous process of moving through the circular array of framebuffers. Of course, this won't run as code - it requires break conditions. It is pseudocode that is meant to demonstrate the process by which frames are displayed according to their order.

    That is not on a single GPU. That is meant to signify what happens when 4 GPUs are rendering frames at the same time. This method ensures that frames are displayed in their proper order.

    Yes, that's what I meant. It's too easy.

    That's per frame, for sure.

    That's the issue with AFR - frametimes are still the same per GPU, so total chain latency is impacted differently than with a single GPU. In general there's a reduction in latency, especially when your framerate doubles (you see twice as many frames), but not in minimum latency (as the frametime penalty is unavoidable).
     
    Last edited: Oct 11, 2017
  17. yasamoka

    yasamoka Ancient Guru

    Messages:
    4,847
    Likes Received:
    242
    GPU:
    EVGA GTX 1080Ti SC
    There's NO NEED for a synchronization process that adds in any shape or form to the total latency WHEN EACH GPU IS RENDERING A FRAME INDEPENDENTLY OF THE OTHER.

    Look at you throwing claims then saying I move the goalposts when proven wrong!

    Then my suggestion that the point of multi-GPU is to never have one GPU wait for another is correct...

    You're the one who's moving goalposts. A reminder of the initial post, where you claimed that the maximum FPS is lowered with multi-GPU - with no suggestion as to whether the GPUs are bottlenecked or not.

    Funny - you miserably fail to understand that the transfers that Nvidia considers worthwhile as to "cost time and bandwidth" are NOT READY FRAMES RENDERED BY THE SECONDARY GPU AND TRANSFERRED TO THE PRIMARY GPU.

    Your "use of the wrong word" is a blatant conceptual mistake - when you assume that this frame transmission causes a delay, all you're doing is blurting out statements to sound as if you know what you're talking about.

    Transmission IS slow - when assets that Nvidia mentions are involved. We're talking of textures and assets that occupy Gigabytes of VRAM - NOT A 25 MEGABYTE 4K FRAME to be transferred.

    Your initial statement, and all the subsequent arguments with which you attempt to exonerate yourself, are not only blatantly wrong but purposefully ignorant regarding multi-GPU - all while you preach as if you know what you're saying. It's preposterous.
     
  18. Hilbert Hagedoorn

    Hilbert Hagedoorn Don Vito Corleone Staff Member

    Messages:
    40,321
    Likes Received:
    8,859
    GPU:
    AMD | NVIDIA
    Guys guys ... love not hate ...
     
    Aura89, alanm, airbud7 and 2 others like this.
  19. HonoredShadow

    HonoredShadow Ancient Guru

    Messages:
    4,280
    Likes Received:
    12
    GPU:
    Gigabyte 1080 ti
    The Great Coffee Lake Con Job by Adored:



    Guru3d is mentioned too
     
    schmidtbag likes this.
  20. schmidtbag

    schmidtbag Ancient Guru

    Messages:
    5,642
    Likes Received:
    2,123
    GPU:
    HIS R9 290
    Great, now you're ignoring points made by Fox2232 too. I can't keep spoonfeeding you every time...
    The goalpost the entire time was around GPUs that aren't bottlenecked. Focus *snap* *snap*.
    Yes, it is correct. But just because something is intended to function a certain way, that doesn't mean that's how it works 100% of the time. And before you hyperbolize yet again, I'm not saying it happens 0% of the time.
    Remember how many times I told you that my claim wasn't specific to latency? Remember how I kept saying the latency loss is in regards to non-bottlenecked GPUs? Yeah... you're still moving the goalpost. Better luck next time.
    It's not that I failed to understand; it's that it isn't relevant, and not something I said was indisputably happening - I merely suggested it could be. My only point is that communication adds latency. More GPUs add more communication; therefore, more GPUs can add latency (when not bottlenecked). It's simple.
    Also, you still have yet to explain how the work processed by the 2nd GPU ends up on the display connected to the primary GPU. All caps and bold text don't make that situation go away lol.
    Doesn't change my point.
    Let's look at this situation in a 3rd party perspective:
    Me: 1+1=3
    You: No, 1+1=4
    Me: But <reason>
    You: But <reason>
    Me: <counterpoint>
    You: You're wrong and know nothing
    Whether we're both right or both wrong, your tirades against my intelligence contribute nothing toward this. Say something meaningful.
     
