Discussion in 'Videocards - AMD Radeon Drivers Section' started by Remedy, Jul 26, 2013.
Put in another 7850, it's cheap!
Never go midrange multi-GPU. Please.
EDIT: I love how all eyes are on AMD now
Let's see if this opinion changes with new drivers.
Btw, I share it for now. (Wonder for how long.)
Better to get rid of the mainstream GPU and buy high end.
Until AFR is gotten rid of!
hahaha this thread is a success.
AAAAND still waiting.
I always buy an almost-top-end single GPU. By the time I feel like upgrading, there's usually a new generation of GPUs out that does mega things. I've owned a 3Dfx Voodoo, 3Dfx Voodoo2, Riva TNT, X1900 XTX, 8800 GTX, GTX 480, and now an HD 7950, just to name a few top-ish cards, but I've never gone multi-GPU. The HD 7950 might be the first card I go multi-GPU with. Or I'll just wait for the new 99xx cards later this year haha.
There are still other disadvantages to having a multi-card setup, as I'm sure you know: power draw, temps, and noise, just to name a few. But... you never know.
Clearly Roy has a different definition of "shortly", as he said that 11 hours ago.
In your worst-case scenario for AMD's position, you're getting a bit mixed up. To keep the interval consistent, the latency of the frame must be altered.
In this chart, if we take the first frames (31 ms, 19 ms, 34 ms) and, hypothetically, want them to come out at a near-consistent 30 ms, we must increase the latency of the second frame, adding the "10 ms delay" that was recently quoted. You still get the same FPS; it's just that instead of the third frame being "dropped" (or partially drawn because of the quick-to-display fourth frame at 19 ms) and creating the lower "practical" FPS, delivery is consistent and the "hardware" FPS can be properly displayed in full.
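As a rough sketch of that arithmetic: the 31/19/34 ms frametimes are from the chart, but the hold-until-interval rule (and treating completion times as cumulative) is my own simplifying assumption, not AMD's actual algorithm.

```python
# Hypothetical frame-pacing sketch: hold each frame until the target
# interval has passed since the previous one was displayed.
raw = [31, 19, 34]             # ms each frame took to render (from the chart)
target = sum(raw) / len(raw)   # 28.0 ms average interval, near the 30 ms goal

ready = 0
prev = None
paced_display = []
for ft in raw:
    ready += ft                        # when the frame is actually finished
    if prev is None:
        t = ready                      # first frame goes out immediately
    else:
        t = max(ready, prev + target)  # hold the frame to even out the cadence
    paced_display.append(t)
    prev = t

intervals = [b - a for a, b in zip(paced_display, paced_display[1:])]
print(paced_display)  # [31, 59.0, 87.0]
print(intervals)      # [28.0, 28.0]
```

Note the second frame is finished at 50 ms but held until 59 ms, a 9 ms hold in the same ballpark as the quoted "10 ms delay", while the total time for the three frames (and hence FPS) barely changes.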
Watching the video, here is a photo that sums it up:
AMD being the top, Nvidia the bottom. As a note, ignore the actual number of "frames" in the top row of each; it makes AMD's FPS seem artificially higher, but that is not the case, just a poor representation of what is actually happening. Refer above for why FPS will not change.
Now, having a look, AMD was clearly shooting for minimal input lag, just shoving frames into the pipeline and not caring when one or another was displayed. To combat this, the metered style "paces" these frames. To get the frames placed where you want in the pipe, you must play with the latency of when the second, third, fourth... frame is started/sent. If the user input comes right after the first image is displayed but before the delayed start of the second frame (since we are pacing it to begin at a controlled interval), you get induced input lag.
Whereas Nvidia's video demonstration only specified latency in frames (0.5-1.5 frames of latency) and not as the time between user input and frame display, it's clear they (AMD and Nvidia) are talking about the same thing in different respects. AMD could easily say their latency is 0.5-1.5 frames as well (once the metering is in effect) but chose to actually state the inherent problem with metering: the increase in input latency forced by the timing of frame display.
Hope any of this made sense. Both are talking about two different outcomes of the same process; AMD chose to highlight the bad side effect, Nvidia the good. Not surprising, since they are both justifying their previous positions on the matter.
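A toy model of that input-lag side effect; all the numbers and the input-sampling rule here are illustrative assumptions, not measurements from either vendor.

```python
def input_to_display(input_t, frame_starts, render_time):
    # Assume input is sampled when a frame starts rendering; latency is
    # the time from the input until the frame that first saw it is shown.
    for start in frame_starts:
        if start >= input_t:
            return start + render_time - input_t

unmetered = [0, 5, 10, 15]   # frames fired off as fast as possible (ms)
metered = [0, 15, 30, 45]    # same frames started on an even cadence (ms)

print(input_to_display(7, unmetered, render_time=30))  # 33 ms
print(input_to_display(7, metered, render_time=30))    # 38 ms
```

The input at 7 ms lands before the delayed start of the metered second frame, so it waits longer to be picked up: the 5 ms difference is exactly the induced input lag described above.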
Arghhhhhhhhhhhh the suspense is killing me arrghhhhhhhh
It's performance per watt that should be taken as the metric. If you're getting 50% scaling, then your cards are drawing ~50% more power than a single GPU.
With following generations, you'd have a card that maybe performs as fast with half the power draw. But then again, two of that card would perform twice as fast with the same power draw.
It's when you have an ancient multi-GPU setup that this starts becoming a problem.
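A back-of-the-envelope sketch of that argument; every number below is an illustrative assumption, not a measured figure.

```python
# Current-gen: 50% scaling for ~50% more power keeps perf/W level.
single_perf, single_power = 100, 200   # arbitrary units, watts
dual_perf, dual_power = 150, 300       # 50% scaling, ~50% more power

# Next-gen (assumed): as fast as the old pair at half its draw,
# and two of them double the performance at the same draw.
nextgen_perf, nextgen_power = 150, 150
nextgen_pair_perf, nextgen_pair_power = 300, 300

print(single_perf / single_power, dual_perf / dual_power)              # 0.5 0.5
print(nextgen_perf / nextgen_power, nextgen_pair_perf / nextgen_pair_power)  # 1.0 1.0
```

On these assumed numbers the old dual setup delivers half the perf/W of the newer generation, which is the "ancient multi-GPU setup" problem described above.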
Temps and noise, yeah they go up. Not if you have a water loop though.
completely agree. GIMME DRIVERZZZ NOW
I know all of this; it's "what's being pushed" that's the question. AMD are implying that the frame is still being rendered at the same time but presented with a delay, which doesn't make sense, since in the worst-case scenario I have described the frames might be displayed at perfect intervals but still rendered as fast as possible. Take VSync as an example: with VSync on, the output is a perfect 60 FPS, smooth as silk. The second card is now actually rendering with a delay (a delay that should be present naturally if frames were perfectly paced). This even happens in cases where a game/benchmark microstutters so badly that 60 FPS looks like 30 FPS (Heaven Benchmark). If the frames were merely delayed so that they'd display with VSync on, the bench would still look like 30 FPS, since almost the exact same frame is being displayed. If the frames were rendered like the frame-metering illustration shows, then it'd look like a perfect 60 FPS, as expected.
That's just AMD's excuse. If I wanted minimal input lag and would face microstutter as a result, I'd just turn off the second GPU and get EXACTLY the same input lag and the same practical FPS. The second GPU is useless when microstutter is at its worst-case scenario; it's just drawing additional power.
Plus, a perfectly working multi-GPU setup is one where frames are expected to take the same time to render on each GPU but are displayed in succession. That delay is not one that could be cut out without throwing the whole multi-GPU AFR idea down the drain. Here's what Nvidia says about this:
That's the whole point of multi-GPU: reducing inter-frame latency through parallelism (i.e. increasing FPS). Input latency is the same if you're simply doubling your FPS and halving inter-frame latency. However, if you run a single card at 60 FPS vs. a dual-GPU setup at 60 FPS, the second will have additional input latency, since the frametimes per card are twice as long.
The worst case for a single GPU is 1-2 frames; that becomes 1-1.5 frames that are twice as long, i.e. 2-3 frames (measured in single-GPU frametimes). That's the additional frame of input latency.
They're definitely talking about the same thing. However, Nvidia have specified that they are talking about 60 FPS, so one could easily multiply the frame counts by 16.67 ms per frame and get the input latency in milliseconds.
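Doing that conversion explicitly, under the 60 FPS assumption stated above:

```python
# Convert the frame-based worst-case latency figures to milliseconds.
frame_ms = 1000 / 60                         # ~16.67 ms per frame at 60 FPS

single_worst = (1 * frame_ms, 2 * frame_ms)  # 1-2 frames on a single GPU
# In AFR at the same FPS each card's frametime is twice as long,
# so 1-1.5 of those frames equals 2-3 single-GPU frames.
dual_worst = (2 * frame_ms, 3 * frame_ms)

print([round(x, 1) for x in single_worst])   # [16.7, 33.3]
print([round(x, 1) for x in dual_worst])     # [33.3, 50.0]
```

So the "additional frame" of input latency comes out to roughly 16.7 ms at 60 FPS.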
All I'm honestly confused about is that AMD are implying the frames are rendered at the same time as in an unmetered setup but only PRESENTED with a delay, while Nvidia are implying they are RENDERED with a delay relative to the first GPU. The way I understand it, that render delay does not increase input lag, as it allows for additional input, while in the unmetered setup the second GPU is mirroring the first and practically doing nothing.
Definitely, that is what they are doing.
Thank you for your input! Would be glad if you could chime in again.
EDIT: To clarify, my question is this: how could you render now what's supposed to happen a few milliseconds later? How would you predict my input? Is the second GPU actually rendering the frame at exactly the animation timestep it should be rendering at? How would that be possible if it has yet to receive my newest keyboard and mouse input, which happened JUST before this timestep was reached?
I'm just waiting for Hilbert's take on them, and an updated set for us single-card users. Not that I need any new drivers, as everything's running great; it's just been a while since I installed some.
It wasn't just Hilbert who said the new driver has been delayed, btw; a few other reputable sites posted the same thing today. I swear to god, if this driver takes CF to new levels and I nail my interview tomorrow, I'll be ordering another 7950 soon.
They gotta be kidding, right? Lol
It's alright to cry
Show early results looking awesome, tell them to wait four or so months, announce an exact date, tell them early in the morning on that exact date that you're going to release shortly, and then go silent and not release anything.