Discussion in 'Videocards - AMD Radeon Drivers Section' started by Hilbert Hagedoorn, May 4, 2017.
Nice to know in the fun way.
Arkham Origins is DX11.
That was most likely caused by hooking issues and interference from third-party overlays like Steam/RTSS, something a driver-level implementation wouldn't face.
Can you give me one title as an example? I don't remember TB not working with anything that RadeonPro actually worked on.
This is just a shame. Fast Sync is nothing like triple buffering, btw. As for whether it's possible to force it everywhere: it's actually on the list of features that AMD is taking feedback on about implementing.
Interesting. In any case, I think that especially for DX11 games, the success rate with force enabling/disabling triple buffering might have been too low for Nvidia or AMD to feel the need to add such a feature to their drivers.
With DX9 I'd expect a higher success rate, as you were able to force things on, like MSAA, that you couldn't really do with DX10+, or at least not effectively. Probably due to the extensive rendering path rewrite between DX9 and DX10/11.
"To highlight that it's possible to force it everywhere"
I'm not sure if that's actually possible, or whether people just submitted it and it got a lot of votes.
According to just about everything I've read (and tried to understand) on the subject, it's not possible to actually force-enable triple buffering in Direct3D if the game engine does not natively support it. A lot of people, I think, mistake triple buffering for "rendering ahead", which Nvidia already exposes in the NVCP as Maximum Pre-Rendered Frames (AMD doesn't). That setting increases input lag, whereas true triple buffering shouldn't affect lag that much.
I could be completely wrong, but that's what I've seen.
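The difference between the two can be sketched in a toy simulation (hypothetical model, no real graphics API involved): a FIFO render-ahead queue shows frames in order, so the frame on screen is roughly queue-depth ticks old, while true triple buffering always flips to the newest completed frame.

```python
# Toy model (not any real API): compare how old the displayed frame is
# for a FIFO render-ahead queue vs true triple buffering.
from collections import deque

def fifo_frame_age(queue_depth, ticks=10):
    """Frames wait in a FIFO queue and are displayed in order, so a
    frame rendered now appears roughly queue_depth ticks later."""
    queue = deque()
    ages = []
    for tick in range(ticks):
        queue.append(tick)               # frame rendered this tick
        if len(queue) > queue_depth:     # queue full: display the oldest
            shown = queue.popleft()
            ages.append(tick - shown)    # age of the frame on screen
    return max(ages)

def true_tb_frame_age(ticks=10):
    """True triple buffering: the newest completed back buffer replaces
    the pending one, so the displayed frame is always the freshest."""
    ages = []
    for tick in range(ticks):
        pending = tick                   # newest frame overwrites pending
        ages.append(tick - pending)      # displayed frame is current
    return max(ages)

print(fifo_frame_age(3))    # a 3-deep render-ahead queue shows frames 3 ticks late
print(true_tb_frame_age())  # true TB shows the frame just rendered
```

Under this (simplified) model the render-ahead queue adds latency proportional to its depth, which matches why Maximum Pre-Rendered Frames is known to increase input lag.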
That list is created by AMD, not users. You can only vote on it, not add or remove proposed features. It is completely possible to enforce triple buffering, especially if you handle it at the driver level. Pre-rendered frames and flip queue size are, again, a different thing. There are flip queue controls that currently work in the driver, and they are again accessible through RadeonPro and RadeonMod.
Triple buffer in the nvcp has worked for me since the age of FX 5200. I've gotten it to smooth out numerous FIFA/Sports games where enabling Vsync would drop my framerate to half of what my refresh rate is. Today's fancy DWM or borderless windowed mode didn't even exist back then. In games like Crysis, running it on Windows Vista SP2 without that Triple Buffering option marked resulted in framerate dipping to 30 when Vsync was enabled, so don't believe everything you read on Reddit. Things changed since Windows 8/10 I'd imagine but I remember clearly that Triple Buffering set nvidia and AMD apart when it came to playing older titles like Assassin's Creed II when it originally launched.
Also, Triple Buffering is highly discouraged for SLI/CFX configurations, and IIRC it just wouldn't work/play nicely with multiple GPUs.
From what I'm reading, what we are actually able to force on for D3D games and borderless windowed mode is not true triple buffering. It has a queue of 3 buffers, yes, but it's not actually triple buffering, which is why people are always complaining about it introducing latency. Nvidia's Fast Sync (which I have yet to try on my 1080s) is supposedly as close as you can get to forcing true triple buffering in D3D games. Apparently what Nvidia, Microsoft, and AMD refer to as "triple buffering" isn't actually correct; it's referring to a swap queue/render-ahead queue in D3D.
"There are flip queue controls that currently work in the driver and they are again accessible through RadeonPro and RadeonMod."
Right, but not in the official control panel. Perhaps that is what they are asking to enable?
But it's not just Reddit. There are official Nvidia forum posts claiming that it will not affect anything but OpenGL games. Maybe it used to do something in older D3D/Windows versions and doesn't anymore?
Also, from what I understand, enabling CFX/SLI automatically enables triple buffering (or what we call TB), as it is required for AFR to work.
I have already stated that the option made a difference prior to Windows 8/10, so yes, it made a difference in older OSes. Now most games ship with triple buffering built in, as I haven't had to deal with Vsync causing my framerate to tank to half of my refresh rate.
No, flip queue (as you mention) is a different thing from triple buffering. Triple buffering is actually a back-buffer queue of 2, plus the front buffer being displayed.
Which I guess (if I'm reading this correctly) seemingly supports the theories out there that to have true triple buffering in games, you need the game engine to implement it. I've seen the option recently, like in Thief (which I gather uses UE3 or some custom version of it), where you can actually choose between triple and double buffering, and in DX:MD.
From what I gather, the reason the "forced" triple buffering in fullscreen exclusive D3D isn't real triple buffering (and thus can introduce noticeable latency) is that D3D has no mechanism baked in to discard stale buffers when they aren't needed or desired. Does that make sense to you, or am I misreading this stuff?
In any case, I can't make the "Triple Buffering" option in the NVCP do anything on my system, no matter what I do, for D3D games in W10, even disabling SLI for troubleshooting (apparently for AFR/SLI to work, a pseudo-TB has to be in effect anyway).
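That no-discard point can be illustrated with a toy sketch (assumed behavior, not real D3D calls): when the GPU finishes several frames per vsync, a FIFO-style swap queue must show every frame in order and falls further behind, while a discard/Fast Sync style chain drops stale frames and scans out only the newest one.

```python
# Toy sketch (assumed behavior, not real D3D calls): the GPU renders
# 3 frames per vsync; compare which frame IDs actually reach the screen.
def frames_shown(rendered_per_vsync, vsyncs, discard_stale):
    shown = []
    pending = []
    frame_id = 0
    for _ in range(vsyncs):
        # GPU finishes several frames between two vsyncs
        for _ in range(rendered_per_vsync):
            pending.append(frame_id)
            frame_id += 1
        if discard_stale:
            shown.append(pending[-1])     # true TB: only the newest survives
            pending.clear()
        else:
            shown.append(pending.pop(0))  # FIFO queue: oldest shown first
    return shown

print(frames_shown(3, 4, discard_stale=False))  # FIFO: stale frames pile up
print(frames_shown(3, 4, discard_stale=True))   # discard: always the latest
```

In the FIFO case the screen is still on frame 3 by the fourth vsync even though frame 11 has been rendered; the discard case shows 2, 5, 8, 11. (Simplified: a real FIFO swap chain blocks the renderer once its queue fills, which is what caps the framerate under Vsync, but the latency behavior is the same idea.)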
The Triple Buffering option in Radeon Settings only applies to OpenGL games. It has no effect on DirectX games, which is why we needed to use a third-party application to enable it for DX. Same for NVIDIA.
Yeah, the original contention, I think, was that AMD labels the option in its current control panel "OpenGL Triple Buffering", whereas Nvidia just says "Triple Buffering" with no mention of which API it actually works with.
Then I quickly took it to a dark place with breaking down the very definition of what triple-buffering actually is, and well....
On the NVCP, if you read the description at the bottom when you highlight "Triple Buffering", you will see they mention this is for OpenGL.
x2, I remember that OpenGL detail in the description since the 6800GT AGP days. Edit: nope, even before that, I was using it in IL-2's OpenGL mode on an FX 5900.
AMD has stated in the past that they won't add TB for DX, as it's outside the scope of MS's recommendations.
MS recommends the game engine implement it.
ALL third-party implementations of TB are not true TB, per JapAMD, who wrote RadeonPro.
Sadly, this was a one-off event.
Most times I start my PC, it turns itself off just after the Windows 10 logo appears, or it simply hangs on the logo screen. After turning the PC back on, or doing a hard reset from the hang, it normally boots.
And then after logging in, there is a message about Wattman experiencing an error and being reset.
All I have adjusted on Global Wattman is turning "Chill" on, which has only been in the last few days, normally it has been completely untouched. I use Sapphire Trixx 6.30 to set a custom fan profile.
I don't see anywhere in the description that mentions anything regarding Triple Buffering being limited to D3D or OpenGL.
However, the most common mistake people are making is this.
When you hover your mouse over the Vsync Option, this is what it says:
So the Vsync description of its usage scenarios is being passed off as the Triple Buffering implementation, eh?
What's next? "If you can't enable Vsync for a game from the NVCP, then Triple Buffering is useless, therefore Triple Buffering is only supported in OpenGL titles." That's the logic that's going to get thrown at me, I assume?