Discussion in 'Videocards - AMD Radeon Drivers Section' started by PrMinisterGR, May 4, 2015.
It's a CPU-bound situation. That's why, with an i3, the 750 Ti performs better than the 280.
You actually can. It doesn't say that one card is better than another; it tells you that fps will suffer if the game pushes more work than the driver can feed to the GPU.
Check Space Marine in my benchmarking post. 1080p performs exactly the same as 1440p; only GPU utilization rises. The fps is the same because the driver doesn't send more requests.
But compared to the previous driver, it now has higher fps, because the new driver feeds more data to the card.
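The situation described above can be sketched as a toy model (hypothetical numbers, purely illustrative): the fps you actually see is capped by whichever side is slower, the driver's ability to submit work or the GPU's ability to render it. That's why lowering the resolution doesn't help when the driver is the bottleneck, and why a faster driver raises fps without any hardware change.

```python
# Toy model of a CPU/driver-bound frame rate (illustrative numbers only).
# Delivered fps is limited by the slower of the two sides:
# the driver's submission rate, or the GPU's rendering rate.

def delivered_fps(driver_fps_cap, gpu_fps_cap):
    """fps the player actually sees."""
    return min(driver_fps_cap, gpu_fps_cap)

driver_cap = 90  # hypothetical draw-call submission limit

# Same driver cap at three resolutions; the GPU slows down as the
# resolution rises, but fps only drops once the GPU becomes the limit.
print(delivered_fps(driver_cap, 200))  # "1080p": GPU has headroom -> 90
print(delivered_fps(driver_cap, 140))  # "1440p": GPU still faster  -> 90
print(delivered_fps(driver_cap, 70))   # "2160p": now GPU-bound     -> 70
```

With a faster driver (say a cap of 120 instead of 90), the first two cases jump to 120 while the GPU-bound case stays at 70, which matches the behaviour reported above.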
I am sure they did something to the very old driver core too, because Dead Island (DX9) now has much higher fps than on previous drivers as well.
Now reviews/benchmarks can be done with the official 15.7, which includes the overhead improvements. A lot of reviewers didn't use the Windows 10 modded drivers on, say, 8.1, so it will be good to see how they compare now.
Yeah, I know. But gotta be realistic: can't get 1M from one driver upgrade easily, so we should be satisfied with even smaller upgrades. AMD also needs to put a lot of driver dev team resources into DX12 drivers.
I played Space Marine on my 270X with VSR at 1440p (15.5 with the VSR mod) and even that card held an almost constant 60 fps. Not very demanding. Pretty good game; clunky controls, unfortunately.
I played it back then on an HD 5870M, and it ran very well. I like how they used contrast-based AA.
Basically, it applies AA only around edges where objects' materials have a high brightness difference. That improved IQ and lowered the HW requirements a lot.
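A minimal sketch of that idea (my own illustration, not the game's actual shader, which would run on the GPU): compute per-pixel luminance and flag only the pixels whose neighbour contrast exceeds a threshold, so the expensive blur/AA work is limited to visible edges.

```python
# Sketch of contrast-based edge selection for cheap AA
# (illustrative CPU code; a real implementation is a pixel shader).

def luminance(rgb):
    r, g, b = rgb
    return 0.299 * r + 0.587 * g + 0.114 * b  # classic luma weights

def aa_mask(image, threshold=0.1):
    """Return a boolean mask marking pixels whose right/down neighbour
    differs in luminance by more than the threshold; only those
    pixels would receive the AA blur."""
    h, w = len(image), len(image[0])
    mask = [[False] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            lum = luminance(image[y][x])
            for dy, dx in ((0, 1), (1, 0)):  # right and down neighbours
                ny, nx = y + dy, x + dx
                if ny < h and nx < w:
                    if abs(lum - luminance(image[ny][nx])) > threshold:
                        mask[y][x] = mask[ny][nx] = True
    return mask

# A hard white/black boundary gets flagged; a flat region does not.
edge = [[(1.0, 1.0, 1.0), (0.0, 0.0, 0.0)],
        [(1.0, 1.0, 1.0), (0.0, 0.0, 0.0)]]
print(aa_mask(edge))  # pixels along the vertical edge are True
```

Since flat interior regions are skipped entirely, most of the screen pays no AA cost at all, which is where the performance win comes from.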
Thanks to that, it does 95–192 fps at 2160p for me now.
But the point is that while fps at lower resolutions is very good, it could be considerably higher, as the GPU is quite underutilized.
The 3 months would be mainly my time, but add maybe a few weeks of 2 other people for tools support, shader/material pipelines, etc. So probably 4 "man-months", then a few more months fixing bugs and refining multi-threading, with an end goal of visual parity.
I didn't like the AA much, hence VSR. The game was a bit empty, and it left us on a cliffhanger. They were trying too hard to be "cinematic". To me it felt like it needed one more iteration to really come together; that's the difference between a very good game and a truly great one.
I consider the SSAA with LOD-offset feature (the one that has been available in CCC for years on virtually every supported card) a better choice of SSAA than VSR. They are basically the same, but the SSAA in CCC tries to counter the texture blur / loss of detail (it's not like MLAA/FXAA blur, but it's inevitable to lose SOME detail during re-sampling) by forcing a negative LOD offset on textures, theoretically giving you more detailed textures to begin with, before the down-sampling, and thus the blurring, happens.
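The usual rule of thumb for that offset (my sketch, not AMD's documented formula) is a negative mip bias of log2 of the per-axis supersampling factor: rendering at k×k samples per pixel shifts the ideal mip level down by log2(k), so biasing the sampler keeps texture detail that would otherwise be averaged away in the downsample.

```python
import math

# Rule-of-thumb texture LOD bias for ordered-grid supersampling:
# rendering at k x k samples per pixel means each texel covers
# log2(k) fewer mip levels, so the sampler bias compensates.

def lod_bias(ss_factor_per_axis):
    """Negative mip bias for a given per-axis supersampling factor."""
    return -math.log2(ss_factor_per_axis)

print(lod_bias(2))    # 2x2 SSAA (e.g. 1080p rendered at 2160p): -1.0
print(lod_bias(1.5))  # fractional scale: about -0.585
```

This is also why plain VSR without a matching bias tends to look slightly softer on textures: the downsample happens, but the sharper mip levels were never sampled.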
I tried overriding the AA settings for Space Marine in CCC, but it did not work at all for me. Probably something to do with it being an AMD-sponsored game and having a special (hidden) preset in Catalyst already. I then tried VSR; it worked at a stable 60 fps and I finished the game.
I had more success with the CCC AA override in Hand of Fate. Really visible on the card edges.
Well, yes, that is a limitation. CCC usually tries to pull the desired AA level (off, 2x, 4x, 8x) from the game menu and only overrides the method and/or filter (adaptive and/or CF MSAA or SSAA), despite the sample-level setting being present in CCC (which may or may not override the level in the game menu when it is set to anything other than off; I'm not sure). So you can end up with no AA at all if the game itself doesn't support any kind of AA, and you can't change the method if it's some kind of custom post-processing (not standard MSAA or SSAA).
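The behaviour described above can be sketched like this (my interpretation of the posts in this thread, modelled as plain Python; this is not AMD's actual driver logic):

```python
# Hypothetical model of the CCC "override application settings" path,
# as described in this thread (not real driver code).

def effective_aa(game_level, game_method, ccc_method):
    """game_level:  AA sample count chosen in the game menu (0 = off).
    game_method:  'msaa', 'ssaa', or 'custom_postprocess'.
    ccc_method:   method/filter forced in CCC, e.g. 'ssaa'."""
    if game_level == 0:
        # CCC pulls the level from the game; with AA off there is
        # nothing to override, so you get no AA at all.
        return (0, None)
    if game_method == 'custom_postprocess':
        # Custom post-process AA (FXAA/MLAA-style) can't be re-routed
        # to a standard hardware method.
        return (game_level, game_method)
    # Standard MSAA/SSAA: keep the game's level, swap in CCC's method.
    return (game_level, ccc_method)

print(effective_aa(0, 'msaa', 'ssaa'))  # game AA off -> (0, None)
print(effective_aa(4, 'msaa', 'ssaa'))  # 4x MSAA -> (4, 'ssaa')
print(effective_aa(4, 'custom_postprocess', 'ssaa'))  # unchanged
```

Both failure modes people report in this thread fall out of the first two branches: nothing happens when the game offers no AA, and nothing happens when the game's AA is a custom post-process.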
Is the driver overhead also reduced in DirectX 10 games (e.g. Flight Simulator X)?
Almost nothing works from the CCC. There is no real way to truly force anything. Only older titles really work.
No DX12 CrossFire support yet.
The API overhead test gives me similar results for Mantle and DX12 with or without CFX enabled.
Do you have to have it enabled at the driver level for DX12? Isn't it supposedly the app that does that?
May be because the 3DMark API overhead test is basically a CPU benchmark...
It's a benchmark of raw multi-threaded CPU performance in DX12 and Mantle modes, and either 1) a benchmark of the VGA driver's CPU optimizations on a given CPU, or 2) a CPU benchmark if you test different CPUs with a given VGA driver.
Yes, and the 3DMark API test maxes out a 4c/8t CPU on a 290X in Mantle or DX12 mode, so you need more CPU cores.
Or you can try underclocking your card's core to bring performance down and test whether CF works.
These are my results with Catalyst 15.7 on Windows 10 build 10240.
The DX12 results are mind boggling.
This is the graphics score:
Yes. It's interesting how a low-level but still manufacturer-independent API seems to outperform an equally low-level but manufacturer-specific one in raw CPU performance. Although MS obviously knows the Windows kernel better, and could even specifically alter the Win10 kernel to accommodate the needs/wishes of the new low-level DX API (which might or might not help a theoretical future Mantle version as well).
I wonder if AMD's high-level DirectX and low-level Mantle drivers have optimizations for "generic amd64" and some specific AMD CPU architectures (automatic compiler optimizations, or maybe even some hand-written ASM) but aren't optimized at all for Intel CPUs (no support for instruction sets that could be handled by, or would be faster on, Intel CPUs but not on AMD CPUs). It would make sense from the standpoint of a CPU manufacturer and "platform pusher". In contrast, from nVidia's point of view it's better to optimize for both (since they only sell GPUs and don't directly compete with AMD on the CPU front), or for Intel CPUs only (since they indirectly do compete with AMD; although it's probably more important for them to make their GPUs run as fast as possible on any CPU than to indirectly undermine AMD's CPU business, and they don't even want a CPU monopoly).