Discussion in 'Videocards - AMD Radeon Drivers Section' started by fullban, Apr 14, 2015.
Obviously AMD can't solve DX11's limitations, but it can do something with its own drivers.
On paper, DX12 will solve all PC gaming problems.
I trust in the ability of game developers to code badly optimized games that require HUGE and increasing amounts of hardware power... year after year.
Optimization only seems to happen in the console world; on PC we suffer downgrades and abuse of hardware.
Or unoptimized DX11 drivers. Like the ones AMD has.
DX12 is so amazing for AMD, because it means a smaller driver footprint = smaller driver team investment.
All of the fanbois (me included) who believe that DX12/Vulkan is the greatest thing since sliced bread forget that most titles are going to keep being DX11 titles, simply because coding in DX12/Vulkan requires more work on the part of the developer.
That is so true. DX12 is only going to appear on Windows 10. How many people are going to upgrade to that? Probably not many. The onus is also on the developers to develop games that comply with the specification of an API.
AMD has to communicate better with its user base as well. I know of a software engineer who reached out to both Nvidia and AMD. Only Nvidia responded and rectified the issue with their drivers.
It has been over a year since then, and AMD has stayed silent and not solved the issue at all.
We can't put the blame entirely on AMD or NVIDIA for last year's game disasters; almost every single AAA title is launched unfinished and needs multiple patches, and the "day one patch/update" is a plague.
Developers have a big share in this, the biggest I think.
If games work better I will use W10, W8.1 or Linux Mint 1X.X, no problem; right now I have two W8.1, one W10 and one Linux Mint 17.1 installed on my PC.
We are on PC: we can try a new OS, use multiple OSes, or mod drivers. It's not a closed console box.
What does AMD need to communicate?
Something like the recent "apology about the confusion"?
What was the "confusion"?
Option 1: The "confusion" was not enabling VSR for all cards.
Option 2: The "confusion" was dropping the option to unlock via software the "hardware" scaler needed to add VSR support on GPUs that lack it (the 6000 and 7000 series), and the 4K VSR resolution on the "half-hardware" scaler of the 290(X)/295X2.
AMD has to lie less and code better drivers for its GPUs if it wants the customers who spent money on them to come back and buy an AMD GPU next time they need to upgrade.
Well, to be honest I'm not talking about either the gaming aspect of GPUs or the confusion about VSR. GPUs have more functions, like OpenCL or DirectCompute, which can be used for other purposes.
Some functions used to work correctly in the 2013 drivers; they broke after 13.12. Several people (including myself) have reported the issue to AMD multiple times over the years and it's still not fixed. Nvidia was more forthcoming in fixing the issues that were reported.
AMD just has to provide better service support, be it gaming or otherwise. It's totally underwhelming from them.
Windows 10 is a free upgrade if you have Windows 8, and even if the Windows 8 you have is pirated you can still upgrade for free. I suspect a crap ton of gamers will upgrade because of DX12. Thank AMD's Mantle devs for stoking the fire under Microsquish's asses.
MS developed DX12 because:
- "12" is "11"+"1" and it's nice to make things easy!
- MS wants a unified ecosystem around W10 including PCs, Xbox One, tablets and phones for THEIR OWN INTEREST.
These are the reasons they developed DX12 and made the Win 10 upgrade free: to make more money.
Do you really think MS developed DX12 because of Mantle?
Don't fool yourself; an API supported by 6 (6?) games is not "the next big thing" that will destroy the MS empire. LOL
Sure, they will use ideas, concepts and even code from Mantle if they can, but only because they will make more money unifying all the MS software "pieces" into a coherent ecosystem around W10 and DX12.
Not because they were facing a BIG GAME CHANGER.
MS thinks DX12 will be a game changer for Xbox One due to easier development, better-performing games, and easier ports to/from PC.
I don't think so; the hardware shortcuts (-$$$) they took with the Xbox One's memory and GPU will not be solved by DX12.
Just to post my situation with GTA V on my Crossfire setup (2x R9 290 Asus DirectCU II), using the latest 15.4 betas plus the default GTA V application profile provided with these drivers.
Running at 1920x1200@60Hz with most settings at Very High (I'll post the exact settings later), I'm having stuttering only while driving fast, which does not make the game unplayable as other users say. Most of the time the framerate is at 60 (using v-sync).
My homemade G19 applet always shows both cards' readings: Crossfire is enabled, the clocks are stable at 1000MHz/1260MHz (so no VRM overheating), and utilization is at ~45% per GPU.
I also noticed this game has among the highest CPU utilization I've seen (the others being Watch Dogs, Far Cry 3 and Far Cry 4). All 8 cores are active and total CPU usage is mostly at 50%, with peaks at 65%, without fps dropping at all.
All these show that there is no bottleneck on the CPU or the GPU.
The only strange thing about GTA V's settings is that the graphics menu says I can use up to 8GB of GPU RAM, which is technically wrong: in Crossfire the memory is mirrored, not added, so 2x4GB means 4GB of usable GPU RAM. If I'm not wrong, only DX12 will be able to manage the total GPU memory as one pool. Is GTA V counting the GPU memory without dividing it in two?
You are capping at 60fps and aren't going to experience the same issue as someone trying to run at 120/144Hz, where Crossfire usage definitely is being limited. The fact that GPU usage in Crossfire is only 40-45% at higher frame rates with no frame limiting demonstrates that. Even on modest settings the game will not maintain a steady 120fps because the GPUs are not being utilized properly; a single card gives more consistent frame rates than two.
If there were no issue, as you say, then there would be no reason for a single card to give better sustained performance at high frame rates than two, but sadly that is currently the case, as several people are experiencing.
You got me wrong. Everything I said applies to my system only.
We're in the same boat here. I agree that something is wrong with Crossfire scaling, and a little stuttering is present on my system too.
At least on my system/setup, the game is not as unplayable as some users say.
I hope a fix comes out soon (and not the way it went with Watch Dogs and Far Cry 4).
Both Crossfire and SLI have bad fluctuations, according to the HardOCP review.
The ups and downs in framerate are apparently excessive, with Crossfire being a bit worse:
So multi-GPU scaling is not great on either side, with AMD performing worse as usual in both single-GPU and Crossfire. I guess we have to wait for the next beta Catalyst drivers.
The latest patch from R* also seems to drop performance in game.
Of course there are going to be fluctuations in framerate... dear lord. Whether it's clearly "stuttering" is a different thing.
It's totally smooth for me.. no issues, running 1440p @ 114hz. Smooth as butter. Almost 6GB vram used of 6GB available.
Make sure you have ULPS disabled, and those of you running Intel processors, make sure you have all CPU cores unparked too.
Use this http://www.coderbag.com/Uploads/Unpark-CPU-App.zip
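If you'd rather not use a third-party tool, here's a rough sketch of doing the same from an elevated command prompt, saved as a .bat file. This is the usual community tweak, not an official AMD/Microsoft procedure; the `CPMINCORES` alias is the standard core-parking setting, and the ULPS hack assumes the well-known `EnableUlps` value under the display-adapter class key (the `00xx` subkey numbers vary per system). Back up your registry first.

```shell
:: Unpark all CPU cores on the active power plan.
:: CPMINCORES = "Processor performance core parking min cores"; 100 = never park.
powercfg /setacvalueindex scheme_current sub_processor CPMINCORES 100
powercfg /setdcvalueindex scheme_current sub_processor CPMINCORES 100
powercfg /setactive scheme_current

:: Disable ULPS by setting EnableUlps=0 on every display-adapter instance.
:: {4d36e968-...} is the display-adapter device class; iterate its 00xx subkeys.
for /f %%k in ('reg query "HKLM\SYSTEM\CurrentControlSet\Control\Class\{4d36e968-e325-11ce-bfc1-08002be10318}" ^| findstr "\\00"') do reg add "%%k" /v EnableUlps /t REG_DWORD /d 0 /f
```

Reboot afterwards so the driver picks up the ULPS change; you can verify core parking in Resource Monitor (parked cores show as "Parked").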
That image you linked to is for single GPU.
You're looking at this:
As for their testing methodology at 2560x1440: "hey, let's max everything and compare two 4GB cards in SLI/CF to a single 12GB card! Why not! Oh, it stutters? How come...". Then they discuss scaling and say the game needs it at 4K WITHOUT testing at 4K, where we could see whether or not the game is bottlenecking at high framerates. No comment.
That test does not reflect what we're experiencing. We're not using settings that cause us to run out of VRAM. I also do not have stutter at any framerate and trust me, I could spot stutter in an instant.
It's actually relatively fine for me during normal gameplay, except for the occasional drop accompanied by stutter and lag. Strangely enough, cutscenes always micro-stutter, even when locked at 60fps. Not unplayable, though, just annoying.
Sorry for that, wrong paste
You are right, it's not fair to compare 4GB cards to 12GB cards.
If we take the Titan out of the chart and look at the comparison between 980 SLI and the 295X2:
- CFX frames are horrible in this chart; they stay under 60 maybe half the time, with deep drops even into the 30s and 40s.
- SLI is under 60 only about 20% of the time; it doesn't have those deep fps drops, and fps stay higher and more stable than CFX.
From my experience, even at 1080p a 290X CFX is unable to sustain 60 fps with the in-game settings maxed.
Stand on top of a roof during the day and you will see sub-50% usage on 2x290X and 40-ish frames... at 1080p with the GPUs OCed (1150/1600).
I have pics staring at the panorama: 35 fps, 52% GPU usage without OC (1080/1250), at 1080p.
For years I've read the mantra that a single 290X is enough for 1080p gaming and a CFX is overkill.
Playing this game I don't see the CFX overkill; a locked 60 fps at 1080p is not possible without compromising on in-game effects like MSAA and others (ultra grass?).
Running GTA V with my two R9 290Xs without a problem. Scaling is in the high 90s. I did disable vsync, though.
Sure, it scales well, using 90%+ of the GPUs at 90+ fps... indoors.
Outdoors I don't see CFX scaling well.
Try driving, or simply stand on top of a building looking at the horizon; both GPUs will be at around 50% usage and fps around 40.
It's nonsense to have a 290X CFX and still need to turn down graphics options to stay above 60 fps at 1080p.