Discussion in 'Videocards - AMD Radeon Drivers Section' started by LocoDiceGR, Nov 29, 2017.
Wow. My Fury can't go above 1070MHz, even with a +20% power limit. I will try more now.
For real. I haven't had to use DDU for a few months, but after I hooked my monitors up to my Fury after using them on an 8400GS, it went all stupid.
DDU is amazing.
I said I always use DDU and I flag everything for removal. It still happens. Is my PC under some demonic presence?
Do these latest drivers still fail to install under Windows 10 2016 LTSB?
@AlleyViper yep, at least here Windows warns that this driver is not written for this OS, so I go into safe mode, use DDU of course, and return to version 17.11.1.
What a let down. Thanks for your quick answer!
Must be, but according to all my friends, I'm just that lucky. Most of the time, things just work. Or eventually do.
I gave my card the "treatment" some time ago by replacing the thermal pads with Fujipoly ones rated at 17.0 W/mK thermal conductivity.
That still didn't help with stable OC'ing past 1100MHz, though. But it did bring the VDDC and other temps down to GPU temperature levels, so it was still far better than factory.
Highest I got my Fury was 1150/575. But it was too toasty even with the fans at 100%. I actually run it at stock clocks (1050/500) for gaming and it stays at 63°C or cooler with a 75mV undervolt.
That's a tad high for a Nitro card at stock settings. What are your ambient temps? When was the last time you re-applied TIM to your GPU/HBM? The fans should be ramping up to keep temps down, unless you prefer silence over cooling?
Here take a look at mine:
My Fury X ran 46°C with stock fan settings, lol, jeez. It's on stock TIM. My room is kinda warm, about 70-72°F during the day. I even have a 60mm blower exhaust fan under my Fury to dump the heat out too. I do have a custom fan profile that keeps the fans off till 50°C, then ramps up: about 1800 RPM around 60°C, and about 60% fan speed around 65°C. Currently 43°C idle with 3 monitors and an NFL game streaming on Chrome.
Testing max fan speed: 3620 RPM. It's loud, lol. Pushed the card to 33°C and dropping.
Could make it more aggressive for gaming. I don't really hear the fans at 2600 RPM.
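The fan profile described in these posts (fans off until 50°C, roughly 1800 of a 3620 RPM max around 60°C, ~60% at 65°C) is essentially a piecewise-linear curve. A minimal Python sketch, with the breakpoints taken from the posts and the 75°C/100% endpoint being my own assumption; it doesn't talk to any real hardware:

```python
# Rough sketch of the fan profile described above: fans off below 50C,
# ~50% (1800 of 3620 RPM) around 60C, ~60% at 65C. The 75C/100% top
# breakpoint is an assumption, not from the posts.
CURVE = [(50, 0.0), (60, 50.0), (65, 60.0), (75, 100.0)]  # (temp C, fan %)

def fan_percent(temp_c):
    """Piecewise-linear interpolation between the curve's breakpoints."""
    if temp_c <= CURVE[0][0]:
        return 0.0  # below the first breakpoint the fans stay off
    for (t0, p0), (t1, p1) in zip(CURVE, CURVE[1:]):
        if temp_c <= t1:
            return p0 + (p1 - p0) * (temp_c - t0) / (t1 - t0)
    return CURVE[-1][1]  # clamp at max above the last breakpoint

print(fan_percent(45))  # 0.0 (fans off at idle)
print(fan_percent(60))  # 50.0
print(fan_percent(65))  # 60.0
```

Tools like Wattman or MSI Afterburner do the same interpolation between the points you drag on the curve.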
Just did some testing: 1156MHz core, stock voltage of 1250mV, fan at 70%, and the Fury was around 50°C.
I see, as long as it's under your control then temps are fine.
I did a quick rundown between OCs. Not bad results... I wonder what changed in the drivers...
Firestrike doesn't run for me, nor really any other 3DMark benchmark. I don't bother with it anymore. Glad I only paid $5 for 3DMark.
It constantly says "cancelled by user". It does it on my server, which has 1 monitor and a GTX 650 Ti 1GB. I popped my Fury in it, and it still did it.
I have run stock settings on the CPU, RAM even at 2133, GPU at stock, and only 1 monitor, and it still does it.
Oh well, I get black stretched artifacts (like when VRAM craps out) in Deus Ex: Mankind Divided on the open streets of Prague. It only happens on DX12, and DX11 seems to cure it (unfortunately it's much slower on my ancient CPU), but I hoped Eidos or AMD would have sorted those problems by now. I've seen complaints about it from RX 4xx and Maxwell/Pascal users on the Steam forums since last year.
Your GPU is a bit newer than mine, but I had to give up on DX12 for Mankind Divided due to how buggy and crash-prone it was. That was a few months back, though, when I (finally!) completed the last third or so of the game, so more recent drivers and perhaps even OS updates such as RS3/1709 might have improved this. Performance improved by around 15% at best in the quick comparisons I did, but it just wasn't stable.
Other DX12 titles have been mostly fine (a slight issue with texture quality in The Division, fixed by the developers in a later update), so it was likely something on the game's side. I don't think AMD has done much tinkering with the game on the driver side for a while now either, to whatever extent that's possible for a lower-level API such as DX12 (or Vulkan), but there's probably still room for some optimization even there, with much of it being in the hands of the developers themselves.
EDIT: Though it would seem Fiji (Fury and Nano) is pretty much a dead GPU as far as AMD is concerned, minus bug fixes and enhancements benefiting all currently supported GCN GPU models, especially going by recent statements on the AMD forums when asked about stuff like HBCC. AMD's representative could have said it's unsupported due to architectural differences found in Vega, but they specifically pointed out the GPU as being too old.
(Launched ~2015, so a couple of years; a bit aged, but not too bad.)
Whereas the 400 series, even if AMD is more interested in giving the 500 series some tweaks, is still Polaris and should benefit from whatever AMD does to that architecture, and then of course Vega is its own thing as the newest GPU available.
Though some games have, during testing, shown the 500 series GPUs seeing improvements where the 400 series does not, so who knows how this is divided up in the actual driver code. GCN might be more compatible and easier to develop for, but it's probably still split up, not just from GCN 1.0 to GCN 5th gen (or whatever it's called) but also between GPU models such as the 400 and 500 series, even if both fall under the Polaris architecture and the 4th generation of GCN.
It could be that some reviews comparing newer RX 580s are using AIB models while keeping the older RX 480 as a reference model (from when it came out). The reference RX 480 was badly capped by TDP, struggling to maintain stock clocks at the default power limit, and that could hinder driver gains versus a decent RX 580 that scales more naturally.
In Deus Ex it's about a 10% loss in the benchmark at high settings under 1080p (53 to 48 fps avg) going from DX12 to DX11, but in actual gameplay that translates to 55 instead of 60 fps in the most common but less demanding open areas, causing some stutters due to frame pacing, even with triple buffering enabled under vsync.
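The stutter at ~55 fps under a 60 Hz vsync comes down to quantization: any frame that misses the ~16.7 ms refresh deadline is held on screen for a second interval, so presented frame times alternate instead of staying even. A small illustrative calculation (not a renderer, just the arithmetic behind the stutter):

```python
import math

REFRESH_MS = 1000 / 60  # one 60 Hz refresh interval, ~16.67 ms

def displayed_ms(render_ms):
    """Under vsync a frame is shown for a whole number of refresh intervals."""
    return math.ceil(render_ms / REFRESH_MS) * REFRESH_MS

# A frame rendered in 16 ms makes every refresh; one taking ~18.2 ms
# (i.e. ~55 fps) misses the deadline and is displayed for two intervals,
# so presented frame times alternate between ~16.7 and ~33.3 ms.
print(round(displayed_ms(16.0), 1))  # 16.7
print(round(displayed_ms(18.2), 1))  # 33.3
```

That 16.7/33.3 ms alternation is exactly what frame pacing tries to smooth over, and why adaptive sync hardware avoids the problem entirely.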
Yeah, the Prague parts of Mankind Divided in particular use a streaming system, so as it loads and unloads data there can be noticeable stuttering. RAM usage also builds up, and the highest texture quality setting easily exceeds 4 GB of VRAM after traveling around for a while. Loading times also steadily increase, which can be seen in the station transits between the two main city blocks or when going on an actual mission, but with 8 GB it's probably less of an issue for you.
DX12 is able to allocate and manage this better, though it's also able to leverage AMD's GPUs a bit better overall, which most benchmarks also reflect. The hardware improvements in Polaris allow it to keep up with Fiji in several DX11 games, which is a pretty clear indication of a bottleneck of some sort, whereas in DX12 Fiji is usually able to pull ahead thanks to its faster shader performance.
Much like you said, the third-party overclocked and overvolted 580s really pull ahead, and I guess tests using these could have re-used performance figures from earlier tests of the 480s at stock, hence the difference between the two in benchmarks like that.
For my own tests with the game I actually used the main menu screen for the quicker comparisons: 48-50 FPS with DX11, and then 58-60, which is what it's capped to, with DX12. The actual game is more varied depending on the area, and Prague is still very demanding even without VRAM as the bottleneck, though the extra speed alleviates the streaming a bit.
(Well, more than a bit, since if VRAM runs low the speed isn't going to matter that much. 4 GB is starting to really hinder GPUs now if you push the settings up, and HBM doesn't make a difference here.)
DirectX 12 also enforces certain optional (seldom used) standards from DX11, such as allowing for a flip queue (Windows 8 and newer) and flip discarding (Windows 10 RS1 and newer, I think it was). I tend to cap the framerate and leave VSync off, so these don't affect things too much for me, but if set up, and the user makes use of VSync, they can remove a bit of latency and smooth out framerate fluctuations.
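The frame cap mentioned here works by padding each frame's work out to a fixed time budget. A toy sketch of the arithmetic (real limiters like RTSS or in-engine caps do this with high-resolution timers and actual sleeping; this just shows the idea):

```python
# Minimal sketch of a frame-rate cap: after rendering a frame, idle
# for whatever is left of the per-frame budget. Purely illustrative.
def wait_for_budget(frame_work_ms, cap_fps):
    """Milliseconds left to idle so work + wait hits the frame budget."""
    budget_ms = 1000 / cap_fps
    return max(0.0, budget_ms - frame_work_ms)

# A 10 ms frame under a 60 fps cap leaves ~6.7 ms of slack;
# a 20 ms frame already blew the budget, so no wait at all.
print(round(wait_for_budget(10, 60), 1))  # 6.7
print(wait_for_budget(20, 60))            # 0.0
```

Unlike VSync, nothing here is tied to the display's refresh, which is why a cap avoids the vsync latency penalty but can't eliminate tearing.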
(Windows 7 is popular, and setting these carelessly would break Win7 support, so that's maybe one reason we seldom see them used by DX11 games. Just speculation, though.)
Hardware-wise, G-Sync and FreeSync have their own method too, which, when it works, also helps a lot with this issue and some others.
Flip discarding was included in the 1507 RTM of Windows 10.
TH2 then, if I remember the names correctly. Hah, this six-month upgrade schedule isn't the easiest to keep track of (though it hasn't been in use for that long), but it does at least guarantee quick updates to the OS, and MS keeps the monthly cumulative updates to mostly bug fixes and security enhancements.
And then getting used to settings being reset and learning all the changes, and what's been added or removed, twice a year, ha ha. Eh, it's not all bad, I suppose, and more advanced configurations are also possible by modifying the ISO to have some of these tweaks in place immediately on install, instead of having to pull the network cable (bad OS! Drop those old drivers you're trying to pull from WU) and perhaps clean up some less-used apps and whatnot.
Well, that's Windows 10 and everything that comes with it, and this is AMD's driver thread, so here's hoping there's only a week or two until the 17.12.1 drivers hit and we can get some info on what they've changed this time.
(Probably Vega-oriented for the actual performance improvements, but we'll see. The OSD might have some handy stuff if that's what they'll be adding, and hopefully some bug fixes in general and maybe some other enhancements.)
We want Wattman fixed and Enhanced Sync for everyone