Discussion in 'Videocards - AMD Radeon Drivers Section' started by freibooter, Aug 2, 2013.
Um wtf I just replied to your post...now it's gone..hmmm
How many people sent the bug report?
OK 2nd try, a bit shorter I cba writing all again...lol
Basically the GPU has three available power modes: Boot, UVD and 3D. Each of these in turn has three levels: low, med and high. UVD does not have one fixed speed, as some people are misled to believe.
High limit UVD uses 3D clocks. My 2nd screenshot was trying to illustrate this.
The clock speeds depend on workload and application; compute transcoding, for example, requires higher clocks to be efficient - which explains the different clock speeds people see.
Another thing is UVD clocks cannot be adjusted by the end user.
Overclocking has no effect in this regard, which is why overclocked speeds are used while a 3D app has control, then drop back to stock when the UVD app takes over.
So yeah, UVD is AMD's IP, but it's up to applications like Adobe Crap Player to decide how to use it.
Speaking of which, UVD support is now open source so hopefully we will see devs like you bring some better software...
I think (I'm assuming here) your scrambled screen is a result of switching power/clock/voltage states.
This occurs on quite a few GPUs when switching 2D/3D clocks, because the vddc target undershoots or doesn't ramp up fast enough with a change in power state. May want to check what buck converter the card uses....:nerd:
I just made some experiments:
What I found:
- Hardware, vddc, etc. is unlikely: it can switch from low 2D (500/150) to 850/1250 without problems, and manually switching frequencies all over the place is fine, but when it needs to go from 1249 to 1250 the displays suddenly go out of sync?
- The different UVD levels are fine, but memory is the same in all of them, and it is exactly the memclock that does it: basically any UVD hit with memory even a mere 1 MHz different from stock and it is dead.
UVD support being open-sourced makes me happy, but the fixed, unchangeable UVD settings do not. And I'm wondering if the whole thing is BIOS-based rather than driver-based, i.e. it is not the driver reading the values and forcing a frequency switch, but the BIOS being given a command to switch to UVD; it then picks one of the states and forces it over everything, pretty badly too.
It still leaves the two-lines-of-code easy fix, but it might be a BIOS fix that's needed, rather than a driver or application one. For applications it is probably a bitch too: they need the card in a condition to play videos - obviously idle clocks are not going to cut it for hardware decoding - so they request UVD (so it can kick the card up as required), but instead UVD kills the display, as in this case. It may not be possible for the application to counter that without keeping a database of every card model to know when to use UVD and when not to.
It is still in AMD's land to apply a very easy fix for all instead of making every video player of any sort having to implement ugly solution.
Edit: It is very late; I will make you a video of the effect tomorrow, totally fun to watch. But I've got to check card frequencies before running any video - every now and then I fail to do that and get either a scrambled screen or a straight freeze, and I lose game progress.
I have even got an iPad to play videos on, since my two 7950s can't handle that task...
I don't think anyone here believes that.
It uses stock 3D clocks (and I believe only on current gen cards).
That is the entire issue! That is all we are complaining about!
The fact that the UVD clock high limits cannot be changed by anything other than a BIOS mod (which AMD now prevents with checksums), and that they are not kept in sync with the 3D clock high limits when using Overdrive or any other OC software.
That's what is causing all the problems: the performance issues and the crashes when adjusting the 3D Memory Clock, since multi-monitor, 144Hz and similar configurations crash when it is changed from stock and the card keeps switching between the UVD and 3D clock high limits.
Under normal circumstances the memory clock is never, ever touched in multi-monitor setups. It stays at its high limit at all times in dual-monitor or 144Hz mode. But when the 3D Memory Clock is manually adjusted via Overdrive, the UVD Memory Clock is not. Playing HW-accelerated video then constantly switches between the two different high limits - and this causes flickering and ultimately scrambling and crashes.
All we are saying is that there is no reason for the hard coded UVD high limits. They cause serious problems. High limits set in Overdrive should be applied to both 3D and UVD clocks.
This actually sounds trivial to implement by AMD, but they have not - making UVD effectively incompatible with over-/underclocking. You can use one or the other but not both without issues.
This is exclusively the result of a difference between the UVD and 3D memory clocks. There must be a good reason why AMD keeps the memory clock locked at its maximum in multi-monitor setups - changing it too often causes the system to lock up under those conditions - and that's why the unadjustable UVD clock high limits are a serious problem here.
All AMD has to do in order to fix the entire problem is to keep the maximum 3D and the maximum UVD clock speeds identical, even when Overclocking. And only the maximums, everything else can and should stay how it is.
I really don't think that is all that terribly much to ask.
I just noticed that even videos in the Steam client trigger UVD clocks and HW acceleration can't even be disabled there.
This idiotic bug essentially makes it impossible to use AMD Overdrive and have a stable and fast system.
Exactly, this is all driver/VBIOS related, and open-sourcing the userland side of UVD doesn't change anything. This has absolutely nothing to do with the software using UVD but with the way UVD is implemented on the kernel/HW side of things - and that's all AMD's doing, nobody else's.
Can you STOP trolling?
A mod needs to clean this thread out, fast...
I'm not sure whether the problem is triggered by the IDLE clocks being 500/(Overdrive RAM speed) instead of 300/150. I just did some tests running HW-accelerated Flash (in Steam) and Unreal Tournament 2004 (windowed as well as fullscreen), and the clocks kept switching from 1100/1700 to 501/1375 with no screen scrambling. But I'm running at 100 Hz atm, so the idle clocks are always 300/150.
Back when I was triggering the bug, the IDLE clocks were 500/1700 (or the RAM Overdrive speed) from running at a 144 Hz refresh rate on the desktop, and it was always the clocks switching from 500/1700 to 501/1375 that caused the scrambled screen (even 500/1376 to 501/1375 would cause it).
Note the difference in the core speed.
I really don't want to test 144 Hz again (right now) as I don't like looking at scrambled screens, but *IF* in fact it doesn't happen again at 144 Hz, it "may" have something to do with me having a revision 2 Visiontek HD 7970 (the one without VRM temperature monitoring, and thus the one that won't accept that old GHz BIOS flasher). I still have a V1 Diamond and the old Sapphire laying around, but I'm not going to install those cards to test this....
Maybe when I'm feeling brave, I'll go back to 144 hz and try to trigger the scrambled screen with the V2 visiontek card.
Guys, this topic is meaningless.
Anybody with some technical skill will claim this "bug" to be invalid.
1. As pill monster said, downclocking is considered to be a feature.
2. It's reproducible only when overclocking. That simply voids any validity of the "bug".
If you have a utility which can modify the BIOS of your card, you can probably alter the idle, UVD and 3D stock values. I did similar things on my old Radeon 9800 XT.
Offler2, editing bios on 7 series is a pain in the ass, although it will probably fix it.
1. I'm fine with downclocking being a feature, but just add a check: if the memory is not at its default/idle value, don't touch it - otherwise it kills it. That way it works like it used to for everyone who is not touching it, and works for those who play with it too. Win-win.
2. It's reproducible when downclocking too - why is my downclocking any different from what the driver decides to do? I too want to save power! I agree that weird effects can happen if you push the card too far, like the memory to over 1700 from the 1250 default, but triggering the bug at 1249?! Clearly something else is at play here.
As promised the video!
Sorry, that's the first video I've uploaded, but yeah, it shows how it's done. Sadly the scrambled picture is not as colorful as it used to be, but meh, it still works.
For me it has gotten much better with the driver update.
It has gone from a fullscreen freakout to a little "Zap" in the lower third of one screen.
I wonder if there's a link to screens connected via (mini)DisplayPort?
I have 2 screens, one connected via DVI and one via mDP, but the one on DVI only corrupted once with the old beta drivers and never with 13.8B drivers.
But then again it could be because the mDP port is always set to be my main screen.
Same setup here, but mDP and HDMI; it always corrupts no matter the driver version. If I set the HDMI to be the primary display, then that one corrupts instead, go figure lol.
Just disable hardware acceleration on flash. Not such a big problem anyways.
But yeah it would be great if it could be disabled :nerd:
Was discussed; disabling it results in substantially lower video quality - you can try it. Besides, it is not only Flash that triggers it. It needs either:
1. fixing (keeping the UVD high limits in sync with the user's clocks),
2. avoiding (not touching the memclock if it is customized), or
3. an option to turn it off.
I'm happy with any of them.
For anyone still oblivious to the issue here, maybe some pictures will help
Note: I'm running 3 displays at 1600x900.
Ok so first off, here's a typical overclocked GPU running without any issues normally:
Clocks are 1000MHz/1450MHz during gameplay. All is well here.
But then all of a sudden I open a flash video in Chrome with Chrome's PPAPI Flash Player:
No big deal at all; clocks are still 1000MHz/1450MHz, all is well here.
But then, I decide to open the same video in Chrome with Adobe's NPAPI Flash Player:
Clocks are now 860MHz/1200MHz. My screens corrupt very slightly for about half a second as the memory clock changes. Not everything is well here.
Next, I'll load up a video in VLC media player with GPU accelerated decoding enabled:
Clocks also drop to 860MHz/1200MHz. No good.
Same video in VLC media player, no GPU accelerated decoding:
Clocks are 1000MHz/1450MHz. All is good here.
Now, this all occurred while I was overclocking my GPU. Let me just lay down a very important piece of information here. My GPU (specifically listed to the left) comes factory overclocked. UVD clocks do not respect this factory OC.
But how would I know this since I was OC'd past that to begin with right? Well, lets take the same exact scenarios above, and do them without user-OC.
First off, no video of any kind running:
Clocks are 950MHz/1200MHz. Everything is good and well here, as it should be.
Chrome open with video via Chrome's PPAPI Flash Player:
Clocks are still 950MHz/1200MHz. No issues to be found still.
Now, Chrome open, with video via Adobe's NPAPI Flash Player:
Clocks are now at 860MHz/1200MHz. The memory clock isn't affected, so that's good. The core clock, however, is affected negatively.
Video in VLC media player with GPU accelerated decoding:
Clocks at 860MHz/1200MHz.
And finally, same video in VLC media player without GPU accelerated decoding:
Clocks are 950MHz/1200MHz, as it should be.
So what have we learned here?
- Videos using GPU-accelerated decoding use UVD clocks
- Videos using GPU-accelerated rendering do not use UVD clocks
- UVD clocks are based on the reference version of your GPU
- Chrome's PPAPI Flash Player only accelerates video rendering, not decoding
- Adobe's NPAPI Flash Player only accelerates video decoding, not rendering
- VLC has a toggle for GPU accelerated decoding
Memory clocks dropping during accelerated video decoding on multi-monitor setups can be a big deal, and does happen, but seemingly only if you are overclocked in any way past reference clocks. This can be done either from the factory, or from the user via software (Overdrive, Afterburner, etc.).
If your OEM is smart, they would have altered the UVD clocks to respect any factory OC (if they can; who knows, maybe OEMs can't do this). MSI seems to have kept memory clocks at reference for my card, meaning they were probably aware of this issue.
Tests done on Windows 8.1 x64 (9471) with OpenGL 4.3 Beta drivers, Chrome 30 via dev channel, VLC x86 2.0.8.
@Espionage: DA post lol
EDIT: It blew my mind! And i'm a legacy user!