Discussion in 'Videocards - AMD Radeon Catalyst Drivers Section' started by chris89, Nov 23, 2017.
I'm wondering if Asder00 or someone can unlock HBCC for all cards? That would be cool.
That's not possible; you need Vega's HBM for that.
Could Asder00 enable using system memory as video memory for AMD cards, like 2-4GB cards? Nvidia did this back in the GTX Fermi days to prevent lag, and it worked.
I asked a similar question on the AMD forum about the R9 Fury and HBCC.
I love the idea (IMO it's possible for Fury, because of the HBM controller in it).
Then our Fury with HBCC, 4GB at 1120/550, would be almost as fast as a V56 at 1450MHz.
But it's all an academic discussion.
All GCN cards have the ability to use system memory as video memory, via the technology known as "zero-copy". The HBC controller on Vega includes a more advanced hardware implementation of this feature, among other things. It's not exactly something you can "enable" on older hardware; perhaps it could be partially emulated in software, but I would guess it wouldn't be much better than the default memory management that Windows does.
Is this the only reason? Ohh, sweet. I could understand it if it were a hardware requirement.
It is hardware. HBCC = HBM.
I'm sure about that. But I totally don't get why the AMD guy said the R9 Fury isn't new enough, so they won't bring it any new features. Do you hear that? We could do it, but we don't want to, because Fury is too old.
The Fury X seems to have been shunned, kind of like the old Pascal cards, from what I can see in recent benchmarks. Even at 1080p, where the card shouldn't be struggling with video memory requirements, it's still a fair distance behind its intended competition, the GTX 980 Ti.
It's also a bit behind in terms of hardware; Polaris and Vega have the advantage here, though I only understand some of the design differences and changes. Those, plus the 4 GB of VRAM when that becomes an issue, are what hold back the AMD Fury GPU in most cases, and why the 480 and 580 can almost catch up to it in some games, when it's bottlenecked more by front-end render work (geometry, tessellation and such) than by shader work, as it is in most current games. Low-level APIs such as D3D12 and Vulkan can still see it excel, depending on the game and how the API is utilized.
(Vega has similar bottlenecks too but also some designs to work around these.)
No doubt some driver improvements could likely be made as well, and AMD is prioritizing Polaris and Vega more, but that's not too surprising. Availability of the 580 and pricing of Vega GPUs isn't exactly helping when it comes to upgrading or side-grading, though, unfortunately.
Overall improvements vary from game to game, though: up to above 40% compared to the Fury in some titles, down to as little as 10% or maybe 15% in others. I'm still trying to read up on more of this, but the really big problem in newer games is memory usage: even at 1920x1080, some games will exceed 5 or even 6 GB of VRAM. It can take some time before it's bottlenecked, but it will happen unless some settings are turned down.
Easily recognized, at least: severe stuttering and really large framerate dips.
^ Yes, but we have options/GFX sliders.
I'm playing all my games at 1005/550 with tMOD and the 400MHz HBM strap (default is 300MHz for all Fiji), 1.169 core / 1.337 HBM.
40-55deg (on hot summer days); every game runs great and looks great.
No Fury problems on my end ---> waiting for Vega 64 AIBs or Vega 2 on 12nm.
Do the timing straps and HBM modding even still work for Fiji? Drivers past 16.12.1 blocked most VRAM overclocking (for unspecified reasons; it did have some nice gains too). I haven't read up too much on it, aside from a bit over at the Overclockers forum with the whole BIOS-editing guide and discussion; good info there. These GPUs tend to be a bit poor for overclocking, but it's possible to get something out of them at least, or you can go the other way and get power usage down, since, similar to the Vega series, they're also fairly overvolted at stock settings.
Though since the various Fiji models have different parameters and also differ in overall quality, it can take some testing to find out exactly what your own GPU can handle, which, well, I guess is overclocking in general, ha ha.
My own GPU reports 1.218 for voltage in state 7 in Wattman. The Fury uses somewhat lower-quality silicon than the Fury X, and the best, as I remember it, went into the Nano, but that doesn't strictly define how well your GPU can clock, though it's good to keep in mind. A regular Fury can also be unlocked via BIOS editing, though it takes a lot of testing to find out how stable it is; 60/64 CUs, or 3840/4096 shaders, tends to be doable, but similar to overclocking, the gains can be fairly small when the GPU isn't fully utilized due to its hardware holding it back.
(As for other GPUs: the Asus Strix undervolts at stock, if I read that correctly, whereas the Tri-X can vary around the 1.2v range, and the OC version and the Nitro with its custom PCB generally use a bit more for the 1050MHz clock-speed boost.)
In my own testing, well, I haven't done too much with the GPU. Initially I tested overclocking, but the gains are like 2-3 FPS, and that also requires more voltage; lowering it, thus reducing power draw and temps, was the better choice. Now, with more DX12 and Vulkan games, it might be time to see how well the GPU does when it's fully utilized, although I don't expect too much from it even so, but it might see a few gains.
(Mankind Divided, when I played through it about two months back, showed a nice boost compared to DX11, but I had to switch back since DX12 made the game a bit prone to crashes, which a quick check on the Steam forums showed to be not too uncommon.)
Wolfenstein II: The New Colossus should make for an interesting test case too, I'd imagine, when it's not murdering VRAM for fun.
(Granted, that goes back to idTech 5 too; it looks nice the way this actually works, or was meant to work, but it uses a lot of memory if you let it, ha ha.)
EDIT: Oh yeah, just to clarify, quality doesn't really mean too much here. It's mostly the reported ASIC value via, say, GPU-Z, which in turn reflects leakage and how voltage is calculated for the GPU, and thus the values that end up in Wattman or similar when monitoring the GPU under load.
Fury tends to span a pretty broad range, from the 50s to the 70s in percentage or so; Fury X is a bit toward the upper range, and the Nano, as I remember, has the "best", or well, the highest-quality silicon. That doesn't mean the GPU won't clock at all, just that you might need a bit more voltage. But since the gains from overclocking the Fury are mostly pretty small unless you *really* boost the voltage and get clocks to 1100+ MHz, undervolting and reducing power draw and temps while still keeping good performance can be a thing. It's really popular with the Vega 56 too, and also the 64, although that one needs a bit more voltage since it's not as throttled.
Wolfenstein II The New Colossus
The only tweak you need is to lower the Textures & Shadows one notch.
Then you'll end up (like me) with 45-63 FPS of chill gameplay.
Looks amazing, works a lot better. (You can't tell the difference.)
And for me, HBM OC works as always.
AMD/ATI never stated that they turned this off.
I get artifacts in Ryse when it's set to 560 (I now have 550 in the BIOS because of CryEngine games); 600 will give artifacts even on the desktop (tMOD is on + HBM voltage mod).
I'm very happy with my Fiji w/FreeSync setup.
HBCC works with any kind of memory, not just with HBM2.
HBCC needs HBM to work. HBM2.
Nope. It's about the cost of memory. One or two stacks of HBM are enough, but if you want to use GDDR5(X), you need a lot more chips and more power. You may see HBCC with GDDR6.
GDDR6, yes, maybe... but people want HBCC on old cards... NOT gonna happen.
Try setting these in the registry, next to the other KMD values:
KMD_EnablePageMigration = 1 (DWORD 32-bit)
KMD_VirtualSegmentSize = [size in bytes] (DWORD 32-bit)
This enables HBCC for Vega, but it will probably do nothing, or crash, on other GPUs.
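For anyone wanting to try it, a .reg fragment might look something like the sketch below. Assumptions are mine, not from the post above: the subkey index (0000, 0001, ...) varies per system, so check which adapter entry under the display-class GUID already contains the other KMD_* values for your AMD card, and the 3 GiB segment size is purely an example (a 32-bit DWORD holding a byte count caps out just under 4 GiB).

```reg
Windows Registry Editor Version 5.00

; Adapter index (0000 here) is system-specific; pick the subkey that
; already holds your card's other KMD_* values.
[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Class\{4d36e968-e325-11ce-bfc1-08002be10318}\0000]
"KMD_EnablePageMigration"=dword:00000001
; Virtual segment size in bytes; 0xc0000000 = 3 GiB, example value only.
"KMD_VirtualSegmentSize"=dword:c0000000
```

Back up the registry (or at least export that key) before importing this, and expect it to do nothing, or crash, on anything that isn't Vega, as noted above.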