Discussion in 'Videocards - AMD Radeon Drivers Section' started by frankuzzo, Sep 29, 2020.
I know, but I don't want VSync active while using FreeSync.
Not 4K. But yeah. I play at 3440x1440 as well and even a 5700 XT is not enough. Of course it depends on the game, the graphics settings, and what one considers 'enough'. I prefer 90+ FPS, but I can make do with 60 FPS for third-person view.
V-sync is applied/used only if the framerate exceeds the FreeSync range.
Looks like you can add D3D9 vertex shader 3.0 to this list as another regression.
It seems to just cause visual glitches, though, like missing polygon geometry, so it's not tied to GPU stability or performance issues with D3D9 on its own.
For instance, if I have a FreeSync range of 40-75 Hz and I limit the FPS to 74 via RTSS, I don't need VSync enabled.
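The rule of thumb above (cap the framerate a little below the FreeSync range maximum so you never leave VRR territory) can be sketched like this; `safe_fps_cap` is a hypothetical helper name, not anything RTSS actually exposes:

```python
def safe_fps_cap(range_min: int, range_max: int, margin: int = 1) -> int:
    """Pick an FPS cap a small margin below the monitor's FreeSync maximum,
    so the framerate stays inside the VRR range and VSync never kicks in."""
    cap = range_max - margin
    if cap < range_min:
        raise ValueError("FreeSync range too narrow for the chosen margin")
    return cap

# A 40-75 Hz range capped via RTSS:
print(safe_fps_cap(40, 75))  # 74
```

The 1 FPS margin is the common community choice; some people prefer 2-3 FPS below the maximum to be safe against limiter jitter.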
A follow-up after some experiments: it doesn't seem to be a driver fault. A frame time graph revealed some heavy CPU spikes (in the 120+ ms range) in these situations, even though the overall frame rate is fine. These spikes seem to be mitigated a lot when running with the High Performance power plan, especially when not allowing the CPU to downclock itself. One explanation could be that my 2678V3 is reaching its limits at its 3.3 GHz, being overwhelmed by all that action at once. A second theory is that the Windows scheduler is at least partially to blame for tasking the game onto cores which were put to sleep before. The second theory has its roots in some experiments with the Cluster-on-Die (COD) functionality: if I enable NUMA and COD in the BIOS, Battlefield 1 gets assigned to and fully utilizes one NUMA node. The frame times are much better then, but the overall frame rate is around 20-30% lower than normal.
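The diagnosis above (average FPS looks fine while individual frames hitch) is easy to reproduce from any frame time capture. A minimal sketch, assuming you have the per-frame times in milliseconds as a plain list (e.g. exported from a frame time logger):

```python
def find_spikes(frame_times_ms, threshold_ms=120.0):
    """Return (frame index, frame time) pairs slower than the threshold --
    the 120+ ms hitches visible on a frame time graph."""
    return [(i, t) for i, t in enumerate(frame_times_ms) if t > threshold_ms]

def average_fps(frame_times_ms):
    """Overall FPS from frame times: frame count divided by total seconds."""
    return len(frame_times_ms) * 1000.0 / sum(frame_times_ms)

# Mostly ~11 ms frames (~90 FPS) with a single 130 ms hitch: the average
# FPS still looks healthy, but the one spike is what you feel as a stutter.
times = [11.0] * 99 + [130.0]
print(find_spikes(times))         # [(99, 130.0)]
print(round(average_fps(times)))  # 82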
I also tried DX12 mode which to my surprise was almost playable after a few minutes of heavy stuttering. In the past my experience with DX12 was even worse in that game. Once all the shaders are compiled, the frame times are usually better in situations with heavy action than in DX11 mode. And once cached, the next time you load the map it is usually fine to play (with some occasional hiccups still throughout the game, at least on large 64 player maps which I usually play).
If anyone has a trick to improve things, I am eager to hear it. Except buying a new CPU/platform with higher IPC (that is too obvious and not what I intend to do in the near future).
VSync will still do its anti-tearing function, but you will have the VRR low latency, so it's the best of both. Using FreeSync doesn't guarantee tear-free gameplay.
True, but when it works, it works beautifully. I'm used to using it that way, but sadly some games, like you said, are not tear-free.
3440x1440 is not 4K. 4K is still about 1.7x the pixels.
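For the record, the pixel-count ratio is easy to check (assuming "4K" means 3840x2160 UHD):

```python
uhd = 3840 * 2160    # 8,294,400 pixels
uwqhd = 3440 * 1440  # 4,953,600 pixels
print(round(uhd / uwqhd, 2))  # 1.67
```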
And still, even from the benchmarks I searched, even with an RTX 2080 you get only around 120 FPS at FHD...
Some games simply don't work well with VSync OFF when using FreeSync (you can still see tearing). That's all.
It's similar, e.g., in The Cycle, where you can see tearing without VSync ON. But at least it doesn't cap FPS at 60...
It works most of the time. Depends on the game, true. Like I said, when it works like it should, you don't need VSync and there is no tearing. It's smoother having FreeSync by itself; VSync only adds lag.
Next week: new GPU, new drivers...
And don't forget... the Anniversary driver is Soon™.
Last time, the anniversary driver brought more issues than good. It also overhauled the UI, which many didn't like. Let's see how AMD will find a way this time to ruin good hardware by pairing it with crappy software.
(Wish) Next Omega be like:
DLSS-like tech but working with every TAA game (no matter the API).
New HBCC but working with every VRAM type.
A magical +10% to all GPUs' performance.
Did I understand correctly that those 3 points are your wishes and not some info about the next version of the Radeon drivers?
^^ Right now it is a possibility plus a pinch of wish.
We shall see
LOL, very optimistic.
One of those wishes will be true. And if you read my posts in the RDNA2 thread, you know which one. The next one (picked randomly of those three...) might happen... not sure on that.
But I cannot promise - if it happens - that it will work with all the architectures... not even sure if it will work below RDNA...
Well, there's DirectML, which I presume is Direct Machine Learning, but that's an API that could be utilized for various things, not an implementation on its own.
Same as DXR, Microsoft's D3D12 API for the ray tracing effect. I'm going to be really curious to see how this works for AMD, and if it's even going to work in existing titles, seeing how these route into NVIDIA RTX.
(There's also OptiX from NVIDIA as another SDK here, aided by CUDA, which would be NVIDIA-only, but I believe this is more of their high-end or professional-oriented ray tracing solution, though I tend to mix it up a bit.)
If NVIDIA can get DLSS to work on its own, AMD would have to develop something to compete. Even with just a few titles worked on with NVIDIA, these are often more high-profile; they get the RTX treatment and performance is boosted by DLSS as well, as Watch Dogs: Legion's system requirements point out.
(Consoles support it too, but from the current examples from WD: Legion, it's a mix of ray tracing and cube maps, and while it's 4K 30 FPS, it's dynamic 4K via scaling, not native resolution - a lower resolution scaled up - and possibly something simpler than the PC's lowest ray tracing setting.)
High-bandwidth cache, I guess, can be useful even for gaming, but without a buffer or cache of sorts I don't think it's feasible. RDNA2, though, while it uses GDDR6, has a lot of cache and might be aided by the Infinity Fabric as well against HBM's bus width and bandwidth potential.
3-5% happens every time AMD does some optimization, and more for D3D12 and Vulkan. More occasionally, some games or game engines have issues, and if they collaborate with a developer, stuff like Horizon Zero Dawn happens, with a lot of tweaking specifically for RDNA1. As can also be seen, though, there are few benefits for Vega or Polaris, where Vega even shows smaller gains than its hardware power would otherwise suggest it is capable of.
(NVIDIA's 2080 and its Ti still push through to the top, if by nothing more than being really strong GPU hardware.)
EDIT: I could see AMD and Microsoft working out something for DirectML through the Xbox Series S and Series X too. It'd benefit both consoles and work well for 4K gaming, especially on a TV a meter or two (or more) from the user's eyes, so that's a possible angle for it as well.
But it's one that could happen in a number of ways, or even from developers doing their own stuff with 12_2 and related functionality - and then there's Sony's hardware and how PlayStation development goes.
It remains to be seen what will be done. Microsoft's own team is already doing a ton of work with backwards compatibility and enhancement patches, but something like this, to get close to 4K with fewer drawbacks than current upscaling methods, could be a thing.
I can see it being investigated because of its potential for visual quality and performance, even with some drawbacks and limitations.
(How and what it'll result in, though, is just speculation - if it leads to anything at all this console generation.)
Yeah, the last one really was a broken mess for way too long! Even Lisa Su heard all the shouting in her CEO chair. Radeon Settings was, in my eyes, a regression compared to what they had before. It was overloaded with functionality that I neither need nor use, and way too sluggish. It didn't help that Wattman was broken and profiles didn't apply properly. That said, my expectations are pretty low this time; not seeing a broken driver would already satisfy my needs.
On my wishlist: hardware scheduling and improvements for Vega (since they buried NGG there, maybe they could unlock some other unimplemented functionality which brings performance improvements).
Yeah, the 2020 branch was a nightmare for me too, mate... I think my screams were so loud that they could be heard at the AMD HQ, LoL. Since 20.5.1 WSL, everything has been nice and dandy for me again.
As for your wish... don't get your hopes up... Last I heard, hardware scheduling was not possible on Vega (due to some borked or missing HW stuff) and brought negligible perf gains on RDNA1... maybe RDNA2 will be luckier in that matter...