MSI AB / RTSS development news thread

Discussion in 'MSI AfterBurner Application Development Forum' started by Unwinder, Feb 20, 2017.

  1. Unwinder

    Unwinder Ancient Guru Staff Member

    Messages:
    17,127
    Likes Received:
    6,691
    There are no changes in VF curve editing principles with Turing; it works exactly as it did on the Pascal series, and discussing it in detail is beyond the scope of this thread. The VF curve must be monotonically increasing, and the driver always corrects the specified VF point offsets to preserve that monotonicity. Knowing this, you can make parts of the curve flat by adjusting just a single point offset, as the sketch below illustrates.
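    As an illustration of the principle (a minimal sketch with hypothetical numbers and one possible correction rule, not NVIDIA's actual driver logic), raising a single point under a monotonicity constraint flattens the segment after it:

```c
/* Minimal sketch of monotonic VF curve correction. Hypothetical
   numbers and a simplified correction rule, not NVIDIA's actual
   driver logic. */
#include <stdio.h>

#define NUM_POINTS 5

int main(void)
{
    /* Frequencies (MHz) per ascending voltage step, after the user
       raised the single point at index 1 to 1900 MHz. */
    int freq[NUM_POINTS] = { 1500, 1900, 1750, 1800, 1850 };

    /* Enforce a monotonically non-decreasing curve: raise every
       point to at least the level of its predecessor. */
    for (int i = 1; i < NUM_POINTS; i++)
        if (freq[i] < freq[i - 1])
            freq[i] = freq[i - 1];

    /* Prints: 1500 1900 1900 1900 1900 - the curve is now flat
       from the raised point onward. */
    for (int i = 0; i < NUM_POINTS; i++)
        printf("%d ", freq[i]);
    printf("\n");
    return 0;
}
```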
     
    knuckles84 likes this.
  2. LordHyena

    LordHyena Member

    Messages:
    15
    Likes Received:
    1
    GPU:
    4xGTX
    Question for @Unwinder
    Here I tested Forza Horizon 4 with single-channel RAM vs. dual-channel, and I found a useful feature, a "stutter counter".

    Would it be possible to add a feature like that to MSI AB, one that counts stutters and shows the count in real time?

     
  3. Unwinder

    Unwinder Ancient Guru Staff Member

    Messages:
    17,127
    Likes Received:
    6,691
    Showing the frametime value in real time while "Show maximum frametime" is enabled does something close to that: it detects and displays the stutter (i.e. the slowest frame) during each polling period.
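    As a sketch of the idea (illustrative numbers only, not MSI AB's actual code), the slowest frame within each polling period is what gets reported:

```c
/* Illustrative sketch of "show maximum frametime" style stutter
   detection: report the slowest frame seen in each polling period.
   Not MSI AB's actual implementation. */
#include <stdio.h>

int main(void)
{
    /* Hypothetical frametimes in milliseconds; here one polling
       period covers 4 frames. */
    double frametimes[] = { 16.7, 16.9, 16.6, 16.8,   /* smooth period */
                            16.7, 95.3, 16.8, 16.7 }; /* one stutter   */
    int frames_per_period = 4;
    int total = sizeof(frametimes) / sizeof(frametimes[0]);

    for (int p = 0; p * frames_per_period < total; p++) {
        double max_ft = 0.0;
        for (int i = 0; i < frames_per_period; i++)
            if (frametimes[p * frames_per_period + i] > max_ft)
                max_ft = frametimes[p * frames_per_period + i];
        /* A period whose slowest frame is far above the norm
           reveals a stutter. */
        printf("period %d: slowest frame %.1f ms\n", p, max_ft);
    }
    return 0;
}
```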
     
    LordHyena likes this.
  4. maffle

    maffle Member

    Messages:
    37
    Likes Received:
    3
    GPU:
    gtx970
    Because the suggestion thread is locked, I will post my question/suggestion here. @Unwinder

    Would it be possible to implement the following:

    - trigger a specific (next/previous) boost state/frequency via hotkey and lock the GPU in that state/frequency (for example, bind +/-, where pressing + lifts the GPU one boost state up and - one down)

    I am asking because the boost logic of the Nvidia driver doesn't always seem optimal or to work properly. It would be handy to have a manual override mode where you can force the GPU into boost state 1/2/3 and lock it there.

    In some games I have the issue that the automatic boost mode of the Nvidia driver doesn't work: my GPU is stuck at 800 MHz while MSI AB shows a usage of 99-100%. The driver doesn't seem to notice that the usage is 99% and boost up accordingly. I found out that a workaround in some games is to alt-tab out, open a Chrome window, load a page, then alt-tab quickly back into the game; now the GPU boosts up correctly and the FPS become stable at 60 instead of 10-30.

    When this is triggered, the usage suddenly drops from 99% to 70-80% and the FPS become stable and good. Even though the clock goes back down to 900-1000 MHz, the usage stays around 80-90%. Perhaps the voltage is raised when this happens, resulting in proper FPS.
     
    Last edited: Sep 30, 2018

  5. Andy_K

    Andy_K Master Guru

    Messages:
    842
    Likes Received:
    240
    GPU:
    RTX 3060
    In Nvidia Control Panel, change the "Power management mode" setting from "Adaptive" or "Optimal power" to "Prefer maximum performance", either for those games or globally.
     
  6. maffle

    maffle Member

    Messages:
    37
    Likes Received:
    3
    GPU:
    gtx970
    That is not a satisfying solution. It also doesn't always work for some games (another bug?). I don't want max performance where there is no reason for it. I don't want the GPU clocking at its maximum boost frequency when the next boost step above 800 MHz would work fine for most games I play (like World of Warcraft on low settings, for example). It seems there is a bug in the Nvidia logic, or it is not clever enough to notice that in some games the current boost isn't enough to sustain 60 FPS, even when AB shows a usage of 99-100%.

    It would be nice if you could tell AB via hotkey: toggle into the next boost state/frequency and lock it there.

    Nvidia Control Panel => bugged, doesn't always work
    Max Performance => not a satisfying solution to this problem

    I have a lot of games that run well at 60 FPS with a boost around 900-1300 MHz (memory at 810 MHz instead of 3500 MHz also works fine for 60 FPS in most games). 1700+ MHz would be overkill and produce unneeded fan noise and energy drain.

    I don't understand why the Nvidia driver doesn't notice that the GPU usage is 99% and trigger the next boost state. This seems to be a bug, though it doesn't happen in all games.
     
    Last edited: Sep 30, 2018
  7. Astyanax

    Astyanax Ancient Guru

    Messages:
    17,016
    Likes Received:
    7,355
    GPU:
    GTX 1080ti

    The Nvidia driver uses a utilisation % threshold before ramping up the power state; some games are always just going to need to be set to prefer maximum performance. StarCraft II and Assassin's Creed come to mind.
     
  8. Unwinder

    Unwinder Ancient Guru Staff Member

    Messages:
    17,127
    Likes Received:
    6,691
    Nope, the clock control API doesn't work that way. Please don't continue this discussion in this thread.
     
  9. Unwinder

    Unwinder Ancient Guru Staff Member

    Messages:
    17,127
    Likes Received:
    6,691
  10. bernek

    bernek Ancient Guru

    Messages:
    1,632
    Likes Received:
    93
    GPU:
    Sapphire 7900XT
    Can someone explain to me how to choose the value for this scanline sync? I just want to use it and be happy. I've tried 1080, 1085, -50, -200 and so on, but most of the time I have tearing on screen. The frame rate is above the refresh rate, which is 75 Hz, and VSync and similar functions are off.
     

  11. RealNC

    RealNC Ancient Guru

    Messages:
    4,959
    Likes Received:
    3,235
    GPU:
    4070 Ti Super
    You decrease it until the tearline disappears at the top of the screen. If that's not possible (meaning the tearline just randomly appears at the bottom and top of the screen with no way to hide it), then you need to try SyncFlush=1 in the profile file. If that also doesn't help, then you need to try SyncFlush=2. If that results in stutters, then you need a faster GPU and/or need to lower the graphics settings of the game in order to lower GPU load.
     
    bernek likes this.
  12. bernek

    bernek Ancient Guru

    Messages:
    1,632
    Likes Received:
    93
    GPU:
    Sapphire 7900XT
    Do I start from 0 and go negative until the line disappears? Does this option use a significant amount of GPU computing power? I do not need to understand what SyncFlush does; the name is quite self-explanatory...

    Will test and see! Is there a way I can test this value without actually firing up a game? Like in a browser, with something like Test UFO (https://www.testufo.com); maybe I can find a walking line there to set this up.

    Thanks for your time!
     
  13. RealNC

    RealNC Ancient Guru

    Messages:
    4,959
    Likes Received:
    3,235
    GPU:
    4070 Ti Super
    No. You start from -1. 0 disables scanline sync completely.

    It does not use GPU power by itself, but it needs a fast GPU to work well. It comes down to how fast the GPU can clear all pending operations to the screen; the faster the GPU, the faster it can do that. If it's not fast enough, the tearline will not be stable on the screen.

    SyncFlush=1 is usually fine in DX10, DX11 and GL, since it's an asynchronous flush; if the GPU is not fast enough, it will simply make the tearline unstable. In DX9, DX12 and Vulkan it doesn't work, and SyncFlush=2 will be used instead, which is a forced flush; if the GPU is not fast enough there, you will get major stuttering. So keep in mind the API the game uses: when you use SyncFlush=1 but you're playing a DX9, DX12 or Vulkan game, it effectively means SyncFlush=2, and that one really needs a fast GPU (or a very undemanding game; very old games work perfectly fine even on mid-range GPUs, for example).

    I don't think so. It needs something RTSS can actually hook into. Note that you don't have to quit and restart the game to change the scanline value; you can do that live while it's running.
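    To make the mechanism above concrete, here is a conceptual sketch of scanline-synced presentation using the D3D9 raster status query. It only illustrates the principle (steer the tearline by presenting at a chosen scanline); it is not how RTSS itself is implemented:

```c
/* Conceptual sketch of scanline-synced presentation via the D3D9
   raster status query. Illustrates the principle only; this is
   not RTSS's actual implementation. */
#include <d3d9.h>

void present_at_scanline(IDirect3DDevice9 *dev, DWORD target_line)
{
    D3DRASTER_STATUS rs;

    /* Busy-wait until the raster beam reaches the target scanline
       (or enters vblank). */
    do {
        IDirect3DDevice9_GetRasterStatus(dev, 0, &rs);
    } while (!rs.InVBlank && rs.ScanLine < target_line);

    /* Present immediately: if the GPU can flush its pending work
       fast enough, the tear lands at the same position every frame
       and can be parked where it is invisible. */
    IDirect3DDevice9_Present(dev, NULL, NULL, NULL, NULL);
}
```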
     
    Last edited: Oct 3, 2018
  14. Unwinder

    Unwinder Ancient Guru Staff Member

    Messages:
    17,127
    Likes Received:
    6,691
    I've not tried Windows 10 version 1809 yet, but according to a report from one of my fellow developers, the D3DKMT GPU usage monitoring interface stopped working in this OS on some combinations of AMD GPUs and AMD display drivers. MSI AB uses this interface to report Intel iGPU usage, and AMD GPU usage if you tick "Enable unified GPU usage monitoring" in the AMD compatibility options. So keep that in mind if you have this option enabled and see that the GPU usage sensor has disappeared from the list.
     
  15. emperorsfist

    emperorsfist Ancient Guru

    Messages:
    1,972
    Likes Received:
    1,074
    GPU:
    AORUS RTX 3070 8Gb
    In RTSS beta 5, there is no SyncFlush entry in the game profile files. Should we add it in manually?
     

  16. RealNC

    RealNC Ancient Guru

    Messages:
    4,959
    Likes Received:
    3,235
    GPU:
    4070 Ti Super
    Yes, in the [Framerate] section.
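    For example, the profile could end up looking like this (the values follow the behavior described earlier in the thread):

```ini
[Framerate]
; 1 = asynchronous flush (effective in DX10/DX11/GL)
; 2 = forced flush (used for DX9/DX12/Vulkan; needs a fast GPU)
SyncFlush=1
```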
     
  17. aj_hix36

    aj_hix36 Member

    Messages:
    24
    Likes Received:
    0
    I hope this is the right spot to post this. I've noticed that when scanning for OC, the resulting memory clock is lowered by 200 MHz. I'm normally running +900, so I can't actually compensate by changing it to +1100; I'm not sure how to clock it past what the slider lets me do (+1000).
     
  18. cookieboyeli

    cookieboyeli Master Guru

    Messages:
    304
    Likes Received:
    47
    GPU:
    Gigabyte 1070 @2126
    I've had that problem too on some cards that go over +1000. The 200 MHz loss is from the driver switching into compute-optimized mode.

    I have a similar limit issue with going BELOW stock clocks on memory as well.

    Anyway, I see no reason to pick 1000 as the highest number. It could be 951 or 1342, so why 1000? Same with the fan slider's lower limit of 25%. I guess that one is a bit more justifiable, but I personally dislike it, since you have to fiddle around with the curve to test anything below 25%, for example when testing fans to see where they stop spinning and what the minimum startup value is. A lot of sliders are more useful with wider ranges, and anyone who could cause serious problems for themselves with that would screw themselves up anyway. I don't see why it's worth limiting the fan to less than 0-100%, or the memory offset to such a narrow range (I would say +/-1500, but I'm just pulling a number out of my hat; Unwinder would definitely know best on this subject...). I really do hope the memory slider range is increased on both sides.
     
  19. CaptaPraelium

    CaptaPraelium Guest

    Messages:
    229
    Likes Received:
    63
    GPU:
    1070
    Have you tried disabling this with Nvidia Profile Inspector? Scroll down to "CUDA - Force P2 State" and set it to OFF.
    Of course, this may be more error-prone (which is why P2 is forced by default) and could cause the OC scanner to hit problems that only exist in the P0 state, but it seems to me that scanning in P0 should be the correct method anyway, since one would expect the resulting OC to work at P0.
     
  20. cookieboyeli

    cookieboyeli Master Guru

    Messages:
    304
    Likes Received:
    47
    GPU:
    Gigabyte 1070 @2126
    WOW thanks! That makes things a heck of a lot simpler for compute loads and profiles!
     
