MSI AB / RTSS development news thread

Discussion in 'MSI AfterBurner Application Development Forum' started by Unwinder, Feb 20, 2017.

  1. Unwinder

    Unwinder Ancient Guru Staff Member

    Messages:
    17,198
    Likes Received:
    6,865
AMD won't be able to fix it; it is not their problem at all. It is entirely specific to the AMD codepath implementation in the idTech6 Vulkan renderer. They use renderers with slightly different architectures for AMD and for the rest of the graphics cards. Their AMD-specific implementation is focused on intense compute queue usage, and such a renderer implementation cannot co-exist with a traditional, vendor-agnostic, graphics-queue-oriented overlay implementation without a performance penalty. The only way to see it "fixed" is to develop an alternate compute-oriented overlay implementation for this engine and for AMD only, which is rather expensive from a development resources POV, considering that you'd need to develop a new Vulkan OSD from scratch just for two games and one GPU vendor. I'd rather invest that time into something vendor-agnostic and useful for both GPU brands.
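A toy frame-timing model can illustrate the cost described above: the engine's AMD path keeps an async compute queue busy in parallel with graphics, while a vendor-agnostic overlay must wait for a compute queue flush before drawing on the graphics queue. All timings and names below are invented for illustration, not measured values.

```python
# Toy model: async compute overlaps graphics work under normal conditions,
# but a graphics-queue overlay forces the compute queue to be flushed
# (waited on) before each present, serializing the two queues.

GRAPHICS_MS = 8.0   # per-frame graphics queue work (invented)
COMPUTE_MS  = 7.0   # per-frame async compute work (invented)
OVERLAY_MS  = 0.3   # cost of drawing the overlay itself (invented)

def frame_time(overlay: bool) -> float:
    if not overlay:
        # Compute overlaps graphics: the frame costs the longer of the two.
        return max(GRAPHICS_MS, COMPUTE_MS)
    # Overlay forces a compute-queue flush before presenting, so the
    # queues serialize: graphics + wait-for-compute + overlay draw.
    return GRAPHICS_MS + COMPUTE_MS + OVERLAY_MS

fps_plain   = 1000.0 / frame_time(False)
fps_overlay = 1000.0 / frame_time(True)
print(round(fps_plain), round(fps_overlay))
```

The point of the sketch is only that the penalty comes from lost queue overlap, not from the overlay's own draw cost, which is why a vendor-agnostic overlay cannot avoid it on this codepath.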
     
    JonasBeckman likes this.
  2. Unwinder

    Unwinder Ancient Guru Staff Member

    Messages:
    17,198
    Likes Received:
    6,865
I'd absolutely love to ignore that, but I'm forced to comment on it as it spreads really fast. Clicks and views do not smell for some reviewers, hehe.
     
  3. gedo

    gedo Master Guru

    Messages:
    310
    Likes Received:
    43
    GPU:
    RX 6700 XT 12GB
I believe the way social media algorithms work is that any attention or action from you (positive or negative) will bump the content.

That means that if you don't wish to support some content or content creator at all, all you can do is avoid clicking, reading, watching or reacting to the content altogether. Trying to work against them actively does the opposite: getting worked up enough to "dislike" shows the algorithm that you care, which means the content is worth something to the platform in the form of more views, comments and reactions (collectively: engagement).

This is also why YouTubers say "remember to comment, like, dislike" in their videos, and they actually mean it; each action will promote the video.

    (And sorry about the offtopic. I'll stop now.)
     
    Last edited: Oct 22, 2018
    Andy_K likes this.
  4. maffle

    maffle Member

    Messages:
    37
    Likes Received:
    3
    GPU:
    gtx970
I've found an annoying bug in MSI AB with custom clock curves; it has been present for a while and isn't fixed in the latest beta. I have saved a custom clock curve to undervolt my GPU. The bug is that the flat section I have set at the back of the curve randomly gets altered (shifted), most often when starting AB.

I notice this because I monitor vcore in games via RTSS. Whenever I see that vcore isn't the desired 0.875V at max boost, I tab out into AB, press Ctrl+F and see that the curve is altered. I then toggle back and forth between the profiles several times until Ctrl+F shows the curve I want, switch back into the game, and it works again.

    This is my custom curve:
    [​IMG]

    And this randomly happens:
    [​IMG]

Clicking the profile and Apply doesn't work most of the time; the broken curve stays. After going back and forth a bit, at some point it works again, and checking vcore with RTSS in game confirms the correct 0.875V.

It would also be nice if you could export curve profiles to a file that can easily be imported again.
     
    Last edited: Oct 25, 2018

  5. knuckles84

    knuckles84 Guest

    Messages:
    109
    Likes Received:
    6
    GPU:
    MSI GTX1080 Sea Hawk EK
I think that isn't a bug in Afterburner but rather an NVAPI thing.
     
  6. Unwinder

    Unwinder Ancient Guru Staff Member

    Messages:
    17,198
    Likes Received:
    6,865
Sorry, but that's a bug in your understanding of GPU Boost basics. Neither the final clocks nor the base curve are supposed to be static by design of GPU Boost; the only static thing is the offset you define per V/F curve point. And yes, the base curve is constantly changing dynamically depending on thermals and other factors. That's by design of NVIDIA.
And please, no need to continue this in this thread; it is not intended for basic application/hardware usage discussions.
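The behavior described above can be sketched as a toy model: the base V/F curve shifts with thermals, while the user-defined per-point offset stays fixed. All numbers, the 13 MHz step, and the function names below are illustrative inventions, not NVAPI values or real firmware behavior.

```python
# Toy GPU Boost model: the driver-managed base curve moves with
# temperature; the only static user-controlled quantity is the offset.

def base_clock_mhz(voltage_mv: int, gpu_temp_c: int) -> int:
    """Hypothetical base curve: higher voltage -> higher clock,
    reduced in 13 MHz steps as the GPU heats up."""
    cold_clock = 1400 + (voltage_mv - 700) * 2        # made-up V/F slope
    thermal_steps = max(0, (gpu_temp_c - 35) // 5)    # made-up thermal model
    return cold_clock - 13 * thermal_steps

def effective_clock_mhz(voltage_mv: int, gpu_temp_c: int, offset: int) -> int:
    return base_clock_mhz(voltage_mv, gpu_temp_c) + offset

# Same V/F point (875 mV, +100 MHz offset) at two temperatures:
cool = effective_clock_mhz(875, 40, 100)
hot  = effective_clock_mhz(875, 75, 100)
print(cool, hot)  # the offset is constant, the final clock is not
```

This is why a saved curve can look "altered" after the card warms up: the user's offsets are unchanged, but the base they are applied to has moved.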
     
  7. Unwinder

    Unwinder Ancient Guru Staff Member

    Messages:
    17,198
    Likes Received:
    6,865
Guys, I noticed on different forums that some GeForce RTX owners try to use the NVIDIA OC scanner the wrong way. What they do is:

- Run the scanner
- Take the average overclocking value reported by it at the end of the scanning process
- Set this fixed offset manually and, in some cases, start seeing system instability

That's an incorrect approach. OC Scanner gives you an average overclocking value for the whole curve to estimate the result, but the autodetected offsets are not the same for all points; they can be higher or lower for different V/F points. So you should apply the non-linear curve detected by OC Scanner instead of manually setting the fixed average overclocking result it reports.
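A small sketch of the arithmetic behind the warning above: applying the scanner's per-point offsets versus naively applying the reported average as a flat offset. The offsets and voltage points below are invented for illustration, not real scanner output.

```python
# Hypothetical per-point offsets detected by a scan (mV -> MHz).
scanner_offsets = {725: 160, 800: 150, 875: 138, 950: 120, 1043: 105}

# The single number the scanner reports at the end is just an average.
average = round(sum(scanner_offsets.values()) / len(scanner_offsets))

# Correct usage: every V/F point keeps exactly the offset that was validated.
per_point = dict(scanner_offsets)

# Incorrect usage: the average applied flat pushes the high-voltage points
# ABOVE their validated offsets -- which is where instability shows up.
flat = {mv: average for mv in scanner_offsets}

overshoot = {mv: flat[mv] - per_point[mv]
             for mv in scanner_offsets if flat[mv] > per_point[mv]}
print(average, overshoot)
```

In this made-up example the flat average over-volts the two highest V/F points by 15 and 30 MHz respectively, even though the average itself looks "safe".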
     
  8. knuckles84

    knuckles84 Guest

    Messages:
    109
    Likes Received:
    6
    GPU:
    MSI GTX1080 Sea Hawk EK
I do it as you said and it works perfectly. As an F@H user I must say that the OC Scanner curve is really good. For benchmarks or games I can set a little more OC, but Folding@Home, which is very sensitive to instability, gives me errors in a very short time. The OC from the curve has never given errors.

Great what NVIDIA has done here, and your implementation, Unwinder, is the best. In EVGA Precision I only get an average score; I'm not even able to apply the new curve (at least with the last version I tested). Perhaps because of that behavior, a lot of users use Afterburner the wrong way: they think it works like the other tool.

Because NVIDIA pushed Precision so much, the average user has tried that tool first. Only after they realize that it is crap do they try Afterburner.

By the way, as an RTX user, is there something special you want tested? Until now I haven't had a bug, so I couldn't report anything.
     
  9. peppercute

    peppercute Guest

    Messages:
    459
    Likes Received:
    104
    GPU:
    Msi 2080Ti TRIO
Hi! Do RTX cards need the voltage slider adjusted? I don't see a difference.
     
  10. Unwinder

    Unwinder Ancient Guru Staff Member

    Messages:
    17,198
    Likes Received:
    6,865
It works exactly the same way as on GTX 10x0 series cards. I created a video demonstrating that; please use search. And once again, this thread is not dedicated to generic application usage questions.
     

  11. cookieboyeli

    cookieboyeli Master Guru

    Messages:
    304
    Likes Received:
    47
    GPU:
    Gigabyte 1070 @2126
This has got to be one of the most frustrating behaviors of GPU overclocking, because if you want to set a curve, you must do so after letting the card cool down from load and waiting. Make a change, then click the profile again and see if the curve is *actually* making only that change. Otherwise, if you, for example, modify one point down 13 MHz and then save, you've saved an ENTIRELY DIFFERENT CURVE that will now behave ENTIRELY DIFFERENTLY. It took me a while to catch on that this was happening: my card is always stable at 2126 MHz at 1.093 V up to the highest temperature it will reach. I have to set it to 2139 MHz just to get 2114 MHz after a while. Worse, under no circumstances do I want 2152 MHz, as my card can crash under heavy load above certain temperatures, but that's what my card will sometimes run at during the first few minutes of a game, or in a menu, IF I MESS UP THE PROFILE by saving changes when the curve shown after clicking the profile and hitting Apply doesn't match what I set.

So although I know my best-performing stable clocks, I have to put up with worse performance, as manually compensating by starting higher will give me instability before it drops clocks. This is all just for the 1.093 V point.

    I understand this is how Nvidia does it by design. But what possible argument could there be for this being desirable or good behavior??

So forgive me for being blunt (because I don't understand how this idea could possibly be a bad thing): what MSI Afterburner NEEDS is a GPU Boost CLOCK COMPENSATION mode, where it detects GPU Boost changing clocks for ANY perfcap reason on a voltage point and automatically applies an equal and opposite offset so clocks remain the same at their respective voltage points. E.g. 1.093 V should always correspond to 2126 MHz, because that's what I set. Throttling voltage and the usual curve behavior would still exist; clocks just wouldn't change on individual points normally (power perfcap is separate from this AFAIK and is more dramatic).

Unwinder, I greatly respect your expertise; I want to be clear that I am not questioning it or claiming there is a bug. I just don't understand why it needs to be this way for individual voltage points. Is a compensation system such a terrible idea?
     
    Last edited: Oct 26, 2018
  12. maffle

    maffle Member

    Messages:
    37
    Likes Received:
    3
    GPU:
    gtx970
@cookieboyeli Thank you very much for your post; at least someone tried to explain. Yes, you're totally right of course; I see it the same way you do. It doesn't make any sense, as usual, to say "that is by design, deal with it" and dismiss improvement ideas. A simple solution could be a manual override mode in AB, where AB tries to keep a curve and re-applies it maybe every x seconds to work against NVIDIA's driver compensation. My card is 100% stable with the curve I worked out, and the undervolt achieves, in this example, 0.875 V at the max boost frequency of around 1750 MHz. Or even a simple automatic mode, where you define an upper maximum voltage and AB automatically tries to settle around it, maybe lowering the frequency in steps every x seconds, waiting, and then trying to stay stable around the defined value. This is especially relevant for undervolting scenarios, where the goal is to run the card as cool as possible rather than as fast as possible.
     
  13. Unwinder

    Unwinder Ancient Guru Staff Member

    Messages:
    17,198
    Likes Received:
    6,865
Nope, it is not a good idea at all, sorry. NVAPI's voltage curve readback and programming interfaces (especially programming) are not intended to be called frequently in realtime; they are rather slow, and constantly adjusting the curve in realtime would result in stuttering. Furthermore, even if you ignore the performance side, there is always a chance that the driver will alter the base curve between two adjustment calls, so by adding any sort of compensation you can sometimes go beyond the safe clock limit and crash the system. So it's unsafe from both performance and stability points of view, and a no-go.
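The race condition described above can be shown with a toy simulation: a compensation loop reads the base curve, computes an opposing offset, then programs it, but the driver may shift the base curve in between. All clock values, the safe limit, and the function names are invented for illustration.

```python
# Toy read-compute-write compensation loop racing against the driver.

TARGET = 2126          # MHz the user wants pinned at one V/F point (invented)
SAFE_LIMIT = 2152      # clock above which this hypothetical card crashes

def compensate(read_base: int) -> int:
    """Offset that would pin the clock IF the base curve didn't move."""
    return TARGET - read_base

# Case 1: base curve stays put between readback and programming.
base = 2100
final = base + compensate(base)          # lands exactly on target

# Case 2: driver raises the base curve (e.g. the GPU cooled down)
# after the readback but before the offset is programmed.
base_at_read = 2100
offset = compensate(base_at_read)        # computed from stale data
base_at_write = 2140                     # shifted in between
final_racy = base_at_write + offset      # overshoots the target

print(final, final_racy, final_racy > SAFE_LIMIT)
```

In the racy case the stale offset lands the clock 14 MHz past the hypothetical safe limit, which is the stability hazard being described, on top of the cost of the slow NVAPI calls themselves.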
     
  14. Unwinder

    Unwinder Ancient Guru Staff Member

    Messages:
    17,198
    Likes Received:
    6,865
What is actually ignorant is your own post, sorry. There's no need to aggressively push a concept if you don't understand the basics. I ignored your ignorant suggestions about laptop support once, and I'm doing it a second time now. Take it as the final warning: no need to continue this in this thread.
     
Hi @Unwinder :) I used the OC scanner on my Gigabyte 2080 Ti Gaming OC, but there's something strange in the final result.

I did the following steps:
1. Set the power limit to max
2. Set the memory frequency to +400
3. Leave voltage control at default
4. Set the temperature limit to max
5. Hit the Scan button
At the end of the scan (15 minutes), the overclocking result was +138 MHz on the core clock.

I hit Apply in Afterburner and then saved it as a profile.

Finally I launched GPU-Z (ver. 2.14) and, to my surprise, the overclock gain was only 70 MHz on the core clock, not the +138 MHz reported by the OC scanner curve result.

What did I do wrong?
     

  16. Unwinder

    Unwinder Ancient Guru Staff Member

    Messages:
    17,198
    Likes Received:
    6,865
Please reread the text you quoted and pay attention to the part marked in bold:

     
  17. knuckles84

    knuckles84 Guest

    Messages:
    109
    Likes Received:
    6
    GPU:
    MSI GTX1080 Sea Hawk EK
    @Alberto

You've done nothing wrong, but thinking that you got a flat +138 MHz is the wrong way of seeing it.
You get a new curve where individual points are +70, +165, +128... (examples), so the +138 MHz is perhaps the average, but not your OC at every point.
GPU-Z is not the best way to see the clock speed. If you open the curve and click on the different points, you can see which ones were raised and by how much.

And if you look at your core clock slider in Afterburner, you should see "Curve" as the value and not +XY MHz.


    Edit: too slow
     
    Deleted member 255716 likes this.
  18. Unwinder

    Unwinder Ancient Guru Staff Member

    Messages:
    17,198
    Likes Received:
    6,865
Ouch, it is not going to stop... Rumors and conspiracy theories related to RTSS and the DOOM Vulkan renderer on AMD platforms continue spreading across the net with the help of the latest WCCFTech video. Now these rumors have reached the Russian segment of the Internet, so our local "hardware gurus" appended them with their own false info, and I've received the next portion of hate from the next wave of AMD fanboys. This time there were claims that my statements about compute & graphics queue sync are a lie and that RTSS is simply the only broken overlay tool behaving this way. So I created a few screenshots with various third-party unified, non-AMD-favored overlays enabled in this 3D engine on one of my home rigs, powered by AMD Ryzen + Vega 64. By the way, the hardware for that rig was donated by AMD a couple of years ago, and at that time none of the NVIDIA fanboys cried that AMD was trying to "buy" the developer. So, top to bottom:

    [​IMG]

- DOOM framerate as is, with the internal DOOM performance overlay displaying it
- DOOM framerate after enabling the Steam FPS counter (Steam's counter is not captured on the screenshot)
- DOOM framerate after enabling the RTSS framerate counter and realtime frametime graph
- DOOM framerate after enabling the FPSMon framerate counter

Performance drops in all three cases of enabling a third-party overlay; the performance drop is equal in all of them and is caused by compute & graphics queue synchronization (i.e. flushing the compute queue prior to rendering the overlay). You can easily see that the additional synchronization is taking place on DOOM's own CPU/GPU timing graphs. Under normal conditions, CPU times are lower than GPU times in this engine, but as soon as you enable any of those overlays the CPU time grows rapidly and stays higher than the GPU time. That's the exact result of explicit synchronization: the overlay renderers simply start waiting for the compute queue flush to finish on each frame presentation.
So dear "reviewers", please stop this hype train. Generating extra traffic with such "sensational news" and AMD vs NVIDIA flames in the comments is just low, and it smells bad.

P.S. I guess we'll release 7.2.0 officially pretty soon, so the disabled overlay for AMD cards on such systems will stop these clickbait videos from spawning, and I can finally get back to work instead of commenting on this nonsense.
     
    warlord, cookieboyeli, Vidik and 2 others like this.
I've reread it and I understand now that the OC scanner gives an average result that differs across the V/F curve points.

In Afterburner, what should I set for the core clock: the OC curve, or the average value manually (in my case +138 MHz)?

Please @Unwinder, I'm a terrible newbie :(
     
  20. Unwinder

    Unwinder Ancient Guru Staff Member

    Messages:
    17,198
    Likes Received:
    6,865
Now reread the next part in bold. And please create and use a different thread for general how-to questions. This thread is intended for new version development & testing.



     
    Deleted member 255716 likes this.
