Nvidia Has a Driver Overhead Problem

Discussion in 'Videocards - NVIDIA GeForce Drivers Section' started by RealNC, Mar 15, 2021.

  1. Astyanax

    Astyanax Ancient Guru

    Messages:
    12,016
    Likes Received:
    4,552
    GPU:
    GTX 1080ti
The setting has no effect on Vulkan and D3D12 to my knowledge.

    Nvidia is investigating.
     
    Last edited: Mar 23, 2021
    Smough, Archvile82 and BlindBison like this.
  2. PrMinisterGR

    PrMinisterGR Ancient Guru

    Messages:
    7,895
    Likes Received:
    767
    GPU:
    Inno3D RTX 3090
    I am getting different fps in SOTTR depending on whether it is on or off.
    Haven't tested properly yet.
     
  3. dr_rus

    dr_rus Ancient Guru

    Messages:
    3,108
    Likes Received:
    462
    GPU:
    RTX 3080
That's, eh, not entirely accurate either.

    1. Ampere is pretty much nothing like GCN. GCN doesn't have much specialized h/w either; it's very general purpose, which is why it's good in compute and rather bad in graphics and perf/watt. There is a degree of convergence in the RDNA and Pascal shading core designs, but Turing and Ampere are a step beyond that.
    2. SM-level scheduling is fairly static on all GPUs these days and has been since GCN1/Kepler; this is basically what changed between Fermi and Kepler. "Static" here means no scheduling happens at run time, since the instruction stream is pre-compiled by the driver to execute optimally on the target h/w.
    3. The only level at which "Ampere is the most GCN-like h/w Nv ever made" is global-level scheduling (the GigaThread engine), which was enhanced in Ampere with h/w syspipes that are kinda reminiscent of GCN's ACEs/HWSes. "Kinda" because a) not really, since the design is different enough and serves a different purpose (server-grade h/w virtualization: the ability to run up to 7 virtual GPUs on one GA100/GA102 GPU, fewer on smaller ones, one per GPC); and b) there's a distinct lack of disclosure on what it even does outside of CUDA compute. It may be used by the graphics drivers or it may not be; this is unknown.

    Anyway, no amount of global API/driver-side scheduling can explain such differences in CPU load. Also worth noting that this difference seems to be present on all NV GPUs which support D3D12, which means that whatever the reason is, it should be the same on Maxwell, Pascal, Turing and Ampere: very different h/w architectures in many respects.
     
    BlindBison, enkoo1 and PrMinisterGR like this.
  4. Astyanax

    Astyanax Ancient Guru

    Messages:
    12,016
    Likes Received:
    4,552
    GPU:
    GTX 1080ti
    Can probably replicate it on Fermi as well, if you can find a D3D12 game that doesn't pin the GPU at 99% usage (720p, perhaps?).
     

  5. Nastya

    Nastya Member Guru

    Messages:
    152
    Likes Received:
    41
    GPU:
    ASUS RTX 2080 Ti
    HUB posted a new video with some more insights:

     
  6. Astyanax

    Astyanax Ancient Guru

    Messages:
    12,016
    Likes Received:
    4,552
    GPU:
    GTX 1080ti
    HUB are deleting any comments that mention architectural information discrediting their claims about the device scheduling.
     
  7. BlindBison

    BlindBison Master Guru

    Messages:
    923
    Likes Received:
    208
    GPU:
    RTX 2080 Super
    Seriously? Wow, if true, that's not cool. I mean, maybe I'm missing something, but those sound like comments I'd like to read, lol.

    How can one tell if comments were removed? Don't get me wrong, I'm glad they discovered this behavior and I enjoy their testing work, but it might have been better to present their ideas about the "why" with a grain of salt until an official response was out.

    I'm looking forward to Nvidia's own investigation results.
     
    Last edited: Mar 26, 2021
  8. Astyanax

    Astyanax Ancient Guru

    Messages:
    12,016
    Likes Received:
    4,552
    GPU:
    GTX 1080ti
    YouTube throws an error when you go to confirm an edit, and when you refresh, the comment is gone.

    I was merely directing users to Beyond3D's vast coverage of the RDNA architecture and pointing out that both vendors use a static one-instruction-per-cycle-per-wave/warp design, among other things XD.

    Also user breakdowns of the GeForce cards and how their architecture happened to change with each CC revision.
     
    BlindBison likes this.
  9. PrMinisterGR

    PrMinisterGR Ancient Guru

    Messages:
    7,895
    Likes Received:
    767
    GPU:
    Inno3D RTX 3090
    Let me guess: still no tests with threaded optimisation on or off.

    Also, especially with Turing and Ampere, the whole "nViDiA doESN't hAvE sCHEDuleRs" thing is kind of tiring.
     
  10. cucaulay malkin

    cucaulay malkin Ancient Guru

    Messages:
    2,996
    Likes Received:
    1,521
    GPU:
    RTX 3060 Ti
    They did the same to mine when I merely tried to refer to a $20-40 cooler test, back when they started adding $40 of Wraith Stealth "value" to the cost-per-frame charts for the Ryzen 3600 in gaming (ridiculous, right?).
    I also noticed they kept changing test locations for some games, like The Witcher 3, back and forth from more to less CPU-bound whenever they felt it convenient.
    It's a channel where the clickbait headline comes first; they'll find a way to obtain the data they want.
     
    Archvile82 likes this.

  11. Maddness

    Maddness Ancient Guru

    Messages:
    1,717
    Likes Received:
    872
    GPU:
    3080 Aorus Xtreme
    If that is true, then they just lost a lot of respect in my eyes.
     
  12. Astyanax

    Astyanax Ancient Guru

    Messages:
    12,016
    Likes Received:
    4,552
    GPU:
    GTX 1080ti
    These are the same guys that crap on a user for demonstrating that HWU and GN benchmark results using FX processors are worse than they should actually be.



    Bully bros, lol.
     
    Cryio likes this.
  13. RealNC

    RealNC Ancient Guru

    Messages:
    3,600
    Likes Received:
    1,780
    GPU:
    EVGA GTX 980 Ti FTW
    Question: If enabling threaded optimizations is better, why is it optional to begin with, and why would the default "auto" setting turn it off?
     
    BlindBison likes this.
  14. PrMinisterGR

    PrMinisterGR Ancient Guru

    Messages:
    7,895
    Likes Received:
    767
    GPU:
    Inno3D RTX 3090
    My guess is that "Auto" lets NVIDIA profile games. I cannot fathom that they'd talk about driver overhead without testing this setting. I might even test it myself, but I have a monster CPU, so I don't know how useful that would be. I might at least be able to show different behaviors with it.
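    For anyone who does test it, one low-friction way to compare runs is to capture frame times with PresentMon and summarize the two logs. A minimal sketch, assuming PresentMon's standard MsBetweenPresents CSV column; the log file names in the usage comment are just placeholders:

    ```python
    import csv
    import statistics

    def summarize(path):
        """Read a PresentMon CSV log and return (avg_fps, one_percent_low_fps)."""
        with open(path, newline="") as f:
            # Frame times in milliseconds, one row per presented frame.
            frames = [float(row["MsBetweenPresents"]) for row in csv.DictReader(f)]
        avg_fps = 1000.0 / statistics.mean(frames)
        # 1% low: average fps over the worst 1% of frame times.
        worst = sorted(frames, reverse=True)[: max(1, len(frames) // 100)]
        one_percent_low_fps = 1000.0 / statistics.mean(worst)
        return avg_fps, one_percent_low_fps

    # Usage (hypothetical log names):
    #   on_avg, on_low = summarize("sottr_threaded_opt_on.csv")
    #   off_avg, off_low = summarize("sottr_threaded_opt_off.csv")
    ```

    Run the same benchmark pass twice at identical settings, once with Threaded Optimization forced on and once forced off, log each run with PresentMon, then compare the averages and 1% lows from the two CSVs.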
     
  15. Astyanax

    Astyanax Ancient Guru

    Messages:
    12,016
    Likes Received:
    4,552
    GPU:
    GTX 1080ti
    Auto is known to be broken in Cemu; the setting can change while the process is running and isn't necessarily set at application launch.
     
    Last edited: Mar 27, 2021

  16. terror_adagio

    terror_adagio Member

    Messages:
    42
    Likes Received:
    13
    GPU:
    7950 GTX 512MB
    This is really weird. They are outright lying on Twitter too about it.
     
  17. Undying

    Undying Ancient Guru

    Messages:
    16,799
    Likes Received:
    5,688
    GPU:
    Aorus RX580 XTR 8GB
    I'm actually glad they're the only ones speaking about it, instead of being green enough to sweep this under the rug like other channels paid off by Nvidia do, like DF.
     
  18. terror_adagio

    terror_adagio Member

    Messages:
    42
    Likes Received:
    13
    GPU:
    7950 GTX 512MB
    [IMG]
     
    Smough, mirh and PrMinisterGR like this.
  19. RealNC

    RealNC Ancient Guru

    Messages:
    3,600
    Likes Received:
    1,780
    GPU:
    EVGA GTX 980 Ti FTW
    I don't follow youtubers much, so I haven't seen much previous work by this channel. I delved a bit into their backlog, and some videos are clearly clickbait. Case in point: I found one where they claim "DLSS is dead", and in the video they claim that Nvidia's (very simple) sharpening shader is better than DLSS...

    And they don't mention that we had many sharpening shaders to choose from through ReShade for many years before Nvidia added theirs to the driver.

    So according to Hardware Unboxed, there's no need for DLSS. Just enable sharpening in the nvidia control panel. It's better.
     
  20. cucaulay malkin

    cucaulay malkin Ancient Guru

    Messages:
    2,996
    Likes Received:
    1,521
    GPU:
    RTX 3060 Ti
    7.7.2019
    AMD discovered sharpening, according to HUB
    and it's better than DLSS
    [IMG]
     
