Total War: WARHAMMER runs 27% slower in DX12 on NVIDIA’s hardware.

Discussion in 'Videocards - AMD Radeon' started by Undying, Jul 1, 2016.

  1. eclap

    eclap Banned

    Messages:
    31,468
    Likes Received:
    4
    GPU:
    Palit GR 1080 2000/11000
    welcome to my ignore list.
     
  2. Boemlauwe_Bas

    Boemlauwe_Bas Member

    Messages:
    18
    Likes Received:
    0
    GPU:
    MSI 980 TI Gaming
    http://www.guru3d.com/news-story/nvidia-will-fully-implement-async-compute-via-driver-support.html

    Kinda says it all, right? But still, even so, I personally would not trade my 980 Ti for anything less than a GTX 1080, purely on driver quality (I'm a 5870 CF veteran, yes). That said, I give AMD credit for how they handled the whole DX12/Mantle API situation with the resources they have compared to Nvidia. That R9 Nano is off the hook. Stop hating, people; competition is a good thing. When Intel was sleeping with their stupid P4s, AMD gave us the Athlon 64 X2. Pretty sure y'all would be running a quad P4-MMX2 CPU instead of some 8-way monster.
     
  3. Undying

    Undying Ancient Guru

    Messages:
    25,501
    Likes Received:
    12,901
    GPU:
    XFX RX6800XT 16GB
    It must be a big list. Can I get in there too? Thanks.
     
  4. eclap

    eclap Banned

    Messages:
    31,468
    Likes Received:
    4
    GPU:
    Palit GR 1080 2000/11000
    Why? Don't you like it when I prove you wrong? He's the only one on it. But by all means, if you don't want to see my posts, feel free to quit the forum or add me to your ignore list.
     

  5. vase

    vase Guest

    Messages:
    1,652
    Likes Received:
    2
    GPU:
    -
    Well, at least he is nice enough to inform me about it. That will save me time, considering I could have written posts addressing him in the future. They would have been in vain.
     
  6. Devilhunter12

    Devilhunter12 Guest

    Messages:
    71
    Likes Received:
    0
    GPU:
    GTX 980 Ti Lightning
    That is why, right now, I stay away from AMD.

    Why?

    http://wiki.totalwar.com/w/Optimisation_Blog

    CA never worked with Nvidia or took their input for DX12; that is why you are seeing results like these. CA's DX12 implementation did no favors for the CPU; in fact, it performs the same as DX11. DX12 only enables AC so that AMD can sell 480s.

    http://www.dsogaming.com/news/report-total-war-warhammer-runs-27-slower-dx12-nvidias-hardware/

    Well, Nvidia is the more professional company, and therefore will never blame others; they will own it.
     
    Last edited: Jul 2, 2016
  7. prazola

    prazola Member Guru

    Messages:
    179
    Likes Received:
    20
    GPU:
    R9390XSOC / R9290DCU2OC
    DX12 patches are usually wrappers or bad implementations; sometimes you gain some fps, sometimes not. Vulkan on The Talos Principle is the same thing.
    Only full DX12 games are relevant to me.
     
  8. Devilhunter12

    Devilhunter12 Guest

    Messages:
    71
    Likes Received:
    0
    GPU:
    GTX 980 Ti Lightning
    Doom is so smooth on both AMD and Nvidia; however, this is the reason id Software is delaying the Vulkan launch.
     
  9. Fox2232

    Fox2232 Guest

    Messages:
    11,808
    Likes Received:
    3,371
    GPU:
    6900XT+AW@240Hz
  10. OnnA

    OnnA Ancient Guru

    Messages:
    17,983
    Likes Received:
    6,839
    GPU:
    TiTan RTX Ampere UV
    I like that site :)
    Hmm, lalala, FUD etc. :wanker: Not our concern; just leave it behind and stay positive.

    -> Polaris IMO is a very good GPU, and it sets a new standard by making high-end performance available to everyone!
    $199-229 ($212/239) is good.
    Here in Germany it's not so pricey, ~250€ with free shipping for the XFX OC Black w/ backplate :thumbup:
     
    Last edited: Jul 2, 2016

  11. Havel

    Havel Master Guru

    Messages:
    404
    Likes Received:
    46
    GPU:
    RTX4080
    AC is Asynchronous Compute, right?
    Why do DX12 apps not run so well on Nvidia? Is it a driver issue or the architecture?
    PS: So many eclaps in threads like this. Interesting, why? :)
     
  12. vase

    vase Guest

    Messages:
    1,652
    Likes Received:
    2
    GPU:
    -
    How is that a reason for it?

    Vulkan isn't replacing the OpenGL implementation. You will be able to choose the API, just as in every other game with multiple API choices.
     
  13. Alessio1989

    Alessio1989 Ancient Guru

    Messages:
    2,959
    Likes Received:
    1,246
    GPU:
    .
    "Asynchronous Compute" is an implicit feature allowed by the API when the application execute in concurrency graphics and compute queues. it is part of the multi-engine design of the API.
    Actually, only AMD cards do not need to serialize those works.
    Intel iGPUs serialize the works on the driver level. The performance penalties are usually low.
    On NVIDIA GPUs driver serialization is needed on Kepler. While Maxwell 1 and 2 do not need to serialize the work however they scheduler is unable to take any advantage at all but it decrease performance. Looks like NVIDIA still do not want to serialize those works at driver level for Maxwell GPUs too. On Pascal GPUs the scheduler issue is solved (though it does not allow the same granularity allocation of AMD GPUs it works reasonably well).
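    To make the multi-engine point concrete, here is a minimal D3D12 sketch in C++ (error handling omitted; assumes a valid ID3D12Device* created elsewhere). The API lets an application create a direct (graphics) queue and a separate compute queue; whether work on the two actually overlaps is up to the hardware and driver, which is exactly the per-vendor difference described above.

        #include <d3d12.h>

        // Create the two queues involved in "async compute".
        void CreateAsyncComputeQueues(ID3D12Device* device,
                                      ID3D12CommandQueue** graphicsQueue,
                                      ID3D12CommandQueue** computeQueue)
        {
            D3D12_COMMAND_QUEUE_DESC desc = {};

            // Direct queue: accepts graphics, compute, and copy commands.
            desc.Type = D3D12_COMMAND_LIST_TYPE_DIRECT;
            device->CreateCommandQueue(&desc, IID_PPV_ARGS(graphicsQueue));

            // Compute queue: compute/copy only. Command lists submitted here
            // *may* execute concurrently with the direct queue; the API permits
            // the overlap but does not guarantee it.
            desc.Type = D3D12_COMMAND_LIST_TYPE_COMPUTE;
            device->CreateCommandQueue(&desc, IID_PPV_ARGS(computeQueue));
        }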
     
    Last edited: Jul 2, 2016
  14. OnnA

    OnnA Ancient Guru

    Messages:
    17,983
    Likes Received:
    6,839
    GPU:
    TiTan RTX Ampere UV
    Yeah, bro, and tell them what can be accomplished with it (ACE); that's far more interesting :nerd:

    AA
    AO
    Physics
    AI
    ....
    you can fill in the rest :) (see the sketch below)

    IMO ACE and the new APIs are the future!
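    As a rough illustration of how one of those effects (say, the AO pass) overlaps with rendering: the compute queue runs the dispatch while the graphics queue keeps drawing, and a fence makes the graphics queue wait only at the point where it actually consumes the result. A minimal sketch, assuming the queues from the earlier snippet and an already-recorded command list; the names (SubmitAsyncAO, aoCmdList, fenceValue) are placeholders, not from any real engine:

        #include <d3d12.h>

        // Kick an AO compute pass on the compute queue so it can overlap
        // with whatever the graphics queue is rendering at the same time.
        void SubmitAsyncAO(ID3D12CommandQueue* graphicsQueue,
                           ID3D12CommandQueue* computeQueue,
                           ID3D12CommandList*  aoCmdList,
                           ID3D12Fence*        fence,
                           UINT64&             fenceValue)
        {
            computeQueue->ExecuteCommandLists(1, &aoCmdList);
            computeQueue->Signal(fence, ++fenceValue);

            // The graphics queue stalls only when it actually needs the AO
            // result; everything submitted before this Wait still overlaps.
            graphicsQueue->Wait(fence, fenceValue);
        }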
     
  15. vase

    vase Guest

    Messages:
    1,652
    Likes Received:
    2
    GPU:
    -
    Yes, the title is misleading; it should either be clearly marked as a quote from the article, or changed to "WARHAMMER runs 27% slower in DX12 on NVIDIA's hardware than in DX11" for completeness.
    Maybe also add that a new patch introduced this falloff.

    Now, I hope I didn't offend anyone in my post. I read over it three times and couldn't find any insults. -> Clear to post, I'd say!
     

  16. Devilhunter12

    Devilhunter12 Guest

    Messages:
    71
    Likes Received:
    0
    GPU:
    GTX 980 Ti Lightning
    Can you answer this?
    They never worked with Nvidia on DX12, never took their input, and their DX12 is not giving any benefit to the CPU, which is the reason DX12 exists in the first place.

    Of course Nvidia will run slower in AMD games, and AMD will be fast in their sponsored games.
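    For context on the "benefit to the CPU" point: the main CPU win DX12 offers is that command lists can be recorded on several threads in parallel, instead of funneling all draw calls through one thread as DX11 largely does. A minimal sketch of that idea, not code from the game; assumes per-thread command lists were already created and reset:

        #include <d3d12.h>
        #include <thread>
        #include <vector>

        // Each worker records its share of the frame's draw calls into its
        // own command list; this is the CPU-side parallelism DX12 adds.
        void RecordFrameInParallel(const std::vector<ID3D12GraphicsCommandList*>& lists,
                                   ID3D12CommandQueue* queue)
        {
            std::vector<std::thread> workers;
            for (ID3D12GraphicsCommandList* cl : lists)
                workers.emplace_back([cl] {
                    // ... record this thread's draw calls into cl ...
                    cl->Close(); // finish recording on this thread
                });
            for (std::thread& t : workers)
                t.join();

            // One submission of everything that was recorded in parallel.
            std::vector<ID3D12CommandList*> raw(lists.begin(), lists.end());
            queue->ExecuteCommandLists(static_cast<UINT>(raw.size()), raw.data());
        }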
     
  17. -Tj-

    -Tj- Ancient Guru

    Messages:
    18,107
    Likes Received:
    2,611
    GPU:
    3080TI iChill Black
    And when you OC both cards to the max (well, in the Fury X's case there isn't much headroom compared to the 980 Ti), that Fury X is left behind, just saying :p

    That 6 fps Fury X advantage becomes a 5-7 fps advantage for the 980 Ti, if not more.
     
  18. vase

    vase Guest

    Messages:
    1,652
    Likes Received:
    2
    GPU:
    -
    Yes, I can.

    It's wrong to put proprietary graphical features into games; that is my opinion.
    But DX12 is not AMD-proprietary, nor is it a secret "GameWorks of AMD" (because it enables any graphics card manufacturer to use the full range of its features right from the start! There is no closed-source period, as with new GameWorks features, which last about a year or so before being opened up, and then for game devs only, not for the public).
    Possibly AMD is simply more experienced with low-level API integration at this moment (considering they already did it several times with Mantle), and may be consulted before Nvidia for some know-how transfer on how writing game code for low-level APIs works.
    I mean, Nvidia does not develop any kind of low-level API so far, so they are clearly dependent on input from Microsoft, Khronos, and AMD. And I am sure there are enough consortiums where all those players sit at one table and discuss what happens next.

    Being butthurt about AMD showing DX12 features to a game dev
    is like being butthurt about Nvidia showing DX11 features to a game dev
    => no reason to be butthurt about it.

    Implementing proprietary features with a closed-source period for a certain time after launch, so that enabling those features during that period may hurt performance due to missing AMD optimizations?
    => reason to be angry (my opinion)


    Next Episode: See a Vulkan erupt.
     
  19. Devilhunter12

    Devilhunter12 Guest

    Messages:
    71
    Likes Received:
    0
    GPU:
    GTX 980 Ti Lightning
    So you mean to say that AMD lagging behind in Nvidia-sponsored games is AMD's own fault. However, I cannot put sense into a single-minded kid.
     
  20. vase

    vase Guest

    Messages:
    1,652
    Likes Received:
    2
    GPU:
    -
    Can you not? Maybe you aren't trying hard enough yet.


    So, I mean to say there is a difference in what you call a "sponsored" game.
    (I call it a game where GPU manufacturers have worked with the game devs to make sure the game works optimally on their cards from the start: what the optimal draw calls are, what scene complexity is the maximum that still runs smoothly on release day, stuff like that.) There is nothing to say against that.

    And in contrast to that, there are games that incorporate proprietary graphics features that are not coded by the engine developers, but delivered as closed source by a graphics card manufacturer and given only to the respective game devs. The GPU manufacturer owning the rights to this code can decide when to open the feature up to all game devs / all industry members; until then, it makes sure nobody else can optimize their hardware for that code.

    You see, if Dodge could decide what road surface (GameWorks feature) will be laid on the streets next season, but nobody else in the car industry is told (closed source), then Dodge will of course be able to pre-produce tires that match that surface far better than its competitors can, even though everyone will have to drive on it.

    If, on the other hand, all competitors get to know which road surface there will be next season (open API specifications for all), then every competitor has the same chance to compete at its best.

    That's all I'm actually trying to point out.

    So if AMD should introduce any kind of proprietary features into games due to partnerships, then I would find that bad and damaging, and not only to the competition. (You see, I am not a hostile person in general; maybe that's why I prefer not damaging the competition over damaging it.)

    And the same opinion applies to Nvidia.

    And the same goes for Intel (should they at any time be in a position to offer enough graphics compute power to compete in the enthusiast gamer range).
     
    Last edited: Jul 3, 2016
