Review: Total War WARHAMMER DirectX 12 PC graphics performance

Discussion in 'Frontpage news' started by Hilbert Hagedoorn, Jun 2, 2016.

  1. EdInk

    EdInk Active Member

    Messages:
    98
    Likes Received:
    2
    GPU:
    GTX 1080 (H20)
    Matt at techtested (YouTube video) did this already with DX12 and showed that 8 cores was the sweet spot for DX12, value-for-money-wise. Sorry, can't post links; I need 5 or more posts on Guru3D.
     
  2. Aelders

    Aelders Banned

    Messages:
    870
    Likes Received:
    0
    GPU:
    980Ti G1 @ 1490|8000

    I feel like a broken record.

    AMD does not have an inherent DX12 advantage.

    AMD cards tend to perform far better in DX12 compared to DX11 because of CPU overhead.

    In Ashes of the Singularity (and very probably this game as well) raw compute throughput is the major determining factor in game performance.

    It's no surprise a stock 390 outperforms a stock 980.

    Stock vs stock:

    390 vs 980
    5376 GFLOPS vs 5038 GFLOPS

    This is the exact same situation as AotS.

    390 OC vs 980 OC

    1200 MHz (I'm being nice today) vs 1500 MHz

    6144 GFLOPS vs 6144 GFLOPS
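
    Those throughput figures come from the usual single-precision rule of thumb, shaders × 2 FLOPs per clock × clock speed. A quick sketch of the arithmetic (the clocks are the approximate figures implied by the numbers above, not official spec sheets, so treat them as assumptions):

        # Rough FP32 throughput: shaders * 2 FLOPs/clock * clock (GHz) = GFLOPS.
        # Shader counts are the public specs; the clocks are the approximate
        # stock/OC figures implied by the numbers in this post, not official specs.
        def gflops(shaders, clock_ghz):
            return shaders * 2 * clock_ghz

        print(gflops(2560, 1.05))  # R9 390, stock  -> ~5376
        print(gflops(2048, 1.23))  # GTX 980, stock -> ~5038
        print(gflops(2560, 1.20))  # R9 390, OC     -> 6144
        print(gflops(2048, 1.50))  # GTX 980, OC    -> 6144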

    The AMD cards in this G3D benchmark test should actually be performing better:

    Fury X vs 390X
    Fury X has 45% more shaders @ same clock
    Fury X is 25% faster at 4K


    Titan X vs 980

    TX has 50% more shaders @ 4% lower clock (reference)
    TX performs 35.7% faster at 4K
     
    Last edited: Jun 2, 2016
  3. PrMinisterGR

    PrMinisterGR Ancient Guru

    Messages:
    7,465
    Likes Received:
    495
    GPU:
    Sapphire 7970 Quadrobake
    Almost all the titles have been AMD-sponsored, but I'm not even sure how much that matters any more. The two titles in which NVIDIA is doing better are DX11 engines with DX12 patched on.
     
  4. Aelders

    Aelders Banned

    Messages:
    870
    Likes Received:
    0
    GPU:
    980Ti G1 @ 1490|8000
    I hope you don't mean Ashes, because NVIDIA hardware is outperforming AMD in it.
     

  5. PrMinisterGR

    PrMinisterGR Ancient Guru

    Messages:
    7,465
    Likes Received:
    495
    GPU:
    Sapphire 7970 Quadrobake
    No, it's not. Unless you compare 16nm NVIDIA to 28nm AMD, it does not. The 390X is practically as fast as the 980 Ti, and the 380X as the 970.

    You say that AMD does not have an inherent DX12 advantage, which I agree with. The fact is, though, that NVIDIA is pricing their hardware according to what is basically perceived DX11 performance, and not what the cards can do at maximum (as happens in most DX12 titles). So AMD prices accordingly. The 380X reaching 970 performance is one example of pricing like that. So if you take that into account, then yeah, AMD does have an advantage in DX12 in terms of performance per dollar (rough numbers below).

    [image: benchmark results]
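
    To put rough numbers on the performance-per-dollar point: a back-of-the-envelope sketch, assuming approximate launch prices (~$229 for the 380X, ~$329 for the 970) and the rough DX12 parity claimed above. Both the prices and the parity are assumptions here, not measured data:

        # Relative DX12 performance per dollar, under the assumptions stated above.
        cards = {
            "R9 380X": {"rel_perf": 1.0, "price_usd": 229},  # assumed ~launch price
            "GTX 970": {"rel_perf": 1.0, "price_usd": 329},  # assumed ~launch price
        }
        for name, c in cards.items():
            print(name, round(c["rel_perf"] / c["price_usd"] * 1000, 2), "perf per $1000")
        # -> the 380X comes out roughly 40-45% ahead on perf/$ under these assumptions.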
     
    Last edited: Jun 2, 2016
  6. vazup

    vazup Master Guru

    Messages:
    302
    Likes Received:
    7
    GPU:
    r9 280X
    Can we get a source for that? I mean, every time AMD does better at something, someone claims that AMD paid for it. At least with NVIDIA's GameWorks it is pretty clear when they had a hand in it.
     
  7. Dazz

    Dazz Master Guru

    Messages:
    902
    Likes Received:
    97
    GPU:
    ASUS STRIX RTX 2080
    You and me both. Although disabling async compute reduces performance by about 10%, most of the increase in performance comes from the removal of the driver overhead, like both of us have already said. Whichever game it is, we always seem to come across the claim that it's been sabotaged against nVidia, but I think people are over-hyping async compute way too much. It's more that nVidia cards have been running at around 99% efficiency while AMD cards have been running at around 60%, and DX12 has allowed them to reach 99% since the driver is no longer fighting for resources on the one CPU core along with the application code running on that very core. I think that's also the reason why frame pacing is typically worse on AMD cards compared to nVidia.
     
  8. PrMinisterGR

    PrMinisterGR Ancient Guru

    Messages:
    7,465
    Likes Received:
    495
    GPU:
    Sapphire 7970 Quadrobake
    I won't even get into that; there's no point. The truth of the matter, though, is that in all engines that were developed with DX12 in mind, AMD has been getting much higher performance than usually expected of them. Yes, they were games sponsored by AMD, but on the other hand there haven't been any closed libraries like GameWorks involved, which means that NVIDIA has had access. They didn't even complain about lack of access, either. The two games they "win" are Rise of the Tomb Raider (a game ported three months before its time, with DX12 patched on) and Gears of War Ultimate Edition (an atrocious original release with DX12/UWP bolted on top of Unreal Engine 3).
     
  9. GeniusPr0

    GeniusPr0 Maha Guru

    Messages:
    1,251
    Likes Received:
    15
    GPU:
    RTX 3060 Ti FE
    Total War has always been iffy performance-wise and required raw clock speed to mitigate it. I bet AMD offered a DX12 implementation to bring performance gains across the board. I doubt they just dropped off a suitcase of money and said "optimize for our cards."
     
  10. Dazz

    Dazz Master Guru

    Messages:
    902
    Likes Received:
    97
    GPU:
    ASUS STRIX RTX 2080
    Well, since AMD is in the red financially, it would have been a pretty light suitcase :D lol, j/k
     

  11. GeniusPr0

    GeniusPr0 Maha Guru

    Messages:
    1,251
    Likes Received:
    15
    GPU:
    RTX 3060 Ti FE
    Monopoly money. Or Fury dies as souvenirs. :3eyes:
     
  12. Aelders

    Aelders Banned

    Messages:
    870
    Likes Received:
    0
    GPU:
    980Ti G1 @ 1490|8000

    Yeah, you're right, and that's perfectly valid, but you know my qualms with it.

    AotS is compute-heavy, so what we're seeing in the results you just posted is AMD's cards (which offer higher compute at each tier) outperforming their competitors.

    That isn't AMD performing better in AotS, though; that's the individual cards doing better than their NVIDIA counterparts because of the higher compute.

    What matters to me (and maybe nobody else) is how they perform, in this case, given equal shader throughput, and NVIDIA takes the lead there. Overclock a 390X to 1200 MHz and compare it to a reference 980 Ti,

    or overclock a 980 Ti to 1526 MHz and compare it to a Fury X (rough numbers below).
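
    Same back-of-the-envelope formula as before: GFLOPS ≈ shaders × 2 × clock (GHz). A quick sketch of why those pairings roughly line up; the ~1.2 GHz figure for a reference 980 Ti is my assumption of a typical in-game boost, not a spec number:

        # Approximate FP32 throughput for the suggested "equal throughput" pairings.
        # The ~1.2 GHz boost for a reference 980 Ti is an assumption, not a spec.
        print(2816 * 2 * 1.200)  # R9 390X @ 1200 MHz           -> ~6758 GFLOPS
        print(2816 * 2 * 1.200)  # GTX 980 Ti, reference boost  -> ~6758 GFLOPS
        print(4096 * 2 * 1.050)  # R9 Fury X, stock ~1050 MHz   -> ~8602 GFLOPS
        print(2816 * 2 * 1.526)  # GTX 980 Ti @ 1526 MHz        -> ~8594 GFLOPS

    (The 390X and the 980 Ti happen to share the same 2816-shader count, which is why clock-matching them roughly equalises theoretical throughput.)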
     
    Last edited: Jun 2, 2016
  13. GeniusPr0

    GeniusPr0 Maha Guru

    Messages:
    1,251
    Likes Received:
    15
    GPU:
    RTX 3060 Ti FE
    Warhammer is supposed to include async compute (the AMD way), yet I see no indication of it.
     
  14. Illyrian

    Illyrian Master Guru

    Messages:
    219
    Likes Received:
    12
    GPU:
    2080 Super
    You guys getting hyped up about a particular GPU's performance really shouldn't...

    I haven't read the article, but y'all need to know that these Total War games have always been almost 100% CPU dependent; GPUs never really mattered...
    The "Warscape" engine they use has been a piece of **** for a long time.
    All it's ever done in the past is stress out the first core to 100% and not give a flying fck about what GPUs you have (I got zero performance increase going from a 660 Ti to a 290X, and just a few extra frames going from a single card to CrossFire).

    Still, with that said, given the terrible history the series has with performance problems, I don't care that they've upgraded to 64-bit; I don't trust them... Also, I couldn't give a **** about Warhammer; I'm a history nerd, not a fantasy one.
     
  15. GeniusPr0

    GeniusPr0 Maha Guru

    Messages:
    1,251
    Likes Received:
    15
    GPU:
    RTX 3060 Ti FE
    Cool story, but the game performs fine lol
     

  16. Aelders

    Aelders Banned

    Messages:
    870
    Likes Received:
    0
    GPU:
    980Ti G1 @ 1490|8000
    Unrelated, but I just realized a VR platform for playing tabletop RPGs online would be goddamn amazing.
     
  17. GeniusPr0

    GeniusPr0 Maha Guru

    Messages:
    1,251
    Likes Received:
    15
    GPU:
    RTX 3060 Ti FE
    Yeah, the potential is unlimited. I foresee mass weight gain.
     
  18. Dazz

    Dazz Master Guru

    Messages:
    902
    Likes Received:
    97
    GPU:
    ASUS STRIX RTX 2080
    As far as I am aware, there is little or no async compute involved in this title; the patch is really about balancing the workload and using more CPU cores.

    So it's performance-based. There was an article I read last week that shows both AMD and nVidia cards getting huge boosts; of course AMD's boost was a lot higher, but again this is down to their high driver overhead being removed. However, the point remains that it's all about getting the most out of the CPU that's being used, as with all strategy games, which are far more CPU-intensive than graphics-intensive. Hence why HH has CPU scaling in the review. In this title DX12 allows up to six cores to be utilised, while the likes of Vulkan/Mantle can use 8+. nVidia has had a workaround for the driver overhead for years now and offloads driver work to idle parts of the CPU, while AMD's driver does not.

    Still better than Tomb Raider's "CPU" balancing, which ends up being negative for both AMD and nVidia and didn't seem to have much impact on CPU performance at all.
     
  19. PrMinisterGR

    PrMinisterGR Ancient Guru

    Messages:
    7,465
    Likes Received:
    495
    GPU:
    Sapphire 7970 Quadrobake
    So they actually are performing better in AotS, because AotS is a compute-heavy game and AMD cards are better at compute (concurrent compute at that). NVIDIA shaders are more "expensive": they occupy more die area and they are more efficient. The total TFLOPS might be similar, but NVIDIA cards make use of theirs more consistently because of that. The fact is that all DX12 games whose engines were built for DX12 lean on compute like that by default. The reason I exclude Tomb Raider and Gears is not that they are NVIDIA titles; it is that DX12 was basically hastily bolted onto old engines for both. I don't believe it was luck that NVIDIA chose those titles to sponsor, either. I can't see that pattern changing; changing it would be suicidal for cross-platform games.

    The whole point is that NVIDIA is basically a "weird" company that is surrounded by AMD products of similar usage on other platforms. They have 70% of a very specific market (PC gamers), which is not small, but it is not a target platform for any AAA releases that actually need the GPUs NVIDIA makes. They can't really go into consoles because of the CPU designs required and the lack of an x86 license, and even if they did have those things, the ship has sailed already. These "new" consoles are looking more and more like "platforms", like the Apple App Store etc. That requires backwards compatibility, which means no new contracts with NVIDIA for the foreseeable future. The only reasons they will survive are CUDA and the huge mindshare-capturing products sold at enormous profit margins (look at the 1080: it's as cheap to make as it can possibly get, my 7970 has a more complicated PCB than that and a larger die, so don't tell me it costs more than $250 to make and ship anywhere).

    That's why all of their strategy revolves around soft or hard lock-ins. G-Sync monitors, PhysX, GameWorks: they are all made to make sure that if you have invested a bit more in the NVIDIA ecosystem, you stay there. These are NOT accusations; I'm just noticing some patterns. The GPU world would lose immensely if NVIDIA ever went away, and I'm actually happy they are doing so well. But don't ever think that they will keep being the target platform for anything. Anything optimized for NVIDIA will be that way either because it caters to the PC market only or because of a special per-case deal. AMD already has even Ubisoft in the fold, along with EA, Square Enix and id. That's like 50% of the big publishers, if not more. Everyone else doing AAA has to make their engine cater to the particularities of GCN, first with GCN 1.0/1.1 and now with Polaris.

    Well, dude, read the article then. Both AMD and NVIDIA get really big performance boosts with DX12. A Total War game is actually a very good choice for a low-level API, precisely for the reasons you mentioned.
    PCGamer tested with DX11 too. Even NVIDIA cards get 2x the FPS by switching to DX12.

    Hot damn, I was actually thinking the same thing about a week ago. Even better: AR. Imagine D&D on an actual tabletop, with animated figures that react depending on what you do and what you roll :banana:
     
    Last edited: Jun 2, 2016
  20. Agonist

    Agonist Ancient Guru

    Messages:
    2,988
    Likes Received:
    303
    GPU:
    XFX 5700XT Raw II
    Honestly, my Fury X is the first GPU I haven't overclocked. It's also the first GPU I've spent over $300 on.

    I don't even feel the need to OC it.
    In every game, I'm always above the FPS I'm happy with.
    My HD 7950s were the GPUs I OC'd the piss out of the most:
    1300 core, 1500 memory,
    stock was 860/1200.

    An average of 120 FPS in BF4 on ultra @ 2560x1080 is awesome; even on a 75 Hz ultrawide it's very nice.
     
