
High DX11 CPU overhead, very low performance.

Discussion in 'Videocards - AMD Radeon Drivers Section' started by PrMinisterGR, May 4, 2015.

  1. theoneofgod

    theoneofgod Ancient Guru

    Messages:
    4,069
    Likes Received:
    49
    GPU:
    RX 580 8GB
    It's a CPU-bound situation. That's why, with an i3, the 750 Ti performs better than the 280.
     
  2. Fox2232

    Fox2232 Ancient Guru

    Messages:
    9,737
    Likes Received:
    2,198
    GPU:
    5700XT+AW@240Hz
    You actually can. It does not say that one card is better than another; it tells you that fps will suffer if the game pushes more than the driver can feed into the GPU.

    Check Space Marine in my benchmarking post. 1080p does exactly the same as 1440p; only GPU utilization rises. It has the same fps because the driver does not send more requests.
    But in comparison to the previous driver it now has higher fps, because the new driver feeds more data to the card.

    I am sure they did something to the very old driver core now, because Dead Island (DX9) now has extra high fps compared to previous drivers too.
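A toy model of the point being made: delivered fps is capped by whichever stage is slower, the driver/CPU submitting work or the GPU rendering it. The numbers below are made up for illustration, not from any real benchmark.

```python
# If the driver/CPU can only prepare N frames' worth of draw calls per
# second, lowering the resolution raises GPU headroom but not fps.

def fps(cpu_frames_per_sec, gpu_frames_per_sec):
    """Delivered fps is capped by the slower of the two stages."""
    return min(cpu_frames_per_sec, gpu_frames_per_sec)

# Hypothetical numbers: driver can submit 60 frames/s of draw calls;
# the GPU could render 140 fps at 1080p but only 80 fps at 1440p.
print(fps(60, 140))  # 1080p: CPU-bound at 60, GPU underutilized
print(fps(60, 80))   # 1440p: same 60 fps, just higher GPU utilization
print(fps(90, 80))   # a faster driver at 1440p: now GPU-bound at 80
```

This is why a driver update that raises the CPU-side submission rate lifts fps at every resolution where the game was CPU-bound.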
     
  3. theoneofgod

    theoneofgod Ancient Guru

    Messages:
    4,069
    Likes Received:
    49
    GPU:
    RX 580 8GB
    Now reviews/benchmarks can be done with the official 15.7 and its overhead improvements. A lot of reviewers didn't use the Windows 10 modded drivers on, say, 8.1, so it will be good to see how they compare now.
     
  4. Dygaza

    Dygaza Master Guru

    Messages:
    535
    Likes Received:
    0
    GPU:
    Fury X 4GB
    Yeah, I know. But you have to be realistic. You can't easily get 1M from one driver upgrade, so you have to be satisfied with even smaller upgrades. AMD needs to devote a lot of driver dev team resources to the DX12 drivers as well.
     

  5. MatrixNetrunner

    MatrixNetrunner Member Guru

    Messages:
    125
    Likes Received:
    0
    GPU:
    Powercolor PCS+ R9 270X
    I played Space Marine on my 270X with VSR at 1440p (15.5 with the VSR mod) and even that card held an almost constant 60 fps. Not very demanding. Pretty good game; clunky controls, unfortunately.
     
  6. Fox2232

    Fox2232 Ancient Guru

    Messages:
    9,737
    Likes Received:
    2,198
    GPU:
    5700XT+AW@240Hz
    I played it back then on an HD 5870M, and it ran very well. I like how they used contrast-based AA:
    basically applying AA only around edges where the materials of objects had a high brightness difference. That improved IQ and lowered HW requirements a lot.
    Thanks to that it does 95~192 fps at 2160p for me now.

    But the point is that while fps at lower resolutions is very good, it could be considerably higher, as the GPU is quite underutilized.
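A minimal sketch of the contrast-based AA idea described above (not the game's actual shader): blend a pixel with its neighbours only where the local brightness difference exceeds a threshold, leaving flat areas untouched so most of the screen costs nothing.

```python
# 1D greyscale example; real contrast-adaptive AA works on 2D luma.

def contrast_aa(row, threshold=0.2):
    """Blend only high-contrast pixels with their neighbours."""
    out = list(row)
    for i in range(1, len(row) - 1):
        contrast = max(abs(row[i] - row[i - 1]), abs(row[i] - row[i + 1]))
        if contrast > threshold:
            out[i] = (row[i - 1] + row[i] + row[i + 1]) / 3.0  # cheap box blend
    return out

edge = [0.0, 0.0, 1.0, 1.0]   # hard edge between two materials
print(contrast_aa(edge))      # only the two pixels at the edge change
print(contrast_aa([0.5] * 4)) # flat area passes through unchanged
```

The threshold value and the box blend here are illustrative choices; the principle is that AA work is skipped wherever contrast is low.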
     
  7. SMS PC Lead

    SMS PC Lead Member

    Messages:
    22
    Likes Received:
    0
    GPU:
    Titan X SLI
    The 3 months would be mainly my time, but add maybe a few weeks of 2 other people for tools support, shader/material pipelines, etc. So probably 4 "man months", then a few more months fixing bugs and refining multi-threading, with an end goal of visual parity.
     
  8. MatrixNetrunner

    MatrixNetrunner Member Guru

    Messages:
    125
    Likes Received:
    0
    GPU:
    Powercolor PCS+ R9 270X
    I didn't like the AA much, hence VSR. The game was a bit empty, and it left us on a cliffhanger. They were trying too hard to be "cinematic". To me it felt like it needed one more iteration to really come together; that's the difference between a very good game and a truly great one.
     
  9. janos666

    janos666 Master Guru

    Messages:
    659
    Likes Received:
    43
    GPU:
    MSI GTX1070 SH EK X 8Gb
    I consider the SSAA with LOD-offset feature (the one that has been available in CCC for years on virtually every supported card) a better choice of SSAA than VSR. They are basically the same, but the SSAA in CCC tries to counter the texture blur / loss of detail (it's not like MLAA/FXAA blur, but it's inevitable to lose SOME detail during re-sampling) by forcing a negative LOD offset on textures, theoretically resulting in more detailed textures to begin with, before the down-sampling, and thus the blurring, happens.
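The negative LOD offset mentioned above has a well-known rule of thumb behind it: supersampling renders internally at a higher resolution, so the texture LOD bias that keeps mip selection matched to the larger render target is roughly -0.5 * log2(total samples). A quick sketch (the rule of thumb is the assumption here, not anything from AMD's driver):

```python
import math

def ssaa_lod_bias(samples):
    """Compensating texture LOD bias for an SSAA sample count."""
    return -0.5 * math.log2(samples)

for s in (2, 4, 8):
    print(f"{s}x SSAA -> LOD bias {ssaa_lod_bias(s)}")
# 2x -> -0.5, 4x -> -1.0, 8x -> -1.5
```

So 4x SSAA (2x2 per axis) pairs with a -1.0 bias: the sampler picks one mip level sharper, and the subsequent downsample averages the extra detail away instead of plain blur.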
     
    Last edited: Jul 9, 2015
  10. MatrixNetrunner

    MatrixNetrunner Member Guru

    Messages:
    125
    Likes Received:
    0
    GPU:
    Powercolor PCS+ R9 270X
    I tried overriding the AA settings for Space Marine in CCC, but it did not work at all for me. Probably something to do with it being an AMD-sponsored game and already having a special (hidden) preset in Catalyst. I then tried VSR; it worked at a stable 60 fps and I finished the game.

    I had more success with the CCC AA override in Hand of Fate. It's really visible on the card edges.
     

  11. janos666

    janos666 Master Guru

    Messages:
    659
    Likes Received:
    43
    GPU:
    MSI GTX1070 SH EK X 8Gb
    Well, yes, that is a limitation. CCC usually tries to pull the desired AA level (Off, 2x, 4x, 8x) from the game menu and only override the method and/or filter (adaptive- and/or CF- MSAA or SSAA), despite the presence of the sample level setting in CCC (which might or might not override the level in the game menu when that's anything other than Off; I am not sure). So you can end up with no AA at all if the game itself does not support any kind of AA, and you can't change the method if it's some kind of custom post-processing (not standard MSAA or SSAA).
     
    Last edited: Jul 9, 2015
  12. gijs007

    gijs007 Active Member

    Messages:
    81
    Likes Received:
    0
    GPU:
    AMD Sapphire R9 290
    Is the driver overhead also reduced in DirectX 10 games (e.g. Flight Simulator X)?
     
  13. PrMinisterGR

    PrMinisterGR Ancient Guru

    Messages:
    7,004
    Likes Received:
    137
    GPU:
    Sapphire 7970 Quadrobake
    Almost nothing works from the CCC. There is no real way to truly force anything. Only older titles really work.
     
  14. JonMS

    JonMS Active Member

    Messages:
    72
    Likes Received:
    0
    GPU:
    2x EVGA 980Ti FTW
    No DX12 CrossFire support yet.
     
  15. sammarbella

    sammarbella Ancient Guru

    Messages:
    3,931
    Likes Received:
    178
    GPU:
    290X Lightning CFX (H2O)
    The API overhead test gives me similar results for Mantle and DX12, with or without CFX enabled.
     

  16. PrMinisterGR

    PrMinisterGR Ancient Guru

    Messages:
    7,004
    Likes Received:
    137
    GPU:
    Sapphire 7970 Quadrobake
    Do you have to have it enabled at the driver level for DX12? Isn't the app supposed to do that?
     
  17. janos666

    janos666 Master Guru

    Messages:
    659
    Likes Received:
    43
    GPU:
    MSI GTX1070 SH EK X 8Gb
    Maybe because the 3DMark API overhead test is basically a CPU benchmark...
    In DX12 and Mantle modes it's a benchmark of raw multi-threaded CPU performance, and either 1: a benchmark of the VGA driver's CPU optimizations on a given CPU, or 2: a CPU benchmark if you test different CPUs with a given VGA driver.
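A rough model of why the test behaves that way: the score is essentially how many draw calls the CPU side (API + driver) can submit per second, and the big difference between APIs is how much of a multi-core CPU the submission path can actually use. All numbers and the scaling factors below are hypothetical, purely to show the shape of the effect.

```python
# DX11 submission is mostly single-threaded; DX12/Mantle let many
# threads build command lists, so they scale with core count.

def max_draw_calls(per_core_calls_per_sec, cores, api_scaling):
    """api_scaling: fraction of each extra core the API can use (0..1)."""
    return per_core_calls_per_sec * (1 + api_scaling * (cores - 1))

CORE = 300_000  # hypothetical single-core submission rate

print(max_draw_calls(CORE, 8, 0.1))  # DX11-like: barely scales past 1 core
print(max_draw_calls(CORE, 8, 0.9))  # DX12/Mantle-like: near-linear scaling
```

With the low-level APIs the curve tracks core count almost directly, which is why the same test reads as a driver benchmark on one CPU and as a CPU benchmark across different CPUs.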
     
  18. Dygaza

    Dygaza Master Guru

    Messages:
    535
    Likes Received:
    0
    GPU:
    Fury X 4GB
    Yes, and the 3DMark API test maxes out a 4c/8t CPU with a 290X in Mantle or DX12, so you need more CPU cores.

    Or you can try underclocking your card's core clock to bring performance down and test whether CF works.
     
  19. PrMinisterGR

    PrMinisterGR Ancient Guru

    Messages:
    7,004
    Likes Received:
    137
    GPU:
    Sapphire 7970 Quadrobake
    These are my results with Catalyst 15.7 on Windows 10 build 10240.

    [image: API overhead test results]

    The DX12 results are mind-boggling.

    This is the graphics score:

    [image: graphics score]
     
    Last edited: Jul 17, 2015
  20. janos666

    janos666 Master Guru

    Messages:
    659
    Likes Received:
    43
    GPU:
    MSI GTX1070 SH EK X 8Gb
    Yes. It's interesting how a low-level but still manufacturer-independent API seems to outperform an equally low-level but manufacturer-specific API in raw CPU performance. Although MS obviously knows the Windows kernel better, and could even alter the Win10 kernel specifically to accommodate the needs/wishes of this new low-level DX API (which might or might not help a theoretical future Mantle version as well).

    I wonder if AMD's high-level DirectX and low-level Mantle drivers have optimizations for a "generic amd64" target and for some specific AMD CPU architectures (automatic compiler optimizations, or maybe even some hand-written ASM code) but are not optimized at all for Intel CPUs (no support for any instruction set feature that could be handled by, and/or would be faster on, Intel CPUs but not on AMD CPUs). It would make sense from the point of view of a CPU manufacturer and "platform pusher". In contrast, from nVidia's point of view it's better to optimize for both (since they only sell GPUs and don't directly compete with AMD on the CPU front), or for Intel CPUs only (since they do compete with AMD indirectly; although it's probably more important for them to make their GPUs run as fast as possible on any CPU than to indirectly undermine AMD's CPU business, and they don't even want a CPU monopoly).
     
    Last edited: Jul 17, 2015
