High DX11 CPU overhead, very low performance.

Discussion in 'Videocards - AMD Radeon Drivers Section' started by PrMinisterGR, May 4, 2015.

  1. PrMinisterGR

    PrMinisterGR Ancient Guru

    Messages:
    8,125
    Likes Received:
    969
    GPU:
    Inno3D RTX 3090
    Damn, those ATI engineers did something with GCN :3eyes:

    Any sources for this?
    I don't know if you're doing it on purpose, but the reason this whole thing is being asked is not the framerate itself, but the percentage by which the framerate drops as you lower your CPU spec.
    For NVIDIA it is so much better that people recommend lower-tier NVIDIA cards. Digital Foundry recommended a GTX 750 Ti over an R9 280 for people with slower CPUs.
    As for the card benchmarks, here they are.
     
  2. Yecnot

    Yecnot Guest

    Messages:
    857
    Likes Received:
    0
    GPU:
    RTX 3080Ti
    It would be better to compare 15.4 W7/8.1 vs 15.4 W10TP vs WDDM 2.0 drivers in DX11 overhead, because we don't know if using WDDM 1.3 on Win10 results in reduced performance.

    HWCompare vs Anandtech Benches
    Specs vs Realworld Performance
     
    Last edited: May 4, 2015
  3. sammarbella

    sammarbella Guest

    Messages:
    3,929
    Likes Received:
    178
    GPU:
    290X Lightning CFX (H2O)
    YOUR interpretation of a fair comparison is what is not logical.

    People buying low-end GPUs like the 750 Ti or 270X are not using them in high-end PCs.

    In which context do you expect a 750 Ti or a 270X GPU to be used?

    In a PC with an Intel® Core™ i7-5960X Processor Extreme Edition octa-core with 128 GB of RAM, a RAID 0 of four 1 TB SSDs, and 6 monitors attached?

    These GPUs will be used in low-end i3 PCs with 4 GB of RAM in 90% of cases.

    Matching low-end GPUs with low-end CPUs in the context of low-end PCs is the logical and FAIR comparison, not your "logic".

    DX12 support down to 7000 series?

    Really?

    Maybe for cards rebranded as the 280, like they did with VSR. :D

    What a joke!

    I don't think you are a user with multiple nicknames or a troll, you must be the Joker himself!

    :nerd:

    Edit: I added an incorrect "x" after "280"
     
    Last edited: May 4, 2015
  4. theoneofgod

    theoneofgod Ancient Guru

    Messages:
    4,677
    Likes Received:
    287
    GPU:
    RX 580 8GB
    DX12 is working for my cards.
     

  5. Krteq

    Krteq Maha Guru

    Messages:
    1,129
    Likes Received:
    764
    GPU:
    MSI RTX 3080 WC
    GCN is a fully programmable architecture; it's quite similar to a CPU (not the same, but almost identical in some approaches).

    Here is something about asynchronous shaders
    TweakTown: Asynchronous Shaders - AMD's Secret Weapon?

    Some stuff about resource binding - it's almost the same as bindless resource model in Mantle API
    Advanced DirectX12 Graphics and Performance - GDC 2015
    Microsoft* Direct3D* 12: New API Details and Intel Optimizations

    You can find a lot of the stuff used in these tables in the DX12 and Mantle whitepapers and the GCN whitepaper.

    Anyway, there is a slide from another Intel DX12 presentation which is related to this thread. Intel practically said that DX11 deferred contexts were a mistake (same as Huddy in 2008)

    [​IMG]

    And just for fun - differences in the DX12 and Mantle programming guides :)
     
    Last edited: May 4, 2015
  6. sammarbella

    sammarbella Guest

    Messages:
    3,929
    Likes Received:
    178
    GPU:
    290X Lightning CFX (H2O)
    Which is a 7950, also known as the 280 lately. :)
     
  7. Krteq

    Krteq Maha Guru

    Messages:
    1,129
    Likes Received:
    764
    GPU:
    MSI RTX 3080 WC
    Nope, it's not a joke ;)

    Please re-read my previous posts and do some research. There is a ton of information about this around the web (a lot of it from GDC 2015).

    By the way, VSR is HW-based scaling; it's not SW-based like DSR. There are some HW limitations, but it's faster than a SW solution.
     
    Last edited: May 4, 2015
  8. sammarbella

    sammarbella Guest

    Messages:
    3,929
    Likes Received:
    178
    GPU:
    290X Lightning CFX (H2O)
    I hope you are right, and for AMD's sake, that they will support older, still well-performing GPUs as you said.

    From previous experience with VSR and FreeSync support, we can see how they drop support for not-so-old GPUs.
     
  9. theoneofgod

    theoneofgod Ancient Guru

    Messages:
    4,677
    Likes Received:
    287
    GPU:
    RX 580 8GB
    I know. It sounded like you didn't think AMD would release DX12 for my 7950, like they did with VSR.
     
  10. Krteq

    Krteq Maha Guru

    Messages:
    1,129
    Likes Received:
    764
    GPU:
    MSI RTX 3080 WC
    I still don't get it. What's wrong with VSR and Freesync support?
     

  11. theoneofgod

    theoneofgod Ancient Guru

    Messages:
    4,677
    Likes Received:
    287
    GPU:
    RX 580 8GB
    I'm very surprised that my cards are basically "full" DX12 ready. I hope AMD makes use of all those functions in their drivers.
     
  12. sammarbella

    sammarbella Guest

    Messages:
    3,929
    Likes Received:
    178
    GPU:
    290X Lightning CFX (H2O)
    Nothing wrong with VSR itself; it's working fine for the most part.

    The "hardware" limiting factor was proven false on the 6000 and 7000 series by "software" tricks like registry codes and modded drivers adding VSR to GPUs without this required "hardware".

    AMD's real limiting factor in supporting older GPUs is more a commercial decision than a real hardware limitation in most cases.

    So it will not be very strange if AMD says DX12 will not be supported on older GPUs due to "hardware" limitations... again.
     
  13. theoneofgod

    theoneofgod Ancient Guru

    Messages:
    4,677
    Likes Received:
    287
    GPU:
    RX 580 8GB
    It was known pretty much straight away that AMD would support all their GCN GPUs.
     
  14. Krteq

    Krteq Maha Guru

    Messages:
    1,129
    Likes Received:
    764
    GPU:
    MSI RTX 3080 WC
    Weren't these mods performed on the 15.3 Beta drivers? There are some changes in the driver binaries for sure, and they could have added some partial support for "SW VSR" there, but it has not been allowed/defined in the registry yet for some reason.

    This is the most likely scenario IMO.

    We are getting too OT now; let's stick to DX11 CPU overhead.
     
    Last edited: May 4, 2015
  15. sammarbella

    sammarbella Guest

    Messages:
    3,929
    Likes Received:
    178
    GPU:
    290X Lightning CFX (H2O)
    I agree with you: we are going OT. :)

    VSR workarounds are well discussed in other threads.

    About DX11, my opinion is pessimistic: if AMD hasn't improved its DX11 driver performance compared to Nvidia's until now, they are not going to improve it in a WDDM 2.0-compatible driver.
     

  16. PrMinisterGR

    PrMinisterGR Ancient Guru

    Messages:
    8,125
    Likes Received:
    969
    GPU:
    Inno3D RTX 3090
    (Matthew McConaughey voice): Alright alright alright. So, if the cap bits you posted are correct (and I see no reason why they wouldn't be), then all AMD hardware since GCN 1.0 is fully DX12 capable?

    In the DX12 Microsoft presentation you linked it seems like DX12 will have three capability tiers.
    [​IMG]
    If the hardware support you posted is correct,
    [​IMG]
    [​IMG]
    then GCN 1.0 has full DX12 support in hardware, while it took NVIDIA until Maxwell v2 and they still don't? :nerd: :infinity: :D


    PS: If that's indeed the case, it makes the driver situation even sadder. :(
     
  17. yasamoka

    yasamoka Ancient Guru

    Messages:
    4,875
    Likes Received:
    259
    GPU:
    Zotac RTX 3090
    I don't know if you're slow, but the very bench you linked to shows that the 270X is far from being twice as fast as the 750 Ti. Unless you want to tell me that Company of Heroes 2 runs like it does on the 270X, at which point I give up (hint: CoH2 runs better on AMD cards - http://www.anandtech.com/bench/product/1031?vs=1037)

    270X = 7870 ~ GTX580+, 750Ti ~ GTX570.

    Again, for the third time, you CANNOT COMPARE shader unit counts for two cards from DIFFERENT hardware architectures. If you knew a thing about computer hardware architecture, you would know such comparisons are meaningless. Read up on microarchitectures and how instructions based on a specific ISA (Instruction Set Architecture) are executed in hardware, then come back and argue this very topic. Until then, if I were you, I would stop making assumptions and drawing conclusions from them.
     
  18. PrMinisterGR

    PrMinisterGR Ancient Guru

    Messages:
    8,125
    Likes Received:
    969
    GPU:
    Inno3D RTX 3090
    This is the last time I will actually reply to you, since it seems like you don't want to get the point.

    The actual FPS don't matter. What matters is the percentage of performance loss for each card when you lower the CPU spec. What part of that can't you get? (Wait, at this point I don't want to know.)
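    The metric being argued for here can be sketched in a few lines of Python. All FPS numbers below are invented purely for illustration; only the percentage-loss calculation matters, not the absolute figures.

    ```python
    # What matters is not absolute FPS, but how much each card loses
    # when the CPU is downgraded: a proxy for driver CPU overhead.

    def cpu_scaling_loss(fps_fast_cpu: float, fps_slow_cpu: float) -> float:
        """Percentage of performance lost when moving to the slower CPU."""
        return (fps_fast_cpu - fps_slow_cpu) / fps_fast_cpu * 100.0

    # Hypothetical measurements: (fast-CPU FPS, slow-CPU FPS)
    cards = {
        "Card A": (60.0, 57.0),   # loses little -> low driver CPU overhead
        "Card B": (75.0, 52.0),   # loses a lot  -> high driver CPU overhead
    }

    for name, (fast, slow) in cards.items():
        print(f"{name}: {cpu_scaling_loss(fast, slow):.1f}% loss")
    # Card A loses 5.0%, Card B loses 30.7%: B is faster in absolute FPS
    # on the fast CPU, yet scales far worse as the CPU spec drops.
    ```

    On these made-up numbers, Card B wins every absolute benchmark but is the worse buy for a slow-CPU system, which is exactly the distinction between comparing cards and comparing drivers.
    
    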
     
  19. Yecnot

    Yecnot Guest

    Messages:
    857
    Likes Received:
    0
    GPU:
    RTX 3080Ti
    He doesn't get that we're comparing the performance of drivers and not cards.
     
  20. yasamoka

    yasamoka Ancient Guru

    Messages:
    4,875
    Likes Received:
    259
    GPU:
    Zotac RTX 3090
    Oh, no. I do get the point, you see. But there was a claim thrown about that the 270X is double the performance of the 750 Ti, based on the hardware spec sheet you saw and linked to, which is a false assumption. And then, from that assumption, the conclusion that AMD's DX11 driver implementation is responsible for the 270X showing limited performance gains over the 750 Ti.

    When a 750 Ti is recommended over a 280 in CPU-limited scenarios, let me share that I would also recommend a 750 Ti over a 770 in CPU-limited scenarios. This might be obvious, but lower-end cards are always suggested over higher-end cards in CPU-limited scenarios. So this example of AMD's drivers lagging behind in CPU overhead is also unsuitable; it would be more suitable to find examples where an Nvidia card and an AMD card perform almost exactly the same when paired with a top-end CPU, but the AMD card lags behind when paired with a less powerful CPU, as this topic is trying to show.

    Shall I repeat once more, or has this been understood?
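    The test described in the post above can be made precise with a small Python sketch. The tolerance value and all FPS figures are invented for illustration; the point is the shape of the comparison, not the numbers.

    ```python
    # Sketch of the proposed test: pick two cards that tie with a top-end CPU,
    # so that any gap opening up on a weaker CPU points at driver CPU overhead
    # rather than raw GPU power.

    def driver_overhead_signal(fps_a, fps_b, tie_tolerance=0.03):
        """Given (fast_cpu, slow_cpu) FPS tuples for cards A and B, return the
        slow-CPU gap (as a fraction of A's FPS) if the cards tied on the fast
        CPU, else None (the comparison is not CPU-isolated)."""
        fast_a, slow_a = fps_a
        fast_b, slow_b = fps_b
        if abs(fast_a - fast_b) / fast_a > tie_tolerance:
            return None  # cards differ even when GPU-bound: not a clean test
        return (slow_a - slow_b) / slow_a

    # Hypothetical: both cards hit ~100 FPS on a fast CPU,
    # but card B drops much harder on a slow one.
    gap = driver_overhead_signal((100.0, 80.0), (101.0, 60.0))
    print(f"slow-CPU gap: {gap:.0%}" if gap is not None else "not comparable")
    ```

    A pair that already differs on the fast CPU is rejected outright, which is why the 270X vs 750 Ti comparison in this thread is disputed: the cards are not equal to begin with.
    
    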
     
