Do AMD 7XXX series cards support directX 12?

Discussion in 'Videocards - AMD Radeon' started by DSparil, Jul 30, 2015.

  1. DSparil

    DSparil Guest

    Messages:
    3,295
    Likes Received:
    33
    GPU:
    GeForce RTX 3080
The question is in the title! If not, what is the oldest series of AMD cards that will support the new benefits of Win10 and DX12? Thanks, gurus

    :007:
     
  2. PrMinisterGR

    PrMinisterGR Ancient Guru

    Messages:
    8,125
    Likes Received:
    969
    GPU:
    Inno3D RTX 3090
    Yes they do.
     
  3. theoneofgod

    theoneofgod Ancient Guru

    Messages:
    4,677
    Likes Received:
    287
    GPU:
    RX 580 8GB
>= GCN 1.0
     
  4. OnnA

    OnnA Ancient Guru

    Messages:
    17,791
    Likes Received:
    6,691
    GPU:
    TiTan RTX Ampere UV
    Yes DX12.x for now ;-)

[images: DX12 feature level support tables]
     
    Last edited: Jul 31, 2015

  5. Alessio1989

    Alessio1989 Ancient Guru

    Messages:
    2,939
    Likes Received:
    1,239
    GPU:
    .

Can you guys - please - stop posting this table? It is fake and only contributes to spreading false information.
     
    Last edited: Jul 31, 2015
  6. OnnA

    OnnA Ancient Guru

    Messages:
    17,791
    Likes Received:
    6,691
    GPU:
    TiTan RTX Ampere UV
It's from the Stardock developers -> they were the first to fully utilise DX12.3 in their upcoming 4K RTS.
So I think it's real. OMG, AMD is responsible for Mantle ;-) and from Mantle emerged DX12.x + OGL Vulkan :) Everybody knows that fact :infinity:
    -> here

    https://youtu.be/t9UACXikdR0
     
    Last edited: Jul 31, 2015
  7. theoneofgod

    theoneofgod Ancient Guru

    Messages:
    4,677
    Likes Received:
    287
    GPU:
    RX 580 8GB
    12.3, do you mean 12.1?
     
  8. OnnA

    OnnA Ancient Guru

    Messages:
    17,791
    Likes Received:
    6,691
    GPU:
    TiTan RTX Ampere UV
I think for versions like 12.1, 12.2, etc. we have to wait for games that really take advantage of base DX12.0 in the first place ;-)
I think games will be as always: 90% will have 12.0 (Radeon and nV 9xx fully compatible)
The rest will be 12.3 from EA ;-) Like today we have 90% of games on DX11.0, but EA already has BF4, BF Hardline and NFS Rivals on the new DX11.1, and YES, nV is unable to get DX11.1 in any game, despite what they say in news/drivers etc. ;-) -> grab any YT gameplay from an nV 7xx or even 9xx and you'll see only base DX11 in those games!

    Here from my PC ->

[image]

and DX12.3 on GCN, and I think on Fiji only ;-)

[image]
     
    Last edited: Jul 31, 2015
  9. theoneofgod

    theoneofgod Ancient Guru

    Messages:
    4,677
    Likes Received:
    287
    GPU:
    RX 580 8GB
There's no such thing as feature level 12.3; it currently stops at 12.1. Tier 3 means something else. DX12 games will be using feature level 11.0 and above.
     
  10. thatguy91

    thatguy91 Guest

It's not actually 12.1, it's 12_1. The distinction is important because, regardless of feature level 11_0, 11_1, 12_0, or 12_1, they are ALL DirectX 12.0. The feature level is also a bit misleading, because it is only based on certain requirements. In reality, games probably won't make use of feature level 12_1 for a while (since it supposedly requires much more power for the added graphics quality), especially if it is restricted to one card series at the moment. What would be interesting is whether resource binding tier 3 is any better performance-wise than, say, tier 1 or 2... :).

For the end user, it is unlikely that there will be any noticeable difference between the feature levels for a couple of years, unless of course Nvidia decides to 'encourage' developers to make use of the added features of feature level 12_1. You'd most likely need a 980/980 Ti to make use of this.
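
    For what it's worth, nobody has to guess what their own card reports: the D3D12 runtime will say. Below is a minimal sketch, assuming a Windows 10 SDK install and Visual Studio 2015 with d3d12.lib linked (variable names and output wording are just illustrative), that creates a device at the 11_0 minimum and then asks for the highest supported feature level and the resource binding tier. Whatever it prints is the driver's own answer, which beats any third-party table.

    [code]
    // Minimal sketch: ask the D3D12 runtime which feature level and resource
    // binding tier the installed GPU actually reports.
    // Assumes a Windows 10 SDK; link against d3d12.lib.
    #include <d3d12.h>
    #include <wrl/client.h>
    #include <cstdio>

    using Microsoft::WRL::ComPtr;

    int main()
    {
        ComPtr<ID3D12Device> device;
        // Every DirectX 12 GPU must at least come up at feature level 11_0.
        if (FAILED(D3D12CreateDevice(nullptr, D3D_FEATURE_LEVEL_11_0,
                                     IID_PPV_ARGS(&device))))
        {
            std::printf("No Direct3D 12 capable adapter found.\n");
            return 1;
        }

        // Ask which of the known feature levels the device supports.
        const D3D_FEATURE_LEVEL requested[] = {
            D3D_FEATURE_LEVEL_11_0, D3D_FEATURE_LEVEL_11_1,
            D3D_FEATURE_LEVEL_12_0, D3D_FEATURE_LEVEL_12_1
        };
        D3D12_FEATURE_DATA_FEATURE_LEVELS levels = {};
        levels.NumFeatureLevels = static_cast<UINT>(sizeof(requested) / sizeof(requested[0]));
        levels.pFeatureLevelsRequested = requested;
        if (SUCCEEDED(device->CheckFeatureSupport(D3D12_FEATURE_FEATURE_LEVELS,
                                                  &levels, sizeof(levels))))
            std::printf("Max feature level: 0x%04x\n",
                        static_cast<unsigned>(levels.MaxSupportedFeatureLevel));

        // The resource binding tier is reported separately from the feature level.
        D3D12_FEATURE_DATA_D3D12_OPTIONS opts = {};
        if (SUCCEEDED(device->CheckFeatureSupport(D3D12_FEATURE_D3D12_OPTIONS,
                                                  &opts, sizeof(opts))))
            std::printf("Resource binding tier: %d\n",
                        static_cast<int>(opts.ResourceBindingTier));
        return 0;
    }
    [/code]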
     
    Last edited by a moderator: Jul 31, 2015

  11. Fox2232

    Fox2232 Guest

    Messages:
    11,808
    Likes Received:
    3,371
    GPU:
    6900XT+AW@240Hz
That table is not fake, it is just old/outdated.
M$ wanted DX12.0 running on as many cards as possible, so they changed the requirements table in a way nVidia could meet. So on paper nVidia now looks much better numbering-wise than before, and AMD worse.

We will not know what AMD/nV have transistors for and what is emulated until there is an actual game using those features. And even then it may be hard to tell.
One would have to make a benchmark for each separate feature and measure the differences between each architecture revision.
(Like putting an HD 7950 up against an R9 285.)
     
  12. Alessio1989

    Alessio1989 Ancient Guru

    Messages:
    2,939
    Likes Received:
    1,239
    GPU:
    .
That table is fake. There is no such thing as DX12.3, and I am a DX12 EAP member; I know what I am talking about. Finally, the Stardock devs NEVER posted that table.

It is fake. A small example: only Maxwell 2.0 GPUs have Conservative Rasterization support (and Tier 1 only).

If you guys want to know the current state of hardware D3D12 support, have a look at this page: https://en.wikipedia.org/wiki/Feature_levels_in_Direct3D Even if it is not 100% accurate (I cannot say more due to NDA until MS updates its MSDN documentation), it describes the current public state of the API on current hardware quite well (like 95% accurate).
     
    Last edited: Jul 31, 2015
  13. Fox2232

    Fox2232 Guest

    Messages:
    11,808
    Likes Received:
    3,371
    GPU:
    6900XT+AW@240Hz
Everyone on the W10 Insider Preview was effectively an EAP member for DX12, because everyone could download Visual Studio 2015 RC, which is what you need to do DX12 development.
     
  14. Alessio1989

    Alessio1989 Ancient Guru

    Messages:
    2,939
    Likes Received:
    1,239
    GPU:
    .
So what? Did you have the ISV specification doc? Did you have access to the private forums? I don't think so. Yes, me -> developer. I do not work for a big company, I am independent (well, currently unemployed -.-), but I know what I am talking about.

Every Windows 10 Insider had access only to the compiled binaries (since March, IIRC), and you can do nothing with binaries alone.
     
  15. Fox2232

    Fox2232 Guest

    Messages:
    11,808
    Likes Received:
    3,371
    GPU:
    6900XT+AW@240Hz
    https://msdn.microsoft.com/en-us/library/dn899118(v=vs.85).aspx
    I hope it is clear.
     

  16. Alessio1989

    Alessio1989 Ancient Guru

    Messages:
    2,939
    Likes Received:
    1,239
    GPU:
    .
So what? The public SDK has only contained the DX12 SDK bits since spring 2015. Before that, only DX12 EAP members had access to the DX12 SDK bits, and anyone who didn't sign up to that program still doesn't have the NDA ISV specifications (which are not the same as the MSDN programming manual and reference) or the other NDA documents. That's not a secret.

Anyway, I still do not understand what you are getting at. That table is fake, and no one is able to prove otherwise. You guys are just spreading useless scaremongering - which nobody needs - about things you do not understand, things that are meant to be useful only to developers.
     
  17. Fox2232

    Fox2232 Guest

    Messages:
    11,808
    Likes Received:
    3,371
    GPU:
    6900XT+AW@240Hz
That table has only a few minor errors. That does not make it any more fake than your comment.
It lists most of the features correctly.
     
  18. theoneofgod

    theoneofgod Ancient Guru

    Messages:
    4,677
    Likes Received:
    287
    GPU:
    RX 580 8GB
DXDIAG lists the feature levels as 11.0, 11.1, etc.
     
  19. Alessio1989

    Alessio1989 Ancient Guru

    Messages:
    2,939
    Likes Received:
    1,239
    GPU:
    .

- Kepler and Maxwell 1.0 are Tier 2 GPUs for resource binding.
- GCNs do not support Conservative Rasterization, and neither do Intel Gen 7.5 and Gen 8; moreover, CR is enumerated in 3 tiers.
- GCNs do not support Volume Tiled Resources (aka Tier 3 of Tiled Resources) and CANNOT emulate them (e.g. through a 2D array).
- Typed UAV loads are not supported on Fermi and Kepler GPUs.
- GCNs do not support ROVs (GL_INTEL_fragment_shader_ordering doesn't matter here).
- Async DMA (aka Async Copy) and Async Compute are part of the API; they do not require hardware support (they are serialized by the driver by default).
- Async Copy is beneficial across all hardware (i.e. with a dedicated hardware copy engine).
- Async Compute is beneficial only on Maxwell 2.0 among NVIDIA hardware (i.e. with a dedicated hardware compute engine).
- Conservative depth semantics (like SV_DepthGreater) are not a DX12 requirement and are supported across all FL 11.0 DX12 hardware.
- Atomic counters are NOT exposed by any DirectX API.
- All supported DX12 PC GPUs have HLSL double precision (aka FP64) enabled.
- The SAD4 shader instruction is supported across all DX12 hardware and is not emulated, since it is a shader intrinsic.
- Stencil reference in the PS is supported only by GCNs.
- 3D stereoscopy has not been exposed by any DirectX API since DirectDraw.

    Yeah, just a couple of minor errors...

    https://www.youtube.com/watch?v=Wui-PNqJrxs
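
    Rather than trading screenshots, anyone can dump these caps straight from their driver. Here is a rough sketch, assuming the Windows 10 SDK and Visual Studio 2015 with d3d12.lib linked; the printed labels are mine, but the struct fields come from d3d12.h. Note that Async Copy/Compute will not show up here because, as said above, they are API-level queue types rather than caps bits.

    [code]
    // Rough sketch: dump the D3D12_OPTIONS caps (tiers and feature bits) that
    // the list above argues about, straight from the installed driver.
    // Assumes a Windows 10 SDK; link against d3d12.lib.
    #include <d3d12.h>
    #include <wrl/client.h>
    #include <cstdio>

    using Microsoft::WRL::ComPtr;

    int main()
    {
        ComPtr<ID3D12Device> device;
        if (FAILED(D3D12CreateDevice(nullptr, D3D_FEATURE_LEVEL_11_0,
                                     IID_PPV_ARGS(&device))))
            return 1;

        D3D12_FEATURE_DATA_D3D12_OPTIONS o = {};
        if (FAILED(device->CheckFeatureSupport(D3D12_FEATURE_D3D12_OPTIONS,
                                               &o, sizeof(o))))
            return 1;

        std::printf("Resource binding tier:           %d\n", static_cast<int>(o.ResourceBindingTier));
        std::printf("Tiled resources tier:            %d\n", static_cast<int>(o.TiledResourcesTier));
        std::printf("Conservative rasterization tier: %d\n", static_cast<int>(o.ConservativeRasterizationTier));
        std::printf("Rasterizer ordered views (ROVs): %s\n", o.ROVsSupported ? "yes" : "no");
        std::printf("Typed UAV loads (extra formats): %s\n", o.TypedUAVLoadAdditionalFormats ? "yes" : "no");
        std::printf("Stencil ref from pixel shader:   %s\n", o.PSSpecifiedStencilRefSupported ? "yes" : "no");
        std::printf("FP64 (double) shader ops:        %s\n", o.DoublePrecisionFloatShaderOps ? "yes" : "no");
        return 0;
    }
    [/code]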
     
    Last edited: Jul 31, 2015
  20. Fox2232

    Fox2232 Guest

    Messages:
    11,808
    Likes Received:
    3,371
    GPU:
    6900XT+AW@240Hz
Post #5 images:
https://forums.geforce.com/default/...0-will-not-have-all-the-directx-12-features-/

As I wrote, that table is old and its problem is that it is outdated. Therefore some of its information may no longer be correct.
[image]
So, someone here with a Kepler or Maxwell 1.0 card can run it and show how they support "Tiled Resources Tier 2".
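
    For anyone who would rather not hunt down that tool, a quick sketch along the same lines (assuming the Windows 10 SDK; link d3d12.lib and dxgi.lib; the output format is just illustrative) can walk every adapter in the machine and print the tiled resources tier its driver reports:

    [code]
    // Quick sketch: list every adapter in the machine and the tiled resources
    // tier its D3D12 driver reports.
    // Assumes a Windows 10 SDK; link against d3d12.lib and dxgi.lib.
    #include <d3d12.h>
    #include <dxgi1_4.h>
    #include <wrl/client.h>
    #include <cstdio>

    using Microsoft::WRL::ComPtr;

    int main()
    {
        ComPtr<IDXGIFactory4> factory;
        if (FAILED(CreateDXGIFactory1(IID_PPV_ARGS(&factory))))
            return 1;

        ComPtr<IDXGIAdapter1> adapter;
        for (UINT i = 0;
             factory->EnumAdapters1(i, &adapter) != DXGI_ERROR_NOT_FOUND; ++i)
        {
            DXGI_ADAPTER_DESC1 desc = {};
            adapter->GetDesc1(&desc);

            ComPtr<ID3D12Device> device;
            if (FAILED(D3D12CreateDevice(adapter.Get(), D3D_FEATURE_LEVEL_11_0,
                                         IID_PPV_ARGS(&device))))
                continue;  // skip adapters without DirectX 12 support

            D3D12_FEATURE_DATA_D3D12_OPTIONS o = {};
            device->CheckFeatureSupport(D3D12_FEATURE_D3D12_OPTIONS, &o, sizeof(o));
            std::wprintf(L"%ls -> tiled resources tier %d\n",
                         desc.Description, static_cast<int>(o.TiledResourcesTier));
        }
        return 0;
    }
    [/code]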
     
