DirectX 12 First Tests

Discussion in 'Videocards - AMD Radeon Drivers Section' started by PrMinisterGR, Aug 17, 2015.

  1. theoneofgod

    theoneofgod Ancient Guru

    Messages:
    4,677
    Likes Received:
    287
    GPU:
    RX 580 8GB
    Find the link :)
     
  2. Ironjer

    Ironjer Active Member

    Messages:
    67
    Likes Received:
    1
    GPU:
    2x AMD 295X2 CrossfireX
  3. PrMinisterGR

    PrMinisterGR Ancient Guru

    Messages:
    8,129
    Likes Received:
    971
    GPU:
    Inno3D RTX 3090
    That's another Gaming Evolved title, so the point is well taken, but we get the undertones.

    One question: Does AMD even consider adding proper Vsync/TripleBuffering/RenderAhead/FrameLimiter controls, or not? It will really influence my next purchase. Thanks.
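    Not speaking for AMD here, but the frame limiter part of that list is conceptually simple; a minimal sketch of what such a cap does, assuming a fixed 60 fps target (real driver-level limiters pace against presentation timestamps and are far more precise):

    Code:
    // Sketch of a basic frame limiter: sleep off whatever is left of each frame's budget.
    #include <chrono>
    #include <thread>

    int main() {
        using clock = std::chrono::steady_clock;
        // ~60 fps cap: one frame every 16.667 ms (illustrative target).
        const auto frame_budget =
            std::chrono::duration_cast<clock::duration>(std::chrono::microseconds(16667));

        auto next_deadline = clock::now() + frame_budget;
        for (int frame = 0; frame < 600; ++frame) {
            // ... the game would simulate, render and present the frame here ...

            // Sleep until the deadline, i.e. burn off the unused part of the budget.
            std::this_thread::sleep_until(next_deadline);

            // Push the deadline forward; if a slow frame blew the budget,
            // restart from "now" instead of accumulating a backlog.
            next_deadline += frame_budget;
            if (next_deadline < clock::now())
                next_deadline = clock::now();
        }
    }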

    Every new game out there has treated this CPU better and better. It won't be faster than overclocked i5's, but it will be faster than the crippled i5 you might get at the same price. The total price of the platform is lower too.

    Nvidia has had "Game Ready" drivers for AotS for a week now. These are the drivers everybody tested with. As for the rest, the overhead thread is where you need to go.
     
  4. AMDFreeSync

    AMDFreeSync Guest

    Messages:
    271
    Likes Received:
    0
    GPU:
    MSI Radeon R9 390 8GB
    Nvidia knows that GameWorks is a pain in the ass for AMD users, but now the other cheek has been bitten hard, and Nvidia is saying Ashes of the Singularity is not a real DX12 benchmark. LMAO.
     

  5. Goiur

    Goiur Maha Guru

    Messages:
    1,341
    Likes Received:
    632
    GPU:
    ASUS TUF RTX 4080
    Wait for their "real benchmark", with results hacked to look good, for more info.

    PS: Was it back in the "ATI 9800 vs nVidia 580" days the last time they cheated on benchmarks?
     
  6. Denial

    Denial Ancient Guru

    Messages:
    14,207
    Likes Received:
    4,121
    GPU:
    EVGA RTX 3080
    You mean like quack.exe and the FP16 demotion? Both sides cheat. Get over it.

    Nvidia calling AotS a bad test is dumb, I agree, but the results aren't even that impressive. All they show is that AMD's DX11 driver is garbage under huge draw call counts and that Nvidia can't find any additional performance in DX12.

    In most titles DX12 won't even have an impact unless you're on a low-end processor.
     
    Last edited: Aug 18, 2015
  7. PrMinisterGR

    PrMinisterGR Ancient Guru

    Messages:
    8,129
    Likes Received:
    971
    GPU:
    Inno3D RTX 3090
    I actually wonder if they can "cheat" now in the benchmarks.
     
  8. Denial

    Denial Ancient Guru

    Messages:
    14,207
    Likes Received:
    4,121
    GPU:
    EVGA RTX 3080
    What is cheating? Optimizing for the specific game? As long as they aren't impacting image quality in a negative way, it should be fine, no?
     
  9. Redemption80

    Redemption80 Guest

    Messages:
    18,491
    Likes Received:
    267
    GPU:
    GALAX 970/ASUS 970
    I would be incredibly surprised if the Nvidia DX11 drivers were already at such a level that DX12 offers little to no benefit.

    Why would Nvidia be spending any time/money on it?
     
  10. PrMinisterGR

    PrMinisterGR Ancient Guru

    Messages:
    8,129
    Likes Received:
    971
    GPU:
    Inno3D RTX 3090
    The whole point of the DX12 driver is that it is "thin": the manufacturer is not supposed to directly intervene in game code. That's the whole point of low-level programming in the end. If you read the Oxide blog, they said that NVIDIA suggested some changes to their shader code, and they incorporated them. In the olden days, NVIDIA would simply have replaced those shaders inside the driver. The problem with that kind of "cheating" is that different developers get different behaviour from the driver. If you make app A, which is famous, and NVIDIA optimizes your shader code in their driver, you're screwing the developer who makes the less famous app B and will never have a team of experts write code for them. And the driver behaves differently towards each application too. It creates two different classes of apps.

    DX12 and Vulkan are supposed to be the end of this.

    Optimizing the driver itself (scheduling within the GPU, etc.) is necessary and good, but the rest...
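    To make the "thin driver" point concrete, this is roughly what the application side looks like under D3D12: the game compiles its own HLSL and hands the driver finished bytecode plus fixed state in one pipeline state object, so there is far less room to quietly swap shaders behind its back. A minimal sketch only (CreateSimplePipeline is an illustrative name, error handling is omitted, and the d3dx12.h helper header from the DirectX samples is assumed):

    Code:
    // Sketch: the app compiles its own HLSL and bakes the bytecode into a PSO.
    // Assumes Windows 10 SDK headers plus d3dx12.h from the DirectX samples.
    #include <d3d12.h>
    #include <d3dcompiler.h>
    #include <wrl/client.h>
    #include "d3dx12.h"
    #pragma comment(lib, "d3d12.lib")
    #pragma comment(lib, "d3dcompiler.lib")

    using Microsoft::WRL::ComPtr;

    // Hypothetical helper: build a trivial graphics pipeline on an existing device.
    ComPtr<ID3D12PipelineState> CreateSimplePipeline(ID3D12Device* device,
                                                     ID3D12RootSignature* rootSig)
    {
        static const char kShaders[] =
            "float4 VSMain(float3 pos : POSITION) : SV_Position \n"
            "{ return float4(pos, 1.0f); } \n"
            "float4 PSMain() : SV_Target { return float4(1, 0, 0, 1); } \n";

        // The application (not the driver) decides when and how shaders compile.
        ComPtr<ID3DBlob> vs, ps;
        D3DCompile(kShaders, sizeof(kShaders) - 1, nullptr, nullptr, nullptr,
                   "VSMain", "vs_5_0", 0, 0, &vs, nullptr);
        D3DCompile(kShaders, sizeof(kShaders) - 1, nullptr, nullptr, nullptr,
                   "PSMain", "ps_5_0", 0, 0, &ps, nullptr);

        D3D12_INPUT_ELEMENT_DESC layout[] = {
            { "POSITION", 0, DXGI_FORMAT_R32G32B32_FLOAT, 0, 0,
              D3D12_INPUT_CLASSIFICATION_PER_VERTEX_DATA, 0 },
        };

        // Everything the driver needs is declared up front, in one immutable object.
        D3D12_GRAPHICS_PIPELINE_STATE_DESC desc = {};
        desc.pRootSignature = rootSig;
        desc.VS = { vs->GetBufferPointer(), vs->GetBufferSize() };
        desc.PS = { ps->GetBufferPointer(), ps->GetBufferSize() };
        desc.BlendState = CD3DX12_BLEND_DESC(D3D12_DEFAULT);
        desc.RasterizerState = CD3DX12_RASTERIZER_DESC(D3D12_DEFAULT);
        desc.DepthStencilState.DepthEnable = FALSE;
        desc.DepthStencilState.StencilEnable = FALSE;
        desc.SampleMask = D3D12_DEFAULT_SAMPLE_MASK;
        desc.InputLayout = { layout, 1 };
        desc.PrimitiveTopologyType = D3D12_PRIMITIVE_TOPOLOGY_TYPE_TRIANGLE;
        desc.NumRenderTargets = 1;
        desc.RTVFormats[0] = DXGI_FORMAT_R8G8B8A8_UNORM;
        desc.SampleDesc.Count = 1;

        ComPtr<ID3D12PipelineState> pso;
        device->CreateGraphicsPipelineState(&desc, IID_PPV_ARGS(&pso));
        return pso;
    }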

    That's a testament to the speed of the NVIDIA DX11 driver. I don't understand why NVIDIA and some NVIDIA fanbois feel bad about this.

    You were getting 100% of your hardware from day one. If anything, the people with Kepler cards and AMD users should be the upset ones.
     

  11. Redemption80

    Redemption80 Guest

    Messages:
    18,491
    Likes Received:
    267
    GPU:
    GALAX 970/ASUS 970
    I definitely don't think 100% would be a bad thing, I'm just not convinced it is 100%.

    As previously mentioned, it would be nice to see some CPU/GPU usage stats.
     
  12. PrMinisterGR

    PrMinisterGR Ancient Guru

    Messages:
    8,129
    Likes Received:
    971
    GPU:
    Inno3D RTX 3090
    I was referring to what can be done with DX11. It is nice as an NVIDIA owner to know that you get most of what your card can do, the moment you get it (see 980 DX11 performance). It's not so nice when NVIDIA changes architecture with every iteration, and you suddenly find that your old card could have been 35% faster, but nobody optimized the driver efficiently for it (see GTX 770).
     
  13. ---TK---

    ---TK--- Guest

    Messages:
    22,104
    Likes Received:
    3
    GPU:
    2x 980Ti Gaming 1430/7296
    Changing architecture is much better than releasing the 300 series as respins, imo. I had two sets of Kepler cards, 680 SLI and 780 Ti SLI, and never noticed anything going on performance-wise with drivers.
     
  14. PrMinisterGR

    PrMinisterGR Ancient Guru

    Messages:
    8,129
    Likes Received:
    971
    GPU:
    Inno3D RTX 3090
    It's a maintenance nightmare, and when you have a more or less stable architecture, you can support older hardware much better. Weren't you surprised that the 770 got a 35% increase under DX12? The Maxwells didn't; doesn't that ring any bells?
     
  15. janos666

    janos666 Ancient Guru

    Messages:
    1,653
    Likes Received:
    407
    GPU:
    MSI RTX3080 10Gb
    I was pessimistic, but I still cautiously hoped these hyped new APIs might bring some real and notable improvements on the GPU front as well (similarly to how advanced CPU instruction sets [SSE, AVX, FMA, ...] or even more architecture-specific optimizations [especially when done by hand, but even through automatic compiler optimizations] can bring really nice speedups in certain kinds of CPU tasks). It's still possible in the future, but I am more pessimistic after seeing some Mantle and some early DX12 results. It seems like it's really just about the CPU, not the GPU at all (if we consider only real and significant differences, and only on a SINGLE GPU).

    And I think that's a problem, because GPU evolution (in terms of raw horsepower) will slow down along with CPU evolution (just take a look at a Sandy Bridge <-> Skylake clock-for-clock benchmark, and at how they stall the move to 6 or 8 cores for the mainstream). It takes more and more time to get a new fabrication process working as expected, and even those bring diminishing returns in performance, while today's top VGAs are already "monsters" (300W+ beasts which should be tamed by a watercooler, and the prices seem to creep up from generation to generation, so I don't really want several of those in a single gaming PC...).
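    A toy example of the kind of instruction-set-specific speedup meant above, assuming an AVX2/FMA-capable CPU and a build with -mavx2 -mfma (GCC/Clang) or /arch:AVX2 (MSVC); the function names are just illustrative:

    Code:
    // Sketch: the same multiply-add loop, scalar vs. hand-vectorized AVX2/FMA.
    #include <immintrin.h>
    #include <cstddef>

    // y[i] += a * x[i], plain scalar version.
    void axpy_scalar(float a, const float* x, float* y, std::size_t n) {
        for (std::size_t i = 0; i < n; ++i)
            y[i] += a * x[i];
    }

    // Same thing, eight floats per iteration with fused multiply-add.
    void axpy_avx2(float a, const float* x, float* y, std::size_t n) {
        const __m256 va = _mm256_set1_ps(a);
        std::size_t i = 0;
        for (; i + 8 <= n; i += 8) {
            __m256 vx = _mm256_loadu_ps(x + i);
            __m256 vy = _mm256_loadu_ps(y + i);
            vy = _mm256_fmadd_ps(va, vx, vy);   // vy = va * vx + vy
            _mm256_storeu_ps(y + i, vy);
        }
        for (; i < n; ++i)                      // scalar tail for leftovers
            y[i] += a * x[i];
    }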


    Some noteworthy things I still remember from funny unofficial (fanboy) AMD marketing:
    1: "You should buy the HD4xxx because you will need DX 10.1 for anti-aliasing in games with deferred shading"
    2a: "You should buy the HD5xxx because it WILL BE faster in tessellation"
    2b: "You should buy the HD6xxx because now it's really faster in tessellation (faster than the earlier-gen Geforce which emulated the tessellation unit...)"
    3: "You should buy the HD7xxx because you will need GCN for real DX11 support and GPU computing in games [GPU-accelerated AI, physics, etc.]"

    Several years and VGA generations later (with a 290X, which could be called an HD8xxx for comparison), I have played tons of hours of DIA with Mantle (which is comparable to DX12) and:
    1: MSAA still doesn't work (the fps is significantly lower but the aliasing is virtually the same with 4x MSAA, so I obviously turn it off :3eyes:)
    2ab: I still turn tessellation off (completely) because the overall quality/performance ratio is still miserable in my opinion, even at the Low setting (it makes the ground a little more bumpy, but that's all [and it's not even that nice, and it's still weird around the edges of objects lying on the ground], yet it costs ~1/3 of the framerate even when there are barely any tessellated surfaces on the screen :3eyes:).
    3: still none of that is happening, and the techno-babble has been replaced; now I will need an APU (and its integrated GPU with access to the CPU memory) to do that kind of magic (even though PhysX could run just fine on "ancient" GeForce cards like the 8800GT, and most of the other physics engines are far from that level, but whatever...) and, of course, DX12 (or Mantle) instead of DX11 (before I forget to mention that in this topic). :infinity:
     
    Last edited: Aug 19, 2015

  16. Redemption80

    Redemption80 Guest

    Messages:
    18,491
    Likes Received:
    267
    GPU:
    GALAX 970/ASUS 970
    Does the 3DMark API overhead test match up with the results from this?
     
  17. janos666

    janos666 Ancient Guru

    Messages:
    1,653
    Likes Received:
    407
    GPU:
    MSI RTX3080 10Gb
    That is still a raw CPU speed benchmark (and a validation that the API technically works). This is a CPU+GPU graphics test. Apples and sharks.
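    For reference, an API overhead test of that kind works roughly like this: keep raising the number of draw calls per frame until the frame rate drops below a floor (30 fps in 3DMark's case), then report draw calls per second. A stand-alone sketch of the idea, with SubmitOneDrawCall as a stub standing in for the real per-draw API work:

    Code:
    // Sketch of an API-overhead style test: double the draw calls per frame
    // until the frame rate falls below 30 fps, then report draw calls/s.
    #include <chrono>
    #include <cstdio>

    volatile int g_sink = 0;

    // Stand-in for the CPU cost of encoding/submitting a single draw call.
    void SubmitOneDrawCall() {
        for (int i = 0; i < 2000; ++i) g_sink = g_sink + i;
    }

    int main() {
        using clock = std::chrono::steady_clock;
        const double kFpsFloor = 30.0;

        long bestDrawsPerSecond = 0;
        for (long drawsPerFrame = 1000; ; drawsPerFrame *= 2) {
            auto t0 = clock::now();
            for (long i = 0; i < drawsPerFrame; ++i)
                SubmitOneDrawCall();                       // one "frame"
            double frameSeconds =
                std::chrono::duration<double>(clock::now() - t0).count();

            double fps = 1.0 / frameSeconds;
            if (fps < kFpsFloor) break;                    // hit the frame rate floor
            bestDrawsPerSecond = static_cast<long>(drawsPerFrame * fps);
        }
        std::printf("max draw calls/s before dropping below 30 fps: %ld\n",
                    bestDrawsPerSecond);
    }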
     
  18. OneB1t

    OneB1t Guest

    Messages:
    263
    Likes Received:
    0
    GPU:
    R9 290X@R9 390X 1050/1325
    There is something weird with FX CPU performance in Ashes of the Singularity.

    It seems like these CPUs are not fully utilized, as the difference between the FX-6300 and the FX-8370 should be much higher (25% from core count and about 10% from frequency).
    The devs also stated before that the FX-8350 is close to the i7-4770 in this test.

    But I'm happy about the 290/390(X) performance, which is finally what it should have been from day one :)
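    A rough back-of-the-envelope for that expectation, assuming base clocks of 3.5 GHz (FX-6300) and 4.0 GHz (FX-8370) and perfectly linear scaling over cores and clocks, which real workloads never reach:

    Code:
    // Naive upper bound on FX-8370 vs FX-6300 scaling: core ratio * clock ratio.
    // Base clocks are assumed; turbo and shared-module effects are ignored.
    #include <cstdio>

    int main() {
        const double cores_6300 = 6.0, clock_6300 = 3.5;   // GHz, assumed
        const double cores_8370 = 8.0, clock_8370 = 4.0;   // GHz, assumed

        double core_gain  = cores_8370 / cores_6300;        // ~1.33x from cores
        double clock_gain = clock_8370 / clock_6300;        // ~1.14x from clocks
        std::printf("naive combined upper bound: %.2fx\n", core_gain * clock_gain);
    }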
     
  19. Spartan

    Spartan Guest

    Messages:
    676
    Likes Received:
    2
    GPU:
    R9 290 PCS+
    Nope...

    [IMG]

    Actually, FX CPUs are faster under DX12 than under DX11, but they're still a pile of junk.
     
  20. OneB1t

    OneB1t Guest

    Messages:
    263
    Likes Received:
    0
    GPU:
    R9 290X@R9 390X 1050/1325
