DirectX 12 vs DirectX 11 Performance Slides

Discussion in 'Frontpage news' started by Hilbert Hagedoorn, Dec 16, 2014.

  1. isidore

    isidore Guest

    Messages:
    6,276
    Likes Received:
    58
    GPU:
    RTX 2080TI GamingOC
    Not impressed at all. Give me direct to metal; do away with your API.
     
  2. moab600

    moab600 Ancient Guru

    Messages:
    6,658
    Likes Received:
    557
    GPU:
    PNY 4090 XLR8 24GB
    Go DX12: be even better, get adopted by everyone. DX or not, though, bad ports are still bad ports; nothing can beat that.
     
  3. Denial

    Denial Ancient Guru

    Messages:
    14,206
    Likes Received:
    4,118
    GPU:
    EVGA RTX 3080
  4. isidore

    isidore Guest

    Messages:
    6,276
    Likes Received:
    58
    GPU:
    RTX 2080TI GamingOC

  5. Serotonin

    Serotonin Ancient Guru

    Messages:
    4,578
    Likes Received:
    2,036
    GPU:
    Asus RTX 4080 16GB
    This article is way too optimistic given the history of DirectX and every big claim made with every new version. I'm sorry, I've been gaming on the PC for far too long to be this positive, especially about ANYTHING that involves Microshaft.

    DX12 does not fix shoddy ports, and it does not lure developers into making big PC exclusives again when they all cry piracy. If it even comes close to being able to create CGI, you won't see it until a console can do it. That's where the money and focus are these days. Like everything else, the PC will be able to do bigger and better but be ignored in favour of where the money is. I don't blame devs for this; you've got to make money. I blame how long it took for the PC to get this efficient and this close to the metal.

    But all these happy, quaint little articles don't change the fact that the PC leads in technology, yet that technology is taken advantage of only sparingly.
     
  6. Calmmo

    Calmmo Guest

    Messages:
    2,424
    Likes Received:
    225
    GPU:
    RTX 4090 Zotac AMP
    DX10 was supposed to improve everything but visuals, and so was DX11. Something tells me this is going to be more of the same thin air.
     
  7. Cryio

    Cryio Master Guru

    Messages:
    683
    Likes Received:
    299
    GPU:
    Nitro+ 7900 XTX
    DirectX 9 showed us everything it could be, even running into unreasonable overhead occasionally.

    DirectX 10 would've been wonderful if devs had actually tried to use it properly and weren't scared of Windows Vista. In Far Cry 2, Assassin's Creed 1 and Devil May Cry 4, for example, it provided better performance. When used properly, it could lower video driver overhead, offer better HDR (examples in AC1, DMC4, Crysis 1/Warhead), better motion blur as seen in Crysis 1/2, Metro 2033 and Lost Planet 1, and better shadows as seen in Assassin's Creed 1. It also enabled easier integration of anti-aliasing in various game engines, as opposed to the brute force needed for DX9 engines. Various effects could also be accelerated by DirectCompute and the geometry shader (basically various effects and the instancing of the same object, all accelerated by the GPU), but most games (maybe all?) didn't do it. No, wait: the fur in Lost Planet 1 and the motion blur in Crysis 1/Warhead and the Metro games used DirectCompute. DirectCompute was used a lot in Battlefield 3/4 too.

    DirectX 11 enabled better multithreaded CPU utilization, tessellation, and an even better version of DirectCompute that could be used even on DirectX 10 GPUs. In fact, two of the three new things DirectX 11 introduced could be used by DirectX 10 GPUs. Unfortunately, this mostly never happened.
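
    As a rough illustration of that multithreading point (my addition, not from the post): DX11's deferred contexts let worker threads record command lists that only the immediate context submits. DrawSceneChunk here is a hypothetical placeholder for whatever draw work a worker records.

    #include <d3d11.h>
    #include <wrl/client.h>
    using Microsoft::WRL::ComPtr;

    void DrawSceneChunk(ID3D11DeviceContext* ctx, int chunk); // hypothetical draw work

    // Runs on a worker thread: record draw calls into a deferred context.
    ComPtr<ID3D11CommandList> RecordChunk(ID3D11Device* device, int chunk)
    {
        ComPtr<ID3D11DeviceContext> deferred;
        device->CreateDeferredContext(0, &deferred);

        DrawSceneChunk(deferred.Get(), chunk);   // state setting + draws, no GPU submission yet

        ComPtr<ID3D11CommandList> commandList;
        deferred->FinishCommandList(FALSE, &commandList);
        return commandList;                      // handed back to the main thread
    }

    // Runs on the main thread: only the immediate context actually talks to the driver/GPU.
    void Submit(ID3D11DeviceContext* immediate, ID3D11CommandList* commandList)
    {
        immediate->ExecuteCommandList(commandList, FALSE);
    }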

    AFAIK there are only two games that enabled the DX11 code path (and thus the CPU optimization part) on DX10 GPUs: Alien vs Predator 2010 and another game I forget. But better ambient occlusion and contact-hardening shadows (basically both DirectCompute effects), which could have run on DX10 GPUs, were restricted to DX11, Shader Model 5 GPUs. For the whole lifespan of DirectX 10, nobody really bothered to exploit its capabilities; they only got used once DirectX 11 games *started* using them.

    For some reason, nobody is using DirectX 11.2's most interesting feature: sharing system RAM with the GPU on the desktop. Basically, the GPU can use system RAM as VRAM. It's essentially how the Xbox One and PS4 operate, with RAM and VRAM shared in a single 8 GB pool.
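
    (Aside, my addition:) Assuming the feature meant here is D3D11.2 tiled resources, which is the mechanism usually cited for backing GPU resources from a larger memory pool, an app would check whether the hardware even exposes it roughly like this; "device" is assumed to be an already-created ID3D11Device.

    #include <windows.h>
    #include <d3d11_2.h>

    // Returns true if the driver reports any tiled-resources tier at all.
    bool SupportsTiledResources(ID3D11Device* device)
    {
        D3D11_FEATURE_DATA_D3D11_OPTIONS1 opts = {};
        if (FAILED(device->CheckFeatureSupport(D3D11_FEATURE_D3D11_OPTIONS1,
                                               &opts, sizeof(opts))))
            return false;
        return opts.TiledResourcesTier != D3D11_TILED_RESOURCES_NOT_SUPPORTED;
    }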

    DirectX 12 is DirectX 11 with even better CPU usage and therefore lower driver overhead. Mostly the same thing as Mantle for GCN GPUs. DirectX 11.3 will likely be released for Windows 7/8/10, and DirectX 12 for Windows 8 and 10.

    DX11.3 = easy, default, automatic CPU optimization.
    DX12 = more complex, more in-depth CPU optimization (better framerates and lower overhead because of more detailed tuning).
     
    Last edited: Dec 16, 2014
  8. HonoredShadow

    HonoredShadow Ancient Guru

    Messages:
    4,326
    Likes Received:
    21
    GPU:
    msi 4090
    At least the Xbox will use it too, which will help.
     
  9. Reddoguk

    Reddoguk Ancient Guru

    Messages:
    2,661
    Likes Received:
    593
    GPU:
    RTX3090 GB GamingOC
    In the near future a CPU should never bottleneck a GPU; that is what DX12 is trying to achieve. Both Nvidia and AMD want CPUs out of the way entirely, and the GPU will do a lot better once the API matures and gets left alone to do its work.

    The less a GPU and CPU need to talk, the better. From what I've read, both Mantle and DX12 have tried super hard to remove the CPU from the gaming equation.
     
  10. blkspade

    blkspade Master Guru

    Messages:
    646
    Likes Received:
    33
    GPU:
    Leadtek Nvidia Geforce 6800 GT 256MB
    Well, Mantle has basically succeeded in doing that. DX12 only promises that it will do the same.
     

  11. typeAlpha

    typeAlpha Guest

    Messages:
    280
    Likes Received:
    1
    GPU:
    ATi 7850 2GB
    When are the first DX12 cards due, the ones that properly support it?
     
  12. Denial

    Denial Ancient Guru

    Messages:
    14,206
    Likes Received:
    4,118
    GPU:
    EVGA RTX 3080
    Maxwell GM200+ cards already support it, including the hardware-specific features.
     
  13. typeAlpha

    typeAlpha Guest

    Messages:
    280
    Likes Received:
    1
    GPU:
    ATi 7850 2GB
    So that's the upcoming 980 Ti and the new Titan card? What about 970/980 that's out now?

    I'm assuming the AMD 3XX series will fully support it too.
     
  14. Denial

    Denial Ancient Guru

    Messages:
    14,206
    Likes Received:
    4,118
    GPU:
    EVGA RTX 3080
    Sorry, I should have said GM2xx. The 980/970 both support the hardware functions of DX12/11.3.

    http://www.anandtech.com/show/8526/nvidia-geforce-gtx-980-review/4
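
    (My addition, using the D3D12 API as it later shipped:) a rough sketch of how an application would query for the DX12/11.3 hardware features being discussed here, such as conservative rasterization and rasterizer-ordered views (ROVs); "device" is assumed to be an already-created ID3D12Device.

    #include <windows.h>
    #include <d3d12.h>

    bool SupportsDx11_3HardwareFeatures(ID3D12Device* device)
    {
        D3D12_FEATURE_DATA_D3D12_OPTIONS opts = {};
        if (FAILED(device->CheckFeatureSupport(D3D12_FEATURE_D3D12_OPTIONS,
                                               &opts, sizeof(opts))))
            return false;

        bool conservativeRaster =
            opts.ConservativeRasterizationTier != D3D12_CONSERVATIVE_RASTERIZATION_TIER_NOT_SUPPORTED;
        bool rasterOrderedViews = (opts.ROVsSupported == TRUE);
        return conservativeRaster && rasterOrderedViews;
    }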
     
  15. Cryio

    Cryio Master Guru

    Messages:
    683
    Likes Received:
    299
    GPU:
    Nitro+ 7900 XTX
    All GeForce 400/500/600/700/800/900 cards will support DX12. Only the 900 series will support it 100%, since the previous generations skipped some functionality, for whatever reason.

    AMD HD 5000/6000/7000/8000/R 200 will 100% support DX12.
     

  16. Groomer

    Groomer Guest

    Messages:
    11
    Likes Received:
    0
    GPU:
    .
    Brad Wardell is the website "owner" not "woner" of littletinyfrogs.com. :)

    The data in those images shows CPU utilization and thread distribution in 3DMark running under Direct3D 11 versus the same test ported to Direct3D 12.

    The same data (in slightly different images) was posted about 9 months ago :cops:, have a read for yourself:
    DirectX Developer Blog and Futuremark Press Release :grad:
     
  17. Bladeforce

    Bladeforce Active Member

    Messages:
    55
    Likes Received:
    5
    GPU:
    nVidia Titan
    To be honest, I've had enough of DirectX: a proprietary API that has zero use for me today as a developer.
     
  18. h4rm0ny

    h4rm0ny Guest

    Messages:
    275
    Likes Received:
    0
    GPU:
    R9 285 / 2GB
    No, it really isn't. If you consider DX11 to be a "normal" graphics API (it's all relative, so let's use that as our comparison), then DX12 is NOT a further refinement of DX11. It's not "the next version". It's a different approach. The purpose of DX11 is to abstract away the hardware and provide something manageable for programmers to work with, which is what it does.

    The purpose of DX12 is, like Mantle, to give developers much finer control over the hardware with a lot more low-level tools. You can't say that DX11 should have been what DX12 is, because they serve different markets. Most people aren't going to throw out DX11 as soon as DX12 becomes available, because DX12 is MUCH MORE COMPLEX. You're going to see DX12 used by the big players who have both the resources and the need to optimize game engines to their maximum potential. Much of the rest of the programming world will carry on with DX11, because it's good enough and they don't need a programmer with two brains to handle it.

    That's why MS are not replacing DX11 with DX12, but offering DX11.3 alongside DX12.

    We know what this technology is; we've seen it in Mantle. I won't say it's a known factor, but it's not vapourware!
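
    To make the "much more complex" point concrete, here's a trimmed sketch (my addition, using the D3D12 API as it later shipped) of just the up-front state DX12 makes the application describe explicitly in a pipeline state object, work that DX11 handles inside the runtime/driver. "device", "rootSignature" and the shader bytecode are assumed to exist already.

    #include <windows.h>
    #include <d3d12.h>
    #include <wrl/client.h>
    using Microsoft::WRL::ComPtr;

    ComPtr<ID3D12PipelineState> CreateSimplePipeline(
        ID3D12Device* device,
        ID3D12RootSignature* rootSignature,       // resource-binding layout, also app-defined
        D3D12_SHADER_BYTECODE vsBytecode,
        D3D12_SHADER_BYTECODE psBytecode)
    {
        // Everything that DX11 would let you bind piecemeal at draw time is
        // baked into one immutable object here.
        D3D12_GRAPHICS_PIPELINE_STATE_DESC desc = {};
        desc.pRootSignature           = rootSignature;
        desc.VS                       = vsBytecode;
        desc.PS                       = psBytecode;
        desc.RasterizerState.FillMode = D3D12_FILL_MODE_SOLID;
        desc.RasterizerState.CullMode = D3D12_CULL_MODE_BACK;
        desc.BlendState.RenderTarget[0].RenderTargetWriteMask = D3D12_COLOR_WRITE_ENABLE_ALL;
        desc.SampleMask               = 0xFFFFFFFFu;
        desc.PrimitiveTopologyType    = D3D12_PRIMITIVE_TOPOLOGY_TYPE_TRIANGLE;
        desc.NumRenderTargets         = 1;
        desc.RTVFormats[0]            = DXGI_FORMAT_R8G8B8A8_UNORM;
        desc.SampleDesc.Count         = 1;

        ComPtr<ID3D12PipelineState> pso;
        device->CreateGraphicsPipelineState(&desc, IID_PPV_ARGS(&pso));
        return pso;
    }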
     
  19. Goldie

    Goldie Guest

    Messages:
    533
    Likes Received:
    0
    GPU:
    evga 760 4gb sli
    This'll be needed for future games, no doubt, as we move nearer to photorealism.
     
  20. Ainur

    Ainur Guest

    Messages:
    1
    Likes Received:
    0
    GPU:
    8GB
    I usually like this site, but this article is disappointing - high on 'hype' and low on correctness...

    I've been working in the CG industry for over 8 years now, and have been watching closely, with anticipation, how real-time graphics have been improving.

    I know how movies like Jurassic Park and Phantom Menace were made, and the next-gen real-time graphics will not come anywhere near that level of realism.

    I recently bought a GTX 970 and battle-tested it a bit. Very cool stuff, but still nowhere near film-quality capability.
    VXGI is the most interesting/promising technology, and it is still a very rough approximation of what is (and has been) being done in film.

    But it's not 'just' about ray-tracing algorithms and data structures (global illumination didn't even exist when Jurassic Park was made...).
    It's about the complexity of the shaders and the rendering engine's architecture.

    Movies, for the most part, use a render engine called PRMan (PhotoRealistic RenderMan from Pixar), which subdivides geometry into micro-polygons on each frame (think: progressive tessellation down to the sub-pixel level). The shaders written for it in movies are on the scale of hundreds of thousands of lines of code; no DX shaders are in any danger of getting anywhere near that level of complexity/flexibility any time soon...

    DX12 is looking awesome, though, for what it targets, and if anything it is playing catch-up with console APIs... Mantle is NOT a console-oriented API; it's for PC, but it was meant to bring console-like levels of control to the PC. But DX12 is not Mantle... It may have taken many concepts and approaches and implemented similar layers of abstraction in some areas, but Mantle is a different beast, because it doesn't have a driver level: the driver is built into the API itself. If anything, it is similar to CUDA. If NVIDIA had implemented the same API interface as Mantle, with the driver built in as AMD did, then you could compare the two, but not with DX12.
    What Microsoft ended up doing is designing a new driver architecture for Windows 10 (which will probably be back-ported to 8 down the line) that removes a lot of concurrency-validation/optimization layers from the driver. What this means is that code which in previous DX versions was written by the driver teams at AMD/NVIDIA now has to be written by application/(game-engine) developers. It allows for more control, but it also opens the door to a whole new class of potential bugs...

    I wouldn't say that it would somehow magically enable film-quality render engines in games; that is silly to even suggest... What it WOULD allow is finer-grained optimization and lower driver/CPU overhead for command generation (GPU assembly compilation), as application developers can now construct their own command lists manually, and do so in a way that is aware of the game engine's execution environment, so it "could" be more efficient.
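
    A sketch of that "application builds its own command lists" model (my addition, using the D3D12 API as it later shipped): even transitioning the back buffer between states, something DX11 drivers tracked for you, is now the application's job via explicit resource barriers. "device", "queue", "backBuffer" and "pso" are assumed to already exist.

    #include <windows.h>
    #include <d3d12.h>
    #include <wrl/client.h>
    using Microsoft::WRL::ComPtr;

    void RecordAndSubmitFrame(ID3D12Device* device, ID3D12CommandQueue* queue,
                              ID3D12Resource* backBuffer, ID3D12PipelineState* pso)
    {
        ComPtr<ID3D12CommandAllocator> allocator;
        device->CreateCommandAllocator(D3D12_COMMAND_LIST_TYPE_DIRECT,
                                       IID_PPV_ARGS(&allocator));

        ComPtr<ID3D12GraphicsCommandList> cmdList;
        device->CreateCommandList(0, D3D12_COMMAND_LIST_TYPE_DIRECT,
                                  allocator.Get(), pso, IID_PPV_ARGS(&cmdList));

        // The app, not the driver, declares when the back buffer changes state.
        D3D12_RESOURCE_BARRIER barrier = {};
        barrier.Type = D3D12_RESOURCE_BARRIER_TYPE_TRANSITION;
        barrier.Transition.pResource   = backBuffer;
        barrier.Transition.Subresource = D3D12_RESOURCE_BARRIER_ALL_SUBRESOURCES;
        barrier.Transition.StateBefore = D3D12_RESOURCE_STATE_PRESENT;
        barrier.Transition.StateAfter  = D3D12_RESOURCE_STATE_RENDER_TARGET;
        cmdList->ResourceBarrier(1, &barrier);

        // ... record draws here (getting states or ordering wrong is now an app bug) ...

        barrier.Transition.StateBefore = D3D12_RESOURCE_STATE_RENDER_TARGET;
        barrier.Transition.StateAfter  = D3D12_RESOURCE_STATE_PRESENT;
        cmdList->ResourceBarrier(1, &barrier);

        cmdList->Close();
        ID3D12CommandList* lists[] = { cmdList.Get() };
        queue->ExecuteCommandLists(1, lists);   // the only point where the GPU gets work
    }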

    But it's NOT as if it allows new kinds of stuff. If anything, DX12 is the first major version that DOESN'T add any new capability (DX11 added tessellation/compute shaders, DX10 added geometry shaders, etc.), because basically the list of essential capabilities is already done and provided for. Ray tracing is a software solution that can be implemented in compute shaders, and it has already been done to varying degrees in DX11 games/engines. For example, full-scene ambient occlusion (not screen-space) is a spherical-harmonics algorithm that uses ray tracing. The contact-hardening shadows in the recent UE4 version use a distance-field algorithm that does cone tracing (a kind of ray tracing). And there are other examples... The big problem with ray tracing is actually not the computation but the memory-access pattern (which is random and all over the place...). It needs a huge, freely accessible chunk of RAM that can be random-accessed very fast, extremely fast if you want real-time, and even the latest GDDR5 VRAM modules are nowhere near as fast as would be required. That is why 'acceleration structures' like dynamic voxelization (as in NVIDIA's VXGI) need to be generated before any ray is traced, and it's going to take many more years before this can be fast enough at scale in large games.

    But films didn't even need ray tracing to achieve photo-realism. They did need tons of RAM and textures, and complex code paths running on a flexible processing unit, in ways that by nature don't always cater to parallelism very well... (read: "can't benefit much from a GPU").
    A GPU 'can' accelerate 'some' of the tasks of an offline-style render engine, and 'GPU-accelerated' render engines have been popping up like mushrooms of late, but at best they all provide 'interactive' progressive rendering of isolated/truncated scenes, which is very far from the 'full/final-frame, all-real-time, full/heavy-scene' kind of fantasy... We have at least a decade left before we get THAT...

    In short, don't expect DX12 to give you film-quality games - the statements made in that direction in the article are pure 'hype' and are grossly uninformed and misleading.

    If anything, for film-quality stuff, even DX12 serves more as a barrier than anything else - you need to write a 'complete' rendering-pipeline from scratch in CUDA/OpenCL+OpenRT in order to come close to something like that - and when that happens, it would be the end of DX/OpenGL/'Any-Graphics-API' forever... They will at that point no longer be necessary.
     
    Last edited: Dec 17, 2014
