Quick comparison: GeForce 378.66 versus 378.78 DirectX 12 performance

Discussion in 'Frontpage news' started by Hilbert Hagedoorn, Mar 9, 2017.

  1. Agonist

    Agonist Ancient Guru

    Messages:
    4,287
    Likes Received:
    1,312
    GPU:
    XFX 7900xtx Black
    DX11 launch games were the same. A lot of them only had a DX11 API path just so they could offer AA with deferred lighting. After about a year of DX11 games, they really started to shine.

    Sadly it will take time, but I think DX12 will be around for a long time. We are seven years into DX11 titles now.

    I didn't expect anything mind-blowing graphics-wise from games, and honestly didn't expect them to run better either.

    DX11 didn't really run better than DX10 or DX9 games. In fact, most of the time DX9 ran better because fewer graphics features were being used.
     
  2. Redemption80

    Redemption80 Guest

    Messages:
    18,491
    Likes Received:
    267
    GPU:
    GALAX 970/ASUS 970
    Hitman is the only game that performs poorly on Nvidia hardware; everything else is probably as good as it can get.

    DX12/Vulkan are still in the beta stages and not hugely exciting for Nvidia owners, but AMD owners get to have more fun with them.
     
  3. -Tj-

    -Tj- Ancient Guru

    Messages:
    18,102
    Likes Received:
    2,606
    GPU:
    3080TI iChill Black
    Well,
    DX9 was limited to ~9-11k draw calls,
    DX11 up to ~18k draw calls,
    OpenGL 4.5 to ~28k calls,
    DX12 up to ~50k or more, apparently up to ~100k draw calls.

    The higher the call count, the fewer driver/API bottlenecks, potentially allowing richer worlds and more interaction with them. That's what the 3DMark API overhead benchmark shows. But in reality it doesn't deliver; I don't know why, but I think it's mostly because of excessive post-process FX crippling the GPU pipeline.

    E.g. overdone DOF effects, brutal shadow techniques like HFTS and PCSS (both, I'd say, there to sell NV GPUs), excessive tessellation, more post-process bloom, blur on top, etc.

    If all that was made with optimization in mind it would be a lot different, but hey, you've got to make effects that cripple the GPU to sell new ones. Hypocritical development at best.
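    The draw-call ceilings quoted above can be read as simple frame-budget arithmetic: at 60 fps you have roughly 16.7 ms of CPU time per frame, and each API's submission path costs some amount per call. A rough sketch (the per-call costs below are illustrative assumptions chosen to reproduce the quoted ballparks, not measured values):

```python
# Rough frame-budget arithmetic behind the quoted draw-call ceilings.
# The per-call CPU costs are illustrative assumptions, not measurements.

FRAME_BUDGET_MS = 1000 / 60  # ~16.7 ms of CPU time per frame at 60 fps

def max_draw_calls(per_call_overhead_us):
    """How many draw calls fit in one frame if CPU submission
    costs per_call_overhead_us microseconds per call."""
    return int(FRAME_BUDGET_MS * 1000 / per_call_overhead_us)

# Hypothetical per-call submission costs for each API:
for api, cost_us in [("DX9", 1.6), ("DX11", 0.9), ("DX12", 0.17)]:
    print(f"{api}: ~{max_draw_calls(cost_us)} draw calls per 60 fps frame")
```

    With these assumed costs the model lands near the figures in the post: ~10k for DX9, ~18k for DX11, ~98k for DX12, which is why lower per-call driver overhead translates directly into a higher draw-call budget.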
     
  4. Redemption80

    Redemption80 Guest

    Messages:
    18,491
    Likes Received:
    267
    GPU:
    GALAX 970/ASUS 970
    Do games ever need that many draw calls?

    3DMark looked impressive with the API test, but it's not relevant for the majority of games. Even RTS games don't benefit, and in some cases they regress under DX12 on Nvidia hardware.
     

  5. Denial

    Denial Ancient Guru

    Messages:
    14,207
    Likes Received:
    4,121
    GPU:
    EVGA RTX 3080
    I mean, all those techniques enhance image quality. What would you rather have them do? Just increase the polygon count to 500 billion? Let framerates go up to 10,000 fps?

    And I know you're probably going to respond with "improve lighting with raytracing" or some nonsense, which is completely outside the realm of current-generation performance. All the HBAO/GI/HFTS techniques you're describing as "fx to cripple the GPU" are optimized systems designed to approximate raytracing, and in some cases they implement raytracing techniques.

    There have also been giant strides in material shaders and such; things like photogrammetry significantly improve the look of materials with very little performance impact.

    So I'm really not sure what else you'd want them to do.
     
  6. Phragmeister

    Phragmeister Guest

    Messages:
    1,895
    Likes Received:
    316
    GPU:
    MSI GTX 980 4GB
    I wonder how many people are going to be suckered into getting Windows 10 over this.
     
  7. -Tj-

    -Tj- Ancient Guru

    Messages:
    18,102
    Likes Received:
    2,606
    GPU:
    3080TI iChill Black
    Yes they are, but there is a difference between an effect done right and one that's taxing on purpose. And that's what most FX are, with some exceptions: e.g. id Tech 6 engine effects, Unreal Engine up to a certain point, and a few more. The Snowdrop engine is OK too; granted, The Division was crippled compared to the initial showcase release, but I guess they had to drop graphics detail so those "fancy NV shadows" wouldn't take up too much of the graphics pipeline.
    Imo the default shadows at high look good enough, if not better at times, compared to both PCSS and HFTS combined.

    I would like to see both richer worlds and, of course, better detail. I don't care about accurate raytracing; an approximation is enough as long as it still looks OK, and I'm 100% sure it can be done in an effective way.



    Another good example was the DOF in Dead Rising 3 or Metro 2033. In both cases it was made to cripple the GPU on purpose, not because it somehow looks more advanced or accurate. Metro 2033 Redux fixed its DOF; I wonder why, heh.


    We could have had CGI-mimicking graphics 2-3 years ago, no problem, but no. They'd rather push "new graphics techniques" that tax the GPU more while still looking like some 2010-or-older game. They showcased Unreal's Samaritan demo in 2012 running on a single mid-range GK104, for crying out loud, and fast forward five years later... You see where I'm going with this. Batman: Arkham Knight or MKX is a small glimpse of that, but still nowhere near.
     
    Last edited: Mar 10, 2017
  8. Robbo9999

    Robbo9999 Ancient Guru

    Messages:
    1,858
    Likes Received:
    442
    GPU:
    RTX 3080
    Just played some Titanfall 2 on this driver with the fps counter active, and fps seemed and felt higher. A few days earlier I had reduced Titanfall 2's graphics settings to hit 144 fps more regularly, so I haven't played all the maps on the old vs. new driver under the new settings, but unscientifically I think this driver brings some improvements in this game; it seems pegged at the refresh rate more often.

    I have compared synthetics & game benchmarks between previous hotfix driver and this latest one, and all were the same:
    -3DMark Firestrike
    -3DMark Timespy
    -3DMark11
    -F1 2012
    -F1 2015
    -Batman Arkham Knight
     
    Last edited: Mar 10, 2017
  9. tensai28

    tensai28 Ancient Guru

    Messages:
    1,555
    Likes Received:
    416
    GPU:
    rtx 4080 super
    Sorry the drivers didn't work out for you. I am seeing large gains in Ashes of the Singularity and Rise of the Tomb Raider, both of which are CPU-heavy games.
     
  10. tensai28

    tensai28 Ancient Guru

    Messages:
    1,555
    Likes Received:
    416
    GPU:
    rtx 4080 super
    Because DX12 is supposed to help with CPU bottlenecks, and he uses a CPU where he would be the least bottlenecked, then criticizes the results. :stewpid:
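
    The point above can be put as a toy model: frame rate is capped by whichever of CPU submission or GPU rendering is slower, so driver-side CPU savings only show up when the CPU is the bottleneck. All timings below are made-up illustrative numbers:

```python
# Toy model: fps is limited by the slower of CPU submission and GPU render,
# so cutting CPU cost (e.g. via DX12 or a driver update) only helps
# when the game is CPU-bound. Timings are illustrative assumptions.

def fps(cpu_ms, gpu_ms):
    """Frame rate capped by the slower of the two per-frame costs."""
    return 1000 / max(cpu_ms, gpu_ms)

# Fast CPU: GPU-bound, so shaving CPU time changes nothing.
print(fps(cpu_ms=5, gpu_ms=10))   # 100 fps
print(fps(cpu_ms=4, gpu_ms=10))   # still 100 fps

# Slow CPU: CPU-bound, so the same CPU saving is a real gain.
print(fps(cpu_ms=14, gpu_ms=10))  # ~71 fps
print(fps(cpu_ms=11, gpu_ms=10))  # ~91 fps
```

    Which is exactly why testing a low-overhead API on a top-end CPU tends to understate the gains a slower CPU would see.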
     

  11. Redemption80

    Redemption80 Guest

    Messages:
    18,491
    Likes Received:
    267
    GPU:
    GALAX 970/ASUS 970
    I'm never going to criticise graphical additions; the best image quality is a combination of all these small things.
    The Samaritan demo itself is full of these sorts of post-processing effects, and it's all scripted instead of gameplay; modern games have already passed it.

    So... has anyone tested these with Hitman on older Nvidia hardware, or am I going to have to delete some games and test myself?
     
  12. pharma

    pharma Ancient Guru

    Messages:
    2,494
    Likes Received:
    1,194
    GPU:
    Asus Strix GTX 1080
  13. Redemption80

    Redemption80 Guest

    Messages:
    18,491
    Likes Received:
    267
    GPU:
    GALAX 970/ASUS 970
  14. chronek

    chronek Guest

    Messages:
    184
    Likes Received:
    3
    GPU:
    Geforce 980 GTX 4GB gddr5
    So... is DirectX 12 on Nvidia hardware still mostly emulated in software?
     
  15. HeavyHemi

    HeavyHemi Guest

    Messages:
    6,952
    Likes Received:
    960
    GPU:
    GTX1080Ti
    Why do you say that?
     

  16. GeniusPr0

    GeniusPr0 Maha Guru

    Messages:
    1,440
    Likes Received:
    109
    GPU:
    Surpim LiquidX 4090
    Because of gains via the graphics driver in a low-level API. Staying out of this one, lol.
     
  17. PrMinisterGR

    PrMinisterGR Ancient Guru

    Messages:
    8,128
    Likes Received:
    971
    GPU:
    Inno3D RTX 3090
    No GPU is emulating anything in software. NVIDIA's scheduler statically partitions the GPU at the SM level for graphics/compute/copy tasks between draw calls; AMD does it at the SIMD level and schedules in hardware.

    They both follow the graphics/compute/copy engine paradigm of DX12, just with different approaches. NVIDIA's is more efficient, but harder to use with a lot of different programs running on the GPU; AMD's is harder to fully utilize, but better once you have a lot of small programs running on the GPU.

    Where "programs" means any kind of compute/graphics/copy task running concurrently.
     
  18. chronek

    chronek Guest

    Messages:
    184
    Likes Received:
    3
    GPU:
    Geforce 980 GTX 4GB gddr5
    I was just wondering where that performance increase comes from... On 9xx Nvidia cards, DirectX 12 conservative depth and SAD4 were emulated in software, so driver-side software changes could impact performance.
     
  19. Hussnash

    Hussnash Guest

    Messages:
    20
    Likes Received:
    0
    GPU:
    GTX 1080
    Do you have a source for any of this?
     
  20. Hussnash

    Hussnash Guest

    Messages:
    20
    Likes Received:
    0
    GPU:
    GTX 1080
    @hilbert

    I'm really confused by this quick review you did here... specifically this.

    It makes no sense whatsoever in light of your conclusion: you basically ran a test to validate the claims NV made, but you tested in a completely different context.

    NV didn't make ANY claims whatsoever about a performance improvement over the last WHQL.

    This is, for all intents and purposes, a futile test. If you had said "hey, we are going to test performance improvements between 378.78 and 378.66," that would have been fine, but then you can't draw any conclusions whatsoever about the claims Nvidia made.

    It's very easy to test: use the old driver, benchmark; install the new driver, benchmark. Game updates have absolutely nothing to do with it, because you're testing the latest game binary with both the older and newer drivers.

    What next? Say Nvidia claims a 10% improvement from a core OC on the 1080 Ti (based on the FE); would you then test a high-end AIB card that's already at 2 GHz and conclude that Nvidia lied and you only get 2%?

    It's utterly pointless.
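
    The methodology described above (same game binary, benchmark on the old driver, then on the new one) reduces to a simple comparison of averages. A minimal sketch, where the fps samples are made-up placeholder numbers, not real results:

```python
# A minimal sketch of the A/B driver methodology described above:
# benchmark the same game binary on each driver and compare averages.
# The fps samples are hypothetical placeholders, not measured data.
from statistics import mean

old_driver_fps = [61.2, 60.8, 61.5]   # runs on 378.66 (hypothetical)
new_driver_fps = [63.0, 62.7, 63.4]   # runs on 378.78 (hypothetical)

def pct_change(old, new):
    """Percentage improvement of the new-driver average over the old."""
    return (mean(new) - mean(old)) / mean(old) * 100

print(f"Improvement: {pct_change(old_driver_fps, new_driver_fps):.1f}%")
```

    Running several passes per driver and comparing means keeps game updates out of the equation, since both drivers see the identical binary.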
     

Share This Page