Nvidia pressured Oxide dev to disable certain settings in DX12 benchmark

Discussion in 'Frontpage news' started by AvengerUK, Aug 31, 2015.

  1. Turanis

    Turanis Guest

    Messages:
    1,779
    Likes Received:
    489
    GPU:
    Gigabyte RX500
  2. Asgardi

    Asgardi Guest

    Messages:
    248
    Likes Received:
    14
    GPU:
    MSI GTX 980 Ti OC
    Yes it is. But the architecture was designed before the DX12 specs even existed, so you can't really blame them. AMD got lucky because of Mantle. Anyway, like it is stated in the article, there are no AMD/Nvidia-optimized game engines on DX12 yet which would give us some real results. Personally I don't think the API is even completely ready yet, as I have not seen any complete API documentation from MS like there is for DX11.x.
     
  3. Denial

    Denial Ancient Guru

    Messages:
    14,206
    Likes Received:
    4,118
    GPU:
    EVGA RTX 3080
    I don't think AMD got lucky because of Mantle. I think AMD made their luck by developing Mantle, thinking about future workloads, and then putting their hardware in the Xbox. The last part is the most important, because whether or not you believe DX12 was developed before/during/after Mantle, it was always going to be designed around the Xbox hardware. If the roles were reversed and Nvidia was in the Xbox, DX12 may have focused more on geometry-based stuff, like tessellation and whatnot. If that were the case, people would be here bitching in the opposite direction.
     
  4. shymi

    shymi Guest

    Messages:
    152
    Likes Received:
    2
    GPU:
    EVGA RTX 3070
    If I'm not mistaken, DX12 was developed in constant communication with the AMD, Intel and Nvidia teams. Everybody knew what to expect from the API and what the tiers were.
     

  5. mR Yellow

    mR Yellow Ancient Guru

    Messages:
    1,935
    Likes Received:
    0
    GPU:
    Sapphire R9 Fury
    LOL, you can summarize this thread as follows:

    Nvidia owner: Lies, benchmark must be false.
    AMD owner: FU Nvidia - you had this coming.

    I for one hope this is true because competition is a good thing.
     
  6. Stormyandcold

    Stormyandcold Ancient Guru

    Messages:
    5,872
    Likes Received:
    446
    GPU:
    RTX3080ti Founders
    Hey, if it doesn't work, then Nvidia telling devs to turn it off for their hardware is a "good call". By extension, if it skews benchmarks, then turn it off, otherwise the benchmarks aren't comparable.

    We are witnessing the birth of DX12. Just like last time, it took well over a year for upgrading to make much difference. Sit and wait. 1st gen = no one has a working full implementation.

    DX11 performance is still where it's at, because that's what we're playing.
     
  7. Tronman

    Tronman Guest

    Messages:
    102
    Likes Received:
    0
    GPU:
    XFX 295x2
    Bahaha, that has to be the most hypocritical post I've seen on here yet. Why does everyone have their knickers in a knot over this? If it turns out GCN does async better, why is it such an issue... tessellation has always been a weak point, every architecture is going to have certain strengths. It's not the end of the world, right?
     
  8. Dazz

    Dazz Maha Guru

    Messages:
    1,010
    Likes Received:
    131
    GPU:
    ASUS STRIX RTX 2080
    That's true, both have their strengths and weaknesses. If AMD gets more of a performance boost with async, then better for AMD, but Nvidia cards are still no slouch even with it not working, or unable to keep up with the command calls in a certain game.

    It may be working in Ashes, but the other benchmark shows that after 31 calls the Nvidia cards start to struggle badly. With all the particle lighting and physics used in Ashes, I can see it going well over 31, and it might be that that's what's causing the Nvidia cards to tank.

    In all honesty I don't think Nvidia really know themselves. The first time this came about, Nvidia complained before the benchmark/alpha was out, saying FSAA was bugged; people tested both Nvidia and AMD hardware, found no visual anomalies, and both camps' cards had a 7-9% drop with it enabled, so fairly consistent really. Then Nvidia asked for async to be disabled and this has blown up in their face. I bet the next excuse is that it is working as intended, but the game is making too many calls, showing a weakness in Maxwell.

    Why do I say this? I remember an early video of the game where the developers say EVERY light source is its own, unique light rather than one light source being used for multiple objects. So you can quite easily have 200 calls, way above what Maxwell can feed.
     
  9. Lane

    Lane Guest

    Messages:
    6,361
    Likes Received:
    3
    GPU:
    2x HD7970 - EK Waterblock
    If you are looking at this thread, you need to understand that for the moment we are still trying to understand the behavior on Nvidia GPUs... so nobody should jump to any conclusion for now. The little bit of code used is not finished and has gone through different versions since the start, and the results seen so far don't give a clear picture of what is happening on Maxwell (2.0).

    We will need more analysis for the time being.

    This type of thing has nothing to do with graphics features such as tessellation or whatever; it is about how instructions are issued and scheduled at the hardware level. It's like comparing multithreaded and non-multithreaded code.

    AMD thought that graphics engines and APIs would need to support this type of parallel compute+graphics behavior in the future, the same way that, years ago, the graphics pipeline moved from fixed function to compute shaders.

    So they developed GCN with pure async compute in mind (hence the design with ACEs (Asynchronous Compute Engines) and DMA engines). GCN was presented in August 2011, and was certainly already in the works in 2008-2009.

    It's the same way that Nvidia, with Pascal, is aiming at mixed-precision FP16/FP32 computing, because they think it will be the big deal in the coming years.

    Mantle and the PS4 API already expose this model because they had the possibility to do it, so it was completely normal that they used it. Same for DX12: it would be completely stupid to keep the badly serialized style of DX11 if you can do it more efficiently.

    In fact, DX12's real advantage is how the GPU pipeline works compared to DX11. This was the basis of Mantle, it is in Vulkan, and of course it has been carried over into DX12.

    [Images: serialized vs. parallel graphics pipeline diagrams]

    As you can see, the serialized vs. parallel graphics pipeline exists by itself, but this is still not async compute.

    Now we can add async compute (or async shading):

    [Images: graphics and compute queues running concurrently with async compute / async shading]


    A little comment from Mark Cerny in 2013 about async computing..
     
    Last edited: Sep 2, 2015
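
    For readers unfamiliar with the queue model described in the post above, here is a minimal, illustrative D3D12 sketch (assuming a Windows C++ toolchain linked against d3d12.lib; this is not code from Ashes or any particular engine) of creating a graphics queue alongside a separate compute queue, which is the mechanism async compute builds on:

    // Minimal sketch: a direct (graphics) queue plus a compute queue in D3D12.
    // Work on the compute queue may overlap graphics work if the hardware and
    // driver actually execute the two queues concurrently ("async compute").
    #include <windows.h>
    #include <d3d12.h>
    #include <wrl/client.h>

    using Microsoft::WRL::ComPtr;

    int main()
    {
        ComPtr<ID3D12Device> device;
        // Create a device on the default adapter at feature level 11.0.
        if (FAILED(D3D12CreateDevice(nullptr, D3D_FEATURE_LEVEL_11_0,
                                     IID_PPV_ARGS(&device))))
            return 1;

        // The direct queue accepts graphics, compute and copy command lists;
        // a DX11-style renderer effectively funnels everything through this one queue.
        D3D12_COMMAND_QUEUE_DESC gfxDesc = {};
        gfxDesc.Type = D3D12_COMMAND_LIST_TYPE_DIRECT;
        ComPtr<ID3D12CommandQueue> gfxQueue;
        device->CreateCommandQueue(&gfxDesc, IID_PPV_ARGS(&gfxQueue));

        // A second, compute-only queue. Submitting compute work here instead of
        // the direct queue is what allows it to run alongside graphics on hardware
        // that schedules the queues in parallel (e.g. via GCN's ACEs).
        D3D12_COMMAND_QUEUE_DESC computeDesc = {};
        computeDesc.Type = D3D12_COMMAND_LIST_TYPE_COMPUTE;
        ComPtr<ID3D12CommandQueue> computeQueue;
        device->CreateCommandQueue(&computeDesc, IID_PPV_ARGS(&computeQueue));

        // A real renderer would now record ID3D12GraphicsCommandList objects,
        // submit them with ExecuteCommandLists on each queue, and synchronize
        // the two queues with ID3D12Fence objects.
        return 0;
    }

    Whether the two queues actually execute concurrently is up to the driver and hardware, which is exactly what is being argued about Maxwell in this thread.
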
  10. AsiJu

    AsiJu Ancient Guru

    Messages:
    8,806
    Likes Received:
    3,368
    GPU:
    KFA2 4070Ti EXG.v2
    Is it just me or is this whole DX12 thing spiralling into a huge hype(r)bole of its own?

    Manufacturers basically shooting each other in the foot, people debating, etc...
    I'm just hoping it doesn't mean attention isn't given where it's most needed, in other words application development and adoption of DX12.

    Because at the moment I feel that, at this rate, we'll fast-forward a couple of years and nothing will have happened in the meantime.
    Everyone will have been so busy debating that no one ever got around to actually making something out of DX12.

    Or maybe I'm just anxious to have something concrete in DX12 to run on my own. As it's been hyped to be the... I don't know, salvation of gaming?
     

  11. Redemption80

    Redemption80 Guest

    Messages:
    18,491
    Likes Received:
    267
    GPU:
    GALAX 970/ASUS 970
    I don't think there will be an issue with that, especially for DX12-only games.

    The devs will love the idea of developing for just one API, on one OS, and only having to optimise for a reasonably small number of cards.
     
  12. Turanis

    Turanis Guest

    Messages:
    1,779
    Likes Received:
    489
    GPU:
    Gigabyte RX500
    Everyone pushes their own tech specs and cards in front of the market. Some cards have some features, others have other features. All support the basic DX12 features, from 2011 until today.

    Interview with Nvidia engineer about DirectX 12
    https://www.youtube.com/watch?v=Dnn0rgDaSro
     
  13. moab600

    moab600 Ancient Guru

    Messages:
    6,658
    Likes Received:
    557
    GPU:
    PNY 4090 XLR8 24GB
    Paying a premium for Nvidia cards to get fewer features, and then they secretly try to pressure certain companies to de-optimize for AMD, or remove features that benefit AMD.

    If AMD could actually fight Nvidia, this thing would be very big and definitely put a hurt on Nvidia, and kill 900-series sales.
     
  14. Barry J

    Barry J Ancient Guru

    Messages:
    2,803
    Likes Received:
    152
    GPU:
    RTX2080 TRIO Super
    That made me laugh, it has been done really well.
     
  15. Undying

    Undying Ancient Guru

    Messages:
    25,330
    Likes Received:
    12,743
    GPU:
    XFX RX6800XT 16GB
    That sh1t never gets old. :D
     

  16. Barry J

    Barry J Ancient Guru

    Messages:
    2,803
    Likes Received:
    152
    GPU:
    RTX2080 TRIO Super
    I agree. The problem is they have already sold so many 900s that everyone will just wait for next gen; that will not help AMD.
     
  17. Undying

    Undying Ancient Guru

    Messages:
    25,330
    Likes Received:
    12,743
    GPU:
    XFX RX6800XT 16GB
    At least you know who to trust next gen. Nvidia tends to screw you over, again and again. ;)
     
  18. Barry J

    Barry J Ancient Guru

    Messages:
    2,803
    Likes Received:
    152
    GPU:
    RTX2080 TRIO Super
    Every Nvidia GPU I have owned I have been happy with, except the FX. I am really happy with the 980 Ti; it has awesome performance in every game.

    I will buy the fastest GPU I can, be that AMD or Nvidia, when I upgrade again. I see no reason not to buy Nvidia.
     
  19. moab600

    moab600 Ancient Guru

    Messages:
    6,658
    Likes Received:
    557
    GPU:
    PNY 4090 XLR8 24GB
    The situation is a catch-22. AMD are big liars as well, just less into backstabbing; their PR is the best at presenting a reality that will never happen. Heck, they admitted that Fury does NOT support the full DX12 feature set, nor does any card, despite what Nvidia and AMD said.

    Nvidia's 900-series sales set them soooo high, AMD can't touch them no matter what happens, but then again AMD has made comebacks before, so who knows...

    I think anyone with a good card like an R9 280X or GTX 780 or better should not buy any Nvidia card, and to a lesser extent any AMD card, if it's only for DX12. The situation is such a mess, we need to wait till the storm passes.
     
  20. fantaskarsef

    fantaskarsef Ancient Guru

    Messages:
    15,693
    Likes Received:
    9,572
    GPU:
    4090@H2O
    This.
     
