Review: Ashes of Singularity: DX12 Benchmark II with Explicit Multi-GPU mode

Discussion in 'Frontpage news' started by Hilbert Hagedoorn, Feb 24, 2016.

  1. Denial

    Denial Ancient Guru

    Messages:
    14,207
    Likes Received:
    4,121
    GPU:
    EVGA RTX 3080
Uh, you average 50 and the benchmark shows an average of 56 with the same card. I'm not saying the benchmark is or isn't bs, but how does getting 6 frames less mean the benchmark is bs? Especially considering it's using a different processor.
     
  2. Stormyandcold

    Stormyandcold Ancient Guru

    Messages:
    5,872
    Likes Received:
    446
    GPU:
    RTX3080ti Founders
The most important thing to remember for this game is that both DX11 and DX12 look identical.
     
  3. semitope

    semitope Guest

    Messages:
    36
    Likes Received:
    0
    GPU:
    iGPU
Please link to the source that said the above. Async is not being "abused" from what I have read. And this is not far more than any other game will have, because other games will have more.

    Already confirmed to be used in hitman dx12, gears of war ultimate and probably gears of war 4. Most likely in the next battlefield, possibly quantum break.

Can't be abused by just using it, because the only result is a benefit to AMD users. If you are mad then maybe buy AMD. Nvidia will just have around the same performance with or without it. Though I suspect Nvidia will keep losing performance with DX12 itself.

This whole async compute CUDA thing is silly. It's not about game graphics; it's a compute-only thing. Async compute in DX12 is running graphics and compute at the same time, and to me this is simply the natural way to do it. Why on earth lock up the graphics queue OR the compute queue when games have both? If one GPU can do both at the same time, then let it. If Nvidia did not build their GPUs like that, they can run everything on the graphics queue just fine. No reason to hold back AMD users just because they have superior designs in their purchases. That's what already happens to PC because of consoles.
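To picture what "running graphics and compute at the same time" means in practice, here is a minimal D3D12-style sketch (my own illustration, not anything from Oxide's code; the function and variable names are made up) of creating a separate compute queue next to the usual graphics queue:

    Code:
    // Minimal sketch (illustrative only): one direct/graphics queue and one
    // compute queue on the same ID3D12Device. Work submitted to the compute
    // queue can overlap with graphics work on hardware that supports it.
    #include <d3d12.h>
    #include <wrl/client.h>
    using Microsoft::WRL::ComPtr;

    void CreateQueues(ID3D12Device* device,
                      ComPtr<ID3D12CommandQueue>& graphicsQueue,
                      ComPtr<ID3D12CommandQueue>& computeQueue)
    {
        D3D12_COMMAND_QUEUE_DESC desc = {};
        desc.Type = D3D12_COMMAND_LIST_TYPE_DIRECT;   // graphics + compute + copy
        device->CreateCommandQueue(&desc, IID_PPV_ARGS(&graphicsQueue));

        desc.Type = D3D12_COMMAND_LIST_TYPE_COMPUTE;  // compute + copy only
        device->CreateCommandQueue(&desc, IID_PPV_ARGS(&computeQueue));
    }
    // Recorded command lists are then submitted to whichever queue fits, e.g.
    // computeQueue->ExecuteCommandLists(1, &computeList). Whether the two queues
    // actually execute concurrently is up to the driver and the hardware.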

And I will say it again, when Nvidia does support async all this silly complaining about it will stop. Suddenly it will be obviously great. Right now, because Nvidia is left behind, some people do not realize how awesome it is. FREE SSAO, lighting effects etc etc etc etc etc. (or at least at a much lower performance cost)

Unlikely. DX11 might mimic the effects well, though.
     
    Last edited: Feb 27, 2016
  4. cowie

    cowie Ancient Guru

    Messages:
    13,276
    Likes Received:
    357
    GPU:
    GTX
NV left behind? That's too funny, man.


If I even play this game (like if I buy something and get it free... and don't give away the codes on the forum) it will be in DX11, and even my 980 will have better numbers than a Fury X... hell, even the 290X gets as good fps as the Fury X.

Hate on me all you want, I have no use for W10 at all... I am so old school it will be a while before they can hype up an API improvement enough to get me to even raise an eyebrow. It all works pretty damn good for me in DX9-DX11, so sorry M$
     

  5. Redemption80

    Redemption80 Guest

    Messages:
    18,491
    Likes Received:
    267
    GPU:
    GALAX 970/ASUS 970
Yeah, if there is any real-life difference (and I do not believe there will be any) then this will be the only game anyone will see it in.

    AMD may have input at the end with other games, but this is the only game they have been involved with since pretty much the start.
     
  6. GeniusPr0

    GeniusPr0 Maha Guru

    Messages:
    1,440
    Likes Received:
    109
    GPU:
    Surpim LiquidX 4090
    It's not being abused. It's not even fully taxing the 8 ACE units that the 290/390/Fury/Nano have.

They wanted to max out what is possible with DX12; they've openly stated this. It's not their fault nVIDIA is weak at it. nVIDIA has access to optimization via the game's source code.

It doesn't matter if CUDA could even do it better. The game's algorithms are neutral and aren't optimized towards one vendor or the other. CUDA needs a completely different path, which nVIDIA would/should have to pay for.
     
  7. Carfax

    Carfax Ancient Guru

    Messages:
    3,972
    Likes Received:
    1,462
    GPU:
    Zotac 4090 Extreme
    That guy Mahigan is undoubtedly a shill for AMD so I take whatever he says with a mouthful of salt.

He's just parroting what that guy from Beyond3D is saying, but doesn't know anything for a fact.

    The only people that have real answers are the NVidia software engineers, and until they release the drivers with AC enabled, we won't know for sure what the deal is.

    Different CPU, different platform.
     
  8. mR Yellow

    mR Yellow Ancient Guru

    Messages:
    1,935
    Likes Received:
    0
    GPU:
    Sapphire R9 Fury
DX12 isn't automatically going to mean better-looking games.
DX12 means better-optimised games. It gives developers better tools to get direct access to the GPU. Once devs embrace DX12 we will see great improvements and hopefully better-coded games.

DX12 is meant to make devs' lives easier. One standard, but it seems that some HW is lacking.

    Only time will tell who's got the better architecture.
     
  9. Keesberenburg

    Keesberenburg Master Guru

    Messages:
    886
    Likes Received:
    45
    GPU:
    EVGA GTX 980 TI sc
Stupid me, the game was running badly all the time.
     
    Last edited: Feb 27, 2016
  10. Spets

    Spets Guest

    Messages:
    3,500
    Likes Received:
    670
    GPU:
    RTX 4090
    It actually doesn't. There's a lot more work for the developers on DX12 to get the most out of hardware.

    http://www.anandtech.com/show/8544/microsoft-details-direct3d-113-12-new-features
     

  11. Keesberenburg

    Keesberenburg Master Guru

    Messages:
    886
    Likes Received:
    45
    GPU:
    EVGA GTX 980 TI sc
    It was my pc

    stupid me
     
    Last edited: Feb 27, 2016
  12. Stormyandcold

    Stormyandcold Ancient Guru

    Messages:
    5,872
    Likes Received:
    446
    GPU:
    RTX3080ti Founders
    Confirmed by Anandtech. So yeah, they look identical.
     
  13. Denial

    Denial Ancient Guru

    Messages:
    14,207
    Likes Received:
    4,121
    GPU:
    EVGA RTX 3080
    DX12 = more performance, which means they can theoretically shove more stuff on the screen and keep a card at their target framerate.

    People still seem to not understand that game developers mostly target framerates for specific setups. They build their game, the FPS is always low at the start, then they optimize till a target card hits a target framerate. So at Ultra their target cards are probably Fury X/980Ti, they want that to hit 60fps @ 1080p. Maybe at High they want 980/390 to hit 60fps at 1080p and so on down.
     
  14. Ext3h

    Ext3h Guest

    Messages:
    13
    Likes Received:
    0
    GPU:
    Various.
It's slightly more complicated than that. In order to manage multiple queues in software - regardless of how they map to hardware - you need additional synchronization. This synchronization consists of signals, and fences which can only be passed once the corresponding signal has been sent.

As long as there is only a single queue in software, it's trivial for the driver to deduce that a fence can be ignored if the corresponding signal has been scheduled earlier in the same queue. That effectively results in the GPU not needing to stop on such fences, as the driver can just remove them and the GPU can hence just rush through.

With multiple queues it no longer works like this, as the signal and the fence may now be on different queues. The GPU can only execute one section at a time, and after that (if the driver/hardware doesn't support it in hardware) fences need to be checked in software, on the CPU. If a fence is not passable yet, a different block needs to be scheduled, replacing the content of the one queue the GPU uses in hardware.
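To make the signal/fence relationship concrete, here is a rough D3D12-style sketch (purely illustrative; the function, queue and command-list names are placeholders of mine, not anything from the actual driver or game):

    Code:
    #include <d3d12.h>
    #include <wrl/client.h>
    using Microsoft::WRL::ComPtr;

    // Illustrative only: the compute queue signals a fence when its work is done,
    // and the graphics queue waits for that value before starting dependent work.
    void SubmitWithCrossQueueFence(ID3D12Device* device,
                                   ID3D12CommandQueue* computeQueue,
                                   ID3D12CommandQueue* graphicsQueue,
                                   ID3D12CommandList* computeList,
                                   ID3D12CommandList* gfxList)
    {
        ComPtr<ID3D12Fence> fence;
        device->CreateFence(0, D3D12_FENCE_FLAG_NONE, IID_PPV_ARGS(&fence));
        const UINT64 fenceValue = 1;

        // Compute queue runs its work, then signals the fence on its GPU timeline.
        computeQueue->ExecuteCommandLists(1, &computeList);
        computeQueue->Signal(fence.Get(), fenceValue);

        // The graphics queue may not start the dependent work before the fence
        // reaches that value. Whether this wait is resolved in hardware or by the
        // driver re-scheduling on the CPU is exactly the difference described above.
        graphicsQueue->Wait(fence.Get(), fenceValue);
        graphicsQueue->ExecuteCommandLists(1, &gfxList);

        // (A real implementation would keep the fence alive until the GPU is done
        // with it; it goes out of scope here only to keep the sketch short.)
    }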

    The performance "penalty" you can observe on Nvidias GPUs is (AFAIK) mostly the driver no longer being able to eliminate fences forehand, effectively eliminating any possible optimization and causing stalls during which the GPU becomes idle.

For AMD it's the other way around. At least the compute queues are able to monitor fences in hardware (the 3D queue is AFAIK subject to the same limitations as on Nvidia hardware!), making the overhead from the lost driver optimization barely noticeable, while at the same time the parallel execution even achieves a net gain, as the diverse workload can hide bottlenecks.

    Fun fact:
With a bit of optimization in the driver (an improved scheduler), Nvidia can still manage to regain that lost performance by extending the optimization heuristic to multiple queues, even without hardware acceleration, at least to some extent. I've been waiting for that to happen for quite a while now.

    PS:
I suspect that Nvidia opted to be tested with Async Compute on, to debunk the myth that they couldn't handle it. They did in fact get rid of a few bugs the driver was originally riddled with, and the performance did improve. The remaining penalty is most likely going to be eliminated entirely with a future driver update, but for now they have already improved from "broken" through "major penalty" to just "minor penalty", with the option of reaching "no penalty".

Oxide devs getting a better understanding of what exactly hurts NV's hardware so much probably helped as well in avoiding extensive (ab-)use of AC. Keep in mind that at first nobody, not even NV engineers, knew why it scaled so badly.
     
    Last edited: Feb 27, 2016
  15. GeniusPr0

    GeniusPr0 Maha Guru

    Messages:
    1,440
    Likes Received:
    109
    GPU:
    Surpim LiquidX 4090
    Didn't take long for you to show up.

Can I get your opinion on Maxwell II's realistic performance and framerate latency for hypothetically dealing with VR games that also use "async compute"? Is CUDA the only alternative?

    I can see this being a problem for VR headsets that aren't nVIDIA branded or partnered. Or maybe I'm wrong.

In the benchmarks with AoTS, we can clearly see that AMD's cards seem somewhat "over-engineered". Would you agree with that statement? As in, given these 10-20% async compute gains, no balls-to-the-wall architecture is necessary to realistically match Pascal.

By regaining the performance lost, do you mean those 2 to 5 FPS (worst-case scenario)? Effectively a negligible difference. As a Maxwell II owner, I'm not so excited about this.
    Thank you. People just won't stop speculating about some agenda against nVIDIA, yet you tell them that nVIDIA can work with Oxide while the development is ongoing, and they still don't understand that.

    Dealing with people on the internet can be painful if one expects others to learn.

Just for kicks, as a gamer, which card would you pick from each of these pairs, if you could have only one for the next 3 years? Overclocking factored in too.

    970 or 390
    980 or 390X
    980Ti or FuryX
     
    Last edited: Feb 27, 2016

  16. Goiur

    Goiur Maha Guru

    Messages:
    1,341
    Likes Received:
    632
    GPU:
    ASUS TUF RTX 4080
    None. Wait for next gen.
     
  17. Denial

    Denial Ancient Guru

    Messages:
    14,207
    Likes Received:
    4,121
    GPU:
    EVGA RTX 3080
Yeah, anyone buying a current gen card this year is making a poor decision regardless of what they choose.

Also, if Oxide is getting a better understanding of Nvidia's AC usage, then why is performance degrading as newer versions of the benchmark come out? The last time these cards were tested the 980 Ti/Fury X performed the same; now the Fury is beating the 980 Ti and the X is even faster.

    I would think the opposite would occur?
     
    Last edited: Feb 27, 2016
  18. Dygaza

    Dygaza Guest

    Messages:
    536
    Likes Received:
    0
    GPU:
    Vega 64 Liquid
They did increase AC usage for this version; that's at least one reason. So AMD cards get to shine a bit more due to the increased AC usage, though they are faster even without using AC. There are also more compute effects in this benchmark version, which naturally works in favor of AMD's arch.

    In general this new benchmark is a bit heavier.
     
  19. GeniusPr0

    GeniusPr0 Maha Guru

    Messages:
    1,440
    Likes Received:
    109
    GPU:
    Surpim LiquidX 4090
Look at what Oxide has said. They are tinkering with and changing the game's effects and algorithms, and recently emphasis was put on algorithms that happen to further saturate the hardware computational "engines" that operate asynchronously.

    There is no conspiracy. It's not like they don't have access to what's going on with the game. This isn't GameWorks.
     
  20. Denial

    Denial Ancient Guru

    Messages:
    14,207
    Likes Received:
    4,121
    GPU:
    EVGA RTX 3080
    I never said it was a conspiracy.
     