Review: Ashes of Singularity: DX12 Benchmark II with Explicit Multi-GPU mode

Discussion in 'Frontpage news' started by Hilbert Hagedoorn, Feb 24, 2016.

  1. Noisiv

    Noisiv Ancient Guru

    Messages:
    8,230
    Likes Received:
    1,494
    GPU:
    2070 Super
He's saying the reason NV is better in DX11 is their software scheduling,
yet I clearly remember everyone and their dog complaining that they wanted a HW scheduler, because TRICKS!

Now everyone demands Async Compute. Why? Well, why not, LOL.

In reality the only thing that matters performance-wise is your fps,
and Oxide succeeded in making a DX12 benchmark that the majority of PC hardware (NV) either doesn't run (not compatible), runs the same, or runs even worse than the DX11 codepath.

    Any way you look at it, that's quite an achievement :D NOT!
     
  2. Dygaza

    Dygaza Guest

    Messages:
    536
    Likes Received:
    0
    GPU:
    Vega 64 Liquid
The biggest reason is that AMD lacks an equivalent of Nvidia's GigaThread engine, which is what lets Nvidia make efficient use of DX11 command lists. We'll see when Polaris comes out, but everything he has said so far sort of makes sense.

If it were just bad driver efficiency... really? They have huge experience with Mantle, Vulkan, Metal, DX12 and whatever else, and they can't get a driver to work properly? No, they do have some talented people, so if it were only a driver problem, it would have been fixed already.
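(For anyone unfamiliar with the "DX11 command lists" being referred to: in D3D11 these are deferred contexts, where worker threads record command lists that the immediate context replays later. Below is a minimal sketch of that pattern, not taken from any actual game or driver; `device` and `immediateContext` are assumed to already exist and error handling is omitted.)

```cpp
// Minimal sketch of D3D11 "command lists" (deferred contexts): worker
// threads record commands, the immediate context replays them later.
#include <d3d11.h>
#include <wrl/client.h>
using Microsoft::WRL::ComPtr;

// Runs on a worker thread: record rendering commands without touching
// the immediate context.
ComPtr<ID3D11CommandList> RecordChunk(ID3D11Device* device)
{
    ComPtr<ID3D11DeviceContext> deferred;
    device->CreateDeferredContext(0, &deferred);

    // ... set state and issue draws on `deferred` here ...

    ComPtr<ID3D11CommandList> list;
    deferred->FinishCommandList(FALSE, &list);   // close the recording
    return list;
}

// Runs on the render thread: replay the pre-recorded list. How cheaply
// this maps onto actual GPU submission is up to the driver, which is the
// part NVIDIA's DX11 driver is generally considered strong at.
void Submit(ID3D11DeviceContext* immediateContext, ID3D11CommandList* list)
{
    immediateContext->ExecuteCommandList(list, FALSE);
}
```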
     
  3. GeniusPr0

    GeniusPr0 Maha Guru

    Messages:
    1,439
    Likes Received:
    108
    GPU:
    Surpim LiquidX 4090
Proper async compute is about so much more than just extra frames; that's really not its primary effect. I would say async compute is essential for VR, since latency will be of the highest importance from here on. Nvidia won't catch up until 2019.
     
  4. Denial

    Denial Ancient Guru

    Messages:
    14,206
    Likes Received:
    4,118
    GPU:
    EVGA RTX 3080
Well, I mean, people want a hardware scheduler now because it's going to be better going forward. There's really nothing wrong with wanting the best when it is the best.

The bottom line is that NV's decision to go with software not only gave them a power advantage, it gave them a performance advantage as well, if that post is accurate. I bought my GTX 980 in the fall of 2014. Its direct competitor at the time was the 290X, which the 980 was faster than in the majority of titles. That gap has obviously shrunk over time, especially against the 8GB variant, and going forward the 290X may even eclipse the 980 in titles like Ashes. But I'm about to replace the card anyway, as I think most people are, since 16nm should bring a pretty hefty increase, and most knew that when Maxwell/GCN shipped and it was still 28nm. When the 980/970 launched, Nvidia stated that the vast majority of users upgrade every two years and that the bulk of that upgrade cycle happened around the 680's launch. Considering they own 80+% of the market, I would imagine the majority of users in general are looking to upgrade this year.

And again, the bottom line is that there are two DX12 benchmarks out, Fable Legends and Ashes. Nvidia wins one, AMD wins the other. Nvidia claims it can get some additional performance out of its driver; maybe it can, maybe it can't, I don't know.

Nvidia won't catch up based on what? Do you have some kind of inside knowledge of Pascal's design? How do you know Pascal isn't using a hardware scheduler? How do you know Nvidia hasn't remedied the problem? The initial Oculus report about the pre-emption problem specifically stated that it's most likely fixed in Pascal, as Nvidia was already aware of it going into VR, which is why they developed the Async Warp middleware to begin with. Your post about them not being able to fix it in the driver is the same crap: you're literally making an argument based on nothing, which I guess I should just expect, since your initial post in the thread was essentially flamebait.
     
    Last edited: Feb 25, 2016

  5. Stormyandcold

    Stormyandcold Ancient Guru

    Messages:
    5,872
    Likes Received:
    446
    GPU:
    RTX3080ti Founders
What's clear is that the game needs the best cards from both sides to play it as "intended".

IMHO, the performance, for what it's trying to be, is fine on both sides. Also, Hilbert's article is great because it helps show up problems that need fixing.

For example, on the AMD side all you will see at the moment is 60 fps max, no matter what the benchmarks say. AMD will fix this.

For Nvidia, it shows there's a lot of work to do.

This is the way it's always been when a new DX version is released. It's a work in progress.

For the average consumer, this is nothing to worry about. For the devs, it's more work to get DX12 working than DX11.

For us gamers, I think we're still at least a year away from really seeing the impact of DX12. I highly doubt we'll see many DX12-exclusive games in that time frame.

In two years, though, I do think many will have moved on to Win10, as the software available will justify upgrading.
     
  6. Dygaza

    Dygaza Guest

    Messages:
    536
    Likes Received:
    0
    GPU:
    Vega 64 Liquid
I already have over 100 hours in the game on Steam, and I have to admit that even the DX12 path is still being learned. It's quite impressive that I haven't had a single crash during gameplay (unless we count the very first versions crashing on alt+tab). It's very well made for a first DX12 game, and things can only get better from here.
     
  7. Ieldra

    Ieldra Banned

    Messages:
    3,490
    Likes Received:
    0
    GPU:
    GTX 980Ti G1 1500/8000
    Hey! Hope nobody minds me interrupting the inane bashing

    I downloaded the game and ran the benchmark in DX12 on my 980Ti

    2560x1440 using the in-game "Extreme" preset

I got an average of 64 fps.

Seems odd, considering on G3D it achieves 55 on the High preset.

Granted, my card is overclocked and boosts to 1490 MHz.

    Edit:

    I've noticed several discrepancies in the benchmark numbers looking at all the articles that came out on various websites.

I also logged HW events with GPUView to take a look under the hood: 85% 3D queue utilization, compute queue literally untouched by Ashes, just DWM events.

    lovely

    Edit 2.0:

I'm gonna post a screenshot of GPUView as soon as I can. I'm also wondering if logging affects performance; maybe I should run the bench again without GPUView.

Also, I've been wondering: does anybody have a link to an in-depth feature on Fermi/Kepler/Maxwell?

It's confusing as **** with all the interchangeable terms: warps, wavefronts, EUs, CUs, SPs, SMs, SMMs. I've reached a point where the more I read about it, the less I understand.
     
    Last edited: Feb 25, 2016
  8. Denial

    Denial Ancient Guru

    Messages:
    14,206
    Likes Received:
    4,118
    GPU:
    EVGA RTX 3080
    It's probably under the 3D Hardware queue in GPUView. It does the same thing on Fable.
     
  9. Ieldra

    Ieldra Banned

    Messages:
    3,490
    Likes Received:
    0
    GPU:
    GTX 980Ti G1 1500/8000
Probably, but how do I confirm that? Assuming it is in the 3D queue, why is it there? Does it default to traditional DX11-style queues on NV hardware with async disabled?

A nice test: GPUView, DX12 vs DX11!


Here's the DX12 GPUView screenshot; not sure why Ashes shows up as blue on the far left, then becomes black.

There's a flip queue as well. Not really sure what it is, but I'm guessing it's the pre-rendered frames (https://msdn.microsoft.com/en-us/li...393(v=vs.85).aspx?f=255&MSPPError=-2147217396)

    Dat memory usage when loading gpuview logs :banana:
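(If the pre-rendered frames guess is right, the flip queue depth corresponds to how many frames the application and driver are allowed to queue ahead of presentation. As a rough illustration only, here is how an app can cap that itself through DXGI; `device` is an assumed, already-created ID3D11Device and error handling is omitted.)

```cpp
// Sketch: capping the number of queued ("pre-rendered") frames via DXGI.
// The driver default is typically 3, which would show up as a deeper flip
// queue in a GPUView trace.
#include <d3d11.h>
#include <wrl/client.h>
using Microsoft::WRL::ComPtr;

void LimitQueuedFrames(ID3D11Device* device, UINT maxFrames)
{
    ComPtr<IDXGIDevice1> dxgiDevice;
    device->QueryInterface(IID_PPV_ARGS(&dxgiDevice));

    // 1 = render at most one frame ahead of presentation.
    dxgiDevice->SetMaximumFrameLatency(maxFrames);
}
```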
     
    Last edited: Feb 25, 2016
  10. Denial

    Denial Ancient Guru

    Messages:
    14,206
    Likes Received:
    4,118
    GPU:
    EVGA RTX 3080
    Yeah, I'd imagine it's because it's disabled for now.

Google "asynchronous-compute-investigated-in-fable-legends-dx12-benchmark"; first link, second page.

Yep, if you do it, post the results; I'd be interested to know if there is a difference.
     

  11. Ieldra

    Ieldra Banned

    Messages:
    3,490
    Likes Received:
    0
    GPU:
    GTX 980Ti G1 1500/8000
  12. -Tj-

    -Tj- Ancient Guru

    Messages:
    18,097
    Likes Received:
    2,603
    GPU:
    3080TI iChill Black
Yes, interesting. So it has very powerful HW async, just at the CUDA level, not via the DX API.

Btw, can you put that pic in a spoiler? It breaks browsing on the phone... it's too big :)


     
  13. Ieldra

    Ieldra Banned

    Messages:
    3,490
    Likes Received:
    0
    GPU:
    GTX 980Ti G1 1500/8000
    wrong version, oops
     
    Last edited: Feb 26, 2016
  14. Denial

    Denial Ancient Guru

    Messages:
    14,206
    Likes Received:
    4,118
    GPU:
    EVGA RTX 3080
    What do you get on the Crazy preset?

    Also are you sure you are running the same benchmark? I think they released an updated version this week with newer effects and stuff. I'm not sure if you have to run it separately or what.
     
    Last edited: Feb 25, 2016
  15. GeniusPr0

    GeniusPr0 Maha Guru

    Messages:
    1,439
    Likes Received:
    108
    GPU:
    Surpim LiquidX 4090
Oh, so there is a problem? We're making progress. If you checked the screenshot from AT, you'd see that the ACE units are most effective at the highest loads (4K). If you think that can be fixed at the driver level, you're going to be in for a disappointment. An async implementation doesn't even need to be this extensive.

When the load is low, Nvidia is known to do well. Just wait for games that use VR and async in-game :rolleyes:.

As for Pascal, it still won't be anything like AMD's async. This is just "obvious". It will of course be better, but most likely not enough for worst-case scenarios. Looks like Ext3h thinks the same :].
     

  16. Ieldra

    Ieldra Banned

    Messages:
    3,490
    Likes Received:
    0
    GPU:
    GTX 980Ti G1 1500/8000
I believe so. I downloaded it from GOG, and it's the beta, not the alpha.

As for the Crazy preset, I'm on it, cap'n.
     
  17. GeniusPr0

    GeniusPr0 Maha Guru

    Messages:
    1,439
    Likes Received:
    108
    GPU:
    Surpim LiquidX 4090
    I'm curious about your 980 Ti clocks, since that's Maxwell's advantage.

Also, I'd like to know the CPU that ExtremeTech used. I believe X79/X99 are the best platforms for Nvidia's async compute.

    Can you also confirm Async Compute is enabled?

Edit: I believe ET is using a stock 5960X, but I'm not sure.
     
    Last edited: Feb 25, 2016
  18. zimzoid

    zimzoid Guest

    Messages:
    1,442
    Likes Received:
    25
    GPU:
    2xEVGA980TiSC+(H20) Swift
This is half price on Steam till March 1st; might grab it next payday :)
     
  19. Ieldra

    Ieldra Banned

    Messages:
    3,490
    Likes Received:
    0
    GPU:
    GTX 980Ti G1 1500/8000
Nvidia's async compute is essentially non-existent, as evidenced by the GPUView screenshot.

Async is reported as enabled by both the driver and the game, but there is NO "asynchronous compute" going on; no concurrency between 3D and compute workloads.

So even assuming Nvidia never implements an async-like feature, this performance is pretty damn acceptable to me.
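(For reference, on the application side "async compute" in D3D12 just means submitting to a second, COMPUTE-type command queue and letting the graphics queue overlap it where dependencies allow. The sketch below shows that submission pattern in rough form; it is not Oxide's code, all objects are assumed to have been created elsewhere, and error handling is omitted. If the hardware or driver doesn't actually run the two queues concurrently, the compute packets simply serialize behind the 3D queue, which matches what the GPUView trace shows.)

```cpp
// Sketch of D3D12 "async compute" submission: a DIRECT (3D) queue and a
// COMPUTE queue synchronized with a fence. Whether the two queues actually
// overlap on the GPU is up to the hardware/driver - exactly what the
// GPUView trace is probing. The compute queue is assumed to have been
// created with D3D12_COMMAND_LIST_TYPE_COMPUTE.
#include <d3d12.h>

void SubmitFrame(ID3D12CommandQueue* directQueue,
                 ID3D12CommandQueue* computeQueue,
                 ID3D12Fence* fence,
                 UINT64 fenceValue,
                 ID3D12CommandList* gfxIndependent,  // e.g. shadow/G-buffer pass
                 ID3D12CommandList* computeWork,     // e.g. light culling
                 ID3D12CommandList* gfxConsumer)     // pass that reads compute output
{
    // Graphics work that does not depend on the compute results...
    directQueue->ExecuteCommandLists(1, &gfxIndependent);

    // ...can overlap with compute work submitted on its own queue.
    computeQueue->ExecuteCommandLists(1, &computeWork);
    computeQueue->Signal(fence, fenceValue);

    // Only the pass that consumes the compute output waits on the fence;
    // the wait happens on the GPU timeline, not on the CPU.
    directQueue->Wait(fence, fenceValue);
    directQueue->ExecuteCommandLists(1, &gfxConsumer);
}
```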
     
  20. Denial

    Denial Ancient Guru

    Messages:
    14,206
    Likes Received:
    4,118
    GPU:
    EVGA RTX 3080
In my very first post in this thread I said it's a problem...

    http://forums.guru3d.com/showpost.php?p=5235811&postcount=5

My point is that the issue is overblown in this game. It's no different than AMD's architecture being piss-poor at tessellation.

    http://images.anandtech.com/graphs/graph9390/75485.png

If Oxide released a game with tessellation everywhere, I would be saying the exact same thing I'm saying here: that it's a poor metric for gauging performance. And while it's definitely possible that Pascal won't have a fix, there's nothing definitive to show that. You saying it's "obvious" doesn't mean anything. Also, that article doesn't even mention Pascal.
     
