Review: Ashes of the Singularity: DX12 Benchmark II with Explicit Multi-GPU mode

Discussion in 'Frontpage news' started by Hilbert Hagedoorn, Feb 24, 2016.

  1. Ieldra

    Ieldra Banned

    Messages:
    3,490
    Likes Received:
    0
    GPU:
    GTX 980Ti G1 1500/8000
    I've been an active poster and lurker on here for close to a decade. You've always been very reasonable and brand-agnostic; I presume that's why you're always being targeted by those who just want to watch the world burn.
     
  2. Dygaza

    Dygaza Guest

    Messages:
    536
    Likes Received:
    0
    GPU:
    Vega 64 Liquid
    Ieldra, you aren't benchmarking the new version. The new benchmark has tons of new stuff: effects, two different races and, best of all, a different map in the benchmark :p

    Steam has now updated to the new version (0.90). Never compare results across different versions, as things change. Even the Crazy settings aren't comparable anymore, since they are different.
     
    Last edited: Feb 25, 2016
  3. Ieldra

    Ieldra Banned

    Messages:
    3,490
    Likes Received:
    0
    GPU:
    GTX 980Ti G1 1500/8000
    Thanks for pointing this out. I don't have the game on Steam; mine is on GOG.
     
  4. GeniusPr0

    GeniusPr0 Maha Guru

    Messages:
    1,440
    Likes Received:
    109
    GPU:
    Surpim LiquidX 4090
    I really don't see how it's overblown; the async implementation serves a purpose. Over-tessellation does not. One is *there* to increase performance on hardware that supports it. The other, *tessellation*, does not increase performance; in fact, it only tanks performance on all cards, with Maxwell II hit the least hard. Again, NVIDIA is not locked out of the game's source code, which only compounds the situation.

    I will agree it's not a good metric to base future games on, but only because NVidia has more money than they deserve to throw at developers, and their 70% market share makes it nonsensical for a developer not to cater to Maxwell.
     

  5. Denial

    Denial Ancient Guru

    Messages:
    14,207
    Likes Received:
    4,121
    GPU:
    EVGA RTX 3080
    The amount of times I've been called both an AMD and Nvidia fanboy on this forum is hilarious.

    The problem is most people see these arguments as black and white. Apparently it's impossible to acknowledge that there is an issue with Nvidia's cards that should be addressed without also declaring that AMD is the DX12 champ until 2019, like GeniusPr0 did. It's either Nvidia is amazing and AMD sucks, or AMD is amazing and Nvidia sucks.

    I get the same crap in politics. If I come out attacking a republican candidate for saying something, people automatically assume I'm voting for Hillary or Bernie, or I voted for/support Obama. I actually just hate them all and I think they all have some screwed up positions on things, but good positions on other things.

    I fully acknowledge that Nvidia pulls a ton of crap. The 970 thing was a total PR job by them and it should never have happened. I acknowledge that Nvidia uses tessellation in GameWorks knowing it gives them an advantage. I acknowledge that Nvidia generally uses anti-competitive practices to gain advantages, like stopping Nvidia cards from working as PhysX processors when an AMD card is present in the machine.

    But AMD pulls the same crap. They shoved TressFX into the 2013 Tomb Raider right before it launched, the same way Nvidia did Hairworks in The Witcher 3. They had that whole Nano review fiasco. They lied about working with the Project Cars developers. They instigated the Crysis 2 tessellation nonsense, which turned out to be completely false.

    The truth usually is in the middle, and that's generally where I sit. Nvidia definitely needs to get its **** together with async. It should be enabled already and it should definitely be fixed for Pascal. I'm glad that benchmarks like this are pointing that out. That being said, making completely asinine statements about the future of DX12 or whatever just doesn't sit well with me, especially when you lead into it with comments that are effectively just flamebait, which of course I always take but should probably ignore.
     
  6. Ieldra

    Ieldra Banned

    Messages:
    3,490
    Likes Received:
    0
    GPU:
    GTX 980Ti G1 1500/8000

    I ran the benchmark on Crazy and the framerate tanked; halved, specifically: 37.2 fps.

    Now I've been told I'm running an outdated version of the game, but this is the latest update I have on GOG.

    I'm going to try to pull up the older benchmarks from summer 2015, compare against those, work out the % change relative to version 0.9, and extrapolate.

    Scrap that idea: http://www.techspot.com/review/1081-dx11-vs-dx12-ashes/page3.html

    That's an older benchmark from November, and it's still inconsistent with my data: the 980 Ti performs almost 30% faster there (compared to my run) at the 1440p Crazy preset, despite being at stock clocks.
     
    Last edited: Feb 25, 2016
  7. GeniusPr0

    GeniusPr0 Maha Guru

    Messages:
    1,440
    Likes Received:
    109
    GPU:
    Surpim LiquidX 4090
    Nice twisting of my words, and a complete miss of the context given. Async compute isn't everything DX12 has to offer.

    *golfclap* Do you see DX12 anywhere in there?

    Do you see that I own 5 Maxwell II cards?

    *golfclap*
     
  8. Ieldra

    Ieldra Banned

    Messages:
    3,490
    Likes Received:
    0
    GPU:
    GTX 980Ti G1 1500/8000
    You could own Nvidia's entire stock of Maxwell II cards and it wouldn't change a damn thing; you're a prototypical forum troll, perusing threads and making inflammatory comments.

    Fine, you want to separate async from DX12? Nvidia has supported full hardware async since Fermi, though only in CUDA.
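
    To illustrate what I mean by async in CUDA, here's a minimal sketch using the CUDA runtime API (array size and names are mine, error checking omitted). Work queued to independent streams is free to overlap whenever the hardware allows it:

    Code:
    #include <cuda_runtime.h>

    int main()
    {
        const size_t n = 1 << 20;                   // 1M floats, arbitrary size
        float *hostA, *hostB, *devA, *devB;
        cudaMallocHost(&hostA, n * sizeof(float));  // pinned host memory is
        cudaMallocHost(&hostB, n * sizeof(float));  // required for real async copies
        cudaMalloc(&devA, n * sizeof(float));
        cudaMalloc(&devB, n * sizeof(float));

        cudaStream_t s1, s2;
        cudaStreamCreate(&s1);
        cudaStreamCreate(&s2);

        // Both copies are enqueued without blocking the CPU; the GPU's scheduler
        // is free to overlap work from independent streams on its engines.
        cudaMemcpyAsync(devA, hostA, n * sizeof(float), cudaMemcpyHostToDevice, s1);
        cudaMemcpyAsync(devB, hostB, n * sizeof(float), cudaMemcpyHostToDevice, s2);

        cudaDeviceSynchronize();                    // wait for everything queued above

        cudaStreamDestroy(s1);
        cudaStreamDestroy(s2);
        cudaFree(devA);      cudaFree(devB);
        cudaFreeHost(hostA); cudaFreeHost(hostB);
        return 0;
    }

    Whether the overlap actually happens is up to the GPU's scheduler, not the programmer, which is the whole point.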

    How did you arrive at that number, 2019?

    Are you listening to yourself? You're just spewing random, unfounded opinions about things one does not usually opine about. What you do is either analyze whatever data you have and draw meaningful conclusions, shut up and wait for more information, or display your predilection for infantile behavior here on this forum.
     
    Last edited: Feb 25, 2016
  9. GeniusPr0

    GeniusPr0 Maha Guru

    Messages:
    1,440
    Likes Received:
    109
    GPU:
    Surpim LiquidX 4090
    It means that I own 5 Maxwell II cards; you're somehow connecting this with some unspoken reason that I'm not aware of and that you haven't mentioned.

    I prefer the word 'bifurcation'. Hardware async in Maxwell II? I'm excited now. If NVidia needs to tweak the driver for context switching, it's not fully hardware-based, now is it?

    Volta.

    No, I've just read more, and I have access to more hardware and configurations.

    And then there's the fact that I'm done being lied to by NVidia. You went from happily benchmarking to angry pretty fast.
     
    Last edited: Feb 25, 2016
  10. Ieldra

    Ieldra Banned

    Messages:
    3,490
    Likes Received:
    0
    GPU:
    GTX 980Ti G1 1500/8000
    I'm connecting it to some unspoken reason? A reason for what? You're incapable of formulating a sentence; how am I supposed to have a discussion with you?

    As for hardware scheduling, I mean, more specifically, concurrency and parallelism between compute and 3D workloads on Maxwell II.

    For someone with access to more information, hardware and systems than I have, you sure seem incapable of reading the things I've previously posted in this thread.

    Now, had you been more polite and less of an obnoxious jerk, I'd have gladly linked it again and even explained why the CUDA implementation doesn't sit well with the D3D12 one.
     
    Last edited: Feb 25, 2016

  11. GeniusPr0

    GeniusPr0 Maha Guru

    Messages:
    1,440
    Likes Received:
    109
    GPU:
    Surpim LiquidX 4090
    You're going off-topic.

    I never said it would change anything, so it's difficult for me to know what you mean.

    What are your thoughts on async compute at the driver level, as per NVidia's tweet?
     
  12. Ieldra

    Ieldra Banned

    Messages:
    3,490
    Likes Received:
    0
    GPU:
    GTX 980Ti G1 1500/8000
    My thoughts regarding the driver-level implementation of async? It sounds like a crock of ****; unless it involves running the compute in CUDA, I don't see how it can really be done.
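
    For context, this is roughly all an application can do on the D3D12 side: create a dedicated compute queue next to the graphics queue and submit to both. A rough sketch (assuming a valid device, error handling omitted); everything below this, including whether the two queues actually run concurrently, is the driver's and hardware's business, which is exactly where this "driver-level" claim lives:

    Code:
    #include <d3d12.h>
    #include <wrl/client.h>

    using Microsoft::WRL::ComPtr;

    // The app expresses "async compute" by feeding a dedicated COMPUTE queue
    // alongside the usual DIRECT (graphics) queue. D3D12 itself promises no
    // concurrency; the driver may interleave or serialize the two queues.
    void CreateQueues(ID3D12Device* device,
                      ComPtr<ID3D12CommandQueue>& graphicsQueue,
                      ComPtr<ID3D12CommandQueue>& computeQueue)
    {
        D3D12_COMMAND_QUEUE_DESC desc = {};

        desc.Type = D3D12_COMMAND_LIST_TYPE_DIRECT;  // graphics + compute + copy
        device->CreateCommandQueue(&desc, IID_PPV_ARGS(&graphicsQueue));

        desc.Type = D3D12_COMMAND_LIST_TYPE_COMPUTE; // compute + copy only
        device->CreateCommandQueue(&desc, IID_PPV_ARGS(&computeQueue));
    }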

    And no, that was not off-topic; your sentence was meaningless until you clarified.

    Make more of an effort to make sense; worry about my being off-topic once you've got that nailed down.

    You mentioned your having five Maxwell cards; either you thought it was a meaningful addition to the conversation, or that actually was off-topic.

    TL;DR: Owning several Maxwell cards doesn't make you a hardware expert, in much the same way that owning several animals doesn't make you a veterinarian.
     
    Last edited: Feb 25, 2016
  13. Denial

    Denial Ancient Guru

    Messages:
    14,207
    Likes Received:
    4,121
    GPU:
    EVGA RTX 3080
    Did you seriously just quote yourself and golf clap?

    I don't care how many cards you own: 1, 5, 100, probably zero, but you just changed your profile to leverage it as part of your argument -- because that's usually the only time people bring that up. It's not relevant at all.

    Both async and tessellation serve purposes, but once you go past that purpose it becomes a benchmark of async and tessellation and not a game. AoS goes way above and beyond the normal level of compute. In what other game are you going to be rendering thousands of units on screen and computing the lighting for all of those units globally? At normal levels of compute, like what you see in the "High" preset in AoS and what you see in Fable, the Ti and Fury X perform similarly, which is what I would expect two $650 cards to do.

    Anyway, I'm done with this thread. It spawned some useful discussions but it's essentially just turned into an echo chamber, and I'll admit that includes myself.
     
  14. Ieldra

    Ieldra Banned

    Messages:
    3,490
    Likes Received:
    0
    GPU:
    GTX 980Ti G1 1500/8000
    Fury X and a stock-clocked Ti perform similarly*
     
  15. GeniusPr0

    GeniusPr0 Maha Guru

    Messages:
    1,440
    Likes Received:
    109
    GPU:
    Surpim LiquidX 4090
    Okay, you're making assumptions again. I own 5 Maxwell II cards. I said this because I feel I'm not biased: NVIDIA advertised async compute and full DX12 support, and my last AMD card was the 7970. Facts are facts, and misleading owners is something I have a great distaste for, which NVIDIA is hugely guilty of. I do speculate, but I'm not alone in my reasoning, nor does everyone think it's off.

    I changed it to update the PSU from 300W to 330W. Thank you, once again, for twisting facts that literally can't be missed. I'm also glad you're done. And you are still failing to realize that DX12 at 4K exploits the use of async computation, hence the gains; it's not just an AoTS thing. When games utilizing VR AND async come into play, there will be more emphasis on it, and that's what I'm getting at. NVidia will need a more robust method.

    It's not just about a scenario with 1000 units (it's not 1000 anyway), but yeah, I give up.
     
    Last edited: Feb 25, 2016

  16. AlexUsman

    AlexUsman Guest

    Messages:
    1
    Likes Received:
    0
    GPU:
    Gigabyte HD7870
    Doesn't cross-vendor multi-GPU performance depend on which card is the "main" one?
    I think I saw similar tests a few months ago, but I'm not sure; I could be confusing it with something else.
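
    From what I've read, in DX12's explicit multi-GPU mode the application itself enumerates the adapters and decides which one is primary, so ordering could well matter. Something like this, I think (a rough sketch of the app side, not taken from the game; function name is mine, error handling omitted):

    Code:
    #include <d3d12.h>
    #include <dxgi1_4.h>
    #include <vector>
    #include <wrl/client.h>

    using Microsoft::WRL::ComPtr;

    // Enumerate every hardware adapter and create an independent D3D12 device
    // on each; the app, not the driver, decides which GPU does what.
    std::vector<ComPtr<ID3D12Device>> CreateDevicesOnAllAdapters()
    {
        ComPtr<IDXGIFactory4> factory;
        CreateDXGIFactory1(IID_PPV_ARGS(&factory));

        std::vector<ComPtr<ID3D12Device>> devices;
        ComPtr<IDXGIAdapter1> adapter;
        for (UINT i = 0; factory->EnumAdapters1(i, &adapter) != DXGI_ERROR_NOT_FOUND; ++i)
        {
            DXGI_ADAPTER_DESC1 desc;
            adapter->GetDesc1(&desc);
            if (desc.Flags & DXGI_ADAPTER_FLAG_SOFTWARE)
                continue; // skip the software (WARP) adapter

            ComPtr<ID3D12Device> device;
            if (SUCCEEDED(D3D12CreateDevice(adapter.Get(), D3D_FEATURE_LEVEL_11_0,
                                            IID_PPV_ARGS(&device))))
                devices.push_back(device); // AMD and NVIDIA can sit side by side
        }
        return devices;
    }

    The engine would then split the frame across the devices and copy results back to whichever adapter owns the swap chain; that one is effectively the "main" card.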
     
  17. Extraordinary

    Extraordinary Guest

    Messages:
    19,558
    Likes Received:
    1,638
    GPU:
    ROG Strix 1080 OC
    Explicit Multi-GPU

    Does that mean AMD users will be able to use dedicated NVIDIA cards for PhysX without hacked drivers?
     
  18. -Tj-

    -Tj- Ancient Guru

    Messages:
    18,103
    Likes Received:
    2,606
    GPU:
    3080TI iChill Black
    It does have a more robust method; it's just at the CUDA level, not the DX API level.

    I doubt things will change with Pascal; it's how they implemented it, and the first chip was GK110.


    So it has dedicated HW for it; the only SW part is the driver telling the HW when to utilize it. It's up to the dev to utilize it properly, just like it has to be utilized properly on AMD if they want to support it at the DX12 API level.

    Dunno what's so hard to understand here, or why anyone would assume it's only SW-based.
     
    Last edited: Feb 25, 2016
  19. Ieldra

    Ieldra Banned

    Messages:
    3,490
    Likes Received:
    0
    GPU:
    GTX 980Ti G1 1500/8000
    Yes, but this is tangential to his point, namely that Nvidia is royally screwed until 2019. I love computer hardware, meaning I'm interested in it, and consequently I like to understand it as well as I can. This thread, and this whole forum, has been and will continue to be a benefit to people who want to learn.

    I don't mean to get philosophical, but ignorance is both a right and a burden, and while simple explanations for complex things are easy, they are also just that: simple explanations.

    I'm not saying everyone is wrong, or that the guy from Toronto I've been having a discussion with is wrong. I'm saying that when you are wrong, and this community is doing something positive - informing you, and perhaps even informing themselves in the process - you're very likely simply being an asshat if you're firing off accusations and making claims that consist of very simplistic arguments.
     
  20. -Tj-

    -Tj- Ancient Guru

    Messages:
    18,103
    Likes Received:
    2,606
    GPU:
    3080TI iChill Black
    Why until 2019? What will happen then? :) Volta? Maybe Volta will use it too; it's part of CUDA. AMD has, what, 8 ACE engines; an NV chip like GM200 has 22 SMX units, and each SMX CUDA block acts the same way as those 8 ACE blocks, with all SMXs controlled by GigaThread and HyperQ.
     
