Review: Ashes of Singularity: DX12 Benchmark II with Explicit Multi-GPU mode

Discussion in 'Frontpage news' started by Hilbert Hagedoorn, Feb 24, 2016.

  1. Noisiv

    Noisiv Ancient Guru

    Messages:
    8,050
    Likes Received:
    1,348
    GPU:
    2070 Super
we're all guilty of glancing through this pic:

[IMG]

but we simply don't have enough information,
and speaking for myself, I just concluded that it's either an FCAT bug (not being up to date with DX12) or, less likely, an AMD output bug; either way, one that will be quickly fixed.

    If Hilbert had made a big fuss about this, I would understand the need to defend AMD.
    But all that G3D said is that it LOOKS LIKE VSYNC, and no claims about AMD stuttering or anything like that have been made.

As for the over-zealousness of ET and this Hruska guy, that doesn't surprise me one bit. They've been in bed with AMD for quite some time.
At least that's what the back of my mind says.
     
  2. Dygaza

    Dygaza Master Guru

    Messages:
    536
    Likes Received:
    0
    GPU:
    Vega 64 Liquid
There is absolutely no reason for any negativity over this. It's an error which needs to be fixed. If Hilbert hadn't posted this, it would simply have taken longer to reveal itself. It's good that this came to our attention this early, so future game reviews won't suffer from it.
     
    Last edited: Feb 26, 2016
  3. Alessio1989

    Alessio1989 Ancient Guru

    Messages:
    2,113
    Likes Received:
    634
    GPU:
    .
The DX12 transition needs a re-implementation of the graphics back-end; it's not a refactoring like the move from DX10 to DX11 was.
Epic Games started working on their DX12 RHI implementation over a year ago..
And yes, current drivers from all vendors are still not 100% ready.
     
  4. Denial

    Denial Ancient Guru

    Messages:
    13,527
    Likes Received:
    3,073
    GPU:
    EVGA RTX 3080
They had a semi-functional build of UE with DX12 in October of 2014, so they probably started it earlier that year. So yeah, it's been in development for around two years and they still aren't finished with it yet. 4.11 includes a bunch of new stuff, and they've already announced more major changes for the 4.12 branch.
     

  5. OnnA

    OnnA Ancient Guru

    Messages:
    13,686
    Likes Received:
    3,525
    GPU:
    3080Ti VISION OC
    It's from Extreme Tech

    "Trust your eyes

    There’s one final reason to dispute what FCAT is reporting: It doesn’t match how the game appears to run on AMD hardware. The reason that Scott Wasson’s initial report on sub-second GPU rendering was so influential is because it crystalized and demonstrated a problem that reviewers and gamers had noticed for years. Discussions of microstutter are as old as multi-GPU configurations.

    Microstutter in the 7990 driver when running Shogun 2, and believe me, you could see it.

    That microstutter was clearly, obviously visible while benchmarking the game. It might not have shown up in a conventional FPS graph, but it popped out immediately in the FRAPS frame time data. Looking at that graph for the first time, I felt like I’d finally found a way to intuitively capture what I’d been seeing for years.
    Ashes of the Singularity doesn’t look like that on an R9 Fury X. It doesn’t look anything like the FCAT graph suggests it does. It appears to be equally smooth on both AMD and Nvidia hardware when running at roughly the same frame rate.

[IMG]

    Ashes of the Singularity measures its own frame variance in a manner similar to FRAPS; we extracted that information for both the GTX 980 Ti and the R9 Fury X. The graph above shows two video cards that perform identically — AMD’s frame times are slightly lower because AMD’s frame rate is slightly higher. There are no other significant differences. That’s what the benchmark “feels” like when viewed in person. The FCAT graph above suggests incredible levels of microstutter that simply don’t exist when playing the game or viewing the benchmark.

    AMD has told us that it recognizes the value of FCAT in performance analysis and fully intends to support the feature in a future driver update. In this case, however, what FCAT shows is happening simply doesn’t match the experience of the actual output — and it misrepresents AMD in the process.
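The FRAPS-style measurement the quoted article describes boils down to timestamping every present and diffing consecutive timestamps; large frame-to-frame swings are what read as microstutter even when the average FPS looks fine. A minimal plain-C++ sketch of that idea (not FRAPS's or FCAT's actual code; the function names are mine):

```cpp
// Sketch: derive per-frame times from present timestamps, then use the
// standard deviation of those frame times as a single-number stutter proxy.
#include <vector>
#include <cmath>

std::vector<double> frame_times_ms(const std::vector<double>& present_ts_ms) {
    std::vector<double> ft;
    for (size_t i = 1; i < present_ts_ms.size(); ++i)
        ft.push_back(present_ts_ms[i] - present_ts_ms[i - 1]);
    return ft;
}

double frame_time_stddev(const std::vector<double>& ft) {
    double mean = 0, var = 0;
    for (double t : ft) mean += t;
    mean /= ft.size();
    for (double t : ft) var += (t - mean) * (t - mean);
    return std::sqrt(var / ft.size());
}
```

Two runs at the same average FPS can then look completely different: steady 16 ms presents give a deviation of zero, while alternating 10 ms / 23 ms presents average out the same but show the spikes the article is talking about.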
     
    Last edited: Feb 26, 2016
  6. GeniusPr0

    GeniusPr0 Maha Guru

    Messages:
    1,314
    Likes Received:
    31
    GPU:
    RX 6800 XT Black Ed
Ironic, isn't it? Hilbert says he can see it (60 fps VSYNC), then Joel says to trust his (your) eyes.

There's a verbal test in the US that grades reading comprehension, and studies have found massive differences between people of varying intelligence. At the low end, reading a passage and grasping even one connection was difficult; at the high end it was possible to gather three or more.

I read Hilbert's addendum; it looks like he was far nicer to Joel than I was.
     
    Last edited: Feb 26, 2016
  7. chispy

    chispy Ancient Guru

    Messages:
    9,090
    Likes Received:
    1,333
    GPU:
    RX 6900xt / RTX3090
Great article and a good read. Thank you for the update, Hilbert. Nice findings as well. In my opinion this is a good thing, seeing a 390 beat a 980 Ti when fully utilizing all the available hardware in the GPU. My sons' gaming PCs both run my old 290X cards, and I believe those cards have been worth every single dime I spent on them; they only get better with time. It seems AMD VGAs age much, much better than Nvidia GPUs.
     
  8. GeniusPr0

    GeniusPr0 Maha Guru

    Messages:
    1,314
    Likes Received:
    31
    GPU:
    RX 6800 XT Black Ed
Someone on here had a bang-on analogy: AMD being fine wine, NVIDIA being chocolate milk.


Ok... so I added in the chocolate.
     
  9. Sneakers

    Sneakers Ancient Guru

    Messages:
    2,717
    Likes Received:
    0
    GPU:
    Gigabyte 980Ti Windforce

Imo the only part of DX12 I'm looking forward to is the low-level access and how the GPU can queue up and simultaneously execute from multiple queues. Done right, this will free up tons of CPU cycles and make modern games less CPU-dependent, and/or open up even more advanced games thanks to the freed-up threads.

DX11 is a stone-age API with its sequential execution of queues. Graphical advancements don't really need more tools, since you can raise the level of tessellation and brute-force it with ever more powerful GPUs. What we need is an API that breaks free of sequential queue execution, to run LARGE multiplayer games with huge maps and advanced ballistics for FPS games.
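A toy illustration of that point in plain C++ (stand-in types, not real D3D12 calls; `CommandList` and `record_in_parallel` are made-up names): DX12's model lets each worker thread record its own command list with no locks and no shared context, and the app then hands everything over in one submit, where DX11 funnels all submission through a single serialized context.

```cpp
// Conceptual sketch of multithreaded command recording, DX12-style.
#include <thread>
#include <vector>
#include <string>

struct CommandList {                 // stand-in for ID3D12GraphicsCommandList
    std::vector<std::string> cmds;
    void record(const std::string& c) { cmds.push_back(c); }
};

// Each thread fills its own command list: distinct elements, so no locks needed.
std::vector<CommandList> record_in_parallel(int threads, int draws_per_thread) {
    std::vector<CommandList> lists(threads);
    std::vector<std::thread> workers;
    for (int t = 0; t < threads; ++t) {
        workers.emplace_back([&lists, t, draws_per_thread] {
            for (int d = 0; d < draws_per_thread; ++d)
                lists[t].record("draw_" + std::to_string(t) + "_" + std::to_string(d));
        });
    }
    for (auto& w : workers) w.join();
    return lists;   // all lists ready for one ExecuteCommandLists-style submit
}
```

The CPU win Sneakers describes comes from exactly this: the expensive recording work scales across cores instead of serializing on one thread.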
     
  10. mohiuddin

    mohiuddin Master Guru

    Messages:
    870
    Likes Received:
    92
    GPU:
    GTX670 4gb ll RX480 8gb
You know what? I totally agree with you. The R9 290/X is still going toe to toe with the GTX 970, even more so now, whereas I see the GTX 670 and GTX 780 slowly leaving the stage. I also now believe that AMD GPUs somehow age better. That's good for budget-conscious, not-so-fancy-GPU customers like me.
     

  11. Ieldra

    Ieldra Banned

    Messages:
    3,490
    Likes Received:
    0
    GPU:
    GTX 980Ti G1 1500/8000
I'm fascinated by your results: your average framerate using the Crazy preset is significantly (~20%) higher than mine at 1440p (I ran my tests using the 0.81 beta), yet at the High and Extreme presets my version of the benchmark ran significantly better.

Extreme 1440p I got ~64 fps

Crazy 1440p I got ~37
     
  12. Glottiz

    Glottiz Master Guru

    Messages:
    594
    Likes Received:
    193
    GPU:
    TUF 3080 OC
I'm not sold on this ages-like-wine stuff, at least not yet. We need a larger DX12 games library to make such claims.

After reading your post I actually checked a bunch of 290X reviews from when it first launched, and it generally seemed faster than the 780 back then. Then I checked the 780 vs the 290X in 2015/2016 games, and I don't see any big performance gains for the 290X relative to the 780 (their relative performance has remained about the same throughout the years), with one big exception being the Ashes benchmark.
     
    Last edited: Feb 26, 2016
  13. Alessio1989

    Alessio1989 Ancient Guru

    Messages:
    2,113
    Likes Received:
    634
    GPU:
    .
Oh, I only just read the v-sync **** now.

Just a little summary:

- There is only one major presentation model in DX12: the flip presentation model.
- There are two modes for handling frames: sequential (Windows Store/Phone style-ish (c)) and discard ("new" for the flip model!).
- You can present both synced and un-synced to the screen refresh rate, in both windowed and fullscreen mode.
- The only exception with the current OS & SDK (TH2, aka the 10586 branch) is the borderless window, where an uncapped frame rate is currently not allowed; a future update will change that, allowing an uncapped frame rate in borderless windows too.
- This opens the door to new and interesting techniques not possible before on DirectX, like low-latency triple buffering.
- Exclusive fullscreen mode is no more; instead it is emulated (this allows the OS to pop up notifications if permitted by the user, mostly off by default).
- If you use the July Windows 10 release & SDK (TH1, aka the 10240 branch) you will get incorrect behaviour.
- If you use a Windows 10 "Redstone" preview build you can get incorrect behaviour.
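For reference, the flip model described above is something you opt into when creating the swap chain. A hypothetical Windows-only fragment (requires dxgi1_4.h; device creation and error handling omitted, field values are illustrative):

```cpp
// Selecting the DXGI flip presentation model at swap-chain creation time.
DXGI_SWAP_CHAIN_DESC1 desc = {};
desc.BufferCount      = 3;                               // triple buffering
desc.Format           = DXGI_FORMAT_R8G8B8A8_UNORM;
desc.BufferUsage      = DXGI_USAGE_RENDER_TARGET_OUTPUT;
desc.SampleDesc.Count = 1;
desc.SwapEffect       = DXGI_SWAP_EFFECT_FLIP_DISCARD;   // flip model, "discard" mode
// DXGI_SWAP_EFFECT_FLIP_SEQUENTIAL is the other flip-model option.
// Present(1, 0) syncs to the refresh rate; uncapped presentation in a
// borderless window needs the ALLOW_TEARING support that shipped in a
// DXGI update after TH2, matching the point above.
```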

More info in dedicated MSDN and YouTube resources.
All the rest is crap (C)(TM).
     
    Last edited: Feb 26, 2016
  14. mohiuddin

    mohiuddin Master Guru

    Messages:
    870
    Likes Received:
    92
    GPU:
    GTX670 4gb ll RX480 8gb
It's obvious a 980 Ti user won't be sold on this matter.
I have an aged GPU, and I've seen many benches of aged GPUs; that's how I got this general impression of Nvidia GPUs. I can't remember exactly, but the ROTTR benches are probably the most recent disappointment for aged Nvidia cards. I looked at the GTX 600 series, you know, the GTX 670. And many have made the same claim about the GTX 780. So that's it.
     
  15. Spets

    Spets Ancient Guru

    Messages:
    3,126
    Likes Received:
    250
    GPU:
    RTX 3090
    Yeah I don't get it, Kepler is doing comparably fine in the latest releases:
    http://www.guru3d.com/articles_page..._graphics_performance_benchmark_review,7.html
    http://www.guru3d.com/articles_pages/anno_2205_pc_graphics_performance_benchmark_review,7.html
    http://www.guru3d.com/articles_pages/fallout_4_pc_graphics_performance_benchmark_review,7.html

Also, AoS is more than likely going to be the worst-case scenario for NV GPUs, and even with async blocked at the driver level it still doesn't seem that bad, let alone when (if) they get a fix out. SeanP seems pretty confident about it.
     

  16. Andrew LB

    Andrew LB Maha Guru

    Messages:
    1,205
    Likes Received:
    202
    GPU:
    EVGA GTX 1080@2,025
It's being abused right here in this benchmark. It's been stated many times that this benchmark uses a level of async compute far higher than we'll see in any game, and has the sole purpose of emphasizing a strong point of AMD's hardware and a weak point of nVidia's.

Also keep in mind, Oxide knew that nVidia cards have excellent async compute through CUDA, which is how nVidia had always intended async compute to be done. I'd love to see nVidia do a driver update which does AC via CUDA, because that's the only proper apples-to-apples way to do this.
     
  17. Fox2232

    Fox2232 Ancient Guru

    Messages:
    11,809
    Likes Received:
    3,366
    GPU:
    6900XT+AW@240Hz
A year and a half ago the GTX 780 Ti was equal to, or up to 15% faster than, the MSI GTX 970 Gaming OC at 1440p.
http://www.guru3d.com/articles_pages/msi_geforce_gtx_970_gaming_review,12.html
It is not that hard to retest those games and see whether the GTX 780 Ti remains where it was while the GTX 970 gained as much as its lead in new games suggests.
In the new Tomb Raider the regular GTX 970 now has a 14% lead at 1440p.
http://www.guru3d.com/articles_page..._graphics_performance_benchmark_review,7.html
In The Witcher 3 the GTX 970 had an advantage of 31%.
http://www.guru3d.com/articles_pages/the_witcher_3_graphics_performance_review,6.html
Fallout 4 (original release): 8% faster on the GTX 970.
http://www.guru3d.com/articles_pages/fallout_4_pc_graphics_performance_benchmark_review,7.html
Black Ops 3: 12% in favor of the GTX 970.
http://www.guru3d.com/articles_page..._graphics_performance_benchmark_review,8.html
I did find one exception, SW: Battlefront: 8% faster on the GTX 780 Ti.
http://www.guru3d.com/articles_page...ta_vga_graphics_performance_benchmarks,6.html
(wonder how it is now)

From other sites: Hitman beta, 13% faster on the GTX 970.

One could go on and on. The result is that the GTX 780 Ti has gone from an average 10% lead to roughly 10% behind (in new games). I think nV is simply not investing that many man-hours into Kepler optimizations, maybe none at all unless something is broken, and Kepler just benefits partly from optimizations done for Maxwell.
I think there are quite a few threads about it in the nV section.
     
  18. Carfax

    Carfax Ancient Guru

    Messages:
    2,913
    Likes Received:
    465
    GPU:
    NVidia Titan Xp
Yeah, the people saying that NVidia can't do asynchronous compute have no idea what they're talking about. Perhaps it can't do it in DX12 (this might change eventually), but it definitely works in CUDA.

Look at the PhysX benchmarks for Batman: Arkham Knight. Maxwell has a massive lead over GK110, and GK110 has a massive lead over GK104, precisely because of the increased compute capabilities.

Maxwell v2 in particular can run 1 graphics plus 31 compute tasks concurrently.

[IMG]
     
  19. Dygaza

    Dygaza Master Guru

    Messages:
    536
    Likes Received:
    0
    GPU:
    Vega 64 Liquid
Indeed, async compute does work through CUDA.

    This post has pretty good explanation:

    http://forums.anandtech.com/showpost.php?p=38057390&postcount=991

     
  20. TBPH

    TBPH Active Member

    Messages:
    78
    Likes Received:
    0
    GPU:
    MSI GTX 970 3.5+.5GB
    This benchmark is BS. I average something like 50FPS with the PhysX stuff on.
     
