Red Dead Redemption 2: PC graphics benchmark review (re-tested)

Discussion in 'Frontpage news' started by Hilbert Hagedoorn, Jul 24, 2020.

  1. Hilbert Hagedoorn

    Hilbert Hagedoorn Don Vito Corleone Staff Member

    Messages:
    48,531
    Likes Received:
    18,841
    GPU:
    AMD | NVIDIA
    LOL no clue as to why or how I typed it like that. Corrected, sorry bro :)
     
    jbscotchman likes this.
  2. HybOj

    HybOj Master Guru

    Messages:
    398
    Likes Received:
    327
    GPU:
    Gigabyte RTX3080
    Thanks a lot, I REALLY appreciate that you did that and the way you interact with the community; this is very special, respect!

    PS: Even if it's just indicative, you can see the performance is more in line now; for example, the 1080 Ti, which previously behaved sub-par, is now closer to where it should be. I think the comparison is valuable and shows the progress made on the game and drivers. Good to see!
     
  3. Fender178

    Fender178 Ancient Guru

    Messages:
    4,194
    Likes Received:
    213
    GPU:
    GTX 1070 | GTX 1060
    Yeah, seems like the 1000 series cards are starting to show their age. Even the 1080 Ti, which was a great card at the start of the RTX life cycle, has started to struggle. Man, I need that upgrade; a 3060/3070 should do nicely. It's understandable that the 1000 series cards would struggle, considering DX12 and Vulkan are not their strong suits.
     
  4. anxious_f0x

    anxious_f0x Ancient Guru

    Messages:
    1,907
    Likes Received:
    616
    GPU:
    ASUS TUF RTX 4090
    Doesn’t look like it from reading around the internet :(
     

  5. k0vasz

    k0vasz Active Member

    Messages:
    71
    Likes Received:
    26
    GPU:
    nV GTX1060 6G
    Pascal was a bit weaker under DX12/Vulkan compared to DX11, whereas GCN ran just as well in both environments. NVIDIA corrected this in Turing.
     
  6. dampflokfreund

    dampflokfreund Master Guru

    Messages:
    203
    Likes Received:
    31
    GPU:
    8600/8700M Series
    How about benchmarking the settings from Digital Foundry instead of Ultra? That would give a much better idea of how the game performs.

    The settings from DF have the best quality-to-performance ratio and look very close to Ultra, with much better performance.
     
  7. jbscotchman

    jbscotchman Guest

    Messages:
    5,871
    Likes Received:
    4,765
    GPU:
    MSI 1660 Ti Ventus
    There are so many settings to adjust and tweak that it's different for everybody. A global benchmark shows the game's true performance.
     
  8. JonasBeckman

    JonasBeckman Ancient Guru

    Messages:
    17,564
    Likes Received:
    2,961
    GPU:
    XFX 7900XTX M'310
    Ultra probably works well for later GPUs too, even if some of it shows the usual pattern of a minor difference in visual quality but a not-so-minor difference in performance, depending on all sorts of scaling and on what the GPU handles well or struggles with, especially since things like shaders can scale with resolution and show a notably higher performance cost at 2560x1440 compared to 1920x1080.
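    As a rough back-of-the-envelope illustration of that resolution scaling (my own numbers, not from the review): a purely per-pixel shader pass scales with pixel count, so 2560x1440 is already about 1.78x the work of 1920x1080 before anything else comes into play.

        # Pixel-count ratio between 2560x1440 and 1920x1080; a shader whose cost
        # is per-pixel gets roughly this much more expensive at the higher resolution.
        ratio = (2560 * 1440) / (1920 * 1080)
        print(f"{ratio:.2f}x")  # prints 1.78x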

    Would be neat if NVIDIA could take their DLSS solution one step further for the next version, since it already uses minimal data and no per-game training model. It's their tech, of course, but it would get a lot of use in upcoming games, where scaling the resolution can make for some massive performance changes. Maybe less so for benchmark purposes, but for actually playing the game it has some really interesting uses and potential gains, with the upscaling minimizing the image quality loss.


    Comparably though, some shaders and things like volumetric effects and the upcoming dabbling in ray tracing will be costly. Assassin's Creed Odyssey and its cloud quality setting hitting a near 60% framerate decrease at Ultra might just be a bit of a starter, though new GPU hardware might help a bit, or just do as usual and brute-force the performance. :p
    (I think it's still something like 40% on Very High and then it gets a bit more reasonable, although still costly, from High or Medium. Plus it has some pretty extreme image quality / performance ratios where it seems to barely change a thing above Medium or High quality, ha ha.)


    Red Dead here, I suppose, is down to the massive view distance and additional effects, even if Vulkan and D3D12 prevent another GTA IV bottleneck situation.
    Console max view distance was something like 10 - 20% of that slider, showing again that the PC version sometimes gets a hefty increase in settings and scalability, although the newer console generation and the upcoming one might change that around a bit again.
    (Well, sort of; it's been some years since games like Crysis or Half-Life 2, where the low settings actually looked a generation back and the higher settings were very future-proofed, much as that gets called out as unoptimized whenever it's attempted.)


    EDIT: But yeah, this game might work pretty well as a benchmark suite until, I don't know, Horizon Zero Dawn; that has a benchmark mode coming but might not be using anything too fancy.

    GTA VI after this, I suppose, as far as what's next from Rockstar in a year or so, and I suppose they're still doing that whole 'a year on console first' thing, heh.
    Character models and texture detailing aside as a bit of a lower point (though still good looking), it'll be interesting to see what the next-gen version of this RAGE engine can pull off when no longer limited by the PS4 or Xbox One hardware base.
     
  9. Astyanax

    Astyanax Ancient Guru

    Messages:
    17,036
    Likes Received:
    7,378
    GPU:
    GTX 1080ti
    This is incorrect, and appears to be based on a misinterpretation of the following:

    • One Network For All Games - The original DLSS required training the AI network for each new game. DLSS 2.0 trains using non-game-specific content, delivering a generalized network that works across games. This means faster game integrations, and ultimately more DLSS games.

    But further on it states:

    "Using our Neural Graphics Framework, NGX, the DLSS deep neural network is trained on a NVIDIA DGX-powered supercomputer.

    DLSS 2.0 has two primary inputs into the AI network:

    1. Low resolution, aliased images rendered by the game engine
    2. Low resolution, motion vectors from the same images -- also generated by the game engine
    Motion vectors tell us which direction objects in the scene are moving from frame to frame. We can apply these vectors to the previous high resolution output to estimate what the next frame will look like. We refer to this process as ‘temporal feedback,’ as it uses history to inform the future."

    https://www.nvidia.com/en-us/geforce/news/nvidia-dlss-2-0-a-big-leap-in-ai-rendering/

    I realise the snippet from the DLSS page also says:

    "While the original DLSS required per-game training, DLSS 2.0 offers a generalized AI network that removes the need to train for each specific game."

    But this isn't the case if you want the content to look correct.
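    To make the "temporal feedback" idea in that quote a bit more concrete, here is a rough sketch of the reprojection step it describes (my own illustration, not NVIDIA's code; the array names and the nearest-neighbour sampling are simplifications I'm assuming for brevity):

        # Illustrative sketch of temporal reprojection as described above.
        # history: previous high-res output, shape (H, W, 3)
        # motion:  per-pixel motion vectors in pixels, shape (H, W, 2)
        # The real DLSS 2.0 network combines this reprojected history with the
        # new low-res, aliased frame; only the reprojection step is shown here.
        import numpy as np

        def reproject_history(history: np.ndarray, motion: np.ndarray) -> np.ndarray:
            h, w, _ = history.shape
            ys, xs = np.meshgrid(np.arange(h), np.arange(w), indexing="ij")
            # Follow each pixel's motion vector back to where it was last frame.
            src_x = np.clip(xs - motion[..., 0], 0, w - 1).astype(int)
            src_y = np.clip(ys - motion[..., 1], 0, h - 1).astype(int)
            return history[src_y, src_x]  # nearest-neighbour fetch, for simplicity

    A DLSS-style pipeline would then feed this estimate, together with the new low-resolution aliased frame, into the trained network to produce the next high-resolution output.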
     
    Last edited: Jul 26, 2020
    JonasBeckman likes this.
  10. JonasBeckman

    JonasBeckman Ancient Guru

    Messages:
    17,564
    Likes Received:
    2,961
    GPU:
    XFX 7900XTX M'310
    Ah, that clears it up then. If NVIDIA had cleared the hurdle of per-game training down to requiring only a small amount of data, they would have been close to an almost generic implementation, or a control panel setting, that maybe wouldn't be as detailed but could still leverage the DLSS 2.0 and newer improvements without needing game-specific implementations, so it could just work on pretty much everything.

    Upscaling from a resolution that could be just a quarter of the final output, with results good enough to be well worth the performance gains: assuming that had been the case, it's a bit like an immediate win against the competition, as outside of direct comparisons and settings users would just toggle it and that would be it; big performance gains with few drawbacks, and it would probably improve further over time.

    Scaling of geometry data and of course the pixels for the targeted resolution and what it's scaling up from, plus shader performance scaling at different resolutions, along with finer detail preservation, TAA and maybe some sharpening, and you'd have a pretty strong advantage there, and not one I think AMD could easily match without making their own solution from scratch, which would take time, resources and manpower.

    Game-wise, assuming it had worked that way, that's overall a pretty hefty boost to performance, and from what's been seen of it so far it would even be usable for downsampling resolutions at a lower performance cost, or for just staying at, say, 1920x1080 or 2560x1440 but with much less demand on the GPU.


    Well, good to hear that cleared up; the wording there was a bit confusing, but it makes sense after reading your explanation. I don't think it would be possible to get away with even a game-engine-generic version of this implementation (yet?), as not everything on Unity or Unreal, to use those popular ones as examples, is going to be similar after all. Though if that could be done, it would be a pretty huge thing, at least from my view of how this works and how performance would change: there would be no real competition, as one GPU vendor would simply have a way to make things significantly faster, with continual improvements that could be rolled out over time as well.


    The upcoming generation of games and their demands on shader and geometry performance would be a big thing, although eventually newer hardware capable of D3D12 Ultimate and the Vulkan equivalents of those features would be required for anyone not on a Turing-type card.
    Or already, for that matter, what with games pushing shader effects that can halve the framerate or more; plus it scales with the user's preferred resolution, and even if it isn't 1:1 it will still incur a noticeably higher performance hit above 1920x1080. :)


    EDIT: Optimistically that's 30 - 50% extra performance just like that, which is kind of hard to match.
    Realistically, yeah, it can't be quite that easy, though it sounded like NVIDIA had cleared one of the major obstacles towards it.

    Well, that's probably enough on that; back to the actual benchmark.
    Who knows, if NVIDIA has Ampere launching early, maybe it won't be too long until the results are updated with what those cards can do. :D
     
    Last edited: Jul 26, 2020

  11. Astyanax

    Astyanax Ancient Guru

    Messages:
    17,036
    Likes Received:
    7,378
    GPU:
    GTX 1080ti
    They are also using the AI's knowledge of various items that has been fed into it from photos, so it's not entirely scene-fed, but there are still going to be things that don't exist outside of the game to sample.
     
  12. macatak

    macatak Maha Guru

    Messages:
    1,291
    Likes Received:
    47
    GPU:
    PowerColor RX 6950
    What's up with the 70% GPU usage?

    rdr2.jpg
     
  13. Undying

    Undying Ancient Guru

    Messages:
    25,477
    Likes Received:
    12,883
    GPU:
    XFX RX6800XT 16GB
    CPU bottleneck. All 8 threads maxed out.
     
  14. macatak

    macatak Maha Guru

    Messages:
    1,291
    Likes Received:
    47
    GPU:
    PowerColor RX 6950
    from the article....

    "Our test PC was outfitted with this heavy set up to prevent and remove CPU bottlenecks that could influence high-end graphics card GPU scores."

    i9-9900K # of Threads 16

    "DX12 and Vulkan
    Much to my surprise, the game supports the Vulkan API. We found no significant enough performance differences though after a quick run back and forth. For this test (and we are very GPU bound), DX12 is marginally faster."

    more like CPU Bound
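    Not from the article, but for anyone who wants to check this on their own machine, here is a minimal sketch (assuming an NVIDIA card with nvidia-smi on the PATH and the psutil Python package installed) that logs GPU utilization next to per-core CPU load while the game runs; sustained GPU usage well below ~95% while one or more cores sit pegged points at a CPU-side limit rather than a GPU-bound scenario.

        # Log GPU utilization alongside per-core CPU load once per second.
        import subprocess

        import psutil

        def gpu_utilization_percent() -> int:
            # Query the first GPU's utilization via nvidia-smi.
            out = subprocess.check_output(
                ["nvidia-smi", "--query-gpu=utilization.gpu",
                 "--format=csv,noheader,nounits"],
                text=True,
            )
            return int(out.strip().splitlines()[0])

        while True:
            per_core = psutil.cpu_percent(interval=1.0, percpu=True)  # blocks ~1 s
            gpu = gpu_utilization_percent()
            print(f"GPU {gpu:3d}% | busiest core {max(per_core):5.1f}% | {per_core}")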
     
  15. erho

    erho Member

    Messages:
    35
    Likes Received:
    20
    GPU:
    Pascal
    In the part I bolded, they are not talking about the training phase, but about DLSS 2.0 running in-game on the end-user's hardware.

    Imo, Nvidia is clearly stating "no per-game training" everywhere and I don't see any reason to think DLSS 2.0 would work otherwise.
     

  16. Astyanax

    Astyanax Ancient Guru

    Messages:
    17,036
    Likes Received:
    7,378
    GPU:
    GTX 1080ti
    Yes they are
     
  17. erho

    erho Member

    Messages:
    35
    Likes Received:
    20
    GPU:
    Pascal
    So would you say Nvidia's developers are lying when they say DLSS 2.0 is a fully generalized model and that they don't need to collect new training data when implementing it in new games? See 4:23 (couldn't get timestamp working):

     
  18. Astyanax

    Astyanax Ancient Guru

    Messages:
    17,036
    Likes Received:
    7,378
    GPU:
    GTX 1080ti
    Because you can do it right and train for your own engine, or you can do it half-arsed and let the cloud sort it out.
     
  19. erho

    erho Member

    Messages:
    35
    Likes Received:
    20
    GPU:
    Pascal
    Well, maybe you know what you are talking about, but Nvidia's senior researchers' word is the one I'll believe this time. ¯\_(ツ)_/¯
     
    Gandul likes this.
  20. Loophole35

    Loophole35 Guest

    Messages:
    9,797
    Likes Received:
    1,161
    GPU:
    EVGA 1080ti SC
    I think there is a bottleneck but it ain't CPU. My guess would be RAM. You would be surprised how many games are starting to see RAM bottlenecks now.
     
    Deleted member 213629 likes this.
