Discussion in 'Frontpage news' started by Hilbert Hagedoorn, Jul 24, 2020.
LOL no clue as to why or how I typed it like that. Corrected, sorry bro
Thanks a lot, I REALLY appreciate that you did that and the way you interact with the community, this is very special, respect!
PS: even if it's just indicative, you can see that performance is more in check now. For example, the 1080 Ti, which behaved sub-par before, is now closer to where it should be. I think the comparison is valuable and shows the progress made on the game and drivers. Good to see!
Yeah, seems like the 1000 series cards are starting to show their age. Even the 1080 Ti, which was a great card at the beginning of the RTX life cycle, has started to struggle. Man, I need that upgrade; a 3060/3070 should do nicely. It is understandable that the 1000 series cards would struggle, considering DX12 and Vulkan are not their strong suits.
Doesn’t look like it from reading around the internet
Pascal was a bit weaker using DX12/Vulkan compared to DX11, whereas GCN ran just as well in both environments. NVIDIA corrected this with Turing.
How about benchmarking the settings from Digital Foundry instead of Ultra? That would give a much better idea of how the game performs.
The settings from DF have the best quality-to-performance ratio and look very close to Ultra, with much better performance.
There are so many settings to adjust and tweak that it's different for everybody. A global benchmark shows the game's true performance.
Ultra probably works well for later GPUs too, even if some settings show the usual pattern: a minor difference in visual quality but a not-so-minor difference in performance. A lot depends on what the GPU is good at or has problems with, especially since some effects like shaders scale with resolution and can show a notably higher performance cost at 2560x1440 compared to 1920x1080.
Would be neat if NVIDIA could take their DLSS solution one step further for the next version, since it already uses minimal data and no per-game training model. It's their tech, of course, but it would get a lot of use in upcoming games, where scaling the resolution can make for massive performance changes. Maybe less so for benchmark purposes, but for actually playing the game it has some really interesting uses and potential gains, coupled with the upscaling to minimize image quality loss.
Comparably though, some shaders and effects like volumetrics, and the upcoming dabbling in ray tracing, will be costly. Assassin's Creed Odyssey and its cloud quality setting, hitting a near 60% framerate decrease at Ultra, might just be a bit of a starter; new GPU hardware might help a bit, or just do as usual and brute-force the performance.
(I think it's still something like 40% on Very High, and then it gets a bit more reasonable, although still costly, from High or Medium down. Plus it has some pretty extreme image quality / performance ratios, where it seems to barely change a thing above Medium or High quality, ha ha.)
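To put those percentages in perspective, here is a back-of-envelope fps calculation using the rough figures quoted above (they are indicative only, and the 60 fps baseline is an assumption for illustration):

```python
# Back-of-envelope fps arithmetic for a costly setting, using the
# rough percentages quoted in the post (indicative only).
def fps_after(base_fps, cost_fraction):
    """Framerate left after a setting eats a given fraction of it."""
    return base_fps * (1 - cost_fraction)

ultra = fps_after(60, 0.60)   # ~60% decrease at Ultra
vhigh = fps_after(60, 0.40)   # ~40% decrease on Very High
```

So from a 60 fps baseline, that one slider alone would leave roughly 24 fps at Ultra and 36 fps at Very High, which is why the image quality / performance ratio matters so much here.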
Red here, I suppose, is down to the massive view distance and additional effects, even if Vulkan and D3D12 prevent another GTA IV bottleneck situation.
Console max view distance was something like 10 - 20% of that slider, again showing that the PC version sometimes gets a hefty increase in settings and scalability, although the newer console generation and the upcoming one might change that around a bit.
(Well, sorta. It's been some years since games like Crysis or Half-Life 2, where the low settings actually looked like a generation back and the higher settings were very future-proofed, much as that gets called out as unoptimized whenever it's attempted.)
EDIT: But yeah, this game might work pretty well as a benchmark suite until, I don't know, Horizon Zero Dawn? That has some benchmark mode coming out, but might not be using anything too fancy.
GTA VI after this, I suppose, as far as what's next for Rockstar in a year or so, and then I suppose they are still doing that whole year-on-console-first thing, heh.
Character models and texture detailing aside as a bit of a lower point, though still good looking, it'll be interesting to see what the next-gen version of the RAGE engine can pull off when no longer limited by the PS4 or Xbox One hardware base.
This is incorrect, and appears to be based on a misinterpretation of the following:
One Network For All Games - The original DLSS required training the AI network for each new game. DLSS 2.0 trains using non-game-specific content, delivering a generalized network that works across games. This means faster game integrations, and ultimately more DLSS games.
But further on it states:
"Using our Neural Graphics Framework, NGX, the DLSS deep neural network is trained on a NVIDIA DGX-powered supercomputer.
DLSS 2.0 has two primary inputs into the AI network:
Low resolution, aliased images rendered by the game engine
Low resolution, motion vectors from the same images -- also generated by the game engine
Motion vectors tell us which direction objects in the scene are moving from frame to frame. We can apply these vectors to the previous high resolution output to estimate what the next frame will look like. We refer to this process as ‘temporal feedback,’ as it uses history to inform the future."
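The quoted "temporal feedback" idea — warping the previous frame along per-pixel motion vectors to estimate the next one — can be sketched in a few lines. This is a toy nearest-neighbour reprojection, not NVIDIA's implementation; the array names and the (dy, dx) vector convention are my own assumptions:

```python
import numpy as np

def reproject(prev_frame, motion_vectors):
    """Warp the previous frame along per-pixel motion vectors.

    prev_frame:      (H, W) image from the last frame
    motion_vectors:  (H, W, 2) per-pixel (dy, dx) offsets, in pixels,
                     pointing from each current pixel back to where it
                     was in the previous frame (a toy convention)
    """
    h, w = prev_frame.shape
    ys, xs = np.mgrid[0:h, 0:w]
    # Look up where each current pixel was in the previous frame,
    # clamping at the image border.
    src_y = np.clip(ys + motion_vectors[..., 0], 0, h - 1).astype(int)
    src_x = np.clip(xs + motion_vectors[..., 1], 0, w - 1).astype(int)
    return prev_frame[src_y, src_x]

# A 4x4 frame whose content shifts one pixel right between frames:
prev = np.arange(16.0).reshape(4, 4)
mv = np.zeros((4, 4, 2))
mv[..., 1] = -1  # current pixel (y, x) came from (y, x-1) last frame
est = reproject(prev, mv)
```

The real thing accumulates high-resolution history and lets the network decide how much of the reprojected estimate to trust per pixel, but the gather step above is the core of what motion vectors buy you.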
I realise the snippet from the DLSS page also says:
"While the original DLSS required per-game training, DLSS 2.0 offers a generalized AI network that removes the need to train for each specific game."
But this isn't the case if you want the content to look correct.
Ah, that clears it up then. If NVIDIA had cleared the hurdle of per-game training down to requiring only a small amount of data, they would have been close to an almost generic implementation, or a setting in the control panel that maybe wouldn't be as detailed but could still leverage the DLSS 2.0 and newer improvements without game-specific implementations being needed, so it could just work on pretty much everything.
Upscaling from a resolution that could be just a quarter of the final output, with results good enough to be well worth the performance gains: assuming that had been the case, it's a bit like an immediate win against the competition. Outside of direct comparisons and settings, users would just toggle it and that'd be it: big performance gains with few drawbacks, and it'd probably improve further over time.
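The "quarter of the output" figure follows from simple pixel arithmetic: halving the resolution per axis quarters the number of pixels shaded. A quick sketch (the 0.5 scale factor is illustrative, not an official DLSS mode definition):

```python
# Pixel-count arithmetic for upscaling: rendering at half the
# resolution per axis means only a quarter of the pixels are shaded.
def render_res(out_w, out_h, scale):
    """Internal render resolution for a given per-axis scale factor."""
    return int(out_w * scale), int(out_h * scale)

out_w, out_h = 3840, 2160              # 4K output target
w, h = render_res(out_w, out_h, 0.5)   # internal render resolution
pixels_saved = 1 - (w * h) / (out_w * out_h)
```

For a 4K target that means shading only a 1920x1080 image, i.e. 75% fewer pixels per frame, which is where the headline performance gains come from, before any upscaling overhead is paid back.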
Scaling of geometry data and of course the pixels for the target resolution and what it's scaling up from, plus shader performance scaling at different resolutions, along with finer detail preservation, TAA, and maybe some sharpening: you'd have a pretty strong advantage there, and not one I think AMD could easily match without building their own solution from scratch, which would take time, resources, and manpower.
Game-wise, assuming it would have worked, then yeah, that's overall a pretty hefty boost to performance. From what's been seen of it so far, it would even be usable for downsampled resolutions at a lower performance cost, or for just staying at say 1920x1080 or 2560x1440 but with much less demand on the GPU.
Well, good to hear that got cleared up. The wording is a bit confusing, but it makes sense after reading your explanation. I don't think it'd be possible to get away with even a game-engine-generic version of this implementation (yet?), as not everything on Unity or Unreal, to use those popular ones as examples, is going to be similar. Though if that could be done, it'd be a pretty huge thing, at least from my view of how this works and how performance would change: there would be no real competition, as one GPU vendor would just have a way to make things significantly faster, with continual improvements implemented over time as well.
The upcoming generation of games and their demand on shader and geometry performance would be a big thing, although eventually newer hardware capable of D3D12 Ultimate, and the Vulkan equivalents of those features, will be required for anyone not on a Turing-type card.
Or already, for that matter, what with games pushing shader effects that can halve framerate or more. Plus it scales with the user's preferred resolution: even if it might not be 1:1, it will still incur a noticeably higher performance hit above 1920x1080.
EDIT: Optimistically, that's 30 - 50% extra performance just like that; kinda hard to match.
Realistically, yeah, it can't be quite that easy, though it sounded like NVIDIA had cleared one of the major obstacles towards this.
Well that probably is enough on that and back to the actual benchmark.
Who knows, maybe if NVIDIA has Ampere launching early, it won't be too long until the results are updated with what those cards can do.
They are also using the AI's knowledge of various items that have been fed into it from photos, so it's not entirely scene-fed, but there are still going to be things that don't exist outside of the game to sample.
What's up with the 70% GPU usage?
CPU bottleneck. All 8 threads maxed out.
from the article....
"Our test PC was outfitted with this heavy set up to prevent and remove CPU bottlenecks that could influence high-end graphics card GPU scores."
i9-9900K: 16 threads
"DX12 and Vulkan
Much to my surprise, the game supports the Vulkan API. We found no significant enough performance differences though after a quick run back and forth. For this test (and we are very GPU bound), DX12 is marginally faster."
more like CPU Bound
In the part I bolded, they are not talking about the training phase, but about DLSS 2.0 running in-game on the end-user's hardware.
Imo, Nvidia is clearly stating "no per-game training" everywhere and I don't see any reason to think DLSS 2.0 would work otherwise.
Yes they are
So would you say Nvidia's developers are lying when they say DLSS 2.0 is a fully generalized model and that they don't need to collect new training data when implementing it in new games? See 4:23 (couldn't get the timestamp working):
Because you can do it right and train on your own engine, or you can do it half-arsed and let the cloud sort it out.
Well, maybe you know what you are talking about, but Nvidia's senior researchers word is the one I'll believe this time. ¯\_(ツ)_/¯
I think there is a bottleneck, but it ain't CPU. My guess would be RAM. You would be surprised how many games are starting to see RAM bottlenecks now.