9 more games adding support for NVIDIA DLSS

Discussion in 'Frontpage news' started by Hilbert Hagedoorn, Sep 13, 2018.

  1. sverek

    sverek Ancient Guru

    Messages:
    6,074
    Likes Received:
    2,952
    GPU:
    NOVIDIA -0.5GB
DLSS is interesting. I bashed DLSS A LOT because it's up to Nvidia and developers to make a game work with it, and only Nvidia profits by selling it.

However! It brings new ideas to the table. Developers might be able to simply rent GPU cloud machines and add DLSS support to their games in the future.
I hope AMD steps up and provides a similar technology with open standards, as it has always done, which will hopefully work on all GPUs.
     
  2. Fox2232

    Fox2232 Ancient Guru

    Messages:
    11,525
    Likes Received:
    3,204
    GPU:
    6900XT+AW@240Hz
Compare the target scope of those demos you worked with to all the different things you see in each game. The smallest deviation you get in a game is in a simple platformer.
Take any real 3D game and that dataset goes out the window with the first shot, explosion, HDR effect, ...

There is a reason why nVidia is not demoing godly AI deblocking on videos: the scale of the data and its variance is beyond what they can do. So, we are back to the beginning. DLAA works at mere pixel scale because that can be applied universally to known edges. The network is not complex, and the required dataset is small. Much smaller than your examples.

In time, nVidia may even drop that supercomputer learning-phase marketing and just enable the switch for all games, because there is really not much difference between an edge that needs AA in one game and one in another.
     
    Dragam1337 likes this.
  3. PrMinisterGR

    PrMinisterGR Ancient Guru

    Messages:
    7,529
    Likes Received:
    524
    GPU:
    Inno3D RTX 3090
    This is completely wrong. You can.
[images]
Fox, you are closer, but it is still inference. For something that needs to be as fast and pixel-accurate as AA/upscaling does, they will obviously need specifically tailored network training data, but that's not necessarily huge.
     
  4. Fox2232

    Fox2232 Ancient Guru

    Messages:
    11,525
    Likes Received:
    3,204
    GPU:
    6900XT+AW@240Hz
I did not state that it is not inference, quite the contrary. One of my posts clearly states that while the behavior may look like interpolation due to the small scale, it is still inference.
     
    Last edited: Sep 16, 2018

  5. PrMinisterGR

    PrMinisterGR Ancient Guru

    Messages:
    7,529
    Likes Received:
    524
    GPU:
    Inno3D RTX 3090
    Ah ok then, I didn't get you. BTW I hope we see this in the future from any vendor, not just nvidia.
     
  6. Fox2232

    Fox2232 Ancient Guru

    Messages:
    11,525
    Likes Received:
    3,204
    GPU:
    6900XT+AW@240Hz
All those features introduced are there to do something faster than before at the cost of visual fidelity.
Is it needed for 1080p/1440p? Definitely not. At 4K/8K, yes. But who really wants to go there?
A 75Hz 4K screen costs more than a GTX 1080 Ti. Buying a 144Hz 4K screen effectively doubles the investment in a new gaming computer.
8K TVs, which could really benefit here, are way outside the price realm of even a very expensive gaming PC. What will the price of 8K monitors be? Likely nothing pleasant.

So while it is nice that nVidia is making features for someone who spends $3000~6000 on a monitor/TV, I prefer features for those of us on the ground.
And that's exactly what their push for raytracing is. That's the good thing here. I expect that AMD will adopt those 1/2 ~ 1/16 precision shaders, as there is no way to beat nV in performance with 1:1 precision. AMD will likely come up with quite a few things like that too, since their engineers may have different ideas.
I just hope it will not screw up my 1080p gaming on the visual side in a similar fashion to what TAA did.
     
  7. yasamoka

    yasamoka Ancient Guru

    Messages:
    4,847
    Likes Received:
    242
    GPU:
    EVGA GTX 1080Ti SC
You're all over the place. Explain clearly what your point is, since you've been moving the goalposts for two days now.

I don't understand how you can tell how large the dataset required for training DLSS neural networks is. A single 64x-supersampled 4K image is ~1.5GB in size at 8 bits per color channel. If you only need thousands of those, that's already in the TBs. Your previous point alluded to the idea that Nvidia would have to store a huge amount of data in our drivers, which is blatantly false given the information I posted recently. Now what's "small" according to you? And what exactly do you propose as the alternative approach that Nvidia cannot do? Sounds like a strawman to me.

You seem to say Nvidia cannot perform object recognition in all games. Duh! And that DLSS is pixel level. What do you even mean by pixel level? And how do you know, again, what sort of features Nvidia's DLSS neural networks are looking for? What would you say if I told you that if Nvidia ends up retraining their neural network from scratch for each game, that's a new set of features detected every single time? And what if I told you that a convolutional neural network might detect a feature and orient a filter towards that feature in a way we would never have thought possible or guessed? That is the entire point of a neural network - we are not manually involved in constructing it and telling it what it needs to see and recognize.
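(A quick back-of-the-envelope check of that figure in Python, assuming "x64" means an 8x8 supersampling grid per pixel and three 8-bit color channels; the actual storage format NVIDIA uses is not public.)

[CODE]
# Rough size of a single 64x-supersampled 4K ground-truth frame
# (assumption: 8x8 samples per pixel, 3 color channels, 8 bits per channel).
width, height = 3840, 2160          # 4K UHD
samples_per_axis = 8                # 8 x 8 = 64 samples per pixel
channels, bytes_per_channel = 3, 1  # RGB at 8 bits per channel

size_bytes = (width * samples_per_axis) * (height * samples_per_axis) * channels * bytes_per_channel
print(f"{size_bytes / 1024**3:.2f} GiB per frame")        # ~1.48 GiB
print(f"{size_bytes * 1000 / 1024**4:.2f} TiB for 1000")  # ~1.45 TiB -> "already in the TBs"
[/CODE]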

    Again, read up carefully on how convolutional neural networks work. The only reason other people aren't actively chasing down the misinformation is that they don't understand what you're trying to say.
     
  8. Fox2232

    Fox2232 Ancient Guru

    Messages:
    11,525
    Likes Received:
    3,204
    GPU:
    6900XT+AW@240Hz
Actually, it is not me who jumps from one stream of thought to another randomly. The entire time I have been writing about the client (the gamer in this case). And as such, I do not give a F* about the training part, as no gamer needs to. (And they should not worry about it, as they are buying a product, not cloud computational capacity of any sort.)

I am sure it was clear that the dataset I wrote about was not the one required for training. So why are you even bringing it up here? Or is it because I wrote about nVidia possibly dropping that marketing phrase which they use just to make it all look bigger? It would be really funny if you meant that. Or tragic. But anyway, it looks like you still take almost everything I write out of context. And then create some weird contradiction. (I lost interest in correcting those "misunderstood" things you bring up, too wasteful.)
Looks like disagreeing for the sake of disagreement.
     
  9. jaggerwild

    jaggerwild Master Guru

    Messages:
    789
    Likes Received:
    283
    GPU:
    EVGA RTX 2070 SUP
    @ $1500 you can shove it Green team!!!!
     
  10. PrMinisterGR

    PrMinisterGR Ancient Guru

    Messages:
    7,529
    Likes Received:
    524
    GPU:
    Inno3D RTX 3090
The visual fidelity argument is a bit misguided, especially since all of the rendering tricks are hacks. Well-optimized games just lower their visual fidelity in ways we cannot perceive (see the Frostbite games and their almost perfect LOD transitions, DOOM and the way it reduces shadow and light quality depending on what's happening, etc.).
It depends on how much any developer wants to invest in effects, and what the target resolution is. It might be "needed", according to some.
I never understood why people buy $1,000+ gaming monitor crap, since you can get an OLED TV for less, with sub-21ms real latency in HDR, that will blow any monitor away. Also, you assume that perfect 4K60 has been reached by this generation, and I am absolutely certain it has not.

DLSS is the exact opposite of that. Depending on how good it is (and if NVIDIA's AI work is any indicator, it is great), people at the lower end of the GPU spectrum will be able to hold on to their hardware longer. This gen the tech is reserved for the larger GPUs due to the 16nm stagnation, but next year I can see them putting it in pretty much everything.

Turing can do all of the 16/8/4-bit precision tricks that Vega can do now, so AMD does not have that advantage any more.
     
    Last edited: Sep 17, 2018

  11. JonasBeckman

    JonasBeckman Ancient Guru

    Messages:
    17,340
    Likes Received:
    2,685
    GPU:
    MSI 6800 "Vanilla"
Pascal could also do 16-bit float, couldn't it, without any other steps; Vega just had the means to do 2x operations, but the overall gains from that were probably fairly small.
(And Far Cry 5 is, as far as I know, the only explicitly listed game actually utilizing this optimization.)

EDIT: Well, it doesn't matter too much, I suppose, and for DLSS, if it's AI-trained, the only limiters would be the time this could take and the complexity of it.
(Though they have a good lineup of initial games using it; they just have to keep at it and get more onto the program to keep it going well, as far as gaming use is concerned at least.)

If some later patch decides to re-arrange objects or reduce detail, then that would no longer match the AI algorithm, wouldn't it? And it would have to be updated again, however long that operation actually takes and however much of the existing data could still be reused. I suppose most games only make minor changes most of the time, though, once the content is in beta at least and mostly finalized.
(With some exceptions: Homefront: The Revolution and Assassin's Creed Unity, I believe, had more extensive changes via patches, altering or removing the detailing and complexity of certain areas due to performance concerns, such as clutter objects.)


EDIT: Well, for the fairly narrow view I have of the development process, at least, ha ha.

    Pre-alpha -> Development.
    Alpha -> Content finalization.
    Beta -> Testing and polishing.
Gold/Retail/Release, whatever -> DNF-mark (Do Not Fix) the remaining open issues and ship! (Patch it up if sales are strong, otherwise dump.)
     
    Last edited: Sep 17, 2018
  12. Fox2232

    Fox2232 Ancient Guru

    Messages:
    11,525
    Likes Received:
    3,204
    GPU:
    6900XT+AW@240Hz
The 1st sentence hinted that the post is not about DLSS alone, as it starts with: "All those features introduced...".
Then it is not like 7nm cards matter when you are about to get the 2070 as the lowest card capable of doing DLSS efficiently (since you considered that to be the topic).
So, "DLSS is the exact opposite of that." will become relevant when there is a cheap RTX card. Till then I am talking about what nVidia is about to release now, which is a set of relatively expensive cards targeted neither at low-end gamers nor at the mainstream.
     
    PrMinisterGR likes this.
  13. yasamoka

    yasamoka Ancient Guru

    Messages:
    4,847
    Likes Received:
    242
    GPU:
    EVGA GTX 1080Ti SC
    If I indeed misunderstood, it's because you have no idea what you're trying to say. If you do, then spell it out boldly and clearly.

    I will ask the question again: 1) what sort of data on the client side do you think is required for DLSS to work for a particular game?

    Older questions you never answered properly:

    2) How do you think DLSS works?

    3) Where did you get the information on how DLSS works from?
     
  14. Fox2232

    Fox2232 Ancient Guru

    Messages:
    11,525
    Likes Received:
    3,204
    GPU:
    6900XT+AW@240Hz
1) In contrast to what you and @JonasBeckman above believe, it is not a set of small-scale pictures identifying objects and how their edges should look.
It is learning how edges themselves should look vs. aliased edges at different angles. (For the 5th time I'll repeat myself: small scale, at mere pixels. You misunderstood it each and every time... I am sure on purpose, as the alternative would be...)

2) Inference of an aliased edge into the expected result, based on the known result for that given angle/color information.

3) nVidia's materials, which mention quite a few things between the lines. (Requires you to know that a certain something requires or prevents a certain other thing.)
And not expecting nVidia to do it in a stupid way. (The one-trick-pony examples you posted are the stupid way of dealing with AA. Using particular objects is a hammer when you can do better with a scalpel and leave a smaller footprint.)

That's why JonasBeckman's fear of needing to retrain after a few object/geometry/effect changes in a game is far from real. And that's why nV can use DLSS on any game which has absolutely no AA, as long as they have a learning set done on no-AA to AA content.
And then they would need a different set for games which have a certain level of AA/filtering altering edges so that they are not as sharply aliased to begin with.

In other words: "One DLSS network's settings can fit hundreds of different games, as long as the user selects the properly matching option in the game's menu."
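(To make that claim concrete: a toy sketch in Python/PyTorch of the kind of model described here, a small convolutional network mapping aliased edge patches to anti-aliased ones, so that one trained set of weights could in principle be reused across games with similar edge statistics. The layer and patch sizes are arbitrary, and this illustrates the argument only, not NVIDIA's actual DLSS pipeline.)

[CODE]
# Toy sketch (not NVIDIA's implementation): a small convolutional network that
# maps aliased luminance patches to anti-aliased ones. If the problem really is
# at "mere pixel scale" on edges, one trained set of weights like this could in
# principle be shared across games with similar edge statistics.
import torch
import torch.nn as nn

class EdgeAANet(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 32, kernel_size=3, padding=1),  # pick up local edge orientation
            nn.ReLU(),
            nn.Conv2d(32, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.Conv2d(32, 1, kernel_size=3, padding=1),  # predict smoothed edge coverage
        )

    def forward(self, x):
        return self.net(x)

# Training would pair aliased patches (no-AA renders) with ground-truth
# anti-aliased patches of the same resolution; random tensors stand in here.
model = EdgeAANet()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

aliased = torch.rand(16, 1, 32, 32)       # batch of 32x32 single-channel patches
ground_truth = torch.rand(16, 1, 32, 32)  # stand-in for the AA'd versions

optimizer.zero_grad()
loss = loss_fn(model(aliased), ground_truth)
loss.backward()
optimizer.step()
[/CODE]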
     
    JonasBeckman likes this.
  15. XenthorX

    XenthorX Ancient Guru

    Messages:
    3,643
    Likes Received:
    1,606
    GPU:
    3090 Gaming X Trio
Glad to see lots of Unreal Engine 4 games in the list; I assume lots of devs are going to be able to benefit from DLSS with proper UE4 support.
     

  16. JonasBeckman

    JonasBeckman Ancient Guru

    Messages:
    17,340
    Likes Received:
    2,685
    GPU:
    MSI 6800 "Vanilla"
That's a lot more flexible than I had expected, and it counters one of the bigger risks of things changing after the initial learning has been completed. :)
I was assuming it would basically sample the entire scene (minus actors) and then use this as the basis for scaling the image to near picture-perfect from a much-higher-sampled source, or rather several samples, using those for reconstruction and scaling. But if that can be simplified to just "knowing" edges or aliasing, then it's much easier, or at least more flexible, in implementation, and quite some impressive coding work too for being able to achieve something like that. :)
(More finesse, so to speak: not just brute force, taking several terabytes worth of data and who knows how long to process, and having the AI network slowly learn game by game, with all the fine-tuning and changes needed for that method.)


EDIT: Now if it can also work with different materials and shaders, that would be really interesting (more reading needed! It seems like it could be accomplished, though), but even if it's just geometry and edges, that's already a good amount of aliasing able to be removed, and with fewer of the side effects of traditional shader anti-aliasing or temporal anti-aliasing, from what I saw in the earlier whitepaper documentation for Turing.

EDIT: Meaning it's less of an extreme form of AI-learned super-sampling and more intelligent in how it works. Or that's how I'd describe what I assumed it would be doing.

Well, that quite clearly shows I have more learning and reading to do. It's quite an interesting little feat and also quite new, so it's going to be really interesting to see how this works once there are games and other software out that utilize this functionality, some time after the hardware itself has launched.
     
    Last edited: Sep 17, 2018
  17. southamptonfc

    southamptonfc Ancient Guru

    Messages:
    1,876
    Likes Received:
    97
    GPU:
    STRIX 2080 Super
De-noising is not the same use case. I still say it won't be better than realtime AA. Time will tell.
     
  18. Fox2232

    Fox2232 Ancient Guru

    Messages:
    11,525
    Likes Received:
    3,204
    GPU:
    6900XT+AW@240Hz
Not better than MSAA and its siblings. But I think it will deliver a much better level of edge AA than FXAA (maybe with similar weaknesses, where it is not applied to some edges).
And they are incorporating a temporal component, like TAA, which does a marvelous job on shimmering surfaces/edges. Considering that and the small performance impact, I think people will prefer it over MSAA.
     
  19. yasamoka

    yasamoka Ancient Guru

    Messages:
    4,847
    Likes Received:
    242
    GPU:
    EVGA GTX 1080Ti SC
    I never said the data stored on the client would have to be a set of small scale pictures identifying objects and edges. I never, ever said that. I'll repeat - I said the training set would have a set of images - just like any other neural network used for image processing.

You also didn't answer the question directly. The answer you gave is this:
How does this answer what sort of data? You answered a question of "what" with "how".

I want a straightforward answer. What sort of data would Nvidia / the developer need to couple with the driver / game in order for DLSS, which is, I remind you, a neural network, to work for that particular game?

Your English certainly doesn't help with understanding you. Try to be a bit clearer. You talk in a weird, convoluted way.

That would have been an edge detection neural network. There would be no need for all this supersampling and prediction of a supersampled image. Why would you go and build a whole model for predicting supersampled images when you only care about the edges, and could have fed the neural network a ground truth of downsampled / processed images at the same resolution as the input resolution? Can you clarify why you believe this is merely an "inference of aliased edges"?

    What does "inference of aliased edge"? Dude, please, enough redefining your vocabulary and let me make this very, very clear:

Inference means the act of feeding an input image to the input layer of the neural network, multiplying the output of each perceptron by the corresponding weight and adding the corresponding bias for each connected perceptron in the next layer, and so on for each layer until you get to the output layer, which produces the predicted output image. There is no other definition of inference in this context, and you cannot use it however you please without sounding like you don't know what you're talking about.
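(That definition of inference is just a forward pass; a minimal fully-connected sketch in Python/NumPy, with arbitrary layer sizes, would look like this.)

[CODE]
# Minimal forward pass ("inference") through a fully-connected network:
# each layer multiplies its inputs by a weight matrix, adds a bias, and
# applies an activation, until the output layer produces the prediction.
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(0.0, x)

# Arbitrary layer sizes: flattened input image -> hidden layer -> output image.
layer_sizes = [64, 128, 64]
weights = [rng.standard_normal((m, n)) * 0.1 for m, n in zip(layer_sizes[:-1], layer_sizes[1:])]
biases = [np.zeros(n) for n in layer_sizes[1:]]

def infer(x):
    for i, (W, b) in enumerate(zip(weights, biases)):
        x = x @ W + b
        if i < len(weights) - 1:   # activation on hidden layers only
            x = relu(x)
    return x

input_image = rng.random(64)   # stand-in for a flattened input image
predicted = infer(input_image) # the "predicted output image"
print(predicted.shape)         # (64,)
[/CODE]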

Any link to any Nvidia materials would do at this point, as my bar for your evidence is currently extremely low.

Sheesh. Object detection is something any convolutional neural network has to perform in order to output the results you need. This is no hammer vs. scalpel; this is basic convolutional neural networks. You know, Fox, why don't you get into the research industry and knock out some awesome, steaming-hot research papers that shatter the Earth and redefine all of these inefficient approaches we tend to take in this whole industry?

I ask this of you because I'm currently doing a research project where I used a convolutional neural network for handwritten digit recognition as practice, and honestly, I think you might have an idea that reduces this "hammer" of having to set up 32 filters in that neural network that essentially facilitate object recognition (oh! This is the digit 3!) to the "scalpel" of maybe detecting some flow of the hand in an easy and straightforward way. Share, share, this could be grand!
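(For reference, the kind of practice network described here, handwritten digit recognition with 32 convolutional filters in the first layer, is only a few lines of PyTorch; the architecture below is illustrative, not the poster's actual project code.)

[CODE]
# Illustrative digit-recognition CNN with 32 filters in its first conv layer
# (roughly the kind of practice network described above).
import torch
import torch.nn as nn

class DigitNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 32, kernel_size=3, padding=1),  # 32 learned filters
            nn.ReLU(),
            nn.MaxPool2d(2),                             # 28x28 -> 14x14
        )
        self.classifier = nn.Linear(32 * 14 * 14, 10)    # 10 digit classes

    def forward(self, x):
        x = self.features(x)
        return self.classifier(x.flatten(1))

model = DigitNet()
dummy = torch.rand(8, 1, 28, 28)   # batch of 28x28 grayscale digits
logits = model(dummy)
print(logits.shape)                # torch.Size([8, 10])
[/CODE]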

Neither you, Jonas, nor I know this for sure. You termed it Jonas's "fear", yet he merely suspects, or has reason to suspect. What's funny is that only you are asserting that it would indeed be possible to have this work on hundreds of different games as long as the proper "option" is selected in the menu.

I ask you again: what sort of options would adjust a convolutional neural network's weights and biases to be properly oriented towards one particular game or another? This is a very straightforward question that deserves a very straightforward answer: how do you feed a set of what parameters, chosen from a drop-down menu, INTO a neural network SUCH THAT it becomes OF HIGH ACCURACY for a PARTICULAR TARGET?
     
    Agent-A01 likes this.
  20. vbetts

    vbetts Don Vincenzo Staff Member

    Messages:
    15,107
    Likes Received:
    1,685
    GPU:
    GTX 1080 Ti
Why in lord's name is it the same people in the same threads?!

No more. One and only warning in this topic.
     
