9 more games adding support for NVIDIA DLSS

Discussion in 'Frontpage news' started by Hilbert Hagedoorn, Sep 13, 2018.

  1. Valken

    Valken Ancient Guru

    Messages:
    2,924
    Likes Received:
    901
    GPU:
    Forsa 1060 3GB Temp GPU
    Improve or increase the ROP budget and raw performance in lieu of SFX tech features.

    It sounds like DLSS is pre-baked AA (probably for fixed scene geometry), while leaving the basic AA budget to player / AI / moving models. If they pre-baked it at a very high level, say 8K or 12K at 64x or 128x SSAA, it might be alright, but again, this is dependent on Nvidia's AI farm to create the AA mask or filters.

    It seems devs don't want to, or don't have the budget to, do this for their games. But we should wait and compare AA performance vs. quality to see how it goes.

    After experiencing RGSSAA on AMD, you get spoiled by the quality.
     
  2. fry178

    fry178 Ancient Guru

    Messages:
    2,078
    Likes Received:
    379
    GPU:
    Aorus 2080S WB


    So you're saying you're not happy that company A doesn't care to spend money/time on game optimization for brands B and C?
    Lol, that's not how capitalism works...
     
    Last edited: Sep 15, 2018
  3. PrMinisterGR

    PrMinisterGR Ancient Guru

    Messages:
    8,129
    Likes Received:
    971
    GPU:
    Inno3D RTX 3090
    Machine learning is nothing like that. There is no prebaking, just a huge, perfect algorithm tailored for the specific game. There is nothing like it. I wouldn't be surprised if it looks better than traditional supersampled AA.
     
  4. Noisiv

    Noisiv Ancient Guru

    Messages:
    8,230
    Likes Received:
    1,494
    GPU:
    2070 Super

    What Nvidia is doing is training their supercomputer on the game's ultra-quality screenshots rendered at 64xSS, and the trained DNN is then passed to the consumer's Tensor cores for inference (AA).
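
    Roughly, the split looks like this (a toy numpy sketch with made-up data, shapes, and file name - nothing to do with Nvidia's actual DLSS network): the expensive learning happens once, offline, and only the learned weights travel to the consumer GPU.

```python
import numpy as np

rng = np.random.default_rng(0)

# Fake training data: 8x8 aliased patches (inputs) and stand-ins for their
# 64xSS "ground truth" targets. Everything here is made up for illustration.
X = rng.random((512, 64))                                    # aliased patches, flattened
Y = np.clip(X + 0.05 * rng.standard_normal(X.shape), 0, 1)   # pretend supersampled targets

# One small hidden layer, trained with plain gradient descent on squared error.
W1, b1 = rng.standard_normal((64, 128)) * 0.05, np.zeros(128)
W2, b2 = rng.standard_normal((128, 64)) * 0.05, np.zeros(64)

for _ in range(200):                          # the "supercomputer" part, done offline
    H = np.maximum(0, X @ W1 + b1)            # hidden layer (ReLU)
    P = H @ W2 + b2                           # predicted "supersampled" patches
    E = P - Y                                 # error against the 64xSS targets
    gW2, gb2 = H.T @ E / len(X), E.mean(axis=0)
    dH = (E @ W2.T) * (H > 0)
    gW1, gb1 = X.T @ dH / len(X), dH.mean(axis=0)
    W2 -= 0.1 * gW2; b2 -= 0.1 * gb2
    W1 -= 0.1 * gW1; b1 -= 0.1 * gb1

# What actually reaches the consumer GPU is just the trained weights;
# inference on the Tensor cores is then one forward pass per frame.
np.savez("trained_weights.npz", W1=W1, b1=b1, W2=W2, b2=b2)
```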

    I have no idea how he came up with "prebaking".
     

  5. PrMinisterGR

    PrMinisterGR Ancient Guru

    Messages:
    8,129
    Likes Received:
    971
    GPU:
    Inno3D RTX 3090
    Most people don't realize that neural networks self-adjust until we tell them it's OK, and then they apply what they learned.
    That's all this is about.
     
    Embra, yasamoka and fry178 like this.
  6. southamptonfc

    southamptonfc Ancient Guru

    Messages:
    2,626
    Likes Received:
    654
    GPU:
    Zotac 4090 OC
    I'd be extremely surprised. You can't AI your way out of a lack of visual information.
     
  7. yasamoka

    yasamoka Ancient Guru

    Messages:
    4,875
    Likes Received:
    259
    GPU:
    Zotac RTX 3090
    This isn't what DLSS does. DLSS does not upscale, let alone from a lower resolution to your native resolution ...

    Nvidia trains a neural network with heavily supersampled images (64x) so that it is able to predict, based on a lower-resolution image (that lower resolution being your *native resolution*), what the supersampled image might look like. The Tensor cores are used for inference - that is, they run the neural network already pre-trained by Nvidia: the image rendered at your native resolution is fed to the input-layer neurons, each value at each neuron is multiplied by its corresponding weight, the bias is added in, rinse and repeat for each and every layer of that neural network (keeping it simple here, for a run-of-the-mill neural network) until you finally get a predicted "supersampled" image.

    Then, the image is downsampled back to your native resolution.
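
    A toy sketch of that inference path (illustrative only - the real network architecture, layer sizes, and exact downsampling step aren't public):

```python
import numpy as np

def infer_frame(frame, layers):
    """frame: flattened native-res image; layers: list of (weights, bias) pairs
    that would have been trained offline by Nvidia and shipped with the driver."""
    x = frame
    for W, b in layers[:-1]:
        x = np.maximum(0, x @ W + b)   # multiply by weights, add bias, apply nonlinearity
    W, b = layers[-1]
    return x @ W + b                   # predicted "supersampled-looking" image

def downsample(img, factor=2):
    """Average-pool the predicted image back down to native resolution."""
    h, w = img.shape
    return img.reshape(h // factor, factor, w // factor, factor).mean(axis=(1, 3))

# Hypothetical 32x32 native-res frame pushed through two small layers,
# predicting a 64x64 image that is then pooled back to 32x32.
rng = np.random.default_rng(1)
layers = [(rng.standard_normal((32 * 32, 2048)) * 0.01, np.zeros(2048)),
          (rng.standard_normal((2048, 64 * 64)) * 0.01, np.zeros(64 * 64))]
native = rng.random(32 * 32)
predicted = infer_frame(native, layers).reshape(64, 64)
output = downsample(predicted, factor=2)     # back at native resolution
```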

    There is no trickery here. Nvidia aren't magically getting back deficits in performance by tricking us into using DLSS.

    You guys need to read up on how neural networks work so that you'd understand the approach Nvidia is taking here. As well as see some examples of state-of-the-art neural networks in action for speech recognition, image processing, object recognition, facial recognition, etc...
     
  8. Neo Cyrus

    Neo Cyrus Ancient Guru

    Messages:
    10,793
    Likes Received:
    1,396
    GPU:
    黃仁勳 stole my 4090
    Depends on what he means by "better". Smoother edges? Maybe. Overall better image quality? Seems impossible considering it's going to have inaccuracies since it has to pull a guess out of its ass.

    I'm actually happy that ray tracing is finally being pushed forward along with more advanced upscaling. Games will never look real without ray tracing and we're not going to get reasonable performance anytime soon without upscaling. Not those of us who want 144 fps anyway. Considering all of nVidia's BS this is probably a silver lining. Then again I'm sitting on a 1080 Ti so I'm okay waiting a long time for non-retarded monopoly prices or the generation after, anyone who wants a card soon is screwed. RIP your wallets and your dignity.
     
  9. Fox2232

    Fox2232 Guest

    Messages:
    11,808
    Likes Received:
    3,371
    GPU:
    6900XT+AW@240Hz
    Nobody here really has to read about things that don't apply here. On the other hand, you missed an important point, and that is simply: "DATA".
    When @Valken considered it pre-baked, that was wrong, but without huge amounts of data you do not get to do much with a neural network. I do not expect nVidia to bloat their driver with the data required for each game to do real, large-scale AI work, as you may want us to believe they are doing. The game installation itself can carry the bloat, that's true, but is it worth it from a data perspective when you then have to fit it into VRAM?
    A much more rational approach is to downsize the working set from large, detailed image replacement (like in image-recovery AI stuff) down to pixel-level stuff, since that's the scale at which AA works.
    At that point you can almost stop calling it inference and call it interpolation. But since it acts based on learned and stored information from the past, it is still inference.
    The fact that nVidia calls it an algorithm tells you some more... Their farm will simply tweak a few values for each game or scene type per engine.
    (That's why I think there will be no big driver/game data bloat for DLSS to work. And it would likely work very well even with games they do not deliver specific values for; just copy+paste the values from a similarly toned game/engine.)

    And as for the question in the other thread, you basically wrote the same thing in other words.
     
  10. Monchis

    Monchis Guest

    Messages:
    1,303
    Likes Received:
    36
    GPU:
    GTX 950
    A boost in framerate needs to come from somewhere, so either it renders at a lower resolution and uses this new tech to recreate a higher-quality, bigger image, or it renders at your native resolution, produces the supersampled-like image, and uses AI to pop intermediate frames into existence (twice the frame rate)... but I don't think game logic would permit that.
     
    Dragam1337 likes this.

  11. Fox2232

    Fox2232 Guest

    Messages:
    11,808
    Likes Received:
    3,371
    GPU:
    6900XT+AW@240Hz
    The image is rendered without AA; that's where the performance comes from.
    DLSS then applies "AA" where needed.
    - In the smartest scenario:
    -> DLSS upscales the rendered image to 2x the resolution the user rendered it at. (This increased resolution is for editing purposes.)
    -> Since the GPU holds polygon information, it has a list of all the edges; the Tensor cores run along them to quickly find jaggies
    -> Where needed, the Tensor cores replace jaggies with an appropriate "shaded/graded" value based on an algorithm that has a perfect source from the step above (polygon coordinates tell you exactly where each edge starts/ends = damn good subpixel accuracy). This allows the Tensor cores to use the exact angle and deviation of the edge from the center of the pixel to weight each side
    -> After the step above, only color information remains to be changed, based on texture information or the image used for the work
    -> Downsampling finishes the work
     
    Monchis likes this.
  12. yasamoka

    yasamoka Ancient Guru

    Messages:
    4,875
    Likes Received:
    259
    GPU:
    Zotac RTX 3090
    See, that's how you demonstrate you've been blowing smoke the entire time and do not understand how training a neural network works and where the huge data goes.

    The huge amount of data you speak of is ONLY in the stage where you're training the neural network, adjusting the weights and biases. The amount of data used during training has no direct relation to the size of the neural network, other than that, generally, a deeper neural network requires more training data for accuracy; the amount of data you use for training does not dictate the amount of data needed to represent the neural network ... Meaning, you can train on an infinite number of samples and keep your neural network exactly the same size. The data that is actually bundled in the driver on a per-game basis is the matrices of weights and biases for each layer, as well as any parameters that might need to be specified. Do you know how big those matrices are? Here's a hint - this is one of the world's top, and most massive, neural networks:
    https://resources.wolframcloud.com/...sNet-152-Trained-on-ImageNet-Competition-Data

    Its trained size is 244MB.

    Nvidia aren't using one of the world's most massive neural networks for our game frames, mostly because that's overkill, not suited for this particular task, and performance-deficient for something that needs to be done in less than 16.67ms. So let's say we're talking an order of magnitude or two smaller - that is, invisible.

    But let's have some fun and say a DLSS neural network for a particular game were to be 244MB. How big are games nowadays? If one of your games suddenly got 244MB larger after a patch, would you even notice ...?
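
    To put a rough number on that point - the shipped size is set by the weight/bias matrices, not by how many training frames went in - here's a back-of-the-envelope sketch with made-up layer sizes:

```python
def model_size_mb(layer_dims, bytes_per_param=4):
    """layer_dims like [inputs, hidden1, ..., outputs]. Returns the size in MB of
    all weight matrices plus bias vectors at 32-bit precision."""
    params = sum(a * b + b for a, b in zip(layer_dims, layer_dims[1:]))
    return params * bytes_per_param / (1024 ** 2)

# A hypothetical small per-game network: a few million parameters, tens of MB,
# and that figure stays the same whether it was trained on 1,000 frames or 1,000,000.
print(round(model_size_mb([4096, 1024, 1024, 4096]), 1))   # ~36.0
```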

    So no. Read up about neural networks before flooding *all these threads* with the same regurgitated baseless assumptions about how this works.

    And of course, since this follows from baseless assumptions, as well as your hobby of calling things what they are not, it has no meaning. You can't just barge into the technological world and redefine what is very clearly inference - a very, very specific word in the machine learning domain - and claim this is interpolation; you would be laughed out of any room you were in immediately.
    This is a machine learning algorithm ...

    You obviously, at this point, have no clue how neural networks work. They can't just tweak a few values. They have to feed in supersampled frames rendered from that particular game and then train the neural network on them. The best they could do in terms of "tweaking" is fine-tuning an already existing generic neural network that does a somewhat good job on many / most / all games, so that they don't have to start from what every neural network starts from, which is random weights and biases. That way they could cut down on their training time. However, these are by no means the manual, hand-crafted tweaks of a "few values" that you misleadingly make them sound like.

    Enough. We have had enough.
     
  13. yasamoka

    yasamoka Ancient Guru

    Messages:
    4,875
    Likes Received:
    259
    GPU:
    Zotac RTX 3090
    Dude where the **** are you getting this information from! You're unbelievable ...

    Did you just guess what sort of features the convolutional neural network Nvidia is probably using detects and adjusts? Do you have some insider at Nvidia? Entertain us ...
     
  14. Fox2232

    Fox2232 Guest

    Messages:
    11,808
    Likes Received:
    3,371
    GPU:
    6900XT+AW@240Hz
    Just one question: is that 244MB enough to reconstruct dozens of different images from different places/environments around the world? Or does it only allow you to get "Very Successful Results" for a "Very specific set"?
    You read what you want... believing that I believe in some huge image set, while, contrary to that belief of yours, I considered it not feasible at all.
     
  15. Dragam1337

    Dragam1337 Ancient Guru

    Messages:
    5,535
    Likes Received:
    3,581
    GPU:
    RTX 4090 Gaming OC
    Exactly, the performance boost doesn't come out of the blue. The first scenario you speak of is exactly what it does, according to the article I linked. It lowers the load on the rasterization cores by rendering the image at a lower resolution than your native resolution (which is what gives the performance increase), and then uses the Tensor cores to upscale.
     
    Monchis likes this.

  16. yasamoka

    yasamoka Ancient Guru

    Messages:
    4,875
    Likes Received:
    259
    GPU:
    Zotac RTX 3090
    Read about how convolutional neural networks work... This isn't something you can work out by intuition; it's giving you wrong information in this case. Convolutional neural networks perform feature detection by learning, in the training phase, what sort of features to look for so that the prediction they make matches the ground truth. That 244MB is for connecting the artificial neurons in such a neural network so that it can perform that task. It uses no pre-stored data to "fetch" information to fill in...
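
    As a tiny illustration of that (a hand-picked 3x3 edge filter, not anything Nvidia trained): the only thing a convolutional layer "stores" is its small filter weights, and the feature map is computed fresh from whatever frame comes in.

```python
import numpy as np

def conv2d(image, kernel):
    """Naive valid-mode 2D cross-correlation of a single-channel image."""
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

# Nine numbers are the entire "stored" data for this filter; in a trained network
# these values would be learned, here they're just a classic vertical-edge detector.
edge_filter = np.array([[-1.0, 0.0, 1.0],
                        [-2.0, 0.0, 2.0],
                        [-1.0, 0.0, 1.0]])

frame = np.random.default_rng(2).random((8, 8))   # stand-in for a rendered frame
feature_map = conv2d(frame, edge_filter)          # edges computed on the fly, no stored images fetched
```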

    I can understand if this sounds like magic to you, particularly because you've never worked with neural networks before. But I can't understand how you insist on your assertions without the requisite premises to back them up..
     
  17. Fox2232

    Fox2232 Guest

    Messages:
    11,808
    Likes Received:
    3,371
    GPU:
    6900XT+AW@240Hz
    No, I just told you that it is a particular data set which sets up the network in a certain way, and it is for a certain purpose only. It will not work on out-of-scope stuff. That's why I kindly mentioned "different places/environments around the world".
     
  18. NewTRUMP Order

    NewTRUMP Order Master Guru

    Messages:
    727
    Likes Received:
    314
    GPU:
    rtx 3080
    *You don't care. Fortunately YOU don't speak for US.
     
    Aura89 likes this.
  19. fry178

    fry178 Ancient Guru

    Messages:
    2,078
    Likes Received:
    379
    GPU:
    Aorus 2080S WB
    @southamptonfc
    lol, that's exactly what they can do.
    They restored images that were missing up to half of their information, and the AI was able to reconstruct the complete picture.

    There is a reason why it's called artificial INTELLIGENCE....
     
  20. yasamoka

    yasamoka Ancient Guru

    Messages:
    4,875
    Likes Received:
    259
    GPU:
    Zotac RTX 3090
    What is your point now?

    That the particular dataset used is only suitable in certain places?

    Sure. What's surprising? We already discussed to death that Nvidia is training their neural network for each game with a dataset extracted from that particular game. So?
     
