NVIDIA: Rainbow Six Siege Players Test NVIDIA Reflex and Two New DLSS Titles

Discussion in 'Frontpage news' started by Hilbert Hagedoorn, Feb 23, 2021.

  1. PrMinisterGR

    PrMinisterGR Ancient Guru

    Messages:
    8,129
    Likes Received:
    971
    GPU:
    Inno3D RTX 3090
    I understand perfectly well; you seem to miss how this sort of neural network works. For the fine details, you literally see what the network "guesses" is there, based on the previous frames and its training. There are examples (in Wolfenstein and Control) where DLSS at 1440p is actually more detailed than the native 4K render.

    From the Eurogamer article:
    From my own eyes: Control at 1440p with DLSS looks better than Control at 4K. That's on a huge display (65") at 2160p120 native, with full-range RGB and 12-bit pixel depth, which removes any display-side cause of weirdness or artifacts. DLSS is not perfect, but 99.9% of people wouldn't even notice it going from 1080p to 4K (I have tried with my wife and friends), and from 1440p they all told me it looks "better". It's the same in Cyberpunk between native 4K and DLSS Balanced/Quality: DLSS definitely looks better.
     
  2. Fox2232

    Fox2232 Guest

    Messages:
    11,808
    Likes Received:
    3,371
    GPU:
    6900XT+AW@240Hz
    So you do not know. There is a huge difference between running at 50 fps and 200 fps, because one has temporal information that is 20 ms old and the other has temporal information that is 5 ms old.

    Are you still having trouble comprehending?
    Move forwards, backwards, strafe side to side, and turn around in a 3D game. Now think about how the information on the viewport's projection plane changes, and which situation causes which type of distortion or missing information.
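
    The frame-interval arithmetic behind this can be sketched in a few lines (a toy illustration of the 50 fps vs. 200 fps cases; the turn rate and pixels-per-degree figures are made-up examples, not from any specific game or display):

    ```python
    def history_age_ms(fps: float) -> float:
        """Age of the most recent history frame, in milliseconds."""
        return 1000.0 / fps

    def pixels_moved_per_frame(deg_per_sec: float, fps: float,
                               pixels_per_degree: float) -> float:
        """How far the view sweeps across the projection plane between
        two consecutive frames while turning at a constant rate."""
        return deg_per_sec / fps * pixels_per_degree

    print(history_age_ms(50))   # 20.0 (ms)
    print(history_age_ms(200))  # 5.0 (ms)

    # Turning at 180 deg/s on a display showing ~20 px per degree
    # (hypothetical figures): the higher the framerate, the less the
    # image moves between frames, so more of the old frame stays valid.
    print(pixels_moved_per_frame(180, 50, 20))   # 72.0 px
    print(pixels_moved_per_frame(180, 200, 20))  # 18.0 px
    ```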
     
  3. PrMinisterGR

    PrMinisterGR Ancient Guru

    Messages:
    8,129
    Likes Received:
    971
    GPU:
    Inno3D RTX 3090
    This is all nice in theory (if you think the people making this are morons who didn't consider it), but it's also contradicted by what I (and others with the actual hardware in our hands) can see with our own eyes. You can also see it in screenshots.

    EDIT: The really negative thing about this is that both Microsoft and AMD were caught with their pants down and didn't expect it. All GPUs need something like this, and all APIs need standardized ways of using it. Perhaps the AMD/Intel neural network won't be as good, for example, but it needs to exist. NVIDIA is already playing in a league of its own (there is literally zero reason to buy any other GPU at this point), and this makes it even worse.
     
    Last edited: Feb 26, 2021
    yasamoka likes this.
  4. Fox2232

    Fox2232 Guest

    Messages:
    11,808
    Likes Received:
    3,371
    GPU:
    6900XT+AW@240Hz
    Yeah, let's throw the whole field of information theory into the garbage.

    The entire temporal part of DLSS is subject to the same rules as video compression when it comes to motion in space and the density of information over time.
    Not that I expect you to understand why you need a much higher bitrate in some of the scenarios described to achieve the same resulting image quality in a video stream.
     

  5. Noisiv

    Noisiv Ancient Guru

    Messages:
    8,230
    Likes Received:
    1,494
    GPU:
    2070 Super
  6. Fox2232

    Fox2232 Guest

    Messages:
    11,808
    Likes Received:
    3,371
    GPU:
    6900XT+AW@240Hz
    Isn't it great to bring up a static scene as validation of temporal processing?

    Common misunderstanding aside, it shows (as always) where it does a good job and where it fails.
    The SMAAx2 top-right tree, which looks like it's in fog, gives a more "photo-realistic" impression, while the DLSS one looks more detailed. That could be taken as a matter of taste, and I would prefer DLSS there.
    But right next to it, on the left, there is a much darker tree that does not look very good with SMAAx2, and DLSS makes it cartoonish, which is even worse.

    Roofs are more detailed with DLSS. But all the foliage, from the walls to the camera's position, is worse.
    Texture detail on the walls is worse with DLSS too.
    The sheep are a mixed bag: some are better, some are worse, due to blurriness in fine geometry and texture details.
    - - - -
    Now, a question about the objectivity of the comparison. You stated 4K downsampled, and the images are 1440p, which would confirm that.
    But why render both at 4K and then downsample? That makes the differences less visible. Yet they are still apparent.
    And it would be nice to confirm that you used the same procedure for both (meaning both really were rendered at 4K and then downsampled).

    But in that case, you could just as well have posted the original 4K images. Or rendered both at 1440p without downsampling from 4K and posted the original 1440p.
     
  7. PrMinisterGR

    PrMinisterGR Ancient Guru

    Messages:
    8,129
    Likes Received:
    971
    GPU:
    Inno3D RTX 3090
    Speaking of throwing entire fields of science into the garbage, it's clear you don't understand how this type of NN works. The temporal information is only one part of the puzzle; in fact, the more frames it has, the better it works. Video streams are also lossy; this processing is not. You are not seeing the result of a video stream. What you are seeing is, in fact, closer to a shader than anything else.

    Bitrate is completely irrelevant in this scenario. I actually wonder how you can participate in this conversation at all, and why we take you seriously, when you bring up "bitrate" in this situation (in any context).

    Also, you are ignoring the reports of people who have actually seen how DLSS does what it does. Bitrate would only be relevant in video comparisons. You are basically disputing every person who has seen this, and expert reviewers on top of that.

    I will post this video in case someone else following this thread wants to learn something, as it is 100% certain you will not watch it, yet you will keep talking as if you had.



    Check around 7:18
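
    For anyone wanting the intuition in code: below is a toy 1-D sketch (my own illustration, not NVIDIA's actual algorithm) of why accumulating sub-pixel-jittered frames can resolve detail that no single low-res frame contains. Each frame samples the scene at slightly different positions, so the accumulated result converges toward a properly supersampled image.

    ```python
    import random

    PIXELS = 16

    def scene(x: float) -> float:
        # Hypothetical ground truth: a checker pattern twice as fine
        # as the pixel grid, so one sample per pixel always aliases.
        return 1.0 if int(x * 2 * PIXELS) % 2 == 0 else 0.0

    def render_frame(rng: random.Random) -> list[float]:
        # One low-res frame: a single jittered sample per pixel.
        return [scene((i + 0.5 + rng.uniform(-0.5, 0.5)) / PIXELS)
                for i in range(PIXELS)]

    def accumulate(frames: int, seed: int = 0) -> list[float]:
        # Average the jittered frames, as a stand-in for temporal
        # accumulation with a perfectly static camera.
        rng = random.Random(seed)
        acc = [0.0] * PIXELS
        for _ in range(frames):
            acc = [a + s for a, s in zip(acc, render_frame(rng))]
        return [a / frames for a in acc]

    # Each pixel spans exactly one light and one dark cell, so the
    # correctly filtered (supersampled) value is 0.5 everywhere.
    reference = [0.5] * PIXELS

    def error(img: list[float]) -> float:
        return sum(abs(a - b) for a, b in zip(img, reference)) / PIXELS

    print(error(accumulate(1)))   # large: one frame aliases to 0 or 1
    print(error(accumulate(32)))  # much smaller: accumulation converges
    ```

    The same idea in 2-D, with motion vectors to reproject the history, is the backbone of TAA-style reconstruction; the network part decides how much of that history to trust per pixel.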
     
    Last edited: Feb 26, 2021
    yasamoka likes this.
  8. itpro

    itpro Maha Guru

    Messages:
    1,364
    Likes Received:
    735
    GPU:
    AMD Testing
    So neural networks will take our jobs. AI is the future worker, unless you work for corporations or government.

    Native images lose to computed ones. That's the moral of today's lesson.
     
  9. Fox2232

    Fox2232 Guest

    Messages:
    11,808
    Likes Received:
    3,371
    GPU:
    6900XT+AW@240Hz
    @PrMinisterGR : Please, do something about your attention span.

    You: "The temporal information is only one part of the puzzle."
    Me, a few posts before: "The entire temporal part of DLSS is subject to..."
    Me, a few more posts before: "Then you misunderstand how the temporal part of DLSS works."

    You: "In fact, the more frames it has, the better it works."
    Me, a few posts before: "There is a huge difference between running at 50 fps and 200 fps, because one has temporal information that is 20 ms old and the other has temporal information that is 5 ms old."
    ...
    ...
    And bitrate in the comparison is not irrelevant. It tells you the amount of information a given temporal scenario needs in order to reach a certain image quality. While the data DLSS carries from frame to frame is "lossless", video bitrate tells you the actual amount of additional data required to reach a given image quality from frame to frame.
    -> This means that in a static scene, little to no data is needed over time to track changes in a video stream.
    -=> In the same static scene, DLSS already has almost all the data required for the new frame present in the previous frame(s).

    -> In contrast, high-motion scenes and viewport rotations, where no previous data is available, require a high bitrate.
    -=> In such a high-motion scene, DLSS has proportionally less temporal data from which to generate the image.
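
    The static-vs-motion point can be sketched with a toy model (my own, not any codec's or DLSS's actual math): shift last frame's content by the camera's per-frame motion and count how much still lands inside the viewport. Whatever falls outside, or is newly disoccluded, must come from fresh data: higher bitrate for a codec, less usable history for a temporal upscaler. The turn rate, FOV, and resolution below are made-up examples.

    ```python
    def reusable_fraction(width_px: int, shift_px: float) -> float:
        """Fraction of the old frame still visible after a horizontal
        viewport shift of `shift_px` pixels (1-D toy model)."""
        return max(0.0, 1.0 - abs(shift_px) / width_px)

    # Static scene: everything can be reused.
    print(reusable_fraction(1920, 0))  # 1.0

    # Turning at 90 deg/s with a ~90 degree FOV, the viewport sweeps
    # roughly its own width each second. At 50 fps that is 1920/50
    # = 38.4 px of fresh content per frame; at 200 fps only 9.6 px.
    print(reusable_fraction(1920, 1920 / 50))   # ~0.98
    print(reusable_fraction(1920, 1920 / 200))  # ~0.995
    ```

    Higher framerates shrink the per-frame motion, which is why the age of the temporal information matters in both the video-compression and the upscaling case.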


    Even an imbecile could realize that I am well aware of how DLSS works, and that when you are corrected you adjust your narrative as if you were correcting me.

    I feel we are repeating the console discussions here (several of them). And the only honest reply you gave is that, yet again, you did not learn anything technical from it.
     
  10. PrMinisterGR

    PrMinisterGR Ancient Guru

    Messages:
    8,129
    Likes Received:
    971
    GPU:
    Inno3D RTX 3090
    The Alexa I have at home can barely turn on the correct light at times, so I wouldn't be too worried. I do think they will take away a lot of specialized, repeatable work, though.
     
    itpro likes this.

  11. PrMinisterGR

    PrMinisterGR Ancient Guru

    Messages:
    8,129
    Likes Received:
    971
    GPU:
    Inno3D RTX 3090
    Excellent, I am an imbecile.

    One request only:

    WTF are you actually trying to say in this whole thread?
     
    yasamoka likes this.
  12. Fox2232

    Fox2232 Guest

    Messages:
    11,808
    Likes Received:
    3,371
    GPU:
    6900XT+AW@240Hz
    You made a false statement. I corrected it (#18). You got triggered into sandbagging. The rest followed from there.
     
  13. Noisiv

    Noisiv Ancient Guru

    Messages:
    8,230
    Likes Received:
    1,494
    GPU:
    2070 Super
    Both anti-aliasing methods compared in that MB2:B screenshot have a temporal component. But DLSS is the more stable one, while being better at resolving/reconstructing distant detail. It is also sharper with transparent props.
    Compared to these two methods, no-AA is a flicker-fest.

    DLSS suffers from a stupid bug in which the tip of the spear sometimes gets blurred (when the spear is carried on my back).
     
  14. PrMinisterGR

    PrMinisterGR Ancient Guru

    Messages:
    8,129
    Likes Received:
    971
    GPU:
    Inno3D RTX 3090
    If the correction was that DLSS doesn't provide better image quality than native, then it wasn't a correction. I get triggered because you are a combination of stubbornness and ignorance that is hard to avoid when we try to have a normal conversation around here.

    Go and actually watch the Digital Foundry video I have already posted twice. Since you don't have the hardware yourself, you can at least see what it does.
     
  15. PrMinisterGR

    PrMinisterGR Ancient Guru

    Messages:
    8,129
    Likes Received:
    971
    GPU:
    Inno3D RTX 3090
    This is interesting; it sounds like it's probably z-aware or something. Does the spear on your back get treated any differently when normal TAA is applied?
     

  16. itpro

    itpro Maha Guru

    Messages:
    1,364
    Likes Received:
    735
    GPU:
    AMD Testing
    I had that conversation at university with a colleague. He insisted that AI is fundamentally unable to overtake scientists. I tend to believe otherwise; in my opinion it's inevitable.
     
  17. Fox2232

    Fox2232 Guest

    Messages:
    11,808
    Likes Received:
    3,371
    GPU:
    6900XT+AW@240Hz
    Well, at this point it is fair to say that I should correct the meaning of my earlier post.
    My assumption that you made a false statement was based on the expectation that you had made an honest mistake. Now I know that you are lying.

    As for the DF video: I watched it the day they released it. And, as sometimes happens, people compare DLSS with a blurry implementation of TAA, which is the case in this particular DF video too.
    And you are fully aware of it. DLSS has not scored a single victory over a natively rendered image unless that image was first degraded by one of the blurring techniques.
    So the only statement that can be made about DLSS based on such a comparison is that it is less trashy than trashy TAA.

    I personally do not run TAA anywhere, because I have self-respect.
    Scientific breakthroughs are made by a few genius people and an army of patient followers who test and test and test combinations and processes.
    AI will not fake a person's thinking process, and therefore it will not come to form sane theories.
    But it can take over all the lab work and do it more efficiently, unless some of the work happens to be volatile in nature.
     
    itpro likes this.
  18. Noisiv

    Noisiv Ancient Guru

    Messages:
    8,230
    Likes Received:
    1,494
    GPU:
    2070 Super
  19. PrMinisterGR

    PrMinisterGR Ancient Guru

    Messages:
    8,129
    Likes Received:
    971
    GPU:
    Inno3D RTX 3090
    Why would any non-crazy person lie? And self-respect means not using TAA? You're arguing against every single person who has actually seen, tested, or used this technology. And you're not just arguing; you're attacking people personally.

    There's something wrong with this attitude, and you should fix it. I'm not saying this in any kind of mean way. I can't imagine what it must be like to have someone with this attitude around, and I bet neither can you.

    Interesting. Does the game use near-depth focus blur at all?
     
    yasamoka likes this.
  20. PrMinisterGR

    PrMinisterGR Ancient Guru

    Messages:
    8,129
    Likes Received:
    971
    GPU:
    Inno3D RTX 3090
    I'm rewatching Star Trek: Voyager, and I think we will end up with neural networks doing repeatable jobs from simple commands. In the series, everyone talks to the computer in simple language, and the computer interprets the meaning and the actual metrics to provide a result or service. I think the things NVIDIA has already shown (people sketching a simple scene in Paint, and an AI turning it into a "real" picture) are the future. It should be more like extrapolation, with more and more nuance.

    On the other hand, you might be interested in this free sci-fi ebook:
    Blindsight by Peter Watts (rifters.com)

    It basically argues that consciousness is just dead weight, and that you can have something terrifyingly smart and efficient that isn't conscious at all. It's free to read; I would recommend it for the thought experiments alone.
     
    itpro likes this.
