NVIDIA DLSS 2.1 Technology Will Feature Virtual Reality Support

Discussion in 'Frontpage news' started by Hilbert Hagedoorn, Sep 9, 2020.

  1. Hilbert Hagedoorn

    Hilbert Hagedoorn Don Vito Corleone Staff Member

    Messages:
    42,169
    Likes Received:
    10,113
    GPU:
    AMD | NVIDIA
  2. fantaskarsef

    fantaskarsef Ancient Guru

    Messages:
    12,486
    Likes Received:
    4,794
    GPU:
    2080Ti @h2o
    The first game to get it is a year old. Let's wait until 2021 to really see how this is adopted in new games... I don't doubt that CP2077 has a lot of tech toys and gadgets to play with, but other games? Not so sure. Same as with Turing's release and the adoption rate of both RTX and DLSS 1.0 (and later) in games.
     
    Embra likes this.
  3. Fox2232

    Fox2232 Ancient Guru

    Messages:
    11,809
    Likes Received:
    3,366
    GPU:
    6900XT+AW@240Hz
    In VR, it's a fight for every single pixel. There is no room for reducing IQ.
    If we had 4096x4096 screens as fast as those in the Index, one could get away with faking it.
    But in VR, each pixel covers such a large viewing angle that it's like playing on a 32" 1280x1024 screen (rough numbers in the sketch below).

    And again, we get into absurd scenarios: buy an expensive headset, buy an expensive PC, but cheap out on the GPU, and therefore use DLSS x9 to get playable fps at whatever IQ cost it brings.

    VR today uses 2880x1600 in the case of the Index, and even the 3070, at that good price, can manage above 60fps in the most demanding titles.
    And VR already has ASW. (Which is already a "fake" => a transformation of the full-resolution frame.)

    So imagine DLSS taking your previous frame and moving parts around the way it believes the image will look in the current frame, then the VR runtime taking that frame and moving/warping it again.
    The more the merrier, right?

    I hope AMD improves CAS instead.
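
    A rough way to put numbers on that angular-resolution comparison (a minimal sketch; the ~108 degree per-eye FOV, the monitor size, and the viewing distance are assumptions, not figures from the post):

    import math

    def ppd_headset(horizontal_pixels, horizontal_fov_deg):
        """Approximate pixels per degree for an HMD, ignoring lens distortion."""
        return horizontal_pixels / horizontal_fov_deg

    def ppd_monitor(horizontal_pixels, width_in, distance_in):
        """Approximate pixels per degree for a flat monitor viewed head-on."""
        fov_deg = 2 * math.degrees(math.atan((width_in / 2) / distance_in))
        return horizontal_pixels / fov_deg

    # Valve Index: 1440x1600 per eye (2880x1600 total), assuming roughly 108 deg per-eye FOV
    print(round(ppd_headset(1440, 108), 1))         # ~13 pixels per degree
    # 32" 5:4 monitor (~25" wide) showing 1280x1024, viewed from an assumed 24"
    print(round(ppd_monitor(1280, 25.0, 24.0), 1))  # ~23 pixels per degree

    Either way the headset ends up with far fewer pixels per degree than a typical desktop setup, which is the point about having no pixels to spare.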
     
    Prince Valiant likes this.
  4. itpro

    itpro Maha Guru

    Messages:
    1,246
    Likes Received:
    661
    GPU:
    Radeon Technologies
    Nvidia is desperate for marketing. They're trying to copy the "HDMI 2.1 is better than HDMI 2.0" angle. :p DLSS 2.1, with new GPUs, with more features than DLSS 2.0. :D
     

  5. mbm

    mbm Member Guru

    Messages:
    189
    Likes Received:
    6
    GPU:
    3090 RTX
    I don't get it...
    Is DLSS software, but hardware-dependent?
    You write that DLSS 2 works with RTX 20xx/30xx, but at the same time DLSS 2 in Wolfenstein is only for the 3090 card?
     
  6. geogan

    geogan Master Guru

    Messages:
    888
    Likes Received:
    191
    GPU:
    3070 AORUS Master
    Maybe this will all change in the next few days when Facebook announces their next-gen headsets.

    Maybe it will have some form of next-generation foveated rendering or something else - anyone who has seen the videos of the research Facebook's VR division has done over the last few years will know the massive amount of work and money they have put in, the number of failed prototypes built and studies done, and how none of that work has seen the light of day in a consumer headset yet.

    I have a feeling we will have another 2080 Ti situation with current headsets (and all the software tricks used currently becoming obsolete) when, or if, they come out.

    What we need is intelligent foveated rendering with some form of eye tracking, as well as the whole multiple-focus-depth stuff, and it all needs to work with Nvidia's GPUs to maximise efficiency and not waste time rendering the edges of the eye's view at the same resolution as where the eye is currently looking - that spot should be rendered at full, supersampled, real resolution with no DLSS-type trickery.
     
  7. Brogs

    Brogs New Member

    Messages:
    2
    Likes Received:
    0
    GPU:
    390x
    I have a Pimax 8KX, which already runs at high resolution in native mode with a 2080 Ti. Not sure what DLSS would do in my case?
     
  8. XenthorX

    XenthorX Ancient Guru

    Messages:
    3,824
    Likes Received:
    1,807
    GPU:
    3090 Gaming X Trio
    Sounds interesting, will follow reviews.
     
  9. Lex Luthor

    Lex Luthor Master Guru

    Messages:
    751
    Likes Received:
    312
    GPU:
    GTX 1060
    Seems to me it should be much more valuable for VR than on a monitor.
     
  10. schmidtbag

    schmidtbag Ancient Guru

    Messages:
    6,166
    Likes Received:
    2,484
    GPU:
    HIS R9 290
    I would be a bit concerned about any added latency. The more post-processing you do, the more you're delaying each frame from reaching the display, and that contributes to nausea. Perhaps it isn't significant enough to matter.
     

  11. Denial

    Denial Ancient Guru

    Messages:
    13,529
    Likes Received:
    3,074
    GPU:
    EVGA RTX 3080
    Why would it add latency at all? In every case that I know of DLSS increases framerate, which lowers the latency of the frame.
     
    XenthorX likes this.
  12. schmidtbag

    schmidtbag Ancient Guru

    Messages:
    6,166
    Likes Received:
    2,484
    GPU:
    HIS R9 290
    A higher frame rate means a lower frame time, but it says nothing about when the frame actually appears on your display. Whether you're at 60FPS or 600, it doesn't matter if the frame you're seeing doesn't line up with your input commands.

    Think of it like this:
    Imagine having a high-speed camera attached to a high-refresh-rate display. Thanks to all the image processing, what you see on the display is delayed by a few milliseconds. So even though you might be looking at an image at 300FPS, you're still distinctly looking at something in the past.
    When it comes to GPUs, that delay is even greater, because they're reconstructing the entire image themselves, as opposed to simply capturing one.

    That's why some people hate v-sync: on a 60Hz display, the frame rate you see is the same, but v-sync adds a lot of latency, so people who are sensitive to that will feel a constant delay.

    Having said all that, all post-processing requires a frame to be rendered first, and then more clock cycles are spent modifying the image (hence the name). So for every additional layer of post-processing you do, the more you delay the image from reaching the display. Yes, the frame rate goes up and that's good, but when it comes to VR, getting the lowest possible latency is critical.
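
    A toy model of the point being argued here (a sketch; the stage durations are made-up numbers, and it assumes the extra post-processing pass overlaps the next frame so the frame rate stays the same):

    def photon_latency(render_ms, post_ms, scanout_ms=0.0):
        """Time from the start of a frame's render until it is shown,
        assuming that frame's stages run back to back."""
        return render_ms + post_ms + scanout_ms

    # Two hypothetical pipelines both delivering 60fps (16.7ms per frame):
    # A: 16.7ms of rendering with no extra post pass
    # B: 16.7ms of rendering plus a 2ms post pass that overlaps the next frame
    print(photon_latency(16.7, 0.0))  # 16.7ms from render start to display
    print(photon_latency(16.7, 2.0))  # 18.7ms: same frame rate, but each frame is older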
     
  13. Denial

    Denial Ancient Guru

    Messages:
    13,529
    Likes Received:
    3,074
    GPU:
    EVGA RTX 3080
    But the post-processing is built into the framerate. Say you're targeting 60fps - 16.7ms per frame. It's not 16.7ms + post-processing; it's just 16.7ms, and part of that 16.7ms includes the post-processing. So if DLSS allows them to hit 144fps (roughly 6.9ms) where they normally couldn't, it's just decreasing the total latency.
     
  14. Bobdole776

    Bobdole776 Member

    Messages:
    25
    Likes Received:
    4
    GPU:
    zotac amp extreme 1080ti
    If they want to try this in an already established game, they should get Frontier to add it to Elite Dangerous.

    It'd be a good place to test it, at least; same with No Man's Sky.
     
  15. schmidtbag

    schmidtbag Ancient Guru

    Messages:
    6,166
    Likes Received:
    2,484
    GPU:
    HIS R9 290
    Again... it doesn't matter what the frame rate is, what matters is when you see the frame. Rendering more frames per second has nothing at all to do with when the fully processed frame reaches your eyes.
     

  16. Denial

    Denial Ancient Guru

    Messages:
    13,529
    Likes Received:
    3,074
    GPU:
    EVGA RTX 3080
    It absolutely does lol..

    If I can only render 1 frame per hour - how long is it going to take for the fully processed frame to reach my eye? More than an hour, right? So why would that change when the frame is done in 1/60th of a second?

    Latency is composed of the following:

    [image: end-to-end system latency chain diagram]

    In our example the render latency is 16.7ms at 60fps. If you decrease that to roughly 7ms (144fps) and the rest of the chain stays identical, then you've effectively lowered the time it takes for the final frame to reach your eye by about 9.7ms (spelled out in the sketch below).
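
    The arithmetic behind that chain, spelled out (a sketch; only the 60fps/144fps render figures come from the post - the other stage numbers are placeholder assumptions):

    def end_to_end_latency(render_ms, input_ms=1.0, game_cpu_ms=4.0, display_ms=5.0):
        """Sum a simplified peripheral -> CPU -> GPU render -> display chain."""
        return input_ms + game_cpu_ms + render_ms + display_ms

    base = end_to_end_latency(render_ms=1000 / 60)   # ~16.7ms render slice at 60fps
    dlss = end_to_end_latency(render_ms=1000 / 144)  # ~6.9ms render slice at 144fps
    print(round(base - dlss, 1))  # ~9.7ms saved, with everything else unchanged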
     
  17. schmidtbag

    schmidtbag Ancient Guru

    Messages:
    6,166
    Likes Received:
    2,484
    GPU:
    HIS R9 290
    That's not what I meant... My point is that latency is not entirely dependent on frame rate. Like I said with my camera example, it doesn't matter how fast the camera is, because processing the image, sending the signal to the display, and then having the display show that signal takes time. You can have a perfectly smooth experience that is delayed by several seconds. Ever been on a phone call where there was a significant delay? It's not like you or the other end is thinking any slower than you normally would.

    Completing something faster doesn't mean you deliver the results faster. That's why I brought up the v-sync example: even though the perceived frame rate is the same, the synced frames are literally older.

    I understand that, but I'm not necessarily referring to the rest of the chain. That chain is also more complicated than that once DLSS is involved, because there's a lot more back-and-forth for the AI work on the tensor cores. That is where the added latency comes in. So even if you were to play the same game without DLSS but lowered the detail level enough to get the same frame rate, the latency should be better, because the tensor cores are not involved.
    The thing about DLSS is that, in theory, it can be done in parallel: the next frame could be rendered while the tensor cores are doing their work. So the total amount of time each frame spends in flight is longer, but you're able to render more FPS. I could be wrong about this, though.

    Clearly, the tensor cores do add a significant delay, because on Nvidia's own website they mention that DLSS is not enabled if a high enough frame rate can be achieved without it:
     
  18. Denial

    Denial Ancient Guru

    Messages:
    13,529
    Likes Received:
    3,074
    GPU:
    EVGA RTX 3080
    You keep comparing this to some other scenario where latency is increasing somehow, but you're not explaining why it's increasing. Yeah, I've been on a phone call where there was delay - but why was there delay? And even if there is a delay - if I was able to cut 9.7ms off the processing on the phone, wouldn't there be less delay overall? That's what going from 60 to 144fps is doing, and that's what DLSS is doing (if it's increasing it from 60 to 144; obviously it isn't this much on average).

    It 100% does, because it's the only thing that's changing. The rest of the pipeline is identical, the render latency is decreasing, thus the entire chain is getting quicker.

    I fundamentally disagree with this, and I think this is where our difference of opinion lies.

    If the frame rate is identical then the latency is identical (given that the mouse/CPU/screen/etc. are all identical). DLSS only adds latency to the portion of the pipeline that falls under "render latency" - which is determined by your frame time. If the render latency is 16.7ms, you're getting 60fps. If turning DLSS on increases the framerate to 144fps, then it's also decreasing that latency to roughly 7ms. Those two properties are linked - there is no other part of the chain where turning DLSS on increases latency.

    DLSS doesn't add a significant delay, it adds a fixed delay - and sometimes it's simply quicker to render the frame at native resolution than to pay that fixed delay.

    For example, say we have a frame that takes 20ms to render at 4K but only 10ms at 1080p, and DLSS takes 6.7ms to upscale it to 4K. Now you have a 4K DLSS image at 16.7ms compared to a regular 4K image at 20ms. Naturally your framerate increases with DLSS, because you can do that more times per second.

    But in another example, let's say you have a frame that takes 6ms to render at 4K but 2ms at 1080p. Now you have a 4K DLSS image at 8.7ms (2ms + our 6.7ms "DLSS delay") vs a native 4K image at 6ms. It's better to turn DLSS off in that case (quick check in the sketch below).
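
    Those two examples as a quick break-even check (a sketch using the hypothetical numbers from the post; the fixed 6.7ms DLSS cost is the post's illustration, not a measured figure):

    def dlss_wins(native_ms, lower_res_ms, dlss_fixed_ms=6.7):
        """True if rendering at the lower resolution plus the fixed DLSS
        upscale cost beats rendering natively at the target resolution."""
        return lower_res_ms + dlss_fixed_ms < native_ms

    print(dlss_wins(native_ms=20.0, lower_res_ms=10.0))  # True:  16.7ms beats 20ms
    print(dlss_wins(native_ms=6.0, lower_res_ms=2.0))    # False: 8.7ms loses to 6ms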
     
  19. phawkins633

    phawkins633 Member

    Messages:
    39
    Likes Received:
    8
    GPU:
    EVGA 2080ti
    Who the hell cares? Nice to see on paper, along with its effect on the older 20-series cards... but unfortunately, the fact that crypto-mining is back eliminates any availability of 30-series cards. They aren't even released, yet somehow folks in China are able to purchase literal case loads. So I'm gonna say it will be a repeat of last time, and the 3080 will be well above $2000 when it is eventually available... fu88ing 2020...
     
  20. schmidtbag

    schmidtbag Ancient Guru

    Messages:
    6,166
    Likes Received:
    2,484
    GPU:
    HIS R9 290
    Well, I figured that for someone like yourself the reason would be obvious: because communication with the tensor cores adds to the render time.
    Think of it like AMD's CCX design - the communication between the chiplets adds latency, but each individual core has good IPC, so in certain workloads you can get better performance than Intel despite the added latency.
    The delay is there for various reasons, but the point I'm making is that neither you nor the other end of the line is "operating slower". The level of performance does not affect when the signal is received.
    If we're talking about going from 60FPS without DLSS to 144FPS with it on, then yes, the performance gain of DLSS is significant enough that, despite the added latency of the tensor cores, there is likely an overall latency improvement. But that's also an unrealistically huge jump in performance. In real-world scenarios, it'd be more like going from 60FPS to 85FPS. I don't know how much latency the tensor cores add, so it's hard to know whether the reduced frame time will make up for it.
    If you were to lower the detail level instead of using DLSS, you're bound to get better latency.
    No, it isn't. Tensor cores add to the pipeline.
    I don't disagree that DLSS adding more FPS will decrease per-frame latency. I'm saying that compared to a scene with the same frame rate and no post-processing, the frames produced by DLSS will reach the user later.
    So - DLSS is overall a good thing, because yes, if the frame rate drops enough, it can offer better latency and frame rates without sacrificing detail. But my argument is that sacrificing detail ought to yield better latency, which is why Nvidia turns DLSS off when the frame rate is too high (because otherwise the tensor cores slow things down).
    A fixed delay is still a delay, and it can be significant when we're talking about reducing motion sickness.
    I agree with all of that.
     
    Last edited: Sep 9, 2020
