NVIDIA AI Powered DLDSR Unveiled

Discussion in 'Frontpage news' started by Rich_Guy, Jan 11, 2022.

  1. Krizby

    Krizby Ancient Guru

    Messages:
    3,067
    Likes Received:
    1,743
    GPU:
    Asus RTX 4090 TUF
    Upscaling from 1080p --> 4K takes around 3.5 ms with the tensor cores on a 3090; 2.5x slower on shaders would mean 8.75 ms.

    Average frametimes for the 3090 are 5.3 ms at 1080p and 10.5 ms at 4K (roughly 188 FPS and 95 FPS).

    Now if you add 8.75 ms to 5.3 ms you get about 14 ms, which is slower than rendering 4K natively, not to mention the reduced image quality from upscaling 1080p --> 4K.

    See? It doesn't make sense to run DLSS on shaders, except maybe to make an inferior version that trades image quality for higher performance like DLSS 1.9, and that would just waste resources that could be used to further advance DLSS.
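
    For anyone who wants to redo that arithmetic, here is a minimal Python sketch of it; the 3.5 ms tensor-core figure and the 2.5x shader slowdown are the assumptions quoted in this post, not measurements of mine.

    # Back-of-the-envelope check: would a shader-based DLSS pass still beat native 4K?
    # All figures are the rough numbers quoted above for a 3090.

    TENSOR_UPSCALE_MS = 3.5      # 1080p -> 4K upscale on 3090 tensor cores (quoted)
    SHADER_SLOWDOWN = 2.5        # assumed slowdown when run on shader cores instead
    FRAMETIME_1080P_MS = 5.3     # ~188 FPS average at 1080p (quoted)
    FRAMETIME_4K_MS = 10.5       # ~95 FPS average at 4K (quoted)

    shader_upscale_ms = TENSOR_UPSCALE_MS * SHADER_SLOWDOWN   # 8.75 ms
    total_ms = FRAMETIME_1080P_MS + shader_upscale_ms         # ~14.05 ms

    print(f"1080p render + shader upscale: {total_ms:.2f} ms ({1000 / total_ms:.0f} FPS)")
    print(f"Native 4K:                     {FRAMETIME_4K_MS:.2f} ms ({1000 / FRAMETIME_4K_MS:.0f} FPS)")
    # With these assumptions the shader path (~14 ms) ends up slower than native 4K (~10.5 ms).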
     
  2. Horus-Anhur

    Horus-Anhur Ancient Guru

    Messages:
    8,638
    Likes Received:
    10,687
    GPU:
    RX 6800 XT
    Where did you get those numbers?
    Intel's presentation shows much lower overhead.

    [IMG]
     
  3. Krizby

    Krizby Ancient Guru

    Messages:
    3,067
    Likes Received:
    1,743
    GPU:
    Asus RTX 4090 TUF
    You can test it yourself: try 1080p native vs 4K DLSS Performance.

    Intel can't even release their own GPUs properly to save their asses, and you trust that they'll make XeSS work as advertised? Intel's marketing materials are just a bunch of baloney at this point.
     
  4. Horus-Anhur

    Horus-Anhur Ancient Guru

    Messages:
    8,638
    Likes Received:
    10,687
    GPU:
    RX 6800 XT
    For a moment I thought you had access to some development tools that measured how much weight DLSS carries, or some accurate numbers provided by nVidia.
    But all you did was compare the frame rate at 1080p native vs 1080p DLSS upscaled to 4K. For all we know, half those tensor cores are sitting idle.
    Regardless of Intel still not having released their GPUs, they are a better source than you comparing some charts on TechPowerUp.

    Take a look at this comparison by nVidia.
    The RTX 3080 has 272 tensor units. The RTX 3070 has 184 tensor units. This is a huge difference, with the RTX 3080 having much greater TOPS.
    But using DLSS Performance, compared to native, the RTX 3080 gains 2x the performance, while the RTX 3070 gains 2.3x.
    If TOPS were the limiting factor after rendering the frame at 1080p and upscaling it, then the RTX 3080 would show even greater frame rates.
    Chances are there are other bottlenecks in the rendering pipeline with DLSS. Maybe shuffling data between shaders and tensor units. Maybe caches. Maybe some buffers.
    But whatever it is, the overhead of DLSS is not down to the amount of TOPS.
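
    A minimal sketch of that sanity check, using only the figures quoted above (the tensor-unit counts and the 2x / 2.3x DLSS Performance gains); the framing is mine, not something measured here.

    # If raw TOPS were the DLSS bottleneck, the card with ~48% more tensor units
    # (RTX 3080) should show the larger relative gain from DLSS Performance.
    # The quoted gains go the other way, which is the point being made above.

    cards = {
        # name:     (tensor units, DLSS Performance gain vs native, as quoted)
        "RTX 3080": (272, 2.0),
        "RTX 3070": (184, 2.3),
    }

    tu_ratio = cards["RTX 3080"][0] / cards["RTX 3070"][0]
    gain_ratio = cards["RTX 3080"][1] / cards["RTX 3070"][1]

    print(f"Tensor unit ratio (3080/3070): {tu_ratio:.2f}x")    # ~1.48x
    print(f"DLSS gain ratio   (3080/3070): {gain_ratio:.2f}x")  # ~0.87x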

    [IMG]
     
    Last edited: Jan 13, 2022

  5. PrMinisterGR

    PrMinisterGR Ancient Guru

    Messages:
    8,125
    Likes Received:
    969
    GPU:
    Inno3D RTX 3090
    Good catch. Unless we get official metrics from NVIDIA, I can't see how this could ever be measured. The rest of the graphics pipeline introduces too much interference.
     
  6. Astyanax

    Astyanax Ancient Guru

    Messages:
    17,011
    Likes Received:
    7,353
    GPU:
    GTX 1080ti
    You can measure it in PIX.
     
  7. Exodite

    Exodite Guest

    Messages:
    2,087
    Likes Received:
    276
    GPU:
    Sapphire Vega 56
    When DLSS was first announced, my hope was, and has been ever since, that we'd be getting something to replace MSAA/SSAA rather than a performance boost through upscaling.

    This might finally be it, murdering TAA on the way is just the cherry on top. :p
     
  8. Denial

    Denial Ancient Guru

    Messages:
    14,206
    Likes Received:
    4,118
    GPU:
    EVGA RTX 3080
    There are differences just looking within Ampere, but between Ampere/Turing/Pascal there are going to be scheduling differences as well.

    I understand that all these things might be possible on paper, but I don't think any of us are really in a position to know how they would be implemented in real life. For all I know DLSS could run on Pascal, but it would need to be majorly rewritten to do so - which would take resources away from the tensor-core version and potentially worsen the quality of the product. Worth it? I don't know - people keep throwing out the GPU shortage as a talking point for it, but as far as any of these companies were concerned the shortage was supposed to be over by now. And if it were, I don't know that we'd be having this discussion; more people would be on RTX cards. In a year or so most Nvidia users will be on a tensor-core-capable card.

    I also don't know if I really like throwing Intel in as a comparison point. We have no idea what the quality level of their implementation is - it could be far worse than DLSS, and if that's the case then I don't think the performance metrics matter - for all we know Nvidia could push performance further as well, but they're using some of it to make quality gains instead. Furthermore, Intel and our boy Raja who's running it have made tons of these statements in the past about things "just working" or "look at this performance", and it almost never turns out to be true. So I'd actually like to see XeSS perform across these GPUs the way Intel claims before I start forming any conclusions on it.

    That being said I think you're putting up some good arguments for it and doing a great job of explaining why it may be possible so kudos for that.
     
    Horus-Anhur likes this.
  9. Krizby

    Krizby Ancient Guru

    Messages:
    3,067
    Likes Received:
    1,743
    GPU:
    Asus RTX 4090 TUF
    You can't use FPS to calculate the DLSS overhead; use frametimes. Also, you need 1080p native data vs 4K DLSS Performance numbers, not 4K native vs 4K DLSS Performance.
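
    Roughly what that measurement looks like as a sketch; the FPS values below are placeholders to plug your own benchmark numbers into, not data from this thread, and the helper function is purely illustrative.

    # 4K DLSS Performance renders internally at 1080p, so the frametime difference
    # between it and native 1080p approximates the DLSS pass plus any work that
    # scales with the output resolution. Comparing against native 4K instead would
    # mix in the savings from the lower internal resolution.

    def frametime_ms(fps):
        """Convert an average FPS figure into an average frametime in milliseconds."""
        return 1000.0 / fps

    fps_1080p_native = 188.0   # placeholder - substitute your own measurement
    fps_4k_dlss_perf = 140.0   # placeholder - substitute your own measurement

    overhead_ms = frametime_ms(fps_4k_dlss_perf) - frametime_ms(fps_1080p_native)
    print(f"Approx. DLSS pass + 4K output overhead: {overhead_ms:.2f} ms")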

    Yeah sure Intel is a better source, when they haven't released their products for testing and just throw out marketing materials :rolleyes:. Theranos would love to have you buying their stocks for sure.

    Here is DF's analysis of possible DLSS on the Switch; you'll see what I'm talking about.
     
    Last edited: Jan 14, 2022
  10. cucaulay malkin

    cucaulay malkin Ancient Guru

    Messages:
    9,236
    Likes Received:
    5,208
    GPU:
    AD102/Navi21
    Trying it out in RDR2, with fantastic results.
    AI-upscaled 4K runs at 65-75 FPS on a 3060 Ti, and the image quality is a lot better than 1440p, either with TAA or DLSS Quality.

    [IMG]
    [IMG]
     
    Last edited: Jan 14, 2022
    eGGroLLiO likes this.

  11. aufkrawall2

    aufkrawall2 Ancient Guru

    Messages:
    4,356
    Likes Received:
    1,816
    GPU:
    7800 XT Hellhound
    I can see the DL oil-paint look (which DLSS 2.x doesn't have). It gets worse when you try a game with more detailed textures; I found it to look unconvincing in Witcher 3.
     
  12. Trunks0

    Trunks0 Maha Guru

    Messages:
    1,278
    Likes Received:
    791
    GPU:
    PC RedDevil 7900XTX
    In the RDR2 pic above? RDR2 uses DLSS 2.2.10. You'd have to Google/YouTube around to see if replacing the DLSS version with a newer DLSS 2.x improves things or not.
     
  13. cucaulay malkin

    cucaulay malkin Ancient Guru

    Messages:
    9,236
    Likes Received:
    5,208
    GPU:
    AD102/Navi21
    Damn, it looks so good.
    2.25x performance is pretty much exactly what it costs FPS-wise; 63 x 2.25 is ~140.
    The results are incredible though. I remember using 4K DSR before, on the same 1440p monitor, and it wasn't that much of a difference visually.
    1440p looks like crap in comparison to 2.25x DLDSR.

    Go fullscreen or else it'll distort the image:
    https://imgsli.com/OTIxODg

    God damn, I need a faster card.
    I'm just blown away.
     
  14. Dragam1337

    Dragam1337 Ancient Guru

    Messages:
    5,535
    Likes Received:
    3,581
    GPU:
    RTX 4090 Gaming OC
    I can't see the difference? (Just joking)

    What DLSS mode are you using with 2.25x DLDSR? I ask because I find that 4.00x DSR with DLSS Ultra Performance provides a better image at the same FPS as 2.25x DLDSR with DLSS Quality. Of course this is at 4K; it might be different at 1440p.
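
    For reference, a small sketch of the resolution math behind that comparison, assuming the commonly cited per-axis DLSS scales (2/3 for Quality, 1/3 for Ultra Performance), which are not stated in this thread.

    # DSR/DLDSR factors multiply the pixel *count*, so the per-axis scale is
    # sqrt(factor); DLSS then renders internally at a fraction of that output.
    from math import sqrt

    DLSS_AXIS_SCALE = {"Quality": 2 / 3, "Performance": 1 / 2, "Ultra Performance": 1 / 3}

    def resolutions(display_w, display_h, dsr_factor, dlss_mode):
        axis = sqrt(dsr_factor)
        out = (round(display_w * axis), round(display_h * axis))   # DSR/DLDSR output res
        s = DLSS_AXIS_SCALE[dlss_mode]
        internal = (round(out[0] * s), round(out[1] * s))          # DLSS internal render res
        return out, internal

    # The two configurations being compared, on a 4K display:
    print(resolutions(3840, 2160, 4.00, "Ultra Performance"))  # ((7680, 4320), (2560, 1440))
    print(resolutions(3840, 2160, 2.25, "Quality"))            # ((5760, 3240), (3840, 2160))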
     
  15. cucaulay malkin

    cucaulay malkin Ancient Guru

    Messages:
    9,236
    Likes Received:
    5,208
    GPU:
    AD102/Navi21
    DLSS Quality.
    I'll check that out, thanks.
    But it's probably gonna change at 1440p, indeed.

    I tried 4K DSR with DLSS Performance in RDR2 before and it looked bad.
    Quality with 2.25x looks amazing.
     
    Dragam1337 likes this.

  16. cucaulay malkin

    cucaulay malkin Ancient Guru

    Messages:
    9,236
    Likes Received:
    5,208
    GPU:
    AD102/Navi21
    Does anyone have Far Cry 3 installed on their drive?
    Can you compare the game's standard ambient occlusion vs in-game AO turned off with AO+RTGI running via Freestyle filters?
    Would that get rid of this ugly-ass black halo around everything?

    Don't wanna have to download the whole game just to check it out.
     
  17. RealNC

    RealNC Ancient Guru

    Messages:
    4,953
    Likes Received:
    3,228
    GPU:
    4070 Ti Super
    Since I got a GPU last week that can finally do DLDSR, I only tried it now.

    It does a good job at making Doom 4 look sharp. If I disable the in-game anti-aliasing (which is a somewhat blurry and ghosty TAA) and only use DLDSR 2.25x (which in my case downscales 4K to 1440p) I get a very clean image.

    I wouldn't call DLDSR a "game changer," but it's really nice to be able to get DSR 4x quality without the huge perf hit of running games at 5K (that's what DSR 4x is with 1440p displays.)

    I wonder how many older and blurry TAA-using games I can clean up like this.
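
    The same pixel-count math for a 1440p display, just to make those factors concrete; this is a sketch assuming DSR/DLDSR factors multiply the pixel count, so the per-axis scale is sqrt(factor).

    # DSR 4x on a 1440p display renders at 5K (4x the pixels), while DLDSR 2.25x
    # renders at 4K (2.25x the pixels), which is the trade-off described above.
    from math import sqrt

    display_w, display_h = 2560, 1440

    for name, factor in (("DLDSR 2.25x", 2.25), ("DSR 4x", 4.0)):
        w, h = round(display_w * sqrt(factor)), round(display_h * sqrt(factor))
        pixel_ratio = (w * h) / (display_w * display_h)
        print(f"{name}: {w}x{h} ({pixel_ratio:.2f}x the pixels of 1440p)")
    # DLDSR 2.25x: 3840x2160 (2.25x)
    # DSR 4x:      5120x2880 (4.00x)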
     
  18. tsunami231

    tsunami231 Ancient Guru

    Messages:
    14,725
    Likes Received:
    1,854
    GPU:
    EVGA 1070Ti Black
    Try GRID 2019; that whole game just looks like a blurry mess to me with TAA on at 1080p. I would really like to see DLDSR myself on my monitor; from what I understand, DLDSR 1.78x should look better than DSR 1.78x.

    It would also be nice if all games could use it without the desktop needing to be at those resolutions for it to work, because it doesn't show up in some games even when using "fullscreen", which seems to be happening more and more. And I don't like switching my desktop to DSR resolutions: they're a blurry mess, a pain to read, and for some reason it randomly screws up my icon positions, so I won't use it there.

    In-game is the only place it's worth using.
     
    Last edited: Feb 19, 2024
  19. RealNC

    RealNC Ancient Guru

    Messages:
    4,953
    Likes Received:
    3,228
    GPU:
    4070 Ti Super
    I need to find a pirated version of that that still has DX11. On Steam, the DX11 build of the game was removed. The DX12 version of the game crashes every 10 minutes or so,
    which made it unplayable for me. If there ever was a case where a refund should be possible even after you played a game for a very long time, this would be it. They just broke the game I paid good money for.

    No DSR factor other than 4x ever did anything for me. Well, actually it did something, but not the thing I want to use it for, namely fixing aliasing, especially shimmering. 4x (at 0% smoothing) was the only factor that did that.

    Nvidia should really fix that. It's kind of annoying to have to change desktop res every time and then switch it back.
     
  20. tsunami231

    tsunami231 Ancient Guru

    Messages:
    14,725
    Likes Received:
    1,854
    GPU:
    EVGA 1070Ti Black
    All the DSR factors look like a blurry mess on my monitor on the desktop (and yes, I know that's bound to happen). I pretty much use 1.78x on my monitor to get 1440p in-game, where it works without me having to change the desktop res, and it looks amazing on my 24" monitor. I even preferred 1440p on 24" with DSR over a 27" that actually is 1440p.

    The only actual use I would have for these tensor cores is DLDSR to do 1440p or higher on my 1080p monitor, for all the games I can get it to work on.

    DLSS/FSR/XeSS and FG are all moot features to me; the scalers look terrible at anything lower than 1440p to my eye, with too many artifacts and too much blur compared to native.
     
    Last edited: Feb 19, 2024
