Microsoft Eying DirectML as DLSS alternative on Xbox

Discussion in 'Frontpage news' started by Hilbert Hagedoorn, Nov 13, 2020.

  1. Hilbert Hagedoorn

    Hilbert Hagedoorn Don Vito Corleone Staff Member

    Messages:
    48,544
    Likes Received:
    18,856
    GPU:
    AMD | NVIDIA
    Much like AMD is eying DirectML as a substitute for DLSS, Microsoft is doing the same thing. And yes, they're actively developing it as part of DirectX. What is interesting to learn, however, is that the technolog...

    Microsoft Eying DirectML as DLSS alternative on Xbox
     
  2. Noisiv

    Noisiv Ancient Guru

    Messages:
    8,230
    Likes Received:
    1,494
    GPU:
    2070 Super
    Yes please. Looks great.
    But we already knew that AI image reconstruction is capable of great quality. Minimizing the performance hit is the key issue.

    https://creativecoding.soe.ucsc.edu/QW-Net/

    " Recently, a combined real-time image reconstruction technique called Deep Learning Super Sampling (DLSS) [Liu 2020] was introduced, but the details of the underlying network are unknown.
    Concurrent to our work, Xiao introduced a reconstruction technique based on U-Net. Using an optimized inference implementation they reconstruct a 1080p image in 18 to 20 ms on a high-end GPU. In comparison, DLSS reconstructs a 4K image in under 2 ms. Both these approaches can reconstruct images at a higher resolution than the input render. "


     
    pharma likes this.
  3. Denial

    Denial Ancient Guru

    Messages:
    14,207
    Likes Received:
    4,121
    GPU:
    EVGA RTX 3080
    To be clear, DirectML is a substitute for NGX, not DLSS. DLSS is an application built on top of Nvidia's NGX, just as whatever AI-based upscaler AMD/Microsoft build will be an application on top of Win/DirectML.
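
    To make the layering concrete, here's a minimal sketch (my own illustration, not from the article; the model file name is hypothetical) of an "application" running on top of DirectML through ONNX Runtime's DmlExecutionProvider:

    ```python
    # Minimal sketch: DirectML (via ONNX Runtime's DmlExecutionProvider) is the
    # platform-level inference layer; the upscaling network itself is the
    # "application" built on top of it. "upscaler.onnx" is a hypothetical model.
    import numpy as np
    import onnxruntime as ort  # pip install onnxruntime-directml

    sess = ort.InferenceSession(
        "upscaler.onnx",                     # hypothetical upscaling model
        providers=["DmlExecutionProvider"],  # DirectML backend on Windows
    )

    # Feed a low-resolution frame (NCHW float32) and get the reconstructed frame.
    lowres = np.zeros((1, 3, 540, 960), dtype=np.float32)
    hires = sess.run(None, {sess.get_inputs()[0].name: lowres})[0]
    print(hires.shape)
    ```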
     
    craycray, nevcairiel and pharma like this.
  4. Neo Cyrus

    Neo Cyrus Ancient Guru

    Messages:
    10,793
    Likes Received:
    1,396
    GPU:
    黃仁勳 stole my 4090
    Thank frack that M$ is stepping up where AMD hasn't yet.
     

  5. Stormyandcold

    Stormyandcold Ancient Guru

    Messages:
    5,872
    Likes Received:
    446
    GPU:
    RTX3080ti Founders
    So better quality at the cost of latency and performance.
     
  6. theoneofgod

    theoneofgod Ancient Guru

    Messages:
    4,677
    Likes Received:
    287
    GPU:
    RX 580 8GB
    At the cost of performance? Not quite the DLSS alternative.
     
  7. Dazz

    Dazz Maha Guru

    Messages:
    1,010
    Likes Received:
    131
    GPU:
    ASUS STRIX RTX 2080
    We have been hearing about a DLSS-type API for AMD's new cards; I wonder if this is it?
     
  8. Noisiv

    Noisiv Ancient Guru

    Messages:
    8,230
    Likes Received:
    1,494
    GPU:
    2070 Super
    In short - YES. AMD needs to step up.
    MS has been "eying" DirectML image upscaling at least since 2018. But MS can't do it alone. In order to get acceptable performance, hardware partners need to provide architecture-specific optimizations.

    The images from the Forza demo you're seeing in the article were produced using an Nvidia TensorFlow model converted to DirectML and further accelerated with Nvidia-specific optimizations.
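
    For anyone curious, a conversion like that typically goes TensorFlow -> ONNX -> DirectML; a hedged sketch of that path (an assumption about the general workflow, not Microsoft's actual pipeline; file names are made up):

    ```python
    # Hedged sketch of a TensorFlow -> DirectML path (an assumption about the
    # general workflow, not Microsoft's actual pipeline; file names are made up).
    #
    # Step 1: export the TensorFlow SavedModel to ONNX, e.g. with tf2onnx:
    #   python -m tf2onnx.convert --saved-model upscaler_savedmodel \
    #       --opset 13 --output upscaler.onnx
    #
    # Step 2: load the converted model with the DirectML execution provider and
    # inspect what inputs it expects before wiring it into a renderer.
    import onnxruntime as ort  # pip install onnxruntime-directml

    sess = ort.InferenceSession("upscaler.onnx",
                                providers=["DmlExecutionProvider"])
    print([(i.name, i.shape) for i in sess.get_inputs()])
    ```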

     
    pharma likes this.
  9. Strange Times

    Strange Times Master Guru

    Messages:
    372
    Likes Received:
    110
    GPU:
    RX 6600 XT
    Wow, I miss the no-AA days. TAA must be destroyed.
     
    Prince Valiant and Exodite like this.
  10. schmidtbag

    schmidtbag Ancient Guru

    Messages:
    8,020
    Likes Received:
    4,396
    GPU:
    Asrock 7700XT
    Should MS succeed, this could make Xbox a more appealing platform. Sure, PS5 gets the better load times, but I doubt Xbox is going to load slowly enough to annoy people.

    Right... because developing an AI that basically comes up with lost information by itself is totally something they can pull off in just 2 years....

    I'm sure DLSS was in development for a while, especially since Nvidia wasn't exactly in a rush; nothing like it really existed yet. The first iteration of it was unappealing enough that many didn't care to use it. So if Nvidia had all the time in the world and a chip dedicated to processing it and still didn't achieve ideal results, I find it rather unreasonable to expect AMD to come up with a compelling response in a timely manner. It's not a matter of them "stepping up"; the problem is that this isn't a simple task.

    Consoles tend to play games at 30 FPS. The latency is already garbage. So really, the latency difference shouldn't be noticed - it should just yield better graphical detail.
     

  11. Stormyandcold

    Stormyandcold Ancient Guru

    Messages:
    5,872
    Likes Received:
    446
    GPU:
    RTX3080ti Founders
    I don't agree; the latency will be noticeable enough to feel like you're playing streamed content. Imho, this tech needs hardware support, period. There's a reason Nvidia did it their way with dedicated hardware; they've obviously done all the tests. This is also why they're willing to donate it to DirectML: they already know it runs like crap elsewhere while running great on their hardware. A totally sound decision business-wise.

    Now, if the Switch Pro (an obvious use-case) supports DLSS, then low-powered hardware could produce much better image quality...
     
  12. Neo Cyrus

    Neo Cyrus Ancient Guru

    Messages:
    10,793
    Likes Received:
    1,396
    GPU:
    黃仁勳 stole my 4090
    Yes, and they knew of it far, far longer than 2 years ago; they were just betting on it being a fad, which was stupid. I don't think you understand how enormous AMD is, even if they are tiny compared to the others. They most definitely could have, and should have, had something by now.

    DLSS matters so much at this point that it makes people buy nVidia instead of waiting for AMD. My local PC shop telling me that (so far) there is no pre-ordering of 6800 XTs, combined with CP 2077 being weeks away and supporting DLSS, made me snap and order a 3080, despite how much I loathe nVidia in recent years, and their filthy 10GB BS granting me a nice 1GB downgrade in capacity. It'll most likely arrive next week.

    Off topic-ish, but a question for anyone who knows: are GPUs binned at all anymore?

    I don't think they are, as nVidia just craps out dies without the binning they used to do, and I haven't heard of the partners doing binning lately. I went full retard and ordered the TUF 3080 OC, $50 over the base model and $100 over the MSRP of the 3080 FE, because I have no idea how much of a difference the different BIOS will make, when proper flashing tools will be available if it matters, or whether there is any binning whatsoever. It already costs a stupid amount; I wasn't about to cheap out on $50 on a gamble that they're the same level of GPU die... which they should be. It basically feels like I flushed $50 down the toilet.
     
    Last edited: Nov 13, 2020
  13. schmidtbag

    schmidtbag Ancient Guru

    Messages:
    8,020
    Likes Received:
    4,396
    GPU:
    Asrock 7700XT
    Well, if the first version was anything to go by, they'd have been right.
    AMD is large, but they struggle to make competent drivers for Windows and have had other woes with Ryzen AGESAs. On top of that, they struggle to field hardware that can compete with Nvidia's performance. So I'm not sure why you expected they'd have the resources to develop an AI upscaler when for the past 2 years they've been playing catch-up on multiple fronts.
    You do not represent everyone. DLSS is one of the best ideas Nvidia has come up with, but most people don't see it as a dealbreaker because it doesn't work for everything. You're better off paying more to reliably get more performance than paying less for something that might have great performance with slightly compromised quality.
    Don't get me wrong, I want AMD to get DirectML working ASAP too; I just think you have very unrealistic demands for a company that has only just recently become able to compete against Nvidia in an apples-to-apples comparison.

    Yes, they are. That's how you get the OC and super-OC variants from AIB partners. But GPUs are so complicated these days that even if you have two of the same model from the same brand, you'll get different results.
    Probably. At least you didn't flush $250 by buying from a scalper.
     
    Gandul likes this.
  14. RED.Misfit

    RED.Misfit Member Guru

    Messages:
    144
    Likes Received:
    83
    GPU:
    MSI 1080 Gaming X
    The story is simple to understand: they've done the job on the CPU side, with new CPUs dropping on a nearly one-year cycle, and they're going to try to do the same on the GPU side. Give them 2 or 3 years (counting from the RDNA 1 release) before they maybe lead nVidia by a small margin.
    RDNA 1 launched in July 2019 and RDNA 2 in November 2020 (doubling or almost doubling performance while increasing power consumption by roughly 50%); RDNA 3 should arrive very late 2021 or early 2022, and might give them an edge over nVidia until Hopper is released.
    nVidia has been on a 2-year cycle between generations, so AMD might take the performance crown if they can follow the same pattern and roadmap execution as on the CPU side.

    nVidia did have a weird launch this year, and the fact that their communication is being handled so strangely also suggests they didn't expect AMD to be this close (see those 3070 Ti and 3080 20GB rumors). That said, nVidia has done a fair job improving performance each generation (while improving efficiency by a lot) and introducing ray tracing and DLSS.
    But those are still tied to only a few games, which gives AMD some more time to catch up in these two specific areas. And AMD has done a pretty good job on efficiency in just two GPU generations.

    Supporting DirectML/Super Resolution rather than developing its own solution is not a bad thing: it's open and it's part of DirectX. It should work with more games than DLSS ever could, and game developers won't have to bother supporting both; they'll go straight for DirectML, and nVidia will support it through its Tensor Cores.
    The only question is what Vulkan/Khronos are going to do.
     
  15. labidas

    labidas Guest

    Messages:
    328
    Likes Received:
    119
    GPU:
    R9 290 / RX5700XT
    Frack that. I'm not gonna use fake resolutions. #purist #nofakepixels
     
    ViperXtreme and Strange Times like this.

  16. vestibule

    vestibule Ancient Guru

    Messages:
    2,195
    Likes Received:
    1,410
    GPU:
    Radeon RX6600XT
    I'm liking the look of that. Bring on the textures and ray tracing.
     
  17. Stormyandcold

    Stormyandcold Ancient Guru

    Messages:
    5,872
    Likes Received:
    446
    GPU:
    RTX3080ti Founders
    It's not apples to apples though; it's still a different node. Nvidia clearly miscalculated and should've stuck with TSMC. What we're seeing is apples and oranges: with AMD on a smaller node, it would've been a disaster for them if they still couldn't compete even with that advantage.
     
  18. Fox2232

    Fox2232 Guest

    Messages:
    11,808
    Likes Received:
    3,371
    GPU:
    6900XT+AW@240Hz
    Those days when games had 32x32 textures at most? Days when AF was buggy and its effect changed depending on surface angle?
    Days when Matrox had better IQ than AMD and AMD had better IQ than nVidia?
    I do not miss those days. Sure, I had my GPU overclocked by 40% and a Quadro softmod. But I do miss the actual ability to force the AA of my choice in any game.

    And I agree, TAA in almost every form it has been implemented to date should die. Per-pixel IQ loss just to get rid of potential shimmering in a game that does not suffer from it anyway...

    Then someone comes along and makes an IQ comparison of bad TAA against DLSS. At least here it is shown for what it is in terms of SNR: TAA is often much worse than a no-AA image rendered at a lower resolution.
    The TAA in the Zengarden example is a plain "WTF?". If someone took those crops and showed each one to 1,000 random people, most would fail to recognize the greenery after TAA messed it all up. And the paradoxical fact is that SSIM tells you the TAA image is 99.3% similar to the reference, while the no-AA image is 96.4% similar and looks much better.
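
    Incidentally, that kind of SSIM check is easy to reproduce; a minimal sketch (the crop file names are placeholders, assuming scikit-image >= 0.19 and imageio are installed):

    ```python
    # Minimal sketch of the SSIM comparison described above; the crop file
    # names are placeholders.
    import imageio.v3 as iio
    from skimage.metrics import structural_similarity as ssim

    ref  = iio.imread("reference_256x_ssaa.png")  # ground-truth crop
    taa  = iio.imread("taa.png")                  # TAA-processed crop
    noaa = iio.imread("no_aa.png")                # raw, no-AA crop

    # channel_axis=2 treats the last axis as the RGB channels.
    print("TAA   vs ref:", ssim(ref, taa,  channel_axis=2))
    print("no-AA vs ref:", ssim(ref, noaa, channel_axis=2))
    ```

    Which is exactly the paradox: a single scalar similarity score can rank a perceptually worse image higher.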

    I think they could use something like early discard, because the unprocessed image has reasonable IQ on most surfaces and mainly needs help with edges, while each of these methods blurs surfaces to some degree.

    It is nice that they used 256x SSAA for the reference images. But when the downsampling method is not sharp enough to preserve fine detail, the reference image is nothing to write home about.
     
    Last edited: Nov 14, 2020
    Venix likes this.
  19. tsunami231

    tsunami231 Ancient Guru

    Messages:
    14,750
    Likes Received:
    1,868
    GPU:
    EVGA 1070Ti Black
    I never liked TAA. The first time I saw it was in Skyrim SE, and I had it on until I realized it blurred things in motion, so I turned it off and went back to FXAA, which is the only shader-based AA I like (and, if done properly, I think it's really good). TXAA was supposed to fix the blur issue with TAA, but I never saw it in anything, so I don't know.

    I think MSAA is a myth at this point; I rarely if ever see it in game options these days. Most AA is shader-based at this point.
     
  20. Venix

    Venix Ancient Guru

    Messages:
    3,473
    Likes Received:
    1,972
    GPU:
    Rtx 4070 super
    @Fox2232 "Days when Matrox had better IQ than AMD and AMD had better IQ than nVidia?"


    No no no no! AMD had just CPUs back then! "Days when Matrox had better IQ than ATI and ATI had better IQ than nVidia?" There, fixed!

    Hehe, I just had to, brother. I'm pretty sure that, since you know about that era, it was force of habit that made you write AMD :p
     
    Fox2232 likes this.
