Microsoft Eying DirectML as DLSS alternative on Xbox

Discussion in 'Frontpage news' started by Hilbert Hagedoorn, Nov 13, 2020.

  1. Fox2232

    Fox2232 Ancient Guru

    Messages:
    11,804
    Likes Received:
    3,359
    GPU:
    6900XT+AW@240Hz
    Sure I do. It's force of habit by now. I've told people so many times that ATi is no more and it's AMD now that I simply treat them as one entity :)
     
  2. Strange Times

    Strange Times Master Guru

    Messages:
    289
    Likes Received:
    85
    GPU:
    RX 580 UV
    I was referring to games from 2016 and earlier.
     
  3. cucaulay malkin

    cucaulay malkin Ancient Guru

    Messages:
    1,550
    Likes Received:
    717
    GPU:
    107001070
    First thing that comes to my mind: if ML upscaling was an option on general hardware, why did Nvidia sink so much money into R&D and allocate whole server farms to DLSS?
    It seems astoundingly wasteful, even if the goal was to make it proprietary.
     
  4. schmidtbag

    schmidtbag Ancient Guru

    Messages:
    5,816
    Likes Received:
    2,240
    GPU:
    HIS R9 290
    Nodes hardly make a difference these days, especially when you consider Samsung's is only off by 1nm (maybe not even that much, depending on how you measure node sizes). People make fun of Intel's 14nm+++++, but when they don't compensate for their lack of innovation with more clock speed, their efficiency is actually still very competitive against AMD.
    The main reason to shrink the node now has more to do with squeezing more product out of a single wafer.

    So if you really want to get pedantic, it's more like an apples-to-pears comparison.
     

  5. Astyanax

    Astyanax Ancient Guru

    Messages:
    10,306
    Likes Received:
    3,704
    GPU:
    GTX 1080ti
    My only concern is that, unlike DLSS, DML implementations take 20ms.
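    For context on why 20 ms would be a problem: at 60 fps, a frame has only about 16.7 ms in total, so a pass that long can't fit at all. A quick sketch of that arithmetic (the 20 ms figure is the one quoted in the post above, not a measured number):

    ```python
    def frame_budget_ms(fps: float) -> float:
        """Total time available per frame, in milliseconds, at a given frame rate."""
        return 1000.0 / fps

    upscale_cost_ms = 20.0  # figure quoted above for a DML upscale pass (assumption, not measured)

    # At 60 fps the whole frame budget is ~16.7 ms, so a 20 ms pass alone overruns it.
    print(upscale_cost_ms > frame_budget_ms(60))  # True
    # At 30 fps (~33.3 ms budget) it would fit, but leaves only ~13 ms for actual rendering.
    print(upscale_cost_ms > frame_budget_ms(30))  # False
    ```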
     
  6. Stormyandcold

    Stormyandcold Ancient Guru

    Messages:
    5,634
    Likes Received:
    335
    GPU:
    MSI GTX1070 GamingX
    OK, I can accept that.
     
  7. TieSKey

    TieSKey Member Guru

    Messages:
    187
    Likes Received:
    65
    GPU:
    Gtx870m 3Gb
    Well, comparing it to bilinear is like comparing your new 2020 car against the original Ford Model T....

    I really don't understand the appeal of this kind of tech. Used as AA at the SAME resolution it sounds reasonable, like many other techniques with their own drawbacks.
    But spending GPU power to upscale a lower-res render instead of using it to actually render things? If that actually "solves" anyone's problem, I think it screams that the problem is self-generated/silly in the first place (not saying it would be easy to solve, but doable). Problems like... a really, really bad driver<->API stack, really aged rendering tech, screens with resolutions above what users can actually see from a couch, etc.


    I'd rather have Nvidia/AMD/MS/the Vulkan consortium working full steam on more and better variable rate shading, better ray tracing integration, etc.
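    To make the bilinear baseline concrete, here's a minimal toy sketch of bilinear upscaling on a grayscale image (pure Python for illustration; real GPU texture samplers do this in hardware with different edge handling):

    ```python
    def bilinear_upscale(img, new_w, new_h):
        """Resample a 2D grayscale image (list of rows) to new_w x new_h
        by linearly interpolating between the four nearest source pixels."""
        old_h, old_w = len(img), len(img[0])
        out = []
        for j in range(new_h):
            # Map the output row back into source coordinates.
            y = j * (old_h - 1) / (new_h - 1) if new_h > 1 else 0.0
            y0, ty = int(y), y - int(y)
            y1 = min(y0 + 1, old_h - 1)
            row = []
            for i in range(new_w):
                x = i * (old_w - 1) / (new_w - 1) if new_w > 1 else 0.0
                x0, tx = int(x), x - int(x)
                x1 = min(x0 + 1, old_w - 1)
                # Blend horizontally on the two bracketing rows, then vertically.
                top = img[y0][x0] * (1 - tx) + img[y0][x1] * tx
                bot = img[y1][x0] * (1 - tx) + img[y1][x1] * tx
                row.append(top * (1 - ty) + bot * ty)
            out.append(row)
        return out

    # Upscale a 2x2 gradient to 3x3: the new center pixel is the average of all four.
    print(bilinear_upscale([[0, 1], [2, 3]], 3, 3)[1][1])  # 1.5
    ```

    This is the cheap baseline DLSS-style comparisons are made against: no new detail is created, just smooth blends, which is why ML reconstruction looks so much better next to it.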
     
