Microsoft Eyeing DirectML as DLSS alternative on Xbox

Discussion in 'Frontpage news' started by Hilbert Hagedoorn, Nov 13, 2020.

  1. Fox2232

    Fox2232 Guest

    Messages:
    11,808
    Likes Received:
    3,371
    GPU:
    6900XT+AW@240Hz
    Sure I do. It's force of habit by now. I've told people so many times that ATi is no more and it's AMD now that I simply treat them as one entity :)
     
    Venix likes this.
  2. Strange Times

    Strange Times Master Guru

    Messages:
    372
    Likes Received:
    110
    GPU:
    RX 6600 XT
    I was referring to games from 2016 and earlier
     
  3. cucaulay malkin

    cucaulay malkin Ancient Guru

    Messages:
    9,236
    Likes Received:
    5,208
    GPU:
    AD102/Navi21
    First thing that comes to my mind: if ML was an option, why did Nvidia sink so much money into R&D and allocate whole server farms for DLSS?
    It seems astoundingly wasteful, even if the goal was to make it proprietary.
     
  4. schmidtbag

    schmidtbag Ancient Guru

    Messages:
    8,009
    Likes Received:
    4,383
    GPU:
    Asrock 7700XT
    Nodes hardly make a difference these days, especially when you consider Samsung's is only off by 1nm (maybe not even that much, depending on how you measure node sizes). People make fun of Intel's 14nm+++++, but when they aren't compensating for a lack of innovation with higher clock speeds, their efficiency is actually still very competitive with AMD's.
    The main reason to shrink the node now has more to do with squeezing more product out of a single wafer.

    So if you really want to get pedantic, it's more like an apples to pears comparison.
     

  5. Astyanax

    Astyanax Ancient Guru

    Messages:
    17,035
    Likes Received:
    7,378
    GPU:
    GTX 1080ti
    My only concern is that, unlike DLSS, DML implementations take 20ms.
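To put that number in perspective: the 20 ms figure is the commenter's claim, not a measured benchmark, but taken at face value it can be checked against per-frame time budgets with simple arithmetic:

```python
# Rough frame-budget arithmetic: how much of a frame an upscaling pass would eat.
# The 20 ms DirectML figure is the commenter's number, treated here as an assumption.

def frame_budget_ms(fps: float) -> float:
    """Total time available per frame at a given target frame rate."""
    return 1000.0 / fps

for fps in (30, 60, 120):
    budget = frame_budget_ms(fps)
    print(f"{fps:>3} fps -> {budget:5.2f} ms/frame; "
          f"a 20 ms upscale pass leaves {budget - 20.0:6.2f} ms for everything else")
```

At 60 fps the whole frame budget is about 16.7 ms, so a 20 ms pass would exceed it on its own; only at 30 fps would there be headroom left over.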
     
  6. Stormyandcold

    Stormyandcold Ancient Guru

    Messages:
    5,872
    Likes Received:
    446
    GPU:
    RTX3080ti Founders
    OK, I can accept that.
     
  7. TieSKey

    TieSKey Master Guru

    Messages:
    226
    Likes Received:
    85
    GPU:
    Gtx870m 3Gb
    Well, comparing against bilinear is like comparing your new 2020 car against the original Ford Model T....

    I really don't understand the appeal of this kind of tech. Used as AA at the SAME resolution, it sounds reasonable, like many other techniques with their own drawbacks.
    But spending GPU power to upscale a lower-res render instead of using it to actually render things? If that actually "solves" anyone's problem, I think it screams that the problem is self-generated/silly in the first place (not saying it would be easy to solve, but doable). Problems like... a really, really bad driver<->API stack, really aged rendering tech, screens with resolutions beyond what users can actually see from a couch, etc.
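For context on the tradeoff being argued about: the appeal of render-low, upscale-high schemes is shading fewer pixels. A quick sketch of the arithmetic, assuming shading cost scales roughly with pixel count (real savings depend on how resolution-bound the frame actually is):

```python
# Pixel-count arithmetic behind render-low, upscale-high schemes.
# Assumption: shading cost scales roughly with pixel count; real-world
# savings depend on how much of the frame time is resolution-bound.

def pixels(w: int, h: int) -> int:
    """Total pixel count for a given resolution."""
    return w * h

native_4k = pixels(3840, 2160)
internal  = pixels(2560, 1440)   # a typical "quality-mode" internal resolution

ratio = internal / native_4k
print(f"1440p shades {ratio:.0%} of the pixels of native 4K")
```

Rendering internally at 1440p shades about 44% of the pixels of native 4K, which is the headroom the upscaler then spends part of; the disagreement in the thread is over whether that trade is worth it.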


    I'd rather have Nvidia/AMD/MS/the Vulkan consortium working full steam on more and better variable-rate shading, better ray tracing integration, etc.
     
