AMD Could Do DLSS Alternative with Radeon VII through DirectML API

Discussion in 'Frontpage news' started by Hilbert Hagedoorn, Jan 17, 2019.

  1. DrKeo

    DrKeo Member

    Messages:
    35
    Likes Received:
    12
    GPU:
    Gigabyte G1 970GTX 4GB
    Radeon VII costs so much because 16GB of HBM2 costs more than $300. So Nvidia's cards are overpriced because of the RT and AI cores (and obviously because they have no competition from AMD), and the Radeon VII is overpriced because of the HBM memory.
     
  2. ht_addict

    ht_addict Active Member

    Messages:
    67
    Likes Received:
    10
    GPU:
    Asus Vega64(CF)
    Probably because they got sucked into the RTX hype. At least with the VII you're just paying for the cost of 16GB of HBM, a feature that isn't dedicated to one specific task and doesn't sit idle when not in use.
     
  3. Denial

    Denial Ancient Guru

    Messages:
    12,658
    Likes Received:
    1,879
    GPU:
    EVGA 1080Ti
    I mean, it kind of does, considering zero games really need that much VRAM.

    Regardless, both arguments are dumb. Various features throughout the years have eaten die space until they became useful. It's not like Nvidia is going to push DXR into hundreds of games without hardware for it. It has to start somewhere.
     
  4. DrKeo

    DrKeo Member

    Messages:
    35
    Likes Received:
    12
    GPU:
    Gigabyte G1 970GTX 4GB
    It's actually the other way around: no game uses 16GB or needs 1,000 GB/s of bandwidth, but some games use DLSS and RT (or at least more will in 2019). Both are very high-end features, though, and should be optional. Nvidia should make a GTX 2080 and AMD should make an 8GB Radeon VII.
     
    Last edited: Jan 19, 2019

  5. Fox2232

    Fox2232 Ancient Guru

    Messages:
    9,955
    Likes Received:
    2,298
    GPU:
    5700XT+AW@240Hz
    Sounds awfully like MADD or FMA. Every GPU has those instructions optimized today.
    Tensor vs. standard compute: I say there is no difference until proven otherwise. (Who's up for the challenge?)
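    For reference, MADD/FMA is just a per-lane d = a*b + c. A minimal CUDA sketch of that "standard compute" baseline (the kernel name is mine, purely illustrative):

        // Plain fused multiply-add: one result per thread per instruction.
        // This is the per-lane baseline that tensor cores are compared against.
        __global__ void fma_kernel(const float *a, const float *b,
                                   const float *c, float *d, int n) {
            int i = blockIdx.x * blockDim.x + threadIdx.x;
            if (i < n)
                d[i] = fmaf(a[i], b[i], c[i]);  // compiles to a single FFMA
        }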
     
  6. DrKeo

    DrKeo Member

    Messages:
    35
    Likes Received:
    12
    GPU:
    Gigabyte G1 970GTX 4GB
    Tensor cores specialize in matrix arithmetic. They aren't just more CUDA cores. I wish they were; we could have had better performance in rasterization.
     
    Last edited: Jan 19, 2019
  7. Fox2232

    Fox2232 Ancient Guru

    Messages:
    9,955
    Likes Received:
    2,298
    GPU:
    5700XT+AW@240Hz
    But is there software which compares them on those important workloads to generic CUs/SMs? I do not think we have that yet.
     
  8. DrKeo

    DrKeo Member

    Messages:
    35
    Likes Received:
    12
    GPU:
    Gigabyte G1 970GTX 4GB
    I don't know, but I guess there are professional AI programs that can test that, considering Volta cards have had tensor cores for the past two years. Nvidia claims 12x the performance of the CUDA cores at FP32 on matrix arithmetic.
     
  9. dr_rus

    dr_rus Ancient Guru

    Messages:
    2,984
    Likes Received:
    333
    GPU:
    RTX 2080 OC
    Radeon VII costs so much because it will perform close to the RTX 2080, which costs as much. It's as simple as that, really. If it performed close to a 2070, it would be priced like a 2070 no matter how much VRAM it had. Pricing is decided by the market, not by features or die or RAM sizes.

    MAD is FMA. Tensor cores perform a MAD across a matrix of values, meaning you get 16 results per clock (I believe the exact number varies with the precision used on Turing) instead of just one; that's why tensor cores are so fast compared to regular SIMDs (see the sketch below). But not all data can be expressed as matrices, so tensor cores' usefulness is mostly limited to deep-learning algorithms.
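    A rough picture of that matrix MAD through CUDA's WMMA API (a minimal sketch, assuming a Volta/Turing-class GPU, sm_70 or newer): one warp-level operation computes D = A*B + C over whole 16x16 tiles instead of one scalar FMA per thread.

        #include <cuda_fp16.h>
        #include <mma.h>
        using namespace nvcuda;

        // One warp cooperatively computes a 16x16 tile: D = A*B + acc,
        // with FP16 inputs and FP32 accumulation (the mixed-precision mode).
        __global__ void tensor_tile(const half *A, const half *B, float *D) {
            wmma::fragment<wmma::matrix_a, 16, 16, 16, half, wmma::row_major> a;
            wmma::fragment<wmma::matrix_b, 16, 16, 16, half, wmma::row_major> b;
            wmma::fragment<wmma::accumulator, 16, 16, 16, float> acc;

            wmma::fill_fragment(acc, 0.0f);    // C = 0 in this sketch
            wmma::load_matrix_sync(a, A, 16);  // leading dimension = 16
            wmma::load_matrix_sync(b, B, 16);
            wmma::mma_sync(acc, a, b, acc);    // whole-tile multiply-accumulate
            wmma::store_matrix_sync(D, acc, 16, wmma::mem_row_major);
        }

    A single 16x16x16 tile operation is 4,096 multiply-adds issued as one warp-wide instruction, which is where the order-of-magnitude throughput figures come from.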

    You can say the same about both the RT and tensor cores of Turing: there are applications for both beyond ray tracing and DLSS. But at least these are new hardware, not just an extension of the VRAM pool, which will most likely be useless on the RVII in gaming.
     
    Last edited: Jan 19, 2019
  10. DrKeo

    DrKeo Member

    Messages:
    35
    Likes Received:
    12
    GPU:
    Gigabyte G1 970GTX 4GB
    If the RVII performed like the 2070, it wouldn't have existed, or it would have been a different card. You can't price a card that has $300+ of memory at $500 no matter what you do, so they would never have been able to challenge the $499 RTX 2070. But I do think that if the RTX 2080 weren't a $700 card, AMD would have used 8GB and priced it closer to $500.
     

  11. Alessio1989

    Alessio1989 Ancient Guru

    Messages:
    1,627
    Likes Received:
    359
    GPU:
    .
    You do not understand: 8-bit or lower-precision integers are useless without dedicated hardware to boost their throughput. Boosting throughput through dedicated hardware is the only reason for those shitty precision types to exist. In fact, we do not need 4-bit and 8-bit int, nor FP16 and FP10, at all. Those lower-precision types can potentially be used for certain operations that do not need greater precision, but without proper hardware support they are just a waste of time compared to higher-precision types.
    Precision types do not matter at all if the final output can be stored without accuracy loss; what really matters is the internal operations, or better, the computational approach used to reach that output. Traditional shadow mapping will never take any advantage of those lower-precision types; in fact, traditional shadow mapping is already full of artifacts and compromises caused by the accuracy loss of floating-point operations.
    Ranting about missing support for those shitty precision types, thinking it will magically increase rendering quality and speed, is just falling for pure PR marketing lies.
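    To illustrate the dedicated-hardware point: on NVIDIA GPUs that expose it (sm_61 and newer), the __dp4a intrinsic does a four-way int8 dot product with 32-bit accumulation in a single instruction; without such hardware the same math decomposes into several ordinary integer operations and the low precision buys you nothing. A minimal CUDA sketch (the kernel name is mine):

        // Each int packs four int8 lanes; __dp4a computes
        // a.x*b.x + a.y*b.y + a.z*b.z + a.w*b.w + c in one instruction.
        __global__ void dot_int8(const int *a, const int *b, int *out, int n) {
            int i = blockIdx.x * blockDim.x + threadIdx.x;
            if (i < n)
                out[i] = __dp4a(a[i], b[i], 0);
        }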
     
    Last edited: Jan 19, 2019
  12. Cyberdyne

    Cyberdyne Ancient Guru

    Messages:
    3,399
    Likes Received:
    189
    GPU:
    2080 Ti FTW3 Ultra
    Didn't I have a long argument with you in which you said FreeSync would NEVER be supported by Nvidia?
     
  13. MorganX

    MorganX Member Guru

    Messages:
    127
    Likes Received:
    9
    GPU:
    Red Devil Vega 64
    Keep in mind, things are not static. All RAM prices will continue to decline over time, 10-20% this year based on various estimates. Also, we don't know how much AMD is paying for HBM in volume today. As all of the silicon manufacturing processes mature and yields go up, prices go down.

    The 2080 is generally $799, and is now being discounted to $699. By the time it drops another $50-$100, AMD should have more margin to play with.

    We won't know until sales start, but I think there's more demand than people think, provided it does trade blows with the 2080 at stock.
     
  14. DW75

    DW75 Maha Guru

    Messages:
    1,161
    Likes Received:
    566
    GPU:
    ROG GTX1080 Ti OC
    Things are going to get interesting next month. I think that a few weeks after the release of this card, the prices of the RTX 2080 and Radeon VII will both drop 50 bucks, and perhaps even 100. There is going to be a price war between these cards. Nvidia will then lower the price of the 2080 Ti, because no one will buy one anymore. All of these cards are too expensive for the performance they offer. At 700 bucks, though, and with 16 gigs of VRAM, if this card matches or slightly beats a 2080, it will render the 2080 a pointless buy. AMD knows exactly what it is doing here, and this is a very smart move. That huge frame buffer will convince many people to buy this instead of a 2080.
     
  15. Fox2232

    Fox2232 Ancient Guru

    Messages:
    9,955
    Likes Received:
    2,298
    GPU:
    5700XT+AW@240Hz
    AMD is not the only one in need of HBM2. I remember an article from around 6 months ago saying that Samsung is doubling its HBM2 production and that this will still not be enough to satisfy demand.
    Volume production and better manufacturing technology => higher yields => lower price.
    But supply <=> demand mechanics are not exactly positioned to reduce prices much beyond that.
     

  16. MorganX

    MorganX Member Guru

    Messages:
    127
    Likes Received:
    9
    GPU:
    Red Devil Vega 64
    Fair point. But the fact that so many are using it now justifies the investment in production capacity and process improvement. I don't know what the curve looks like, and time will tell based on prices, but it is inevitably headed in the right direction.
     
    Fox2232 likes this.
  17. dr_rus

    dr_rus Ancient Guru

    Messages:
    2,984
    Likes Received:
    333
    GPU:
    RTX 2080 OC
    FreeSync is AMD's trademark and thus cannot be "supported" by anyone but AMD. And I never said anything about NV never supporting VESA adaptive sync.
     
  18. Andrew LB

    Andrew LB Maha Guru

    Messages:
    1,104
    Likes Received:
    162
    GPU:
    EVGA GTX 1080@2,025
    Only when looking at still images that are zoomed in, like most critics love doing. DLSS really shines when it's in motion. You can't honestly look at this video and tell me that 4K TAA looks better.

     
  19. Fox2232

    Fox2232 Ancient Guru

    Messages:
    9,955
    Likes Received:
    2,298
    GPU:
    5700XT+AW@240Hz
    I hate to break it to anyone, but TAA loses this comparison on every front, be it in static or motion imagery.
    - TAA is blurrier even when the video is downsampled from 4K to 1080p to fit my screen, which means that at 4K the difference must be much worse.
    - TAA suppresses fine details.
    - TAA performance is worse, on top of those two primary downsides.
    [image: DLSS vs. TAA comparison]
    Now, in this instance (image above) DLSS has some weaknesses too:
    - The zipper does not have the fine details TAA has, because DLSS did not render it at high resolution in the first place. (Thus the performance trade-off.)
    - But stop staring at the b00b and look at the hair. With TAA you see a classic dither-like artifact (sky pixels sharply visible through the strands), which looks much worse in motion. DLSS handles it in a much more natural fashion.
    - Look at the jaw line and neck: TAA is jaggy.

    So, one can attempt to say: "TAA has an advantage somewhere." No, sorry, TAA does not. The moment you configure rendering to work with DLSS in a way that results in a comparable framerate, DLSS wins absolutely everywhere.

    The situation above demonstrates that DLSS wins on 90% of the screen while providing better performance.

    The best demonstration would be if the game were rendered at 4K on both, and TAA/DLSS then applied for 8K output. The result would be the same performance, and the difference in detail achieved would show the strength of DLSS. (Or, for easier access, rendered at 1080p and displayed as 4K.)
     
    Maddness likes this.
  20. xrodney

    xrodney Master Guru

    Messages:
    331
    Likes Received:
    46
    GPU:
    Aorus 1080ti xtreme
    A demo of a benchmark that no one uses, because it does not really reflect a real game; it also does not show many of the DLSS issues you find in-game...
    Look at the DLSS test GN did: pretty much any thin object in the game, like fences, power lines, or cranes, flickers terribly when moving.
    Text and very small details also look worse/blurrier compared with TAA.

    This is because DLSS is not anti-aliasing at all, but rather an upscaler (render at 1440p -> upscale to 4K) with some post-processing that tries to compute an approximation of the missing pixels, and that is what causes these issues (roughly the flow sketched below).
    TAA is not perfect, but neither is DLSS.
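    For illustration only, the plain "upscaler" half of that description could look like the bilinear kernel below (a hypothetical CUDA sketch; DLSS replaces this naive interpolation with a trained network's guess at the missing pixels, and its real pipeline is not public):

        // Naive bilinear upscale of a single-channel image,
        // e.g. 2560x1440 -> 3840x2160. One thread per output pixel.
        __global__ void upscale_bilinear(const float *src, int sw, int sh,
                                         float *dst, int dw, int dh) {
            int x = blockIdx.x * blockDim.x + threadIdx.x;
            int y = blockIdx.y * blockDim.y + threadIdx.y;
            if (x >= dw || y >= dh) return;

            // Map the output pixel back into source coordinates.
            float fx = x * (float)sw / dw, fy = y * (float)sh / dh;
            int x0 = (int)fx, y0 = (int)fy;
            int x1 = min(x0 + 1, sw - 1), y1 = min(y0 + 1, sh - 1);
            float tx = fx - x0, ty = fy - y0;

            // Blend the four nearest source pixels.
            float top = src[y0 * sw + x0] * (1 - tx) + src[y0 * sw + x1] * tx;
            float bot = src[y1 * sw + x0] * (1 - tx) + src[y1 * sw + x1] * tx;
            dst[y * dw + x] = top * (1 - ty) + bot * ty;
        }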

    A better option here is SMAA, but that comes with a performance hit.
     
