Review: AMD Radeon RX 5700 and 5700 XT

Discussion in 'Frontpage news' started by Hilbert Hagedoorn, Jul 7, 2019.

  1. Fox2232

    Fox2232 Guest

    Messages:
    11,808
    Likes Received:
    3,371
    GPU:
    6900XT+AW@240Hz
    I've even read somewhere that while RIS is a driver-level implementation of some kind of sharpening, there is another, similar thing called FidelityFX.
    And that one is implemented via AMD's GPUOpen. (Does not look like a lock to me. And it will work on all capable GPUs, including nVidia's.)
     
  2. yasamoka

    yasamoka Ancient Guru

    Messages:
    4,875
    Likes Received:
    259
    GPU:
    Zotac RTX 3090
    Please explain to all of us what sort of magical specialized hardware this needs to run in the first place. Like literally everything else on a graphics card, the set of algorithms behind this can be run on regular shaders. Sure, it can be slower. But saying a graphical effect cannot be back-ported to older cards means you have absolutely no idea why we use specialized hardware in the first place (hint: it's not to enable - it's to accelerate).

    You can do ray tracing on an Arduino if your scene is small enough. It would take ages, but that doesn't mean there's something magical in your graphics card that enables this feature. You could even cast rays using pen and paper if you like. These are compute workloads, there's nothing magical about them.
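
    If it helps to see just how mundane the arithmetic is, here's a toy ray-sphere intersection in Python - my own sketch, nothing from any driver or SDK - which is the same math a hardware ray tracer merely does faster:

        # Toy ray-sphere intersection: the core arithmetic of ray casting,
        # runnable on any CPU (or, very slowly, an Arduino).
        import math

        def ray_sphere_hit(origin, direction, center, radius):
            # Solve |origin + t*direction - center|^2 = radius^2 for t.
            oc = [o - c for o, c in zip(origin, center)]
            a = sum(d * d for d in direction)
            b = 2.0 * sum(o * d for o, d in zip(oc, direction))
            c = sum(o * o for o in oc) - radius * radius
            disc = b * b - 4.0 * a * c
            if disc < 0.0:
                return None                       # ray misses the sphere
            t = (-b - math.sqrt(disc)) / (2.0 * a)
            return t if t > 0.0 else None         # nearest hit in front of the origin

        # One ray down the z-axis against a unit sphere 5 units away -> hits at t = 4.
        print(ray_sphere_hit((0, 0, 0), (0, 0, 1), (0, 0, 5), 1.0))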
     
  3. Astyanax

    Astyanax Ancient Guru

    Messages:
    17,037
    Likes Received:
    7,378
    GPU:
    GTX 1080ti
    looks like basic luma sharpen to me, not heavy at all and can be done in a shader.
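
    something along these lines - a rough numpy sketch of a plain luma unsharp mask, CPU-side and just for illustration, definitely not AMD's actual shader:

        # Rough sketch of a plain luma unsharp mask (3x3 box blur on the luma
        # channel, detail added back). Illustration only, not AMD's shader.
        import numpy as np

        def luma_sharpen(rgb, amount=0.5):
            # rgb: float array of shape (H, W, 3) with values in [0, 1]
            luma = 0.2126 * rgb[..., 0] + 0.7152 * rgb[..., 1] + 0.0722 * rgb[..., 2]
            padded = np.pad(luma, 1, mode="edge")
            h, w = luma.shape
            blur = sum(padded[y:y + h, x:x + w] for y in range(3) for x in range(3)) / 9.0
            detail = luma - blur                   # high-frequency part of the luma
            scale = np.where(luma > 1e-6, (luma + amount * detail) / luma, 1.0)
            return np.clip(rgb * scale[..., None], 0.0, 1.0)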
     
  4. Undying

    Undying Ancient Guru

    Messages:
    25,477
    Likes Received:
    12,883
    GPU:
    XFX RX6800XT 16GB
    I've been using luma and adaptive sharpen for years in ReShade and it does not look like this.

    This looks better, like it has more detail. The image looks cleaner.
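
    For anyone wondering what "adaptive" buys you over a plain unsharp mask, here is a rough numpy sketch loosely in the spirit of AMD's published CAS effect on GPUOpen - not the real shader, the point is only that the sharpening strength backs off where local contrast is already high, which is why it rings less and looks cleaner:

        # Simplified contrast-adaptive sharpen, loosely in the spirit of AMD's
        # published CAS effect on GPUOpen. Not the real shader - the per-pixel
        # weight shrinks where the local neighbourhood already has lots of
        # contrast, so strong edges don't over-ring.
        import numpy as np

        def adaptive_sharpen(gray, sharpness=0.8):
            # gray: float array of shape (H, W) with values in [0, 1]
            p = np.pad(gray, 1, mode="edge")
            up, down = p[:-2, 1:-1], p[2:, 1:-1]
            left, right = p[1:-1, :-2], p[1:-1, 2:]
            c = gray
            mn = np.minimum.reduce([up, down, left, right, c])
            mx = np.maximum.reduce([up, down, left, right, c])
            # headroom to the nearest clip point, relative to the local max
            amp = np.sqrt(np.clip(np.minimum(mn, 1.0 - mx) / np.maximum(mx, 1e-6), 0.0, 1.0))
            w = amp * (-0.125 - 0.075 * sharpness)   # negative ring weight, adaptive
            return np.clip((w * (up + down + left + right) + c) / (4.0 * w + 1.0), 0.0, 1.0)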
     

  5. Evildead666

    Evildead666 Guest

    Messages:
    1,309
    Likes Received:
    277
    GPU:
    Vega64/EKWB/Noctua
    I have read about FidelityFX too; I'll have to read up on it some more.
     
  6. Evildead666

    Evildead666 Guest

    Messages:
    1,309
    Likes Received:
    277
    GPU:
    Vega64/EKWB/Noctua
    The two key words I was replying to were "feature lock", which you completely missed in your reply.
    PhysX is a good example of a feature lock, especially after Nvidia bought it.
     
  7. Astyanax

    Astyanax Ancient Guru

    Messages:
    17,037
    Likes Received:
    7,378
    GPU:
    GTX 1080ti
    could be, superscaling and adaptive mipmapping. that's doable in a programmable shader too.
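
    for what it's worth, standard mip selection is just a log2 of how fast the texture coordinates change per pixel, so an adaptive variant really is shader territory. toy version below (nobody's vendor code, just the textbook rule):

        # Toy mip-level selection: the textbook log2-of-footprint rule used to
        # pick a mipmap from how fast the texture coordinates change per pixel.
        import math

        def mip_level(dudx, dvdx, dudy, dvdy, max_level):
            # derivatives are assumed to already be scaled to texel units
            rho = max(math.hypot(dudx, dvdx), math.hypot(dudy, dvdy))
            lod = math.log2(max(rho, 1e-6))
            return min(max(lod, 0.0), max_level)

        # A texture minified 4x in both directions lands on mip 2.
        print(mip_level(4.0, 0.0, 0.0, 4.0, max_level=10))   # -> 2.0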
     
  8. Picolete

    Picolete Master Guru

    Messages:
    494
    Likes Received:
    261
    GPU:
    Sapphire Pulse 6800
    I want to know if it works in videos too, and not only games
     
  9. yasamoka

    yasamoka Ancient Guru

    Messages:
    4,875
    Likes Received:
    259
    GPU:
    Zotac RTX 3090
    Yes, of course, it's not like you actually said more:
    As for feature lock, your second statement justifies the first, and that is exactly what I'm addressing: a feature lock to the newer cards, justified by allegedly specialized hardware. Read more carefully. Do you understand what you're saying at this point?

    I despise the hypocrisy of every AMD fanboy here figuring it's normal that this feature might not make it to older cards because of some rubbish excuse that specialized hardware is needed. I'm not saying it's not making it to older hardware - I'm merely saying that you guys finding it okay just because it's AMD doing it is stupid.
     
  10. Fox2232

    Fox2232 Guest

    Messages:
    11,808
    Likes Received:
    3,371
    GPU:
    6900XT+AW@240Hz
    Interesting hypocrisy it is, indeed. Especially since you did happen to "miss" my reply stating that a "software"-enabled variant will be available even to nVidia...
    And you decided not to give me details on why you think I am letting AMD slide while I "bashed nVidia for some feature lock against their older generations."
    (I am really interested in seeing what those intentionally limited features are. Really.)


    Instead you go and look for a place where you can bash someone whose reply did not perfectly block your accusations. What is there to gain from it? You made multiple accusations, yet you could not (or did not want to) put any data behind any of them.
     

  11. yasamoka

    yasamoka Ancient Guru

    Messages:
    4,875
    Likes Received:
    259
    GPU:
    Zotac RTX 3090
    I ignored your previous comment precisely because I was not addressing whether you or I think the feature will make it to older cards or not. I was addressing, quite clearly, your hesitation to condemn any potential feature lock when you considered that AMD may simply have specialized hardware that deals with the sharpening. Sure, if you think the feature is coming to older cards or is available to Nvidia, that's great, but that was not my point. My point was that even when Nvidia delivers some technology that does require specialized hardware to perform properly, you start telling us what Nvidia engineers should have done instead, as if you know any better.

    I'm not arguing data, I'm not arguing whether AMD is bringing that feature to older cards or not, I'm not the one who's busy prodding unknown hardware and discussing specialized hardware that might be or not be there. I don't have that time to waste. I'm pointing out the double standards here when it comes to lockdown from AMD vs. from Nvidia and how these double standards can only possibly mean one thing - unrelenting and irrational bias.

    It's also funny how you ask for data when you deliver baseless conjecture like this:
    Where is your data, honey?

    AMD has more on its plate in the GPU sector than adding graphical features that require specialized hardware. That would be quite the lack of focus. The company is having trouble matching Turing's efficiency without going down to 7nm. Care to guess what happens when Nvidia moves to 7nm? There are far more pressing issues here.

    The forums have recently been flooded with pointless discussions regarding technologies that the vast majority of those discussing them do not understand and would swiftly condemn only because they emerge from one company and not the other. Now that Navi has been released, the praise starts - regardless of any potential issues that AMD might be at fault for.
     
    Last edited: Jul 10, 2019
  12. moo100times

    moo100times Master Guru

    Messages:
    577
    Likes Received:
    330
    GPU:
    295x2 @ stock
    Murdered by words
     
    Undying likes this.
  13. Fox2232

    Fox2232 Guest

    Messages:
    11,808
    Likes Received:
    3,371
    GPU:
    6900XT+AW@240Hz
    And I did ask you to quote me for reference. I want to see the exact wording I used, not your recollection.
    Am I biased? Really? For example, I defended DLSS until it managed to consistently fail in every implementation. And even then I wrote that I expect nVidia to rework it into something that actually does work better than their currently available examples.
    And I even wrote that while some parts of the images looked rather bad, there were some details antialiased so well that I have not seen better. And I did the same for nVidia's other Turing features, except ray tracing, for which it is too early due to weak HW.
    (Which is something even some people with a 2080 Ti could confirm.)

    So, for the 4th time already: show me how I was evil and jumped on nVidia for not enabling a HW-based feature of the new generation on older HW. Thank you. Or stop repeating it.
     
  14. Fox2232

    Fox2232 Guest

    Messages:
    11,808
    Likes Received:
    3,371
    GPU:
    6900XT+AW@240Hz
    So, that means you have not seen the Navi release conference where it was stated. Go watch it. Maybe it will shut you up.

    I have no objective reason to bash AMD for only just enabling something new on a buggy release driver for Navi, and for not enabling it on every single older card, whatever performance cost that may inflict depending on whether it uses FP16 or not. Or whatever else.

    Because AMD introduced FidelityFX. Its existence makes all your ranting about a HW-locked feature in this thread completely irrelevant, and done either from a lack of awareness or purely for the sake of ranting.
     
    Last edited: Jul 10, 2019
  15. ladcrooks

    ladcrooks Guest

    Messages:
    369
    Likes Received:
    66
    GPU:
    DECIDING
    Thanks for your replies - I understand a bit better now ;)
     

  16. __hollywood|meo

    __hollywood|meo Ancient Guru

    Messages:
    2,991
    Likes Received:
    139
    GPU:
    6700xt @2.7ghz
    no. anisotropic filtering adjusts how a texture is sampled based upon the oblique viewing angle, & thus the mipmap distortion. think about it this way: 16x handles texel footprints stretched twice as far as 8x, which handles twice as far as 4x, etc etc. how many pixels on screen actually sit at angles that steep?? the answer is veeeeery few, so unfortunately, higher levels of AF like 32x/64x wont do much to improve perceptible image quality in theory. thats why theyve never been implemented - they would only improve image fidelity of a tiny percentage of the screen (little spot here, a rock over there) due to the threshold set by the filter for higher angles relative to the player/camera POV.
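
    to put rough numbers on that, assuming the simple 1/cos(theta) footprint-stretch idealization (not how any particular gpu implements AF):

        # Under a 1/cos(theta) stretch model, the tilt angle at which each
        # anisotropy ratio starts to matter:
        import math

        for ratio in (2, 4, 8, 16, 32, 64):
            angle = math.degrees(math.acos(1.0 / ratio))
            print(f"{ratio:2d}:1 needs roughly {angle:.1f} degrees of tilt")

        # -> ~60.0, 75.5, 82.8, 86.4, 88.2, 89.1 degrees: each doubling of the
        #    AF level only buys a degree or three of extra coverage near
        #    grazing angles.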

    the takeaway is that this isnt a higher level of AF. the performance hit is next to nothing so its likely that its just an advanced, intelligent sharpen shader rather than downsampling, but i dont know for sure
    i know better, i can chip in with authority here :D nvidia engineers shouldnt have been directed by C-level execs to design new technology hinging on black box proprietary hardware. proper technology follows the innovation of the engineers, but nvidia instead shapes the products around marketing. "but theyre a business!" you say. well, great. jensen huang can buy himself another leather jacket. that facile reasoning is why we had physx add-in cards at one time. thats why we currently have RTX with AI cores that were originally designed for their titan line for scientific computation (which ironically throw rounding errors reproducibly in a lab setting - ask me how i know!)

    how big is the RTX die again? how much faster would it be if they took the surface area dedicated to their marketing gimmick silicon & instead just expanded the actual chip?? if they want to develop new forms of AA or AI/dynamic downsampling or implement crude one-ray-sample-&-dithered-reflection raytracing by brute force then fine! great! do it in software. write a sophisticated algorithm & leverage developing hardware in tandem, the correct way. if its incompatible with certain hardware due to the nature of the code, then thats the price of progress. if its incompatible with certain hardware because the marketing department & the CEO are designing it that way.........

    anyway my point is that i bet you twenty bucks that once the ATI division stabilizes their ridiculously bad release drivers, they will try to get the filtering working on previous gen cards due to consumer demand
     
    carnivore and Fox2232 like this.
  17. Denial

    Denial Ancient Guru

    Messages:
    14,207
    Likes Received:
    4,121
    GPU:
    EVGA RTX 3080
    I'm not really sure why this is a bad thing. This strategy grew Nvidia to 80% of the GPU market. It also created a bunch of key technologies that competitors were forced to create open alternatives to. Would I have preferred Nvidia go straight open from the start? Sure, why not - but they didn't, and regardless, they still forced innovation.

    How do you know?

    Probably not, because the limit of general performance is power consumption, not die size. Even in normal games with no RT/Tensor use, the 2080 Ti is hitting 300 W.
     
    Last edited: Jul 10, 2019
  18. MasterBash

    MasterBash Guest

    Messages:
    819
    Likes Received:
    18
    GPU:
    EVGA GTX970 SSC+
    I was interested in buying a 2080 Super, but looking at the prices of the 20xx cards, there is no way I am going to give my money to Nvidia, and I am just going to settle for a 5700 XT. The reviews of Anti-Lag seem to indicate that it's working. That's cool.
     
  19. JulianBr

    JulianBr Member

    Messages:
    44
    Likes Received:
    5
    GPU:
    RX 5700 XT
    Yeah, I know, but the RTX 2080 Ti was not worth the price. I enjoyed it for the months I had it, but I would rather have an RX 5700 XT or RTX 2070 Super until next year. Selling the card gave me a brand-new rig, so I'm happy. Next year, maybe AMD and Nvidia will have a greater fight on price vs. performance, and maybe even Intel will shake things up a little with their GPUs, who knows.
     
  20. Astyanax

    Astyanax Ancient Guru

    Messages:
    17,037
    Likes Received:
    7,378
    GPU:
    GTX 1080ti
    enjoy your downgraded overall experience.
     
    warlord likes this.
