AMD Teases FidelityFX Super Resolution 3.0 at GDC 2023: What You Need to Know

Discussion in 'Frontpage news' started by Hilbert Hagedoorn, Mar 25, 2023.

  1. Hilbert Hagedoorn

    Hilbert Hagedoorn Don Vito Corleone Staff Member

    Messages:
    46,372
    Likes Received:
    14,219
    GPU:
    AMD | NVIDIA
    Embra, Maddness and fantaskarsef like this.
  2. reslore

    reslore Active Member

    Messages:
    79
    Likes Received:
    60
    GPU:
    6800 XT
    So they're making fake interpolated frames like Nvidia. Probably more blur on top of the upscaling blur. I am not interested in fake frames that give motion sickness and strange artifacts.
     
  3. Kaarme

    Kaarme Ancient Guru

    Messages:
    3,405
    Likes Received:
    2,255
    GPU:
    Nvidia 4070 FE
Yeah, I'm also not interested in seeing them, but in the bigger picture it could be an unavoidable development. Although MCM should help things a lot, the end of the rope could still be approaching. Process nodes won't keep shrinking much longer, due to the laws of physics. Power consumption can't keep rising. Prices are already up there, although I suppose they will still increase more. So, where does further performance development come from? Frame generation seems like the easiest answer, especially if the whole game scene doesn't change completely and escape into the cloud, which I'd dislike far more than frame generation.

    Before we have a real revolution, like optical computers or whatever, it's going to be all about squeezing the last drops out of the existing tech.
     
    schmidtbag likes this.
  4. mikeysg

    mikeysg Ancient Guru

    Messages:
    3,206
    Likes Received:
    702
    GPU:
    MERC310 RX 7900 XTX
I've not used FSR in any of the games I play on my main rig, but on my 2nd rig, with the RX 6900 XT hooked up to a 4K TV, FSR would inevitably be needed. I played DSR with FSR set to 'Quality' and it does help, bringing the framerate up to a playable 55fps and higher (I'll have to check whether I had RTAO enabled). For a game like DSR, even mid-40fps is playable, and since I don't pixel peep like Steve of DF, it looks pretty dang good!

    So, understandably, I'm curious as to what FSR3 brings to the table in terms of PQ and framerate, and I hope it runs well on the RX 6000/5000 series of AMD cards at the very least.
     

  5. Neo Cyrus

    Neo Cyrus Ancient Guru

    Messages:
    10,562
    Likes Received:
    1,231
    GPU:
    MSI RTX 4090 Trio
nVidia's latest implementation so far is... usable; I was pleasantly surprised. I expected it to be a dumpster fire in terms of input delay and artifacts, but as it is, I give it a pass, at least in CP2077. Make no mistake, they're still fake frames and have all the issues you've heard of. The question is, will AMD's implementation match that minimum level of quality that makes it worth using in some scenarios?

Honestly, I don't know why AMD even bothers with consumer graphics at all; it's clear the shareholders don't see it as a good avenue for profit, so it's been absolutely dogshit for years now. They didn't make nVidia compete on price at any price point; they just basically matched Jensen's "eat crap and die" prices. It's pretty much the same result as an absolute monopoly. I was huffing down all the hopium and copium, just praying that Intel might make a dent, but Raja Raja'd the world again.

    Yet at this point, I still have more hope for Intel's borderline-nonexistent (and possibly dead) graphics division making a difference resulting in less insane prices, than I do for AMD.
     
    cucaulay malkin and Krizby like this.
  6. wavetrex

    wavetrex Ancient Guru

    Messages:
    2,030
    Likes Received:
    2,043
    GPU:
    ASUS TUF RX 6800 XT
    nVidia: You need to buy our fancy new ADA 4000 series GPUs to get fake frames :cool:
    AMD: We have fake frames too, but our fake frames work on Ryzen 3 2200G :D
     
    olymind1 and CPC_RedDawn like this.
  7. pegasus1

    pegasus1 Ancient Guru

    Messages:
    3,546
    Likes Received:
    2,020
    GPU:
    TUF 4090
    Option 1 - Buy a top end card and play 4k max eyecandy without this technology and without compromise.
    or
    Option 2 - Buy a mid range card and play 4k max eyecandy with this technology and with some compromise.
Or sit in your mum's basement eating crisps and tell multi-billion-dollar tech companies they are doing it all wrong.
    #justsaying
     
    Embra, Maddness and GoldenTiger like this.
  8. GoldenTiger

    GoldenTiger Master Guru

    Messages:
    254
    Likes Received:
    78
Nvidia's DLSS 3 frame generation works really well, by the user reports I've read, so long as you start with 60 or so fps. AMD, as usual, copies the innovation and is way late with much less adoption.

They said H1 2023 when they launched their 7900 XT/7900 XTX, but here we are in late March and it's "too early to show"? Bleh.

DLSS 3 is already confirmed for a ton of titles with more to be announced. Is this going to be the same as with DLSS 2, where FSR adoption lags far behind Nvidia?
     
  9. cucaulay malkin

    cucaulay malkin Ancient Guru

    Messages:
    8,196
    Likes Received:
    4,775
    GPU:
    6800 Fighter 2.5GHz
I can't see them confirming anywhere how many real frames are needed to generate one interpolated frame, let alone that this will work on Vega.
     
  10. cucaulay malkin

    cucaulay malkin Ancient Guru

    Messages:
    8,196
    Likes Received:
    4,775
    GPU:
    6800 Fighter 2.5GHz
You don't?
Is selling 900-1100 EUR cards as "value alternatives" not profitable enough to keep doing that?

And the RX 6800s were great cards from a consumer standpoint, and still are; I just got a 6800 Fighter for 400 EUR (not new, lightly used, though we'll see once it arrives and I take it apart). They just got greedy with RDNA3: they renamed a cut-down N31 as the 7900 XT and put a 250 EUR premium on a 6800 XT price tag, despite clearly missing those +50% performance/watt targets they teased.
     
    Last edited: Mar 26, 2023

  11. pegasus1

    pegasus1 Ancient Guru

    Messages:
    3,546
    Likes Received:
    2,020
    GPU:
    TUF 4090
I've never used any of this tech, to be honest. The most graphically demanding game I currently play is Metro EE, and at 4K with full RT and max settings it really flies along. Even CP2077 does 60fps according to benchmarks, so until something more demanding comes along I'll not be using it.
     
  12. Kaarme

    Kaarme Ancient Guru

    Messages:
    3,405
    Likes Received:
    2,255
    GPU:
    Nvidia 4070 FE
Haha, multi-billion tech companies are making mistakes all the time. That's because there's no positronic brain making the decisions, just ordinary humans. Things can go especially bad when it's not engineers making the decisions at tech companies but businessmen instead.

How many thousands of people were telling Intel they were making a mistake during the 4-core-maximum decade of degeneration? Yet Intel did nothing until AMD yanked the rug out from under their feet with the Zen MCM tech. Intel still hasn't recovered completely, because ever since the 7000 generation it has been forced to compete by factory overclocking its CPUs. Only now is Intel getting its MCM pro CPUs out, but by the looks of it, they won't yet be in the same weight class as AMD's offerings.

So, yeah, maybe those multi-billion companies actually should, occasionally, listen to the crisp-eaters sitting in your mum's basement. #justeatingcrisps
     
    H83 likes this.
  13. schmidtbag

    schmidtbag Ancient Guru

    Messages:
    7,432
    Likes Received:
    3,843
    GPU:
    HIS R9 290
I think frame generation is fine in some cases; it just depends on the game. The same goes for supersampling, playing a game at 60FPS rather than 240FPS, playing at 1080p with AA vs 4K, and so on. When you have to make sacrifices, there is no one-size-fits-all solution for a better experience. Obviously, if we all had our way, we'd be playing everything at native 4K+ with AA and 300FPS of genuine frames.


I predict something like stackable GPU cores. The high frequencies we see today would have to be lowered for each additional layer, but being able to effectively double the compute power per square millimeter is a big deal. I presume this would be easier to implement than a chiplet design, and in fact it might work even better, since data has less distance to travel. It would be more expensive to manufacture, but once facilities are equipped to do this regularly, costs would likely come down.

Otherwise, I predict things will turn out like in the old days, where people simply have to learn how to stop being so lazy about coding. While I have griped many times here in the past about optimizing software to have a smaller disk and memory footprint, devs have also been super lazy about writing code to use fewer cycles. Sometimes performance losses come from things as simple as using a char variable/field to store an integer, or as complex as utilizing more hardware instructions. Usually, though, it's just making code simpler, like doing 1+1+1+1=4 rather than just 1*4=4. Obviously that's a pretty stupid example, but it demonstrates that there are multiple ways to skin a cat, and many of them are a lot worse than others. There have been times I've reduced code to 1/4 of its original size while retaining the exact same functionality. You may ask "how do you know this is a prevalent issue?" and the answer speaks for itself: if software has a lot of bugs/glitches, hacky hotfixes, memory leaks, or any warnings while compiling, those developers couldn't possibly have spent enough time writing efficient code.
I guess it's worth pointing out that you can have rock-solid software that is very inefficient, but it's not possible to have unstable software that is very efficient. There's a lot of unstable software out there.
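The "simpler code, fewer cycles" point can be shown with a toy sketch (my own illustration, not from the post; the function names are made up): computing 1+2+...+n by repeated addition versus the closed-form formula, which gets the same answer in constant time.

```python
# Toy illustration of "making code simpler": two ways to compute 1+2+...+n.

def sum_slow(n):
    # Adds one term at a time: n additions, O(n) work.
    total = 0
    for i in range(1, n + 1):
        total += i
    return total

def sum_fast(n):
    # Closed form n*(n+1)/2: the same result in constant time.
    return n * (n + 1) // 2

print(sum_slow(10_000) == sum_fast(10_000) == 50_005_000)  # True
```

Same functionality, a fraction of the work; the kind of rewrite the post is describing, just at a larger scale.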
     
    Kaarme likes this.
  14. CPC_RedDawn

    CPC_RedDawn Ancient Guru

    Messages:
    9,920
    Likes Received:
    2,460
    GPU:
    PNY RTX4090
I hate FG, but it doesn't give me motion sickness, at least. It's the latency that is the issue, and the artefacts, which are super noticeable.

On the other hand, Nvidia takes pre-existing technology, creates their own branded version, and then locks it behind their walled garden, and it costs a premium to enter that garden. Then after a few months/years they abandon it as it eventually becomes open thanks to other companies.

How well is G-Sync going now? PhysX? GameWorks? HairWorks?

    Also:

AMD: 25,000 employees (2022)
Nvidia: 26,196 employees (2023)

This is the WHOLE company. Nvidia makes GPUs; AMD makes much more.

Then factor in this:

Nvidia: net income US$4.368 billion (2023)
AMD: net income US$1.32 billion (2022)

If anything, it's a miracle that AMD can compete at any level.
     
  15. TimmyP

    TimmyP Maha Guru

    Messages:
    1,320
    Likes Received:
    245
    GPU:
    RTX 3070
Extrapolation. If it were interpolation, frames would double in every case, it would look like ****, and it would have been incorporated a long time ago.
     

  16. wavetrex

    wavetrex Ancient Guru

    Messages:
    2,030
    Likes Received:
    2,043
    GPU:
    ASUS TUF RX 6800 XT
Just so everyone here knows, this has existed for a while: https://nmkd.itch.io/flowframes
And in the most recent versions it's really, really good; the glitches are minimal and usually only occur when fixed text or logos are displayed over the video.

AMD might use a similar algorithm, but with the game's motion vectors instead of deriving motion vectors by looking at several video frames, and they don't need to worry about sharp text glitching, since the UI is added after the AI interpolation.

Ah yeah, and it works with Vulkan, so it's GPU agnostic.
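For anyone wondering what the crudest possible form of this is: below is a minimal toy sketch (my own, not Flowframes' actual algorithm, which uses learned optical flow). It just averages two neighboring grayscale frames per pixel to fake a "halfway" frame; real interpolators warp pixels along estimated motion vectors instead, which is exactly why overlaid static text and logos are the failure case.

```python
# Toy midpoint "frame generation": per-pixel average of two grayscale frames.
# Real video interpolators estimate motion vectors (optical flow) and warp
# pixels along them; a plain blend like this just ghosts anything that moves.

def blend_midframe(frame_a, frame_b):
    """Return the halfway frame between two frames (lists of pixel rows)."""
    return [
        [(a + b) // 2 for a, b in zip(row_a, row_b)]
        for row_a, row_b in zip(frame_a, frame_b)
    ]

f0 = [[0, 0], [100, 200]]   # frame at time t
f1 = [[100, 50], [100, 0]]  # frame at time t+1
print(blend_midframe(f0, f1))  # [[50, 25], [100, 100]]
```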
     
  17. fellix

    fellix Master Guru

    Messages:
    222
    Likes Received:
    50
    GPU:
    MSI RTX 4080
These types of interpolation methods are not timing-critical, and thus not suited for latency-sensitive applications, i.e. games.
The tight integration of DLSS into the rendering pipeline also gives it access to depth buffer data, letting it suppress interpolation artefacts above and beyond what any pure post-process algorithm can achieve.
     
  18. wavetrex

    wavetrex Ancient Guru

    Messages:
    2,030
    Likes Received:
    2,043
    GPU:
    ASUS TUF RX 6800 XT
    FSR 2.x already has access to the rendering pipeline, and most likely 3.x will as well.

    I for one can't wait to try it on my "outdated" 6800 XT and see what this game frame generation fuss is all about.
     
  19. Undying

    Undying Ancient Guru

    Messages:
    22,310
    Likes Received:
    10,264
    GPU:
    Devil RX6750XT 12GB
    It will bring a new life into rdna2 cards. Who needs an nvidia 40 series :p
     
  20. H83

    H83 Ancient Guru

    Messages:
    4,826
    Likes Received:
    2,331
    GPU:
    XFX Black 6950XT
It seems AMD and Intel have no option but to copy Nvidia's features, even if some of them don't make much sense...


    Everyone, kneel before the true power of marketing!!!:(
     
