AMD's FSR 3 to remain Exclusive to Company's Graphics Cards

Discussion in 'Frontpage news' started by Hilbert Hagedoorn, May 24, 2023.

  1. Hilbert Hagedoorn

    Hilbert Hagedoorn Don Vito Corleone Staff Member

    Messages:
    48,392
    Likes Received:
    18,564
    GPU:
    AMD | NVIDIA
  2. cucaulay malkin

    cucaulay malkin Ancient Guru

    Messages:
    9,236
    Likes Received:
    5,208
    GPU:
    AD102/Navi21
    When will this finally see even a 10-second snippet?
     
  3. fantaskarsef

    fantaskarsef Ancient Guru

    Messages:
    15,696
    Likes Received:
    9,574
    GPU:
    4090@H2O
    Well, I'm curious to see how 4 interpolated frames instead of 1 deal with ghosting (which at times is a major issue with all of these techniques).
     
    chispy and Embra like this.
  4. nevcairiel

    nevcairiel Master Guru

    Messages:
    875
    Likes Received:
    369
    GPU:
    4090
    Much worse than only using 1 frame, that's for sure. Any misprediction will be far more obvious when only 1/5th of the frames are "real", rather than half of them.
     
    chispy likes this.

  5. H83

    H83 Ancient Guru

    Messages:
    5,465
    Likes Received:
    3,003
    GPU:
    XFX Black 6950XT
    More fake frames for everyone!

    Yummy...
     
  6. Krizby

    Krizby Ancient Guru

    Messages:
    3,067
    Likes Received:
    1,743
    GPU:
    Asus RTX 4090 TUF
    Can't wait for AMD to block out DLSS3/XeSS in more games
     
    GoldenTiger likes this.
  7. fantaskarsef

    fantaskarsef Ancient Guru

    Messages:
    15,696
    Likes Received:
    9,574
    GPU:
    4090@H2O
    I suspect the same. Effectively, if you have 1 extra frame after every originally rendered one, that's already at least 50% "interpolated" content/information, right? Imagine this at, say, 4/5 or 80% (!) interpolated information...
    I could imagine they are using 1 generated frame at low natively rendered FPS, but ultimately, if you're rendering 100 fps, they could easily make it 400 fps with interpolated ones, because at that rate nobody notices a single frame anyway. Maybe that's what they're trying to get at?
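    Putting rough numbers on that (my own back-of-the-envelope arithmetic, not anything AMD has published): with n generated frames per rendered frame, the generated share is n/(n+1), so 1:1 gives 50% and 4 per real frame gives 80%; note that 400 fps from 100 rendered fps actually corresponds to 3 generated frames per real one.

    ```python
    # Share of displayed frames that are generated, for n generated
    # frames inserted per rendered ("real") frame.
    def generated_share(n: int) -> float:
        return n / (n + 1)

    rendered_fps = 100
    for n in (1, 2, 3, 4):
        displayed = rendered_fps * (n + 1)
        print(f"{n} generated per real frame -> {generated_share(n):.0%} generated, "
              f"{rendered_fps} fps rendered becomes {displayed} fps displayed")
    # 1 -> 50%, 200 fps; 3 -> 75%, 400 fps; 4 -> 80%, 500 fps
    ```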
     
  8. Undying

    Undying Ancient Guru

    Messages:
    25,341
    Likes Received:
    12,753
    GPU:
    XFX RX6800XT 16GB
    I'm glad FSR3 will be AMD cards only.
     
    alanm, Maddness, CPC_RedDawn and 6 others like this.
  9. pharma

    pharma Ancient Guru

    Messages:
    2,485
    Likes Received:
    1,180
    GPU:
    Asus Strix GTX 1080
    I am curious as well, though I don't think ghosting will be the primary issue. With more fake frames (not AI-generated), you would think the result drifts further from the developer's artistic intent, and it raises the question of why only up to 4 frames and not more.

    Edit: Second thoughts about ghosting... someone mentioned driver-side frame generation might mean no motion vectors in the equation, though maybe FSR3 has a way to handle this.
     
    Last edited: May 24, 2023
  10. moo100times

    moo100times Master Guru

    Messages:
    566
    Likes Received:
    323
    GPU:
    295x2 @ stock
    What goes around comes around
     
    CPC_RedDawn likes this.

  11. AuerX

    AuerX Ancient Guru

    Messages:
    2,537
    Likes Received:
    2,332
    GPU:
    Militech Apogee
  12. heffeque

    heffeque Ancient Guru

    Messages:
    4,422
    Likes Received:
    205
    GPU:
    nVidia MX150
    I think that's the gist of it.
    If your machine can only do 20 fps, then you have the option of:
    - Adding 10 more fps: it will look a bit less chunky (though VRR should also help).
    - Adding 20 more fps: it will look better still; the number of "real" frames stays at 20.
    - Adding 80 more fps: it will be even smoother, while the 20 "real" frames are maintained.

    I think the basic requirement is that the "real" frames don't get reduced, and that the "created" ones actually "interact" with user input, so as to get rid of the laggy/sticky feeling.

    PS: I'm assuming that the AI processor that comes with the newest APUs, such as the 7940HS, will help FSR3 do its job better.
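    Putting numbers on those three options (plain arithmetic, nothing FSR3-specific), the gain is in frame time: each step shrinks how long every frame sits on screen, while the real frame count never drops below 20.

    ```python
    # Frame time for each of the three options above, 20 fps base.
    base_fps = 20
    for added in (10, 20, 80):
        total = base_fps + added
        print(f"{base_fps} real + {added} generated = {total} fps, "
              f"one frame every {1000 / total:.1f} ms "
              f"(vs {1000 / base_fps:.0f} ms without generation)")
    # 30 fps -> 33.3 ms, 40 fps -> 25.0 ms, 100 fps -> 10.0 ms
    ```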
     
  13. schmidtbag

    schmidtbag Ancient Guru

    Messages:
    7,975
    Likes Received:
    4,342
    GPU:
    Asrock 7700XT
    Maybe that's why it's AMD-specific - otherwise, I don't really see why this couldn't be platform-agnostic. Of course, I could be wrong, and considering AMD's answer to DLSS, I'm likely to be wrong.
     
    Embra likes this.
  14. fantaskarsef

    fantaskarsef Ancient Guru

    Messages:
    15,696
    Likes Received:
    9,574
    GPU:
    4090@H2O
    Well, that would make sense, except I have little hope that "we" can actually interact with generated frames... aren't they just interpolated mathematically, whereas interaction (input in FPS games, for instance) would require them to be rendered frames to register e.g. mouse movement or clicks?
     
    Embra likes this.
  15. fantaskarsef

    fantaskarsef Ancient Guru

    Messages:
    15,696
    Likes Received:
    9,574
    GPU:
    4090@H2O
    I honestly have to say I have no issue with it being AMD-only, even if it would technically work on Intel or Nvidia as well. They put the work in (read: paid engineering hours), so they should reap the benefits (read: GPU sales).
    That said, I can't see why it would be technically necessary, unless it relies on deep architectural hooks... which I doubt, tbh, for any of DLSS, FSR or XeSS.
     
    Embra likes this.

  16. Horus-Anhur

    Horus-Anhur Ancient Guru

    Messages:
    8,638
    Likes Received:
    10,689
    GPU:
    RX 6800 XT
    The issue is that the fake frames are generated entirely on the GPU, so there is no input from the player while the GPU is producing those frames.
    That's why DLSS3 has higher input lag than the same frame rate rendered natively.
    If AMD somehow makes it a 1:2 or 1:4 ratio of real to fake frames, it's going to be a slog, and it would feel stuttery to play.
    Not to mention that having fake frames on screen for so long will make artifacts much more obvious.
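    A rough way to see where that lag comes from (my own simplified model, not a measured DLSS3/FSR3 figure): interpolation has to hold back the newest real frame until the generated in-between frames have been shown, so input is delayed by roughly one real frame time, however many frames are inserted.

    ```python
    # Simplified latency model for frame interpolation: the newest real
    # frame is buffered while generated frames are shown in between, so
    # roughly one real frame time of extra input latency is added.
    def added_latency_ms(real_fps: float) -> float:
        return 1000.0 / real_fps  # one real frame held back

    for real_fps in (30, 60, 120):
        print(f"{real_fps} real fps -> ~{added_latency_ms(real_fps):.1f} ms extra "
              f"latency, whether the ratio is 1:1 or 1:4")
    ```

    That also explains the stutter concern: the lower the real frame rate, the longer each buffered real frame delays input.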
     
  17. H83

    H83 Ancient Guru

    Messages:
    5,465
    Likes Received:
    3,003
    GPU:
    XFX Black 6950XT

    With so many BS frames being generated, my 6950XT is going to end up faster than a 4090!...:eek:o_O:D
     
  18. schmidtbag

    schmidtbag Ancient Guru

    Messages:
    7,975
    Likes Received:
    4,342
    GPU:
    Asrock 7700XT
    Well, the article seems to imply the code is still open-source. The notion of reaping benefits from making something proprietary only works if you control a large percentage of the market, which AMD does not. AMD's (and Intel's) focus on open platforms actually yields more benefits for them, because:
    A. In the event they have the superior platform, they can look good on the charts. It's easy for Nvidia to look good with things like CUDA and DLSS when they're just competing against themselves, especially when there is no real competing technology (OpenCL never garnered enough attention, because AMD was lazy and Intel didn't have a GPU powerful enough to warrant optimization).
    B. An open platform can be used by a wider market; if your developer resources are limited and you can only choose one technology, it makes more sense to pick the one that applies to the widest possible audience, even if it isn't the best. Of course, this somewhat backfired for AMD: even though FSR can be adopted by anyone, Intel and Nvidia have no incentive to do so. And since FSR can be enabled without developer effort, it makes more sense for devs to focus on DLSS, which doesn't work with just any game (and because Nvidia controls the majority of the GPU market).
    C. They can mooch off the efforts of others, thereby reducing their own development costs. This has been greatly advantageous for AMD in the Linux world: not only have competitors (like Intel) developed code that AMD uses (granted, it works the other way around too), but Valve has teamed up with AMD to improve their drivers. These are not small changes either; Valve's work has yielded great framerate increases. If AMD weren't open with their drivers, Valve probably wouldn't have teamed up with them to make the Deck.
    Having said all that, if FSR3 requires developers to implement it, we can basically consider it DOA, no matter how good it is.
    I don't think the architecture is so much the issue; rather, it's likely very specific to the rendering process. For example:
    You don't want a HUD or any kind of on-top-of-everything text to be frame-interpolated. If you were to use a generic library to do frame interpolation, it might just interpolate the entire frame (which seems to be what Nvidia's approach does), because each driver has its own way of interpreting the instructions. I'm sure this varies drastically from game to game, but in a lot of cases, all the driver has to do is see which assets are loaded closest to the camera and render those on top of each interpolated frame, or perhaps which assets are rendered with an orthographic projection.
    In other words, by making this work at the driver level, you can more easily control what is or isn't interpolated.
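    A toy sketch of that idea (purely illustrative pseudocode; the function names and the scene/overlay split are my assumptions, not AMD's actual pipeline):

    ```python
    # Toy sketch of HUD-aware frame generation at the driver level.
    # Hypothetical names; the point is that only the 3D scene layer is
    # interpolated, while overlays are composited on top unmodified.

    def present_with_generation(prev_scene, next_scene, hud_layer, n_generated):
        """Yield the previous real frame plus the generated in-betweens."""
        yield composite(prev_scene, hud_layer)        # real frame, HUD on top
        for i in range(1, n_generated + 1):
            t = i / (n_generated + 1)                 # interpolation weight
            scene = interpolate(prev_scene, next_scene, t)
            yield composite(scene, hud_layer)         # HUD never smears

    def interpolate(a, b, t):
        ...  # motion-estimated blend of two scene frames (stub)

    def composite(scene, overlay):
        ...  # draw HUD/text overlay on top of the scene (stub)
    ```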
     
  19. fantaskarsef

    fantaskarsef Ancient Guru

    Messages:
    15,696
    Likes Received:
    9,574
    GPU:
    4090@H2O
    I agree with you that it only works if you have the bigger market share, so devs actually have an incentive to adopt it. The thing is, though, to sell GPUs it's better to have a solution other companies can't use (provided it's technologically superior, or at least on the same level). That's why I think, if they are doing better than Nvidia in that regard, they should try to protect that advantage.

    I like how, these days, outlets like DF do in-depth comparisons of the different methods, since there you can at least see how the tech is implemented in those games. I just think that AMD could make it proprietary tech (only to be used with AMD hardware) and still have devs work on it... I don't see how that contradicts itself.

    B. only comes into consideration if you either have the better tech or offer to take the work off devs' hands... which ends up being what they all do these days. That's also how Nvidia got where it is; Jensen knew that for a long time, and Nvidia has done it for the last 10 years at least. But it's a question of cash to pay for engineering hours to "donate" to devs. You are right, though, that it makes sense to concentrate on the GPUs with the bigger market share. Although, iirc, isn't DLSS just a tick box to activate in UE, for example? If so, again, if both FSR and DLSS just need "activating" with no work required, you need the better tech (image quality, stable frame rate, less ghosting, more FPS gain) to actually score a win and gain an advantage.

    C. I completely agree.

    I agree with you that taking the full frame and interpolating it gives them issues, like we see with HUDs, as you mentioned. DLSS3 has improved somewhat (saw a DF video on that topic), but it did not improve in every game (which suggests it's probably still a game-by-game adaptation). But that's the thing: if the GPU has to look into the frame again and check what's rendered how and where, wouldn't it be more like VRS (variable rate shading) than pure frame interpolation? Not that this couldn't still be a good thing, but it would require an extra calculation step, and thus at least a tiny bit of performance.
     
    Embra likes this.
  20. schmidtbag

    schmidtbag Ancient Guru

    Messages:
    7,975
    Likes Received:
    4,342
    GPU:
    Asrock 7700XT
    Not necessarily - what matters is having a compelling offer. While in some cases that means having a [noteworthy] technology that competitors don't have, it isn't limited to that. In this particular context, Nvidia already has frame interpolation, so AMD isn't really doing anything particularly exclusive, unless of course their implementation is better.
    The problem comes down to convincing devs it is worth their time, even if it is better. No matter how good something like FSR 3 is, it alone won't be enough to convince people to switch. AMD needs to pull off something revolutionary, and do it well, if they expect something proprietary to be a financial success.
    Exactly - and that's where FSR 3 becomes questionable, because it probably isn't going to be better, and AMD tends not to work with game devs directly anywhere near as much as Nvidia does.
    As for DLSS, the AI-generated stuff isn't a tick box, but I think the rest of it is.
    I'm not really an expert on how the drivers or the rendering process work, but I imagine it doesn't have to be too complicated. It's basically just checking whether something's rendering should be deferred until later. Of course, every calculation has an impact on performance, but in this case I imagine it'd be nearly negligible.
     
