Discussion in 'Frontpage news' started by Hilbert Hagedoorn, Jul 7, 2019.
Nvidia's Freestyle does sharpening too - how it differs from this, idk. Some videos/pictures I've seen I like, others I don't, but honestly that's similar to RIS for me. The Strange Brigade screenshot posted here looks great, but the face example they use on the RIS feature website looks like crap in my opinion. Also, in the LTT video they talked about it looking way too overdone in various games and not liking it. In the past using ReShade and Freestyle, I always liked it on big landscape textures, but on smaller objects, human faces, etc. I always feel it doesn't do a good job - it just makes things look fake/cartoonish or something, I can't really explain it.
I'm the guy that sets my TV's sharpness to 0 so maybe I'm just weird lol
You are right, the performance impact is minimal on a 5700. I would very much like to see Polaris and Vega support.
Also I like how it's not developer-dependent but a simple toggle. Better than waiting a year and having it in only four games, right?
Well, even if it costs like 10% performance on older cards, it would be good, because it may bring IQ at 1080p much closer to standard 1440p.
In the end, the race is about having the best performance at the same image quality.
(Or the same performance while having better IQ.)
You need to read up on what feature lock is.
This is new technology, which may not be possible to back-port to older cards.
That's a personal opinion, I guess. I like my games sharp; textures pop even more that way. Maybe I'm weird, idk, hehe.
I've even read somewhere that while RIS is a driver-level implementation of some kind of sharpening, there is another similar thing called FidelityFX.
And that is implemented via AMD's GPUOpen. (Does not look like a lock to me. And it will work on all capable GPUs, including nVidia's.)
Please explain to all of us what sort of magical specialized hardware this needs to run in the first place. Like literally everything else on a graphics card, the set of algorithms behind this can be run on regular shaders. Sure, it can be slower. But saying a graphical effect cannot be back-ported to older cards means you have absolutely no idea why we use specialized hardware in the first place (hint: it's not to enable - it's to accelerate).
You can do ray tracing on an Arduino if your scene is small enough. It would take ages, but that doesn't mean there's something magical in your graphics card that enables this feature. You could even cast rays using pen and paper if you like. These are compute workloads, there's nothing magical about them.
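To make the "pen and paper" point concrete, here is a toy sketch (entirely hypothetical code, not from any driver or engine): a single ray-sphere intersection test is just solving a quadratic, the kind of arithmetic any shader core, CPU, or person with a pencil can do.

```python
import math

def ray_hits_sphere(origin, direction, center, radius):
    """Solve |o + t*d - c|^2 = r^2 for t: an ordinary quadratic,
    the same arithmetic you could do with pen and paper."""
    ox, oy, oz = (origin[i] - center[i] for i in range(3))
    dx, dy, dz = direction
    a = dx * dx + dy * dy + dz * dz
    b = 2.0 * (ox * dx + oy * dy + oz * dz)
    c = ox * ox + oy * oy + oz * oz - radius * radius
    disc = b * b - 4.0 * a * c
    if disc < 0.0:
        return None                     # ray misses the sphere
    t = (-b - math.sqrt(disc)) / (2.0 * a)
    return t if t >= 0.0 else None      # nearest hit in front of the origin

# one "cast": a ray down the z axis at a unit sphere 5 units away hits at t=4
print(ray_hits_sphere((0, 0, 0), (0, 0, 1), (0, 0, 5), 1.0))
```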
Looks like basic luma sharpen to me - not heavy at all, and can be done in a shader.
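For reference, a minimal NumPy sketch of what a basic luma sharpen (an unsharp mask on the luma channel) boils down to - my own toy version with assumed names, not AMD's or ReShade's actual code:

```python
import numpy as np

def luma_sharpen(rgb, strength=0.65):
    """Unsharp mask on luma only: blur, take the difference from the
    original luma, and add that high-frequency detail back, scaled."""
    # rgb is an HxWx3 float array in [0, 1]; Rec. 709 luma weights
    luma = rgb @ np.array([0.2126, 0.7152, 0.0722])
    padded = np.pad(luma, 1, mode="edge")
    h, w = luma.shape
    # cheap 3x3 box blur from nine shifted views of the padded image
    blur = sum(padded[y:y + h, x:x + w]
               for y in range(3) for x in range(3)) / 9.0
    detail = luma - blur
    return np.clip(rgb + strength * detail[..., None], 0.0, 1.0)
```

A fixed strength like this is also exactly why it can overshoot on faces and small objects: the same amount of detail gets added everywhere, hard edges included.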
I've been using luma and adaptive sharpen for years in ReShade, and it does not look like this.
This looks better, like it has more detail. The image looks cleaner.
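That cleaner look is plausibly the adaptivity. Below is a rough NumPy sketch in the spirit of AMD's publicly documented FidelityFX CAS (simplified and unofficial - the constants and names are my approximations, not the shipped shader): the sharpening weight shrinks wherever the local neighbourhood already has high contrast, so strong edges don't get over-driven the way they do with a fixed-strength unsharp mask.

```python
import numpy as np

def cas_like_sharpen(rgb, sharpness=0.5):
    """Contrast-adaptive sharpening sketch: per pixel, scale the
    sharpening down where local contrast is already high."""
    pad = np.pad(rgb, ((1, 1), (1, 1), (0, 0)), mode="edge")
    c = rgb
    n, s = pad[:-2, 1:-1], pad[2:, 1:-1]    # vertical neighbours
    w, e = pad[1:-1, :-2], pad[1:-1, 2:]    # horizontal neighbours
    mn = np.minimum.reduce([n, s, w, e, c])
    mx = np.maximum.reduce([n, s, w, e, c])
    # adaptive amount: ~1 in flat areas, ~0 where contrast is maximal
    amp = np.sqrt(np.clip(np.minimum(mn, 2.0 - mx)
                          / np.maximum(mx, 1e-5), 0.0, 1.0))
    # negative cross weight; sharpness in [0, 1] picks peak strength
    wgt = amp * (-1.0 / (8.0 - 3.0 * sharpness))
    out = (wgt * (n + s + w + e) + c) / (4.0 * wgt + 1.0)
    return np.clip(out, 0.0, 1.0)
```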
I have read about FidelityFX too; I'll have to read up on it more.
The two key words I was replying to were "feature lock", which you completely missed in your reply.
PhysX is a good example of feature lock, especially after Nvidia bought it.
Could be - superscaling and adaptive mipmapping are doable in a programmable shader too.
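If superscaling were involved, the downsample half would be equally unremarkable shader work. A hypothetical illustration (plain 2x box filtering, assumed purely for the example - nothing AMD has documented):

```python
import numpy as np

def box_downsample_2x(img):
    """Average each 2x2 block: the trivial half of 'render high, show low'."""
    h, w = img.shape[0] // 2 * 2, img.shape[1] // 2 * 2
    img = img[:h, :w]
    return (img[0::2, 0::2] + img[1::2, 0::2] +
            img[0::2, 1::2] + img[1::2, 1::2]) / 4.0
```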
I want to know if it works on videos too, and not only in games.
Yes, of course, it's not like you actually said more:
As for feature lock, your second statement justifies the first and is exactly what I'm addressing. Feature lock to the newer cards due to allegedly specialized hardware. Read more carefully. Do you understand what you're saying at this point?
I despise the hypocrisy of every AMD fanboy here figuring it's normal that this feature might not make it to older cards because of some rubbish excuse that specialized hardware is needed. I'm not saying it's not making it to older hardware - I'm merely saying that you guys finding it okay just because it's AMD who's doing it is stupid.
Interesting hypocrisy it is indeed. Especially since you did happen to "miss" my reply stating that the "software"-enabled variant will be available even to nVidia...
And you decided not to give me details on why you think I am letting AMD slide while I "bashed nVidia for some feature lock against their older generations."
(I am really interested in seeing what those intentionally limited features are. Really.)
Instead you go looking for where you can bash someone whose reply was not perfectly blocking your accusations. What is there to gain from that? You made multiple accusations, yet you could not (or did not want to) put any data behind any of them.
I ignored your previous comment precisely because I was not addressing whether you or I think the feature will make it to older cards. I was addressing, quite clearly, your hesitation to condemn any potential feature lock once you considered that AMD may simply have specialized hardware that deals with the sharpening. Sure, if you think the feature is coming to older cards or is available to Nvidia, that's great; that was not my point. My point was that even when Nvidia delivers some technology that does require specialized hardware to perform properly, you start telling us what Nvidia engineers should have done instead, as if you know any better.
I'm not arguing data, I'm not arguing whether AMD is bringing that feature to older cards or not, I'm not the one who's busy prodding unknown hardware and discussing specialized hardware that might be or not be there. I don't have that time to waste. I'm pointing out the double standards here when it comes to lockdown from AMD vs. from Nvidia and how these double standards can only possibly mean one thing - unrelenting and irrational bias.
It's also funny how you ask for data when you deliver baseless conjecture like this:
Where is your data, honey?
AMD has more on its plate in the GPU sector than to add graphical features that require specialized hardware. That would be quite the lack of focus. The company is having trouble matching Turing's efficiency without going down to 7nm. Care to guess what happens when Nvidia moves to 7nm? There are far more pressing issues here.
The forums have recently been flooded with pointless discussions regarding technologies that the vast majority of those discussing them do not understand and would swiftly condemn only because they emerge from one company and not the other. Now that Navi has released, the praise starts - regardless of any potential issues that AMD might be at fault for.
Murdered by words
And I did ask you to quote me for reference. I want to see the exact wording I used, not your recollection.
Am I biased? Really? For example, I defended DLSS until it managed to consistently fail in every implementation. And even then I wrote that I expect nVidia to rework it into something that actually works better than their currently available examples.
And I even wrote that while some parts of the images looked rather bad, some details were antialiased so well that I have not seen better. And I did the same for nVidia's other Turing features, except ray tracing, for which it is too early due to weak HW.
(Which is something even some people with a 2080 Ti could confirm.)
So, for the 4th time already: show me how I was evil and jumped on nVidia for not enabling a HW-based feature of the new generation on older HW. Thank you. Or stop repeating it.
So that means you have not seen the Navi release conference where it was stated. Go watch it. Maybe it will shut you up.
I have no objective reason to bash AMD for barely enabling something new on a buggy release driver for Navi, and for not enabling it on every single older card, given whatever performance cost it may inflict depending on whether it is FP16 or not. Or whatever else.
Because AMD introduced FidelityFX. The existence of that makes all your ranting about a HW-locked feature in this thread completely irrelevant - done either from lack of awareness or purely for the sake of ranting.