Discussion in 'Frontpage news' started by Hilbert Hagedoorn, Jul 7, 2019.
Thanks for your replies - I understand a bit better now
no. anisotropic filtering scales the sampling of a given texture based on the oblique viewing angle, and thus on how distorted its mipmaps get. think about it this way: 16x handles pixel footprints stretched twice as far as 8x, which handles footprints stretched twice as far as 4x, etc etc. how much of the screen is textured at those extreme grazing angles?? the answer is veeeeery few pixels, so unfortunately, higher levels of AF like 32x/64x won't do much to improve perceptible image quality even in theory. that's why they've never been implemented - they would only improve image fidelity on a tiny percentage of the screen (a little spot here, a rock over there), because of how steep the angle relative to the player/camera POV has to be before those levels even engage.
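to put rough numbers on that, here's a back-of-the-envelope sketch, assuming the common 1/cos(angle) model for the anisotropy of a flat textured surface (purely illustrative - no driver computes it exactly this way):

```cpp
// Back-of-the-envelope: at what grazing angle does each AF level kick in,
// assuming anisotropy ratio ~ 1/cos(theta) for a flat textured surface?
#include <cmath>
#include <cstdio>

int main() {
    const double kPi = 3.14159265358979323846;
    for (int level = 2; level <= 64; level *= 2) {
        // ratio = 1/cos(theta)  =>  theta = acos(1/ratio)
        double theta = std::acos(1.0 / level) * 180.0 / kPi;
        std::printf("%2dx AF needed beyond ~%.1f deg from head-on\n", level, theta);
    }
    // Output: 16x kicks in past ~86.4 deg, 32x past ~88.2, 64x past ~89.1.
    // The window where 32x/64x would matter is only a couple of degrees wide,
    // which is why so few screen pixels would ever benefit from them.
    return 0;
}
```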
the takeaway is that this isn't a higher level of AF. the performance hit is next to nothing, so it's likely just an advanced, intelligent sharpen shader rather than downsampling, but i don't know for sure.
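for illustration, the core of any sharpen pass is something like an unsharp mask - boost whatever the blur removes. this is just a sketch of that idea, not AMD's actual filter (theirs is contrast-adaptive and smarter about avoiding halos):

```cpp
// Minimal unsharp-mask sharpen on a grayscale image:
//   out = in + amount * (in - blur(in))
// Illustrative only; a contrast-adaptive sharpener varies strength per pixel.
#include <algorithm>
#include <vector>

std::vector<float> Sharpen(const std::vector<float>& img, int w, int h, float amount) {
    std::vector<float> out(img.size());
    for (int y = 0; y < h; ++y) {
        for (int x = 0; x < w; ++x) {
            // 3x3 box blur with clamped edges as the low-pass estimate.
            float blur = 0.0f;
            for (int dy = -1; dy <= 1; ++dy)
                for (int dx = -1; dx <= 1; ++dx) {
                    int sx = std::clamp(x + dx, 0, w - 1);
                    int sy = std::clamp(y + dy, 0, h - 1);
                    blur += img[sy * w + sx];
                }
            blur /= 9.0f;
            float v = img[y * w + x];
            // Boost the high-frequency detail (v - blur), clamp to [0,1].
            out[y * w + x] = std::clamp(v + amount * (v - blur), 0.0f, 1.0f);
        }
    }
    return out;
}
```

a pass like this is one cheap read-modify-write per pixel, which lines up with the "next to nothing" performance hit.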
i know better, so i can chip in with some authority here: nvidia engineers shouldn't have been directed by C-level execs to design new technology hinging on black-box proprietary hardware. proper technology follows the innovation of the engineers, but nvidia instead shapes the products around marketing. "but they're a business!" you say. well, great. jensen huang can buy himself another leather jacket. that facile reasoning is why we once had physx add-in cards. it's why we currently have RTX with AI cores that were originally designed for their titan line for scientific computation (which, ironically, throw rounding errors reproducibly in a lab setting - ask me how i know!)
how big is the RTX die again? how much faster would it be if they took the surface area dedicated to their marketing-gimmick silicon and just expanded the actual chip instead?? if they want to develop new forms of AA, or AI/dynamic downsampling, or implement crude one-ray-sample-and-dithered-reflection raytracing by brute force, then fine! great! do it in software. write a sophisticated algorithm and leverage developing hardware in tandem, the correct way. if it's incompatible with certain hardware due to the nature of the code, then that's the price of progress. if it's incompatible with certain hardware because the marketing department and the CEO designed it that way...
anyway, my point is that i bet you twenty bucks that once the ATI division stabilizes their ridiculously bad release drivers, they will try to get the filtering working on previous-gen cards due to consumer demand
I'm not really sure why this is a bad thing. This strategy grew Nvidia to 80% of the GPU market. It also created a bunch of key technologies that competitors were forced to create open alternatives to. Would I have preferred Nvidia go straight open from the start? Sure, why not. But they didn't, and regardless, they still forced innovation.
How do you know?
Probably not, because the limit of general performance is power consumption, not die size. Even in normal games with no RT/tensor load, the 2080 Ti is hitting 300 W.
I was interested in buying a 2080 Super, but looking at the prices of the 20xx cards, there is no way I am going to give my money to Nvidia, so I am just going to settle for a 5700 XT. The reviews of Anti-Lag seem to indicate that it's working. That's cool.
Yeah, I know, but the RTX 2080 Ti was not worth the price. I enjoyed it for the months I had it, but I would rather have an RX 5700 XT or RTX 2070 Super until next year. Selling the card gave me a brand-new rig, so I'm happy. Next year, maybe AMD and Nvidia will have a bigger fight on price vs. performance, and maybe even Intel will shake things up a little with their GPUs, who knows.
enjoy your downgraded overall experience.
There's no LOL about it. AMD still hasn't implemented basic WDDM features such as DX11 driver command lists and DX11 multithreading, so a good number of titles today are going to be inferior on AMD graphics cards, with more cases of inconsistent frame times and draw-call-contention frame drops.
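For reference, driver command list support is an explicit cap bit in D3D11 that any application can query. A minimal sketch, assuming you already have a valid ID3D11Device (error handling trimmed):

```cpp
// Query whether the D3D11 driver natively supports multithreaded command lists.
// If DriverCommandLists is FALSE, deferred contexts fall back to the slower
// runtime emulation, which is the gap being described above.
#include <d3d11.h>

bool DriverSupportsCommandLists(ID3D11Device* device) {
    D3D11_FEATURE_DATA_THREADING threading = {};
    if (FAILED(device->CheckFeatureSupport(D3D11_FEATURE_THREADING,
                                           &threading, sizeof(threading)))) {
        return false;
    }
    // DriverConcurrentCreates: resources can be created from multiple threads.
    // DriverCommandLists: the driver itself records command lists, no emulation.
    return threading.DriverCommandLists == TRUE;
}
```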
People must be fully aware of their choices when they are playing the morality card on a future purchase.
And basing it on Anti-Lag, a technology that is a numbers-only mind trick (you don't feel it in actual gameplay), is stupid.
OK, Novidia salesperson, I'll keep it in mind.
You know, those stupid nicknames you come up with make you look it.
AMD should reimplement their DirectX 11 and OpenGL support so they are actually all-around competitive instead of just being DX12 and Vulkan devices.
I await your counter-argument that actually has a basis in substance and relevance.
I am personally happy without even testing it, as this "placebo" works wonders for the people who have tested it in competitive FPS games.
(Since release, I have not even had time to play any games.)
Why that? I'll take a deep breath and have a good day instead, but you feel free to fight your GPU wars! It will surely gain you fame and a sense of pride and accomplishment!
Oh you want to feel good about something? Here:
Novidia > AMD.
Now now, it's slower than the 2080 Super for sure. But why so hostile? Also, AMD has Nvidia beat on the driver-side UI, and thankfully there is no GeForce Experience.
Also, if one wants to use Linux, AMD is better, hands down.
You're unable to provide valid critique, so you stoop to stupid nicknames, while pretending you aren't actually an AMD fanboy.
AMD will get my attention when they can perform all-around, and not just on the two mostly unused graphics APIs.
ah, the UX vs usability argument that killed Firefox. One day you'll get over your need for graphics and see that the lack of functionality is being masked by a shiny plastic cover.
Sorry, it took me a while to realize you were joking after all! My bad XD
You too have a great day, @Astyanax
P.S. And it's true! My Novidia GTX 970 -0.5GB and 2500K are the greatest example of me being an AMD fanboy! Totally shed a tear at the Witcher 3 benchmarks
With all due respect, these are pointless GPUs, since the plebeian console race's Xbox Scarlett is going to destroy them by having RDNA 2.0 with raytracing support. Nvidia it is for 2019, then. These cards are only for those who cannot keep living without upgrading NOW.
AMD and Firefox are fine, what a load of hogwash.
This is what ignorance looks like.
Firefox has been an underwhelming Chrome clone since it dropped XUL core extensions, and AMD can't render many applications as efficiently as Nvidia.
AMD fanbois cry hard when reminded that their worship target is lacking in various points, then scramble for strawman arguments such as "but that isn't used as much"
As though DX12 and Vulkan are used much more. At least Nvidia has an all-around performance basis and makes use of as much of the available system as possible.
Scarlett will be interesting, but it won't be an Nvidia killer; if anything, it'll bring them within reach of Nvidia's Ti parts.
And then Nvidia will move to chiplet GPUs...
Reminder: it's not that AMD / Nvidia is better or worse at D3D12 or Vulkan; it's the fact that both of these APIs must be optimised per architecture, and most game studios are lazy and go with a good-enough implementation.
oh, and FYI,
It's not that Time Spy is optimized for Nvidia; it's the fact that AMD suffers when async compute is NOT optimized specifically for their parts (it's a bare-bones implementation with no real architecture targeting at all).
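To make the async compute point concrete on the Vulkan side: targeting an architecture properly starts with something as basic as whether you even submit compute work to a dedicated queue family. A rough sketch, assuming a valid VkPhysicalDevice; real per-architecture tuning goes well beyond this:

```cpp
// Find a dedicated async-compute queue family (COMPUTE but not GRAPHICS).
// Whether using it helps, and by how much, differs per GPU architecture,
// which is why a one-size-fits-all async compute path leaves performance behind.
#include <vulkan/vulkan.h>
#include <cstdint>
#include <vector>

int FindAsyncComputeQueueFamily(VkPhysicalDevice gpu) {
    uint32_t count = 0;
    vkGetPhysicalDeviceQueueFamilyProperties(gpu, &count, nullptr);
    std::vector<VkQueueFamilyProperties> families(count);
    vkGetPhysicalDeviceQueueFamilyProperties(gpu, &count, families.data());

    for (uint32_t i = 0; i < count; ++i) {
        const VkQueueFlags flags = families[i].queueFlags;
        if ((flags & VK_QUEUE_COMPUTE_BIT) && !(flags & VK_QUEUE_GRAPHICS_BIT))
            return static_cast<int>(i);  // dedicated compute queue family
    }
    return -1;  // none found: compute would share the graphics queue
}
```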
Off-topic, for the mods... when and how did these kinds of people get into the G3D forums?
This isn't your safe space, never has been.
if you can't handle opposing views which are founded in reality, maybe the amd forums are better for you, since those are an echo chamber where everyone has to agree with you.
I'll say it again: when AMD have DirectX 11 implemented properly and port their multithreaded OpenGL client back to Windows from Linux, they'll deserve a real look.