The RTX developer page mentions ray tracing. I noticed that when I was running WoW without NVAPI, the RT option was still available, which implies RT doesn't need anything NVIDIA-specific. I'm wondering: does RT have both an accelerated path on NVIDIA and a generic option? It sounds like OptiX is the RTX-specific option for RT? If developers implement RT, do they control whether it goes through OptiX (like GPU PhysX), or does the OS or driver control it?
No, for the question in the thread title. Ray tracing is ray tracing; it's not NVIDIA's proprietary feature and never was. RTX is NVIDIA's branding for cards with RT cores, AI acceleration (Tensor cores), and CUDA.
The answer is that the game uses the generic path. Here is some background: before DXR/Vulkan ray tracing became available, NVIDIA had the OptiX API, which was used to get ray-tracing acceleration from GPUs for a long time (since ~2009), mainly in software like Blender; I don't think any games use it. Edit: It appears that NVIDIA added OptiX to GameWorks quite some time ago (~2013), but it doesn't look like any games used it, probably because it would've been incredibly slow at the time, and it seems no games since Volta/Turing launched use it either. Here is one use case I found: Bungie used OptiX for pre-baked shadows and lighting on maps, no realtime stuff.
I'm still trying to understand this. It sounds like games for the most part either use DXR or go through Vulkan's RT equivalent, and DXR is available for non-RTX GPUs. I'm guessing DXR accelerates RT through whatever the driver presents? NVIDIA has RT cores on RTX, but apparently uses a generic method for non-RTX cards? I don't understand what AMD has, but it sounds like they have their own RT acceleration on RDNA 1/2. Basically, how can I tell how RT is being accelerated? I suspect I'm accelerating RT through some generic means, but I want to understand what's supposed to be happening.
By getting playable framerates. Care to take a look at this? Your GPU is able to render at 4K, DLSS Balanced (it will look pretty awesome), with high ray tracing at 60+ frames. If it was not accelerated in some way or form... do you think you would get performance like this? I'm joking, though; I know it really is being accelerated. We can see how the 1080 Ti fares with ray tracing when compared to something like a 2060/2060 Super. Even if it's generic or something, it's clear there's a huge gap.
My understanding is that with DXR on non-RTX cards it goes through compute shaders and is accelerated on the same pipeline as 3D, whereas on RTX it goes through a separate path to dedicated RT cores. It's still GPU-accelerated either way, just more optimal on RTX.
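If you want to see what the driver actually reports, you can query the DXR tier yourself. Here's a minimal sketch in D3D12 (error handling trimmed); note the tier only tells you whether the driver accepts DXR calls at all, not whether they land on dedicated RT cores or on a compute-shader path internally:

```cpp
// Sketch: ask a D3D12 device what raytracing tier its driver exposes.
// NOT_SUPPORTED means no DXR at all; TIER_1_0/1_1 means DXR works, but
// whether it's backed by RT cores or compute shaders is the driver's business.
#include <d3d12.h>
#include <cstdio>

void PrintRaytracingTier(ID3D12Device* device) {
    D3D12_FEATURE_DATA_D3D12_OPTIONS5 opts5 = {};
    if (SUCCEEDED(device->CheckFeatureSupport(
            D3D12_FEATURE_D3D12_OPTIONS5, &opts5, sizeof(opts5)))) {
        switch (opts5.RaytracingTier) {
            case D3D12_RAYTRACING_TIER_NOT_SUPPORTED:
                std::printf("No DXR support\n"); break;
            case D3D12_RAYTRACING_TIER_1_0:
                std::printf("DXR tier 1.0\n"); break;
            case D3D12_RAYTRACING_TIER_1_1:
                std::printf("DXR tier 1.1 (adds inline raytracing)\n"); break;
        }
    }
}
```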
You may not be able to enable RT at all on a non-RT-capable card; the option in game will probably be greyed out. DXR does have a software fallback that can run RT on non-RT hardware, but it will be dog slow on the CPU.
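For reference, the software path you can actually get at is WARP, Microsoft's software adapter; as far as I know, newer WARP builds advertise DXR, so the calls work but run entirely on the CPU. A rough sketch of creating a device on it:

```cpp
// Sketch: create a D3D12 device on WARP (Microsoft's CPU software adapter).
// If the installed WARP build supports DXR, raytracing calls will run,
// just very slowly, entirely on the CPU. Error handling omitted.
#include <dxgi1_4.h>
#include <d3d12.h>
#include <wrl/client.h>
using Microsoft::WRL::ComPtr;

ComPtr<ID3D12Device5> CreateWarpDevice() {
    ComPtr<IDXGIFactory4> factory;
    CreateDXGIFactory2(0, IID_PPV_ARGS(&factory));

    ComPtr<IDXGIAdapter> warpAdapter;
    factory->EnumWarpAdapter(IID_PPV_ARGS(&warpAdapter)); // the software adapter

    ComPtr<ID3D12Device5> device;                         // ID3D12Device5 exposes DXR
    D3D12CreateDevice(warpAdapter.Get(), D3D_FEATURE_LEVEL_11_0,
                      IID_PPV_ARGS(&device));
    return device;
}
```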
You basically have the gist of it. In its simplest form: Game > API > Driver > Hardware. The API and driver communicate the steps to the hardware. Cards without RT cores accelerate on the shaders; cards with RT cores accelerate on those cores. AMD's "hybrid cores" are designated shaders that handle only RT in RT games and traditional tasks in non-RT games; the rest of the shaders handle the traditional tasks (and RT, in RT games). As far as Spider-Man goes, it's throwing some of the RT workload onto the CPU for some reason; I'm assuming the PlayStation has spare CPU resources to throw at it, and doing it that way gets the most performance out of the hardware.
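The Vulkan side works the same way: the game just asks the driver whether the cross-vendor ray tracing extensions are present, and the driver maps them onto whatever the silicon has. A quick sketch using the actual extension names from the Vulkan headers:

```cpp
// Sketch: check whether a Vulkan driver exposes the cross-vendor
// ray tracing extensions. How the driver implements them (RT cores,
// Ray Accelerators, plain shaders) is invisible at this level.
#include <vulkan/vulkan.h>
#include <cstring>
#include <vector>

bool SupportsRayTracing(VkPhysicalDevice gpu) {
    uint32_t count = 0;
    vkEnumerateDeviceExtensionProperties(gpu, nullptr, &count, nullptr);
    std::vector<VkExtensionProperties> exts(count);
    vkEnumerateDeviceExtensionProperties(gpu, nullptr, &count, exts.data());

    bool pipeline = false, accel = false;
    for (const auto& e : exts) {
        if (!std::strcmp(e.extensionName, VK_KHR_RAY_TRACING_PIPELINE_EXTENSION_NAME))
            pipeline = true;                    // vkCmdTraceRaysKHR etc.
        if (!std::strcmp(e.extensionName, VK_KHR_ACCELERATION_STRUCTURE_EXTENSION_NAME))
            accel = true;                       // BVH build/management
    }
    return pipeline && accel;
}
```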
Sorta... the Radeon Ray Accelerators are part of the Compute Units, but that part is solely dedicated to RT; they do not accelerate anything other than RT. So on RDNA 2, the RA units handle the ray/box and ray/triangle intersection tests against the BVH, while the regular parts of the CUs handle the traversal logic and shading.
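To make that split concrete, here's a toy traversal loop; this is purely illustrative, not AMD's actual implementation. The box/triangle intersection tests are the fixed-function RA work (AMD quotes 4 box tests or 1 triangle test per CU per clock), while the stack management and everything around it runs on the normal shader ALUs:

```cpp
// Toy BVH traversal, purely illustrative. On RDNA 2 the intersection
// tests (hitBox and the leaf triangle tests) are what the Ray Accelerator
// does in fixed function; the loop itself runs on the shader ALUs.
#include <cstdint>
#include <utility>

struct Ray  { float o[3], d[3]; float tMax; };
struct AABB { float lo[3], hi[3]; };
struct Node { AABB box; int32_t left, right, triCount; };

// Slab test: the kind of ray/box intersection the RA evaluates in hardware.
// Toy version; ignores divide-by-zero edge cases.
bool hitBox(const Ray& r, const AABB& b) {
    float t0 = 0.0f, t1 = r.tMax;
    for (int i = 0; i < 3; ++i) {
        float inv = 1.0f / r.d[i];
        float tn = (b.lo[i] - r.o[i]) * inv;
        float tf = (b.hi[i] - r.o[i]) * inv;
        if (tn > tf) std::swap(tn, tf);
        if (tn > t0) t0 = tn;
        if (tf < t1) t1 = tf;
        if (t0 > t1) return false;
    }
    return true;
}

void traverse(const Ray& ray, const Node* nodes) {
    int32_t stack[64];
    int sp = 0;
    stack[sp++] = 0;                          // start at the root node
    while (sp > 0) {                          // control flow: shader ALUs
        const Node& n = nodes[stack[--sp]];
        if (!hitBox(ray, n.box)) continue;    // intersection test: RA hardware
        if (n.triCount > 0) {
            // Leaf: ray/triangle tests (also RA work), then shading on the CUs.
        } else {
            stack[sp++] = n.left;             // interior node: push children
            stack[sp++] = n.right;
        }
    }
}
```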
RDNA 3 should have dedicated RT cores, right? AMD mentioned a total revamp of their Ray Accelerators and how they work.
No idea. But it wouldn't surprise me, as it seems like the logical step to take to increase the performance.
The PS5, relatively speaking, does have way more CPU power than GPU power... so it absolutely makes sense that they made it that way.
You also have to remember that consoles may be based on the same x86 and GPU architectures we use on PC, but they are not PCs. How the GPU, CPU, and storage talk to each other is very different. That's why the PS5 can do things like crazy-fast streaming of data around; it can load data in Spider-Man over 1,700 percent faster than the PS4, for instance.
The PC can do that too, but apparently some party (Microsoft and Khronos?) can't get their heads out of their butts to create a cross-vendor API that does the trick, e.g. with regard to GPU-based decompression. Or someone is slowing them down.
NVIDIA's RTX cards have a BVH accelerator built into the pipeline, whereas AMD uses the general-purpose shaders for BVH work; this means that on Radeons the bounding-box algorithm is competing with all other graphical effects for processing time.