Discussion in 'Frontpage news' started by Hilbert Hagedoorn, Jan 1, 2019.
oh dear here we go again ! What are you doing ngreedia ???
Wasn't that what DICE said as well? They weren't utilizing the RT cores at the time, though that was back in the alpha or beta, shortly after the initial presentation, and it has changed as of a few patches back.
(The implementation now also uses SSR, in addition to the ray tracing itself, to cover more reflections, along with performance tweaks and other improvements.)
Wonder what state the SDK is in. It's being implemented and used in various games (DLSS, plus NVIDIA's take on ray tracing through DX12, and also Vulkan via extensions), though I guess that's covered by NDAs and it's still very early on.
(Something's going on with Shadow of the Tomb Raider too, but I doubt we'll ever know what, and it'll eventually make its way into the release build of the game via some future patch.)
EDIT: Ah, it's covered in an earlier reply already.
Wonder when we'll see benchmarks of BF V with every 2016-2019 HEDT GPU running ray tracing via the fallback layer... and what the variances will be.
NVIDIA is at it again! 3.5+0.5 GB RAM
Nobody who put more than two minutes of thought into it was surprised. This is just another case of the usual suspects trying to turn everything they can into some huge conspiracy. I warn people time and time again: this is what happens when you watch too much CNN and MSNBC.
How are you so sure?
Both of you need to do a little more research before making assumptions, as do many other users in this thread. The Titan V is a Volta GPU which was originally intended to run ray tracing. It's also priced higher than the 2080 Ti. The 2080 Ti performs better in pretty much everything else, such as 4K gaming, at a fraction of the price, so what's the problem? People really need to learn how to relax.
There is so much misinformation here... Let's cover this.
No, you can't trick DXR into working on any other GPU that doesn't support it natively.
Titan V was ALWAYS capable of DXR; it was the only GPU capable of it, without the compatibility layer, when DXR was first announced.
RT cores are not required to do ray tracing. They never were. They only increase performance when doing it. You can read more on why here (http://www.pc-better.com/titan-rtx-review/3/); read the RT Core section.
The Titan V isn't as fast as the 2080 Ti. I didn't bother to read where they were testing, but on the same system (and this is the most important part), in a scene that processes a lot of rays, the 2080 Ti is about 30% faster than the Titan V at the same clocks. I have tested this.
So in short: you can't ray trace in DXR with anything that isn't Titan V or Turing without the compat layer, the Titan V isn't as fast at it, and no one was being deceptive; the Titan V was advertised as the only DXR-capable card available when DXR launched.
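To make the RT-core point concrete: ray tracing at its core is ordinary floating-point math that any compute hardware can execute; dedicated hardware only accelerates the traversal and intersection work. A toy Python sketch of a single ray/sphere intersection (nothing here is from DICE or NVIDIA, just the textbook quadratic):

```python
import math

def ray_sphere_hit(origin, direction, center, radius):
    """Return the nearest positive hit distance along the ray, or None.

    Solving a quadratic for the ray/sphere intersection is the kind of
    arithmetic ray tracing boils down to. Any FP32-capable GPU (or a CPU)
    can do it; RT cores only speed up traversal/intersection at scale.
    """
    # Vector from ray origin to sphere center
    oc = tuple(o - c for o, c in zip(origin, center))
    a = sum(d * d for d in direction)
    b = 2.0 * sum(o * d for o, d in zip(oc, direction))
    c = sum(o * o for o in oc) - radius * radius
    disc = b * b - 4.0 * a * c
    if disc < 0.0:
        return None  # ray misses the sphere entirely
    t = (-b - math.sqrt(disc)) / (2.0 * a)
    return t if t > 0.0 else None

# A ray shot down +Z hits a unit sphere centered 5 units away at t = 4.
print(ray_sphere_hit((0, 0, 0), (0, 0, 1), (0, 0, 5), 1.0))  # → 4.0
```

DXR on a GPU without RT cores runs exactly this kind of math on the regular shader cores, which is why Volta (and the fallback layer) can do it at all, just slower.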
Guys, have you ever thought about why DXR is only allowed on RTX cards, while it's a DX12 feature?
Hi Unreal-- As it stands, NVIDIA only officially supports DX-R hardware acceleration (for games) on the RTX series. However, DX12's DX-R supports Volta as well; it always did. NVIDIA just never exposed DX-R on Volta in their drivers in a way that would be recognized as "RTX supported"; or rather, the support is there, but each "RTX" game must list which cards to key in, via device ID or whatever method is used. This could be subject to change now that Volta, despite not being in the RTX series, has been shown to utilize DX-R with full acceleration in BF V via a mod.
Can the techniques used on BF V/Volta for DX-R be applied to other architectures/games as well? Possibly; I suspect they're driver-side, but I've yet to pore through all of 3DCenter's forum. I think what happened with Volta is that it utilized support the API already had as far as DX-R and acceleration were concerned. I think once you try to force similar techniques onto Pascal or other generations, it'll either break the application or, depending on how DX12 is constructed (and I have to research this), force those older cards into some fallback-layer method. Still, if it works, cool crap.
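To illustrate the "key in via device ID" idea: a driver or profile layer gating which GPUs see DXR in which titles could look roughly like the sketch below. Everything in it (the device IDs, the game names, the allowlist itself) is hypothetical, purely to show the shape of such a gate; none of this is NVIDIA's actual code.

```python
# Hypothetical sketch of a driver/profile layer keying DXR exposure by PCI
# device ID. Every ID, executable name, and list entry is made up.

TURING_IDS = {0x1E04, 0x1E07}  # assumed IDs standing in for RTX 2080 Ti SKUs
VOLTA_IDS = {0x1D81}           # assumed ID standing in for Titan V

# Per-title allowlist (hypothetical): which device IDs get DXR exposed
DXR_ALLOWLIST = {
    "bfv.exe": TURING_IDS | VOLTA_IDS,   # Volta let in, as with the BF V mod
    "other_rtx_game.exe": TURING_IDS,    # Turing only
}

def dxr_exposed(game: str, device_id: int) -> bool:
    """Would this hypothetical driver report DXR support to `game`?"""
    return device_id in DXR_ALLOWLIST.get(game, set())

print(dxr_exposed("bfv.exe", 0x1D81))             # → True
print(dxr_exposed("other_rtx_game.exe", 0x1D81))  # → False
```

If the gate really is per-title like this, it would explain why a mod or profile tweak can flip Volta on for one game without touching the API itself.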
This is just a side bar. I've noticed some comments here and there in the thread. DX-R isn't RTX/NVIDIA proprietary; it never was. I think some confusion may have stemmed from the manner in which NVIDIA marketed the feature at the launch of the RTX series. They're simply first to market in supporting it. Internal SDKs/drivers may be in testing on AMD's side, concurrent with developers under NDA; such are the ways of the world, regardless of what Lisa Su says publicly. Testing can take a year to years. For now, outside of that, all we laymen know of is the fallback layer.
Some light reading for anyone interested in acquainting themselves with DX-R & the various properties associated.
"A Gentle Introduction To DirectX Raytracing - cwyman.org"
"DirectX Raytracing Tutorials - Github"
"Getting started with DirectX Raytracing - DirectXTech Forums"
The problem here is not whether the Titan V supports RT or not (we already knew that), but the fact that NVIDIA didn't add support for it in Battlefield V. The price range of the 20xx series has been a hard pill to swallow already; add to it the fact that there are other architectures that can handle it, and pretty nicely, even if it's a $3,000 GPU, and that could put another layer of doubt in the mind of a potential buyer.
cough my GTX 9cough70 cough
Gives another meaning to "It just works!"
DICE started the DXR development on the Titan V because they didn't even know about the RTX cards until around two weeks before they were announced. That's why it supports the Titan V, the same card that ran the UE4 Star Wars demo. If you want to run ray tracing on a consumer card, not a $3,000 professional card, you need dedicated hardware like the RT cores, or you wait a few years until we have powerful enough cores in a consumer card. Be sure that if AMD could have done ray tracing in real time, they would have; it exists in DirectX in the form of DXR and in Vulkan, so nothing is stopping them, and in theory their cards are better suited for ray tracing than NVIDIA's because they are more parallel.
I've also seen people on these forums with a Titan V claim that they get around 1/3 the performance of a 2080 Ti, with minimum fps dropping to the single digits, and that's from a professional card that costs twice as much as a 2080 Ti. Also, the Titan V is the best half-precision card on the market after the new Titan RTX, so they might be using half precision to get this performance. Users also report extreme noise using the Titan V; that means either the engine isn't running a denoiser on the Titan V (which costs more performance), or it throws fewer rays into the scene, which makes it much lighter to run (and less attractive to look at), or it's using half precision, which adds noise to the image.
Well, we knew DICE did the bring-up for DXR in BF V on Titan Vs; they mentioned that in the Digital Foundry analysis. We also know that DICE doesn't utilize Tensor cores at all in their implementation; I think this is because they want the feature to be vendor-agnostic. Regardless, they run the noised RT output through their own denoiser, and the Titan V should theoretically have an advantage in performance here, especially if the denoiser is utilizing FP16 (it almost certainly is, as there isn't a big need for higher precision and every card DICE plans on running this on has half-precision support).
The RT process is essentially two parts: the BVH part (this is where Turing received improvements) and the denoise part. With recent patches, DICE made a ton of changes to how the BVH representation works. During the beta/demo the BVH representation was overly complex; they found a massive improvement in performance by merging the more complex objects into one (IIRC a 30% improvement). They also made changes in the various patches since then to simplify the BVH representation and reduce intersect calculations. So basically every change they made should theoretically increase performance on the Titan V more so than on the 2080 Ti, as the Titan V should spend more time bottlenecked by BVH than the 2080 Ti does.
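The BVH-merging win described above can be illustrated with a toy example: a ray that misses one merged parent box skips every child test, whereas a flat layout tests each box individually. This is a sketch of the general idea only, not DICE's actual data structures:

```python
# Toy two-level BVH: compare intersection-test counts for a flat list of
# boxes vs. the same boxes merged under one parent AABB.

def ray_hits_aabb(origin, inv_dir, lo, hi):
    """Slab test for a ray against an axis-aligned bounding box."""
    tmin, tmax = 0.0, float("inf")
    for o, inv, l, h in zip(origin, inv_dir, lo, hi):
        t1, t2 = (l - o) * inv, (h - o) * inv
        tmin = max(tmin, min(t1, t2))
        tmax = min(tmax, max(t1, t2))
    return tmax >= tmin

def count_tests_flat(boxes, origin, inv_dir):
    # Flat layout: every box is tested against the ray.
    tests = 0
    for lo, hi in boxes:
        tests += 1
        ray_hits_aabb(origin, inv_dir, lo, hi)
    return tests

def count_tests_merged(boxes, origin, inv_dir):
    # Merged layout: test one enclosing parent box first; only descend
    # into the children if the parent is hit.
    lo = tuple(min(b[0][i] for b in boxes) for i in range(3))
    hi = tuple(max(b[1][i] for b in boxes) for i in range(3))
    tests = 1
    if ray_hits_aabb(origin, inv_dir, lo, hi):
        for blo, bhi in boxes:
            tests += 1
            ray_hits_aabb(origin, inv_dir, blo, bhi)
    return tests

# 100 small boxes clustered along +X near y,z = 0; the ray passes well
# above them at y = 10, so it misses the whole cluster.
boxes = [((50 + i, 0, 0), (51 + i, 1, 1)) for i in range(100)]
origin = (0.0, 10.0, 0.0)
direction = (1.0, 0.01, 0.01)
inv_dir = tuple(1.0 / d for d in direction)

print(count_tests_flat(boxes, origin, inv_dir),
      count_tests_merged(boxes, origin, inv_dir))  # → 100 1
```

In this toy case the merged layout does 1 box test instead of 100 for a ray that misses the cluster; the real win DICE saw (IIRC ~30%) comes from the same effect at scale.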
On top of that there are conflicting reports about the performance anyway with people mentioning that the Titan V is slower than 2080Ti in similar scenes.
I hoped you had direct proof. I respect your arguments, which are likely correct, but a full dismissal can't be made yet.
I think too much Fox News, but hey, lol.
They mention the Titan V hardware for development here.
They talk about BVH implementation regarding RT cores here.
DICE talks about the BVH/Intersect improvements here
Performance is roughly 35% slower on Titan V in RT heavy maps despite the theoretical performance (FP32 Tflops) being roughly identical and Titan V having 50% more FP16.
You put it all together and it just makes sense. The Titan V has more than enough compute to do the denoise (in fact it should do it much faster than the 2080 Ti) and render the game well; the only place it lacked was BVH/intersect performance, and DICE has been actively improving that with each patch. Yet despite this the 2080 Ti is still 35% faster, which is most likely the RT cores speeding up that BVH work further.
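As a back-of-envelope check on that reasoning, using only the 35% figure from above and treating the BVH share of frame time as an assumption, you can work out how much slower the Titan V's BVH/intersect stage would have to be if everything else were equal:

```python
# Back-of-envelope only. The 1.35x overall gap comes from the posts above;
# the BVH fractions tried below are assumptions for illustration.

def implied_bvh_slowdown(total_time_ratio, bvh_fraction):
    """Implied Titan V BVH/intersect slowdown vs. the 2080 Ti.

    total_time_ratio: Titan V frame time / 2080 Ti frame time (e.g. 1.35)
    bvh_fraction: share of the 2080 Ti frame spent on BVH/intersect (assumed)
    Assumes all non-BVH work takes the same time on both cards.
    """
    extra = total_time_ratio - 1.0  # extra frame time, in 2080 Ti frames
    return (bvh_fraction + extra) / bvh_fraction

for frac in (0.3, 0.5, 0.7):
    ratio = implied_bvh_slowdown(1.35, frac)
    print(f"BVH at {frac:.0%} of frame -> Titan V BVH ~{ratio:.2f}x slower")
```

Under these toy assumptions, a 35% overall gap implies the 2080 Ti's BVH/intersect stage is roughly 1.5x to 2.2x faster, which is at least consistent with RT cores doing the heavy lifting there.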
Shader Model is just a virtual machine, and the code is always passed to the driver. It doesn't matter if it's legacy DXBC or the new DXIL. The driver could simply accept FP16 meta-registers as input, upcast by the driver or coerced by the hardware to FP32, process them as FP32, and then truncate back to FP16 values to feed the DirectX runtime correctly. The implementation is always opaque. If AMD still lacks DXR support, it means its driver/hardware is unable to produce correct results for the DirectX runtime.
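The upcast/process/truncate path described above is easy to demonstrate: quantize to FP16, compute at a wider precision, then truncate back, and you land on valid FP16 values either way. A small Python sketch using the struct module's half-precision format:

```python
import struct

def fp16_roundtrip(x: float) -> float:
    """FP16 -> wider float -> FP16, mimicking the upcast/truncate path.

    The intermediate math runs at Python's native double precision, but
    the result is truncated back to half precision, so what comes out is
    always a representable FP16 value regardless of how it was computed.
    """
    half = struct.unpack("<e", struct.pack("<e", x))[0]  # quantize to FP16
    widened = half * 1.0  # "processed" at higher precision (identity here)
    return struct.unpack("<e", struct.pack("<e", widened))[0]  # truncate back

print(fp16_roundtrip(0.1))     # → 0.0999755859375 (nearest FP16 value)
print(fp16_roundtrip(2049.7))  # → 2050.0 (FP16 spacing is 2 at this magnitude)
```

Whether the hardware truly computes in FP16 or upcasts to FP32 internally is invisible from outside: the runtime only ever sees valid FP16 results.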
The implementation on NVIDIA hardware is even more opaque, since NVIDIA GPUs lack a public hardware programming guide and we have only the CUDA (another virtual machine) developer documentation.
Yup, some of my thoughts as well. Some people are acting like a 1080 Ti is running DXR the same as a 2080 and it's just some huge scam, lol. No guys, it's just a $3,000 card performing like a $1,200 card. Go ahead and buy a Titan V if you feel the 2080 Ti is a rip-off, lol.