Noisiv posted it in the 2080 Ti thread in the Nvidia section. Key points from the video, taken from a reddit thread:

- DICE developed the game on Titan V, which has no dedicated RT cores to run specific ray tracing functions. They only received Turing hardware two weeks before this demo.
- They plan to improve the fidelity at launch, since Turing's RT cores can accelerate these functions better than the Tensor cores in Titan V.
- Ray tracing is running at 1:1 parity with raster resolution. DF changed the resolution around and got the following: 1080p @ 60 fps, 1440p @ 40-50 fps, and 4K @ 20 fps.
- DICE plans to allow greater control of RT settings, including changing the number of rays shot per pixel, scaling the RT resolution independently of the rasterized resolution (e.g. game at 4K, RT at 1080p or lower), or using intelligent upscaling of lower-resolution RT via AI denoising/checkerboarding.
- They expect a ~30% RT performance improvement from one type of optimization alone (merging separate instances of various objects).
- The demo runs the RT cores after the G-buffer rasterization pass; they plan to run the RT cores in parallel, asynchronously.
- DICE is happy to have real-time RT in hardware instead of having to come up with time-consuming, non-interactive, or inaccurate raster techniques to cover the various cases.

In the video he also mentions that they aren't using the Tensor cores at all for denoising/scaling, but are running their own algorithm instead. I don't know what that's running on, but doing that portion on the Tensor cores should theoretically free up performance, since the Tensors are completely separate from the FP cores.
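To get a feel for why decoupling the RT resolution from the raster resolution matters, here is a quick back-of-the-envelope sketch (my own illustration, not anything from DICE's code): primary ray count scales with RT pixel count times rays per pixel, so dropping RT from 4K to 1080p while keeping the game at 4K cuts the rays traced per frame by 4x.

```python
# Illustrative only -- the function name and numbers are my own assumptions,
# not DICE's implementation. Primary RT cost scales roughly with the number
# of rays launched per frame.

def rays_per_frame(width, height, rays_per_pixel=1):
    """Primary rays launched per frame at a given RT resolution."""
    return width * height * rays_per_pixel

rt_at_4k    = rays_per_frame(3840, 2160)  # RT at 1:1 parity with 4K raster
rt_at_1080p = rays_per_frame(1920, 1080)  # RT decoupled down to 1080p

print(rt_at_4k)                # 8294400 rays
print(rt_at_1080p)             # 2073600 rays
print(rt_at_4k / rt_at_1080p)  # 4.0x fewer rays to trace
```

The same function shows why exposing rays-per-pixel as a setting is the other big lever: halving rays per pixel halves the ray count at any resolution, at the cost of a noisier image for the denoiser to clean up.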