I understand perfectly fine; you seem to be missing how this sort of neural network works. For the fine details, you literally see what the network "guesses" is there, based on the previous frames and its training. There are examples (in Wolfenstein and Control) where DLSS at 1440p is actually more detailed than the native 4K render, and the Eurogamer article shows the same.

From my own eyes: Control at 1440p DLSS looks better than Control at native 4K. That's on a huge display (65") running 2160p120 natively, with full-range RGB and 12-bit pixel depth, so anything in the display chain that might introduce weirdness or artifacts is ruled out.

DLSS is not perfect, but 99.9% of people wouldn't even notice it upscaling from 1080p to 4K (I've tried this with my wife and friends), and from 1440p they all told me it looks "better". It's the same in Cyberpunk between native 4K and DLSS Balanced/Quality: DLSS definitely looks better.
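To make the "detail from previous frames" point concrete, here is a toy sketch of temporal accumulation, the general idea that DLSS builds on. This is NOT NVIDIA's actual pipeline: a real upscaler uses motion vectors to re-project history and a trained network to decide per pixel how much history to trust, while this sketch assumes a static scene, jittered sampling, and a fixed blend factor standing in for the network's learned weighting. The point it illustrates is that low-resolution frames sampled at different sub-pixel offsets can, accumulated over time, recover more detail than any single low-resolution frame contains.

```python
import numpy as np

# Toy temporal accumulation sketch (illustrative only, not actual DLSS).
# A static 4x4 "scene" is rendered at 2x2 each frame, but with a jittered
# sub-pixel offset; blending the jittered samples into a high-resolution
# history buffer gradually reconstructs the full 4x4 image.

rng = np.random.default_rng(0)
scene = rng.random((4, 4))   # ground-truth high-res frame (static scene assumed)
history = np.zeros((4, 4))   # high-res accumulation buffer

offsets = [(0, 0), (0, 1), (1, 0), (1, 1)]  # sub-pixel jitter pattern
alpha = 0.5  # fixed blend weight; a real upscaler's network predicts this per pixel

for frame in range(40):
    oy, ox = offsets[frame % 4]
    samples = scene[oy::2, ox::2]  # this frame's jittered 2x2 low-res render
    # Exponential blend of new samples into the matching history pixels.
    history[oy::2, ox::2] = alpha * samples + (1 - alpha) * history[oy::2, ox::2]

# After enough frames, every high-res pixel has been sampled several times,
# so the history buffer converges to the full-resolution scene.
```

After 40 frames each high-resolution pixel has been blended 10 times, so the residual error per pixel is at most (1 - alpha)^10 of the original value: the buffer holds essentially the full 4x4 image even though no single frame rendered more than 2x2 pixels. DLSS replaces the fixed `alpha` and static-scene assumption with motion-vector re-projection and a network trained on high-quality reference images, which is why it can also fill in detail the low-resolution frames never directly sampled.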