There is no central database involved. The per-frame cost (throughput / latency) of fetching from a central database would be far too high to do this in real time, and there would be no practical way to know what to look up in such a hypothetical database anyway.

Instead, Nvidia trains a neural network on that database of 64x supersampled images so that it can predict, with reasonable accuracy, what a supersampled image would look like given a lower-resolution input image. This is essentially the same class of workload as image recognition, except the network outputs a reconstructed image rather than a label. Nvidia then ships that trained network to users via driver updates / per-game profiles.

DLSS 1x renders at half the resolution and upscales the result, while DLSS 2x renders at native resolution. The prediction step, in technical terms the "inferencing" part, is essentially the same in both modes. Inferencing runs on the Tensor cores, which are optimized for the very fast matrix multiplications that neural-network inference mostly consists of. Both 1x and 2x therefore keep the inferencing cost off the shader cores, while 1x goes one step further and also saves the cost of rendering the input frame at full resolution.
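To make the "inference is mostly matrix multiplication" point concrete, here is a minimal toy sketch in Python/NumPy. It is not Nvidia's actual network or API: the random weights, the single-layer patch-based upscaler, and the 2x-per-axis scale factor are all illustrative assumptions. It only shows that upscaling a frame with a trained network boils down to matrix multiplies over image patches, which is exactly the workload Tensor cores accelerate.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in weights; in DLSS these would come from training against
# 64x supersampled reference frames.
W = rng.standard_normal((64, 16)).astype(np.float32) * 0.1
b = np.zeros(64, dtype=np.float32)

def upscale_2x(frame_lr):
    """Upscale a (H, W) grayscale frame to (2H, 2W), one 4x4 patch at a time."""
    h, w = frame_lr.shape
    out = np.zeros((h * 2, w * 2), dtype=np.float32)
    for y in range(0, h, 4):
        for x in range(0, w, 4):
            patch = frame_lr[y:y + 4, x:x + 4].reshape(16)   # 4x4 patch -> 16-vector
            hr = np.maximum(W @ patch + b, 0.0)              # matmul + ReLU: the "inferencing"
            out[2 * y:2 * y + 8, 2 * x:2 * x + 8] = hr.reshape(8, 8)
    return out

# "DLSS 1x"-style usage: render at a lower resolution (here 540p)...
frame_lr = rng.random((540, 960), dtype=np.float32)
# ...then spend matrix-multiply time (Tensor cores on real hardware) instead of
# shader time to reach the 1080p target.
frame_hr = upscale_2x(frame_lr)
print(frame_hr.shape)   # (1080, 1920)
```

A single linear layer obviously cannot reconstruct real detail the way a deep trained network can; the point is only that the per-frame work is dominated by matrix multiplications, which is why offloading it to Tensor cores leaves the shader cores free.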