That doesn't change my point at all... Unless you happen to know different, it's basically the GPU equivalent of a paging file. The GPU ideally doesn't depend on that pool of system memory for calculations every frame, but it will when VRAM isn't sufficient. Feeding a GPU core from system memory is woefully inefficient; just look at how badly AMD's APUs are crippled by memory bandwidth. Their performance scales almost perfectly in proportion to memory speed, which isn't always the case with discrete GPUs.

Of course NVMe drives don't rival DRAM bandwidth; I never said they would. But NVMe drives are already fast enough to be bottlenecked by the x4 links many of them sit on, which means an NVMe drive can demand at least 25% of an x16 slot's bandwidth. That's significant. So if your GPU has on-board storage, that spares those lanes for other purposes, or, in the case of my argument, spares them to the point where they're no longer necessary.
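To put rough numbers on the 25% claim, here's a quick back-of-the-envelope sketch assuming PCIe 3.0 link rates (the `slot_bandwidth` helper is just illustrative, not from any real API):

```python
# Rough PCIe bandwidth arithmetic (illustrative; PCIe 3.0 figures assumed).
# PCIe 3.0 runs at 8 GT/s per lane with 128b/130b encoding,
# giving roughly 0.985 GB/s of usable bandwidth per lane per direction.
PER_LANE_GBPS = 8 * (128 / 130) / 8  # ~0.985 GB/s per lane

def slot_bandwidth(lanes, per_lane=PER_LANE_GBPS):
    """Approximate one-direction bandwidth of a PCIe 3.0 link in GB/s."""
    return lanes * per_lane

x4 = slot_bandwidth(4)    # typical NVMe drive link
x16 = slot_bandwidth(16)  # typical GPU slot
print(f"x4 ~ {x4:.2f} GB/s, x16 ~ {x16:.2f} GB/s")
print(f"an x4 link is {x4 / x16:.0%} of an x16 slot")
```

The exact per-lane figure changes with each PCIe generation, but the lane-count ratio is the point: four lanes is a quarter of sixteen regardless of generation.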