Sure, and you do point out a valid worst-case scenario where the bandwidth is easily saturated. But which can, exactly, are you implying is being kicked down the road? The argument in this thread is about whether consumer-grade hardware will need x16 lanes in the future, and if technology trends are anything to go by, it won't. It's not that your scenario isn't valid, it's just so extreme that it's not really relevant to this discussion. That's like a pickup truck trying to tow a fully-loaded freight train - even if you wait for newer, better trucks, it's never going to be the right vehicle for the job.

The whole reason people are arguing with me in the first place is that they think "well, we need something to pick up the slack when we run out of VRAM". PCIe is the easiest way out, but it's not the right way out. It's only a sensible solution for low-end hardware, which we've already established doesn't demand that much bandwidth.

I don't know enough about your workload to say whether another approach would work. But if your workload can be split into batches that don't soak up all your VRAM (or at least not too much of it), that might be a better choice. That's basically how certain BOINC projects work. Take milkyway@home, for example, where each work unit is a small chunk contributing toward the same grand simulation. Each work unit uses a minimal amount of VRAM, but there's no way you could run the whole simulation on a single PC because you'd soak up all the memory immediately. Running your workload in chunks that don't saturate your PCIe lanes ought to complete in less time than running the whole thing in one batch and spilling over the bus.

If VRAM didn't need to be cooled, I would really like to see GPUs with memory slots, and I'm sure they'd be cheaper to produce too.
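To make the batching idea concrete, here's a minimal sketch in plain Python. Everything in it is hypothetical - the `VRAM_BUDGET` number, the `process_chunk` stand-in for a real GPU kernel - it just shows the shape of the approach: size each chunk to fit in device memory so nothing ever spills over PCIe.

```python
# Hypothetical sketch of chunked processing. The budget and the
# "kernel" below are made up for illustration; a real version would
# size chunks against actual free VRAM and launch real GPU work.

VRAM_BUDGET = 1024  # pretend units of device memory per chunk

def process_chunk(chunk):
    # Stand-in for the actual GPU kernel; here we just sum the values.
    return sum(chunk)

def run_in_chunks(data, item_size=1):
    """Process `data` in pieces small enough to fit the VRAM budget."""
    chunk_len = max(1, VRAM_BUDGET // item_size)
    total = 0
    for start in range(0, len(data), chunk_len):
        total += process_chunk(data[start:start + chunk_len])
    return total

# Same final answer as one giant batch, but no single step ever
# needs more than VRAM_BUDGET items resident at once.
print(run_in_chunks(list(range(10_000))))
```

The trade-off is a little scheduling overhead per chunk, but since each chunk stays resident in VRAM for its whole lifetime, you never pay the much larger cost of streaming working data across PCIe mid-computation.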