No, that's not how it works. You mount the drives over the network; the SAN/NAS lives in a centralized server room. For most workstations, Gigabit Ethernet is all the performance you need. For more, you just add a 10GbE add-in card, which is relatively cheap (<$300) and needs only 4-8 PCIe lanes for two ports. That still leaves more than enough lanes for a dGPU if a specific workstation requires one. Unless you're running a render farm, a mining operation, or some very unusual setup that needs lots of add-in cards (AICs), you generally don't need that many PCIe lanes. Yes, it looks annoying on paper, but even for most enthusiasts the difference is actually quite trivial.
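
A rough sketch of the bandwidth math behind the "4-8 lanes is plenty" claim, using assumed per-lane throughput for PCIe 3.0 (~7.88 Gb/s after 128b/130b encoding overhead):

```python
import math

# Assumed figures: PCIe 3.0 lane throughput after encoding overhead,
# and 10GbE line rate per port.
PCIE3_LANE_GBPS = 7.88
ETH_10G_GBPS = 10.0

def lanes_needed(ports: int, lane_gbps: float = PCIE3_LANE_GBPS) -> int:
    """Minimum whole PCIe 3.0 lanes to carry `ports` 10GbE links at line rate."""
    return math.ceil(ports * ETH_10G_GBPS / lane_gbps)

print(lanes_needed(2))  # two 10GbE ports -> 20 Gb/s -> 3 lanes in theory
```

In practice cards come in x4 or x8 form factors, so a dual-port NIC in a x4 slot still has headroom, and a x16 GPU plus a x4 NIC fits comfortably in a mainstream platform's lane budget.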