Sorry, but DDR3/DDR4 system RAM gives the CPU practical bandwidths of roughly 25-35 GB/s, so the ~15 GB/s from a PCIe 3.0 x16 link won't help much with connecting HBM to the CPU... even the upcoming PCIe 4.0 spec at x16 won't reach DDR4-3600 speeds; it will land closer to DDR4-3200. So there's no benefit in letting the CPU use a dGPU adapter's memory. ...at the moment.
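The comparison above is just arithmetic on theoretical peaks, so it's easy to sanity-check. A quick sketch (helper names are my own, and these are raw link/bus ceilings, not measured throughput):

```python
# Theoretical peak bandwidths, in GB/s (decimal, 1e9 bytes).
def ddr_bw(mt_per_s, channels=2, bus_bytes=8):
    """DDR peak: transfers/s x 8-byte bus width x channel count."""
    return mt_per_s * 1e6 * bus_bytes * channels / 1e9

def pcie_bw(gt_per_s, lanes=16, encoding=128 / 130):
    """PCIe peak per direction: line rate x encoding efficiency x lanes."""
    return gt_per_s * 1e9 * encoding / 8 * lanes / 1e9

print(f"DDR3-1600 dual channel: {ddr_bw(1600):5.1f} GB/s")  # 25.6
print(f"DDR4-2400 dual channel: {ddr_bw(2400):5.1f} GB/s")  # 38.4
print(f"PCIe 3.0 x16:           {pcie_bw(8):5.1f} GB/s")    # ~15.8
print(f"PCIe 4.0 x16:           {pcie_bw(16):5.1f} GB/s")   # ~31.5
```

So even a full gen 4 x16 link (~31.5 GB/s per direction) only roughly matches dual-channel DDR4 system RAM, which is the point being made.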
"ATM " So pcie3 is ~ 1 GBps ea. lane - thats a tough number to find IMO. I wonder if it can be exceeded under ideal conditions? Raven ridge apuS will probably have 512GBps hbm2 vram mounted ~ directly on the core unit (ccx), mere millimeters from the cpu/apu, connected to pcie3 lanes embedded in the cpu wafer (ala ryzen). All 3 components (cpu/gpu/vram) separated by ~120mm max trace lengths (cpu & gpu ~240mm ea. as i recall). A discrete pcie3 GPU would have traces to the cpu of 5-10x that? u would think that would allow bumped comms speeds for the apu? It would have far fewer than 16 lanes, but given the tight integration of an APU, amd have no need of rigidly adhered to pcie3 standards for intra APU communication
The interesting thing to watch will be whether the Infinity Fabric bandwidth in that case is tied to the HBM bandwidth or to main memory. If it's the former, the iGPU performance will be amazing.