Do all motherboards change from PCI-E x16 to 2x PCI-E x8 when you install a second graphics card?

Discussion in 'Processors and motherboards Intel' started by Englishlad, Nov 28, 2018.

  1. Englishlad

    Englishlad Member Guru

    Messages:
    142
    Likes Received:
    0
    GPU:
    MSI NVIDIA GTX 970
    I had a look at the specs for some motherboards recently and I noticed that all the boards that I looked at, even the very expensive ones, couldn't run two PCI-E x16 slots at full speed at the same time. I expect this is due to bandwidth limitations. Is this the case for all motherboards, and what is the implication for the performance of graphics cards that are installed into the slots? The spec sheet for my aging motherboard (GA-X58A-UD3R - which is still fine by the way) doesn't cover the topic, but the slots in that board are PCI-E v2. Maybe only PCI-E v3 is affected by this? Thanks in advance.
     
  2. fantaskarsef

    fantaskarsef Ancient Guru

    Messages:
    12,666
    Likes Received:
    5,094
    GPU:
    2080Ti @h2o
    Yes, this is the usual case. I think there are mainboards that offer true 2x x16 PCI-E, but you have to specifically look at them. They are out there.
     
  3. D3M1G0D

    D3M1G0D Ancient Guru

    Messages:
    2,068
    Likes Received:
    1,341
    GPU:
    2 x GeForce 1080 Ti
    HEDT boards can use two 16x slots at the same time (e.g., X399 for AMD, X299 for Intel). Consumer boards are typically limited to just one 16x slot.

    Speed isn't much of an issue in most apps (8x should be plenty, at least for current-gen GPUs), and SLI isn't really worth it anymore. For regular consumers, there is rarely a need for two 16x slots.
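The "8x should be plenty" point can be sanity-checked with back-of-the-envelope arithmetic from the published per-generation transfer rates and encoding overheads (5 GT/s with 8b/10b for Gen2, 8 GT/s with 128b/130b for Gen3). A minimal sketch; the helper name is made up for illustration:

```python
# Rough per-direction PCIe link bandwidth from generation and lane count.
# (transfer rate in GT/s, encoding efficiency) per PCIe generation
PCIE_GENS = {2: (5.0, 8 / 10), 3: (8.0, 128 / 130), 4: (16.0, 128 / 130)}

def pcie_bandwidth_gbs(gen, lanes):
    """Approximate usable bandwidth in GB/s, per direction."""
    rate, eff = PCIE_GENS[gen]
    return rate * eff * lanes / 8  # GT/s * efficiency -> Gb/s, /8 -> GB/s

print(round(pcie_bandwidth_gbs(3, 16), 2))  # Gen3 x16: ~15.75 GB/s
print(round(pcie_bandwidth_gbs(3, 8), 2))   # Gen3 x8:  ~7.88 GB/s
print(round(pcie_bandwidth_gbs(2, 16), 2))  # Gen2 x16: ~8.0 GB/s
```

Note that a Gen3 x8 link still offers roughly the same bandwidth as the Gen2 x16 slots on the OP's X58 board, which is one reason the drop to x8 rarely shows up in game benchmarks.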
     
  4. pimpernell

    pimpernell Master Guru

    Messages:
    482
    Likes Received:
    50
    GPU:
    ASUS RTX 2070Super

  5. LNCPapa

    LNCPapa Master Guru

    Messages:
    421
    Likes Received:
    17
    GPU:
    2xEVGA 1080 Ti FTW3
    This is exactly why you see us talking about PCIe lanes... HEDT boards support far more of them. Consider that a current GPU can use up to 16 and an NVMe drive can use up to 4, and capture cards and discrete sound cards take a couple more. It adds up quickly when you start building a beast of a machine with, for instance, a couple of NVMe drives, SLI GPUs, and a capture card - all fairly typical for a streaming/capture rig.
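The lane budget described above is simple addition. A quick sketch, where the per-device lane counts and the two example CPU budgets (16 for a mainstream part, 44 for an HEDT part) are illustrative assumptions, not specs for any particular chip:

```python
# Back-of-the-envelope PCIe lane budget for the streaming-rig example above.
devices = {"GPU 1": 16, "GPU 2": 16, "NVMe 1": 4, "NVMe 2": 4, "capture card": 4}
needed = sum(devices.values())

for cpu, budget in {"mainstream CPU": 16, "HEDT CPU": 44}.items():
    verdict = "fits" if needed <= budget else "exceeds the budget"
    print(f"{needed} lanes wanted vs {budget} from the {cpu}: {verdict}")
```

In practice some of those devices hang off chipset lanes rather than CPU lanes, but the point stands: a mainstream CPU's lane budget runs out fast.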
     
  6. jura11

    jura11 Ancient Guru

    Messages:
    2,631
    Likes Received:
    698
    GPU:
    RTX 3090 NvLink
    Older HEDT platforms like X99, with CPUs such as the 5930K, 5960X, 6850K, or 6900K, plus the Xeons, have 40 PCIe lanes.

    On my older X99 Extreme6 I ran 3 GPUs, all at x16; with my current Extreme WS my GPUs run at x16/x8/x8.

    There are also boards with PLX chips, but current Nvidia support for PLX is non-existent and you will get a BSOD when you use more than 4 GPUs in Win10 and Octane.

    Hope this helps

    Thanks, Jura
     
  7. Wolf9an9

    Wolf9an9 Member Guru

    Messages:
    140
    Likes Received:
    9
    GPU:
    Radeon RX 6800XT
    My X58 Asus P6T Deluxe can run 2 PCIe slots at full x16 speed.
    I still use this computer every day and it still copes very well.
     
  8. Englishlad

    Englishlad Member Guru

    Messages:
    142
    Likes Received:
    0
    GPU:
    MSI NVIDIA GTX 970
    Thanks for the replies. Do both cards immediately slow down to x8, or is it all done dynamically depending on where the bandwidth is best used? I suppose the lanes have to be allocated in groups of 1, 2, 4, 8, or 16? You can't have one card running at x12 and another at x4?
     
  9. LNCPapa

    LNCPapa Master Guru

    Messages:
    421
    Likes Received:
    17
    GPU:
    2xEVGA 1080 Ti FTW3
    I'm fairly certain that the number of lanes in use by a device has to be a power of 2, so no, you won't see x12 + x4 on two GPUs. It's also not dynamic. It's not necessarily x8 on both GPUs either - refer to your motherboard manual to be sure you're using the appropriate slots for the best performance. On some older motherboards only certain slots could reach x8 or x16, and if you populated the wrong ones you could end up running x4 + x8 or some other mismatch. That would sometimes become an issue once we started using rigid bridges for SLI.
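The width negotiation described above can be modelled in a few lines. This is a deliberately simplified sketch of link training (the real process is electrical and happens at boot), assuming both sides settle on the widest standard width they share:

```python
# PCIe links train to a standard width: x1, x2, x4, x8, or x16 - never x12.
SUPPORTED_WIDTHS = (16, 8, 4, 2, 1)

def negotiated_width(slot_wiring, card_max):
    """Widest standard width both the slot wiring and the card support.

    A toy model of link training, not real firmware behaviour.
    """
    for width in SUPPORTED_WIDTHS:
        if width <= slot_wiring and width <= card_max:
            return width
    raise ValueError("no common supported width")

print(negotiated_width(16, 16))  # -> 16: full-width slot, x16 card
print(negotiated_width(8, 16))   # -> 8:  slot only wired for x8
print(negotiated_width(12, 16))  # -> 8:  x12 isn't a standard width
```

On Linux you can see the real negotiated width in the `LnkSta` line of `sudo lspci -vv` output, alongside the slot's maximum in `LnkCap`.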
     
