Gigabyte Thunderbolt 3 - RX 580 Gaming Box

Discussion in 'Frontpage news' started by Hilbert Hagedoorn, Mar 19, 2018.

  1. Hilbert Hagedoorn

    Hilbert Hagedoorn Don Vito Corleone Staff Member

    Messages:
    48,541
    Likes Received:
    18,853
    GPU:
    AMD | NVIDIA
    Gigabyte releases their RX 580 Gaming Box: same box, different graphics card. Basically, if you have a laptop with a Thunderbolt 3 connector, you can hook up an external graphics card solution. Giga...

    Gigabyte Thunderbolt 3 - RX 580 Gaming Box
     
  2. spectatorx

    spectatorx Guest

    Messages:
    858
    Likes Received:
    169
    GPU:
    Radeon RX580 8GB
    Last edited: Mar 19, 2018
  3. schmidtbag

    schmidtbag Ancient Guru

    Messages:
    8,016
    Likes Received:
    4,396
    GPU:
    Asrock 7700XT
    I'd rather pay less for weaker GPUs. I've seen benchmarks of an external 1080, and the limited PCIe bandwidth gave it underwhelming performance. A 1060 or an RX 570 is pretty much as good as you can get without frequently wasting the GPU's potential. That's not to say something like a 1070 couldn't perform better, but in a lot of titles it won't be faster than a 1060.
     
  4. Fox2232

    Fox2232 Guest

    Messages:
    11,808
    Likes Received:
    3,371
    GPU:
    6900XT+AW@240Hz
    Well, if the Box is meant for ultrabooks, then a 15W TDP CPU with 2C/4T is quite a limiting factor. If we already had a faster USB standard, I would like such a Box to support USB. A GPU like a 1060 or RX 570 would probably be quite OK even at USB's 10Gbps, as long as it had sufficient VRAM to minimize caching.
    USB 3.2, which links 2x USB 3.1 for 20Gbps, would serve well even on this RX 580 box.
     

  5. Loophole35

    Loophole35 Guest

    Messages:
    9,797
    Likes Received:
    1,161
    GPU:
    EVGA 1080ti SC
    But it still says “Gaming”, so if this is due to the GPP, it appears that the word “gaming” is not off limits for AMD products.
     
  6. schmidtbag

    schmidtbag Ancient Guru

    Messages:
    8,016
    Likes Received:
    4,396
    GPU:
    Asrock 7700XT
    I think what you said depends heavily on whether you're plugging a display directly into the eGPU or trying to use your laptop's display. If you are expecting to use the laptop's built-in display (especially if it is 1080p or higher) with high-res textures, then to my understanding that soaks up a lot of your usable bandwidth, to the point where a 580 would be bottlenecked.

    Keep in mind that in a lot of cases, a GTX 1080 (non-Ti) starts to see performance losses when going from x8 PCIe 3.0 lanes down to x4 lanes at various resolutions, when using a display connected straight to the GPU. PCIe 3.0 with x4 lanes is roughly 32Gbps (about 4GB/s). Even if we were to dramatically generalize that an RX 580 uses proportionately less bandwidth than a 1080 (accounting for its lower performance), I don't think it is possible to consistently take advantage of its processing power over USB 3.2, regardless of which display is used. But I could be wrong.
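    For reference, here is a minimal Python sketch (purely illustrative, not from the review) of the usable x4 bandwidth per PCIe generation after line encoding, with protocol overhead ignored, next to the external links discussed in this thread:

    # Rough usable bandwidth per PCIe lane after line encoding
    # (8b/10b for PCIe 1.x/2.0, 128b/130b for PCIe 3.0), in Gbit/s.
    PER_LANE_GBPS = {
        "PCIe 1.1": 2.5 * 8 / 10,     # 2.0 Gbps
        "PCIe 2.0": 5.0 * 8 / 10,     # 4.0 Gbps
        "PCIe 3.0": 8.0 * 128 / 130,  # ~7.88 Gbps
    }

    for gen, per_lane in PER_LANE_GBPS.items():
        gbps = per_lane * 4  # x4 link, protocol overhead ignored
        print(f"{gen} x4: {gbps:4.1f} Gbps ({gbps / 8:.1f} GB/s)")

    # External links mentioned in the thread, for comparison:
    print("Thunderbolt 3 : 40.0 Gbps link rate")
    print("USB 3.2 (2x2) : 20.0 Gbps link rate")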
     
  7. Fox2232

    Fox2232 Guest

    Messages:
    11,808
    Likes Received:
    3,371
    GPU:
    6900XT+AW@240Hz
    While true, that bottleneck may still be very acceptable as long as the graphics card does not need to load a bunch of textures from system memory every frame. For years, people have used PCIe 2.0 and 3.0 x1 slots in notebooks (meant for Wi-Fi and such) for external PCIe GPUs. Ugly, but a relatively cheap solution. And it somewhat works.

    I tried to check actual IO utilization, but HWiNFO64 always shows this value as 0%. So I have a lovely configuration for a graph displayed over time and nothing to show :/
    Maybe someone with an nV card or an older driver/OS can try, as it may be some bug with the Insider build I have.
     
    Last edited: Mar 19, 2018
  8. schmidtbag

    schmidtbag Ancient Guru

    Messages:
    8,016
    Likes Received:
    4,396
    GPU:
    Asrock 7700XT
    True, but that bottleneck is exactly why I feel a 1060 or 570 makes for a better option - you pay less without really any performance loss in most cases. Meanwhile, weaker GPUs use less power. This is important to factor in, since the power bricks get disproportionately more expensive, large, and hot as you increase the wattage. From what I recall, the price really starts to go up fast once you breach 80W. If you go for an RX 550 or GTX 1050 Ti, those are efficient enough to not need any PCIe power connectors (and therefore should be sub-75W), and a power brick for that should be relatively cheap. They're also slow enough that, as long as you lower texture details, they should perform well on USB 3.2. They still might be a little too much for PCIe 3.0 x1 slots, though.

    I'm not sure what your level of interest in all of this is, but there is something I could recommend to you that maybe isn't elegant but is very cheap. There are products on eBay that convert M.2 slots into x4 PCIe slots. I think they cost something like $5 USD. You can also get M.2 riser cables. Combine these and buy some 12V power brick and you could create your own eGPU solution for laptops for about 1/10 the price of these other eGPU enclosures. Since you're feeding directly from PCIe, there is no latency loss. The downsides are the ugly appearance, the fragility, and the lack of hot-swapping.

    What were you testing? If you're not really doing anything, PCIe usage tends to stay from 0-1%.
    BTW here's what I was referencing earlier:
    https://www.techpowerup.com/reviews/NVIDIA/GeForce_GTX_1080_PCI_Express_Scaling/
    They seem to do one of these tests every other year. Pretty interesting and comprehensive - I don't know of anyone else who does tests like this.
     
  9. Killian38

    Killian38 Guest

    Messages:
    312
    Likes Received:
    88
    GPU:
    1060
    Don't waste your money. If this is your only option, you need to rethink your life decisions.
    Harsh but true.
     
  10. Fox2232

    Fox2232 Guest

    Messages:
    11,808
    Likes Received:
    3,371
    GPU:
    6900XT+AW@240Hz
    It's just a generic technology interest. I saw those PCIe 3.0 x1 cables and the hacks people did to get them working. An M.2 slot to PCIe 3.0 x4 adapter would in most cases have no issue. Except I think that native PCIe in both cases has trouble with turning the card ON/OFF; hotswap behavior is unusual for PCIe in the consumer segment.

    Anyway, the test you linked gives the following info:
    PCIe 1.1 x4 (1GB/s = 8Gbps) loses around 33% of GTX 1080 performance in the worst-case scenario; best case (Doom), a 19% loss.
    PCIe 2.0 x4 (2GB/s = 16Gbps) loses around 15% of GTX 1080 performance in the worst-case scenario; best case (TR), a 5% loss.
    (And those are at 1080p gaming, where a mobile CPU is more likely to be the bottleneck "instead".)

    So I would say that USB 3.2 would be quite acceptable at 20Gbps.
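    As a minimal sketch, and assuming purely for illustration that the worst-case loss scales roughly linearly with link bandwidth (the TechPowerUp data doesn't claim that), here is where a 20Gbps link would land based on the two figures above:

    # Very rough extrapolation of the worst-case GTX 1080 loss figures
    # quoted above (8 Gbps -> ~33% loss, 16 Gbps -> ~15% loss) to other
    # link speeds. Linear scaling is an assumption, not measured data.
    POINTS = [(8.0, 33.0), (16.0, 15.0)]  # (link bandwidth in Gbps, % performance lost)

    def estimated_loss(link_gbps: float) -> float:
        (x0, y0), (x1, y1) = POINTS
        slope = (y1 - y0) / (x1 - x0)
        return max(0.0, y0 + slope * (link_gbps - x0))

    for gbps in (8, 16, 20, 32):
        print(f"{gbps:>2} Gbps link -> ~{estimated_loss(gbps):.0f}% worst-case loss")

    By that crude estimate, a 20Gbps link loses only a few percent in the worst case, which is the point being made here.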
     

  11. schmidtbag

    schmidtbag Ancient Guru

    Messages:
    8,016
    Likes Received:
    4,396
    GPU:
    Asrock 7700XT
    I've done a few experiments myself with x1 slots and haven't had any trouble with them (in terms of turning the card on or off). But I've only tested such situations in Linux. I intend to try some tests with one of those M.2 to x4 converters, but I'm not sure when I'll get around to it. Like you, I'm more interested in the technology than actually having a need for it.

    Don't forget though - there's also the bandwidth of transferring the video signal back to the laptop, if you intend to use its display. That 5% loss in the best-case scenario will increase significantly. But as stated before, if you intend to use a separate monitor, then yes, USB 3.2 ought to suffice.

    Another thing I forgot to consider is that there's some overhead from the USB translation layer, which will result in some performance loss. This is why, for example, there aren't any external USB 3.1 drives that can meet or exceed SATA III, despite the 4Gbps lead. To my understanding, Thunderbolt is a lower-level connection than USB and doesn't suffer the same overhead.

    Something I'm not too sure about is how bandwidth is allocated by the device. More specifically, if you have a PCIe 2.0 GPU, to my understanding, the GPU gets more bandwidth out of a 2.0 x16 slot than a 3.0 x8 slot, even though the total bandwidth of the two slots is roughly the same. From benchmarks I've seen, it seems Thunderbolt will actually split the bandwidth up into multiple lanes, but I'm not sure if a USB adapter will. If it can, that'd be awesome.
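    For what it's worth, the raw link numbers do back up the "roughly the same" part; a quick sketch (line encoding accounted for, protocol overhead ignored):

    # Usable bandwidth of the two slot configurations mentioned above,
    # after line encoding (8b/10b for PCIe 2.0, 128b/130b for PCIe 3.0).
    def pcie_gbps(gt_per_s: float, encoding: float, lanes: int) -> float:
        return gt_per_s * encoding * lanes

    print(f"PCIe 2.0 x16: {pcie_gbps(5.0, 8 / 10, 16):.1f} Gbps")
    print(f"PCIe 3.0 x8 : {pcie_gbps(8.0, 128 / 130, 8):.1f} Gbps")

    The totals are near-identical; whether a PCIe 2.0 device can actually negotiate the full width behind a Thunderbolt or USB bridge is the open question.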

    Anyway, interesting discussion. I personally find this stuff fun to think about - I hope I'm not coming across as annoying or arrogant, but I find you're good to bounce ideas off of.
     
  12. KissSh0t

    KissSh0t Ancient Guru

    Messages:
    13,950
    Likes Received:
    7,770
    GPU:
    ASUS 3060 OC 12GB
    Fit a whole gaming PC inside that box and then I'll be excited.
     
  13. Fox2232

    Fox2232 Guest

    Messages:
    11,808
    Likes Received:
    3,371
    GPU:
    6900XT+AW@240Hz
    A 1080p 60Hz image signal from the graphics card to a monitor takes around 4.5Gbps. That's with blanking, which is not needed internally, and presuming that no compression is used. Raw internal data at 1080p 60Hz 24bit needs only about 3Gbps, since no blanking is needed.
    At worst that turns USB 3.2's 20Gbps into something around PCIe 2.0 x4, so it is still very fine.
    Obviously, if it were used to drive some 4K internal display in a notebook, it would tank badly.
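    To put numbers on the internal-display case, a quick sketch of the raw, uncompressed frame-buffer bandwidth at a few resolutions (blanking and link encoding ignored):

    # Raw, uncompressed bandwidth needed to ship a rendered frame back to
    # the laptop's display, ignoring blanking and link encoding.
    def raw_gbps(width: int, height: int, bits_per_pixel: int = 24, hz: int = 60) -> float:
        return width * height * bits_per_pixel * hz / 1e9

    for name, w, h in [("1080p", 1920, 1080), ("1440p", 2560, 1440), ("4K", 3840, 2160)]:
        print(f"{name:>5} @ 60Hz, 24bpp: ~{raw_gbps(w, h):.1f} Gbps")
    # ~3.0, ~5.3 and ~11.9 Gbps respectively; the 4K case alone would eat
    # more than half of a 20Gbps USB 3.2 link before any game data moves.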

    Anyway, time to get the USB 4.0 standard out there into the wild, as that's supposed to be 100Gbps (~single-channel 1600MHz memory performance). At that bandwidth and latency, one could even develop a CPU-to-CPU connection for some kind of offloading between two PCs, as today's CPUs already have some USB interfaces integrated.
     
