AMD Uncovers a bit more on the X370 chipset

Discussion in 'Frontpage news' started by Hilbert Hagedoorn, Jan 6, 2017.

  1. Aura89

    Aura89 Ancient Guru

    Messages:
    8,127
    Likes Received:
    1,261
    GPU:
    -
I'm not sure someone understands the difference between PCI-Express 2.0 and 3.0, at least the significance of it. Most graphics cards don't even saturate 2.0, wtf else do you need that would be more intensive than a graphics card?

    Now, for SLI and CrossFire, sure, I get that, but I find it funny that most of the people complaining aren't even currently doing SLI or CrossFire... And even then, you're not likely to saturate it...

    Exactly, 100% agree on all fronts. I think people have this idea that "more is better, even if I can't fully use it"...

    But realistically, people are forgetting that if a motherboard manufacturer wants to put more SATA ports etc. on a motherboard than the chipset allows, they can, and will.

    Take 990FX; it's 5, almost 6 years old.
    It only technically supports:

    PCI-Express 2.0
    USB 2.0
    No support for m.2
    among other things

    Yet, there are motherboards out there such as the GIGABYTE GA-990FX-Gaming AM3+ AMD 990FX, which has:

    PCI-Express 3.0
    Not only USB 3.0, but also 3.1
    m.2 support absolutely
    among other things

    So to look at these chipset features and completely disregard what the motherboard manufacturers will do is a bit ridiculous.

    But I'll admit, I am confused. People are looking at this chipset and thinking it is so much worse than Intel's, yet I'm looking up Intel's chipsets and not seeing a whole lot of difference...
     
    Last edited: Jan 7, 2017
  2. rl66

    rl66 Ancient Guru

    Messages:
    2,665
    Likes Received:
    274
    GPU:
    Sapphire RX 580X SE
    You can connect SATA drives to the SATA Express ports too, so six ports total.
     
  3. BLEH!

    BLEH! Ancient Guru

    Messages:
    6,033
    Likes Received:
    132
    GPU:
    Sapphire Fury
    Exactly. Surprising what you can tack on with PCIe controllers. My old X58 rig would happily chug along for ages more if I could be bothered to add SATA3, USB3(.1), etc. etc.
     
  4. Humanoid_1

    Humanoid_1 Master Guru

    Messages:
    960
    Likes Received:
    66
    GPU:
    MSI RTX 2080 X Trio
    Yup, I'm lucky, my old X58 board has SATA3 and also USB 3 ports.

    I honestly have not seen any "need" to upgrade; I don't even bother overclocking it much, as I don't need more performance most of the time.

    Handbrake will appreciate an upgrade to a Zen based system though!

    I still will, though, mainly to use the newer high-performance SSD types.
    The power saving will be a nice bonus too ;)
     

  5. weewoo87

    weewoo87 Member

    Messages:
    13
    Likes Received:
    0
    GPU:
    Radeon HD 5850
    Confused...

    X300 SFF doesn't have a YES in the column for Crossfire/SLI. Yet it says it has dual PCIe slots?
     
  6. ChicagoDave

    ChicagoDave Member

    Messages:
    45
    Likes Received:
    2
    GPU:
    EVGA 1060 / EVGA 970
    I fully understand the difference between PCIe 2.0 and PCIe 3.0. Been building computers since the 386 days, my friend. And I laid out the exact use case above your quote:

    GPU at full x16 via processor
    x4 3.0 NVMe via processor
    x8 2.0 via chipset

    I don't do SLI/Crossfire, but I do plan to have my graphics card running in the CPU slot, using the full 16 lanes of PCIe 3.0. I then plan to have two NVMe drives - one for the OS and another for transcoding, game storage, etc. If I'm interpreting the slide correctly, the CPU does provide PCIe 3.0 x4, which is great since a Samsung 960 Pro almost saturates that link already. However, my second one will be relegated to the chipset, where it doesn't even get 3.0 at all - it'll be running at PCIe 2.0 x4 at best.
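    For context, the raw link numbers (a back-of-the-envelope sketch; these are the theoretical per-lane maxima, real-world throughput runs a bit lower):

    ```python
    # Theoretical PCIe bandwidth per lane (one direction), in MB/s:
    # PCIe 2.0 runs at 5 GT/s with 8b/10b encoding   -> 500 MB/s per lane
    # PCIe 3.0 runs at 8 GT/s with 128b/130b encoding -> ~985 MB/s per lane
    PER_LANE_MBS = {"2.0": 5000 * 8 / 10 / 8, "3.0": 8000 * 128 / 130 / 8}

    def link_bandwidth_mbs(gen, lanes):
        """Theoretical one-direction bandwidth of a PCIe link, in MB/s."""
        return PER_LANE_MBS[gen] * lanes

    print(round(link_bandwidth_mbs("3.0", 4)))  # 3938 -- the CPU's NVMe link
    print(round(link_bandwidth_mbs("2.0", 4)))  # 2000 -- the same drive behind the chipset
    ```

    So a 960 Pro's ~3.5 GB/s sequential reads fit under the CPU's 3.0 x4 link, but would be capped at roughly 2 GB/s on a chipset 2.0 x4 link.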

    You guys can say "you won't notice it, only synthetic benchmarks will push it that hard, blah blah blah". To me, it's indefensible that AMD's next-gen architecture, the one that's supposed to blow away Broadwell-E, is still using PCIe 2.0 at all. It's crazy how many of you are fine with this... hell, PCIe 4.0 is supposed to debut later this year (it was first announced in 2011).

    Maybe you don't use the bandwidth *right now*, but what about 2-3 years from now? When I build a system, the motherboard is generally the last thing to be replaced (at that point I just build another). So this is hobbled from the get-go... I don't have a single PCIe 3.0-capable lane from the chipset at all. If Intel did this you'd be screaming "lazy Intel", but when AMD does it, it's "playing smartly". Give me a break. I want AMD to succeed; I'm not cheering for their demise. Just calling a spade a spade - this is crap. I really hope they have a second-gen chipset waiting in the wings, because PCIe has taken on a lot more importance with NVMe drives becoming the new enthusiast drive to get.

    I will give credit where credit is due, though - AMD does have more than just 16 lanes on the CPU (PCIe 3.0 x16 plus the I/O PCIe 3.0 x4). Intel's consumer chips require all NVMe drives to go over the chipset, splitting DMI bandwidth, unless you don't have a graphics card. I just wish AMD's chipset was up to par with the CPU...
     
  7. Aura89

    Aura89 Ancient Guru

    Messages:
    8,127
    Likes Received:
    1,261
    GPU:
    -
    Have you looked at Intel's chipsets? Has anyone who is confused by these chipset features actually looked at Intel's chipsets?

    Again, you're going off this idea that the motherboard manufacturers won't do their own thing. By your logic, no 990FX board would ever have PCI-Express 3.0, yet that is not correct, so...?

    You're also going off the idea that what doesn't matter, does matter, and that the future may mean that more bandwidth is required on something that still doesn't matter, but apparently matters.

    Similar to how people have this idea that the more RAM they have, the better, for the future and for their overall performance.
     
  8. Loophole35

    Loophole35 Ancient Guru

    Messages:
    9,781
    Likes Received:
    1,135
    GPU:
    EVGA 1080ti SC
    Actually, what comes directly from the chipset and CPU is very important. I personally don't want third-party controllers being used for simple things like USB, and I sure as hell don't want tacked-on PCI. And if you think the added-on PCIe 3.0 on 990FX boards was anywhere close to actual PCIe 3.0 speeds, I have swampland to sell you.
     
  9. Aura89

    Aura89 Ancient Guru

    Messages:
    8,127
    Likes Received:
    1,261
    GPU:
    -
    Well, they are...so..yes?

    If you're going to throw out a claim like that, you're going to have to provide proof, as currently that's the most illogical statement I have heard in this thread.

    But again, you talk about USB - or let's go with SATA as well - AMD chipset vs. Intel:

    Intel Z270, released January 3rd, 2017:
    No USB 3.1, 10 USB 3.0, 4 USB 2.0 - 14 USB ports total
    SATA: 6 ports

    AMD X370 with Ryzen processor:
    2 USB 3.1, 10 USB 3.0, 6 USB 2.0 - 18 USB ports total
    SATA: 6 ports

    You just posted about USB, and there is no problem here; it even supports more USB ports, and it has official support for USB 3.1, which MIGHT come to Intel's 300-series chipset - but considering the latest 200 series was just released, and the previous chipset release before that was in 2015, you could be waiting 2 years for that.

    Others have posted concerns about SATA ports, and yet they are the same. The only one I have seen with more than 6 officially supported SATA ports is X99, and that's an outdated chipset.

    In regards to PCI-Express, I'm getting confused here.

    In the slides, it states:
    PCI-Express 2.0 x8
    PCI-Express 3.0 x16
    I/O PCI-Express (possible) x2 (are these 3.0?)
    2 SATA Express (PCIe 3.0), aka PCI-Express 3.0 x4 (each SATA Express port has 2 PCI-Express 3.0 lanes)

    So does that mean an X370 with a Ryzen processor would have 20 PCI-Express 3.0 lanes and 8 PCI-Express 2.0 lanes, with a possible additional x2 of PCI-Express 2.0/3.0 lanes (30 lanes possible total)?
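    Written out, my lane math looks like this (just my reading of the slide - the extra x2 and its generation are still uncertain):

    ```python
    # Lane counts as read off the X370 + Ryzen slide (my interpretation).
    cpu_graphics_30 = 16      # PCI-Express 3.0 x16 from the CPU
    sata_express_30 = 2 * 2   # 2 SATA Express ports, 2 PCIe 3.0 lanes each
    chipset_20 = 8            # PCI-Express 2.0 x8 from the chipset
    possible_io = 2           # the "possible" I/O x2, generation unclear

    lanes_30 = cpu_graphics_30 + sata_express_30
    print(lanes_30)                              # 20 PCIe 3.0 lanes
    print(lanes_30 + chipset_20 + possible_io)   # 30 lanes possible in total
    ```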
     
    Last edited: Jan 8, 2017
  10. schmidtbag

    schmidtbag Ancient Guru

    Messages:
    5,578
    Likes Received:
    2,081
    GPU:
    HIS R9 290
    I agree with Aura89 here. Unless you go with a Xeon or one of the "extreme edition" CPUs, Intel doesn't really have better offerings. But even then, that's a ridiculous comparison - the only Intel setups that are distinctly better are going to be at least $700 more expensive (assuming the AMD setups are roughly $300). And again - what are you going to use that will realistically saturate all of this bandwidth?

    No, it isn't indefensible, because you haven't defended why anyone needs anything better. As stated before, there are still no GPUs out there that saturate PCIe 2.0 x16 bandwidth. With Vulkan and DX12 lowering PCIe usage, that saturation threshold will drop even further.
    To my understanding, the only reason PCIe 4.0 is being developed is USB-C and Thunderbolt. Considering how many devices can be daisy-chained, you need all the bandwidth you can get. But I'm willing to bet that we will not see a single product that saturates a PCIe 4.0 x16 link for at least 3 years, probably longer. I have my doubts we'll see anything saturate PCIe 3.0 in that amount of time.

    And in case you forgot, the CPU holds many of the PCIe lanes. Again, if you compare an i7 to a Xeon, you'll find the Xeon can have nearly triple the number of PCIe lanes, even though some motherboards support both models.
    And you don't need a single PCIe 3.0-capable lane from the chipset, and I don't know why you think you do. Remember - you're the one complaining about AMD's progress when they're basically matching Intel. You want to call a spade a spade, but you're clearly calling it a club.
     

  11. Aura89

    Aura89 Ancient Guru

    Messages:
    8,127
    Likes Received:
    1,261
    GPU:
    -
    I didn't notice this statement before.

    The X99 chipset, the most widely used enthusiast chipset from Intel as far as I can gather, only provides PCI-Express 2.0 x8, while the CPU provides the PCI-Express 3.0 lanes.

    Now, I understand, as I stated earlier, X99 is outdated, and there are newer chipsets that do provide PCI-Express 3.0. But X99 isn't "that old", and no one back then (2014) was complaining, even though PCI-Express 3.0 had been available since 2010. So I'm not sure where you get your idea that this is how it would be.
     
  12. -Tj-

    -Tj- Ancient Guru

    Messages:
    17,113
    Likes Received:
    1,900
    GPU:
    Zotac GTX980Ti OC
  13. Aura89

    Aura89 Ancient Guru

    Messages:
    8,127
    Likes Received:
    1,261
    GPU:
    -
    Interesting, the higher the resolution, the less the bandwidth matters. Kinda like how the higher the resolution, the less the CPU matters in many games.

    I figured that would be the opposite.

    Doom's 4K test boggles my mind. There's a performance difference between them all, but after PCI-Express 2.0 it really doesn't matter. What boggles my mind is that the testing (though I'm sure this is margin of error) shows PCI-Express 3.0 x4 performing best, lol?
     
  14. thatguy91

    thatguy91 Ancient Guru

    Messages:
    6,644
    Likes Received:
    98
    GPU:
    XFX RX 480 RS 4 GB
    It does make sense, in a way, that a higher resolution means the bus speed has less effect. Assuming the game textures etc. are the same (which they most likely are), the amount of data transferred over the bus remains relatively the same. Once the data is copied to the GPU, it is the GPU that does the rendering to 4K, so the limiting factor becomes the GPU, not the bus.
     
  15. Loophole35

    Loophole35 Ancient Guru

    Messages:
    9,781
    Likes Received:
    1,135
    GPU:
    EVGA 1080ti SC

  16. schmidtbag

    schmidtbag Ancient Guru

    Messages:
    5,578
    Likes Received:
    2,081
    GPU:
    HIS R9 290
    I think it makes sense. Regardless of the resolution, the GPU is still instructed to render the same thing by the CPU, only with more detail. In other words, the GPU has to work harder to produce the "same image", which slows it down and therefore it doesn't need as much PCIe bandwidth.

    At that point, all the results are within the margin of error; the 3.0 x4 result isn't actually the best. But supposing it was, I have one theory as to why it might perform better:
    I figure that in some situations, a CPU may work faster with fewer, faster lanes rather than more, slower lanes. In other words, 2.0 x8 could possibly take more effort for some CPUs than 3.0 x4. At 4K resolutions, the GTX 1080 isn't going to demand that much bandwidth, so 3.0 x4 is likely the sweet spot between not pushing the CPU too hard and supplying enough bandwidth.

    I could be totally wrong though.
     
  17. chispy

    chispy Ancient Guru

    Messages:
    8,926
    Likes Received:
    1,122
    GPU:
    RX 6900xt / RTX3090
    Well, for me, I am OK with the chipset features - and remember, board partners will add more features on top of that, be it a PLX chip for more PCIe lanes, or more USB and SATA ports.

    All I need is 2x SATA 6G for my 2 SSDs in RAID 0, plus 1x SATA 6G for my 3TB HDD backup and storage drive, plus 1x SATA 3G for my Blu-ray drive = I only need 4 SATA ports.

    Since I will be using only a single video card, all I need is one PCIe 3.0 x16 slot plus one PCIe 1.0 slot for my sound card, and that's it; I do not need much on a motherboard for a gaming and 24/7 rig.

    Be patient, guys - I'm sure there will be plenty of motherboards with lots of features, including added ones not coming directly from the chipset (e.g. ASMedia, PLX, etc.).
     
  18. PrMinisterGR

    PrMinisterGR Ancient Guru

    Messages:
    7,465
    Likes Received:
    494
    GPU:
    Sapphire 7970 Quadrobake
    There is a lot of confusion around AMD's new chipsets, and that's because a lot of the chipset functionality is now on the CPU itself. AnandTech did a piece on that (for Bristol Ridge APUs, so Zen will probably carry even more on itself), and it has some detail on how this works. All in all, you get more connectivity with it than with an equivalent 115x platform:

    CPU (in this case a Bristol Ridge APU, not Zen. Zen wouldn't have the display I/O, but it has 32 PCIe lanes in total, instead of 16):

    [IMG]

    Chipset (B350):

    [IMG]
     
    Last edited: Jan 9, 2017
  19. Loophole35

    Loophole35 Ancient Guru

    Messages:
    9,781
    Likes Received:
    1,135
    GPU:
    EVGA 1080ti SC
    AMD's own slide contradicts this though.

    [IMG]

    What I find horrendous is the lack of USB on this new platform. Only 4 USB 3.0 ports from the CPU, and if you get an SFF setup with an X300, you get no extra USB ports of any revision without a third-party controller (I don't like this because they require third-party drivers as well, so no worky out the boxy in most cases).
     
    Last edited: Jan 9, 2017
  20. schmidtbag

    schmidtbag Ancient Guru

    Messages:
    5,578
    Likes Received:
    2,081
    GPU:
    HIS R9 290
    So... what about this slide:
    cdn.wccftech.com/wp-content/uploads/2016/09/AMD-AM4-Chipset-Features.jpg

    Regardless, it may be referring to USB host ports, in which case the quantity is perfectly fine. Most people aren't aware that pretty much all of their USB ports (including ones provided by their own chipsets) are connected to a built-in hub. Generally speaking, for every 4 ports, there is 1 host. It varies - I've seen setups with 2, 3, and 5 ports per host.

    All that being said, having at least four total USB 3.X hosts is pretty good IMO.

    On a side note - what I mentioned is one of several reasons why you shouldn't do RAID with USB devices. If you're good at mapping out which USB ports belong to which host, you can also try maximizing the performance of your USB devices by plugging high-bandwidth devices into ports on separate hosts.
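    To put rough numbers on the sharing (a hypothetical topology - the port-to-host mapping varies by board, and on Linux you can inspect it with `lsusb -t`):

    ```python
    # A USB 3.0 (5 Gb/s) host controller has roughly 500 MB/s of usable
    # bandwidth; busy devices behind the same host's hub split it between them.
    USB3_HOST_MBS = 500

    def per_device_share_mbs(busy_devices_on_host):
        """Rough per-device bandwidth when several busy devices share one host."""
        return USB3_HOST_MBS / busy_devices_on_host

    print(per_device_share_mbs(1))  # 500.0 -- a fast SSD alone on its host
    print(per_device_share_mbs(4))  # 125.0 -- four busy drives behind one host
    ```

    Which is exactly why a RAID array over USB falls apart once its members land behind the same host.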
     
    Last edited: Jan 9, 2017
