AMD X390 and X399 chipset diagrams reveal HEDT information

Discussion in 'Frontpage news' started by Hilbert Hagedoorn, Mar 24, 2017.

  1. thatguy91

    thatguy91 Guest

    The PCI-E 4.0 spec is out; if bandwidth is a major issue, something high-end like this should look at it.
     
  2. JEFUK

    JEFUK Guest

    Messages:
    4
    Likes Received:
    0
    GPU:
    GTX770
    I'm probably the only person excited because I saw 1394 on the diagram! Sound "card"!

    I'm also the only person who likes a PCIe 2.0 slot on a board. Why? Because I have an 8 Gb Fibre Channel card that's 2.0 x4 in an x8 physical connector, and I don't want it pulling a 3.0 slot down to 2.0.

    I would love a dual-socket system with overclocking, if the silicon could do 4+ GHz reliably. Damn the power... and damn the power! I have, and will continue to have, OTT water cooling. I'm currently keeping 2x 147 W CPUs plus a GPU below 50 °C (peak) with my fans at Supermicro's minimum speed, and only by artificially loading everything (FurMark and Prime/folding) can I see temps reach 60-ish.

    On my current project, Ableton tells me I'm at around 70% on the two X5690 Xeons.

    So give me 150 W+ CPUs with lots of cores and a high clock speed. Please, pretty please.
     
  3. DeskStar

    DeskStar Guest

    Messages:
    1,307
    Likes Received:
    229
    GPU:
    EVGA 3080Ti/3090FTW
    Please.....PLEASE...PUHLLLLLLLLLEEEEEEEASE come out swinging with a killer price point and decimate the competition (Intel)....!!!

    All I have been waiting for is a "cheaper" enthusiast upgrade to my ever so ailing X79 setup. I would love an M.2 slot or two for the OS along with 64GB+ of RAM. Not to mention the MGPU support that I "OH SO LOVE!!"

    KILL THEM AT THE PRICE/PERFORMANCE RATIO!!! "DO IT!!!!!!"
     
  4. MegaFalloutFan

    MegaFalloutFan Maha Guru

    Messages:
    1,048
    Likes Received:
    203
    GPU:
    RTX4090 24Gb
    YES PLEASE, I'll take one; time to upgrade my X99 Sabertooth and 5820K @ 4.5 GHz to a 10-16 core machine.
    BUT I won't touch anything with less than quad-channel memory and 10x SATA3 ports; I got spoiled by Intel, and yes, I use all of my SATA ports.
     
    Last edited: Mar 25, 2017

  5. thatguy91

    thatguy91 Guest

    PCI-E 4.0 is backwards and forwards compatible; it's a signalling update. I suspect it wouldn't be as simple as plugging a PCI-E 4.0 Zen into an AM4 socket. I do suspect, however, that one of the advantages of the Zen+ successor's rumoured socket upgrade, AM4+, is that it will support PCI-E 4.0 (plus any new memory tech).

    This is relevant because the same would apply to X399/X390. Given the timeframe of release, you could have a PCI-E 4.0-compatible board and still run a PCI-E 3.0 (or 3.1) CPU for now, with the ability to use PCI-E 4.0 CPUs later. Considering the data requirements of such a processor, it would make sense to have the capability for higher transfer rates.
     
  6. user1

    user1 Ancient Guru

    Messages:
    2,746
    Likes Received:
    1,279
    GPU:
    Mi25/IGP
    Not so sure about that, since there are duplicates of each; it would make more sense that A1/A1 is a different channel than A2/A2. It may have something to do with it being an MCM (i.e. channel A1 is die A and A2 is die B). I think the single-socket platform is quad channel and the dual-socket board is 8-channel like the Naples platform, which would make more sense since it is most likely a server-derived chipset.
     
  7. thatguy91

    thatguy91 Guest

    It all should be considered speculation at the moment, including Naples. What you said makes sense, though. However, if a single socket is quad channel and a dual socket is octa channel, you could surmise that each socket is connected to the memory by its own quad channel, meaning one channel per RAM module, with each CPU directly connected to only four of the eight channels. For the second socket to access memory stored in the first socket's RAM, there must be an interconnect between the two RAM controllers, which could induce latency. It would take a clever act of data partitioning in RAM and thread allocation to reduce this.
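    That quad-channel-per-socket layout is classic NUMA. As a rough illustration only (DDR4-2400 figures; the latency numbers and helper names here are hypothetical, not from any AMD spec), the local/remote asymmetry looks like this:

    ```python
    # Illustrative NUMA sketch: each socket owns 4 DDR4 channels; a remote
    # access crosses the socket interconnect and pays extra latency.
    # All figures are rough and for illustration only.
    CHANNEL_GBPS = 19.2           # DDR4-2400: 2400 MT/s * 8 bytes per transfer
    CHANNELS_PER_SOCKET = 4
    LOCAL_LATENCY_NS = 80         # hypothetical local DRAM access
    INTERCONNECT_PENALTY_NS = 60  # hypothetical socket-to-socket hop

    def socket_bandwidth() -> float:
        """Peak local memory bandwidth of one socket, in GB/s."""
        return CHANNEL_GBPS * CHANNELS_PER_SOCKET

    def access_latency(remote: bool) -> int:
        """Latency of one access in ns; remote accesses cross the interconnect."""
        return LOCAL_LATENCY_NS + (INTERCONNECT_PENALTY_NS if remote else 0)

    print(socket_bandwidth())     # 76.8 GB/s per socket
    print(access_latency(False))  # 80 ns local
    print(access_latency(True))   # 140 ns remote
    ```

    The remote penalty is exactly why the data partitioning and thread allocation mentioned above matter: an OS scheduler that keeps threads on the socket that owns their memory avoids the interconnect hop entirely.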

    There is something questionable on that diagram, though: the single PCI-E x4 connection to the rest of the chipset functions. This is especially odd considering that moving most of that data through the chipset requires connectivity with the CPU, I believe, and in both cases the connectivity between the IO and the X390/X399 chipset exceeds the x4 link to the CPU, and therefore to memory and CPU-connected IO.

    Another big point:
    The X399 clearly shows onboard graphics, which is interesting. Is the RZ4700 a mega APU?
     
    Last edited by a moderator: Mar 25, 2017
  8. Aura89

    Aura89 Ancient Guru

    Messages:
    8,413
    Likes Received:
    1,483
    GPU:
    -
    Last edited: Mar 25, 2017
  9. GeniusPr0

    GeniusPr0 Maha Guru

    Messages:
    1,439
    Likes Received:
    108
    GPU:
    Surpim LiquidX 4090
    PCI-E 2.0 was saturated by quad-SLI Titans (original) on X79, and improvements were shown moving to a board with an extra PLX chip (ASRock Extreme 11).
     
  10. Aura89

    Aura89 Ancient Guru

    Messages:
    8,413
    Likes Received:
    1,483
    GPU:
    -
    Again, barely. So how can you justify stating that they are saturating PCI-Express 3.0?
     

  11. GeniusPr0

    GeniusPr0 Maha Guru

    Messages:
    1,439
    Likes Received:
    108
    GPU:
    Surpim LiquidX 4090
  12. Pictus

    Pictus Master Guru

    Messages:
    234
    Likes Received:
    90
    GPU:
    RX 5600 XT
  13. Aura89

    Aura89 Ancient Guru

    Messages:
    8,413
    Likes Received:
    1,483
    GPU:
    -
    Love it when people post YouTube reviews. Why do people like to throw them around as though they're actually good reviews?

    Anyways, I watched both of your reviews, and again:

    Barely. So how can you justify stating that they are saturating PCI-Express 3.0?

    There's a very small FPS difference in those videos, so again, in case the question is not CLEAR: how can you JUSTIFY stating that it's SATURATING 3.0 16x? For example, EVEN if it was, which it's not, but EVEN if it was, how would you prove it? What would you have to prove that against?

    And since you're fond of YouTube reviews, I'll just link you to a nice, reputable and accurate review, which in reality shows what those YouTube videos also show: that it's barely, very barely saturated.

    http://www.guru3d.com/articles-pages/pci-express-scaling-game-performance-analysis-review,1.html
     
    Last edited: Mar 25, 2017
  14. GeniusPr0

    GeniusPr0 Maha Guru

    Messages:
    1,439
    Likes Received:
    108
    GPU:
    Surpim LiquidX 4090
    Actually, I meant 8x vs. 16x; it's impossible to tell if 3.0 is being saturated without overclocking the BCLK/PCI-E, which no one has done tests for.

    And the author of the video literally reaches a different conclusion: 170 fps vs. 210 fps in one game, 60+ fps vs. <60 in another. The 1080 video isn't as impressive, since 1080s are too weak for 4K.

    Same conclusion: it depends on the app, which is what the author of the videos said.
     
  15. Agonist

    Agonist Ancient Guru

    Messages:
    4,284
    Likes Received:
    1,312
    GPU:
    XFX 7900xtx Black
    Well, 8x 3.0 is the same bandwidth as 16x 2.0.

    I bet I'd barely get any difference in fps between my 2.0 slot and my 3.0 slot with my Fury @ 3840x1600.

    And with heavily overclocked 4 GB GTX 670s, I saw 3 fps more on average running them in 3.0 16x vs 2.0 16x @ 5120x2160, which is nothing.

    And that was when I ran a 125 bootstrap as well.

    And I ran a 105 bootstrap when I had my i7 3820, to have it @ 4.5 GHz, before getting this board and a 3930K, and saw no difference.

    Now, I saw a little more difference with GTX 970 SLI @ 7680x1080 in 3.0 vs 2.0. That was actually about a 10 fps difference, but it really wasn't noticeable, as my fps was always above 40 in most games.
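    The generation/lane equivalence at the top of this post is easy to sanity-check. A minimal sketch (the per-lane figures follow the published PCI-E signaling rates and encoding overheads; the function name is my own):

    ```python
    # Approximate usable bandwidth per PCI-E lane, in GB/s, after
    # encoding overhead (1.1/2.0 use 8b/10b, 3.0/4.0 use 128b/130b).
    PER_LANE_GBPS = {
        "1.1": 0.25,   # 2.5 GT/s * 8/10
        "2.0": 0.5,    # 5 GT/s * 8/10
        "3.0": 0.985,  # 8 GT/s * 128/130
        "4.0": 1.969,  # 16 GT/s * 128/130
    }

    def link_bandwidth(gen: str, lanes: int) -> float:
        """One-direction bandwidth of a link in GB/s."""
        return PER_LANE_GBPS[gen] * lanes

    # 16x 2.0 vs 8x 3.0: near-identical, as the post says.
    print(link_bandwidth("2.0", 16))  # 8.0 GB/s
    print(link_bandwidth("3.0", 8))   # ~7.88 GB/s
    ```

    The match is only approximate: 3.0 doubled throughput not by doubling the clock but by raising 5 GT/s to 8 GT/s while cutting the 20% 8b/10b encoding overhead to under 2% with 128b/130b.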
     

  16. Evildead666

    Evildead666 Guest

    Messages:
    1,309
    Likes Received:
    277
    GPU:
    Vega64/EKWB/Noctua
    It looks like the onboard graphics comes from the chipset.
    It's much more likely to be an ASPEED-type video output, just for management purposes.
     
  17. Aura89

    Aura89 Ancient Guru

    Messages:
    8,413
    Likes Received:
    1,483
    GPU:
    -
    ^ That's the thing: you didn't state that it would saturate PCI-Express 3.0 8x specifically, you just stated PCI-Express 3.0, which implies 16x, since that's the max for PCI-Express 3.0.

    In the sense that PCI-Express 4.0 would let you run fewer lanes without being over-saturated, I agree. For instance, if a board could only feed its two 16x slots with 8x lanes each when in SLI, PCI-Express 4.0 would be great, because each slot would be effectively PCI-Express 3.0 16x.

    But the claim that PCI-Express 3.0 16x is being saturated, when there's barely any difference (aside from some very specific scenarios) between PCI-Express 3.0 8x and 16x, just doesn't make sense. I'm not saying you wouldn't get a little more FPS with 4.0 in general, but as the Guru3D article showed, you can still run most games relatively comfortably on PCI-Express 1.1 16x.
     
  18. Evildead666

    Evildead666 Guest

    Messages:
    1,309
    Likes Received:
    277
    GPU:
    Vega64/EKWB/Noctua
    Just had a thought about the memory slots on the single-CPU platform.

    It looks to me like you have to populate the memory in pairs: 2x A1 slots, 2x B1 slots, etc.
    That would equate to one per MCM package.
     
    Last edited: Mar 25, 2017
  19. Alessio1989

    Alessio1989 Ancient Guru

    Messages:
    2,941
    Likes Received:
    1,239
    GPU:
    .
    THIS is a chipset.


    But now (since HBM is here to stay) it's time to kill PCI-E and move to NVLink or something equivalent; PCI-E x4 is a big NO-NO.
    Please note I am talking only about the protocol, not the attachment form factors.
     
    Last edited: Mar 25, 2017
  20. Evildead666

    Evildead666 Guest

    Messages:
    1,309
    Likes Received:
    277
    GPU:
    Vega64/EKWB/Noctua
    I suspect you're referring to the chipset interconnect?
    Intel also uses PCIe 3.0 x4 as their chipset bus, just under another name (DMI 3.0, I think).
    If they need more, they can just move to an x8 bus, or PCIe 4.0 x4, when it's needed.

    edit: as for the HBM, I'm betting the next AMD APUs based on Zen are going to be sweet. They could be even better for discrete gaming too.

    edit2: or at least the next APU versions of the single-CPU workstation chips, which might have HBM.
     
    Last edited: Mar 25, 2017

Share This Page