Radeon Vega 20 Will Have XGMI Interconnect

Discussion in 'Frontpage news' started by Hilbert Hagedoorn, Sep 6, 2018.

  1. waltc3

    waltc3 Maha Guru

    Messages:
    1,377
    Likes Received:
    494
    GPU:
    AMD 50th Ann 5700XT
    I'm with the "laugh" crowd...;) Nothing like seeing pre-production marketing for non-shipping hardware, is there?...! Countdowns...secret this and secret that...yada-yada-yada...false leaks galore, etc. Such fun. Well... sorta--maybe...nahhhhhhh.... :/ Not really, after you've seen it a dozen times already.

    Thing is, sadly, some people always believe the hype--but, fortunately, the cure for that is simply living a few years longer! nVidia is egregiously bad about it, and Brian Burke has a long-running affair with "50%" this or that--a habit going all the way back to the original set of "Detonator drivers," which guaranteed "up to 50% performance increases!" (With most people not seeing the "up to" qualifier--and that was before Burke's time at nV, while he was still at 3dfx, IIRC.)

    I think in those days, after the fantabulous, incredilissimo!!! original Detonator drivers were released (I had either a TNT or TNT2 with a couple of V2 6MB cards--so I could test 3dfx GLIDE acceleration and compare it to nVidia's OpenGL 3D acceleration all day long--had a big GLIDE game library, and D3D/OpenGL titles at the time were slim pickins!), we found one place where the frame-rate of the Dets sky-rocketed--a HUGE fps increase. It was in one location in a popular FPS shooter, and you could see it every time: as soon as you moved your character over into a corner facing the wall, it was incredible!! As long as you sat still looking at the wall while nothing at all happened graphically--darned if the frame-rates didn't spike through the roof! Heh-heh...

    I doubt I will ever forget things as over-blown as the "Detonator" driver scam, or the nV driver-on-rails 3DMark/FutureMark scandal, or the NV30-cum-leaf-blower scandal, or...well, I'd run out of room very soon so I should quit. Oh, almost forgot to mention that in all other respects that could be seen, the Detonators were indeed duds--in most cases there was no performance improvement whatsoever to be observed. Thus the name "Dudanators" was soon applied... Haven't seen a new "Detonator Driver Release" out of the ol' nV in many a year...such a shame, as they are so much fun to critique...;) Outlandish, hugely exaggerated performance-improvement claims are among the easiest to debunk--naturally. Doesn't seem to stop nV's PR department, though--they are consistent if nothing else...:)

    Myself...if AMD or nV, for that matter, can ever come up with a way to put two GPUs together on a card and have the OS, the GPU drivers, *and* the 3d game APIs see it as not two, but just one single GPU...then we might actually see something worthy of even nVidia PR exaggerations and exclamations..!!

    We may be closer than any of us thinks...D3D12 contains the stuff to get it done--it moves multi-GPU support into the API transparently--but the catch is that a D3D12 game has to be written from the ground up to do that, since the API still supports the old single-GPU model, too. We may have to wait awhile--but this does sound interesting. Still, hard to judge.
     
    JAMVA, Fox2232 and carnivore like this.
  2. Pimpiklem

    Pimpiklem Member Guru

    Messages:
    162
    Likes Received:
    49
    GPU:
    amd
    HBM2 shits all over GDDR5X.
    945 stock to 1150, so much for not overclocking.
     
  3. Clawedge

    Clawedge Ancient Guru

    Messages:
    2,599
    Likes Received:
    925
    GPU:
    Radeon 570
    Inter-Chip Global Memory Interconnect
     
  4. Agent-A01

    Agent-A01 Ancient Guru

    Messages:
    11,554
    Likes Received:
    1,036
    GPU:
    3080Ti Strix H20
    Unfortunately not in practical use, which is why HBM will never see mass production across all card lineups.

    GDDR6 is faster than current implementations of HBM2

    HBM2 certainly didn't close the 30% gap between the 1080 Ti and Vega 64
     

  5. HeavyHemi

    HeavyHemi Ancient Guru

    Messages:
    6,954
    Likes Received:
    960
    GPU:
    GTX1080Ti
    The gist of your post is...'if you look at a wall, your FPS skyrockets'...who knew? What happens if you just stare at the sky? I might have to try that out! ;)
     
  6. Fox2232

    Fox2232 Ancient Guru

    Messages:
    11,809
    Likes Received:
    3,369
    GPU:
    6900XT+AW@240Hz
    What do you mean by "GDDR6 is faster than current implementations of HBM2"?
    Faster per mm2 of memory chip?
    Faster per chip?
    Faster per watt?
    Faster per $?

    I think you mean faster per pin. But that's not very important, since HBM2 has an order of magnitude more pins on a tiny chip.
    I have read Micron's official "blogger" post where he went into all the great things about GDDR6, comparing it to GDDR4/5 and mentioning power efficiency--and ending with: "if you need more... HBM2."
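    The per-pin vs. per-package distinction can be sketched with back-of-envelope numbers. (Illustrative spec-sheet figures, not from this thread: ~14 Gbps per pin for GDDR6 over a 32-bit chip interface, ~2 Gbps per pin for HBM2 over a 1024-bit stack interface.)

```python
# Per-pin speed vs. aggregate bandwidth: a GDDR6 chip clocks each pin much
# faster but exposes only a 32-bit data interface, while a single HBM2 stack
# runs its pins slower across a 1024-bit interface.
def chip_bandwidth_gbs(data_pins, gbps_per_pin):
    """Peak bandwidth of one memory package in GB/s."""
    return data_pins * gbps_per_pin / 8

gddr6_chip = chip_bandwidth_gbs(32, 14.0)    # 56.0 GB/s per chip
hbm2_stack = chip_bandwidth_gbs(1024, 2.0)   # 256.0 GB/s per stack
print(gddr6_chip, hbm2_stack)
```

    So one HBM2 stack delivers roughly what four or five GDDR6 chips do, despite the slower pins, which is the point being made here.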

    Why do people always bring AMD's power-capped (300W) cards into the discussion?
    Don't you and others get the fact that if those cards used GDDR, they would have even less power left for the GPU once AMD placed enough GDDR5(X) chips on the board?
    When you think about AMD's HBM cards which eat 300W (and gain additional performance once you increase the power limit), they would definitely be worse cards with whatever GDDR version was available at the time.

    As for cost: people who have never seen any pricing material have been saying for years: "HBM costs a fortune. HBM2 costs a fortune."
    It would be lovely if they ever cared to post a chip-capacity-versus-price comparison against a few GDDR5(X)/6 chips from a few manufacturers.


    Edit: And btw... practicality...
    Any SoC, any high-performance mobile device (from notebook to cellphone), anything in a data center where cost of ownership matters more than initial price.

    Why? Small form factor. High bandwidth per package. Low power consumption per transferred bit of data. High data density.

    Do you think that $700~1000 cellphones should not use HBM2? I think it is exactly what they should be using at that price point to at least look justified. And the consumer at least gets much faster and more power-efficient memory.
     
    Last edited: Sep 7, 2018
  7. Vananovion

    Vananovion Member Guru

    Messages:
    152
    Likes Received:
    77
    GPU:
    Radeon RX Vega 56
    I second what Fox2232 said. If you want to know more, Gamers Nexus has a great video explaining why AMD had to use HBM2 on Vega, even though it made the cards more expensive.

    I'd also add that HBM is a fairly new technology, while GDDR has been around for quite some time. I like to think about it as HDDs vs SSDs--SSDs used to be out of reach of regular consumers when they first came out, and nowadays no one sane uses an HDD for any sort of performance use-case. It is still some way off, but I think the same is going to happen with GDDR and HBM--GDDR will move down to budget solutions, while HBM will take the mainstream and high-end stuff.
     
  8. Agent-A01

    Agent-A01 Ancient Guru

    Messages:
    11,554
    Likes Received:
    1,036
    GPU:
    3080Ti Strix H20
    Faster as in memory bandwidth.

    AFAIK Fury X > Vega 64 in memory bandwidth, at ~500 GB/s, with Vega being lower.

    The upcoming 2080 Ti, with a 352-bit bus using GDDR6, has 616 GB/s total bandwidth, which is much more than current consumer cards.

    Even the Titan V with 3072-bit HBM2 barely has more bandwidth than the 2080 Ti with its cut 352-bit bus (a 384-bit unlocked bus would put it above the Titan V, at 672 GB/s).
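    Those figures follow from bus width times per-pin data rate. A quick check, assuming the published spec rates (14 Gbps for GDDR6 on the 2080 Ti, ~1.7 Gbps per pin for the Titan V's HBM2):

```python
def bandwidth_gbs(bus_width_bits, gbps_per_pin):
    """Peak memory bandwidth in GB/s = bus width (bits) x per-pin rate (Gbit/s) / 8."""
    return bus_width_bits * gbps_per_pin / 8

print(bandwidth_gbs(352, 14.0))   # 2080 Ti, 352-bit GDDR6 @ 14 Gbps -> 616.0
print(bandwidth_gbs(384, 14.0))   # full 384-bit bus at the same rate -> 672.0
print(bandwidth_gbs(3072, 1.7))   # Titan V, 3072-bit HBM2 @ 1.7 Gbps -> ~652.8
```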

    GDDR6 runs at 1.35 V vs GDDR5's 1.5 V, with lower latency, so in practical use it shortens the gap with HBM.

    Only in very power-strained cases would GPUs need the marginal difference saved in power usage.

    As for AMD being power-capped, that's their own fault.
    They wouldn't need the extra few watts of savings if they hadn't decided to maximize clock speeds out of the box to get closer to the performance of equivalent NV cards.

    Less voltage and lower clocks would have saved them a ton of wattage, making HBM unnecessary.

    As for mobile phones, a single DRAM chip has a tiny power envelope.
    A single HBM stack is not going to bring in hours of extra usage.

    Anyway, with the 2080 Ti using GDDR6, it's obvious that HBM doesn't bring enough benefits to offset the cost it incurs.
     
  9. Fox2232

    Fox2232 Ancient Guru

    Messages:
    11,809
    Likes Received:
    3,369
    GPU:
    6900XT+AW@240Hz
    So you meant that meaningless per-pin bandwidth. One thing nobody really has to care about. One can easily have 8 HBM(2) packages around a GPU.
    How many GDDR6(X) packages can you fit on a PCB? How many watts will it eat to even compete in bandwidth with HBM(2)?
    And IMC area--the way AMD boasted about HBM needing only a small IMC for its bandwidth...

    When you talk about small, HBM(2) is clearly better except for pricing, and even that's not that bad. When you talk about big GPUs with huge requirements...
    GDDR6 is needed, otherwise you would again be back in the days of stacking memory banks on both sides of the PCB.

    Re: "AFAIK fury x > vega64 in memory bandwidth ~500GB/s with vega being lower."
    It underscores the fact that you again overlook development in a technology you are trying to make look worse. Fury X had to use 4 HBM1 stacks; Vega 56/64 uses 2 HBM2 stacks.
    And HBM2 has improved since then.

    Please do not judge a memory technology based on the bandwidth of a product which could have used more or fewer memory chips depending on vendor decision.
    I really do wonder how you would phrase the following argument if the Titan V had 4x HBM2 stacks...
    "Even titan v with 3092bit HBM2 barely has more bandwidth than 2080Ti with cut 352bit bus(384bit unlocked would put it > titan v at 672GB/s)"

    You should understand another important thing when comparing those technologies: HBM has very short traces. Papers for GDDR6 imply that trace length and shaping matter more than ever.
    This means that with HBM you are mainly looking at what the chip can do, while with GDDR6 the PCB design matters too, due to noise sensitivity.
     
  10. JonasBeckman

    JonasBeckman Ancient Guru

    Messages:
    17,558
    Likes Received:
    2,951
    GPU:
    MSI 6800 "Ref"
    The bus is a lot wider on AMD's HBM cards, but speed is hindered a bit. Fury caps out at around 320 - 360 GB/s when measured, due to the core clock speed not keeping up, so not really the theoretical max of 512 GB/s, though the 4096-bit (4x 1024) bus width is nice. On its own it's probably not a deciding feature. (The 290X had some 512-bit ring bus design that was all the hype back when it launched; it didn't really compete directly with NVIDIA's offerings and was apparently pretty complex and costly for AMD.)

    Vega manages somewhere around 480 GB/s I think, Vega 64 that is, and even that is slightly held back due to the specs originally being 1 GHz at 1.2 V rather than the shipping 945 MHz at 1.35 V, though it's possible Vega 20 here will have refresh chips hitting higher speeds without having to downclock and overvolt them.
    (Clock speed, I think the Frontier Edition got it right: 1400 MHz at 1.0 V, though it can boost a bit higher. 1600 MHz at 1.2 V just generates a lot of extra heat, and that turbo mode is ridiculous with power usage versus what little gains you get.)
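    The theoretical peaks above work out from the published per-pin rates (measured throughput is lower, as noted; 945 MHz DDR gives 1.89 Gbps per pin on Vega):

```python
def peak_gbs(bus_width_bits, gbps_per_pin):
    """Theoretical peak bandwidth in GB/s from bus width and per-pin rate."""
    return bus_width_bits * gbps_per_pin / 8

fury_x  = peak_gbs(4096, 1.0)    # 4 HBM1 stacks @ 1 Gbps/pin -> 512.0 GB/s
vega_64 = peak_gbs(2048, 1.89)   # 2 HBM2 stacks @ 945 MHz DDR -> ~483.8 GB/s
print(fury_x, vega_64)
```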

    Guess that with Vega 20 being more of a server or workstation card, it might not matter too much for gaming and general consumer needs, as the workload is more specialized, although some of the improvements could carry over into Navi as a further tweak of the GCN arch; how much remains to be seen. :)
    (The various other features the GPU lost probably factor in too, even if overall performance gains might not have been quite all that AMD hyped them up to be, though it looks like the primitive shaders are making a return with Navi, at least in some form.)


    EDIT: Well, that, and as far as gaming goes, catching up to the 1080 Ti needs at least another 30% performance bump, and then between that and the 2080 Ti it could need another 20% at least to try and match it, although I guess AMD isn't going to try for the top GPU performance position. It will be interesting to see what's next. (Coming in several months after the 2000 series from NVIDIA isn't going to help things either.)

    Not too sure about Vega 20 either; I've heard a lot of this type of deep learning already goes via CUDA, so that makes it hard for AMD to get into this area, I suppose. Also something that will be interesting to hear more about.


    EDIT: And I guess we might see more from Intel next year too, and what that card will bring.
    Miners will have quite a few choices now, ha ha. Well, the 2000 series might actually exceed Vega now that it has faster memory, but I guess that activity has lessened a bit from how it was just a year ago.
    (Well, until it inevitably flares up again and some other popular coin appears.)
     
    Last edited: Sep 7, 2018

  11. Agent-A01

    Agent-A01 Ancient Guru

    Messages:
    11,554
    Likes Received:
    1,036
    GPU:
    3080Ti Strix H20
    The only thing HBM has going for it right now in current products is a smaller package and lower power usage.
    I'm well aware that HBM saves a lot of space.

    Realistically neither of those are a big issue right now which is why it hasn't seen widespread usage.

    GDDR5X was around the 20-watt mark for total power usage, which isn't a big deal.
    GDDR6 will be much more efficient.

    Signal integrity for GDDR6 traces is a non-issue when a board is designed correctly.

    As for 'judging memory technology based on bandwidth', you need to look back at my original post, where I said GDDR6 is faster than current cards with HBM.
    That's all my argument was; I wasn't trying to say HBM is bad or anything.

    BTW, stacking memory chips on both sides of the PCB does not help increase performance, nor is it necessary for increased VRAM size.
    GDDR6 2GB chips are possible, which would take the current 11GB to 22GB using the same eleven 32-bit memory channels.
    There is no need for dual-sided memory on the PCB.

    Also, don't forget the GTX 285 days, when it had sixteen 32-bit memory channels; there is plenty of space left around current GPUs.

    Anyway, I wish HBM3 were in all new cards, but that's not happening any time soon.
    Too expensive for benefits that aren't necessary right now, at least for NV.

    And lastly, memory bandwidth is very important, especially as core architectures get much faster.
    It hasn't reached the point of needing several stacks of HBM for NV yet, though, apparently (the 2080 Ti being faster than the Titan V).
     
