AMD Working on GDDR6 DRAM Controller

Discussion in 'Frontpage news' started by Hilbert Hagedoorn, Dec 5, 2017.

  1. Hilbert Hagedoorn

    Hilbert Hagedoorn Don Vito Corleone Staff Member

    Messages:
    37,366
    Likes Received:
    6,393
    GPU:
    AMD | NVIDIA
It is always interesting to see what people note down on their LinkedIn pages. This time it is a principal member of AMD's technical staff, Daehyun Jun. His entry states he is/was working on a DRA...

    AMD Working on GDDR6 DRAM Controller
     
  2. nz3777

    nz3777 Ancient Guru

    Messages:
    2,386
    Likes Received:
    177
    GPU:
    Gtx 980 Strix
    This could be Big News!
     
  3. warlord

    warlord Ancient Guru

    Messages:
    2,825
    Likes Received:
    945
    GPU:
    Null
RAM is important, but we need proper GPU power to go with it. Resolutions up to 8K are already smooth with that kind of bandwidth; I can't say the same about the raw power of the processing unit. :( Sometimes the progression of technology is not equal across all aspects.
     
  4. Denial

    Denial Ancient Guru

    Messages:
    12,658
    Likes Received:
    1,880
    GPU:
    EVGA 1080Ti
I wrote this in the other GDDR6 thread - faster RAM can lead to a faster GPU indirectly. Previously an AMD card may have required a 256-bit bus to hit X bandwidth, but with GDDR6 they may only need a 128-bit bus to hit that same X. This leads to reduced power consumption, which can then be spent on faster clocks, and it reduces the die area taken up by the memory controller - which potentially means you can fit more cores in the same size chip.
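The arithmetic behind that: peak memory bandwidth is bus width (bits) divided by 8, times the per-pin data rate. A minimal sketch, assuming an 8 Gbps GDDR5 part and a 16 Gbps GDDR6 part (the data rates are illustrative, not tied to any specific card):

```python
def bandwidth_gbs(bus_width_bits: int, data_rate_gbps: float) -> float:
    """Peak bandwidth in GB/s: number of pins times per-pin data rate, over 8 bits/byte."""
    return bus_width_bits / 8 * data_rate_gbps

# 256-bit bus with 8 Gbps GDDR5 vs. 128-bit bus with 16 Gbps GDDR6:
print(bandwidth_gbs(256, 8.0))   # 256.0 GB/s
print(bandwidth_gbs(128, 16.0))  # 256.0 GB/s -- same bandwidth, half the bus
```

Doubling the per-pin rate lets the bus width (and the controller silicon that drives it) shrink by half for the same peak throughput.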
     
    Silva and warlord like this.

  5. warlord

    warlord Ancient Guru

    Messages:
    2,825
    Likes Received:
    945
    GPU:
    Null
I agree with you technically, but think of it like this, mate:

We have a motorway or a big wide road -> translates into bus width (bits)
The quality of this road - surface, asphalt, safety precautions, etc. -> translates into bandwidth and RAM generation

But at the end of the day, the most game-changing factor is the kind of car, its model year, and most of all its horsepower and/or capabilities (the GPU core, for this subject).

All in all, I believe we should first have demand from a GPU core that genuinely needs that kind of support, rather than paying DRAM manufacturers to test their innovations and new products for them. The GPU's hardware as a whole should scale equally as the years pass.
     
  6. Silva

    Silva Maha Guru

    Messages:
    1,049
    Likes Received:
    359
    GPU:
    Asus RX560 4G
Plus: a smaller bus equals cheaper cards.
     
  7. schmidtbag

    schmidtbag Ancient Guru

    Messages:
    4,776
    Likes Received:
    1,548
    GPU:
    HIS R9 290
HBM is great, practical, and arguably necessary, but only for compute tasks. I recall people here saying that Vega doesn't take advantage of HBM, and that is simply false for certain GPGPU workloads. Pretty much every time a Vega 64 outperforms a 1080 Ti, it's because HBM kicked in.

For gaming purposes, HBM is overkill and a needless expense. I seriously hope both AMD and Nvidia commit to HBM for FirePro/Quadro/Titan GPUs, but for everyone else's sake, GDDR6 is clearly the better choice. Until HBM can be mass-produced affordably, I don't want to see it on consumer/gamer GPUs.
     
    Silva and rl66 like this.
  8. kruno

    kruno Master Guru

    Messages:
    258
    Likes Received:
    70
    GPU:
    4890/1
Power efficiency is the second big HBM gain; you simply cannot go against physics. HBM is stacked and sits very close to the GPU, so it consumes much less power than classical memory. I have seen estimates online saying that if AMD had gone the GDDR5 route, it would have cost them 100W more in power compared to HBM2.
     
  9. icedman

    icedman Master Guru

    Messages:
    951
    Likes Received:
    86
    GPU:
    MSI Duke GTX 1080
AMD's problem, in my opinion, is that they went overboard on the GPU core; a 3096-shader GPU with higher clocks would probably have been more efficient and performed about the same, while keeping costs down.
     
  10. kruno

    kruno Master Guru

    Messages:
    258
    Likes Received:
    70
    GPU:
    4890/1
From all the sites I visit, they all come to the same conclusion about AMD:
A) They don't have the resources to develop two (or more) different architectures, one purely for graphics and a second for compute, so they develop one jack-of-all-trades that is not very efficient.
B) Process node. GloFo's 14nm, which was co-developed with Samsung (Samsung was in charge), is optimized for low power and low clocks (phones). That process is great on power if you stay within its limits, but if you go over, bye bye, game over: power draw hits the roof.
C) What you said: if AMD lowered the clocks, Vega would be great power-wise (there are plenty of examples and reviews online where people underclocked and undervolted Vega, and the results are fantastic).
     

  11. Reddoguk

    Reddoguk Ancient Guru

    Messages:
    1,824
    Likes Received:
    163
    GPU:
    Guru3d GTX 980 G1
What's also interesting on that page is DDR5 in 2018, but by the look of it, it doesn't start to get any faster until 2020. Really, DDR5 already? Will that mean new RAM sockets will be needed, or will current DDR4 DIMMs still work with DDR5?
     
  12. user1

    user1 Maha Guru

    Messages:
    1,476
    Likes Received:
    485
    GPU:
    hd 6870
This has been known for quite some time; references to GDDR6 have been present in the ADL libraries for almost two years now.

I will say, the implementation of Infinity Fabric on AMD's GPUs should make future GPU development faster and cheaper; they can basically just cut and paste various blocks (like memory controllers) now.
     
    Last edited: Dec 6, 2017
  13. sykozis

    sykozis Ancient Guru

    Messages:
    21,400
    Likes Received:
    806
    GPU:
    MSI RX5700
    It's better to have memory that is faster than what is required by the GPU, than to have a GPU that requires data faster than the memory can handle it. In other words, the GPU itself should be the sole bottleneck of a graphics card, never the memory. If the memory is the bottleneck, the engineers screwed up.

Expecting GPU makers and memory makers to develop products that perfectly complement each other is insane and would increase prices to the point that the dedicated graphics market would almost completely collapse.
     
  14. Neo Cyrus

    Neo Cyrus Ancient Guru

    Messages:
    9,288
    Likes Received:
    346
    GPU:
    GTX 1080 Ti @ 2GHz
    Gee, maybe they'll actually sell video cards now instead of just saying "we don't make enough profit on HBM cards" and selling nothing.
    It's about production cost. They don't want to go back to GDDR5 for their planned flagships, and they don't want to pay for HBM. They gambled on being able to produce it at lower costs by now and failed. In turn we all got screwed for it, as if the GPU market wasn't abysmal enough.
     
  15. Amx85

    Amx85 Master Guru

    Messages:
    333
    Likes Received:
    10
    GPU:
    MSI R7-260X2GD5/OC
The weird thing is that now AMD will use GDDR while Nvidia uses HBM2 :V so WTF!
     

  16. Loophole35

    Loophole35 Ancient Guru

    Messages:
    9,558
    Likes Received:
    989
    GPU:
    EVGA 1080ti SC
I remember all the "experts" on this and other forums scoffing at 256-bit cards. With GDDR6 you could have 780 Ti bandwidth on a 128-bit memory bus.
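A quick sketch of the numbers behind that comparison (the 780 Ti shipped a 384-bit bus with 7 Gbps GDDR5; the GDDR6 rates here are illustrative, with early parts rated in the mid-teens of Gbps):

```python
def bandwidth_gbs(bus_width_bits: int, data_rate_gbps: float) -> float:
    """Peak bandwidth in GB/s: number of pins times per-pin data rate, over 8 bits/byte."""
    return bus_width_bits / 8 * data_rate_gbps

# GTX 780 Ti: 384-bit bus, 7 Gbps GDDR5.
print(bandwidth_gbs(384, 7.0))  # 336.0 GB/s

# Per-pin data rate a 128-bit bus would need to match it:
print(336.0 / (128 / 8))        # 21.0 Gbps

# A 192-bit bus at 14 Gbps GDDR6 matches it exactly:
print(bandwidth_gbs(192, 14.0))  # 336.0 GB/s
```

So a 128-bit bus would need roughly 21 Gbps signaling to fully match a 780 Ti, while a 192-bit GDDR6 bus gets there comfortably at rates the early parts already offered.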
     
    Silva likes this.
  17. Denial

    Denial Ancient Guru

    Messages:
    12,658
    Likes Received:
    1,880
    GPU:
    EVGA 1080Ti
Both companies have used and will continue to use both... HBM2 doesn't make sense on budget cards, and GDDR doesn't make sense on compute cards. The only reason Vega has HBM2 is that AMD can't afford to simultaneously develop as many variants as Nvidia can.
     
  18. PrMinisterGR

    PrMinisterGR Ancient Guru

    Messages:
    7,005
    Likes Received:
    139
    GPU:
    Sapphire 7970 Quadrobake
I wonder what the effect of HBM would be for multi-chip configurations like Navi. Maybe it makes a lot of sense there, and it will end up actually enabling cheaper GPUs.
     