AMD Big Navi would get Infinity Cache

Discussion in 'Frontpage news' started by Hilbert Hagedoorn, Oct 6, 2020.

  1. Alessio1989

    Alessio1989 Ancient Guru

    Messages:
    1,901
    Likes Received:
    521
    GPU:
    .
Not another hardware-closed API, please... Waiting for DirectStorage, although with a 32-64GB RAM system it will not be such a huge improvement, and I bet in 10 years those APIs will be completely abandoned...
     
  2. Undying

    Undying Ancient Guru

    Messages:
    14,551
    Likes Received:
    3,734
    GPU:
    Aorus RX580 XTR 8GB
HBM2E really needs to find its way to gaming GPUs. AMD, make that happen already.
     
  3. schmidtbag

    schmidtbag Ancient Guru

    Messages:
    5,578
    Likes Received:
    2,081
    GPU:
    HIS R9 290
Considering it's called "infinity", it makes me wonder if this is actually supposed to be a shared cache for APUs. Right now, memory bandwidth is the #1 issue for APUs, and the caches they have are simply not sufficient. dGPUs don't really need a fancy cache, since you can just widen the memory bus for a major bandwidth increase.
     
    Maddness and AlmondMan like this.
  4. Undying

    Undying Ancient Guru

    Messages:
    14,551
    Likes Received:
    3,734
    GPU:
    Aorus RX580 XTR 8GB
A wider bus increases cost and power consumption. This could be a great way to increase bandwidth on both iGPUs and dGPUs.
     

  5. schmidtbag

    schmidtbag Ancient Guru

    Messages:
    5,578
    Likes Received:
    2,081
    GPU:
    HIS R9 290
A decently large cache is far more expensive. To my understanding, a wider bus isn't necessarily more power hungry if the total number of components doesn't go up. So for example, whether you have 8GB on a 256-bit bus or a 384-bit bus, I don't think the power consumption is going to change much. Maybe I'm wrong - I don't have solid evidence of this - but the memory chips themselves aren't necessarily working harder. The GPU itself is working harder (and therefore will use more power) because it is provided more bandwidth to prevent downtime, but the same could be said of a larger cache.
     
  6. Fox2232

    Fox2232 Ancient Guru

    Messages:
    11,325
    Likes Received:
    3,092
    GPU:
    5700XT+AW@240Hz
That statement is as true as stating that nVidia would never get the graphics memory types AMD co-developed. History is clear on that note.
AMD's reasons are most likely cost and power draw.
A 384-bit bus vs. a 256-bit one incurs roughly a 50% power-draw penalty across the entire memory subsystem.
And it costs proportionally more for the memory chips, plus PCB complexity.
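As a rough sanity check on those proportions, here is a sketch assuming per-channel power and cost scale linearly with channel count (a simplification; real figures vary with clocks and chip density, and the 14 Gb/s data rate is just a common GDDR6 speed):

```python
# Peak bandwidth of a GDDR6 memory subsystem, and why 384-bit vs 256-bit
# is a ~50% step in channels (and thus, roughly, in memory power and cost).

def gddr6_bandwidth_gbps(bus_width_bits, data_rate_gbps=14):
    """Peak bandwidth in GB/s: bus width (bits) x per-pin data rate (Gb/s) / 8."""
    return bus_width_bits * data_rate_gbps / 8

narrow = gddr6_bandwidth_gbps(256)  # 448.0 GB/s
wide = gddr6_bandwidth_gbps(384)    # 672.0 GB/s

# Channel count scales by 384/256 = 1.5, which is where the ~50%
# memory-subsystem power/cost penalty estimate comes from.
print(narrow, wide, wide / narrow)  # 448.0 672.0 1.5
```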
     
    JonasBeckman and Undying like this.
  7. rl66

    rl66 Ancient Guru

    Messages:
    2,667
    Likes Received:
    274
    GPU:
    Sapphire RX 580X SE
Not sure - you can already do a lot with AMD's existing one, and cache isn't always for that (it quickly hits a limit and will not boost the real bandwidth)... Anyway, we will find out soon enough about that trick.
     
  8. Saabjock

    Saabjock Master Guru

    Messages:
    329
    Likes Received:
    53
    GPU:
    PNY GTX1070XLR8
    I'm all in for innovation.
If AMD has developed a process that will effectively shorten the path to data while guaranteeing a boost to GPU performance... I say 'go for it'.
    This should be good.
     
    Maddness and Undying like this.
  9. wavetrex

    wavetrex Maha Guru

    Messages:
    1,342
    Likes Received:
    957
    GPU:
    Zotac GTX1080 AMP!
It would be so funny if all the rumors are completely off and this "Big Navi" turns out to be something very different.
I mean, AMD feeding bulls*** to leakers over the last year is totally possible!

As this trademarked "Infinity Cache" could literally be anything... it doesn't even have to be connected to Navi at all. Maybe it's the new name for the Zen 3 cache, instead of last year's "Game Cache".
     
    barbacot likes this.
  10. schmidtbag

    schmidtbag Ancient Guru

    Messages:
    5,578
    Likes Received:
    2,081
    GPU:
    HIS R9 290
    I don't think I made myself clear:
    In my hypothetical situation, the amount of memory chips would be the same, but in the lower-bit model, some chips would share the same bus. It doesn't appear this happens often, but it does happen.

Take the new A6000 for example, which has a 384-bit bus despite having 48GB. That very obviously means multiple chips per bus. Therefore, Nvidia could, in theory, double the bus width without changing the number of memory chips. Doing so would not have as significant a power impact on the memory subsystem.
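The chip-count arithmetic for that card works out as follows (a sketch; it assumes 16Gb, i.e. 2GB, GDDR6 chips on 32-bit channels, which is the commonly reported A6000 configuration):

```python
# How 48GB on a 384-bit bus implies two chips per channel ("clamshell" mode).

BUS_WIDTH_BITS = 384
CHANNEL_BITS = 32        # each GDDR6 chip presents a 32-bit interface
CHIP_CAPACITY_GB = 2     # 16Gb chips
TOTAL_CAPACITY_GB = 48

channels = BUS_WIDTH_BITS // CHANNEL_BITS            # 12 channels
chips = TOTAL_CAPACITY_GB // CHIP_CAPACITY_GB        # 24 chips
chips_per_channel = chips // channels                # 2 -> clamshell

print(channels, chips, chips_per_channel)  # 12 24 2
```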
     

  11. Martin.v.r

    Martin.v.r Active Member

    Messages:
    64
    Likes Received:
    0
    GPU:
    AMD 7970 1200/1500
  12. barbacot

    barbacot Master Guru

    Messages:
    504
    Likes Received:
    432
    GPU:
    Asus 3080 Strix OC
I want to see if this "infinity cache" is really something revolutionary or just a marketing stunt like the "Game Cache" on AMD's Zen 2 CPUs, which, no matter what you call it, is still L3 cache... so AMD has "history" in this field, and with their launch approaching, the marketing machine is working at full speed.

I wouldn't be surprised if a new headline appeared claiming Big Navi uses "quantum technology" in its GPU, or that it was developed in collaboration with aliens...
At least that would be something spectacular, unlike Nvidia's black-leather-jacket man baking something in the oven...

    ...and then everybody would start talking again about bandwidth, chiplets, how great Lisa Su is and so on...

To tell you the truth, I am a little disappointed by their marketing department - I was expecting some "leaked" benchmarks that would blow my socks off, not some fancy words...
     
    Last edited: Oct 6, 2020
    mohiuddin likes this.
  13. Fox2232

    Fox2232 Ancient Guru

    Messages:
    11,325
    Likes Received:
    3,092
    GPU:
    5700XT+AW@240Hz
GDDR6 has a 32Gb variant = 4GB per chip.
     
    JonasBeckman likes this.
  14. JamesSneed

    JamesSneed Maha Guru

    Messages:
    1,110
    Likes Received:
    468
    GPU:
    GTX 1070
It is rumored that this is the patent, or one of the patents, that makes up Infinity Cache. If that's the case, then this really is revolutionary.

    https://www.freepatentsonline.com/20200293445.pdf

Edit: here is a recent video talking about said patent:
     
    Last edited: Oct 6, 2020
  15. ACEB

    ACEB Member Guru

    Messages:
    129
    Likes Received:
    69
    GPU:
    2
All it is is the precursor to chiplets. If you have a large GPU and break its various functions down into several parts, you can theoretically design a system like in the video, where each chiplet has a specific task and everything is interlinked with Infinity Cache. You keep individual die sizes low so costs come down, and you can design dedicated architectures per GPU function instead of tying it all into a single, ever-growing monolithic GPU, which has higher costs and lower yields and uses more power at higher temperatures.
Take an Nvidia design as an example: you could offload the RT cores to their own dedicated chip.
     

  16. barbacot

    barbacot Master Guru

    Messages:
    504
    Likes Received:
    432
    GPU:
    Asus 3080 Strix OC
So... I made an agreement with my kid: every time somebody writes "chiplet" or "HBM2" in this topic, I give him a buck - I have a feeling I will empty my wallet by tonight...
You, Sir, just cost me two bucks!
     
  17. Exodite

    Exodite Ancient Guru

    Messages:
    1,997
    Likes Received:
    199
    GPU:
    Sapphire Vega 56
    This is an area where I'd trust both the engineers and sales people to figure out what's the better option.

    It's worth keeping in mind, though by no means a perfect analogy, that each 77 square mm Zen 2 chiplet also houses 32MB of level 3 cache.

    With the yields TSMC are getting on their 7nm node now, it may just be that cache is both more effective and more economical.
     
  18. schmidtbag

    schmidtbag Ancient Guru

    Messages:
    5,578
    Likes Received:
    2,081
    GPU:
    HIS R9 290
According to what? I've only heard about a maximum of 16Gb, which to my understanding isn't even available yet. Remember, we're talking about GDDR6X here.
Regardless, I doubt there's a major power difference between using a single 4GB chip and a pair of 2GB chips sharing the same bus, assuming all else is equal.

Indeed it will be, but it's still disproportionately more expensive than VRAM. Otherwise, what's the point of having VRAM?
     
  19. Denial

    Denial Ancient Guru

    Messages:
    13,230
    Likes Received:
    2,719
    GPU:
    EVGA RTX 3080
    I don't think AMD is getting GDDR6X.
     
  20. Exodite

    Exodite Ancient Guru

    Messages:
    1,997
    Likes Received:
    199
    GPU:
    Sapphire Vega 56
Well sure, though that's hardly a fair analogy in this case - it's not like we're talking about a situation where AMD will include 8 to 16 GB of cache on-die.

GDDR6 is more expensive than DDR, and GDDR6X even more so. I would expect denser memory configurations to be disproportionately more expensive per chip too, but that's just an assumption on my part.
Increasing bus width is incredibly expensive due to the added board complexity. You may need more components, and the additional traces mean relocating other components, using boards with higher layer counts, more complex cooling solutions, and so on.
Also, keep in mind that the additional memory controllers on the chip aren't free either - take a look at the Navi 10 floor plan, for example.

This may all amount to nothing, of course - we're just speculating on rumors - but my point is that it's not difficult to envision a situation where a large cache (the rumored 128 MB, perhaps) would be more efficient than widening the bus or using more exotic memory solutions.
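One way to see why a large on-die cache could substitute for a wider bus is to estimate effective bandwidth as a function of hit rate. This is a sketch with made-up illustrative numbers (the 2000 GB/s cache figure and the hit rates are assumptions, not from any AMD disclosure), and it ignores latency effects:

```python
# Effective bandwidth seen by the GPU when a fraction of accesses hit a fast
# on-die cache and the rest fall through to GDDR6. All numbers illustrative.

def effective_bandwidth(hit_rate, cache_gbps=2000, dram_gbps=448):
    """Traffic-weighted average of cache and DRAM bandwidth."""
    return hit_rate * cache_gbps + (1 - hit_rate) * dram_gbps

# With no cache hits you get raw 256-bit GDDR6 bandwidth (448 GB/s); with a
# ~50% hit rate the effective figure already exceeds a 384-bit bus (672 GB/s).
print(effective_bandwidth(0.0))  # 448.0
print(effective_bandwidth(0.5))  # 1224.0
```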
     
