https://www.reddit.com/r/Amd/comments/j06xcd/number_of_cus_of_navi_23_navi_31_van_gogh_cezanne/ Seems like Big Navi is 80CUs at 2.2GHz with HBM. NOICE.
Yeah, I'd really like it to have HBM2, as I think this might be their first card to actually take advantage of it. Everything points to that not being the case though, but never say never.
You won't see it in consumer/gamer versions of the upcoming Navi, only maybe in Fire Pro cards or something. Even then probably not, probably 32GB of GDDR6.
So we're looking at twice the performance of the RX 5700 XT? That's literally 3080 territory right there. Nvidia clearly knew it and didn't want to repeat the mistake from 2013, where they ended up dropping 780 prices by $150 four months after release.
I think because most of the rumors have pointed to 256-bit GDDR6 for RDNA and HBM2 for CDNA, now that AMD has split their professional and gaming divisions. I'd love it to have HBM2, but the cost might make it harder to compete with the pricing of Nvidia's cards.
I don't "know" anything, but economically speaking Apple would be one to prefer HBM2 over GDDR6, as they are loaded. Cost-efficiency-wise, GDDR6 would be the first choice for AMD, as long as it isn't a significant performance hit.
Even card vendor websites had Ampere cards with wrong CUDA core counts, so that doesn't say a lot. Also, actual drivers list Navi 21 chips with HBM.
Two HBM2E stacks could do over 1TB/sec. They could literally do a 16GB card with that and save a ton of power, or go full and do a card with 12GB or 24GB at 1.5TB/sec. The irony is that HBM2E could also be cheaper for them if you go for two stacks: you save power and don't need to trace 8x GDDR6X chips or whatever. It will also lower the power delivery requirements quite a lot.
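The stack math is easy to sanity-check. This is a rough sketch using published HBM2E-class pin rates (1024-bit bus per stack, ~3.2-3.6 Gbps/pin); none of these numbers come from any confirmed Navi 21 configuration.

```python
# Aggregate HBM bandwidth: pins * data rate per pin / 8 bits per byte.
# Pin counts and rates are HBM2E-class assumptions, not a confirmed config.

def hbm_bandwidth_gbs(stacks: int, gbps_per_pin: float,
                      pins_per_stack: int = 1024) -> float:
    """Return aggregate bandwidth in GB/s for a given stack count."""
    return stacks * pins_per_stack * gbps_per_pin / 8

print(hbm_bandwidth_gbs(2, 3.6))  # 921.6 GB/s with two fast stacks
print(hbm_bandwidth_gbs(3, 3.6))  # 1382.4 GB/s with three stacks
```

So two stacks land just under 1 TB/sec at the fastest published pin rates, and the 1.5TB/sec figure implies a third stack (which also fits the 12GB/24GB capacities mentioned).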
It would require a rather large chip and interposer though. Especially when we have no idea how much the dedicated ray tracing hardware adds to the die.
We know all of this already, actually. The SoC die of the Series X is 360mm2, and that includes an octa-core Zen 2 CPU which is around 70mm2, plus I/O and a 320-bit GDDR6 controller. There is a 56CU GPU in there, and you can see it actually uses around 60% of the space. There is even a picture. If we are talking HBM, then AMD could stuff around 80CUs in under 600mm2, easy peasy.
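The reasoning above can be run as back-of-envelope arithmetic. All inputs here are the comment's own estimates (360mm2 SoC, ~60% of it GPU, 56 CUs), not measured die figures:

```python
# CU density estimate from the Series X numbers quoted in the comment.
soc_area_mm2 = 360.0   # total Series X SoC area (comment's figure)
gpu_fraction = 0.60    # rough share of the die used by the GPU portion
cus = 56               # CU count of the Series X GPU

area_per_cu = soc_area_mm2 * gpu_fraction / cus  # ~3.86 mm^2 per CU

# Scale to a hypothetical 80CU Navi 21; this covers CUs only, so the
# real die would add uncore, memory controllers, and any RT hardware.
gpu_only_area = 80 * area_per_cu                 # ~308.6 mm^2

print(round(area_per_cu, 2), round(gpu_only_area, 1))
```

Even with generous headroom for uncore and I/O, ~308mm2 of CUs leaves a lot of slack under a 600mm2 budget, which is the point being made.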
Respectfully @PrMinisterGR, I would like to suggest this: https://semiengineering.com/hbm2-vs-gddr6-tradeoffs-in-dram/. It's not cheaper at scale. Possibly in a year or two? I cannot say; I think it's close right now.
The best option for AMD to make a proper impact with these cards would be to release a 6900XT with 80 CUs and 16GB of GDDR6, and then a 6900XTX with 80 CUs and 16GB of HBM2. That could be their TITAN equivalent: a better-binned 80CU chip with higher clock speeds and better overclocking potential. After all, the Radeon VII had 16GB of HBM2 and released for $700, so it's not as if they can't do it at a good price. Those highly clocked 80 CUs would seriously benefit from the increased bandwidth.

This is all, of course, if the rumoured 128MB cache isn't real, as a massive cache would also help the GDDR6 memory compete against Nvidia's G6X. But for a total knockout blow, an HBM2 variant would kill the 3080 in normal rasterization; ray tracing is another matter.

I would also like to see AMD have their own version of DLSS. Some rumours point towards them partnering with Microsoft on an upsampling technique that is said to work at a global level, doesn't require per-game support, and will work with any DX12/Vulkan game. Sounds too good to be true; either that or it has terrible IQ.
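The intuition behind "a massive cache helps GDDR6 compete" can be sketched numerically. Requests served from an on-die cache never touch DRAM, so effective bandwidth scales roughly as raw bandwidth divided by the miss rate. The hit rates below are made-up illustrations; nothing about a 128MB cache or its hit rates was confirmed:

```python
# Effective bandwidth under a simple cache model: only misses consume
# DRAM bandwidth, so effective_bw = raw_bw / (1 - hit_rate).
# Hit rates are hypothetical; the 512 GB/s figure is just a 256-bit
# bus at 16 Gbps GDDR6 (256/8 * 16 = 512).

def effective_bandwidth_gbs(raw_gbs: float, hit_rate: float) -> float:
    return raw_gbs / (1.0 - hit_rate)

raw = 512.0  # GB/s, 256-bit GDDR6 at 16 Gbps
for hit in (0.0, 0.4, 0.6):
    print(hit, round(effective_bandwidth_gbs(raw, hit), 1))
```

Under this toy model, a 60% hit rate would make a 256-bit GDDR6 bus behave like ~1.28 TB/s of raw bandwidth, comfortably past a 3080's G6X figure, which is why the cache rumour matters so much for the GDDR6 variant.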
Problem is that AMD split their architecture into CDNA/RDNA. Unless they are going to reuse the 6900XTX die as a CDNA card, it's a massive amount of money to do an HBM chip for one single application. The Radeon VII was just one of their Instinct cards rebranded.