Discussion in 'Frontpage news' started by Hilbert Hagedoorn, Oct 21, 2020.
80 CUs and the performance of an RTX 2080
hoping the 6800 comes in at around $500
It's higher than that; where the hell did you hear that?
I assume he's joking. There was a meme going around claiming it was only 15% faster than an RTX 2080 Ti, and we know that's absolute baloney. The card shown, whether it was the 72 CU or 80 CU model, was almost on par with a 3080.
Not that we needed any proof that it was nonsense; just doubling a 5700 XT would be far faster than that.
What we need, unless you have already done it Hilbert, is some info regarding VRAM and the potential pitfalls, especially at higher resolutions. The 16 GB vs 10 GB thing could turn out to be a clever move by AMD.
I think it's potentially a key selling point. Personally, if I'm going to spend loads on a 4K screen etc., I'd want a bit more VRAM, based on some benchmarks I've seen (some showed usage at 9 GB, but I didn't see any slowdown).
When the price and performance are revealed, it might make your 3090 purchase the stupidest decision you ever made.
They're fully aware of the demand issue and have addressed that already.
As for yields, AMD has already touched on that as well when speaking about demand and availability. They've been settled on 7 nm for a while and have been killing it yield-wise ever since.
Has there ever been a shortage of AMD processors out there? No... not once have I seen a shortage of them, even though demand has been high.
I expect good things from them this go-round, and shaking up the competition doesn't get any better than that...
No. How so? 24 GB of that fast RAM for rendering is a dream at that price point. Them CUDA cores, wow!
It's all a matter of perspective, but if you bought it for gaming then yeah, you're mostly right...
I mean, the least it could do is allow for ultra settings at that resolution... Step into the future... please? Just messing with you.
People see one card beating another.
I see a performance line from the 3060 to the 3090, with many points in between, from AMD and NVIDIA at different price points.
If the 6800 comes in at around $500, it should be a pretty good selling card.
That still depends on whether the L3 cache is intended for dGPUs and not just iGPUs.
But let's say it is for dGPUs: traditionally, AMD's GPUs have seemed ridiculously starved for memory bandwidth, to the point that Vega actually warranted HBM2. I get the impression RDNA2 is a little less bandwidth-hungry, but AMD probably realized that the only affordable way to attack this problem is to add a cache. Widening the memory bus means more memory chips, and there just isn't room for that.
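For anyone who wants the bus-width trade-off in numbers, here's a rough back-of-the-envelope sketch. The 16 Gbps data rate and 32-bit-wide chips are just typical GDDR6 assumptions, not confirmed specs for any of these cards; the point is simply that bandwidth scales with bus width, and every extra 32 bits of bus means another memory chip on the board.

```python
# Back-of-the-envelope GDDR6 sketch (illustrative assumptions only:
# 16 Gbps per pin and 32-bit-wide chips are typical GDDR6 figures,
# not confirmed specs for any particular card).

def gddr6_config(bus_width_bits, data_rate_gbps=16, chip_width_bits=32):
    """Return (bandwidth in GB/s, memory chip count) for a given bus width."""
    bandwidth_gbs = bus_width_bits * data_rate_gbps / 8   # bits -> bytes
    chips = bus_width_bits // chip_width_bits             # one chip per 32 bits of bus
    return bandwidth_gbs, chips

for bus in (256, 320, 384):
    bw, chips = gddr6_config(bus)
    print(f"{bus}-bit bus: ~{bw:.0f} GB/s, {chips:2d} chips")
# 256-bit bus: ~512 GB/s,  8 chips
# 320-bit bus: ~640 GB/s, 10 chips
# 384-bit bus: ~768 GB/s, 12 chips
```

Going wider means more chips, more board space and more power, which is presumably why a big on-die cache looked attractive instead.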
Even if they provided benchmarks, I'd still take them with a grain of salt. AMD has been better about sneak-preview benchmarks, but I still don't trust any manufacturer's cherry-picked results.
Absolutely true! But just like with Nvidia's "conditional truths" (when the benchmarks are cherry-picked), I'd prefer those over the wild speculation, which often builds up hype that not even the cherry-picked benches can fully satisfy.
Higher resolution really doesn't make that much difference to the amount of VRAM used.
At 1080p, dual 32-bit frame buffers plus a 32-bit depth buffer use about 24 MB of VRAM.
The same set of buffers at 4K uses around 100 MB of VRAM.
Now some games might have more buffers for various things, but the vast majority of VRAM is used for holding textures, which don't necessarily need to be bigger.
There's no reason you can't play at 4K on 8 GB just fine. Of course it depends on whether the game wants to use way more VRAM, but then the issue is the same at lower resolutions too.
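A minimal sketch of that arithmetic, assuming plain double-buffered 32-bit colour plus a 32-bit depth buffer and nothing else (a real engine may add more render targets, MSAA and alignment padding):

```python
# Quick check of the frame buffer arithmetic: two 32-bit colour buffers plus
# one 32-bit depth buffer, 4 bytes per pixel each (no MSAA, no extra render
# targets, no padding).

def framebuffer_bytes(width, height, buffers=3, bytes_per_pixel=4):
    """Total bytes for `buffers` full-screen buffers at a given resolution."""
    return width * height * bytes_per_pixel * buffers

for name, (w, h) in {"1080p": (1920, 1080), "4K": (3840, 2160)}.items():
    print(f"{name}: {framebuffer_bytes(w, h) / 2**20:.1f} MiB")
# 1080p: 23.7 MiB  (the ~24 MB figure)
# 4K: 94.9 MiB  (roughly the ~100 MB figure)
```

Either way it's a rounding error next to the gigabytes that textures occupy, which is the point being made.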
8 GB was the sweet spot; I think 10 or 12 will become the next one. 16 or 20 GB is just overkill for games, even at 4K or even 8K!
I think AMD's cards are going to be a little slower than Nvidia's counterparts, around 10%, but also cheaper, providing a better performance/price ratio. If AMD had cards better than the ones from Nvidia, they would say it as loudly as possible so every possible customer could hear them.
My guess of course, I need benchies!!!
It's gonna be faster than a 1060 for sure. I went from a 1060 to an RX 5700 and it's like 2 to 3 times faster.
HBM (1 and 2) was put on to save power, as both the memory itself and the memory controller inside the chip use less energy for a given bandwidth.
It's also the reason why professional chips use HBM2 now, as those may run 24/7 and power becomes quite relevant in the long run!
The unfortunate side effect is that it made the BOM too expensive and the gaming cards unprofitable.
HBM is still the future, once it drops in price enough to make GDDR irrelevant.
I don't think you understood me:
I didn't say HBM was used because of the bandwidth; I'm saying that, despite its tremendous bandwidth, the GPUs were able to take advantage of it anyway.
This looks nice from AMD's side, but IMO I doubt that either of these cards can beat the 3080 in 4K gaming. Still, I truly hope they can beat the 3000 series, even though I was lucky to get a 3080 this year!
They don't need to beat the 3080 in 4K gaming. Even if they're 20% slower, they'll still be capable of delivering a good 4K experience.