Discussion in 'Frontpage news' started by Hilbert Hagedoorn, Nov 7, 2022.
Not sure why, but I have a feeling AMD will cut the 7900's MSRP by $100 after the 4080 launch.
Eh, the packaging isn't cheap. The individual pieces of the chip are cheaper (6nm and better yields), so the GPU overall should be cheaper, but the packaging itself is far more expensive than for a traditional monolithic chip.
I also disagree about the "4090 killer". The issue with competing with a 4090 isn't die size; Nvidia isn't limited on die size, it's limited on power consumption.
I'm also really curious to see how AMD's chips perform across a range of games. AMD moved away from GCN because they had trouble extracting ILP from workloads, and now with RDNA3 they've brought back dual-issue SIMDs. It's going to be interesting to see how that plays out. Did AMD cherry-pick the demo results from games with good ILP extraction? Remains to be seen.
The 6nm chiplets are cheaper, but here's the thing: AMD still comes in cheaper than monolithic 5nm even after including the advanced packaging, the GPU die, and the memory cache dies. TSMC's InFO packaging is very cheap, and that small cost still keeps the whole chip under TSMC's 5nm pricing, which is much higher than 6nm. I've already run this past a couple of folks who really know. To say it clearly: if AMD's chip were monolithic on 5nm, it would cost slightly more than the chiplet design split across 5nm and 6nm, advanced packaging costs included.
See some of the power-limited tests below; they clearly show what you can do with a 4090. At 40% less power you lose only about 10% performance. Now imagine using 40% more transistors at the same power, or 50% more at slightly lower frequency. There is a lot of performance left for more shaders if you keep the power down.
TDP-Skalierung: 250 bis 550 W - Seite 10 - Hardwareluxx
Improving Nvidia RTX 4090 Efficiency Through Power Limiting | Tom's Hardware (tomshardware.com)
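The perf/W argument above can be sketched with quick arithmetic. The -40% power / -10% perf figures come from the linked tests; the 450 W board power and the scale-out at the end are my own assumptions:

```python
# Perf-per-watt arithmetic for the 4090 power-limit argument.
# From the linked tests: cutting power ~40% costs only ~10% performance.
full_power = 450.0                        # W, assumed stock board power
full_perf = 100.0                         # normalized performance at stock

limited_power = full_power * (1 - 0.40)   # 270 W
limited_perf = full_perf * (1 - 0.10)     # 90

eff_stock = full_perf / full_power
eff_limited = limited_perf / limited_power
gain = eff_limited / eff_stock            # perf/W improvement from limiting

print(f"Stock:   {eff_stock:.3f} perf/W")
print(f"Limited: {eff_limited:.3f} perf/W ({gain:.2f}x better)")

# Hypothetical: a wider chip running at the limited-clock efficiency point,
# scaled back up to the full 450 W budget, would land around:
scaled_perf = limited_perf * (full_power / limited_power)
print(f"Same 450 W, wider chip at low clocks: ~{scaled_perf:.0f}")
```

That 1.5x perf/W gain is why "more shaders at lower clocks in the same power budget" keeps coming up; it's a sketch of the trade-off, not a prediction of any real SKU.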
Yes I agree we need to see AMD cards third party reviewed before we make too many assumptions.
Which makes you wonder why they went with such a small amount of cache on each chiplet. If the cost is in the packaging and all the interconnect, why go to the hassle of six chiplets with all that interconnect and a mere 16 MB of cache on each one? Why not give each one 32 MB or 64 MB? Wouldn't that barely increase cost? I mean, they vastly increased all the other on-chip caches, but the final level is smaller than on the 6xxx series?
They have to leave something for a refresh. That probably won't happen for at least a year though.
The thing is, Nvidia shot for the moon with power, so the 4090 was pushed hard for performance. Insane performance, but at an insane price. Most of the time Nvidia aims for 30-50% more performance gen over gen; now we got nearly 2x performance in some instances.
This made Nvidia think they could charge more for a non-Ti model, so they released the 4080 at $1,200.
AMD was never going to push power in the first place; they always aimed for where the 7900XTX now sits, so it was always going to be $999. The XT is the only really iffy pricing, being only $100 lower despite cuts across all the specs. This is the real reason I think the 4080 12GB was cancelled. Nvidia knew the 7900XT was going to come out and destroy the 12GB, with the XTX probably beating the 16GB too by around 20-25% in raster.
I think the 7900XT was originally going to be a 7800XTX, but AMD saw the pathetic 4080 12GB and knew they were faster. So they bumped it up to a 7900XT and priced it only $100 below the XTX in order to push people toward the higher-end model.
AMD kept their pricing structure from last gen; Nvidia raised theirs massively, especially with the 4080 SKU at nearly double the price. For what? 100% more cost for 30-40% more performance? AMD is charging 20% more with the XT for what looks like 30-40% more performance over the 6950XT. Even if the XT's pricing is iffy, it still makes way more sense than the 4080's.
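The value comparison in the post above can be put in numbers. This uses only the ratios the post quotes (taking 35% as the midpoint of the 30-40% ranges), not exact MSRPs:

```python
# Perf-per-dollar check of the gen-over-gen pricing claims:
# Nvidia 4080: ~100% more cost for ~30-40% more performance.
# AMD 7900XT: ~20% more cost for ~30-40% more performance vs the 6950XT.
def value_ratio(cost_increase, perf_increase):
    """Perf/$ relative to last gen (1.0 means value unchanged)."""
    return (1 + perf_increase) / (1 + cost_increase)

nvidia = value_ratio(1.00, 0.35)   # midpoint of the quoted 30-40%
amd = value_ratio(0.20, 0.35)

print(f"4080 vs last gen:   {nvidia:.2f}x perf/$")   # value got worse
print(f"7900XT vs 6950XT:   {amd:.2f}x perf/$")      # value improved
```

By these (admittedly rough) numbers, the 4080 delivers roughly two-thirds of last gen's perf per dollar while the XT improves on it, which is the post's point in one calculation.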
Almost certainly to keep latency down. Splitting the cache off in the first place is giving you a latency penalty, so I imagine all their optimization was to hide/regain latency where they could.
Also to be clear I'm not saying the packaging cost is some outrageous price. His initial post made it sound like the packaging would be cheaper than alternative solutions and I was merely clarifying that it's not, it's more expensive. But you're going to save cost everywhere else.
Judging by the forums here, I'd say Nvidia's drivers are behind AMD's right now, maybe not by a mile...
With 80-odd percent of the market, there are always going to be more reported issues on the Nvidia side. That's just a fact until AMD reaches some sort of parity.
If you're willing to drop $1,200-1,300 for better RT + DLSS, then you're willing to drop $1,600-1,700 for max performance without hesitation.
Also true, but that doesn't mean AMD's drivers aren't better right now.
It's worth noting that the cache helps compensate in bandwidth- and latency-constrained situations. With a 384-bit bus and 20 Gbps GDDR6 hitting 960 GB/s, the GPU is much less starved per compute unit than the RDNA 2 cards, so it's possible it simply doesn't benefit as much from increases in cache. 3D-stacked variants of the cache dies are also possible for future products.
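The 960 GB/s figure falls straight out of the bus math. A quick sketch; the per-CU comparison uses the 7900 XTX's 96 CUs against the 6950 XT's 256-bit / 18 Gbps / 80 CU configuration, which are my figures, not from the post:

```python
# GDDR6 peak bandwidth: bus width (pins) * per-pin data rate / 8 bits-per-byte.
def gddr_bandwidth_gbs(bus_width_bits, data_rate_gbps):
    """Peak memory bandwidth in GB/s."""
    return bus_width_bits * data_rate_gbps / 8

rdna3 = gddr_bandwidth_gbs(384, 20)   # 7900 XTX: 960 GB/s
rdna2 = gddr_bandwidth_gbs(256, 18)   # 6950 XT: 576 GB/s

# Per-CU feed (96 CUs on the 7900 XTX, 80 on the 6950 XT)
print(f"7900 XTX: {rdna3:.0f} GB/s total, {rdna3 / 96:.1f} GB/s per CU")
print(f"6950 XT:  {rdna2:.0f} GB/s total, {rdna2 / 80:.1f} GB/s per CU")
```

So each RDNA 3 CU gets roughly 10 GB/s of raw memory feed versus about 7.2 GB/s on the 6950 XT, which supports the argument that the new design leans less on last-level cache.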