Discussion in 'Frontpage news' started by Hilbert Hagedoorn, Jun 10, 2019.
64 CUs is what I expect from the largest Navi.
Was wondering if they have lifted that limitation with RDNA. At E3 Lisa said the architecture is scalable so I was kind of thinking that was a little hint they might have lifted this limitation.
Let's just see where the actual performance falls in line after reviews (especially Guru3D's), then we can argue whether the pricing is fair or not.
Not saying that's a hardware limit, just that that's where I see them stopping on current 7nm.
We don't know the power usage yet, just the TDP. If it's closer to 180 watts, it's possible. Also, a new chip on 7nm+ will be more efficient (~10%), and of course a large chip will likely run at lower clocks. I'm not sure big Navi will have more than 64 CUs; that's just wishful thinking from a competition perspective.
It remains to be seen. We don't know the voltage requirements at different clocks and the resulting power draw.
But considering it is a 10.3B-transistor GPU on 7nm, I'd say that's quite a lot of power per transistor. (As usual, clocked too high.) Lowering the clock by around 10% may lead to 30% lower power draw.
Making a 1.5x bigger GPU at 0.9x the clock could easily result in 35% higher performance at 250W. Or even higher performance, since this 5700 XT is in "GCN compatibility mode" while not looking like GCN at all, and the bigger one is supposed to come with a rather different CU arrangement.
Now, the important part for Navi is not power draw. That's blown out of proportion in the usual fashion. What's important is performance per clock per transistor. And at a given clock, those transistors do a lot of work. If the big one does even more...
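The 0.9x-clock, 1.5x-size estimate above can be sanity-checked with a quick back-of-envelope script. It assumes dynamic power scales roughly as f·V² and that voltage can drop proportionally with frequency in this range (so P ~ f³); that cubic rule of thumb is my assumption, not a measured Navi voltage curve:

```python
# Rough scaling check. Assumes P ~ f^3 (dynamic power ~ f * V^2,
# with V dropping proportionally to f) -- a rule of thumb, not data.

BASE_TBP = 225.0      # W, RX 5700 XT typical board power
CLOCK_SCALE = 0.9     # 10% lower clock
SIZE_SCALE = 1.5      # 50% more CUs / transistors

power_per_cu = CLOCK_SCALE ** 3          # ~0.73x -> ~27% lower power per CU
big_chip_power = BASE_TBP * SIZE_SCALE * power_per_cu
performance = SIZE_SCALE * CLOCK_SCALE   # ideal scaling: CU count * clock

print(f"power per CU:  {power_per_cu:.2f}x")   # ~0.73x
print(f"big chip TBP:  {big_chip_power:.0f} W") # ~246 W
print(f"performance:   {performance:.2f}x")     # ~1.35x
```

So under these assumptions a 50%-bigger chip at 0.9x clock lands near 250W with ~35% more throughput, which is the claim above.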
Not quite sure what you meant about the Vega cards... https://imagescdn.tweaktown.com/news/6/6/66204_21_amd-details-new-navi-powered-radeon-rx-5700-xt.jpg
Where did you see the TDP? I saw board power some places, which may or may not be official.
Is TDP same as TBP?
That article says it isn't.
Power draw is always important. It determines the amount of heat produced. More heat means either bigger coolers with more fans, or more noise from having to spin the fans much faster. While, yes, it can be blown out of proportion by some, saying it's unimportant is a bit out of touch, no?
This card is competitive and is soaking up the prices Nvidia set the bar at; you can thank them. I'll be waiting for big Navi in 2020; it had better have more goddamn ROPs by then.
No, it is definitely not "out of touch" for the train of thought to which I replied.
To sum it up for you simply:
- Original premise made by JamesSneed: a Navi-based chip could be around 450mm^2 and comfortably outperform the 2080 Ti
- Denial's counter: such a GPU would have high power draw (probably meant TDP), as the 251mm^2 RX 5700 XT already has a 225W TBP
- My realistic outlook: this 251mm^2 GPU is pushed so far out of its comfort zone (like most of AMD's GPUs in the last 7 years) that lowering its clock by 10% may lead to 30% lower power draw if voltage is decreased too. That alone suddenly makes a bigger chip look much more realistic, especially since I wrote not about almost doubling the chip size, but about adding just 50% more transistors.
And to put more blunt perspective:
RX 5700 => Typical Board Power (Desktop) : 180 W @1625MHz Game Freq
RX 5700 XT => Typical Board Power (Desktop) : 225 W @1755MHz Game Freq
RX 5700 XT 50 => Typical Board Power (Desktop) : 235 W @1830MHz Game Freq
You can notice a few things:
- TBP != TDP
- Disabling around 6% of the GPU and reducing the clock by 7% results in AMD deciding on a 20% lower TBP
- 4% higher clock on the XT 50 => 4% higher TBP
- TBP is a matter of choice on AMD's part. They set certain power limits, and the GPU stays within them while opportunistically boosting clocks if the limit is not reached. (Same way as Zen CPUs.)
- AMD sets a clock range, and the chip clocks up and down to stay inside the predefined power limits. (Thinking otherwise would be far from wise, as that would mean AMD sets clocks and the CPU/GPU eats whatever it eats under a given workload...)
- Clocks being under the control of a power limit go back at least to the Excavator APUs, where the CPU part would downclock when the GPU part ate too many watts.
What this means:
An RX 5700 with its limits set to 250W may match the 5700 XT as long as it is able to operate at around 1850MHz. (Still under the 1905MHz boost clock of the 5700 XT, and way under the 1980MHz boost clock of the XT 50.)
Clocks of all those GPUs are limited by their power limits. Good cooling plus raised limits and you may see those max boost clocks in games.
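The ratios in the list above can be checked directly against the three SKUs' figures (the wattages and game clocks are the ones quoted in this post):

```python
# Sanity-check the quoted clock/TBP deltas relative to the RX 5700 XT.
# Numbers are AMD's published typical board power and game clocks
# as listed earlier in this thread.
skus = {
    "RX 5700":       {"tbp": 180, "game_clock": 1625},
    "RX 5700 XT":    {"tbp": 225, "game_clock": 1755},
    "RX 5700 XT 50": {"tbp": 235, "game_clock": 1830},
}

xt = skus["RX 5700 XT"]
for name, s in skus.items():
    clock_delta = s["game_clock"] / xt["game_clock"] - 1
    tbp_delta = s["tbp"] / xt["tbp"] - 1
    print(f"{name}: clock {clock_delta:+.1%}, TBP {tbp_delta:+.1%}")
# RX 5700: clock -7.4%, TBP -20.0%
# RX 5700 XT 50: clock +4.3%, TBP +4.4%
```

The -7.4% clock versus -20% TBP gap for the plain 5700 is exactly why the per-SKU power limits look like a deliberate choice rather than a hardware ceiling.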
- - - -
So back to Bigger Navi:
AMD can make even a 450~500mm^2 GPU, as mentioned by JamesSneed, and it would stay within predefined TBP/TDP limits while boosting clocks opportunistically.
Fully utilized by a heavy workload... maybe 1550MHz. Lightly utilized, maybe 1950MHz. AMD's approach and the 7nm clock range definitely enable that.
Then, please remember the next iteration is coming next year. AMD is not stupid; if they expect that a GPU would need more power, they would go for improved 7nm to achieve the required power efficiency boost.
- - - -
Sorry for WOT.
Whoops, my bad! My apologies.
So you didn't watch the stream? The guy said that across all games the best possible results were compared (meaning if in one game the 2070 was best in DX11 and the RX 5700 XT in DX12/Vulkan, then those results were displayed; that's the fairest comparison possible).
Edit: Nevermind, someone already mentioned.
July 7 NDA lifts for reviews?
I'm not disputing what you are saying here at all. I was saying actual power draw is very much important. Lower power draw means a cooler card. That is all I was saying.
With that I do agree. The first thing I am doing, the moment we are able to, will be taking control over power and voltage.
Got my RX 580 easily from a 185W max down to a 145W max, while it performs as well as when I just unlocked the TDP to 240W, where it ate mostly between 210 and 220W.
If we can undervolt the 5700 XT efficiently, I can already see higher clocks. I'll definitely be bumping the power limit too (if possible) just to see how clocks and performance change.
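For perspective, the RX 580 undervolt described above works out like this (the wattages are this poster's own observations, not official figures):

```python
# Efficiency math for the RX 580 undervolt quoted above.
# All wattages are anecdotal observations from the post, not specs.
stock_max = 185.0        # W, stock power limit
undervolted_max = 145.0  # W, after undervolt, same performance
unlocked_draw = 215.0    # W, midpoint of the 210-220 W unlocked range

saving_vs_stock = 1 - undervolted_max / stock_max
saving_vs_unlocked = 1 - undervolted_max / unlocked_draw
print(f"vs stock limit:      {saving_vs_stock:.0%} less power")    # ~22%
print(f"vs unlocked (~215W): {saving_vs_unlocked:.0%} less power") # ~33%
```

Roughly a fifth to a third less power for the same performance, which is why undervolting the 5700 XT looks promising too.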
5700 power draw is just fine as is. No need to be lower.
For one, the 5700, like prior Radeon cards, uses a hardware scheduler, among other things that make the Radeon architecture different from Nvidia's. Thus the need for more power.
IMO, talking about how "high" the TDP of the 5700 is, is foolishness at best, because you don't have a reference point to call it high to begin with.
But by all means, if there is one, I'd be the first to want to see it.
This will likely be true; however, none of us have real power numbers, so we don't really know. We have TDP and can calculate max total board power, but neither tells us actual power draw. We shall know soon enough.
@Hilbert Hagedoorn you should send that 5700XT back because clearly it has been dropped.
Looks pretty terrible from a performance standpoint, just like the RTX 20 series (except for the 2080 Ti).
You'll get the same performance as Vega 64 with basically nothing new besides some post-process image sharpening. RTX 20 series is also pretty disappointing with the failure that is DLSS and the sub-par performance of RTX.
2018 and 2019 will go down as the worst time to buy a GPU, as prices have nearly doubled for the high end and mid-range, and performance improvements have been next to nothing from one generation to another.
Bring on 2020 and 7nm+ or EUV GPUs from both camps. Also, let's see what Intel has to offer, though I still laugh when I think of Intel and graphics in the same sentence. They're "extreme!!"
You'll notice HDMI 2.0b support (not 2.1). I'll wait till cards come out with HDMI 2.1 support. The best buy right now is a Vega 56, most bang for your buck.