Discussion in 'Videocards - AMD Radeon' started by WhiteLightning, Sep 28, 2018.
Wish they would be released at the beginning of the year.
Patience, my friend, all good things come to those who wait. I'm willing to wait for big Navi, I have a feeling it'd be worth the wait.
Guys, don't overhype this. If they can give us 2080Ti performance at around €600, it would be perfect, even if Nvidia has the crown with the 3080 Ti that is inevitably coming at around the same time.
€600 might be a bit optimistic; I'd say closer to €800.
AMD Radeon RX 5700 XT Reference Cards are EOL?
AIB partners are very important to ATI/AMD; that's why their reference GPUs use blower-style coolers.
I'm realistic enough to know that the RTX 3080 Ti will be the new king of the hill after the RTX 2080 Ti. I just hope NAVI 23 can at least match, or slightly exceed, the RTX 2080 Ti in games without RT. And I hope that AMD's RT hardware implementation is more... robust (?) than nVidia's.
If it's a 600€ GPU (incl. f!ck!n Italian taxes) it'll be mine, or at least I hope a 600€ card will be released...
I don’t want to spend too much because I have to put it in my watercooling loop.
I'll buy it because I think my Vega 64 won't be enough to play Cyberpunk 2077 with every single option maxed at 1080p60, and because ray tracing (which I now consider wasted die space until the conslowles have it) will gain traction, and Cyberpunk has it.
We will have all price points covered.
199-299 & 349€ 5700/5700XT
399-499€ for 58xx
599-699€ for 59xx
And that's all, IMO.
By the time AMD actually releases a card faster than the 5700 XT, Nvidia will be on 7nm and the 3080 will beat the crap out of it. Intel joining the playground with its RT-capable cards will leave AMD in last place. Don't forget to overhype this new "nvidia killer" GPU so we can all be disappointed when it's here (sounds familiar?).
AMD should reconsider pricing. Their cards are no longer the smart buy they were in the past. GPUs are not CPUs: Radeon has not been selling as well as Ryzen for years now, and that means something. Lisa Su should have gotten the hint by now; she doesn't seem to be a slow-paced person.
At least back then we got a killer price/performance ratio. AMD lost it, and Nvidia doesn't even care; instead there is overpricing.
It won't really matter what the implementation will be, since it will be from day one in all the new consoles. So, whatever it is, the industry will have to write for it, and this is kind of a major point in AMD's favor. Nvidia will try to grab some big titles for hype, for sure (like Cyberpunk 2077), but if even those games implement DXR, then the new AMD cards shouldn't have issues with it.
The 5700XT is obviously a mid-low end card. Don't be fooled by the performance or the pricing, this is a decidedly mid-sized die (only 250mm2). My bet would be that the new consoles are basically that GPU + RT + some stuff from the next gen of Navi + 75mm2 Ryzen CPU. I cannot see them go above 350mm2 for the start of the generation.
The original PS4 had a 328mm2 die, which would fit with something in that ballpark.
As for the PC, the real high end is in the 600mm2 range, and the real question is how Nvidia's die shrink goes. Big Navi (80+ CUs) will be as fast as the 3080, and if it has RT, it will be a great card, provided AMD can keep thermals in check. I don't think they could hit Nvidia in the money-no-object 3080 Ti range, but they could have compelling offers for the mass market.
There are two things to consider when one looks at next GPU releases.
1st) Where each company stands.
2nd) What a denser, more power-efficient node can give them.
And here is where most of our fellow gurus make a fatal mistake in judgement, by underestimating nVidia's current standing and overestimating AMD's. 7nm power efficiency is going to improve clocks only a bit: while AMD still has to balance clock vs. power draw on a ~10B-transistor chip at 7nm and manages to reach ~1800MHz, nVidia is already at ~1750-1850MHz even at "12nm". The move to 7nm will enable nVidia to get maybe 12% higher boost clock on that size of chip. It will be nicely power efficient, but that's about it. And it would be very naive to expect some meaningful improvements to traditional rendering methods when they are pushing for RT.
And the funny thing is that their 2080 Ti already has boost clocks ranging from 1750 to 1850MHz. If the 2080 Ti was clocking much lower, one could at least argue that there is big clock potential with extra power-draw headroom. On the other hand, AMD, which has recently reworked a lot of things, is in a place where it can still deliver surprises with RDNA, especially if they improve power efficiency.
And that brings us to trade-offs. nVidia is trading die size for performance, and someone has to pay for that die depending on yields. AMD is trading away power efficiency, so one has to pay for a more robust cooling solution and a bit more electricity over time.
So, where can nVidia go? 12% higher clock * 20% more transistors at 40% higher price due to more costly manufacturing with lower yields?
And AMD? -5~10% clock, 100% more transistors, and some 25% power-efficiency improvement. Then they can go for HBM2+ again to save some extra watts. This time around it can actually be cost efficient: 3x 4GB stacks could be the gaming sweet spot.
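Just to make the napkin math above explicit, here is a toy sketch of the post's own speculation. All figures are the guesses quoted above, not measured data, and real GPU performance does not scale linearly with clock and transistor count; this only illustrates the argument:

```python
def relative_perf(clock_scale, transistor_scale):
    """Naive estimate from the post's logic: perf ~ clock delta x transistor delta."""
    return clock_scale * transistor_scale

# nVidia on 7nm: ~12% higher clock, ~20% more transistors (speculated above)
nv = relative_perf(1.12, 1.20)        # ~1.34x

# AMD big chip: 5-10% lower clock, ~100% more transistors (speculated above)
amd_low  = relative_perf(0.90, 2.00)  # ~1.80x
amd_high = relative_perf(0.95, 2.00)  # ~1.90x

print(f"nVidia ~{nv:.2f}x, AMD ~{amd_low:.2f}-{amd_high:.2f}x")
```

Even under these naive assumptions, the doubled transistor budget matters far more than the clock delta, which is the whole point of the post.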
Regardless, I just hope big NAVI is powerful enough to do at least 3440x1440 at high-max settings with RT enabled. I need a single powerful GPU: I tried running two Vega 64s in my case and, mostly due to my mobo's PCIe x16 slot placement, my primary GPU hit its critical thermal threshold. So, one single GPU with enough POWAH for my required res and game settings, that's all I want... and it's gotta be AMD (gotta maintain my AMD street creds).
The larger the die, the more power it consumes. Nvidia isn't really "trading", as they are not at the level of needing to push their hardware harder, unlike AMD, which seems to need to run everything it makes outside its optimal voltage curve. One is ahead, one is clearly behind.
Nvidia will go with EUV, which means that their yields and costs will be completely fine. As the node matures, it will be worth the trade-off, like in every node transition.
I cannot see higher clocks out of them, but I could see a 6k+ core 3080 Ti on a smaller die and at lower power than the current 2080 Ti.
There will be no HBM this time around, I think that AMD is traumatised by it.
I bet there will be Big Navi with HBM2 + HBCC* (HBM3 will be too expensive)
Save my post for the future
* high-bandwidth cache controller
SK Hynix reveals insanely fast HBM2E memory - Faster than an RTX 2080 on a single chip
SK Hynix has revealed the world's fastest HBM memory product, promising staggering speeds of up to 460GB/s per memory stack.
Each stack can also offer up to 16GB of capacity, offering insane speeds and capacities given its form factor.
For context, Nvidia's RTX 2080 ships with 8GB of GDDR6 memory and delivers 448GB/s of memory bandwidth.
Yes, that means that SK Hynix can offer two times as much memory capacity and more memory performance on a single HBM2E memory chip.
Back in March, Samsung announced HBM2E memory stacks which offer 410GB/s of memory bandwidth, placing them behind SK Hynix's latest offerings.
This memory was already 2x as fast as the HBM2 memory used on AMD's RX Vega 56, a feat which makes SK Hynix's HBM2E offerings appear all the more impressive.
SK Hynix targets mass production of its HBM2E memory in 2020, which is when SK Hynix believes markets for HBM2E memory will open up.
SK Hynix expects customers from the GPU market, creators of machine learning accelerators and other AI chip makers.
Right now, the most bandwidth rich graphics card to use HBM memory is AMD's Radeon VII, which offers 1024GB/s of total memory bandwidth.
Using SK Hynix's new HBM2E memory, a Radeon VII-like graphics card would offer 1840GB/s of bandwidth, a boost of almost 80%.
Even in the wake of GDDR6, SK Hynix has confirmed that HBM memory has a future.
The company's latest HBM2E memory modules pack more capacity and bandwidth per chip than all but the highest-end GDDR6 memory setups, which is no small feat for such compact, energy-efficient chips.
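The article's Radeon VII comparison is simple arithmetic, and it checks out. A quick sketch, using only the figures quoted in the article above (four HBM2 stacks on the Radeon VII, 460GB/s per HBM2E stack):

```python
# Bandwidth figures as quoted in the article above.
HBM2E_PER_STACK_GBPS = 460   # SK Hynix HBM2E, per stack
RADEON_VII_STACKS    = 4     # Radeon VII carries four HBM2 stacks
RADEON_VII_GBPS      = 1024  # Radeon VII total memory bandwidth

# A hypothetical Radeon VII-like card with four HBM2E stacks:
hypothetical = RADEON_VII_STACKS * HBM2E_PER_STACK_GBPS
uplift = (hypothetical / RADEON_VII_GBPS - 1) * 100

print(f"{hypothetical} GB/s, ~{uplift:.0f}% more bandwidth")  # 1840 GB/s, ~80%
```

A single 460GB/s stack also clears the RTX 2080's 448GB/s total, which is where the headline's "faster than an RTX 2080 on a single chip" claim comes from.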
Couldn't hurt, but I wonder if clock speed isn't a bigger issue than raw bandwidth, at least for gaming. It is known that AMD's GPUs have been seeing higher-than-expected gains from memory improvements, so there is some bottleneck, and it could well be related to both memory speed and bandwidth; it improved with the VII and again with the Nano, but some bottleneck remains. HBM is going to be costly, though, and I don't expect the E variant to change that. It's a nice addition to GPU hardware, but it's problematic given how much it currently costs.
Guess HBM3 is further out, then, if they made additional adjustments to HBM2 (Samsung had some 2.99 thing earlier too, I believe). I also heard a low-cost version was considered, and then nothing more; it probably sacrifices cache or something to reduce the cost and complexity of the HBM and interposer circuitry.
Nice to see though whatever this ends up used in.
Except that you wrote that I am right. First, you even think that nVidia will not be able to increase clocks (while I think around 12% is realistic). And that leaves gains coming only via transistor count.
And no, 7nm is not cheaper than 12/14/16nm per transistor; neither is 7nm EUV. Therefore a direct die shrink of the 2080 Ti to 7nm EUV will cost more to make, but will deliver higher power efficiency, and therefore higher clocks, lower power draw, or some combination.
And the power thing is one big misunderstanding on your part. nVidia is ahead, therefore their clock gains from better power efficiency at the same transistor count will be lower than AMD's gains. That's because nVidia is closer to the clock wall of the given manufacturing process at the same transistor count and power draw.
As for HBM, that remains to be seen, but it is a strong option. The interposer is already a low-complexity device. And while the cost argument works against HBM, I think AMD needs the reduced power draw more than the bandwidth. The price difference between 12GB of GDDR6 and HBM is not going to be crazy, especially if AMD does not need cutting-edge chips.
SAPPHIRE PULSE RX 5700 XT Tested