Interesting read. If this is the case, then it will be the microarchitecture present in the Radeon 400 Series, including the 'Greenland' GPUs. It confirms AMD's plans for a major technological upgrade next year. Graphics Core Next, a name that has accompanied every GPU since 2011, will soon be replaced by Polaris. http://www.fudzilla.com/news/graphics/39559-amd-polaris-slide-leaked I thought GCN was developed for DX12; looks like Polaris might be AMD's true DX12 hardware. I hope it is good and takes the fight to NVIDIA. NVIDIA has been too dominant lately and is becoming sloppy and lazy (drivers are not as good as they used to be). http://www.guru3d.com/articles-page...-january-2016-amd-polaris-architecture,1.html
Yup, pretty interesting indeed. More links: http://hexus.net/tech/news/graphics...-series-based-upon-polaris-microarchitecture/ http://www.techspot.com/news/63311-amd-next-gpu-architecture-allegedly-called-polaris.html
Soon. I feel like both AMD and Nvidia played us when it comes to DX12. They will both release new architectures in 2016 which will probably be far superior to what we have now. Talk about false marketing. And people still want DX12 support for Fermi cards, silly stuff.
As far as we know, "Polaris" is as much GCN as "Fiji" was. Nicknames don't mean a lot. Efficiency-wise, GCN 1.2 reached its peak with the Nano, which really is giving 390X performance at a 175W TDP, thus basically reaching almost Maxwell levels of efficiency. I can see them tweaking it more, shrinking it to 14nm, and you have Polaris alright. EDIT: I just saw Raja Koduri's tweet about Polaris. "Raja Koduri Retweeted Chris Hook @Gchip Polaris is 2.5 times brighter today than when Ptolemy observed it in 169 A.D" I'll put my tinfoil hat on and predict 2.5 times better performance per watt, and 16.9 billion transistors.
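For anyone curious how that "almost Maxwell levels of efficiency" claim pencils out, here's a back-of-envelope sketch. It assumes the Nano really does match 390X gaming performance (as claimed above) and uses AMD's rated board powers (R9 390X at roughly 275W, R9 Nano at 175W); these are TDP figures, not measured draw, so treat the result as a rough forum estimate.

```python
# Back-of-envelope perf/watt comparison, per the forum claim above.
# Assumption: Nano delivers the same gaming performance as the 390X.
perf = 1.0  # normalised performance, identical for both cards

tdp_390x = 275.0  # watts, AMD's rated board power (assumed)
tdp_nano = 175.0  # watts, per the post above

eff_390x = perf / tdp_390x  # performance per watt, 390X
eff_nano = perf / tdp_nano  # performance per watt, Nano

gain = eff_nano / eff_390x
print(f"Nano perf/watt gain over 390X: {gain:.2f}x")  # ~1.57x
```

So even within GCN 1.2, binning and clocking alone bought roughly a 1.6x efficiency gain; a further 2.5x from the node shrink plus architecture tweaks isn't a crazy tinfoil-hat number.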
Lisa Su actually talked a while back about their FinFET 2016 GPUs, and specifically called them GCN. Polaris is just a new coat of marketing paint for its new 16nm node, since people have become bored of/wary of the GCN moniker. But from an architecture point of view, AMD would be stupid to discard it completely and start something from scratch: future console generations would lose hardware backward compatibility with the PS4/Xbox One.
The article isn't clear on what "Polaris" actually is. It mentions an "AMD presentation slide which refers to Polaris as the fourth generation of GCN", so GCN 1.3, yet it also says "Graphics Core Next, a name that accompanied each GPU since 2011, will soon be replaced by Polaris", implying it'll replace GCN. IMO it's a bit of a trash article, as it's all based on speculation and conjecture. It could easily be GCN 1.3/2.0; we could see it as a die-shrink of Fiji with new instructions/optimisations, or as an altogether new architecture. We just don't have enough information to form a concrete conclusion. For now, all we know is that there is a new GPU coming out on 14nm that may have up to 2.5 times the performance per watt of GCN 1.2. That is the information that's important.
AMD is such a stupid company. They are comparing Maxwell and AMD GCN 4.0 in performance per watt. They should have waited for Pascal to make this kind of comparison, rather than showing their desperation for the dGPU market.
'Competition', very legit in terms of what it refers to. It's all just marketing talk... yes, they have a new architecture, but what it will do we'll see once it's around. Not much different from Nvidia claiming marvellous gains (deep learning and stuff they mentioned) or naming their next architecture after Volta...
I don't believe anything AMD says. They are the best at PR PowerPoint slides, but in reality nothing they promise comes out, or at least most of their promises are never fulfilled. 2016 is gonna be interesting: Polaris vs Pascal, the SD820, and Chinese companies that will rule the smartphone world very soon... yeah, awesome.
I just realised that, to make their big point about Polaris using less power, they used a system with a 4790K for testing. It just makes me grin and shake my head that they didn't use any AMD CPUs...
No, let's believe NVIDIA instead (or any other company for that matter). Why the fanboyism? Most company PR slides are accurate these days; the internet has seen to that. You do have to be careful with the conditions they mention, though. Even if you take the shrink into account, 86W vs 140W is immense. Most forget that a smaller process doesn't automatically mean less power consumption. Sometimes it might even mean MORE power consumption due to leakage. One of the most interesting battles we'll see is going to be GlobalFoundries/Samsung vs TSMC for the 16/14nm GPUs. NVIDIA's design might be better (for all we know it could also be another GeForce FX, **** happens in all camps), but TSMC's process might be worse than GloFo/Samsung's. Don't be a fanboi, buy with your brain, and DON'T buy until BOTH companies have their new stuff out. Also, don't buy until NVIDIA has their "Ti" product out. As for the architecture not being GCN, that would invalidate their whole strategy right at the point where it starts working.
Sure it does. The actual product might not carry a lower TDP due to a higher transistor count, but for the same performance you'll always get lower power consumption on a smaller node. It's been like that ever since... Nvidia invented the GPU /runs-away /hides
AMD is right that GCN 4.0 competes with Maxwell, not Pascal, because if Nvidia put Maxwell on a new node it would really be competitive with GCN 4.0, so leave Pascal alone. That is what the R&D shortage did to AMD.
Exactly, plus with that PR talk you sound like an AMD fanboy. And honestly, AMD has disappointed a lot recently; they keep making big promises and failing at almost all of them. And yes, Nvidia has their share of bad behavior, they disgust me too at times. The Kepler downgrade was real, very real. I had a 780 back then.