Discussion in 'Videocards - AMD Radeon' started by OnnA, Jul 9, 2016.
Even if it's only 50% true
AMD Vega 10, Vega 11, Vega 10 x2 and Vega 20 In 2017 and 2018
Vega 10 is coming next year and it seems it will be faster than the P100 card. The P100 (21.2 TFlops half precision) is a supercomputer part with much higher theoretical performance than GP102 (10.97 TFlops single precision). Going by the raw numbers, AMD Vega 10 at 24 TFlops half precision would significantly outperform the Geforce Titan X.
Thanks to our well-informed and reliable friends, Fudzilla has learned that you can expect as much as 24 TFlops of half-precision and 12 TFlops of single-precision performance. Both numbers are higher than what Nvidia scores with the Pascal P100, its highest-end Tesla chip.
Earlier we learned that HBM 2 won’t really be available before the end of the year, and this is one of the main reasons why Vega 10 had to be pushed to 2017. Nvidia has announced the Tesla P100, but it has always said it plans to ship it in Q4 2016.
Bear in mind that the Titan X uses GDDR5X memory and scores significantly lower than the Tesla P100. The Titan X uses the GP102 chip, which packs 12 billion transistors, while the P100 has 15.3 billion.
It is safe to say that Vega 10 will be a rather big chip, and it gives you the idea that, in case Volta is not ready to launch in time to compete with Vega 10, Nvidia could have another Titan card based on the P100 with HBM 2 memory up its sleeve.
AMD is working hard to compete on all fronts again. The Polaris generation is a decent competitor in the mainstream market, but it cannot really touch Nvidia in the high-end sector. That might change with Vega 10, which would make AMD / RTG a bit more competitive.
The Vega 10 comes with 16GB HBM 2 memory.
In 2017, AMD finally plans to move all of its GPUs to new FinFET-based designs. While Polaris serves the mainstream, Vega will be aiming for the high-end market. The new graphics chip for Deep Learning will be Vega 10, a very beefy GPU which we detailed yesterday. AMD Vega 10 is expected to launch in the first quarter of 2017.
From the leaked details, we now know that Vega 10 can feature up to 4096 stream processors based on a new GCN microarchitecture (GFX9). The chip has as much as 24 TFLOPs of half-precision and 12 TFLOPs of single-precision compute performance. It also features up to 16 GB of HBM2 delivering 512 GB/s of bandwidth. Cards based on this chip are rated at a 225W TDP.
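The quoted memory bandwidth is easy to sanity-check: each HBM2 stack exposes a 1024-bit interface, so total bandwidth is bus width × per-pin data rate ÷ 8. A quick sketch, assuming a two-stack configuration at a 2.0 Gbps pin rate (neither detail is confirmed by the leak):

```python
# Sanity check on the leaked 512 GB/s HBM2 figure.
# Assumption: two HBM2 stacks at 2.0 Gbps per pin (not stated in the leak).
def hbm2_bandwidth_gbs(stacks: int, pin_rate_gbps: float) -> float:
    """Bandwidth in GB/s: each HBM2 stack has a 1024-bit bus; 8 bits per byte."""
    bus_width_bits = stacks * 1024
    return bus_width_bits * pin_rate_gbps / 8

print(hbm2_bandwidth_gbs(2, 2.0))  # -> 512.0 GB/s, matching the leak
```

The same arithmetic would put the 1 TB/s figure quoted later for Vega 20 at four stacks at the same pin rate.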
We have also noticed that Vega 10 isn’t aimed at the HPC market but is instead focused on Deep Learning servers. Two parts will be available in 2017: a single-chip and a dual-chip solution. Both feature double precision at 1/16th the rate of FP32. That suggests Vega 10 is positioned like GP102, which cuts down double-precision compute relative to the beefier GP100 GPU. The details will be covered in a more informative article later on.
AMD Vega 20 Will Be Their First FinFET Based HPC Chip – Launching in 2H 2018
This suggests that Hawaii will still stand as AMD’s HPC muscle up until 2018. In the meanwhile, AMD will develop a new GPU known as Vega 20, with double precision rated at 1/2 the FP32 rate. This would be a solution targeting the GP100 or the new Volta-based GV100 chip, which is expected to be introduced by then.
The latter is more likely, since Vega 20 would not only use an enhanced design but also be built on GloFo’s new 7nm FinFET process. With up to 32 GB of HBM2 memory (1 TB/s) and a fast 64-compute-unit design, this chip should have impressive double-precision capabilities for HPC platforms.
Also, do you remember that we mentioned a high-performance interconnect being developed by AMD a while ago? Looks like the solution will be known as xGMI. This peer-to-peer solution will be embedded in HPC servers that include Vega 20 and Zen based Naples processors.
Meanwhile, Navi 10 and dual-Navi cards are expected to ship in 2019. We can expect these cards to leverage a new GCN architecture, one revised extensively for a greater efficiency leap over Vega. Next-gen memory (HBM3) and PCI-e 4.0 are also in the talks. Navi 10 will be aimed at the Deep Learning market, while the smaller Navi 11 core will target the Inference market.
AMD Navi 10, Navi 11, Navi 10 x2 in 2019, Navi 20 in 2020?
Moving forward, AMD has plans to ship Navi in 2019. Now, you might believe that Navi has been delayed, but we don’t find that to be the case. This roadmap is specific to the server platform, and these products tend to arrive later than consumer parts. Hence, Navi could still be a planned 2018 launch on the consumer side but miss that time frame on the server side of things.
Just like Vega, Navi won’t see an HPC solution for a while. The Vega 20 GPU will remain at the helm of AMD’s HPC offerings and will eventually be replaced by a powerful double-precision part in the form of Navi 20, which could hit the market around 2020. That’s it as far as AMD’s server roadmap is concerned; we will cover a specification comparison between NVIDIA Pascal/Volta and AMD Vega/Navi GPUs in the days ahead to assess their potential in the HPC market.
THX to Fudzilla & Web
AMD Radeon Pro Series To Get Radeon Pro S9 Nano !
The AMD Radeon Pro series was introduced two months back and featured a range of Polaris 10, Polaris 11 and Fiji based cards. The main attractions were the Radeon Pro WX 7100 and the Radeon Pro SSG. The former is the most affordable workstation card, while the latter is a heavy-duty, Fiji-based card that comes with an on-board SSD to deliver an immense boost in productivity and performance in workstation suites. More on these two cards here.
AMD plans to introduce a new server-focused card in this lineup too. Based on the Fiji GPU, the card would be known as the Radeon Pro S9 Nano. The “WX” prefix stands for workstation, while “S” stands for server. The card is expected to feature the same specifications as the Radeon R9 Nano, along with some clock adjustments. The launch could happen in a matter of weeks, as mentioned in the leak. The Radeon Pro S9 Nano will target the inference market and can be seen as competition for NVIDIA’s recently launched Tesla P40 inferencing solution.
Of course, the Radeon Pro S9 Nano won’t be the only inferencing card available. AMD will also launch cheaper cards based on Polaris 10 and Polaris 11 GPUs. These will go up nicely against the Tesla P4 solution, possibly in a more affordable price bracket.
Lol, 24 TFLOPs is half precision... talk about blowing things out of proportion.
Yup, I did write about that on the earlier page. But indeed, that 20 TFLOPs on the P100 is also half precision. Vega 10 would be a tad faster than the P100 or Titan X (Pascal) in terms of compute if these numbers are true. Kind of like how the Fury X has more compute than the Titan X and 980 Ti.
That 24 TFLOPs is 16-bit half precision; SP should be around 12 TFLOPs. Not too far-fetched, but it is a click-bait, misleading article and the writer knows it.
Fiji XT was 8.6 TFLOPs SP, and if the cores are basically the same as Polaris, I see it clocked more around 1320-1360 MHz, not north of 1.4 GHz.
8.9, close enough. I think I read that Vega should be a bigger step than Polaris was. It's just that with 12 TFLOPs you can't have the GPU under 1.4 GHz with 4096 cores; it needs 1450+ MHz to achieve that.
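That clock estimate checks out. For GCN, peak FP32 throughput is stream processors × 2 FLOPs per clock (one fused multiply-add per lane), so the clock needed for a given TFLOPs target can be backed out directly. A quick sketch (the 4096-SP figure is from the leak; the rest is standard GCN arithmetic):

```python
# Back out the core clock a GCN chip needs to hit a target FP32 throughput.
# Peak FLOPs = stream_processors * 2 (one FMA per clock per SP) * clock.
def required_clock_mhz(target_tflops: float, stream_processors: int) -> float:
    flops_per_clock = stream_processors * 2  # one FMA = 2 FLOPs
    return target_tflops * 1e12 / flops_per_clock / 1e6

# 12 TFLOPs on 4096 SPs needs ~1465 MHz, north of typical Polaris boost clocks:
print(round(required_clock_mhz(12, 4096)))  # -> 1465
# Cross-check with Fiji XT: 4096 SPs at 1050 MHz gives the well-known 8.6 TFLOPs:
print(round(4096 * 2 * 1050e6 / 1e12, 1))   # -> 8.6
```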
I know, but 10-12 TFLOPs with 8/16 GB of HBM2, in the $450-$600 range, is going to be quite the killer.
How do I know the price ranges? They are the ones that don't exist for AMD right now
I hope you are right. I'm thinking more along the lines of $650. The Fury line can only last so long, especially considering its high price.
It would be quite a killer, but not out of the norm though! HBM2 should be fairly readily available in H1 2017.
The GTX 1080 with a 2 GHz boost is in that range already. So... if NVIDIA manages to release Volta at the same time, then nothing has changed. And sadly, AMD's 1H is probably late Q2. At that point Pascals will probably be dirt cheap too (the 1070 at least). But if Vega packs a punch and offers good value, then that should be good. Everyone wants fierce competition, since that usually means lower prices.
Considering Polaris could not achieve those clocks, I'm skeptical Vega will. Now, if this is a better revision of GCN, then maybe.
It's supposedly IP9.0, Polaris was 8.1.
Honestly, if it does something similar to the FP16v2 cores in GP100, that alone would bump the IP level. It would also explain the doubling in perf/W, since the metric of "perf" is not exactly established in any of those slides. I don't really see any other way AMD is doubling its perf/W; even tiled rendering doesn't give those kinds of gains, and HBM2 actually uses more power than HBM1 when you have 16 GB of it.
I do agree with your other post though: if AMD can price it at $600-650 and it's indeed 12 TFLOPs with HBM2, it's going to be a really, really nice card.
I can see it being in the 275W range, to be honest. Unless there is some kind of miracle. If that has happened, AMD will need a shrine dedicated to Raja Koduri. If I'm not mistaken, this will be the first GPU to be done mostly under him, right?
If they give us GCN with tiled rendering NVIDIA will have a serious problem, but I can't see it somehow.
Raja worked at AMD/ATi as GPU hardware design lead (CTO, specifically) for almost a decade. He left in 2009 for Apple and recently returned, except now he has control over the driver/software side of RTG as well. I actually find it surprising how few people know that; the AMD subreddit constantly talks about him like he wasn't in charge of the hardware/tech since forever, with both successes like the 9800 series and failures like the R600. Obviously, now that he's in control of both hardware and software he can probably see out his vision better, but yeah, pretty much every ATi card up until 2008/9 was designed under him.
I also expect it to be around 250-275W; 250 if they manage to get the tiled stuff going. Honestly, even at 275W it will be really good. DX12 adoption will only get better over the next few years, which means the TFLOP/performance gap will close and AMD will just start pulling ahead in pretty much every new title. And if that's the case, they are effectively going to deliver Titan X (Pascal) + 10% performance at $600/650. People are going to buy it; they won't care that it draws 25W more or whatever. In fact, if that's the case, I will even upgrade to it, especially because I've been having so many issues with my multi-monitor setup and recent Nvidia drivers.
Nvidia's counter will most likely be a Ti release with a price drop for the 1080. I don't think Nvidia's temporary performance lead in DX11 will carry on for much longer (in terms of being a focus); they'll be forced to drop their prices if they want to maintain their high market share as DX12 takes hold, or even as AMD's drivers improve in general. I don't know how worried Nvidia is, though: recent Steam surveys show Pascal outselling Polaris by a pretty large margin, 4:1 for the 1070 versus the 480, and I've also read estimates that the 1070 is actually cheaper to manufacture overall than the 480, which means they are making far more money on each sale. AMD needs Vega out as quickly as possible in 2017, because Volta is going to be launching (or at least generating rumors) next year as well.
I reckon when Vega is released, nVidia will release another Titan card (based on the P100) and rebrand the Titan X 2016, maybe as a 1090 or 1080 Ti? That is, if Vega is going to perform as well at the price people are predicting.
The other thing that needs to happen is that people actually adopt AMD hardware. In terms of software development, far too often we see games unoptimised for AMD hardware. Heck, I can think of an example: not long ago, the development team behind Black Mesa released a large update for the game which enabled CSM shadows, little did they realise that they were completely broken on AMD hardware. Their reason for the bug? None of the members of the team developed or tested the game on any AMD hardware....
I haven't heard of it, but if this is their "official" explanation, then it's a rather hilarious/comical/plain stupid one...
Even if nVidia holds 3/4 of the dGPU market, there still remains the other 25% (probably higher in reality), plus APUs and so on.
Not testing their code on one of the two "players" is quite out of the question in my opinion.
Sounds like either a bad joke or a very good "PR" move from the other side.
NVidia has something like 2/3 of the discrete market. A market that despite what a lot of people think, is constantly shrinking.
From Q2 2015 to Q2 2016, AMD's market share in discrete GPUs went from 18% to 30%. If you include the consoles and factor in the decline of desktop GPU shipments, they supply something like 80% of the current GPU market. It's a bit stupid that a developer admits they don't have anything on hand to test. Then again, these guys are basically amateurs making a mod for free, so we shouldn't judge them by the same criteria as normal developers.
Negligent imo, here's a link to the post:-