Discussion in 'Videocards - AMD Radeon' started by WhiteLightning, Sep 28, 2018.
Brutal as F.unk (I mean Real Heavy).
Very nice. Can't wait for RDNA 2 series to launch.
AMD's RDNA 2 graphics cards reportedly contain more features than their console variants
Rumour has it that AMD's Radeon RDNA 2 architecture won't be the same as the "custom RDNA 2" architectures in Microsoft's Xbox Series X and Sony's PlayStation 5,
suggesting that AMD has created a superior product for the PC market.
While its source was tight-lipped on the topic, WccfTech said that AMD forced its semi-custom partners to use the "custom RDNA 2" label within the spec lists of both next-generation consoles,
a factor which will help AMD differentiate its PC graphics products from both console systems.
Are both the Xbox Series X and PlayStation 5 using the same "custom RDNA 2"?
For both the Xbox Series X and PlayStation 5, AMD provides semi-custom silicon, allowing Microsoft and Sony to work with AMD on their future hardware designs.
By definition, these semi-custom hardware designs are custom, which means that Sony and Microsoft can define hardware tweaks and feature upgrades for their systems beyond what AMD provides with its standard CPU/graphics IP.
In Sony's PlayStation 4 Pro, Sony utilised custom checkerboarding hardware, and while this feature wasn't used widely on the system, it is an excellent example of how custom silicon can be used to enhance the capabilities of a console.
With Xbox Series X, Microsoft lists "Patented Variable Rate Shading" (VRS) as a feature, suggesting that Sony may not have access to the same functionality on its RDNA 2 silicon.
It is also possible that Sony's PlayStation 5 has an inferior implementation of Variable Rate Shading.
Sony has also made no mention of AI and Machine Learning when talking about its PlayStation 5 console, whereas Microsoft has discussed support for 16-bit float and 4-bit integer performance.
Microsoft plans to leverage its DirectML API to utilise low-precision calculations to improve future games using AI and Machine Learning algorithms.
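DirectML itself is a C++/Direct3D 12 API, but the memory-side benefit of dropping precision can be sketched in a few lines of NumPy. This is only an illustration of the general idea, not DirectML code; the layer shape and values are invented for the example:

```python
import numpy as np

# Hypothetical layer weights in full precision (float32). The shape and
# values are made up purely for illustration.
weights_fp32 = np.random.rand(1024, 1024).astype(np.float32)

# Drop to half precision (float16), one of the low-precision formats
# the Series X GPU is said to accelerate.
weights_fp16 = weights_fp32.astype(np.float16)

# Memory footprint is halved...
print(weights_fp32.nbytes)  # 4194304 bytes
print(weights_fp16.nbytes)  # 2097152 bytes

# ...while values in [0, 1) stay within float16 rounding error.
max_error = np.max(np.abs(weights_fp32 - weights_fp16.astype(np.float32)))
print(max_error < 1e-3)  # True
```

The same trade applies to compute throughput: hardware that packs two float16 operations where one float32 would fit roughly doubles ML inference rates, which is the point of the "16-bit float and 4-bit integer performance" Microsoft has been discussing.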
It is possible that Microsoft and Sony aren't using the same "Custom RDNA 2" architecture, and this represents a unique possibility for AMD.
Could AMD's RDNA 2 be "Full RDNA 2"?
It is possible that AMD's RDNA 2 architecture for PC could feature the combined feature set of the RDNA 2 implementations for PlayStation 5 and Xbox Series X.
It is also possible that AMD's PC-grade RDNA 2 architecture has architectural enhancements that were developed too late to be added to both custom RDNA 2 consoles.
We know that silicon for both Sony's PlayStation 5 and Microsoft's Xbox Series X has been in manufacturing for months, raising the question of where RDNA 2's PC variant is.
If RDNA 2 were design-complete and in manufacturing, why hasn't AMD produced RDNA 2 silicon for its PC GPU lineup?
Right now, AMD's RDNA 2 architecture is rumoured to launch in October 2020.
That seems pretty late when Xbox and PlayStation developer kits have already shipped, and both systems are being manufactured in high volumes in preparation for their holiday 2020 launches.
If AMD's PC RDNA 2 architecture is superior, it suggests that both next-generation consoles are using what could be called an "RDNA 1.5" architecture, rather than full RDNA 2.
That said, this is merely speculation.
AMD has made some big promises with RDNA 2, offering full support for DirectX 12 Ultimate while packing a 50% increase in performance-per-watt over today's Navi graphics cards.
This suggests that RDNA 2 will be a huge leap for AMD's Radeon technology.
AMD has already promised increased clock speeds, improved performance-per-clock (IPC) and logic enhancements, alterations which will all deliver notable performance increases for RDNA 2-based graphics cards.
RDNA 2 promises to be a bigger leap forward than AMD's original RDNA architecture was over GCN, and that's a big deal.
Expect some big changes to the GPU market over the next few years; AMD is promising major gains, and that will force its competitors to work harder for their market share.
THX to overclock3d.net
I have a sneaking suspicion that RDNA 2 cards are going to be something special. I'd say AMD have ironed out most of the bugs from the first gen.
It needed edits to be factual.
AMD CDNA Architecture Based Arcturus GPU ‘Radeon Instinct’ Test Board Spotted – 120 CUs With 7680 Cores, 1200 MHz HBM2 Clock, 878 MHz GPU Clock
AMD's Radeon Instinct Arcturus GPU, which will feature the CDNA architecture and target the server market, has been spotted by Rogame.
Featured inside the next-generation Radeon Instinct graphics cards, the CDNA architecture will leverage its compute-optimized GPU design to deliver the highest-performance compute capabilities for data centers.
AMD's CDNA Architecture Based Arcturus GPU Test Board Leaks Out - Next-Gen Radeon Instinct With 120 CUs For A Total of 7680 Cores
The AMD Arcturus GPU leaked out all the way back in 2018, before AMD had introduced any 7nm GPU.
The Radeon VII and Navi lineup launched in 2019 and featured 7nm GPUs with Navi being aimed at the mass consumer market.
It was later revealed that AMD's next-generation HPC & AI GPUs would be designed separately from the consumer-end chips.
This meant that the Arcturus GPU would be kept exclusive to the datacenter market.
AMD just recently confirmed in its Radeon CDNA architecture roadmap that all CDNA-based GPUs will be exclusively designed for the HPC & data center markets, while Radeon RDNA GPUs will power the consumer segment.
Coming to the specifications, it was previously unveiled that AMD's Arcturus GPU would feature an increased cache and double the CUs of Vega.
That, along with a list of data center-specific features such as XDLOPs, Rapid Packed Math, a new Vector ALU & BFloat16, is to be expected in the Radeon Instinct cards that feature the new CDNA architecture.
The previous Radeon Instinct MI100 prototype 'D34303' board featured the Arcturus-XL die with a rated TDP of 200W and 32 GB of HBM2 VRAM clocked at around 1000-1200 MHz. The information for this part is based on a prototype, so the final specifications will likely differ, but here are the key points:
Based on Arcturus XL GPU
Test Board has a TDP of 200W
Up To 32 GB HBM2 Memory
HBM2 Memory Clocks Reported Between 1000-1200 MHz
Once again, a test board based on the Arcturus CDNA GPU has been spotted by Rogame, and from the looks of it, this variant offers 120 CUs for a total of 7680 stream processors and a GPU clock speed of 878 MHz (750 MHz SOC clock). This variant also features an undefined amount of HBM2 memory clocked at 1200 MHz, so if we are looking at a 4096-bit bus, we should get around 1.2 TB/s of bandwidth, which is what Aquabolt is able to offer. But it is very likely that both NVIDIA and AMD will end up utilizing the faster HBM2E 'Flashbolt' standard, which goes into production this year and will be capable of delivering up to 1.8 TB/s of bandwidth.
Talking about the clock speeds, the 878 MHz of the test board is rather slow, as we have seen variants going up to 1334 MHz in the past.
At the mentioned speeds, the chip would boast around 13.5 TFLOPs of FP32 compute power, which is lower than the Radeon Instinct MI60 and also the 21 TFLOPs that we got from the previous prototype sample.
It is likely that the first iteration of CDNA GPUs would end up somewhere around 25 TFLOPs FP32.
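The TFLOPs and bandwidth figures above follow from two standard back-of-the-envelope formulas, sketched here with the leaked numbers plugged in:

```python
def fp32_tflops(shaders, clock_ghz):
    # Each stream processor retires one FMA (2 FLOPs) per clock.
    return 2 * shaders * clock_ghz / 1000

def hbm2_bandwidth_gbs(bus_bits, mem_clock_mhz):
    # HBM2 is double data rate: two transfers per clock per pin.
    return bus_bits * 2 * mem_clock_mhz * 1e6 / 8 / 1e9

print(fp32_tflops(7680, 0.878))        # ~13.49 TFLOPs (this test board)
print(fp32_tflops(7680, 1.334))        # ~20.49 TFLOPs (the earlier ~21 TFLOPs sample)
print(hbm2_bandwidth_gbs(4096, 1200))  # 1228.8 GB/s, i.e. ~1.2 TB/s
```

The 4096-bit bus width is the assumption the article makes (four HBM2 stacks at 1024 bits each); a different stack count would shift the bandwidth figure proportionally.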
I had to forget one important detail
Arcturus (Test board)
> 878MHz Core clock
> 750Mhz SOC clock
> 1200MHz Memory clock
— _rogame (@_rogame) April 21, 2020
THX to WCCFtech.
Gears Dev: Load Times 4x Faster on XSX Without Changes; Sampler Feedback Streaming Is a Game-Changer
Gears 5 was the first already released Xbox Game Studios title to be confirmed for Xbox Series X enhancement.
The developers revealed their plan to use the full PC Ultra settings as a baseline while running at 60 frames per second and adding a 50% higher particle count on top of that.
The subsequent Digital Foundry analysis confirmed that the Xbox Series X preview build of Gears 5 already delivered RTX 2080 Ti levels of performance.
Now, in an interview with Windows Central, Gears 5 developer Mike Rayner (The Coalition's Technical Director) expressed his excitement for the I/O improvements enabled by the Xbox Series X.
He then added that Sampler Feedback Streaming is potentially a game-changer that can further increase texture detail.
As a game developer, one of the most exciting improvements that far exceeds expectations is the massive I/O improvements on Xbox Series X.
In the current generation, as the fidelity and size of our worlds increased, we have seen download times and install sizes grow and increasing runtime I/O demands,
which have made it challenging to maintain load-times expectations and meet world streaming demands without detail loss. The Xbox Series X has been holistically designed to directly address this challenge.
With the Xbox Series X, out of the gate, we reduced our load-times by more than 4x without any code changes. With the new DirectStorage APIs and new hardware decompression, we can further improve I/O performance and reduce CPU overhead, both of which are essential to achieve fast loading.
As we look to the future, the Xbox Series X's Sampler Feedback for Streaming (SFS) is a game-changer for how we think about world streaming and visual level of detail.
We will be exploring how we can use it in future titles to both increase the texture detail in our game beyond what we can fit into memory,
as well as reduce load times further by increasing on-demand loading to just before we need it, instead of pre-loading everything up-front as we would use a more traditional 'level loading' approach.
Interestingly, at least on paper, the I/O capabilities of the PlayStation 5's SSD are superior with its 5.5 GB/s raw and 8-9 GB/s compressed I/O throughput,
whereas the specs of the Xbox Series X SSD are 2.4 GB/s raw and 4.5 GB/s compressed.
However, Microsoft might have quite a few software tricks up their sleeves between the
DirectStorage API, Sampler Feedback Streaming, and the new BCPack compression system tailored for GPU textures.
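The implied average compression ratios can be read straight off those quoted figures (taking 8.5 GB/s as an assumed midpoint of Sony's 8-9 GB/s range):

```python
# Quoted SSD throughput figures, in GB/s.
ps5_raw, ps5_compressed = 5.5, 8.5   # midpoint of the 8-9 GB/s quote assumed
xsx_raw, xsx_compressed = 2.4, 4.5

# Implied average compression ratios:
print(ps5_compressed / ps5_raw)      # ~1.55x
print(xsx_compressed / xsx_raw)      # 1.875x
```

So while Sony's raw link is more than twice as fast, Microsoft's stack is claiming a higher effective compression ratio, which narrows the gap somewhat on compressible data such as textures.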
AMD’s Big Navi 7nm GPU Flagship Allegedly Features 505mm² Large Die And RDNA2 – Suggests 2x The Performance Of The RX 5700XT
The user AquariusZi over at the PC Shipping forums has been one of the most reliable sources of leaks in the past, and they have just posted (via KOMACHI_ENSAKA) the die sizes of AMD's upcoming 'Big Navi' lineup.
The 7nm flagship in question, which could likely be called the AMD RX 5950 XT (or anything else really, AMD is free to shake things up you know),
will feature a huge 505 mm² die that, based on simple maths, will offer at least 2x the performance of the 'smol Navi' RX 5700 XT GPU.
Still, while the user *has been reliable in the past*, I will always urge a grain of salt when the information stems from a single source and has not been verified.
AMD's next-generation 7nm Navi 21 GPU will have a huge 505mm² die with at least twice the performance of the RX 5700 XT
It's not just the Navi 21 GPU that has had its die size exposed either; the Navi 22 and Navi 23 will be clocking in at 340mm² and 240mm² respectively.
Navi 23 is likely going to be a successor to the current-generation flagship, the RX 5700 XT.
The source has also mentioned that these measurements have a margin of error of ±5mm². Considering the RX 5700 XT is exactly 251mm², the die size for Navi 21 makes a lot of sense.
N21 : 505.
N22 : 340.
N23 : 240.
https://t.co/8GW19Ck9sm
> ± 5mm2.
— 比屋定さんの戯れ言@Komachi (@KOMACHI_ENSAKA) April 28, 2020
The RX 5700 XT contains 40 CUs based on the RDNA1 architecture, and the upcoming RX 5950 XT (or whatever AMD decides to call it), based on the Navi 21 GPU, could easily contain 80 CUs based on these numbers.
This would result in a grand total of 5120 stream processors. Not counting any efficiency improvements going from RDNA1 to RDNA2 (which there certainly will be), this is a performance increase of at least 2x.
There is one caveat with this assumption, however: it requires that all of the die area is used for shader cores. If AMD chooses to deploy dedicated ray-tracing hardware in the Navi 2X family, then the CU count could be lower.
My personal best guess is that AMD would rather go for a GPGPU approach to raytracing than dedicated hardware like NVIDIA's RT cores on Turing.
In any case, architectural gains combined with a physical size increase makes it a very good guess that you are looking at at least a 2x increase in gaming performance regardless.
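The doubling logic above is simple enough to check. The 64 stream processors per CU is the RDNA 1 figure; the 80-CU Navi 21 is the article's assumption, not a confirmed spec:

```python
# RDNA 1 baseline: the RX 5700 XT.
rdna1_cus = 40
sp_per_cu = 64                  # stream processors per CU on RDNA

# Assumption: Navi 21's ~2x die area translates to 2x the CUs.
navi21_cus = 2 * rdna1_cus
print(navi21_cus * sp_per_cu)   # 5120 stream processors

# The die-area ratio roughly supports the doubling:
print(505 / 251)                # ~2.01x the RX 5700 XT's 251 mm²
```

Note that shader count rarely translates one-to-one into frame rate; memory bandwidth, clocks, and any die area spent on ray-tracing hardware would all eat into that theoretical 2x.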
Benchmarks of an upcoming AMD next-generation GPU have leaked out at CubeVR and show an incredibly powerful card in the making. Tons of AMD cards have also recently passed RRA certification, so AMD is clearly preparing an entire lineup of cards for launch.
All the telltale signs of a full-blown graphics card launch from AMD are there, so if you are in the market for a high-end card, you might want to wait a few months for AMD to make a move. From what we have heard, NVIDIA has also been patiently waiting on AMD to roll out their Big Navi lineup before deciding on the final pricing of their Ampere GPUs.
THX to WCCFtech.
Wasn't there a rumor posted several months ago quite similar to this?
WCCFtech adds older rumors to new ones, that's why.
Here's hoping there is some truth to it. That sort of performance will put it right up there.
For me? If this gets me 1440p 120-144Hz gaming at a reasonable ~495-695€ then I'm a happy man (must be at least 10-15% faster than the 2080 Ti).