Discussion in 'Videocards - AMD Radeon' started by WhiteLightning, Sep 28, 2018.
Project Scarlett is shaping up to be quite beastly. Reports say the next-gen Xbox will beat the PlayStation 5's onboard specs, but on paper the two systems look very similar:
They both have a high-end AMD SoC with a Zen 2 CPU and Navi GPU, a super-fast PCIe 4.0 SSD, raytracing support, 8K resolution support, and high FPS gaming. But Microsoft promises it's going all out with Scarlett's power.
So what kind of in-game performance can we expect from Project Scarlett?
No exact resolution and FPS matchups have been explicitly confirmed (there's no 4K 60FPS selling point yet like with the Xbox One X, for example), but Microsoft says Scarlett can hit 120FPS, likely at 1080p resolution.
We should also expect native 4K gaming without the upscaling seen on the Xbox One X.
"Yes, we'll have great graphical performance, but things like speed will be a big factor with integrating the SSD. We can support increased CPU, higher framerates, so 120 FPS, having things like DirectX raytracing that's never really existed on a console,"
Microsoft's Aaron Greenberg told Kotaku.
"To put all that in the hands of developers will just bring a whole new set of console game experiences."
Greenberg also says Microsoft isn't fazed by the PS5. Nor should they be. Microsoft has a huge billion-dollar infrastructure of massively popular services like Xbox Game Pass that will carry the new system well into the future.
Xbox is an ecosystem, not a console, and the games-maker shouldn't be concerned about Nintendo or Sony at this point.
"We're not really worried about PS5. We're more customer-obsessed than competitor-obsessed.
I think Sony has built a great business, they have a very strong brand and a strong presence and we have a lot of admiration for what they've done."
Project Scarlett will also be fully backward compatible with all existing Xbox One games and accessories, and will even play titles from Xbox 360 and original Xbox eras.
Expect to see "Project Scarlett Enhanced" updates for older games that leverage the system's specs, complete with native fine-tuning such as increased frame rates, smoother visuals, and more.
Project Scarlett is due out by Holiday 2020. No pricing was announced. Check below for everything we know about Project Scarlett so far:
Project Scarlett confirmed details:
Zen 2 CPU
4x as powerful as the Xbox One X's 6 TFLOPs of GPU performance
Super-fast SSD that can be used as VRAM (likely PCIe 4.0)
Supports 8K resolution (likely for media playback) and 120FPS gaming
Can deliver up to 40x more performance than Xbox One in specific use cases
Backward compatible with Xbox, Xbox 360, and Xbox One games
Compatible with Xbox One accessories
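Taking the bullet points above at face value, the implied compute figure is simple arithmetic. A quick sketch (note the 4x multiplier is Microsoft's marketing claim, not a measured spec, and it may refer to overall system performance rather than raw GPU compute):

```python
# Back-of-the-envelope estimate implied by the "4x the Xbox One X" claim.
XBOX_ONE_X_TFLOPS = 6.0   # Xbox One X GPU compute (FP32)
SCARLETT_MULTIPLIER = 4   # Microsoft's stated factor (marketing claim)

implied_tflops = XBOX_ONE_X_TFLOPS * SCARLETT_MULTIPLIER
print(f"Implied Scarlett GPU compute: {implied_tflops:.0f} TFLOPs")  # 24 TFLOPs
```

If the multiplier really applied to raw GPU throughput, that would put Scarlett at 24 TFLOPs; if it refers to combined CPU/GPU/SSD gains, the GPU figure could land considerably lower.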
AMD To Introduce 2nd Generation RDNA Based Navi GPU Powered Radeon RX Lineup at CES 2020
AMD's next-generation RDNA powered Navi GPUs for the Radeon RX lineup will apparently make their first appearance at CES 2020. This, along with a couple of other rumors surrounding AMD's Radeon and Threadripper product lineups, has been revealed by Chiphell leaker Wjm47196.
AMD To Intro 2nd Generation RDNA Based Radeon RX Navi GPUs at CES 2020 - Ray Tracing Support & More Onboard
The rumor comes from the Chiphell user who has previously been highly accurate with leaks such as Polaris 30 (Radeon RX 590) launching in Q4 2018, the Radeon VII in Q1 2019, and 7nm Navi mainstream cards arriving before the high-end enthusiast-grade variants in 2019. With a good track record to begin with, let's see what the latest info from the leaker is.
According to him, AMD plans to offer a product preview of their 2nd Generation RDNA based Radeon RX Navi GPU lineup at CES 2020. That would be interesting and makes sense, since CES 2020 will be a huge event for AMD to unveil their 2020 product portfolio, which includes Zen 3 and 2nd Gen RDNA based products for the mainstream, enthusiast, notebook, and server markets. It is already confirmed that the RDNA 2 GPU architecture is in design and scheduled for launch in 2020. Some of the features to expect from 2nd Generation RDNA Navi GPUs would be:
Optimized 7nm+ process node
Enthusiast-grade desktop graphics card options
Hardware-Level Ray Tracing Support
A mix of GDDR6 and HBM2 graphics cards
More power-efficient than First-Gen Navi GPUs
AMD also wants to push RDNA 2 toward the higher-end spectrum of the market. While the first-generation RDNA GPUs perform great in the $300-$500 segment, we will likely see a range of enthusiast-grade designs among RDNA 2 based Radeon RX series graphics cards. These would take the fight to NVIDIA's RTX 2080 SUPER / RTX 2080 Ti, but NVIDIA isn't a company that would silently sit through a competitor's launch. Plans for NVIDIA's 7nm GPUs are underway, and it is likely we will see a grand launch in 2020 for their next-generation graphics architecture, presumably known as 'Ampere'. There are also rumors of NVIDIA introducing an even faster RTX 2080 Ti in the form of the RTX 2080 Ti SUPER in early 2020, which would keep AMD GPUs away from the performance crown for some time yet.
It should also be pointed out that high-end Navi GPUs might retain the High-Bandwidth Memory design of the current flagship. While AMD features GDDR6 memory on their mainstream RDNA based cards, it is likely the company would go with the newer HBM2E VRAM on its high-end parts.
The HBM2E DRAM comes in an 8-Hi stack configuration and utilizes 16 Gb memory dies, stacked together and clocked at 3.2 Gbps. This results in a total bandwidth of 410 GB/s from a single stack and 820 GB/s from two HBM2E stacks, which is just insane. To top it all off, the DRAM has a 1024-bit wide bus interface, the same as current HBM2 DRAM. Samsung says that their HBM2E solution, when stacked in a 4-way configuration, can offer up to 64 GB of memory at 1.64 TB/s of bandwidth. Such products would only be suitable for server/HPC workloads, but a high-end graphics product for enthusiasts could feature up to 32 GB of memory with just two stacks, twice as much memory as the Radeon VII.
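The bandwidth and capacity figures follow directly from the stack specs quoted above. A quick sketch of the arithmetic (numbers taken from the Samsung HBM2E figures in the paragraph above):

```python
# HBM2E per-stack bandwidth: bus width (bits) x pin speed (Gbps) / 8 bits-per-byte
BUS_WIDTH_BITS = 1024   # same 1024-bit interface as HBM2
PIN_SPEED_GBPS = 3.2    # Samsung HBM2E data rate per pin

per_stack_gbs = BUS_WIDTH_BITS * PIN_SPEED_GBPS / 8
print(f"1 stack : {per_stack_gbs:.1f} GB/s")             # ~410 GB/s
print(f"2 stacks: {2 * per_stack_gbs:.1f} GB/s")         # ~820 GB/s
print(f"4 stacks: {4 * per_stack_gbs / 1000:.2f} TB/s")  # ~1.64 TB/s

# Capacity: eight 16 Gb dies per 8-Hi stack = 16 GB per stack
per_stack_gb = 8 * 16 / 8
print(f"2 stacks: {2 * per_stack_gb:.0f} GB")            # 32 GB, 2x the Radeon VII
```

Two stacks landing at roughly 820 GB/s would already be a sizable step up from the Radeon VII's 1 TB/s-class four-stack HBM2 setup in capacity per stack, if not in total bandwidth.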
If that's the case, there might not end up being a high-end first-gen Navi card.
AMD second-gen Navi: CES 2020, GDDR6/HBM2, hardware ray tracing
AMD is expected to unleash their second-gen Navi GPU at CES 2020 according to the latest reports, with a preview at CES of the second-gen RDNA-based Radeon RX 6700 family -- at least that's what I'll call it for now.
The new second-gen RDNA 2 architecture is expected to use an optimized 7nm+ process node, offer up enthusiast-grade graphics cards (YES!),
hardware-level ray tracing support, both GDDR6 and HBM2 options, and even more power efficiency over the first-gen Navi products.
The note about AMD using HBM2 is interesting, which could be useful for the enthusiast-grade RDNA 2 cards that would not just compete with NVIDIA's current flagship GeForce RTX 2080 Ti but also whatever NVIDIA is cooking up for 2020 in their new Ampere-based GeForce RTX 3000 series cards.
That sounds like the one I have been waiting patiently for. Can't wait to see how it performs.
Intel Details Xe GPU Architecture - Ponte Vecchio For Exascale Compute Scalable To 1000s of EUs, XEMF Scalable Memory Fabric, Rambo Cache, Foveros Packaging, 40X Increase In FP64 Compute Per EU & A Lot More!
There's much to cover here so let's talk about the first aspect of the Xe GPU architecture, the lineup itself. The Intel Xe GPU architecture is one scalable architecture powering various products.
Intel is planning to offer three microarchitectures derived from Xe. These include:
Intel Xe LP (Integrated + Entry)
Intel Xe HP (Mid-Range, Enthusiasts, Datacenter / AI)
Intel Xe HPC (HPC Exascale)
Just from the naming scheme, you can tell where these GPUs will be featured. The 'LP' keyword stands for Low-Power, whereas the 'HP' keyword stands for High-Performance.
The HPC keyword stands for High-Performance Computing, the architecture aimed at that segment, which would use a range of new Intel technologies that we are going to talk about. It is stated that Xe LP sits around 5W-20W but can scale up to 50W.
Intel's Xe HP is one tier above that and should cover the 75W-250W segment, while the Xe HPC class architecture should aim even higher, delivering even more compute performance than the rest.
“Architecture is a software compatibility contract. We originally were planning for two microarchitectures within Xe, our architecture (LP and HP), but we saw an opportunity for a third within HPC.” - Raja Koduri
Intel Xe class GPUs would feature variable vector width as mentioned below:
SIMT (GPU Style)
SIMD (CPU Style)
SIMT + SIMD (Max Performance)
Raja specifically talked about the Xe HPC class GPUs since that's what the developer conference is entirely about. Intel's Xe HPC GPUs would be able to scale to 1000s of EUs and each Execution unit has been upgraded to deliver 40 times better double-precision floating-point compute horsepower.
The EUs would be connected to several high-bandwidth memory channels through a new scalable memory fabric known as XEMF (short for XE Memory Fabric).
The Xe HPC architecture would also include a very large unified cache known as Rambo cache which would connect several GPUs together.
This Rambo cache would offer sustained peak FP64 compute performance throughout double-precision workloads by delivering huge memory bandwidth.
I think we will end up needing an Intel GPU thread section sometime soon. Hopefully they will put out a competitive gaming GPU and not just cards for the HPC and professional markets.
These being proprietary machines, I can easily see them using PCIe x8 SSDs, even with PCIe 4.0.
Or a RAID of two x4 drives...
I sort of imagine the Sony cartridge to be that sort of speed. Not huge storage, but very fast storage/cache.
You can always have another much larger SSD for storage...
Black Friday deal on the Radeon RX 5700: custom design just under 300 euros
Yeah, I think @Hilbert Hagedoorn probably has in mind to create an Intel GPU and driver section at some point. "From Intel GMA to Xe"
Those 0.1% lows -> madness
One can imagine what a bigger Navi with HBM2 could do...
Might be a bit of an oops for what it pulls from the server, but maybe next week or maybe later, we'll see.
EDIT: Hmm, so besides the marketing language, it looks to be lowering the render resolution dynamically (modifying the back buffer resolution?) in order to achieve a higher in-game framerate. Interesting.
Earlier version, but good for giving a clear indication; newer ones could be doing a bit more and would work across multiple (D3D9 only?) games.
EDIT: Wonder if it's going to be like Chill always enabled and activated via hotkey combination?
Though I would prefer options: at least an FPS target and a resolution scale from 100% down to 50% or so, controlling how far it can drop resolution to get as close to the FPS target as possible. But we'll see, I guess, and it might do other stuff too.
EDIT: Oh and the server side images and updates being there doesn't have to mean much but a driver release in one or two weeks would be possible for a mid-December pre-Holiday weekends driver launch.
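For what it's worth, the dynamic-resolution behavior speculated about above can be sketched as a simple feedback loop. This is purely a hypothetical illustration: the function name, step size, and hysteresis threshold are made up, and AMD's actual implementation is not public.

```python
# Hypothetical dynamic-resolution controller: nudge the render scale
# between 50% and 100% to chase an FPS target. All values are invented.
TARGET_FPS = 60
MIN_SCALE, MAX_SCALE = 0.5, 1.0
STEP = 0.05  # how much the scale moves per adjustment

def adjust_scale(current_scale, measured_fps):
    """Move the resolution scale one step toward the FPS target."""
    if measured_fps < TARGET_FPS and current_scale > MIN_SCALE:
        current_scale -= STEP   # below target: render fewer pixels
    elif measured_fps > TARGET_FPS + 5 and current_scale < MAX_SCALE:
        current_scale += STEP   # comfortably above target: restore quality
    # clamp and avoid float drift
    return round(min(MAX_SCALE, max(MIN_SCALE, current_scale)), 2)

# Example: a few frames below target, then some headroom
scale = 1.0
for fps in [45, 48, 55, 70, 72]:
    scale = adjust_scale(scale, fps)
print(scale)  # 0.95
```

The `TARGET_FPS + 5` band is a crude hysteresis so the scale doesn't oscillate every frame; a real implementation would presumably filter frame times rather than react to single readings.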
Wait, so RIS is being opened up to the rest of the GCN hardware stack? But not in DX9?
Seems like it. All GCN cards will support it, but it's unclear about the HD 5000 series and whether it's DX9 "only".
Really good news