Discussion in 'Videocards - AMD Radeon' started by WhiteLightning, Sep 28, 2018.
Trolling is being a d***.
Jebaiting is distraction/diversion which may lead to the opposition doing something stupid... looking like fools later.
@Ryu5uzaku : I calculated the increased performance of the SM based on the average INT to FP ratio in games (a ratio sourced from nVidia).
The result: the Ampere SM needs 30.556% fewer cycles to complete the same work as the SM in Turing. (Or a theoretically ~1.44 times faster SM.)
Since the FP blocks per SM doubled, the actual performance increase per FP block apparently looks worse. But the design practically guarantees theoretical 100% utilization of all blocks as long as the INT workload is smaller than the FP workload. (So, the Turing design is not a bad one.)
But the question remains: is the GPU actually able to feed those blocks properly?
Caches, data being ready in time, scheduling...
But what matters in the Turing => Ampere transition is actual performance per transistor per clock.
Therefore, as long as Turing performs better at the same clock while both architectures invest the same number of transistors into SMs, it is the better architecture, as it provides better performance for the same investment.
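The cycle comparison above can be sketched as a quick back-of-envelope model. This is my own simplification: Turing is modelled as 64 dedicated FP32 lanes plus 64 dedicated INT32 lanes issuing concurrently, Ampere as 64 FP32 lanes plus 64 flexible FP32/INT32 lanes, and the INT:FP ratio of ~39 INT ops per 100 FP ops is an assumption backed out of the 30.556% figure quoted above (not an official number).

```python
# Back-of-envelope SM throughput model (my own simplification).

def turing_cycles(fp_ops, int_ops, lanes=64):
    # Turing: 64 dedicated FP32 lanes + 64 dedicated INT32 lanes,
    # issued concurrently -> the longer of the two streams dominates.
    return max(fp_ops / lanes, int_ops / lanes)

def ampere_cycles(fp_ops, int_ops, lanes=64):
    # Ampere: 64 FP32 lanes + 64 lanes that can do FP32 *or* INT32.
    # As long as INT work <= FP work, every lane can stay busy, so
    # total cycles ~= total ops spread over 128 lanes.
    assert int_ops <= fp_ops, "model only valid while INT <= FP"
    return (fp_ops + int_ops) / (2 * lanes)

fp = 100.0
i = 38.89   # assumed avg INT ops per 100 FP ops (implied by the post)

t = turing_cycles(fp, i)
a = ampere_cycles(fp, i)
print(f"Ampere needs {100 * (1 - a / t):.3f}% fewer cycles")  # ~30.555%
print(f"=> ~{t / a:.2f}x faster SM in theory")                # ~1.44x
```

Note the model says nothing about whether caches and scheduling can actually keep those lanes fed, which is exactly the open question raised above.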
AMD's RX 6000 series GPUs will feature AV1 Hardware Acceleration - Microsoft Confirms
A recent blog post from Microsoft has confirmed that AMD's delivering AV1 hardware acceleration with their RX 6000 series of graphics cards, enabling faster, more power-efficient, decoding of AV1 content.
With the AV1 codec, video can be encoded with a 50% larger compression ratio than H.264. This allows video files to become smaller without additional quality loss.
This not only makes video files smaller, but it also allows video streaming services to deliver higher quality video streams using weaker internet connections.
Shifting this decoding work from software to hardware makes the decoding process more power-efficient, increasing the battery life of mobile devices and reducing the power draw of mains-connected systems.
Microsoft has confirmed that Nvidia's RTX 30 series of Ampere graphics cards, Intel's 11th Gen Core Processors with Xe graphics, and AMD's Radeon RX 6000 series GPUs all support AV1 hardware-accelerated decoding.
When used with Windows 10 version 1909 or newer, PC users will be able to utilise the AV1 Video Extension, which is available on the Microsoft Store as a free download.
In time, more and more streaming services are due to start using AV1 video. YouTube and Twitch are already working to support the standard, and Netflix has already rolled out AV1 streaming support to users of selected Android devices.
Facebook has also expressed interest in AV1 video.
With AV1, video streams can either aim for higher image quality levels while using the same bandwidth as before, or opt to utilise less bandwidth while delivering the same quality levels as before. In the latter case, AV1 has the potential to dramatically decrease the bandwidth requirements of video streaming services, placing less strain on broadband networks and the services of video stream providers.
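As a rough illustration of the bandwidth claim above: a 50% larger compression ratio means the same quality fits in about two-thirds of the bits. The 6 Mbps H.264 bitrate below is an illustrative assumption, not a measured figure.

```python
# Rough illustration of the AV1 bandwidth saving (assumed bitrates).

H264_BITRATE_MBPS = 6.0  # assumed 1080p H.264 stream

# "50% larger compression ratio" => same quality in ~2/3 the bits.
av1_bitrate = H264_BITRATE_MBPS / 1.5
print(f"AV1 at the same quality: ~{av1_bitrate:.1f} Mbps")

# Data saved streaming a two-hour movie, in gigabytes.
hours = 2.0
saved_gb = (H264_BITRATE_MBPS - av1_bitrate) * 3600 * hours / 8 / 1000
print(f"Saved over a {hours:.0f}h movie: ~{saved_gb:.1f} GB")
```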
AMD Project Quantum Resurfaces in the Latest Patent Listing
AMD's Project Quantum has been quite a mysterious product. While we knew that it was an ITX-sized, water-cooled case that would feature an Intel CPU with an AMD GPU, we never knew if it was coming or not.
Featuring a unique two-chamber design, AMD developed two sections: one houses all the compute components, and the other contains the radiator and fan for dissipating the heat produced by the compute chamber.
Four years ago, we got the news that the project wasn't dead and that it would get an update with AMD's then-upcoming Zen CPU and Vega GPU. However, since that announcement, there has been no word on it.
Until today. Thanks to Twitter user PeteB (@Pete_2097), who found a newly listed patent, it seems the hope for Project Quantum is not yet dead.
On September 15th, AMD filed a patent for Project Quantum, now protecting the unique design and possibly saving it for some time in the future.
It is almost certain that the company has not abandoned the project, and it could be just waiting for the right time to launch it.
Need my daily RDNA2 fix...
Fortunately --- Lisa Su to the rescue !
Thanks for all the excitement from our fans who joined us last week to launch @AMDRyzen
5000 series. I look forward to seeing all of you again on Oct 28th as we show off our “Big Navi”,
RX 6000 series !!
Oh and Paul from RGT too, BIG(ger) Navi (than teased) // confirmed?
May as well just mention that the card used for benching was the RX 6800XT. So yeah, we have the 6900XT and 6900XTX above those performance levels. Unfortunately, it also seems AMD is following NVIDIA's pricing scheme.
AMD can't follow those prices without the same RT performance and without something like DLSS.
same story 5700XT vs 2070s
Well, they can follow to a certain degree, if RT is roughly the same. DLSS, then again, is debatable, as it's not available in all games anyway, even though it is a pretty good thing.
I am actually looking to buy myself an RDNA2 card right now, since I am just meh'd by Nvidia's 3000 launch.
There's no doubt about it, this has been one of the worst launches ever, if not the worst. The only saving grace is the performance, other than that, not much else to say.
AMD will have a technology similar to DLSS.... I call it RDLSS internally....
It will offer better performance and will be easier to implement, but on the downside it won't offer the same quality as NV's (well, depending on the quality setting... at least not in the highest-quality mode, where the difference between the real resolution and the upscaled resolution is smallest).
New day, new details about RDNA2 from Paul at RGT >
So basically the strongest SKU // as I have also said --- depending on the title - on par with, slightly faster or slightly slower than the 3090; RTX performance lower than Ampere's (he didn't say by how much; my assumption --- still better than the 20xx series), etc.
Seems he's "confirming" my leak that the benches were done on a RX 6800XT.
That's really nice performance if true. Competition is great in the high end.
So... according to Patrick Schur (unfortunately I don't know this source) the BIG NAVI (NAVI 21XT) would clock up to 2.4 GHz as the gaming clock.
Navi 21 XT
~2.4 GHz (Game Clock)
And there is still a bigger navi (XTX) there.
So... I don't trust this source BUT if true then we are again speaking about --- 6900XT slightly faster than 3080 but slower than 3090, and the presumed 6900XTX slightly faster or on par with 3090.
TGP is only the GPU side, i.e. the "lower" figure. Add 40-50 watts for RAM and the VRM losses; GDDR6 is not as power-hungry as GDDR6X. So a TDP of roughly 300W for the high end.
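The board-power arithmetic above can be written out explicitly. The 255 W GPU-only TGP below is an assumed example value chosen to land near the ~300 W total quoted above; only the +40-50 W adder for RAM and VRM losses comes from the post.

```python
# Back-of-envelope board power estimate (assumed GPU-only TGP).
tgp_gpu_only = 255          # assumed Navi 21 XT GPU-only TGP, watts
ram_and_vrm = (40, 50)      # adder range for RAM + VRM losses, watts

low, high = [tgp_gpu_only + x for x in ram_and_vrm]
print(f"Estimated total board power: {low}-{high} W")  # ~300 W
```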
More and more sources are reporting stuff similar to what I have been saying over and over again... I should play the lottery, maybe.
Will be an interesting fight between AMD and NV!
Competition in the high end spurs companies on to bigger and better things. This is what we need.
Yeah, it seems interesting, though a bit odd too, with the driver info and details here; trying to think a bit on all of it.
(Which yeah that's not going to go well but still had to give that a try ha ha. )
Like there's a missing GPU model here with somewhere around 60-70 compute units, instead of AMD's 80-or-bust where it's all down to 40, but then these are clocking really high to make up for it. I wonder how a 72 or 64 CU model or some such would perform; if scaling has been improved, it could do really well.
Hmm, I wonder if AMD will fill out the 6700s and 6800s with either 50s or XTs later on. I assume the 6900 is already something like a 6900 XT, and the B model is the same deal as Vega64 and Navi10 saw with the XTX-type model.
Of course, that also leaves room for a non-XT 6900 that could be above the 40 CUs, or whatever AMD's going to do.
6950 XT HBM2 too?
(Unlikely, maybe the CDNA's will be HBM though but that's for later.)
EDIT: GDDR6 is at speeds from 14 up to 18 Gbps now too, I think, but the X model I saw listed at 19-21, so that has further room too, should NVIDIA go with faster memory for newer cards.
Not too sure what the higher speeds mean for density or RAM capacity per module though; 2-4 GB for GDDR6 I think I saw somewhere, and currently 1 GB for the X model, but I would expect that to improve if GDDR6X is going to see further development over time.
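Worth keeping the units straight here: memory speeds are quoted per pin in Gbps, while total bandwidth in GB/s depends on the bus width. The 256-bit and 384-bit buses below are assumed example configurations, not confirmed Navi 21 specs.

```python
# Convert per-pin memory speed (Gbps) + bus width (bits) into total
# memory bandwidth (GB/s). Bus widths below are assumed examples.
def mem_bandwidth_gbs(gbps_per_pin, bus_width_bits):
    return gbps_per_pin * bus_width_bits / 8

print(mem_bandwidth_gbs(16, 256))    # 512.0 GB/s (16 Gbps GDDR6, 256-bit)
print(mem_bandwidth_gbs(19.5, 384))  # 936.0 GB/s (19.5 Gbps GDDR6X, 384-bit)
```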
All those 40 CU GPUs are going to be nice for ensuring availability though, I would think; maybe not all of them will boost acceptably as 6800s, but that leaves the 6700s with stock too.
(BIOS CU unlocking as a possibility would be really neat for these, but yeah, that's probably not happening.)
IMHO - there won't be a 6950 XT with HBM2... all "consumer" variants will come with GDDR6. On the other hand, the CDNA2 branch is something different from what "pure" RDNA2 for consumers offers.
But /// it is not necessarily bad that we won't have HBM2 variants... remember the special sauce and Infinity Cache?
But you haven't heard that from me
I'm dying to get my mitts on a 6800XT or 6900XT (or whatever they'd be called then). IF there's a 6900XTX, I may not go for such a powerful GPU, as it'd be for my main monitor, which does 3840x1080 at 144Hz. I almost went for the RTX 3080, but due to the lack of supply and the ridiculous price here in my neck of the woods, I missed out... but I'm glad I missed out after hearing of the CTD issue.
I may not go for a Zen 3 CPU though, my 3900X is more than enough for what I need it to do....
I would not be at all surprised if AMD pulls a rabbit out of the hat at the last minute by announcing two very fast GPUs at the top of the Big Navi line-up.
Nobody expected the most extreme of the Ryzen 5000 series CPUs either, and it was held right to the end of the introduction.
Same thing could be coming in the GPUs.
Maybe the 6900XT with GDDR6 and a further 6900XTX with HBM to administer a 'knock-out' punch to the RTX 3090 at maybe $1199.
It is very exciting to be in the market for a graphics card.
Lots of choices.
I'm thinking they might actually do it: a 6900 series with HBM as a flagship halo product, a very limited run, but there just to have that performance crown.