Guru3D Content Reviews: AMD Radeon RX 6800 and 6800 XT

Discussion in 'Frontpage news' started by Hilbert Hagedoorn, Nov 18, 2020.

  1. kapu

    kapu Ancient Guru

    Messages:
    4,136
    Likes Received:
    173
    GPU:
    MSI Geforce 1060 6gb
    Just benchmarked Valhalla with the same settings as Guru3D and scored exactly the same FPS. Keep in mind I have a 3300X :) , yet zero bottleneck in AC Valhalla; GPU usage was 97-100%.
    In older games I guess the bottleneck will be bigger :)
     
  2. Fox2232

    Fox2232 Ancient Guru

    Messages:
    11,205
    Likes Received:
    2,996
    GPU:
    5700XT+AW@240Hz
    Yeah, in the CPU-scaling tests I have seen for the game, it does scale with the CPU up to 4C/8T, but the benefit of SMT is only about 8~10% over what 4C/4T delivers.
    It is a really GPU-heavy game. Do you happen to have Vermintide 2? It is a bit older, but that one scales up with the CPU a lot.
     
  3. xynxyn

    xynxyn Member

    Messages:
    23
    Likes Received:
    1
    GPU:
    RTX 3080
    About pricing being too high, which I completely agree with, I think the issue is that we still have 1080p as the baseline. Good-to-great cards at that resolution are still 200-300€. Want double or triple the performance? Pay double or triple the amount.
     
    AlmondMan likes this.
  4. xynxyn

    xynxyn Member

    Messages:
    23
    Likes Received:
    1
    GPU:
    RTX 3080
    Yes, nVidia's market dominance is purely down to conspiracy and fanboys... it has nothing to do with their performance and drivers. You can count feature sets as part of the drivers. Picture quality may be old history at this point, but nV had better AF/AA quality. And even recently, after Radeon introduced its sharpening, nVidia's image sharpening and upscaling outdid it with better results.
     

  5. Ryu5uzaku

    Ryu5uzaku Ancient Guru

    Messages:
    6,979
    Likes Received:
    209
    GPU:
    980
    I also remember the days when ATI had better IQ, and was better all around; back then I had nvidia though. You mentioned conspiracy theories and fanboys. The only reason AMD is in this situation is their lack of money to develop anything remotely competitive since the 290, which led to them having fewer features as well. And while the drivers don't make AMD abysmal, and I never personally had issues with AMD, that doesn't mean they didn't have bugs and issues. Now AMD is at a point where they can turn things around, as they actually have money for software R&D, whereas before they barely had any even for hardware. Their performance hasn't been this close to nvidia's in years, even though they did deliver solid mid-level stuff.
     
    AlmondMan likes this.
  6. Stormyandcold

    Stormyandcold Ancient Guru

    Messages:
    5,507
    Likes Received:
    278
    GPU:
    MSI GTX1070 GamingX
    Sorry, I can't take you seriously. You surely know how to write a rant, but you don't know anything about the history of Dirt 5 and its PC performance. Dirt 5 has dodgy performance, that's the reality, end of. You're imagining a scenario that is totally counter to what actually happened.

    That's got nothing to do with the RX 6000 series cards themselves, which I've already been quite vocally positive about (while sitting on the fence waiting for drivers to mature). You're assuming too much in your head trying to defend this bs.
     
  7. kapu

    kapu Ancient Guru

    Messages:
    4,136
    Likes Received:
    173
    GPU:
    MSI Geforce 1060 6gb
    Of the games that favor the CPU and scale well, I only tested WWZ: around 180 fps at 60% GPU usage, so there is quite a big CPU bottleneck, but I don't care as the fps is high :). Soon I will switch to 1440p and will see how that affects performance.
    I don't really understand why people say the 6800 or 3080 is not for 1080p. I only get 100-120 fps in Valhalla; that's nice fps with some headroom for extra effects in future games. A perfect GPU for above-60-fps 1080p gaming. 4K is a joke.
     
    Only Intruder, Embra and Fox2232 like this.
  8. xynxyn

    xynxyn Member

    Messages:
    23
    Likes Received:
    1
    GPU:
    RTX 3080
    I sure hope so, we as consumers can only win if both trade blows.
     
  9. xynxyn

    xynxyn Member

    Messages:
    23
    Likes Received:
    1
    GPU:
    RTX 3080
    The 3080's performance scaling at lower resolutions is lacking; it performs best at 4K. Hardware Unboxed explained this in its review: the 2080 Ti and the 3080 are practically the same at 1440p, while at 4K the 3080 can be up to 30% faster. 4K is not a joke on my 65" HDR TV ;) I would agree it's overkill below a big-screen experience.
    Consider that with games like AC:V you are playing singleplayer action/adventure games. Their engines aren't designed for high FPS. In some 3rd-person games the animation system breaks when going above 60 FPS, or stuff bugs out. Recently the PC Master Race has had a really hard time accepting that most 3rd-person and singleplayer games are designed around an FPS cap. Those aren't competitive online games where only FPS matters.
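
    To illustrate that point, here is a minimal sketch of my own (not code from any actual engine, and all numbers are made up): logic that applies a per-frame constant behaves differently at 144 FPS than at the 60 FPS it was tuned for, unless the engine runs the simulation on a fixed timestep.

    ```python
    # Illustrative sketch only: why game logic tuned around a 60 FPS update
    # can misbehave at higher frame rates, and the usual fixed-timestep fix.
    # All constants here are made up for demonstration.

    FIXED_DT = 1.0 / 60.0  # the update rate the logic was tuned for

    def damped_step(velocity, dt):
        # dt is ignored on purpose -- that is the bug. "Lose 10% per frame"
        # only means the intended decay per second at exactly 60 FPS.
        return velocity * 0.90

    def simulate_per_frame(frames, dt):
        v = 100.0
        for _ in range(frames):
            v = damped_step(v, dt)
        return v

    # One second of simulation at 60 FPS vs 144 FPS gives different results:
    print(simulate_per_frame(60, 1 / 60))    # ~0.18
    print(simulate_per_frame(144, 1 / 144))  # ~0.00003 -- far more damping

    def simulate_fixed_timestep(render_frames, render_dt):
        # Accumulator pattern: advance the simulation in FIXED_DT chunks
        # regardless of how fast frames are rendered, so behaviour matches.
        v = 100.0
        accumulator = 0.0
        for _ in range(render_frames):
            accumulator += render_dt
            while accumulator >= FIXED_DT:
                v = damped_step(v, FIXED_DT)
                accumulator -= FIXED_DT
        return v

    print(simulate_fixed_timestep(144, 1 / 144))  # ~0.18 again, same as 60 FPS
    ```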
     
  10. Aura89

    Aura89 Ancient Guru

    Messages:
    8,110
    Likes Received:
    1,250
    GPU:
    -
    ....no, literally no reviews state this, including hardware unboxed.
     
    Witcher29 likes this.

  11. xynxyn

    xynxyn Member

    Messages:
    23
    Likes Received:
    1
    GPU:
    RTX 3080
    Oh sorry, my bad, it's 20% at 1440p on average; I remembered wrong. It's already(?!) been over 2 months... still, the point was that the 3080 scales better at 4K.
     
  12. Stormyandcold

    Stormyandcold Ancient Guru

    Messages:
    5,507
    Likes Received:
    278
    GPU:
    MSI GTX1070 GamingX
    We need a Twitter-like system to highlight false information.
     
    AlmondMan likes this.
  13. PrMinisterGR

    PrMinisterGR Ancient Guru

    Messages:
    7,461
    Likes Received:
    485
    GPU:
    Sapphire 7970 Quadrobake
    So, AMD went from being two node jumps behind to being half a node jump behind. This is very interesting. They really need to fix their driver though, especially for DX9-DX11. OpenGL I won't even mention, as they pretend it doesn't exist. Hope everything is fixed for RDNA3.

    Although my personal theory is that the PC chips exist so they can make extra profit from the console designs, and not the other way around. If you purchase wafers at TSMC, you want them all to be tiny, expensive Ryzens, not huge Radeons destined for a price war with NVIDIA. That's what makes me believe the PC Radeons are mostly decorative at this point.

    Also, the fact that projects like DXVK sometimes do a better job than the driver with older APIs tells me a lot about the effort AMD puts into it. All of their extra features look like they could have been done by just Unwinder himself and the person making RadeonPro. Two. People.

    EDIT: I'm being completely unfair to those two people here. Their software has more capabilities: enforced Vsync, proper frame limiters, video capture that doesn't suck, and a ton of other things the basic driver can't even do (hello, Vsync). AMD seems to have completely called it quits on anything really hard in the driver.
     
    Last edited: Nov 22, 2020
    Only Intruder, Valken and carnivore like this.
  14. Fox2232

    Fox2232 Ancient Guru

    Messages:
    11,205
    Likes Received:
    2,996
    GPU:
    5700XT+AW@240Hz
    Yeah. Within a day I would be at -44,000, as some people would just feel the need to go and downvote anything and everything. Thank you for the idea. It's a bad one, so I am against it.
    = = = =
    @PrMinisterGR : In theory I could agree with one part of your statement but not the other, and vice versa.
    If we go with the premise that AMD's GPUs are technologically behind, then the drivers look rather flawless to deliver such performance.
    If we go with the premise that the drivers suck, then those GPUs may just as well be vastly superior to Turing.
    In reality, you see AMD's smaller GPU, equipped with extra VRAM (which increases production cost), competing with a larger GPU by using a higher clock.
    Even while running that higher clock, this smaller GPU has better power efficiency, which leads to the observation that, technologically, the power side of the GPU is ahead of Turing.
    The architecture is made for higher clocks too, and the presence of the Infinity Cache, which runs at GPU clock, decouples part of the bandwidth from the memory chips while providing superior latency, from which specific operations benefit greatly. So AMD's design is superior in terms of moving data around too. RDNA1 already had pretty good scheduling; RDNA2 greatly improved on it.
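
    A rough way to picture that decoupling (a sketch with assumed numbers, not AMD's official figures): if a fraction of the memory traffic is served from the on-die cache, the effective bandwidth is a blend of cache and VRAM bandwidth.

    ```python
    # Illustrative back-of-envelope sketch of how an on-die cache lifts
    # effective bandwidth. The hit rate and cache bandwidth below are
    # assumptions for demonstration, not AMD specifications.

    def effective_bandwidth(hit_rate, cache_bw_gbs, vram_bw_gbs):
        """Blend cache and VRAM bandwidth by the fraction of traffic served from cache."""
        return hit_rate * cache_bw_gbs + (1.0 - hit_rate) * vram_bw_gbs

    vram_bw = 512.0    # GB/s, ballpark for a 256-bit GDDR6 bus (assumed)
    cache_bw = 1600.0  # GB/s, hypothetical on-die cache bandwidth

    for hit_rate in (0.0, 0.4, 0.6):
        bw = effective_bandwidth(hit_rate, cache_bw, vram_bw)
        print(f"hit rate {hit_rate:.0%}: ~{bw:.0f} GB/s effective")
    ```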

    You are right about the part where AMD's DX11 drivers suck in certain games. I do use DXVK here and there. The drivers subjectively suck for me as I am a high-fps gamer, but let's not give ideas to people who target 60 Hz screens.

    But back to the cards. Taking into account GPU performance normalized to the same power draw, AMD's raw performance per $ is equal to Turing's in the same price range (RX 6800 XT vs RTX 3080). As for being equal in DX-R, it is too early to even know, because games clearly do not work properly when it is enabled on RDNA2. (Like the fps is gone but the effect is missing, and that shows either the effort put in by the developer or how broken the drivers are. Or maybe something in between.)

    If anything, that says AMD may still fix their semi-dysfunctional drivers. And nobody knows whether it will be in this December's release or in a year.
     
  15. PrMinisterGR

    PrMinisterGR Ancient Guru

    Messages:
    7,461
    Likes Received:
    485
    GPU:
    Sapphire 7970 Quadrobake
    I didn't say anything about the hardware. The hardware is designed with the help of three of the largest companies in the world (Microsoft, Sony and Samsung). This is a false dichotomy. There are games where the DX12/Vulkan driver is used and performance is fine, and there are games using DX11/9 where the drivers starve the GPU, which is quite evident. And yes, AMD jumped two generations with this, which is amazing. To continue the meme: can't wait for RDNA 3.0.

    My theory is that AMD will keep doing nothing about the drivers until DX11/9 "go away". I'm not certain this will happen any time soon though.

    Turing is one and a half nodes back; they are superior to Turing, as the results show.

    Yeah, no. I don't know why you keep comparing them to Turing. RDNA2 has 1.5bn fewer transistors than Ampere, just 5.4% less, a gap far narrower than the one they have in ray tracing, and (don't forget) with no Tensor cores. I wonder how much it costs AMD to do the sample clean-up after the rays are cast, since the cards have no dedicated hardware for that.

    The issue with the drivers has existed since at least GCN, and we all know it on this forum. AMD has done nothing to rectify it for almost a decade now. We know from the consoles that GCN had no issue with scheduling at the hardware level, but in Windows the driver sucked immensely and really starved the GPU compared to NVIDIA's driver. It was also obvious that AMD was optimizing games on a per-case basis much less frequently than NVIDIA. In Hilbert's tests, I believe you can see which games are optimized and which are not. I also bet that the cards are terrible in older, less popular DX11 games and in DX9 games.

    Nah, I don't trust them to do that, honestly.

    I'm really harboring the idea that the Radeon cards are more of a statement from AMD to custom-chip clients that they can deliver a competent piece of hardware than any real indication that AMD wants to compete.
    And why would they? Given the same wafer space, selling Ryzen is much more profitable.

    If they wanted to break the market, they could have dropped the prices of the new GPUs by $150 (like they did with Vega, when the RTG was basically just trying to survive), and they could have taken a serious part of the pie from NVIDIA. For now, the GPUs look like a fluff piece for laptop and custom designs.
     
    AlmondMan and Valken like this.

  16. Fox2232

    Fox2232 Ancient Guru

    Messages:
    11,205
    Likes Received:
    2,996
    GPU:
    5700XT+AW@240Hz
    @PrMinisterGR Now I see what you meant by node. Sorry, my bad; you did not mean it to be of any relevance. But for fun, please write how many nodes behind each of these is:
    Turing
    RDNA1
    Ampere
    RDNA2

    And add the point of reference that is 0 nodes behind.

    As for the rest, like the GCN issues: you are mixing up what is HW and what is software. Using conjecture about DX-R does not help either.
    You've been given a tool for comparison that keeps things fair, yet you did not use it yourself. (I see no point in your argument there, as it is completely removed from context.)
    So go and compare Ampere with RDNA2 at the same power draw and tell me what the performance per $ of those cards is. And then elaborate on what that says about AMD's smaller GPU.
     
  17. Stormyandcold

    Stormyandcold Ancient Guru

    Messages:
    5,507
    Likes Received:
    278
    GPU:
    MSI GTX1070 GamingX
    It's not about votes; it's about posts with false information being highlighted as such, like post 309. It's not about opinions, which is what you think you'd be downvoted for, but rather facts.
     
    PrMinisterGR likes this.
  18. Fox2232

    Fox2232 Ancient Guru

    Messages:
    11,205
    Likes Received:
    2,996
    GPU:
    5700XT+AW@240Hz
    In that case, it would be even worse. One thing is that we can disagree on something for eternity,
    and someone comes along, reads our exchange, and agrees or disagrees with one or both of us. That is about the perception of two people.

    But marking things with a resolute True or False on this forum will end up showing what the consensus is rather than what is actually true or false.
    And then someone from outside comes along and thinks that is what the G3D site identifies with.

    Kind of like Facebook deciding what is fake news and what is not, or whose account should be banned at election time to prevent the spread of a person's ideas.
    Except it would only look like that to an uninformed person from outside; in reality it would be about which side of an argument gets more people to agree.

    Like in our 4K discussion: it could easily happen that 4K would be considered viable. In another thread you told me that a laptop with a 1060 is 4K viable. The 4K discussion is getting more ridiculous by the minute; in reality such a machine can barely play semi-new games at 1080p with reasonable details.

    By now I know that what you consider image quality is the edge of a polygon, not the realism of a pixel. Sure, 4K/8K/... will give you more pixels, so you can make out the shapes of distant objects better.
    But that comes at the cost of reduced per-pixel detail for everything that does not need the extra pixels for contouring.

    If I could have true photorealism with movie-like effects at 1080p, or pick current graphics at 4K, I would pick the 1080p movie-like visuals.
    Soon the following will be achievable at 1080p, or people can stay on 4K and keep what they have now.

    And that's the question: do you want each leaf separated at 4K with minimal shading, or do you want lovely shading in environments that just pull you into the game?
    (Most of the techniques used in this movie are already available via DX-R, but we are a few steps away from having them at the performance needed to be game viable.)
     
  19. Astyanax

    Astyanax Ancient Guru

    Messages:
    8,411
    Likes Received:
    2,810
    GPU:
    GTX 1080ti
    Wait, seriously?

    AMD should drop support for everything older than Navi (I mean, it's historically what they do anyway) and get rid of all the uncommented code from their crap so they can really work on things.

    Nooooo, it's been there since the very first Radeon capable of DX11.
     
    Last edited: Nov 22, 2020
    PrMinisterGR likes this.
  20. PrMinisterGR

    PrMinisterGR Ancient Guru

    Messages:
    7,461
    Likes Received:
    485
    GPU:
    Sapphire 7970 Quadrobake
    Nvidia has better performance with an extra 7% of transistors, while being half a fab node (if not more) behind. I can't understand how this is not clear just from reading the numbers and any review (including this one).

    It's not really a matter of opinion.

    RDNA 2.0 doesn't have a hardware denoiser, and it's 50% slower in actual ray calculations.

    To sum it up:

    It can find what is in the way of a ray twice as fast as Ampere, but then calculates the rays twice as slowly, and cleans the final product up using shaders instead of dedicated hardware.

    In what universe this combo is faster than Ampere, I truly cannot fathom. Unless of course there's fuckery, which neither NVIDIA nor AMD are strangers to.

    Do we compare at the same power draw, or with performance per dollar?

    At the same power draw, the 3080 would be at the same average performance as the 6800 XT (the 6800 XT only uses around 8% less power, after all), while still wrecking it in ray tracing and in non-hand-optimized games where the AMD driver traditionally sucks.

    In performance per dollar, which is ANOTHER metric that you so confidently mix in, the clear winners are the 3070 and the 6800.
    The 6800 would be a legendary card if it sold $130 cheaper. Alas.
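
    For what it's worth, here is roughly how that metric shakes out at launch MSRPs; the relative performance indices in the sketch are placeholders I picked for illustration, not Guru3D results.

    ```python
    # Rough perf-per-dollar sketch using launch MSRPs. The relative-performance
    # figures are placeholder assumptions for illustration, not review data.

    cards = {
        # name: (launch MSRP in USD, assumed relative performance index)
        "RTX 3070":   (499, 100),
        "RX 6800":    (579, 110),   # assumed ~10% ahead of a 3070
        "RX 6800 XT": (649, 122),
        "RTX 3080":   (699, 125),
    }

    for name, (price, perf) in cards.items():
        print(f"{name}: {perf / price * 100:.1f} perf points per $100")
    ```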
    It's not as if any great effort is being made for the older cards anyway. I don't think anything would change at all. Navi really was a pipe cleaner for the 7 nm console launches; it almost seems like it was sold just to prove to Microsoft and Sony that the core design would yield well.

    I think it will be the first AMD GPU to go obsolete this fast since the old 6000 series. Turing, in comparison, seems to have a much better future; it basically has all the DX12 Ultimate features.

    And to expand on something else:

    I thought of this yesterday, when I realized that for each Radeon made, AMD could be making five Ryzen chiplets. Ryzen has much higher profit margins, better yields, lower third-party material costs (no VRAM or PCB to care about), and lower software costs.
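
    The back-of-the-envelope version of that point (die sizes are rough public figures; wafer yield and edge losses are ignored, so treat it as illustrative only):

    ```python
    # Rough wafer-area comparison behind the "several Ryzen chiplets per Radeon" point.
    # Die sizes are approximate public figures; yield and wafer-edge losses are ignored.

    navi21_mm2 = 520    # Navi 21 (RX 6800 / 6800 XT / 6900 XT) die area, roughly
    zen3_ccd_mm2 = 81   # Zen 3 CPU chiplet (CCD) die area, roughly

    chiplets_per_radeon = navi21_mm2 / zen3_ccd_mm2
    print(f"~{chiplets_per_radeon:.1f} chiplets' worth of wafer area per Navi 21 die")
    ```

    And since the much smaller chiplet also yields better per wafer, the real ratio of sellable parts only tilts further toward Ryzen.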

    Why bother at all with Radeon? Their main competitor will never be caught in productivity as they've defined the sector and they're literally a decade ahead in software with CUDA.

    The driver situation, superfluous features apart, is the same: OK Vulkan and DX12, usually terrible DX11 unless a game is hand-optimized by AMD, horrible out-of-the-box performance with the two most popular engines (Unreal & Unity), terrible DX9, and non-existent OpenGL. So why do they bother?

    My stupid "conspiracy" theory says that they design new GPUs with the help of Microsoft and Sony, and the desktop GPUs are there to get a few extra bucks from the custom designs, and keep the GPU division on its toes until the next major console release cycle has to be prepared. They are still using Vega (albeit with improvements) for APUs, which also tells me that graphics for notebooks aren't even a priority unless Intel smacks them.

    AMD doesn't care about desktop GPU sales. They have a huge opportunity cost compared to Ryzen, and they will never have NVIDIA's software ecosystem. They only bother with a desktop GPU to maintain a presence until the next console refresh, which is always what they base their new designs on anyway.
     
    Last edited: Nov 23, 2020
