Review: ASUS Radeon RX 5700 XT ROG STRIX

Discussion in 'Frontpage news' started by Hilbert Hagedoorn, Aug 12, 2019.

  1. alanm

    alanm Ancient Guru

    Messages:
    12,272
    Likes Received:
    4,476
    GPU:
    RTX 4080
    A few days ago...

    Sorry, couldn't resist, sykozis. :D
     
  2. jbscotchman

    jbscotchman Guest

    Messages:
    5,871
    Likes Received:
    4,765
    GPU:
    MSI 1660 Ti Ventus
    I gotta give credit to AMD for how Navi turned out. $349 and $399 are very respectable prices for the high-end performance you're getting, compared to Nvidia's RTX lineup. Maybe now prices will start coming down on new releases.
     
  3. sykozis

    sykozis Ancient Guru

    Messages:
    22,492
    Likes Received:
    1,537
    GPU:
    Asus RX6700XT
    And as I said in both that thread and this one, I go through this with every new architecture. Also, within a few driver releases, the issues will be worked out like every other time.

    The HD4850 experienced stuttering, texture corruption, colors flashing, and DX/driver crashes. The HD7870 did the same things, as did my RX470 and R5 240. Actually, the R5 240 still did the last time I booted the system it's in....but that's not driver-related. That's actually hardware-related. Won't get into what's causing it....but it'll never get fixed because I simply don't care. My HD7950 is the only card that hasn't, and that card was purchased toward the end of its product life.

    I'm still not impressed with my RX5700. Yes, it easily outperforms my RX470. It even outperforms my 1660Ti.....which I also wasn't impressed with. Would it make you feel better if I said I was considering putting my GTX970 back in? It's sitting on the shelf behind me right next to the RX470, HD7950 and 1660Ti, as well as my i5 6600K..... Yeah, I keep a few graphics cards lying around.....

    But, anyway, those are literally the only driver-related issues I've had with AMD's drivers in the last 12 years....and as I said, each time they've been fixed within the first few driver releases. I also don't typically buy a new card from AMD unless there's a major architectural change. The HD7950 was a fluke.

    No, most issues are definitely user-induced.... If the drivers were really as bad as people on forums claim, AMD wouldn't still be in the graphics card business.
     
    airbud7 and Fox2232 like this.
  4. Astyanax

    Astyanax Ancient Guru

    Messages:
    17,040
    Likes Received:
    7,380
    GPU:
    GTX 1080ti
    Yeah, wrong.
     

  5. sykozis

    sykozis Ancient Guru

    Messages:
    22,492
    Likes Received:
    1,537
    GPU:
    Asus RX6700XT
    So, since you're convinced that AMD's drivers are the problem, explain why there are so many people who have no issues..... Because, fact is, if the drivers were as broken as some people on this forum claim, EVERY.SINGLE.USER would be experiencing all of the same problems.... Forums would be absolutely flooded with complaints. No system builder would touch AMD's graphics cards, simply to avoid complaints. Consumers wouldn't buy AMD's graphics cards, to avoid issues. AIBs would refuse to make AMD-based graphics cards, because RMAs are costly. Most retailers would refuse to carry them. AMD would be out of the graphics market.... Fact is, the drivers are not as broken as a handful of users on this forum claim. Most issues are caused by the user.
     
    airbud7 and carnivore like this.
  6. Astyanax

    Astyanax Ancient Guru

    Messages:
    17,040
    Likes Received:
    7,380
    GPU:
    GTX 1080ti
    AMD's drivers, just like Nvidia's, reprogram the whole card at driver init, replacing the functions of the BIOS. "Optimizations" at the transistor level that work fine for a reference card can then destabilise non-reference boards. There's also base-board and device inter-compatibility that affects the behavior of drivers on any given system.

    It's just about impossible for a normal user to be the cause of a driver screwing up on either vendor, unless they've done stupid stuff like logging in on the root system account and going to town deleting registry keys.
     
  7. MonstroMart

    MonstroMart Maha Guru

    Messages:
    1,397
    Likes Received:
    878
    GPU:
    RX 6800 Red Dragon
    But are they real? I've read a lot of people online claiming to have major problems with 1st-gen Ryzen. It worked perfectly for me. I did not buy it on launch day, but it's not like I bought it a year later either; I think it was 5-6 months old or something. RAM worked perfectly at 3200. I did not have a single issue with it, outside of the Logitech G software crashing, and that was eventually solved by a Logitech G update; I don't think it was a CPU problem at all. But you can still read people saying there were so many problems it was atrocious. I'm sure there was some trouble the first few days, maybe weeks, but it looks grossly exaggerated by some.

    Launch-day problems are expected, and they can happen with any company. I had problems with my nVidia card when Vista launched. Vista was the last thing I bought and installed on launch day, because of all the trouble I had with my GPU and my Creative sound card. I always wait 3-4 weeks now.

    It seems like there's some trouble with overclocking, but overclocking trouble in the first few weeks is not unheard of, far from it. In fact, I gave up overclocking my 1070 since it has stupid Hynix memory and apparently I was not lucky in the silicon lottery. Trying to get more than 100 MHz on the core and/or memory gives me noticeable artifacts, and even crashes if I persist. Should I blame nVidia for that? Nah, I know better.
     
    Last edited: Aug 13, 2019
  8. MonstroMart

    MonstroMart Maha Guru

    Messages:
    1,397
    Likes Received:
    878
    GPU:
    RX 6800 Red Dragon
    Windows forums are something. Half the people are n00bs who don't know what they are doing and are inadvertently screwing up their own computers. The other half is made up of people who think Windows is perfect, will deny any potential problem, and will tell people to reinstall every piece of driver and software they have installed to solve it. It drives me crazy.
     
    HandR and carnivore like this.
  9. PrMinisterGR

    PrMinisterGR Ancient Guru

    Messages:
    8,129
    Likes Received:
    971
    GPU:
    Inno3D RTX 3090
    If I am on 16nm, and my GPU uses less power to be faster than your 7nm GPU, you're f*cked the moment I go to 7nm.

    My architecture is so much better that your chips cannot do the same thing at the same power as mine, despite your chips being a major node ahead.

    What is so difficult to grasp here? The 5700XT can barely keep up with the 2070 Super. What happens when Nvidia gets a 7nm "2070 Super"?
     
  10. PrMinisterGR

    PrMinisterGR Ancient Guru

    Messages:
    8,129
    Likes Received:
    971
    GPU:
    Inno3D RTX 3090
    1. That's not what a strawman is. I made an argument that others are making too: that despite the 7nm shrink and the new architecture, AMD doesn't seem to have what it takes to actually take on NVIDIA at 7nm.
    2. It runs too hot, uses too much power, and delivers too little performance considering the power, die size, and 7nm process.
    3. If Nvidia improves nothing in their architecture and just shrinks it, you would get an RTX 2080 Super using 50W less, at the same die size. That's with the RT cores and the rest. (See the sketch below.)

    Are the issues clear now? AMD has no other die shrink in sight, and this is the arch until at least 2021. This is not enough.
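    A minimal back-of-the-envelope sketch (Python) of the naive scaling arithmetic behind point 3. The 0.6 area factor and 0.7 power factor are illustrative assumptions about a 12nm-to-7nm shrink, not foundry-confirmed figures, and Exodite's post further down argues that real designs fall short of exactly this kind of ideal scaling:

        # Naive "shrink Turing to 7nm" estimate. All figures are rough
        # public ballparks, not measurements.
        TU104_AREA = 545     # mm^2 at 12nm (RTX 2080/2080 Super die)
        TU104_POWER = 250    # W board power, approximate
        AREA_FACTOR = 0.6    # assumed 12nm -> 7nm area scaling
        POWER_FACTOR = 0.7   # assumed power scaling at the same clocks

        print(f"area:  ~{TU104_AREA * AREA_FACTOR:.0f} mm^2")
        print(f"power: ~{TU104_POWER * POWER_FACTOR:.0f} W "
              f"(~{TU104_POWER * (1 - POWER_FACTOR):.0f} W saved)")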
     

  11. Hilbert Hagedoorn

    Hilbert Hagedoorn Don Vito Corleone Staff Member

    Messages:
    48,544
    Likes Received:
    18,856
    GPU:
    AMD | NVIDIA
    Can we drop the sharp tone a bit and keep the discussion friendly?

    The 7nm/12nm/14nm comparison actually is a bit irrelevant in this discussion. If NVIDIA brings the current architecture to 7nm, it will not bring them much more than a change in voltages and perhaps a small process improvement, and thus a tiny bump in clock frequency. Smaller fabrication does not bring advantages over architecture other than that.

    The biggest advantage of a smaller process is two-fold: you can cram more transistors into the same real estate, or, for the same design, cut more dies from each wafer. That, in the end, means that if at 14nm you can make 250 dies, on 7nm you can make 500 of them (theoretical numbers here). As for AMD, they do have big-NAVI in the works. I expect some announcements around CES time.
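    A quick illustrative calculation (Python) of the dies-per-wafer point; the 250mm2 die size and the ideal 2x density gain are placeholder assumptions, just like the numbers in the post:

        import math

        WAFER_DIAMETER = 300.0  # mm, standard wafer size

        def gross_dies(die_area_mm2: float) -> int:
            # Gross dies per wafer, ignoring edge loss and defects.
            wafer_area = math.pi * (WAFER_DIAMETER / 2) ** 2
            return int(wafer_area / die_area_mm2)

        print(gross_dies(250))        # design at the older node
        print(gross_dies(250 * 0.5))  # same design after an ideal full-node shrink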
     
    carnivore, HandR, Embra and 3 others like this.
  12. Fox2232

    Fox2232 Guest

    Messages:
    11,808
    Likes Received:
    3,371
    GPU:
    6900XT+AW@240Hz
    On the other hand, take RDNA with 10.3B transistors and compare it to Turing with 10.8B transistors at the same clock. RDNA is ahead. Power efficiency can and will be sorted out. AMD has half a dozen patents out in public that are not yet implemented. What about those that we do not know about? (I am sure that they are aware that their only weakness is power efficiency.)

    The 5700 XT is narrowly beaten by the 2070 Super? Then that's good, because the 2070 Super is based on 13.6B-transistor silicon. Make them both on the same node and your eyes would bleed from the price.
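    A hedged perf-per-transistor sketch (Python) of this comparison. The relative performance indices are rough placeholder assumptions (2070 Super = 100); only the transistor counts are published die totals, and as alanm notes further down, Turing's totals include RT/Tensor hardware:

        # Performance points per billion transistors, in the spirit of
        # the comparison above. Perf indices are assumed, not measured.
        cards = {
            "RX 5700 XT (Navi 10)":   (10.3,  95),   # (BTr, perf index)
            "RTX 2070 (TU106)":       (10.8,  90),
            "RTX 2070 Super (TU104)": (13.6, 100),
        }

        for name, (xtors, perf) in cards.items():
            print(f"{name}: {perf / xtors:.1f} perf points per BTr")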

    Power efficiency is a very important improvement for AMD; for nVidia it is a bonus. nVidia has to consolidate and get higher performance without increasing clocks or transistor count, because there AMD is slightly ahead and is likely to make more improvements (the SSIMD patent information, and Lisa Su stating that the 1st generation of RDNA uses a GCN-compatible arrangement of the WGP (dual-CU) while the next will go its own way).

    If I look at the known things AMD has in the pipeline for GPUs and compare them to nVidia's... it is just like an unpredictable storm. You never know where that lightning is going to strike.

    Historically, nVidia used their usual strong business strategy of picking one particular part of rendering, boosting it heavily and marketing it. AMD historically played catch-up and followed (always behind) while making other minor improvements that could not tip the scales to their advantage.
    With RT, nVidia overestimated their HW capability and the market's reception, given the transistor increase needed (cost and market price). And they practically lost when AMD said: "No, this is not the right time yet."

    AMD had RT in the pipeline long before nVidia introduced it via their specific HW, but they knew that they had to solve their memory access, caching and scheduling first. That happened with RDNA1.
    And AMD is still not pressed to deliver RT until the 2nd half of 2020 in consoles. That still leaves the same questions: what improvements has AMD been working on since finishing RDNA1?
    What improvements did AMD have ready, but decide were not suitable for the smaller RDNA1?
     
    Last edited: Aug 13, 2019
    Truder likes this.
  13. jbscotchman

    jbscotchman Guest

    Messages:
    5,871
    Likes Received:
    4,765
    GPU:
    MSI 1660 Ti Ventus
    What are you looking for then exactly?
     
    airbud7 likes this.
  14. Undying

    Undying Ancient Guru

    Messages:
    25,478
    Likes Received:
    12,883
    GPU:
    XFX RX6800XT 16GB
    He's just unlucky, I guess. His 1660 Ti was underperforming and now he's having issues with the 5700 XT. There's a good chance that if he bought a 2070S he would complain about it also. Idk what to tell him...
     
    jbscotchman and AlmondMan like this.
  15. Mpampis

    Mpampis Master Guru

    Messages:
    249
    Likes Received:
    231
    GPU:
    RX 5700 XT 8GB
    Performance doesn't scale with power above a certain threshold. Asus seems to have passed that threshold, and that's probably why they have two BIOS options.

    Yeah, but what you are saying is condition-specific. I, for example, could just set my MSI RX 480 to 1266MHz (from the 1303MHz factory overclock), set fan speed to 45% and keep my temps at 67°C max. The performance impact was close to nothing. But I could do that because I had a large case with 6 fans in it. This is not a general rule, nor does it mean that my Polaris GPU is exceptionally efficient.
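    A rough sketch (Python) of why such a small downclock costs so little: dynamic power scales roughly with f x V^2, so a few percent off the clock (plus a small undervolt) cuts power, and therefore heat and fan noise, disproportionately. The voltages here are made-up illustrative values, not measured RX 480 figures:

        # Dynamic power ~ f * V^2 (leakage ignored for simplicity).
        f0, v0 = 1303, 1.15   # factory OC clock (MHz), assumed voltage (V)
        f1, v1 = 1266, 1.10   # lowered clock, assumed slight undervolt

        power_ratio = (f1 / f0) * (v1 / v0) ** 2
        print(f"clock given up: {1 - f1 / f0:.1%}")                  # ~2.8%
        print(f"dynamic power:  ~{power_ratio:.0%} of the original")  # ~89%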

    What will happen is that the new GPU (let's call it the RTX 3070) will use less power. But you can't be sure that the die shrink will offer performance bumps, as you can't know how the architecture will scale with power consumption on a smaller node.
    P.S. How much would you say this superior architecture card will cost?
     
    HandR likes this.

  16. Exodite

    Exodite Guest

    Messages:
    2,087
    Likes Received:
    276
    GPU:
    Sapphire Vega 56
    No, it's definitely not clear.

    You just bypassed the entirety of my post and restated the very thing I argued against. Let's assume I wasn't clear enough; here's another example of why blind faith in die shrinks is misplaced.
    No, that's not how die shrinks work in practice. The foundries' marketing materials like to claim certain numbers that are technically true but don't actually translate to fully designed chips.

    Some numbers for reference;
    • Vega 10 is a 495mm2 chip with 12.5B transistors at 14nm (Samsung/GloFo). Rated at 210-295W.
    • Vega 20 is a 331mm2 chip with 13.2B transistors at 7nm (TSMC). Rated at 295W.
    • Navi 10 is a 251mm2 chip with 10.3B transistors at 7nm (TSMC). Rated at 180-225W (couldn't find the Anniversary Edition numbers, I'm afraid).
    • GP102 is a 471mm2 chip with 11.8B transistors at 16nm (TSMC). Rated at 250W.
    • TU104 is a 545mm2 chip with 13.6B transistors at 12nm (TSMC). Rated at 210-250W.
    • TU102 is a 754mm2 chip with 18.6B transistors at 12nm (TSMC). Rated at 250-280W.
    While a new process node may reduce the size and power draw of an individual transistor, that doesn't translate perfectly across a whole design, as the list above (and the quick density calculation below) should make clear.
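    A quick transistor-density calculation (Python) over the figures listed above; everything here comes straight from that list, and density is of course only a crude proxy, since cache, I/O and analog blocks scale differently from logic:

        # Transistor density (millions of transistors per mm^2) for the
        # chips in the list above.
        chips = {
            # name: (area mm^2, transistors in billions, node)
            "Vega 10": (495, 12.5, "14nm"),
            "Vega 20": (331, 13.2, "7nm"),
            "Navi 10": (251, 10.3, "7nm"),
            "GP102":   (471, 11.8, "16nm"),
            "TU104":   (545, 13.6, "12nm"),
            "TU102":   (754, 18.6, "12nm"),
        }

        for name, (area, xtors, node) in chips.items():
            print(f"{name} ({node}): {1000 * xtors / area:.1f} MTr/mm^2")

    The 14nm-to-7nm jump comes out at roughly 1.6x density (about 25 to about 40 MTr/mm2), well short of the 2x-plus the raw node names suggest, which is exactly the point.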

    The 2080(S) is today faster than the 1080Ti overall, but it's also larger (~15% in both physical size and transistor count) while using less power on a smaller manufacturing node.
    The 2080Ti is ~60% larger (physical and transistor count) than the 1080Ti and uses comparable power on a smaller manufacturing node.

    The Radeon VII is ~33% smaller than Vega64, uses slightly more transistors (~6%) at a higher performance level and comparable power.
    Meanwhile the 5700XT is ~50% smaller than Vega64, uses fewer transistors (~18%) and notably less power, yet performs close to the Radeon VII.

    What's the takeaway here then?

    I'd argue it's threefold, namely:
    • The claimed process advantages of any particular node shrink, at the transistor level, aren't particularly close to what we're seeing from the end products.
    • Architecture design accounts for as much as, and occasionally more than, the process does, as evidenced by the Radeon VII vs. 5700XT comparison.
    • Taken together, we should be very careful not to expect future node shrinks, especially as we get ever closer to physical limitations, to solve any perceived performance or power concerns of current architectures.
    To address your example specifically, a strict node shrink of the 2080 would still likely be ~365mm2, or roughly ~45% larger and using ~32% more transistors than the 5700XT. If you're not pushing performance then yes, power might be at about the same level as the latter.
    You've now created a GPU that's 32-45% larger than the 5700XT and performing ~15% better, yay?
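    A quick check (Python) of the strict-shrink numbers in the previous paragraph; the ~0.67 area factor for a 12nm-to-7nm optical shrink is an assumption, while the rest are the die figures from the list further up:

        # Sanity-check the "strict shrink of TU104" estimate.
        tu104_area, tu104_xtors = 545, 13.6    # mm^2, billions (12nm)
        navi10_area, navi10_xtors = 251, 10.3  # mm^2, billions (7nm)
        AREA_FACTOR = 0.67                     # assumed 12nm -> 7nm shrink

        shrunk_area = tu104_area * AREA_FACTOR
        print(f"shrunk TU104: ~{shrunk_area:.0f} mm^2")                   # ~365
        print(f"area vs Navi 10: +{shrunk_area / navi10_area - 1:.0%}")   # ~+45%
        print(f"transistors vs Navi 10: "
              f"+{tu104_xtors / navi10_xtors - 1:.0%}")                   # ~+32%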

    I do believe Nvidia's next-generation cards will perform better than this, of course, though, much like the Radeon VII vs. 5700XT comparison, it's going to come down to architecture refinements more than brute-forcing a node shrink into the mix.

    TL;DR: I can't do much about your disappointment regarding AMD's 7nm performance in the GPU space. However, based on objective metrics, I feel it is, at best, misplaced. Navi performs quite well; expecting 7nm to do more at this size/power/performance level was never a realistic proposition.

    Edit: Math hard.
     
    Last edited: Aug 13, 2019
  17. alanm

    alanm Ancient Guru

    Messages:
    12,272
    Likes Received:
    4,476
    GPU:
    RTX 4080
    @Exodite, you can't compare Turing dies to anything from AMD, or to Pascal, due to the extra RT and Tensor cores.
     
    Maddness and Robbo9999 like this.
  18. barbacot

    barbacot Maha Guru

    Messages:
    1,002
    Likes Received:
    982
    GPU:
    MSI 4090 SuprimX
    So... from an Asus custom-cooled Radeon 5700XT review, we are now counting transistors, dies and any other s**t that nobody really cares about.
    Bottom line: after seeing the first review of a custom-cooled Navi, I see that in the middle and high end Nvidia has a real competitor now, while in the enthusiast range Nvidia is still king of the hill.
    I would expect a card from AMD in that range too - cards in the enthusiast range are not money makers, but they count for prestige, and AMD still has to gain that without their usual wait-for-the-next-gen game. Navi is solid and finally a good foundation to build on. Let's hope that we will hear good things.
    Oh.. and when Nvidia releases their I DON'T CARE HOW MANY nm chip, I'm sure that AMD will finally be ready after many years of being caught sleeping.
     
    AlmondMan and Maddness like this.
  19. Exodite

    Exodite Guest

    Messages:
    2,087
    Likes Received:
    276
    GPU:
    Sapphire Vega 56
    I'm not sure whether you're making that argument in jest or not, as humor translates poorly on the Internet, so I apologize if I'm missing the context here.

    It's a terrible argument.

    In general it's of course true that you can't directly compare any one architecture, regardless of vendor, to another, due to the inherent differences. For example, AMD's Vega and Navi seem to be more constrained by memory bandwidth than Nvidia's current gen.

    That being said, there's nothing magical about RT and Tensor cores; they're simply features like anything else on a GPU (encoder block, memory interface, caches, etc.), and in any comparison based on user-centric metrics (performance, price, power) they're already accounted for.

    Essentially, saying you can't compare Turing to any other architecture due to RT and Tensor cores is like saying you can't compare anything pre-Turing to AMD's Vega or Navi due to Rapid Packed Math. Which would be a terrible argument, as it's just another feature.

    Anyway, the focus of my post was to show that moving architectures to smaller process nodes doesn't generally produce the amazing results implied by the numbers provided by the foundries. And this holds across vendors, as transistors and silicon are just transistors and silicon.

    I.e. a strict functional node shrink of a TU104 die isn't going to be massively different from what we saw going from Vega10 to Vega20.
     
  20. alanm

    alanm Ancient Guru

    Messages:
    12,272
    Likes Received:
    4,476
    GPU:
    RTX 4080
    Not the point. Yes, Turing's die sizes are larger because they do extra functions that the others do not have (RT). But the issue is when you are saying...
    The 2080(S) is NOT faster because it is "larger", as is somehow implied in your statement. It is larger because of the added RT/Tensor cores. It's faster due to GDDR6 and more efficient arch improvements, despite fewer CUDA cores, ROPs and TMUs. That's why you can't compare die sizes and imply that's where its performance comes from vs the 1080Ti. This applies to the other cards as well; it all gets muddled when you add Turing's die sizes into the mix, unless it's just for FYI purposes.
     
    Maddness likes this.
