
Review: AMD Radeon RX 5700 and 5700 XT

Discussion in 'Frontpage news' started by Hilbert Hagedoorn, Jul 7, 2019.

  1. Stormyandcold

    Stormyandcold Ancient Guru

    Messages:
    5,201
    Likes Received:
    138
    GPU:
    MSI GTX1070 GamingX
    Good product, and the price here in the UK is actually OK.

    It's obvious that the drivers are good but haven't reached their full potential yet. Also, the move to GDDR6 is probably the best thing they've done here.
     
  2. Stormyandcold

    Stormyandcold Ancient Guru

    Messages:
    5,201
    Likes Received:
    138
    GPU:
    MSI GTX1070 GamingX
    The problem is that people in YouTube-land actually think/thought the 16GB of RAM would future-proof this product. Only content creators will be able to justify it; meanwhile, everyone else will move over to the XT and Super cards.
     
  3. Ufozile

    Ufozile New Member

    Messages:
    3
    Likes Received:
    0
    GPU:
    RTX2080
    Last edited: Jul 7, 2019
  4. Fox2232

    Fox2232 Ancient Guru

    Messages:
    9,738
    Likes Received:
    2,198
    GPU:
    5700XT+AW@240Hz
    I don't think heat would be a big issue for a 64 CU card, if the CUs have the same arrangement.
    64 CUs would mean 1.6 times higher performance at the same clock, and would result in a TBP of around 335W at those clocks. But limiting the card to 300W would mostly result in only about a 10% performance loss.
    So some 1.4 times higher performance than the RX 5700 XT, which would be around the 2080 Ti's performance, with some 16B transistors.

    The issue would be with even bigger GPUs, because there RDNA1 would have to throw away all that clock potential just to stay within the 300W limit. This first iteration can go up to 2GHz on good chips; the gaming clock is around 1700~1800MHz to keep to the TDP. But bigger GPUs would be forced down to maybe 1450~1500MHz, which would be a big waste.
    And I think AMD is working hard on optimizing power draw so the next iteration can use a higher portion of its achievable clock.

    It is actually funny, because people with a 5700 who raise the TBP limit (and the other limits) to something like 240W may get stock 5700 XT performance via OC if their chip can clock that high (something like the 7950 vs. 7970).
    But I think it is pointless to even try to OC those cards until all the power limits in the BIOS are changed.

    @Hilbert Hagedoorn: Do those reference cards have a dual-BIOS switch? I have seen an unusual rectangular hole on the side of the PCB, close to the cover, which gives tiny access inside. Maybe there is a switch there. (Wishful thinking.)
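
    For reference, here is the arithmetic above as a minimal Python sketch. Every constant is the post's own estimate (the CU counts, the 300W cap, the ~10% throttling loss), not a measured spec or AMD data:

    ```python
    # Back-of-envelope version of the scaling argument above.
    # All numbers are the post's estimates, not AMD data.
    CU_5700XT = 40          # RX 5700 XT
    CU_BIG = 64             # hypothetical bigger RDNA1 part
    POWER_CAP_W = 300       # assumed board-power ceiling
    CAP_PERF_LOSS = 0.10    # post's guess for the cost of throttling to 300W

    perf_same_clock = CU_BIG / CU_5700XT                 # 1.6x at unchanged clocks
    perf_capped = perf_same_clock * (1 - CAP_PERF_LOSS)  # ~1.44x once power limited

    print(f"{perf_same_clock:.2f}x at the same clocks, "
          f"~{perf_capped:.2f}x when limited to {POWER_CAP_W}W")
    ```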
     

  5. Aura89

    Aura89 Ancient Guru

    Messages:
    7,627
    Likes Received:
    898
    GPU:
    -
    This seems to be something Nvidia does quite often, and it seems to work for them.

    The RTX 2080 has fewer cores but higher frequencies than the 2080 Ti, yet the 2080 Ti performs quite a bit better than the 2080; same with the RTX Titan. Same thing with the 1080 vs. 1080 Ti or Titan X/Xp, etc.

    I'd rather have a much larger GPU with more cores, lower frequencies and overall more performance options than not have them at all.
     
  6. Fox2232

    Fox2232 Ancient Guru

    Messages:
    9,738
    Likes Received:
    2,198
    GPU:
    5700XT+AW@240Hz
    True. But since AMD is going to hit the power limit, a big 20B transistor GPU, for example, could maybe deliver only 1.65 times the performance of the 5700 XT (10B transistors) due to the need to stay within the 300W limit.
    The issue is price then. Twice as big a GPU means much worse yields. The 5700 XT goes for $400; what would a card with a GPU twice the size cost? $1000? $1200?

    Sadly, even if AMD went the chiplet way and managed to make a card with two chiplets, each with the same 20 dual-CUs as the 5700 XT, and therefore potentially a $900 price point, it would still be power limited and therefore stuck around that 1.65x multiplier over the 5700 XT.

    Had the 5700 XT been a 200W TDP card, I would be much more hopeful, because something like 60W of that goes to the memories, VRMs and blower. The GPU itself would then be a lovely 140W, and the clock sacrifice on a bigger/dual chip would be much smaller.
    Maybe AMD will go with HBM2 again to save something like 40W on a 12GB card. But power is a big limiter even for RDNA... for now.
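
    To make that power-budget split concrete, a tiny sketch; the ~60W non-GPU figure (memory, VRMs, blower) and the 300W cap are the post's own assumptions, not specs:

    ```python
    # Power left for the GPU silicon itself under the post's ~60W
    # non-GPU estimate, for the real 225W card and the wished-for 200W one.
    NON_GPU_W = 60      # post's estimate: memory + VRMs + blower
    CAP_W = 300         # assumed board-power ceiling for a bigger card

    for card_tbp in (225, 200):
        gpu_w = card_tbp - NON_GPU_W        # budget for the GPU itself
        doubled_w = 2 * gpu_w               # doubled die / two chiplets at the same clocks
        allowed_w = CAP_W - NON_GPU_W       # what a 300W card leaves for silicon
        shed = 1 - allowed_w / doubled_w    # share of power the big GPU must shed
        print(f"{card_tbp}W card: GPU {gpu_w}W, doubled {doubled_w}W, "
              f"cap allows {allowed_w}W, must shed {shed:.0%}")
    ```

    Under those assumptions the 225W case has to shed roughly a quarter of its power, the 200W case only about 14%, which is why the clock sacrifice on a bigger or dual chip would be so much smaller.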
     
  7. Aura89

    Aura89 Ancient Guru

    Messages:
    7,627
    Likes Received:
    898
    GPU:
    -
    Maybe I'm looking at this wrong, but the 5700 XT is only a 225 watt card. Sure, there's the 50th Anniversary Edition, which is 235 watts, but that still leaves them with 65-75 watts to play with under the 300 watt limit.

    To compare that to Nvidia, since that was my comparison before:

    The RTX 2080 was a 215 watt card; that's only 10 watts away from the 5700 XT, and they managed to increase the die size, add cores, decrease the frequencies, and ship a 2080 Ti at 250 watts.

    Now, looking at this review, it appears the RX 5700 isn't even drawing 235 watts, but rather 204, very close to the 200 watt TDP you wish it had. Granted, that's just this test; there could be variables it doesn't capture, I get it. In the same test, a 2080 draws 230 watts while a 2080 Ti draws 266 watts, both above their rated TDPs.

    Now, from a cost perspective, I agree... ish. I still don't think the RX 5700 cards are that expensive to produce, even on 7nm, but I could be wrong. Either way, a twice-the-size GPU would realistically only be around 355mm² (just for anyone reading along: doubling the 251mm² of the RX 5700 to 502mm² isn't correct, that'd be more like four times larger a die). I'm not sure that'd even be required, and I doubt it'd double the price of the card (not the die, but the whole card, since memory prices, PCB, etc. wouldn't change as long as the amount of memory, among other things, didn't change). I think they could create a large GPU with more cores/CUs, charge $500-600, and bring some really good competition to Nvidia's high end.

    I'm sure there's a reason AMD didn't do it; the high end is in itself a relatively niche market. But I do believe they could have, within reasonable price points, brought out a large GPU to compete.
     
    Fox2232 likes this.
  8. sykozis

    sykozis Ancient Guru

    Messages:
    21,008
    Likes Received:
    658
    GPU:
    MSI RX5700
    When you're financially constrained, like AMD is, you don't go for the smallest market first.... You go for the market with the greatest potential. These are mid-range cards.... That is the market with the greatest potential.
     
    carnivore, Embra and airbud7 like this.
  9. Aura89

    Aura89 Ancient Guru

    Messages:
    7,627
    Likes Received:
    898
    GPU:
    -
    That's pretty much my guess as to why they didn't, hence the whole "it's a relatively niche market" point. I feel like that's what they said they were going to do in the CPU market after Bulldozer, and that didn't work out so well for them. I hope that by the time Intel releases their GPUs, AMD is fighting across the full lineup again.
     
    airbud7 likes this.
  10. Fox2232

    Fox2232 Ancient Guru

    Messages:
    9,738
    Likes Received:
    2,198
    GPU:
    5700XT+AW@240Hz
    @Aura89: An understandable point of view, and mostly one I agree with... except maybe for expecting a much bigger chip to be happy with a 256-bit memory interface. AMD may have improved compression of some data types and removed the need to completely read those compressed data blocks, decompress, alter, recompress and store them, as they can now read and write specific chunks directly. But even that may not be a big enough improvement to provide sufficient bandwidth for a big GPU.

    Actually, I would like to see the memory downclocked and the performance impact measured.
     
    airbud7 and Aura89 like this.

  11. Aura89

    Aura89 Ancient Guru

    Messages:
    7,627
    Likes Received:
    898
    GPU:
    -
    I'll be honest, the whole 256-bit memory interface and the need to increase it completely slipped my mind. I'm still not sure it'd have a huge impact on overall price, though.

    Memory and the bus width have always been one of those areas where I find it difficult to justify the increase if it pushes the cost too much higher, or even to find out what a wider bus actually costs.

    By that I mean: RX 580, 4GB vs. 8GB, most games show little to no difference between them. Not saying you can't find a difference, just not a huge one.

    And since we don't generally get exactly the same GPU configurations with different bus widths, it's fairly difficult to say "an RTX 2080 Ti would have similar performance to an RTX 2080 if it had the same 256-bit bus, even with more cores", or anything in between. At least, I haven't seen that kind of test being done.

    So it's hard to say whether a wider-than-256-bit memory bus would be required to get a decently better-performing GPU, or how much more expensive it would make the card if it did.
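
    For what it's worth, the raw-bandwidth side of the 256-bit question is plain arithmetic. A small sketch below; the 5700 XT row uses its published 14Gbps GDDR6 on a 256-bit bus, while the other rows are purely hypothetical what-ifs, not product rumours:

    ```python
    # Peak memory bandwidth: bus width (bits) / 8 * per-pin data rate (Gbps).
    def peak_bandwidth_gb_s(bus_bits: int, gbps_per_pin: float) -> float:
        return bus_bits / 8 * gbps_per_pin

    configs = [
        ("RX 5700 XT: 256-bit, 14 Gbps GDDR6", 256, 14.0),
        ("hypothetical 384-bit, 14 Gbps",      384, 14.0),
        ("hypothetical 256-bit, 16 Gbps",      256, 16.0),
    ]
    for name, bus, rate in configs:
        print(f"{name}: {peak_bandwidth_gb_s(bus, rate):.0f} GB/s")
    ```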
     
  12. Jumbik

    Jumbik Member

    Messages:
    39
    Likes Received:
    19
    GPU:
    Sapphire Vega 64 N+
    That Metro Exodus performance looks very good. If we see such gains in most modern games, then I'm all for it. Bring it on, AMD. :)
     
  13. yasamoka

    yasamoka Ancient Guru

    Messages:
    4,723
    Likes Received:
    177
    GPU:
    EVGA GTX 1080Ti SC
    That area is already a two-dimensional quantity, which makes 502mm² double 251mm². Those numbers aren't a single linear dimension where doubling the dimension would quadruple the area.
     
  14. Fox2232

    Fox2232 Ancient Guru

    Messages:
    9,738
    Likes Received:
    2,198
    GPU:
    5700XT+AW@240Hz
    Capacity is rarely an issue. But AMD overestimated their own ability to deliver Navi... huge delays.
    Check the original clocks on the RX 480, then the final clocks on the RX 590. Sadly, the memory clock is the same, and the big core clock difference from the 580 to the 590 gives a mediocre improvement at best.
    That card did hit its bandwidth limit. My 580 at 1400MHz with 2250MHz memory and tight timings does better than a 590 at that very nice clock. 2360MHz was the defect-free limit for the memory on my card, though not because of the memory itself; the IMC would hard crash above it. Sadly that only works at runtime; writing it to the vBIOS prevented the card from booting. At 2250MHz with tight timings, the card boots just fine.

    And while I think 12GB of VRAM is not a necessity even for 4K, that capacity comes with bandwidth, which is needed. 2x 4GB HBM2 might do instead. :)
    - - - -
    As for the 2080 Ti, get someone to downclock the memory and test. :D
     
    Aura89 likes this.
  15. Aura89

    Aura89 Ancient Guru

    Messages:
    7,627
    Likes Received:
    898
    GPU:
    -
    Not really sure what you're getting at. Maybe you're talking above my head, but "twice the size" would be defined by area, just the same as a 32 inch monitor is not half the size of a 64 inch monitor, and just like 2160p is not twice the resolution of 1080p, it's quadruple.

    Maybe I'm wrong, I'm no expert here, but I can't see how a GPU developer would pack in double the amount of everything and somehow end up with a GPU that is quadruple the area.
     
    Last edited: Jul 7, 2019

  16. Fox2232

    Fox2232 Ancient Guru

    Messages:
    9,738
    Likes Received:
    2,198
    GPU:
    5700XT+AW@240Hz
    I do agree. But what is the distance from the edge of the chip to the area where it is safe to place transistors? (The wafer has to be cut, and the cut has some thickness.)
    The optimal die shape would be square, to minimize the circumference and therefore the area where transistors are undesirable.
    Take, for example, a 16x16mm square where the edge margin costs 1mm of each dimension (an overblown value): we would have a 256mm² die while the usable area would be 15x15 = 225mm², i.e. 12.1% waste.
    Doubling the die size to 22.6x22.6mm would result in a 511mm² die with a 21.6x21.6 = 466.56mm² usable area, i.e. 8.7% waste.
    The resulting usable area would be 2.0736x larger on a 2x larger die.

    While your line of thinking is correct, the actual die size required would be a bit smaller than 2x.
    - - - -
    On the other hand, the bigger the die, the fewer the cuts on the wafer => the smaller the waste. Sadly, I think none of us here actually has data for either of those points.
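
    The same arithmetic as a quick sketch, using the post's deliberately overblown 1mm-per-dimension edge margin (real scribe lines and keep-out regions are much narrower):

    ```python
    # Die area vs. usable area for a square die that loses `margin_mm`
    # of each dimension to the edge region.
    def die_waste(side_mm: float, margin_mm: float = 1.0):
        total = side_mm ** 2
        usable = (side_mm - margin_mm) ** 2
        return total, usable, 1 - usable / total

    small = die_waste(16.0)   # 256 mm^2 die, 225 mm^2 usable, ~12.1% waste
    big = die_waste(22.6)     # ~511 mm^2 die, ~466.6 mm^2 usable, ~8.7% waste

    for label, (total, usable, waste) in (("16.0mm", small), ("22.6mm", big)):
        print(f"{label}: {total:.1f} mm^2 die, {usable:.2f} mm^2 usable, {waste:.1%} waste")
    print(f"usable area grows {big[1] / small[1]:.4f}x on a {big[0] / small[0]:.2f}x larger die")
    ```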
     
  17. Undying

    Undying Ancient Guru

    Messages:
    11,750
    Likes Received:
    1,477
    GPU:
    Aorus RX580 XTR 8GB
    Vega prices are gonna drop like flies. I was just watching the DF review, and the Vega 64 is consistently slower than even the non-XT 5700. That's amazing.
     
    airbud7 and -Tj- like this.
  18. MonstroMart

    MonstroMart Master Guru

    Messages:
    565
    Likes Received:
    173
    GPU:
    ASUS GTX 1070 Strix
    The 5700 and 5700 XT perform better than I anticipated. I was happy with the Super cards and was ready to buy a 2070 Super, but now I'll wait a bit. The 5700 XT is significantly cheaper, and it looks like it performs very well in modern engines; it actually beat the 2070 Super at 2K in a couple of newer titles like Metro Exodus and BFV. I'll wait for overclocking reviews with aftermarket cards and more stable drivers.

    As usual there are a couple of outliers bringing AMD down here and there, but it seems like it's not as big of a problem as it has been in the last 15 years or so.

    My big concern with RTX is that I'm afraid the current cards (outside of the 2080 Ti) won't be powerful enough when games start to fully utilize RTX. Also, my 1070 is still not working with my FreeSync monitor, and I'm done waiting for Nvidia to support FreeSync, so for me that's definitely a selling point in favor of AMD.
     
  19. vbetts

    vbetts Don Vincenzo Staff Member

    Messages:
    14,679
    Likes Received:
    1,253
    GPU:
    RTX 2070FE
    I am super impressed by the power draw. It also seems that the bad launch-driver trend is not a thing with these cards.
     
    carnivore, Embra, airbud7 and 2 others like this.
  20. yasamoka

    yasamoka Ancient Guru

    Messages:
    4,723
    Likes Received:
    177
    GPU:
    EVGA GTX 1080Ti SC
    32" is a one-dimensional quantity representing the diagonal. Doubling the diagonal at the same aspect ratio implies doubling both width and height, quadrupling area.

    1080p is a one-dimensional quantity representing the number of pixel scanlines. Doubling the number of scanlines at the same aspect ratio implies also doubling the width, or the length of each scanline, quadrupling the pixel count.

    "Twice the size", or in other words twice the area, when you're already referring to a two-dimensional quantity (mm²), is simply double that quantity. In that case, doubling the number really does double the area, while in the former examples, doubling the diagonal or the scanline count quadruples the corresponding two-dimensional quantity (area and resolution, respectively).

    I'm only referring to the mathematical aspect of these numbers, not to how they map to the problem of making chips at specific die sizes.
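
    In numbers, with a 16:9 aspect ratio assumed for the monitor example:

    ```python
    # Doubling a linear measure (diagonal, scanline count) at a fixed aspect
    # ratio quadruples the 2D quantity; doubling the 2D quantity (area in mm^2)
    # is just a factor of two.
    ASPECT = 16 / 9

    def screen_area(diagonal_in: float) -> float:
        height = diagonal_in / (1 + ASPECT ** 2) ** 0.5
        return ASPECT * height * height

    print(screen_area(64) / screen_area(32))   # 4.0  (double the diagonal)
    print((3840 * 2160) / (1920 * 1080))       # 4.0  (double the scanlines)
    print(502 / 251)                           # 2.0  (double the area)
    ```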
     
