AMD Ryzen 7 5800X3D - 1080p and 720p gaming gets tested

Discussion in 'Frontpage news' started by Hilbert Hagedoorn, Apr 12, 2022.

  1. 0blivious

    0blivious Ancient Guru

    Messages:
    3,301
    Likes Received:
    824
    GPU:
    7800 XT / 5700 XT
Hard to say what people are arguing about, why they are bothering, or whom they are complaining to.
     
    Dazz likes this.
  2. Undying

    Undying Ancient Guru

    Messages:
    25,480
    Likes Received:
    12,886
    GPU:
    XFX RX6800XT 16GB
L3/L4... it's 3D V-Cache now.
     
  3. Agent-A01

    Agent-A01 Ancient Guru

    Messages:
    11,640
    Likes Received:
    1,143
    GPU:
    4090 FE H20
Technically you are correct, but it's 64MB stacked onto 32MB of cache. There are some latency and bandwidth penalties, just like L4 (however minor they may be).
Since the 5775C used a similar idea, I just consider it an L4 for old times' sake.
     
  4. TheDeeGee

    TheDeeGee Ancient Guru

    Messages:
    9,676
    Likes Received:
    3,455
    GPU:
    NVIDIA RTX 4070 Ti
    But why does everyone want a 12900K for gaming? Everyone seems blinded by K models and 900 numbers...

    Get a 12700 non-K and be happy for 8 years.
     
    Dazz, Krizby, fredgml7 and 1 other person like this.

  5. Undying

    Undying Ancient Guru

    Messages:
    25,480
    Likes Received:
    12,886
    GPU:
    XFX RX6800XT 16GB
No CPU lasts for 8 years, except maybe the 2600K.
     
  6. mohiuddin

    mohiuddin Maha Guru

    Messages:
    1,007
    Likes Received:
    206
    GPU:
    GTX670 4gb ll RX480 8gb
    Nice conversation guys. Keep it up.
    A healthy argument is what we need.
     
  7. Krizby

    Krizby Ancient Guru

    Messages:
    3,104
    Likes Received:
    1,789
    GPU:
    Asus RTX 4090 TUF
Well, an 8700K @ 5GHz would probably last 8 years without problems for 1440p/4K gamers :D
     
    Airbud, Undying and ~AngusHades~ like this.
  8. Horus-Anhur

    Horus-Anhur Ancient Guru

    Messages:
    8,731
    Likes Received:
    10,818
    GPU:
    RX 6800 XT
Increasing cache size always brings a latency penalty, but not as big as creating another cache level.
This L3 cache is just an expansion like any other; it is simply stacked vertically instead of laid out horizontally.
With this 3D cache the latency is probably going to increase by about 15%, meaning going from 46 cycles to 52-54 cycles.
Now compare that to the L4 cache on the 5775C, which had a latency of more than 150 cycles.
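A rough back-of-the-envelope sketch of why that trade-off works out, in Python; the hit rates and DRAM latency below are made-up illustrative numbers, not measurements:

```python
# Average memory access time (AMAT) for an access that reaches L3.
# All hit rates and the DRAM latency are assumed for illustration.

def amat(l3_hit_rate, l3_cycles, dram_cycles):
    return l3_hit_rate * l3_cycles + (1 - l3_hit_rate) * dram_cycles

# Plain 5800X: 32MB L3 at ~46 cycles, assumed 85% hit rate, DRAM ~300 cycles.
base = amat(0.85, 46, 300)      # ~84 cycles on average

# 5800X3D: 96MB L3 at ~53 cycles (~15% slower), assumed 95% hit rate.
vcache = amat(0.95, 53, 300)    # ~65 cycles on average

print(f"32MB L3: {base:.1f} cycles, 96MB L3: {vcache:.1f} cycles")
```

Even with the slower cache, the extra hits it catches more than pay for the added latency.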
     
    Last edited: Apr 13, 2022
    tunejunky likes this.
  9. tunejunky

    tunejunky Ancient Guru

    Messages:
    4,460
    Likes Received:
    3,085
    GPU:
    7900xtx/7900xt
well, we are gamers, aren't we?

a couple of people hit the nail on the head - this is an inexpensive halo product designed for the target audience.
and this is AMD laying out an upgrade path for R1 and R2 users, who are very numerous.

the vast majority of gamers still game @ 1080p, where CPUs are bound the most; the next largest slice of the pie is 1440p, where the CPU is still impacted and the cache does wonders.

the number of folks gaming @ 4K is vanishingly small by comparison, despite next-gen consoles and current-gen GPUs.
this would be different if it weren't for mining, as the number of 4K-capable GPUs shipped would've definitely boosted the 4K gaming numbers (but it would still be the minority).

AMD has done exactly what it wanted to do before the 5800X3D has even hit the stores... they've changed the conversation and gained marketing points at the resolutions people actually game at, with an affordable alternative.
hell, a lot of R2 users already have fast RAM because of IF, so they can just plug n' play and get a massive upgrade w/o changing anything else.

    so we can split hairs or divide into camps, but at the end of the day "mission accomplished" AMD.

the next battle is the real deal, and AL doesn't stand a chance.
but then AL doesn't have a place in the next battle (lol, click-baited you); it's between Zen 4 and Meteor Lake ;)
     
    Last edited: Apr 13, 2022
    Embra, Valken and Undying like this.
  10. Krizby

    Krizby Ancient Guru

    Messages:
    3,104
    Likes Received:
    1,789
    GPU:
    Asus RTX 4090 TUF
    Here's the "cache", 3D V-cache doesn't boost FPS for all games. From TPU review 5800X3D show exceptional gain vs 5800X in just 3 games out of 10. You will need to double check if 5800X3D actually is the fastest CPU for the game that you are playing the most before buying it.

    I don't think anyone care about sky high FPS in single player game, if 5800X3D were confirmed to be the fastest CPU for Warzone, Apex Legends, PUBG, Fortnite, etc...then the 5800X3D is worth its price for competitive gamers.
     
    mohiuddin and Valken like this.

  11. schmidtbag

    schmidtbag Ancient Guru

    Messages:
    8,020
    Likes Received:
    4,398
    GPU:
    Asrock 7700XT
    I do, but it's part of the CPU. It's not free performance and it isn't a resounding win in all tests. Therefore, the CPU should be treated equally with all others it is tested against, even if, in some cases, it can punch above its weight.
    I understand all of that. I'm not disagreeing with anything you said there. But think of it like this: when Intel first released AVX2, AMD didn't have anything in response. In some tests, AVX2 blasted ahead in benchmarks; more than an L4 cache would do. You could overclock a non-AVX2 CPU all you want, give it better RAM, it's still not going to compete against an optimized instruction. But, it's part of the CPU, and Intel deserves credit for implementing it. Intel deserved credit for the 5775C, because it was an innovation. Same goes for AMD.
    It doesn't matter what the manufacturer does to improve speed, whether that's adding more cores, bumping clock speed, shrinking transistors, adding instructions, or increasing cache size: the CPU was built to work that way. V-cache or EDRAM might just be "very fast RAM" but it's part of the chip and should be tested as such. Again, I agree that it isn't fair to test [both CPUs] at 3200MHz, because that does put Intel at an unrealistic disadvantage.
    Well, I at least provided a source showing how much (or rather, little) of a performance increase adding more Hz does in a synthetic (unrealistic) workload. So surely, I'm not so clueless. You can cherry pick results all you want but that isn't a reason to move goal posts.
    The inverse is also true and more what I was getting at: it doesn't matter what it's good at, it isn't a gaming CPU. That doesn't mean it isn't good at gaming and that doesn't mean you shouldn't buy one for the sake of gaming, but, it's still not a gaming CPU. The 12700K pushed to the same clock speeds is just as good in games but cheaper.
Yes there is, because what if other, non-gaming tests were included? They too should be tested using the same specs. Benchmarks should be as consistent as possible across all tests and all participants.
Because cache is just "very fast memory" and it has more of it; therefore, according to you, Intel should get more speed. L3 is not irrelevant.
Or you, apparently, seeing as people have in fact made such pairings if you look at reviews from stores. I'm not just making this stuff up.
This is exactly my point though: maybe you didn't have success at 4000MHz+, but others have. Sometimes it's just bad luck, but because you can get such different results, it doesn't make sense to benchmark hardware that is pushed to the limits, especially if only one is tested that way.
    I agree, except both CPUs do support faster speeds.
    I agree with this too, but I assure you even the 5800X3D will have instances of bottleneck from 3200 DDR4. Again: 64MB will fill up fast.
    No, it's not. Intel should be tested with 4133+ because that gives a bigger picture of how much potential it has, but it should also be tested at whatever the other chips are tested at.
Not everyone wants/needs to buy more expensive RAM. Not all tests will see a significant uplift from just 500MHz more. A lot of people care about relative performance rather than peak performance, and if you're giving an advantage to only one CPU, you can't realistically measure that.
    And the fact they're asking is because traditionally, the standard was to keep all tested platforms as equal as possible. It isn't scientific to give a handicap. If HU is smart, they will test 3800MHz, 4133+, and DDR5.
     
    Last edited: Apr 13, 2022
    mohiuddin likes this.
  12. Why_Me

    Why_Me Master Guru

    Messages:
    204
    Likes Received:
    69
    GPU:
    MSI OC 8800GT 512
    mohiuddin likes this.
  13. Clouseau

    Clouseau Ancient Guru

    Messages:
    2,844
    Likes Received:
    514
    GPU:
    ZOTAC AMP RTX 3070
The problem is with definitions. What is officially supported, and why? What makes both companies state that 3200MHz is the fastest officially supported speed? The answer is simple: 3200 is achievable no matter how bad a given unit's silicon is compared to other units of the same exact product. Where does the silicon lottery begin? Both companies have stated that the lottery begins at any speed higher than 3200. That is why no CPUs are tested with any degree of confidence beyond 3200. Anyone can argue till the cows come home about how unfairly Intel's offering is handicapped. It makes no difference as long as Intel itself does not change its official stance that 3200 is the fastest officially supported speed. For the testing to make sense and remain apples to apples, one tests the CPUs at their fastest officially supported speeds, which currently for both Intel and AMD is 3200. If Intel were to change its stance and say that the fastest supported RAM speed was DDR6 8000, then that is what that chip would get tested at, against the fastest officially supported speed AMD says its offering can do.

The ongoing argument highlights the need for results showing what enthusiast-grade performance can be expected from a given CPU. Here is the caveat for such tests: the quality of the silicon used in the tests needs to be stated relative to the whole population of the exact same CPU model. Without that declaration, all those test results are singular in nature and not indicative of that model of CPU as a whole. There are different ways to convey the quality of the silicon, for example by saying that out of 3, 5, or however many CPUs of the exact same model tested, all or half achieved this top speed. It has to be remembered that day-one release results give no clue how the chip or chips tested compare to the entire population of that particular model. That necessitates compiling results from several sites, so that it can be shown that a particular CPU is anywhere from very unlikely to extremely likely to perform at a particular level. And to be able to compare the results of several sites, the definitions applied to the testing parameters need to be 100% exactly the same. Minus all that, the only meaningful day-one tests we are left with use the top officially supported RAM speeds as stated by Intel and AMD.
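A minimal sketch of the kind of cross-site pooling described above, in Python; the site names and clock results are entirely invented for illustration:

```python
# Pool per-site samples of the same CPU model and report how common
# a given top speed is. All sample data below is invented.

samples = {                      # site -> max stable clocks (MHz) per unit
    "site_a": [4550, 4500, 4600],
    "site_b": [4500, 4650],
    "site_c": [4550],
}

target = 4550
pooled = [mhz for results in samples.values() for mhz in results]
hits = sum(mhz >= target for mhz in pooled)
print(f"{hits}/{len(pooled)} tested units reached {target} MHz "
      f"({100 * hits / len(pooled):.0f}% of the pooled sample)")
```

The more sites contribute samples under identical test definitions, the closer that percentage gets to the real population odds.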
     
    fredgml7 and schmidtbag like this.
  14. user1

    user1 Ancient Guru

    Messages:
    2,785
    Likes Received:
    1,305
    GPU:
    Mi25/IGP
I suspect it has more to do with the motherboard. On most X470/B550 boards, and on any board that offers an external clockgen, overclocking via baseclock probably won't be much of an issue (provided it isn't intentionally blocked). Also, many boards offer VRM spoofing to report lower power draw, which effectively does the same thing as raising the power limits.
That said, there isn't much headroom to play with since the max safe voltage is 1.35V, so I doubt there is much to be gained from doing so other than maybe slightly higher clocks, <100MHz most likely. Probably only useful for extreme overclocking.
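Quick arithmetic on why baseclock overclocking only buys double-digit MHz here; the multiplier and BCLK values are assumed purely for illustration:

```python
# Effective core clock = BCLK x multiplier. With the multiplier fixed,
# a small BCLK bump is the only lever; values below are illustrative.

multiplier = 45          # assumed boost multiplier (4.5 GHz at stock BCLK)
for bclk in (100.0, 102.0):
    print(f"BCLK {bclk:.1f} MHz -> {bclk * multiplier:.0f} MHz core clock")
# 100 -> 4500 MHz; 102 -> 4590 MHz: under a 100 MHz gain, as noted above.
```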
     
    Why_Me likes this.
  15. Agent-A01

    Agent-A01 Ancient Guru

    Messages:
    11,640
    Likes Received:
    1,143
    GPU:
    4090 FE H20
I'm aware that extra cache doesn't always add a win in all tests. I provided one example of a game where it doesn't help.
There's no need to force CPUs to be equal, because they are very different. AMD added extra cache to accelerate performance because the memory IOPS on its platform is weaker than Alder Lake's.

Why are we talking about AVX2 or other instructions? The only thing pertinent to the discussion is gaming performance. AVX2, AVX-512, etc. have no purpose in games.
    Since AMD purposely added a big cache to accelerate gaming performance that is the only thing relevant to the discussion here.

    An old source? That's irrelevant data now.

    I've provided my own source from hardware I own that shows double the performance in the same test that your source has.
    Me showing memory performance of my 12900K is not cherry picking, it's a direct comparison to what you posted.

Actually, I've owned a 12700K, and there were several games where the 12900K was measurably faster at the same clock speeds and with E-cores disabled, because it has 20% more cache.
I know because I tested them both in multiple game benchmarks. HU also showed similar results in games in their 12900K vs 12700K vs 12600K benchmark.

No there is not. The only thing I and others are discussing is the gaming benchmarks in the OP.
I'm not here to talk about Blender or 7-Zip performance. Only gaming.

    I don't understand what you are getting at.

L3 is relevant. When there is not enough L3 cache, memory IOPS becomes very relevant to reducing the performance penalties.
That's why AMD tripled its cache: to reduce the penalties in memory-bound applications (games).

    This is another instance where you don't know what you're talking about and are just spitting out what you've read.

Do you not understand how fclk ratios work? If you want to achieve speeds higher than 3600/3800MT/s, then you must drop fclk to 1/2 the memory clock.
That incurs huge latency penalties. The memory and Infinity Fabric must be at a 1:1 ratio for good performance.

The 1:1 ratio is limited to about 3800MT/s on Zen 3, and apparently TPU's 5800X3D review sample can only do 3600MT/s.
TPU specifically said they tested at 3600MT/s because going to a 1:2 ratio incurred too big of a performance penalty.

Just because you've read that others have achieved higher speeds is meaningless. Those users lack the knowledge and experience to know why it's important to keep a 1:1 ratio.

So again, I have personal experience with this. It is no different on Alder Lake, where CPUs default to a 1:2 ratio with DDR4 speeds > 3600MT/s.
That is a huge penalty to memory performance, and that is why we manually set Gear 1 mode (1:1 ratio).
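A minimal Python sketch of that ratio logic; the 1900MHz fclk ceiling is an assumed typical Zen 3 limit, not a spec:

```python
# For DDR4, the data rate (MT/s) is 2x the memory clock. Zen 3 runs the
# Infinity Fabric (fclk) 1:1 with the memory clock until fclk tops out
# (~1800-1900 MHz on typical samples, assumed here), then drops to 1:2.

MAX_FCLK_MHZ = 1900  # assumed ceiling for a typical Zen 3 sample

def fabric_mode(ddr_mts):
    memclk = ddr_mts / 2
    if memclk <= MAX_FCLK_MHZ:
        return memclk, "1:1 (fast)"
    return memclk / 2, "1:2 (big latency penalty)"

for mts in (3600, 3800, 4400):
    fclk, ratio = fabric_mode(mts)
    print(f"DDR4-{mts}: memclk {mts // 2} MHz, fclk {fclk:.0f} MHz, {ratio}")
```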

Well, as mentioned before, the 5800X3D only supports 3600MT/s, whereas Alder Lake supports 4133+ DDR4 or 7000+ DDR5.

In certain applications, yes, 3200 DDR4 will be a bottleneck. But in games, that cache is designed to reduce those penalties. 96MB of L3 cache is a lot.
For games that are not memory bound, it will make little difference how big the cache is or how fast the memory is.

But in games that are, and there are a lot of them, extra cache is a big deal.
It doesn't matter if it "gets filled fast"; the performance gain will still be massive versus having less cache, all else being equal.

I'm reading it as you saying that once the cache is full, the performance gains will be erased, which is not true.

Yes, that would be the correct way to show performance: scaling at different frequencies, not one set frequency that both chips happen to be capable of because all things must be "equal".

I mentioned earlier that an $80 kit achieved 4133 CL16. Price is irrelevant in this discussion. Fast memory kits (DDR4-4000+) aren't expensive and are readily available.
And like I said, anyone buying a 12900K isn't pairing it with extremely cheap/slow memory.

    Yes, that is exactly what they should do. Not the traditional standard where benchmarks are locked to one frequency.

    So to end this, I think we can agree that the best solution is to provide multiple benchmarks showing the scaling of the 12900K at different frequencies.
    This CPU supports fast DDR4 and DDR5 so it's important to show how it behaves.

Reviews like the OP, which show only one frequency and bench the chips against each other, only give users false information about true performance.

The 5800X3D should be reviewed with its highest capable frequency (3600), and the 12900K should be tested with 3600 DDR4, 4133+ DDR4, an average DDR5 kit (6000), and a fast DDR5 kit of 7000+.

Lastly, my gripe with reviews is that they do not fully understand the specifics of Gear 1 vs Gear 2, Infinity Fabric ratios, etc.
For example, in a game that is bottlenecked by memory performance, going from Gear 1 to Gear 2 at, say, 4133MT/s causes a huge performance loss.

Most reviewers don't understand anything about any of that and don't even list whether the CPU is running Gear 1 or 2, when it's very important.
Unfortunately, HU is one of those reviewers, and I've complained about their testing practices. But that's another issue in itself.


Intel or AMD cannot change the "official" maximum speed of 3200 for DDR4.
Why? Because they do not set memory standards. That is entirely up to JEDEC, the association that sets those standardized specs.

DDR4 will forever be stuck at a standardized top speed of 3200MT/s at specific timings.

These specifications ensure all manufacturers can meet at minimum those speeds and timings. Intel has been able to 'unofficially' support speeds way past that for a long time.

So no, Intel cannot set their 'official' speed to 8000, because that speed does not exist in the JEDEC standards.
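For scale, a quick Python sketch of what those data rates mean as theoretical peak bandwidth; the standard 64-bit DDR4 channel width is used, with dual channel assumed:

```python
# Theoretical peak bandwidth: data rate (MT/s) x 8 bytes per transfer
# per 64-bit channel, times the channel count. Dual channel assumed.

def peak_gbs(mts, channels=2):
    return mts * 1e6 * 8 * channels / 1e9

print(f"DDR4-3200: {peak_gbs(3200):.1f} GB/s")  # 51.2 GB/s
print(f"DDR4-4133: {peak_gbs(4133):.1f} GB/s")  # ~66.1 GB/s
```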

Yes, silicon varies per unit.
That's why it's important to review more than one sample to get a real representation of performance.
der8auer reviewed something like five 12900Ks, and the differences between them were bigger than you'd expect in metrics like power consumption and clock-scaling behavior.
     
    Last edited: Apr 13, 2022

  16. Neo Cyrus

    Neo Cyrus Ancient Guru

    Messages:
    10,793
    Likes Received:
    1,396
    GPU:
    黃仁勳 stole my 4090
    Well, considering I game at 1440p and use a 3080, this chart is the one that matters to me:
[image: 1440p gaming benchmark chart]
    Guess I won't be getting an InteLMAO CPU anytime soon.
     
    HARDRESET likes this.
  17. Aura89

    Aura89 Ancient Guru

    Messages:
    8,413
    Likes Received:
    1,483
    GPU:
    -
You know what I want to know and see a dedicated review about?

This processor, along with many others, tested in VR games. Specifically VR games.
     
    Maddness likes this.
  18. Dazz

    Dazz Maha Guru

    Messages:
    1,010
    Likes Received:
    131
    GPU:
    ASUS STRIX RTX 2080
Well, I am getting a 5800X3D. I have a 3900X, so 20% more performance is a decent jump, although I am more interested in the 0.1% lows. I rarely get to push the 3900X to 100% with my everyday tasks, and I would rather have a single CCD than the 3900X's four CCXs, to eliminate cross-talk. Sure, I could wait for Zen 4 or something, but that means a new board and new RAM, and the RAM they are using is CL36 6000MT/s, which costs as much as the CPU alone! Yet bizarrely they are using really relaxed CL18-20 3600MT/s RAM on the Ryzen. My Samsung B-die 3200CL14 sticks are good for 3800MT/s CL14. Can't help but feel there is some bias in TechPowerUp's review in using ultra-high-end DDR5 versus bargain-basement DDR4 for the Ryzen, and even then it's just on par...
     
  19. Neo Cyrus

    Neo Cyrus Ancient Guru

    Messages:
    10,793
    Likes Received:
    1,396
    GPU:
    黃仁勳 stole my 4090
It's definitely an absurd comparison considering no one is buying DDR5 Alder Lake systems. And last I checked, non-trash DDR5 was $900 CAD for 2x16GB.
     
  20. HARDRESET

    HARDRESET Master Guru

    Messages:
    891
    Likes Received:
    417
    GPU:
    4090 ZOTAEA /1080Ti
The ASUS Dark Hero's DOCS (Dynamic OC Switcher) would be interesting to test with. In theory DOCS would work wonders: an easy 1.3V at 4650MHz for MT loads, while boost voltage and boost performance would not be touched. That's how it works on my system with DOCS.
     
