AMD Ryzen 7 5800X3D - 1080p and 720p gaming gets tested

Discussion in 'Frontpage news' started by Hilbert Hagedoorn, Apr 12, 2022.

  1. schmidtbag

    schmidtbag Ancient Guru

    Messages:
    7,955
    Likes Received:
    4,336
    GPU:
    HIS R9 290
While there's nothing wrong with seeing how far you can push a product, it's a rather meaningless benchmark, because there are so many variables involved that can dramatically change your results. There's no longer an apples-to-apples comparison once you have a wide variety of finely tuned parts that may not be of the same silicon quality. With the tuning you got out of your hardware, there are people who will get much better and much worse results with either of the CPUs you have, so there's no takeaway from such a test other than "hmm, interesting". Your results don't prove which product is better; they just prove which of your particular samples is better.

    The kind of overclocking Hilbert does is more meaningful - he OCs the hardware to levels that anyone can achieve, thereby proving how much overhead the products reliably have, which is useful to know. So if he can achieve a 20% overclock with minimal tweaking, that shows a product has a lot of potential. If he gets a 5% overclock with an hour of tweaking, that shows the product is already near its limits.

    Benchmarks are meant to help people know whether a product works as advertised and how it compares to the competition under normal conditions. Once you start changing variables to suit one product over another, it's not a benchmark anymore; it's just a test.
     
    Airbud, carnivore, rflair and 6 others like this.
  2. nizzen

    nizzen Ancient Guru

    Messages:
    2,414
    Likes Received:
    1,149
    GPU:
    3x3090/3060ti/2080t
    That's why you can't use the same memory on different platforms: they are different. A 10900K could do DDR4-4800 memory, but a Ryzen 5900X tops out around DDR4-3800. Using 3600 nerfs the 10900K far more than it does the Ryzen CPU, if you see where I'm going :)

    Using slow memory on Alder Lake isn't the way to show how it competes when the other CPU can't use faster memory.
     
    mohiuddin likes this.
  3. tunejunky

    tunejunky Ancient Guru

    Messages:
    4,291
    Likes Received:
    2,936
    GPU:
    7900xtx/7900xt
    OK, now that we've cleared the deck with facts, let's deal with the coming marketing blitz.

    In marketing, facts are your friends, but not all facts are treated as equal - which of course is a fallacy.
    But fallacious thinking and hyperbolic speech aren't illegal marketing in consumer electronics (medicine is a whole different deal).

    So we see numbers used, because numbers seem definitive, and gaming is particularly bad in that regard - far worse than any other part of the computer industry. On a daily basis, different brands are selling the products of the same company, and whether it's AMD or Nvidia designing the GPU, each brand would have you think its model is better than the equivalent; some models are even segmented within the same brand by different coolers and so on.

    So, with that long wind-up and with the evidence we've heard or read, it seems the coming blitz will be "retake the throne" or some such. Just as Intel did with Alder Lake, AMD will do with Zen 4, and Intel will land somewhere in between with Meteor Lake (but shooting for the Moon, as it should be).

    Just as we've seen with Alder Lake, I predict all serious AMD gamers will upgrade - or at least that's what AMD hopes. But we'll hear about it from now until Zen 4.
     
  4. Agent-A01

    Agent-A01 Ancient Guru

    Messages:
    11,628
    Likes Received:
    1,119
    GPU:
    4090 FE H20
    The review is pointless. There is no "apples to apples" because the two CPUs are completely different.

    The big stacked L3 cache is designed to increase performance in memory-bound situations; the idea is to reduce the need for a fast memory subsystem.
    3200 CL14 is fine for such a CPU, because faster memory won't make a big difference when most accesses hit cache instead of RAM.
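    For illustration, here's a rough average-memory-access-time (AMAT) sketch of that argument. Every number below is an assumption picked for the example, not a measured figure for either CPU:

    ```python
    # Rough AMAT sketch: average latency = hit_rate * cache latency
    # + miss_rate * DRAM latency. All numbers are illustrative assumptions.
    def amat(hit_rate, cache_ns, dram_ns):
        return hit_rate * cache_ns + (1 - hit_rate) * dram_ns

    L3_NS = 12          # assumed L3 hit latency
    DRAM_FAST_NS = 60   # assumed tuned-DRAM latency
    DRAM_SLOW_NS = 80   # assumed stock-DRAM latency

    # Smaller cache: more misses, so DRAM speed matters a lot.
    print(amat(0.70, L3_NS, DRAM_SLOW_NS))  # 32.4 ns
    print(amat(0.70, L3_NS, DRAM_FAST_NS))  # 26.4 ns (~19% better)

    # Tripled cache: most accesses never reach DRAM, so DRAM speed matters less.
    print(amat(0.90, L3_NS, DRAM_SLOW_NS))  # 18.8 ns
    print(amat(0.90, L3_NS, DRAM_FAST_NS))  # 16.8 ns (~11% better)
    ```

    The higher the hit rate, the smaller the DRAM term; that's the whole design bet behind the stacked cache.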

    On the flip side, 3200 CL14 is extremely unrealistic for a 12900K; hardly anyone who bought a 12900K is using such a slow memory kit. I had 3200 CL14 DDR4 on X99, what, 8 years ago?

    3200 CL14 is a big handicap for a CPU that needs fast memory.

    The correct way to benchmark a 12900K vs the 5800X3D is to pair each with the fastest memory it supports.
    The 12900K can have two setups: DDR4-4133 CL15 (easily achieved) and a DDR5 setup around 6800 CL32.
    The 5800X3D can have the fastest DDR4 setup (probably around 3800).

    Those setups are more realistic for what a buyer would actually use.
    Again, there's no need to artificially nerf the 12900K just because the two CPUs are completely different. One has a big L3 cache; by that logic, why not disable the extra cache?

    That's like comparing two cars' quarter-mile times where one is AWD and the other is RWD: let's disable AWD to make it "fair". Right.
     
    Airbud, yasamoka, mohiuddin and 2 others like this.

  5. MonstroMart

    MonstroMart Maha Guru

    Messages:
    1,397
    Likes Received:
    878
    GPU:
    RX 6800 Red Dragon
    DDR4-4133 CL15 is not easily achieved, not outside of the USA anyway. I live in Canada and had to pay a fairly decent amount of money for my 4133 CL19 kit. If I fine-tuned it, maybe I could achieve a better CL, but CL15? I highly doubt it. I don't personally know anybody who runs DDR4-4133 at CL15.

    Testing with the most popular DDR4 and DDR5 kits (among enthusiast DIY gamers) would be more reasonable. Maybe add two DDR5 kits: the most popular one and the best money can buy.
     
    Last edited: Apr 12, 2022
    mohiuddin likes this.
  6. cucaulay malkin

    cucaulay malkin Ancient Guru

    Messages:
    9,236
    Likes Received:
    5,208
    GPU:
    AD102/Navi21
    Really?
    Because they're cheap here.
    It depends on the maker, really, but my Viper Steel kit cost about 96 EUR in 2020.
     
    tunejunky and nizzen like this.
  7. MonstroMart

    MonstroMart Maha Guru

    Messages:
    1,397
    Likes Received:
    878
    GPU:
    RX 6800 Red Dragon
    Paid around $150 for 16GB of Viper 4133 CL19 DDR4 in 2021 (it was less expensive than lower-CL "slower" memory). It's currently selling at $137. I did a quick search and was unable to find a 4133 kit at CL15 in Canada. I found one CL17 kit at over $300; I guess you could fine-tune that one to CL15. But I would not consider over $300 for 16GB as easy to achieve; for most people it would be too expensive.
     
  8. cucaulay malkin

    cucaulay malkin Ancient Guru

    Messages:
    9,236
    Likes Received:
    5,208
    GPU:
    AD102/Navi21
    Yeah, I remember a time last year when memory was super expensive.
     
  9. Agent-A01

    Agent-A01 Ancient Guru

    Messages:
    11,628
    Likes Received:
    1,119
    GPU:
    4090 FE H20
    4133 CL15 is what good Samsung B-die kits can do. Kits with those timings are usually fairly expensive versus something like 4000 CL17 (which is very affordable).

    A month ago I bought some cheap 3200 CL14 G.Skill Flare sticks (about 80 USD for 2x8GB) and they did 4133 16-16-16-32.
    That speed and those timings are in line with what a 12900K owner would buy.

    Anyway, the point is that 3200 CL14 on an enthusiast CPU like the 12900K is not realistic at all. Nobody pairs those two together.
     
    tunejunky likes this.
  10. blurp33

    blurp33 Master Guru

    Messages:
    215
    Likes Received:
    38
    GPU:
    RTX 4090
    HD64G, Valken and mohiuddin like this.

  11. kanenas

    kanenas Master Guru

    Messages:
    512
    Likes Received:
    385
    GPU:
    6900xt,7800xt.
  12. cucaulay malkin

    cucaulay malkin Ancient Guru

    Messages:
    9,236
    Likes Received:
    5,208
    GPU:
    AD102/Navi21
    Last edited: Apr 12, 2022
  13. schmidtbag

    schmidtbag Ancient Guru

    Messages:
    7,955
    Likes Received:
    4,336
    GPU:
    HIS R9 290
    The way two CPUs handle identical memory is an important part of the benchmark. A complete benchmark would show results for 2 or 3 different RAM configurations (that both CPUs go through), because that would inform potential buyers whether one CPU can squeeze out more performance - or, from another perspective, whether a CPU is starved for bandwidth. See the sketch below.
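    As a minimal sketch of what that matrix looks like - the CPU and kit names are from this thread, while run_game_benchmark() is a hypothetical stand-in for the actual test pass:

    ```python
    # Full-matrix benchmark sketch: both CPUs run the same set of RAM
    # configurations, so per-platform memory scaling stays visible.
    CPUS = ["5800X3D", "12900K"]
    RAM_CONFIGS = ["DDR4-3200 CL14", "DDR4-3600 CL16", "DDR4-3800 CL16"]

    def run_game_benchmark(cpu: str, ram: str) -> float:
        return 0.0  # placeholder: a real harness would return measured avg FPS

    for cpu in CPUS:
        for ram in RAM_CONFIGS:
            fps = run_game_benchmark(cpu, ram)
            print(f"{cpu:10s} @ {ram}: {fps:6.1f} FPS")
    ```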

    I remember back when Ryzen was first released, its performance was heavily dependent on RAM speed. Back then, people were saying "yeah, but overclock the RAM and then it's faster". Well, overclock the RAM on Kaby Lake and that got faster too, just not in the same proportion.

    I don't care how much a CPU is limited by RAM: a CPU benchmark is not legit unless all parts across all tested platforms are as similar as they can possibly be. If the CPU's potential is held back because you didn't spend an extra premium on fast RAM, that's a design flaw and benchmarks should highlight it. In the case of the 5800X3D, you're paying extra for the V-cache.

    If increasing the RAM speed on only one platform were acceptable, that would be like giving short people steroids in a marathon so they have a greater chance of winning. Sure, they are physically doing better, but it's not at all a fair race. Maybe it sucks that the 2m tall Kenyan is going to be the obvious winner, but it's not fair that the competition gets juiced up yet he can't.


    They may be drastically different CPUs, but they can both do the same things (other than AVX-512...); they just do them in different ways. The point of a benchmark review is so you know which CPU to get. If one CPU does a better job in a workload that matches what you intend to use it for, that's the CPU to get. It doesn't matter how different they are or how much better the other CPU is at other tasks - you get the product for the job that suits your needs and budget.
    I would argue that even benchmarking ARM vs x86 is fair so long as all other specs are equal and that you're using native builds.
    A granny smith apple has a different shape, size, color, shelf life, and taste compared to a red delicious apple. They're still apples, and therefore comparable. One is going to bake better in a pie. One will taste better plain. One doesn't bruise quite so easily. They aren't equal but one isn't necessarily better than the other, it just depends on your taste. But you can't add sugar to a granny smith and say "this is the better tasting apple", just like you can't overclock RAM only on an Intel system and say "this is the faster CPU". That's stupid.
    It's not going to make as big a difference as you think. Go to something more reasonable like 4600 and you're only getting another ~4GB/s, and that's under a synthetic (unrealistic) workload. That's not bad, but it absolutely pales in comparison to the V-cache, which is ostensibly 2TB/s. Even at 1/5 of that speed, the V-cache is going to negate whatever performance improvement Intel gets from a more modern memory speed.
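    For reference, the theoretical peaks behind that comparison are simple arithmetic (assuming standard dual-channel, 64-bit DDR4; real measured copy rates land well below these peaks, which is the point):

    ```python
    # Theoretical peak bandwidth: MT/s x 8 bytes per 64-bit channel x channels.
    def ddr_peak_gbs(mt_per_s: int, channels: int = 2, bus_bytes: int = 8) -> float:
        return mt_per_s * bus_bytes * channels / 1000  # GB/s

    for speed in (3200, 3733, 4600):
        print(f"DDR4-{speed}: {ddr_peak_gbs(speed):5.1f} GB/s theoretical peak")
    # DDR4-3200: 51.2 GB/s, DDR4-4600: 73.6 GB/s -- and the claimed ~2 TB/s
    # of on-die V-cache bandwidth dwarfs either figure.
    ```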

    I don't disagree that benching at 3200 isn't a realistic situation, but no, the correct way to benchmark is to pair each CPU with the highest common denominator. So if the 5800X3D is limited to 3800, then that's what both should be tested at.
    There wouldn't be anything wrong with testing the 12900K with DDR5-6800, but that's not how you run a benchmark. Once the 6800X3D (or whatever it'll be called) gets released, you'd have a more comparable test.
    Do you not see the hypocrisy here? Giving Intel much faster RAM is no different from disabling AWD, and no different from giving steroids to the shorter runners in a marathon. If you're doing a quarter-mile drag race, you keep AWD enabled. It becomes an unfair race once you give only the RWD car NOS and extra-grippy tires.
     
  14. MonstroMart

    MonstroMart Maha Guru

    Messages:
    1,397
    Likes Received:
    878
    GPU:
    RX 6800 Red Dragon
    Nothing until the next Ryzen, which should probably ship around September or early October. The current Ryzen is old, and there's nothing they can do to improve performance beyond releasing a 5600 and other slower CPUs, which won't compete with Intel anymore. While AMD has taken longer than usual to get their next CPUs ready, it's not that uncommon for AMD to be slow. I think the 5800X3D is just to test the waters; they probably never intended to have a full lineup. The market for this CPU is first-gen Zen and Zen+ owners looking for an upgrade who aren't willing to mortgage their house for a DDR5 kit (DDR5 is still stupidly expensive in Canada and many other countries).
     
    tunejunky likes this.
  15. Horus-Anhur

    Horus-Anhur Ancient Guru

    Messages:
    8,578
    Likes Received:
    10,607
    GPU:
    RX 6800 XT
    Seems to be a great CPU for gaming. But for everything else, there are much better solutions, from both AMD and Intel.
     
    tunejunky likes this.

  16. fellix

    fellix Master Guru

    Messages:
    252
    Likes Received:
    87
    GPU:
    MSI RTX 4080
    Looks like the L3 size bump in Zen has a limited/selective impact in both server and consumer workloads, as expected for a non-inclusive cache. Also, the limited size of the DTLB in Zen 3 means it now covers only 1/12th of those 96MB.
    A larger L2 cache would have been more useful, but we'll see how much, if Zen 4's rumored 1MB-per-core L2 is true.
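    That 1/12 figure falls out of simple arithmetic, assuming the commonly cited 2048-entry L2 DTLB for Zen 3 and standard 4 KiB pages:

    ```python
    # TLB reach: entries x page size. Each DTLB entry maps one 4 KiB page.
    DTLB_ENTRIES = 2048
    PAGE_KIB = 4
    L3_MIB = 96  # 5800X3D total L3 with stacked V-Cache

    coverage_mib = DTLB_ENTRIES * PAGE_KIB / 1024   # 8.0 MiB
    print(f"TLB reach: {coverage_mib:.0f} MiB -> 1/{L3_MIB / coverage_mib:.0f} of L3")
    # Larger (2 MiB) pages would extend the reach well past the whole cache.
    ```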
     
  17. Agent-A01

    Agent-A01 Ancient Guru

    Messages:
    11,628
    Likes Received:
    1,119
    GPU:
    4090 FE H20
    That's like saying that at the drag strip, cars should only be benchmarked against others with the exact same tire size... I don't agree with that.

    On DDR4 vs DDR5 you may have an argument, but at minimum that means the 12900K should be paired with the fastest DDR4 memory it supports.

    If for some reason it took AMD 5 years (just pretend) to get DDR5 support and Intel were DDR5-only, would we say the CPUs aren't comparable? No.
    We compare what's available now, not what the future may hold.

    There is no hypocrisy here.

    AMD gave the 5800X3D VERY fast RAM that's built into the CPU. That is the steroids in your analogy.

    The benchmark is nerfing the 12900K. V-Cache makes memory speed less important.

    Short analogy: Intel is the 1000 HP car with RWD; AMD has only 600 HP but is AWD.

    The 12900K is no doubt the faster car, but it's limited in how much power it can put down; the AMD car has less power but can launch harder.

    My point is the 12900K can't put its power down without grippier tires (fast memory). Putting crappy tires on it versus a car that might as well have 6WD (the 5800 with 3D cache = near-infinite traction) is not exactly fair.

    It is as big a difference as I say, because I've owned many generations and tuned memory to the max on each.
    I've thoroughly tested many benchmarks, games included, and there is plenty of evidence that many scenarios are memory-bound.

    One example is Black Ops 3, a game with high draw calls (a memory-bound scenario), where fast memory can increase FPS by 50% versus standard speeds.

    In this example the 5800 with V-Cache would destroy the 12900K at stock JEDEC speeds (3200). But with the 12900K on 4133+ DDR4 or 7000 DDR5, the difference between the two CPUs would flip.

    Also check out my memory speeds, attached below; they make that TPU result look paltry in comparison.
    This is what an enthusiast runs on a 12900K - not a 3200 setup just because AMD can't run memory that fast.


    Anyway, TL;DR: the correct way to benchmark is to give the 5800X3D its best setup (~3800) versus the 12900K with its best DDR4 setup (4133-4266 CL15), plus a separate entry for its best DDR5 setup (~7000 CL32).

    I'm tired of all these pointless reviews that benchmark CPUs with very slow memory.

    The Framechasers YouTube channel is a good example of good game benchmarking. He tunes each platform to the max (max core clocks, max memory speed, and tight primary/secondary/tertiary timings) and then compares them.
    I don't care what a CPU does at stock JEDEC memory speeds; that data is irrelevant to me and other enthusiasts, because we run much faster memory.
     

    Attached Files:

    • c14.png (217 KB)
    tunejunky likes this.
  18. Why_Me

    Why_Me Master Guru

    Messages:
    204
    Likes Received:
    69
    GPU:
    MSI OC 8800GT 512
    I'm curious to see how this $449 CPU stacks up against the $310 12700F at 1440p.
     
  19. TheDeeGee

    TheDeeGee Ancient Guru

    Messages:
    9,624
    Likes Received:
    3,409
    GPU:
    NVIDIA RTX 4070 Ti
    According to the reviews that are up, you can't go wrong with the 12700F, unless you want 20 more FPS on top of your already 200+, which you won't notice.

    It's also hotter than the already near-impossible-to-cool 5800X.

    If you want a quiet, air-cooled gaming PC, then an Intel non-K 65-watt part is the way to go. You'll also have a plug-and-play system and won't have to bother with a BIOS update every 2 weeks while hoping for a stable system.
     
    Airbud, cucaulay malkin and Why_Me like this.
  20. ngoni615

    ngoni615 Active Member

    Messages:
    56
    Likes Received:
    18
    GPU:
    GTX 1080ti 11gb
    I sense a lot of Intel fanboy salt in here. First of all, games do not really require the higher memory bandwidth offered by DDR5, but they do scale with timings.

    Hardware Unboxed did a memory performance scaling test on DDR4 and DDR5 with 12th gen, and the differences in gaming were not that big; meaningful gains only showed up in one or two games that wanted higher bandwidth, and in synthetic applications.

    A properly tuned dual-rank, dual-channel DDR4-3200 CL14 kit outperforms most DDR5 kits in games as well. So saying DDR4 3200 CL14 is unrealistic on a 12900K is just shallow-minded thinking. The latency math below backs this up.
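    A quick first-word CAS latency sketch; the kit speeds here are typical examples picked for illustration, not measurements from any specific review:

    ```python
    # First-word CAS latency in nanoseconds: ns = CL * 2000 / (MT/s).
    # Shows why tuned DDR4 can beat early DDR5 kits in latency-bound games.
    def cas_ns(cl: int, mt_per_s: int) -> float:
        return cl * 2000 / mt_per_s

    print(f"DDR4-3200 CL14: {cas_ns(14, 3200):.2f} ns")  # 8.75 ns
    print(f"DDR5-6000 CL36: {cas_ns(36, 6000):.2f} ns")  # 12.00 ns
    print(f"DDR5-6000 CL22: {cas_ns(22, 6000):.2f} ns")  # ~7.33 ns
    ```

    Until DDR5 timings come down toward that last line, the old kit holds the latency advantage even while losing on bandwidth.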

    I know a lot of people who bought 12900Ks but are still using their DDR4 kits (3600-4133). The reason is price: a good DDR5 kit with relatively decent timings will cost you upwards of US$500, and even if money is no object, US$500+ for a 32GB kit is hard to swallow.

    The results look promising, and I'd wager more games will benefit from a bigger cache than from the higher memory bandwidth of DDR5, at least until timings get better - say around CL22 or so.
     
