Discussion in 'Frontpage news' started by Hilbert Hagedoorn, Apr 12, 2022.
In my opinion, 720p testing shouldn't be a thing, as I said in the past when Intel CPUs were ahead... What's next, 480p to find meaningfully different numbers?
The price difference between the G.Skill 6400 CL32 DDR5 kit and a G.Skill DDR4 4000 CL18 kit is something like 3x; in other words, you could buy the DDR4 4000 32 GB kit, a cheap AM4 motherboard, and a 5800X for the same money as the G.Skill 6400 CL32 DDR5 kit alone.
Get a DDR4 4000 kit, downclock it to AMD's max 1:1 speed, and put the €400 saved in your pocket.
By the way, what did the cheap Dell memory cost? You have mentioned multiple times that it was cheap, but I have never seen the price or part number.
Sheesh... A lot of back and forth in this thread. I don't really care who is the fastest, but more so impressed with the performance lift that comes from the L3 cache alone. Means going forward, we can see big gains from a big cache for certain workloads, vs boosting power/thermals to improve IPC for the same result.
That doesn't really matter. The CPUs themselves, which are being compared, aren't technically comparable. The 12900KF is 200 bucks more than what the 5800X3D is expected to cost. Since this is a comparison that totally ignores the price, I said the memory shouldn't be comparable either; it should be the fastest model reasonably available. Of course the total memory size should be the same so that the games don't behave differently.
Thing is, people test at such low resolutions to test the CPUs, then crank up the details, where any significant difference diminishes.
I do know of the problem, but I'll go out on a limb here and say that it was exacerbated a lot by low-IQ Reddit users. Five of my friends have 5000-series CPUs and do not have that problem; granted, they just update their BIOS, click XMP, and game. I won't defend AMD for not being able to solve or know about this problem with their products, but I'm old enough to know that every time companies reinvent the wheel there are going to be teething problems. You either haven't bought as much hardware as I have or are being deliberately ignorant. Your 11900K is the same Fing ring-bus CPU that Intel has been refining for over a decade, and which allowed them to Fing rip customers off.
I also want to add that from the original 2600K, all Intel did was improve the clocks, add USB ports, and remove or add pins on the socket so you'd rebuy the same CPU every two years. I wouldn't expect you to have any problems with an Intel CPU from the 2600K through the 11 series, as it's all the same with more cores.
Been using an R5 2600 like forever... no hardware-related issues, or even any major software issues.
Completely agree with this. A lot of game testing in general revolves around resolutions no one is playing at given the price of the hardware being tested and games that have 1% lows over 120 FPS even on midrange hardware.
"This $4000 systems plays game X at 547 FPS at 720p." <- wow, such critical information.
If a CPU/GPU is top of the line, 1440p and 4K are really all that matters. Games that have 1% lows above 120 FPS on 3 year old midrange hardware aren't even worth testing.
I think my 3733 MHz CL15 kit is plenty fast, with a tight 35-36 ns latency, for years to come.
But, but that's why I love this place!...(now where's my coffee?)
We need to test at lower resolutions to see the real difference between CPUs, otherwise tests are almost useless.
If we test CPUs at 4K, for example, performance differences are going to be minimal, giving the (false) impression that all the parts are equally fast.
Anyway, the results seem very good.
I wonder if it's possible to test the CPU with 2 and 4 cores disabled, to see how much the extra cache really matters.
And also to see how more cores impact performance.
I need Guru3D's review!
This is why we talk of some games being CPU-bound, and where frame-rate consistency is best measured; a lot of folks talk of 1% lows and even 0.01% lows.
But frame-to-frame timing is the best analysis of gameplay, as it shows (especially on Win 11) scheduling latencies.
I couldn't care less about hypothetical max frames at 4K, as only 4-6 GPUs can even hit 120 Hz+ at 4K across a broad variety of games.
What I'm getting at is that the only people who do not see 120+ FPS at 4K as a rhetorical exercise are few and far between, numbering in the thousands, while everyone else numbers from the tens of thousands to the millions.
And sub-4K is where the demons of CPUs live.
AND Win 11 is still a major factor causing FPS drops (versus Win 10 on the same equipment), large latency spikes between frames, and other related issues, at 1440p too, not just 1080p.
But then it's not an apples to apples comparison. Why not just swap out the GPU for a faster one? Why not overclock the AL? It's a CPU benchmark - you want to keep things as similar as possible except the CPU itself.
That being said, knowing temperatures would be good, because a good CPU performance test is one where neither CPU is thermally throttled.
Are you using the same RAM, GPU, and clock settings? Because those will make a difference.
The point of 720p is to make the CPU the bottleneck instead of the GPU. Otherwise, yeah, it's stupid, because once you get past 144 FPS, there's really not much noteworthy benefit to going faster. Only a small handful of people can actually take advantage of framerates higher than that, regardless of whether they can see the difference.
When you test a CPU, take away other bottlenecks like the GPU and memory. Memory is a huge bottleneck in many games running at 100+ FPS, so why not take away most of that bottleneck too?
Based on what you say, I notice at least a preference, and that's OK; you don't need to be ashamed of it. You seem to care too much about your choices being seen as the best ones, because you "clearly know better", and that's kinda different from just having an opinion.
In my case, despite not being decisive, I have a little preference for Intel stuff. In my house we have 5 computers, only my main rig is AMD, but it could be all AMD (based on cost/benefit, opportunity etc.).
Games have never been a good test for a CPU. However, since the 5800X3D targets gamers, how else should we compare them? At 1440p and above, all of them (from the 5700X up to the 12900KF) are going to be GPU-limited and will produce around 5% average FPS variance.
Not wrong at all, but I understand the case for using the SAME RAM kit.
In an ideal world, we would have the same fastest RAM kit to test both CPUs in Gear 1 / 1:1 IF, but not all IMCs are created equal.
That's totally fine, so long as BOTH CPUs are getting the memory upgrade.
Yep, that's why I compare a max-tuned 5950X vs. a max-tuned 12900K at home. The 5950X has memory maxed at 3866 C14 tuned to 52 ns, and the 12900K has 7200 C30 at 48 ns.
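For anyone wondering how those two kits line up on raw timings: a rough first-word CAS latency in nanoseconds can be computed from the transfer rate and CAS cycles. This is just a sketch of the standard back-of-the-envelope formula (function name is mine); note it only covers the DRAM CAS component, while the 52 ns / 48 ns figures above are measured end-to-end latencies that also include the memory controller and fabric.

```python
def cas_latency_ns(transfer_rate_mts: float, cas_cycles: int) -> float:
    """Time to first data word in ns: CAS cycles divided by the real clock.

    DDR memory transfers twice per clock, so the real clock in MHz is
    half the MT/s rating; cycles / MHz gives microseconds, *1000 gives ns.
    """
    clock_mhz = transfer_rate_mts / 2
    return cas_cycles / clock_mhz * 1000

# The two kits mentioned above:
print(f"DDR4-3866 C14: {cas_latency_ns(3866, 14):.2f} ns")  # ~7.24 ns
print(f"DDR5-7200 C30: {cas_latency_ns(7200, 30):.2f} ns")  # ~8.33 ns
```

Interesting takeaway: despite the much higher clock, DDR5-7200 C30 has a slightly *worse* CAS component than DDR4-3866 C14; the DDR5 system's lower measured total latency comes from elsewhere in the chain.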
Latency will be a bit lower with the "Ghost spectrum" Win 10 build vs. the standard Win 11 I use.