I'm sure it's been said at this point, but I kind of wish they had released a single CPU (7800X3D) with a binned core that allowed for a higher clock. Mixing production and gaming like this leaves us with an expensive chip that isn't the best at production and is a bit gimped for gaming compared to where it could have been. The main issue with an X3D production-work CPU is that not enough people will buy it for production work to justify updating codebases to take advantage of the cache.
Some games will use the efficiency cores. I know that Spider-Man Remastered does, and my CPU will average around 130 W in that game (at native 4K with maxed settings) because the efficiency cores are used for decompressing and streaming the assets. The vast majority of games won't, however. And you can see in the TPU review that Spider-Man Remastered is one of the games where RPL has an advantage over Zen 4 3D.
CPU utilization is a useless metric for determining how CPU-limited a game is. Games are almost exclusively limited by single-thread performance. Besides, you're completely missing the point. Benchmarks are there to tell you what a CPU is capable of, not to tell Joe-Blow midrange what is "fine" for current 4K games with current GPU hardware. You're acting like these reviews only test at 720p or 1080p. They don't. They show the full range. If all you care about is how it's going to perform in today's games with today's GPUs, then that 4K benchmark is fine for you. For those of us who upgrade GPUs every release cycle and CPUs less often, we want to know what the CPU is capable of in a vacuum, without the GPU as a factor.
My thoughts on Zen 4 3D: it's undoubtedly an impressive CPU with unmatched efficiency in gaming workloads, but it falls far short of the hype. This CPU was supposed to crush RPL, but at most it manages to just edge it out when averaged across multiple titles, or merely equal it depending on the memory configuration and optimization. So now, predictably, the argument has shifted from "crushing RPL in performance" to "blowing it out of the water in efficiency." The problem with that is that RPL isn't really inefficient when it comes to gaming. Games that really push CPUs are quite rare, Spider-Man Remastered being the most notable example. Also, reviewers often test games under unrealistic conditions, i.e. at 720p with an RTX 4090, to make the games more CPU-bound. This is fine of course, but readers should know that it increases CPU power consumption as well.

One of the biggest benefits of Zen 4 3D is that it totally resolves Zen 4's biggest weakness: the memory controller. Zen 4 was AMD's first stab at a DDR5 memory controller, and by all accounts it is subpar and underperforming compared to Intel's. Using standardized DDR5-5200 results in poor gaming performance with Zen 4, whereas RPL doesn't have that problem at its default DDR5-5600. Zen 4 practically needs DDR5-6000 to compete with RPL, but Zen 4 3D's massive L3 cache totally resolves that problem.

Which brings me to my next point: RPL has a very high memory ceiling and can obviously be pushed much further. No reviewer has tackled manual timings with DDR5 to my knowledge; they all use the XMP profile. But XMP is literal garbage compared to manually tuned sub-timings. Manually tuned DDR5-6400 will outperform XMP DDR5-7200 any day of the week.
I must admit, I'm rather disappointed with the release of the 7950X3D. When the 5800X3D was first released and I noticed a significant decrease in productivity app performance, I gave them the benefit of the doubt since it was their first attempt. However, I expected them to improve upon that with Zen 4 3D, especially with the added benefits of V-Cache, so I expected the Zen 4 3D versions to have the same (or better) productivity app performance. It didn't happen. In my opinion, it would have made more sense to release only the 7800X3D. I can see some benefit in buying the 7950X3D, but the 7900X3D, with its 12 cores, makes no sense to me.
Other than the fact that there were numerous complaints that only the 5800X3D was available, and now that the higher-end parts are available, people only want the lower-end part...
HH, it would probably be a good idea to inform Intel that they are "dominant" now... I'm sure this is something they need to hear!
of course you're set w/ a 13700kf. while i have all of the parts for an AM5 rig except the GPU and CPU, i don't really need to upgrade at all from my current 5950X & 10700k. well, i really feel i do for the 10700k, but *nerd alert* i've been watercooling all of my rigs since 2010 and the biggest secret i've learned is to spend money on quality quick-disconnect fittings so upgrading is a breeze (except for putting a block on the gpu).
5800X3D was great for some tests on Linux, according to phoronix https://www.phoronix.com/review/amd-5800x3d-linux
Interesting, but completely pointless post, since 13th gen doesn't reduce pcore frequencies based on the number of pcores utilized (except for the 2 "star" cores boost, which doesn't have any impact on modern games anyway, because all of them utilize more than 2 cores). And even the 13700K runs at full frequency on all pcores when not power/temp limited. So no, that is not what is happening there.
interesting, does it affect the ecore frequency? I mentioned the ABT thing because the Tom's Hardware review mentions it, and it is definitely weird then.

edit: actually did ze maths, this is roughly solvable. let a = pcore power and e = ecore power, each multiplied by the number of cores. with the data from TPU, the configs are as follows:

8a + 8e = 107 watts for the 13700k
8a + 16e = 143 watts for the 13900k

first we simplify by dividing both equations by 8:

a + e = 13.375
a + 2e = 17.875

then we solve for e: subtracting the first equation from the second gives e = 4.5, and we find a via 13.375 - 4.5 = 8.875.

then we plug it back in to see if it's right:

8(8.875) + 8(4.5) = 107 (estimate)
8(8.875) + 16(4.5) = 143 (estimate)

looks ok. so how do we know that this is correct? we check against another Raptor Lake product:

6a + 8e = 89.00w (recorded data) for the 13600k

drum roll...... 6(8.875) + 8(4.5) = 89.25w (estimate) ... it works!

while the clock speeds are different, they are within ~10%, so within our error margins. we could also probably compare against the 13400f by normalizing based on performance, but that is a lot more work. but anyway, we can basically say that it's just the ecores causing the extra power draw. it's almost too perfect to be honest, is tpu fudging their data??? lol
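A minimal sketch of the same arithmetic in Python, assuming the average gaming package power splits linearly as pcores × a + ecores × e and using the TPU figures quoted above (the per-core terms also absorb VRM/SoC overhead, so they're not "real" per-core watts):

```python
# Back-of-envelope model from the post above: average gaming package power
# is assumed to split linearly as pcores * a + ecores * e.
# Watt figures are the TPU average gaming numbers quoted in the post (assumption).
w_13700k = 107.0   # 8 pcores + 8 ecores
w_13900k = 143.0   # 8 pcores + 16 ecores

# 8a + 8e = 107 and 8a + 16e = 143 -> subtracting isolates the extra 8 ecores.
e = (w_13900k - w_13700k) / 8      # 4.5 W per ecore (plus its share of overhead)
a = (w_13700k - 8 * e) / 8         # 8.875 W per pcore

# Sanity check against a third part: the 13600K is 6 pcores + 8 ecores, measured ~89 W.
print(f"per pcore: {a} W, per ecore: {e} W")
print(f"13600K estimate: {6 * a + 8 * e} W")   # 89.25 W vs ~89 W measured
```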
Your math is correct; however, I don't think that testing it on just one other CPU (13600k) is enough proof. But even from your math you can see that this power consumption difference would only occur if those 16 ecores on the 13900k were utilized the same way as the 8 ecores on the 13700k, which is imho impossible, because the performance difference between the 13700k and 13900k in games, even according to TPU, is just a few percent. So it definitely doesn't add up, and I hope that you are wrong about TPU fudging their data, because that would just be sad.
To be fair, it's actually 3 CPUs as proof, since the formula works for all 3; if you were to do the same thing with any 2 CPUs, you would be able to predict the power consumption of the 3rd. I think the reason it fits the real data so well is that it's an average of a bunch of games rather than just 1 or 2. Also, since threads are mainly shuffled around all of the pcores, it's going to be indicative of the increase in idle ecore power consumption; if the average power consumption increase wasn't the ecores, then the estimate for the 13600k would probably be off by a fair amount. So either Intel has some weird, super-symmetrical power scaling happening, or it's the ecores.

The 13900k has significantly more logic enabled, so it's not completely unexpected.
Spoiler https://fuse.wikichip.org/wp-content/uploads/2022/10/raptor-lake-marked-side-by-side.png
The 16 ecores take up almost as much space as 6 pcores, and they draw significant power when running at high frequencies. I suspect that techpowerup uses the performance power plan, which keeps idle clocks higher and prevents core parking, making it worse than it has to be. Just having cores "on" all the time will increase power consumption even if they aren't actually doing anything.
The only proof is the third CPU, because you used the 13700k and 13900k as your input values for those equations. You can't prove anything by testing the result against your own input values; the only thing you test that way is whether you made an arithmetic error. And your second assumption, about 40w being consumed by 8 ecores doing nothing, is also wrong. When my 13900kf is in full idle with parked cores, it consumes around 8.5w. When it goes to full frequency/voltages for all cores, but without any utilization, it consumes 20-25W... the whole CPU package. So it's fair to assume that 8 ecores, even at maximum frequencies but with zero load, will consume less than 5W.
they would've if they could've, but they couldn't and so we can't.

seriously tho, there's only one fab in the world (so far) that can do 3D cache. i wasn't feeding the hype machine when i said it is the most stunning piece of precision engineering the world has ever seen. the tolerances here are in angstroms, and both the ccd and the cache have been milled so fine there is "barely" more than a few molecules separating the traces on each die. there is no need for a bus because the circuit makes its own connection thanks to basic (but not simple) electrical properties.

all of which is true, but for the jaded i offer another truth - Epyc pays the bills. AMD has cloud datacenter dominance, and if you don't include the custom silicon for Sony & M$, the datacenter business is the most profitable. AND this is where 3D cache began. the 5800X3D was a "what if" and wasn't roadmapped at the time. the bulk of production @ node is still Epyc, and this is also why the 7800X3D is a late show: they can't stop selling their most profitable item for a unit that will never hit 20% profitability (mass market and the need for competitive flexibility vs being "the only game in town").
you can derive a and e from any 2 CPUs. if you redid the math with the 13600k and the 13900k you could predict the 13700k, and if i took the 13600k and the 13700k, it would predict the 13900k. understand that if the supposition (more ecore logic = extra power observed) was wrong, this wouldn't work at all. this is also average power consumption; it is possible that some games use the ecores.

you can't compare software package power measurements, because tpu is using physical measurements, which include vrm losses (note vrms are least efficient at low power, and the motherboard they are using, the z790 maximus, has a monstrous vrm). it is also a mistake to compare the estimated e and p core values independently of the full equation, since they are not real: they include vrm losses, soc power and any other power-drawing components, and as such are not comparable to "real" values. this is ok because everything used in the review is exactly the same other than the cpu config; the changes observed are directly a consequence of the change in chip configuration and nothing else. the only thing better to do is use a different data set and redo it.

however, for our basic supposition, "does the extra power consumption scale with the extra logic", what i have done is sufficient, because it's well within the error margins and follows tightly (also i'm lazy). it doesn't actually matter where that power is coming from, just that it scales with the logic on the chip. we are actually fortunate that the 13600k is configured as 6+8, because it shows that the equation works when pcore logic is disabled as well and not ecores only. that greatly increases my confidence that it's not just baloney.
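A rough sketch of that leave-one-out check, under the same assumptions as before (linear pcore/ecore split, TPU average gaming figures): fit a and e from each pair of CPUs and predict the one left out.

```python
# Leave-one-out check of the linear model: fit per-pcore (a) and per-ecore (e)
# watts from two CPUs, then predict the third. Core counts are the real
# 6+8 / 8+8 / 8+16 configs; watt figures are the TPU averages used above (assumption).
cpus = {
    "13600K": (6, 8, 89.0),
    "13700K": (8, 8, 107.0),
    "13900K": (8, 16, 143.0),
}

def fit(sample1, sample2):
    """Solve p*a + n*e = w for two (pcores, ecores, watts) samples via Cramer's rule."""
    (p1, n1, w1), (p2, n2, w2) = sample1, sample2
    det = p1 * n2 - p2 * n1
    a = (w1 * n2 - w2 * n1) / det
    e = (p1 * w2 - p2 * w1) / det
    return a, e

for left_out in cpus:
    pair = [cfg for name, cfg in cpus.items() if name != left_out]
    a, e = fit(*pair)
    p, n, measured = cpus[left_out]
    print(f"{left_out}: predicted {p * a + n * e:.2f} W vs measured {measured} W")
```

All three pairings land within about 1 W of the measured average, which is the "extra power scales with the enabled logic" point.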
16-review roundup: the 7950X3D is untouchable for performance/power, while the 13600K/13700K are far ahead for perf/price over their own 13900K/S and R7000 (incl. X3D). https://videocardz.com/newz/amd-ryz...-core-i9-13900k-a-summary-of-16-reviews-shows Just as a side note, see how, among the 16 sites, Hardware Unboxed (Techspot) always has the lowest score for Intel but the highest for AMD.
Wtf? Techspot gave the 7950X3D a 3.2% lead over the 13900K. Several sites found the gap to be higher: ASCII was 10%, Hardwareluxx was 10.4%, PCGH was 6.2%, Tom's Hardware was 10%.