Discussion in 'Frontpage news' started by Hilbert Hagedoorn, Apr 16, 2021.
Both are maxed out. That 5600X is at its ceiling: 1966 FCLK + 4750 all-core. It's the end.
How do you know about the power limits in this video?
Those numbers look fake as hell.
200% faster?
Where is it 200% faster? There's only a small gap between them. CS:GO is the only big one, I think.
Right there at the beginning, in Hitman.
And where does it show power limits?
Man, this is some Russian end-of-the-internet stuff.
1% lows, yeah, that's the cache difference in favor of Ryzen. Not sure if you can feel it during gaming though. Only some games are cache sensitive, I guess Hitman is one.
You think a 200% difference is due to cache on Ryzen? Or even possible between two modern six-cores?
Look at the Hitman numbers here.
That Russian video is just weird. But then again, most Russian internet stuff is just fake.
It's the Rocket Lake problem. Cache is worse on them, and some review mentioned that some games had much worse 0.1% lows than on Comet Lake. I will look it up later, maybe I can find more data.
Intel brought some value with the 11xxxF series. They can afford it. It will bring down AMD's prices in the future, but not while there's a chip shortage.
Please do, because it sounds improbable and I never saw it reported anywhere except from you and that Russian YouTube video.
If they're getting 14 fps 1% lows where others are getting over 100, then their setup is the problem.
I checked the GN video now and the 11400F had much better lows than Comet Lake. So the Russians messed something up with their OC, resulting in negative performance. I had the same thing at 2000 FCLK on my 5600X: everything seemed stable, but performance was lower than at 1933 FCLK.
TPU's frametime bench also didn't find anything special.
Yup, that's what I thought: that system is unstable.
Or increase the power limit and enable vsync (or some other type of fps limiter or sync). That way, when the game needs more power for CPU-intensive parts it can draw slightly more, but most of the time it won't draw extra, only when needed.
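A minimal sketch of why an fps cap keeps CPU load (and thus power draw) down, assuming a simple game loop; the `simulate_frame` callback and the 60 fps target are illustrative, not from any real game:

```python
import time

TARGET_FPS = 60                      # illustrative cap, e.g. matching vsync
FRAME_BUDGET = 1.0 / TARGET_FPS      # seconds allowed per frame (~16.7 ms)

def run_capped(frames, simulate_frame):
    """Run `frames` iterations, sleeping away any leftover frame time.

    Light frames finish early and sleep most of the budget (low CPU load,
    low power); only heavy frames use the whole budget, so the CPU works
    harder just when the game actually needs it.
    """
    for _ in range(frames):
        start = time.perf_counter()
        simulate_frame()                        # the CPU-intensive game work
        elapsed = time.perf_counter() - start
        if elapsed < FRAME_BUDGET:
            time.sleep(FRAME_BUDGET - elapsed)  # idle time = power saved
```

With an uncapped loop the CPU would spin through frames as fast as the power limit allows; with the cap, a frame that only needs ~1 ms of work spends the remaining ~15.7 ms idle.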
Increasing the power limit does the trick. I get ~3600 points in CB20 with my 9700 (non-K); it uses ~150W and reaches up to 74C on some cores. With a 9700K in the same system, same power limits (higher than Intel spec on both CPUs in the test), the 9700K manages ~3800 since it runs 4.6GHz all-core vs 4.5GHz on the non-K. Also, the 9700K at defaults was using less power (my sample used ~20W less than the non-K 9700) and was maxing out at ~67C on the hottest core.
Same cooling, same system.
Aside from that, there is no difference between the two, and since I don't care whether the 9700K can be overclocked to ~5.2GHz (an overclock that brings very little to no gain in fps on a sub-$300 GPU), I really don't care for the K. That's because I have been overclocking for the past 20 years, and now I just don't care to waste time for 2-3 fps.
Interesting. If I remember correctly, AnandTech tested the 10700 vs the 10700K and the K also drew less power, probably because it's a better-binned chip, like in your case.
By the way, I haven't had any K chips and probably never will. I'm not paying extra money for a few hundred MHz. I had an H67 board with an i3-2100, went to an i5-3550 (huge jump), then to a Ryzen 2600 (noticeable jump). But the current pricing on AMD's side is not very good: if I want to upgrade, I have to pay too much for a noticeable performance increase in gaming, 2.5-3x the price of my CPU for +50% FPS. I would rather go back to Intel for an 11400.
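The value math above can be made concrete. A quick sketch using the rough numbers from the post; the 2.75x figure is just the midpoint of the quoted 2.5-3x price range, not a real price:

```python
# Rough FPS-per-dollar comparison, normalized to the current CPU.
current_price = 1.0      # current CPU's price, normalized to 1
upgrade_price = 2.75     # midpoint of the quoted 2.5-3x range (assumption)
fps_gain = 1.5           # +50% FPS after the upgrade

current_value = 1.0 / current_price        # = 1.0 (baseline)
upgrade_value = fps_gain / upgrade_price   # ≈ 0.55

ratio = upgrade_value / current_value
print(f"upgrade delivers {ratio:.0%} of the current chip's FPS-per-dollar")
# prints "upgrade delivers 55% of the current chip's FPS-per-dollar"
```

So the upgrade roughly halves the FPS-per-dollar, which is why sticking with the current chip or switching to a cheaper 11400 can look like the better value.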