Pretty sure F36 on Coffee Lake is VSS to GND in the Intel PDF, at least. Also J14, J15, AU9, and AU10 appear to be RSVD on the 6th-gen and 8th-gen ball maps I'm looking at in the Intel PDFs. VCC_GTx_SENSE appears to be auxiliary to VCC_GT_SENSE, so if you short it to ground it shouldn't affect anything; it could also be ignored via software if it's used for POSTing. Either way, all of the pins that changed appear to be very minor things. Pretty sure the RSVD pins are not used for normal operation. The only way to know for sure would be to check them with an oscilloscope, or to try taping over them / shorting them to GND (on a Kaby Lake CPU) and seeing if the proc still POSTs in a Z270 motherboard. Intel could have supported Kaby Lake on Z370 if they wanted to, there's no question about that at least.
Thanks for that link. The first graph in that article shows the average FPS difference between CPUs, averaged across 15 different games. Some people were saying that the 1700X should be compared to the 8700K as they are similarly priced, and I agree with that statement; the graph shows the 8700K gaining 23% more framerate than the 1700X (comparing stock to stock).
Sorry, should've been J17, J19, B39, C40 - clearly labelled VCCG0 or VCCG1 on the ball map (pages 122-123 of the 8th-gen datasheet). Ah, you're right. It would probably require a new chipset revision to implement CPU detection, and from what we know, Z370 is simply old Z270 silicon with a new hardware ID. https://www.thurrott.com/hardware/140883/intel-takes-8th-generation-processors-desktop This new socket revision looks like the result of a late and unexpected decision to market the rumored 8-core LGA 1151 Coffee Lake/Cannon Lake processors - so Z370 could be a stop-gap solution until proper 300-series chipsets arrive with support for 4-, 6-, and 8-core LGA1151v2 processors across the entire range (B360/Q370/H370/Z390). But do not expect any backward compatibility - Intel would probably treat this LGA1151v2 as a new, mechanically incompatible socket, much like LGA2011/2011-1/2011-3.
Well, I've had time to look at a variety of videos, including Digital Foundry. For me it's a touch pricey, but nothing overly shocking. It's a good gaming CPU, and for people like me there's some extra power for music DAW production. Decent product, tbh. I would love to compare it to the Ryzen 1700X and 1800X in a DAW and see which one wins.
DAWs need more single-threaded power than multi-threaded. Each channel in your DAW is limited to a single thread for all signal processing that occurs on that channel. This is because splitting a channel's processing across threads would introduce latency in playback. More than one channel can share a thread, so multi-threading is a bit of a lark. If you're running a lot of channels, more threads can be beneficial because your pool of available processing headroom is greater. The caveat, though, is that if your CPU can't keep up with a channel's needs, it can't shift those needs onto two threads. So while a particular CPU may have better multi-threading capabilities, from a practical standpoint it doesn't expand your capabilities unless the CPU is able to keep up with the demands of each channel. Having 8 threads isn't beneficial if you're maxing out a thread's capability with one or two plugins. If you run a lot of VSTs, DSP, etc. on a channel, more per-core CPU power is essential. If you run a lot of channels with audio playback but not much processing, more threads might be beneficial. (From: https://sound.stackexchange.com/questions/30057/is-multi-threading-important-for-daws ) Obviously you'd want the best of both worlds to cover all the bases. Designing a DAW system can be quite tricky since you have to juggle a lot of variables depending on your needs.
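That channel-to-thread constraint can be sketched in a few lines. This is a toy model, not how any real DAW schedules work - the buffer deadline and channel costs are made-up numbers - but it shows why extra threads help with many light channels yet do nothing for one overloaded channel:

```python
# Sketch: why more threads don't help if one channel maxes out a core.
# Each channel's DSP cost (ms per audio buffer) must fit inside the
# buffer deadline on a SINGLE thread; channels can share threads but
# a single channel cannot be split across threads.

BUFFER_MS = 5.8  # ~256 samples @ 44.1kHz -- hypothetical deadline

def schedule(channel_costs_ms, num_threads):
    """Greedily bin-pack whole channels onto threads; return per-thread load."""
    loads = [0.0] * num_threads
    for cost in sorted(channel_costs_ms, reverse=True):
        loads[loads.index(min(loads))] += cost  # whole channel on one thread
    return loads

def has_dropouts(loads):
    """A thread that misses the buffer deadline causes audio dropouts."""
    return any(load > BUFFER_MS for load in loads)

# Many light channels: extra threads genuinely help.
light = [1.0] * 16
print(has_dropouts(schedule(light, 2)))  # 8.0 ms/thread -> overload
print(has_dropouts(schedule(light, 4)))  # 4.0 ms/thread -> fine

# One heavy channel: no thread count saves it.
heavy = [7.0, 1.0, 1.0]
print(has_dropouts(schedule(heavy, 8)))  # 7.0 ms stuck on one thread
```

The heavy channel overloads even with 8 threads, which is the quoted post's point: per-core speed sets the ceiling for any one channel, while thread count only raises the total.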
Ah, I see - seems like someone forgot to blank those out / label the table correctly, lmao. But again, you wouldn't really know if those are different from Kaby Lake unless you measured both with an oscilloscope, since those pins are undocumented. I would do it myself, but I lack the resources to buy a Z270 and a Z370 platform for testing. I dunno - if Intel can transplant Kaby Lake to X299 despite it not having FIVRs, their stuff must be pretty flexible. It's not like the chipset really has anything new and/or uses a different type of DMI link; nothing is functionally different between Coffee Lake and Kaby Lake besides the extra power pins and a few relocated pins, which are not critical to the function of the processor. If anything, I'm more surprised that Intel hasn't launched a revised "7790K"-type CPU for Z370 to fill the lack of Coffee Lake CPU supply. It would make a whole lot more sense than the 7740X on X299.
Thank you for the review, Hilbert. If possible, I would like to know how they perform in some older titles such as Metro: Last Light and games from that timeframe. The only reason I am asking is that I do not play any of the games that were tested in this review (old-school here, lol). So if possible, maybe add some to future reviews? Thank you, Hilbert.
LGA 2066 is a server platform, designed for maximum flexibility from the ground up - with a choice of 2 or 4 memory channels and 4/6/8/10/12/16-core processors with very different PCIe configurations, lane topologies, and/or numbers of QPI links, all on the same motherboard (and using the same enthusiast-level chipset for Core-branded parts). This probably requires a lot of processor detection and configuration pins to function properly (I can't find a pin description for LGA 2066 processors, though). LGA1151 support for 6- and 8-core processors looks like an afterthought, implemented as a quick-n-dirty solution to counter the AMD Zen processors, which have eaten a substantial share of the desktop market. We do not know what impact these new 6- and 8-core configurations have on support and testing, and we do not know if this implementation is even cost-efficient - in fact, the Thurrott.com article linked above assumes that Intel bears substantially higher costs to produce these 6-core processors, but cannot increase the prices because of AMD.
Ice Lake is a completely new architecture; Cannon Lake is still the Skylake architecture. https://en.wikipedia.org/wiki/Tiger_Lake_(microarchitecture)
Well, I was talking about the availability claim, really - I guess I should have specified. And obviously AMD and Intel try to rip each other's throats out; they compete in the same market, after all!
If it did, I would have bought an 8700K, but since I'd have to get a new mobo to do that, Intel won't get anything from me for another 3-4 years when I build my next PC. By then, 8c/16t or more @ 4GHz should be a thing and the era of multi-core/thread-aware software should be in full swing... maybe? Hell, if AMD keeps going the way they've been going over the last year, my next system might be AMD. It all depends on how close AMD gets to Intel's performance in both single-threaded and multi-threaded workloads while remaining much cheaper.
There's one thing called "improvement". You can blindly like whatever the Guru3D staff writes, or you can give feedback on how to improve it. We all appreciate the work here. While it's impossible to keep everybody happy with an article, some things can be done to majorly improve its quality.
Yes, it does, but you keep intentionally ignoring the other reasons. Not everyone has your priorities. Again, I'm not saying things like minimum frame rates and frame times aren't worth testing - they definitely are. But to claim the current tests are useless is simply not true.

Name one game at 1080p+ and 144Hz that is CPU-limited by something like a 7700K or an 8700K, rather than by the GPU. You can't, because there isn't really a significant difference... Yes, there would be a measurable one outside the margin of error, but measurable differences are not the same as perceivable differences. What matters most is frame consistency. 24FPS looks fine as long as the media is intended to run at that framerate and it's consistent; 120Hz can be a terrible experience if there's micro-stuttering. Assuming the CPU has enough threads to meet the game's requirements, it doesn't matter what CPU you get.

I don't give a rat's ass if these tests reflect poorly on my CPU. I've done my research and I'm well aware of Ryzen's poor latency, the scheduler issues, sub-par IPC, and limited frequency. You think that sounds like a good gamer's CPU? And yet I bought it anyway, because none of those things ruin my gaming experience. As I stated in an earlier post, Ryzen is not suitable for everyone, but for the vast majority of people it's more than good enough. And as I said before, if hypotheticals and future-proofing are the priority, where do you draw the line?

Anyway... as I've stated several times: I don't disagree that the tests you want provide useful information, and I wholeheartedly agree they should be provided. What I don't agree with is the claim that the existing data is useless, because it really isn't.

P.S. I am writing this on my primary home PC, an Intel Haswell platform.
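The measurable-vs-perceivable point is easy to show with numbers. A toy example with fabricated frame-time traces: two runs with near-identical average FPS, but one stutters badly, which the common "1% low" metric exposes while the average hides it:

```python
# Toy frame-time traces in milliseconds (fabricated data, not benchmarks).
smooth  = [8.3] * 120                 # steady ~120 FPS
stutter = [6.0] * 110 + [33.0] * 10   # similar average, micro-stutter spikes

def avg_fps(frametimes):
    """Average FPS over the whole run."""
    return 1000.0 * len(frametimes) / sum(frametimes)

def one_percent_low(frametimes):
    """FPS computed from the worst 1% of frame times."""
    worst = sorted(frametimes)[-max(1, len(frametimes) // 100):]
    return 1000.0 / (sum(worst) / len(worst))

for name, run in (("smooth", smooth), ("stutter", stutter)):
    print(name, round(avg_fps(run)), "avg FPS /",
          round(one_percent_low(run)), "FPS 1% low")
# smooth  -> ~120 avg / ~120 1% low
# stutter -> ~121 avg / ~30 1% low
```

Both runs would sit side by side on an average-FPS bar chart, yet only one of them feels smooth - which is why frame-time/minimum-FPS data is worth adding without making the averages useless.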
What's more, the results are all so far above silky-smooth that it reminds me of an old comic from back during the Cold War: one guy says to his friend, "I feel so much more secure knowing we can blow up the world 7 times over and the Russians can only blow it up 5 times."
High horse my ass^... Ryzen off the rip could not catch the 7700K... the 8700K is gonna signify the truth. No fanboy here!... But a new 8700K setup would be sweet!
For any gamer with a monitor under 90Hz (not sure if they even make 90Hz monitors!), Ryzen is just fine and probably the better choice due to affordability, lots of cores, and some impressive future-proofing because of all those cores. But if you have a 120Hz+ monitor, then you gotta go Intel to ensure you're getting the most out of it.
I didn't claim the current benchmark data is useless. It does show differences between CPUs. It's just not as useful for users who are looking into purchasing a 144Hz monitor and comparing which CPUs can actually deliver that many frames. We see only a fraction of the difference, not the whole picture.
I gotta agree^... any benchmark/graph that shows all CPUs as equal is useless... we'd all have i3s then.