Discussion in 'Frontpage news' started by Hilbert Hagedoorn, Jul 21, 2021.
Hey, that's me without coffee in the morning
Thing is, Conroe wasn't all that much better than Pressler in most synthetic and real-world productivity benchmarks. It excelled at gaming. But day-to-day things? 3DMark06? 10-15%.
Bunch of nonsense. Time to start taking brain supplements.
You're definitely misremembering.
Well, what do you know? Weird, maybe it was the early engineering sample benchmarks that I remembered. It's been a while and I'm old. Thanks for correcting me
Prescott, not Pressler. I remember the Conroe days very clearly. When the initial performance leaks came a few months before release, people were saying there was no way it was that good. The clincher was that a $200-300 chip was destroying AMD's $700 parts (the FX chips).
AMD's equivalent in terms of performance gains over the previous gen was Zen 3.
Presler, I was referring to the 65nm Pentium D. Prescott was single core/thread and was much older (2004?). Smithfield/Pressler were dual core, Presler was released in early 2006, just months before Conroe. I had all of them, probably why it's all becoming a blur. Though since history repeating itself is a thing, perhaps Alder Lake will be just that - the next Conroe. We can hope.
OK, now I remember it. Presler was short-lived, that's why it's not at the top of many people's minds. Yes, it was an early dual core, a hot and hungry chip that wasn't that good to begin with. Conroe set things right with a proper dual core.
You are correct, except it's Presler with one S.
I upgraded from a Pentium D to a E6850 and it was like night and day!
Possibly; Conroe was clocked pretty low for most chips. A <2.4 GHz Conroe vs. a 3.73 GHz Smithfield/Presler would be pretty close.
Presler is based on Cedar Mill (which is a die shrink of Prescott). Smithfield and Presler are just dual-die packaged versions of Prescott and Cedar Mill, respectively.
I doubt these numbers as well. I do believe, and hope, that it'll beat current single-threaded performance by a noticeable margin, but this seems a bit much... And multithreaded, with its P+E setup, beating the 16-core monster 5950X seems even less believable.
However, if it is true, it's good news for everyone, including fanboys from either camp: price drops for AMD chips, and finally a noteworthy new architecture after ages of incremental updates that slap additional cores onto essentially the same design.
Fair competition is always good and accelerates progress!
I don't really think that equates to running better.
You're talking differences of nanoseconds between cores. That isn't going to translate to many more frames when frame times are in the millisecond realm.
And even if the latency was better on the little cores, the bigger cores will still execute the game engine faster anyway, as they run at higher speeds and have higher IPC.
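As a rough back-of-envelope sketch (every number here is an assumption, not a measurement), even thousands of cross-core hops per frame with a nanosecond-scale latency penalty add up to a tiny slice of a millisecond-scale frame budget:

```python
# Back-of-envelope sketch with assumed numbers: how does a nanosecond-scale
# core-to-core latency difference compare to a frame budget?
frame_time_ms = 1000 / 144        # ~6.94 ms per frame at 144 fps
latency_diff_ns = 50              # assumed extra latency per inter-core hop
hops_per_frame = 1000             # assumed cross-core synchronizations per frame

overhead_ms = hops_per_frame * latency_diff_ns * 1e-6
fraction_of_frame = overhead_ms / frame_time_ms
print(f"{overhead_ms:.3f} ms extra, {fraction_of_frame:.2%} of one frame")
# → 0.050 ms extra, 0.72% of one frame
```

Under these (made-up) assumptions the latency difference costs well under 1% of a frame, which is the point: it only starts to matter when you're chasing very high framerates.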
Yes, but the Golden Cove IPC improvements come mostly from architectural changes and being able to retire more instructions than the Skylake architecture. There is no additional programming required to take advantage of this.
It makes more of a difference than you think, considering Ryzen's latency issues are what really held it back in benchmarks for the first 2 generations.
For people who are anal about framerate and care about getting into the hundreds, it makes a difference. If you just want a game to have a good framerate, even an FX-series CPU will get the job done.
The whole point of my post was to say that the little cores aren't slower just because they're simpler.
As I've stated multiple times already:
What I said assumes the little cores can be pushed faster
IPC won't improve performance if the task at hand doesn't utilize all the instructions
You're taking this a bit too seriously for what is mostly just theory. I'm sure the little cores are missing enough instructions that modern AAA titles either simply won't run on them, or will have to use more cycles to compensate for the missing instructions, thereby negating any advantage they had. These cores are not built to run complex foreground tasks.
And I'm sure the little cores share much of the same changes, in which case that point is moot. It wouldn't make sense for the core architecture of the little cores to be different.
Zen 2/Zen 3's high core/thread count CPUs do run surprisingly hot, but yeah, from what I've read they're just designed that way to some extent.
It's the same type of design we've seen in the smartphone market where they do big little. Yes, it's a power saving feature. The idea is you have the power efficient lower performance cores handle mundane tasks like web browsing/video streaming, etc then when you load up a game the big boy cores kick in.
One thing I'm wondering is whether both the little and big cores can be active simultaneously for demanding workloads. My confusion is this: for a laptop, tablet, or phone I understand this type of design, but for a desktop tower plugged into the wall it seems an odd choice, since at that point wouldn't you really only care about the big, performant cores? Power draw and battery life are less pressing issues there. That's what Ryzen is doing currently and what Intel has been doing traditionally, so it's interesting to see this switch.
Perhaps there's something I'm missing there though. I do hope that in demanding tasks both the big and little cores can be used/active simultaneously.
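On the "can both kinds of cores be active at once" question: on Linux, affinity APIs let you see and restrict which logical CPUs a process may use, so in principle work can land on big and little cores simultaneously. A minimal sketch, assuming a hypothetical 8P+8E part where P-cores are logical CPUs 0-15 (with SMT) and E-cores are 16-23 (those IDs are pure assumptions, not how any shipping chip necessarily enumerates):

```python
import os

# Hypothetical core numbering for an assumed 8P+8E hybrid CPU.
P_CORES = set(range(0, 16))   # assumed P-core logical CPU IDs
E_CORES = set(range(16, 24))  # assumed E-core logical CPU IDs

def pin_to(cores):
    """Restrict the current process to the given logical CPUs (Linux only)."""
    if not hasattr(os, "sched_setaffinity"):
        return False  # affinity API not available on this platform
    target = cores & os.sched_getaffinity(0)
    if not target:
        return False  # none of the requested CPUs exist on this machine
    os.sched_setaffinity(0, target)
    return True

# e.g. keep a background batch job off the big cores:
# pin_to(E_CORES)
```

By default the scheduler can use every core at once; pinning like this is only for manually steering a process toward one cluster.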
Power saving also saves heat. Imagine this CPU run with a full chip of big cores at their intended power: you couldn't cool that chip. This one is already water cooled to get those benchmarks (see OP).
A big-cores-only chip would probably need a chiller or a really good custom loop just to run without throttling. The truth behind it? They probably couldn't have made an all-big-core chip they could sell, as it would never run reasonably well under air cooling.
They simply had to use lower-power cores to even make the thing usable under air, I guess.
10nm isn't that bad on power at lower frequencies (<4.5 GHz), so a 12-core chip is doable. The problem comes from the fact that Intel will also be putting this silicon in laptops, and 10nm SuperFin is simply inferior for mobile parts. Tiger Lake-H vs. Cezanne (the Zen 3 APU) shows this: at 35W the Ryzen chips are able to compete with 45W TGL chips. The only way for Intel to get power consumption low enough to compete is to use a more efficient CPU core; that's why the Atom cores are there. However, it's a pretty tough sell, even with Alder Lake, since the iGPU performance is likely to be inferior, like Tiger Lake-H, unless they increase the total die size significantly.
Hey Denial. You are a blast from the past. I'm glad to see you're still around and doing your thing. I was about to jump in but the points are already covered.
And by the way to all of you.....Core 2 Duo FTW!
It was one of the greatest CPUs I ever owned... that's why it stayed in my memory. The E6600...
After the initial tests on my PC, it felt like the angels had sounded their trumpets...
For those saying that the "leaked" gains aren't possible or are improbable.... Is this supposed to be another "optimization" + small cores or an entirely new architecture? That makes a big difference as to what's possible.
There's no way to do an "apples to apples" comparison between an M1-based product and a product using an Intel or AMD CPU, which makes any direct comparison meaningless.
We must have been watching different forums. This forum has had an Intel/Nvidia bias for years, from the announcement of Conroe up until Zen 3...
As for contributions, I see plenty. Maybe if you tried to be a little less negative... or a little less Intel-shill-like, you'd notice it too.
Core 2 Duo, aka Conroe, was also a completely different architecture from its predecessor... It's pretty common to see large gains from new architectures, and not really common to see the same gains from architectural optimizations.
People have a tough time believing these gains because neither Sandy Bridge, Haswell, nor Skylake blew anyone away. Intel hasn't made these kinds of gains since... Core and Nehalem. To quote a meme: "it's been 84 years".