The Intel 13th Generation Core Raptor Lake-S range leaks: 4 to 24 cores on three separate dies.

Discussion in 'Frontpage news' started by Hilbert Hagedoorn, Aug 19, 2022.

  1. MegaFalloutFan

    MegaFalloutFan Maha Guru

    Messages:
    1,048
    Likes Received:
    203
    GPU:
    RTX4090 24Gb

    On the other hand, there is that first game where it's on par with Zen 3.

    What's funny is that for a hardcore gamer with a 4K display there won't be any difference, because of the GPU bottleneck.

    No one said you should game on E-cores; they're not for gaming. Yet they can do it, and being just gen 1, they will get faster with each optimization. They have a latency penalty that will be fixed [maybe it already is, since the 13th-gen leaked benchmarks are insane; it can't be just IPC improvement, I bet the E-cores were improved too].

    Honestly, I would prefer a gaming CPU with just 12 P-cores and no E-cores.
    If Intel were smart [and I wonder why they never did this, even when they lost to AMD in speed and core count], they would offer a CPU without an iGPU and use the space to add more cores, maybe even double them. On some die shots the iGPU takes as much space as the CPU cores, if not more, so why waste that space on something most people don't care about or can't use? [Like me: I have a 12900K and a Unify-X, and MSI, unlike ASUS, doesn't let you access the iGPU on boards without display outputs. On ASUS I had a board without HDMI/DP and the iGPU was still enabled and usable in Windows for video encoding.]

    No E-cores and no iGPU: at a bare minimum that could double the P-cores to 16C. That's a monster!

    I hope Intel comes out with HEDT that is not one generation late like before, but on par with mainstream. I remember the days of the 5820K, and I would like to get a HEDT 12-16, maybe 20 P-core CPU that can do 5 GHz by default and, with a custom loop, say 5.3 GHz [or 5.5 GHz if it's 13th-gen Intel].
     
  2. MegaFalloutFan

    MegaFalloutFan Maha Guru

    Messages:
    1,048
    Likes Received:
    203
    GPU:
    RTX4090 24Gb
    In his benchmarks only the E- and P-cores are clock-limited to 3.9 GHz; all other CPUs run at default. It says so near each CPU name.
    So it looks like 8 E-cores are faster than a Ryzen 3600X; that's surprising.
    I haven't overclocked my CPU; I figured, what's the point if the 13900K is coming out soon? But E-cores can easily do 4.1 GHz without breaking a sweat, and people are doing 4.3 GHz on good boards.
    Intel needs to add HT to E-cores; that would be a good selling point. A 13900K with HT on everything would be a 24C/48T CPU, which looks good in ads. I hope they do it for Meteor Lake, or at least make the E-cores half the size and give us 32 of them instead of 16.
     
  3. winning.exe

    winning.exe Member

    Messages:
    22
    Likes Received:
    17
    GPU:
    Nvidia
    Sapphire Rapids is years late at this point and nowhere in sight; it was meant to compete with EPYC Rome and will end up competing with Bergamo. No Intel HEDT will be coming until then, so you can count on it being uncompetitive :(
     
  4. Horus-Anhur

    Horus-Anhur Ancient Guru

    Messages:
    8,726
    Likes Received:
    10,815
    GPU:
    RX 6800 XT
    Nice argument. If we run tests that don't stress the CPU, then P-cores and E-cores are equal.

    [attached image]
     

  5. cucaulay malkin

    cucaulay malkin Ancient Guru

    Messages:
    9,236
    Likes Received:
    5,208
    GPU:
    AD102/Navi21
    Min FPS is 3x higher on P-cores only.
    They're a nice addition as long as they don't cost much, but that's all. They're not going to do much for multithreaded games either; you need an actual 8-core.
     
    Horus-Anhur likes this.
  6. asturur

    asturur Maha Guru

    Messages:
    1,373
    Likes Received:
    503
    GPU:
    Geforce Gtx 1080TI
    No, or they would have done it.
     
  7. Kool64

    Kool64 Ancient Guru

    Messages:
    1,662
    Likes Received:
    788
    GPU:
    Gigabyte 4070
    Might as well go all E-cores then and stuff them with 64 or more. Plus, as an added bonus, they can get rid of the vulnerability trap of HT.
     
  8. schmidtbag

    schmidtbag Ancient Guru

    Messages:
    8,013
    Likes Received:
    4,389
    GPU:
    Asrock 7700XT
    If you're talking performance-per-watt, sure. If you're talking workloads that don't use a lot of advanced instructions, then I would rather have 2 or 3 E-cores than 1 P-Core (3 E-cores take up less die space than 1 P-core). But if my CPU makes me money and I have a highly parallel workload using advanced instructions, E-cores are not good enough. At that point, it makes a lot more sense to buy AMD. That being said...
    ... if you're committed to Intel, then yeah, many E-cores are the only cost-effective way to run a highly parallel workload on a workstation. While I think Intel is severely price-gouging for their workstation Xeons, they kind of have to charge too much due to their giant monolithic dies, which are inherently very expensive. With Intel, you have to either pay an exorbitant price or lose some performance with E-cores. AMD lets you have all-"P-core" chips with many cores at a relatively low price and overall better performance-per-watt, where all you really sacrifice is latency (and that doesn't really matter if you're just crunching numbers).
     
    Last edited: Aug 22, 2022
  9. cucaulay malkin

    cucaulay malkin Ancient Guru

    Messages:
    9,236
    Likes Received:
    5,208
    GPU:
    AD102/Navi21
    Last edited: Aug 22, 2022
    shady28 and fantaskarsef like this.
  10. winning.exe

    winning.exe Member

    Messages:
    22
    Likes Received:
    17
    GPU:
    Nvidia
    The only "advanced instruction" not supported by E-cores is AVX512 (E-cores DO support up to AVX2, and up to AVX "fully"), and AMD processors have no support for AVX512. All available information suggests that Raptor Lake E-cores will support AVX2 fully. So, if you have a highly parallel workload with "advanced instructions," it doesn't make much more sense to buy AMD than it does to use E-cores. In reality, AVX2 and AVX512 only show up in niche use cases, are highly L1-cache dependent, and are seldom used because the speed increase due to parallelism is not proportional to power increases and clock speed decreases. A significant amount of software tops out at SSE2 or SSE4.

    As to the second half, I use an AMD EPYC 7742 in my workstation. It is last generation's top of the line 64-core server processor; my CPU "makes me money" and I have a highly parallel workload. One Alder Lake E-core is about 25% faster than one "P-core" on my processor. Even if I were to upgrade to the "latest" 7763 or 7773X (which I plan to do at some point), an E-Core would still be the same speed or faster.

    Consumers have gotten this notion that E-cores are slow because they're slower than Intel's P-cores, but in the scheme of things they are actually very fast, especially compared to server processors.
     
    Last edited: Aug 22, 2022
    GoldenTiger likes this.

  11. schmidtbag

    schmidtbag Ancient Guru

    Messages:
    8,013
    Likes Received:
    4,389
    GPU:
    Asrock 7700XT
    AVX-512 isn't what makes a P-core nearly 4x larger; it's not the only difference. From what I recall, AVX-512 will be disabled in 13th gen from the factory anyway, at which point, why include the units if that's the only difference? E-cores also lack HT and are smaller cores, and to my understanding there are other architectural differences in the structure of the pipelines. It's hard to find specifics in the sea of layman's articles and over-hyped press materials, and I don't care enough to spend more time looking it up.

    I would sure hope one of those E-cores would be faster than a 7742's cores in many workloads. There are lots of differences there; core for core, even a 12600's E-cores would be competitive.

    I agree with the last bit, though I do feel you're overselling how good the E-cores really are. I'm pro E-core and have been since before Intel released them, but if they were as good as you claim, they would wholly replace the P-cores.
     
  12. winning.exe

    winning.exe Member

    Messages:
    22
    Likes Received:
    17
    GPU:
    Nvidia
    There are large architectural differences between P and E cores, from the ones you mention in your comment, to the width of the front end, to register sizes. Intel has made all of these details very public.

    The entire reason for the existence of 10+ watt per core "P-cores" is to maintain a competitive edge. If Intel came out and said "our processors now use 1/2 the power but are only 2/3 as fast as AMD's," nobody in the consumer space would buy the processor :D. Manufacturers are forced to push clock speeds and process nodes further and further away from peak efficiency in order to maintain a competitive edge. The "sweet spot" on any current process node's performance-power ratio is between 3 and 4 GHz, yet manufacturers have to push past 5GHz to secure their status as "the fastest." Consumers have been misled to believe that you need a very powerful processor for mundane tasks, to the point of absurdity :(.
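
    A back-of-the-envelope sketch of that sweet-spot argument: dynamic power scales roughly with C*V^2*f, and the voltage needed climbs with frequency, so the last GHz costs far more power than it returns. The voltage/frequency points below are made-up illustration values, not measurements of any real chip:

        /* vf_curve.c - toy illustration of why clocks past the efficiency sweet spot are expensive. */
        #include <stdio.h>

        int main(void) {
            const double freq_ghz[] = {3.5, 4.5, 5.5};    /* hypothetical operating points */
            const double volts[]    = {0.85, 1.05, 1.30}; /* hypothetical Vcore needed at each point */
            const double base = freq_ghz[0] * volts[0] * volts[0];
            for (int i = 0; i < 3; i++) {
                double rel_power = (freq_ghz[i] * volts[i] * volts[i]) / base; /* ~ C*V^2*f, C cancels */
                double rel_perf  = freq_ghz[i] / freq_ghz[0];
                printf("%.1f GHz: ~%.2fx power for ~%.2fx performance\n",
                       freq_ghz[i], rel_power, rel_perf);
            }
            return 0;
        }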

    In terms of E-cores being "over-hyped," Intel's most recent Xeon segment roadmaps show all E-core processors by 2024. Moreover, AMD has announced plans for "Zen 4c" all E-core processors to parallel Bergamo in 2023 and 2024. In terms of high performance, highly parallel, energy efficient compute, both AMD and Intel see E-cores as the way forward.
     
    GoldenTiger likes this.
  13. user1

    user1 Ancient Guru

    Messages:
    2,780
    Likes Received:
    1,303
    GPU:
    Mi25/IGP
    While this is certainly true, I would add that the lack of cheap die shrinks has made wider designs less favorable lately. While a wider 3-4 GHz CPU would be better efficiency-wise, smaller, faster-clocked chips are cheaper to make and easier to produce in volume. I think this is a big part of the reason we've seen the frequency push from basically every vendor (AMD, Nvidia, Intel, etc.), specifically in the consumer market; where power efficiency matters most, lower-clocked fat chips still reign supreme.
     
  14. MegaFalloutFan

    MegaFalloutFan Maha Guru

    Messages:
    1,048
    Likes Received:
    203
    GPU:
    RTX4090 24Gb
    But that's just how it is for every 4K gamer.
    Who buys a 3090 to game in FHD?
    Most single-player gamers chase quality, not 300 FPS. People hit the GPU wall, so these benchmarks are OK, but they do not reflect reality.
     
  15. winning.exe

    winning.exe Member

    Messages:
    22
    Likes Received:
    17
    GPU:
    Nvidia
    In the x86 space the "clocking higher" part is a necessity, because x86 as an ISA was not constructed with the foresight of very wide-issue superscalar processors. In fact, the entire notion of superscalar processing and executing multiple parts of instructions in parallel (instruction decoding and micro-ops) was bolted on after the fact :D.

    Your processor has hyper threading (Intel's HT) or simultaneous multi-threading (AMD's SMT) for this reason: the "wide" nature of x86 today would not allow the processor's resources to be used fully with only one thread running on the core. Thus, two threads are crammed into the same core to keep everything busy :p.

    This is a main challenge of x86 processors, and it's why L1 and L2 caches, buffers, registers, and so on continue to grow in size (like with Alder Lake / 12000 series). A similar problem was first faced in the Pentium 4 days and has been masked by HT / SMT ever since.

    It's very easy to tell that x86 processors are already too wide :confused:. For example, even on my very slow per-core EPYC processor, turning off SMT results in performance losses of 30-50% in highly parallel tasks like path tracing and video encoding; that is to say each single core is already "too wide." The x86 ISA does not scale to wider processors as well as it does to faster clock speeds.
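
    If anyone wants to reproduce that kind of comparison, here is a minimal sketch, assuming Linux with GCC and OpenMP; the loop is a toy FP kernel standing in for the real workloads, and a strict "SMT off" test would also pin threads to physical cores:

        /* smt_scaling.c - same fixed amount of work on half the logical CPUs vs. all of them.
           Build with: gcc -O2 -fopenmp smt_scaling.c */
        #include <omp.h>
        #include <stdio.h>

        static double busy_work(long iters) {
            double x = 0.0;
            for (long i = 1; i <= iters; i++)
                x += 1.0 / (double)i;          /* dependent FP chain, leaves execution slots idle */
            return x;
        }

        static double run_with(int threads, long total_iters) {
            double t0 = omp_get_wtime(), sink = 0.0;
            #pragma omp parallel num_threads(threads) reduction(+:sink)
            sink += busy_work(total_iters / threads);
            double elapsed = omp_get_wtime() - t0;
            printf("%3d threads: %.2f s (checksum %.3f)\n", threads, elapsed, sink);
            return elapsed;
        }

        int main(void) {
            int  logical = omp_get_num_procs();            /* logical CPUs, i.e. hardware threads */
            long total   = 2000L * 1000 * 1000;            /* identical total work in both runs   */
            double t_half = run_with(logical / 2, total);  /* roughly "SMT off"                   */
            double t_full = run_with(logical, total);      /* every SMT thread busy               */
            printf("extra throughput from SMT: ~%.0f%%\n", (t_half / t_full - 1.0) * 100.0);
            return 0;
        }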

    As an example of when things really get "too wide" for x86, Xeon Phi processors had 4 threads per core to keep their AVX512 units busy, double the two threads per core of mainstream processors. Other "very wide" processors like IBM POWER and so on execute 4 or even 8 threads per core :eek:.

    I agree, some of this comes down to cost efficiency, in terms of the feasible size of caches, extra execution units, and so on in the silicon. However there is also a very real point of diminishing returns where the core would become too big to operate correctly if it was "too wide."
     
    Last edited: Aug 23, 2022
    GoldenTiger likes this.

  16. schmidtbag

    schmidtbag Ancient Guru

    Messages:
    8,013
    Likes Received:
    4,389
    GPU:
    Asrock 7700XT
    Yes, this is something I've criticized Intel for over many years: they kept bumping up clock speed rather than innovating (granted, they wanted to get on 10nm a lot sooner), and it became too hard to keep that up. Their architecture just isn't efficient at all at these crazy high clock speeds.
    I agree too that people put waaay too much emphasis on peak performance for everyday workloads (or even in gaming, where they want to achieve framerates their display can't render).
    The thing you are implying but never directly said is that Intel's big.LITTLE approach allows them to have peak performance in computationally expensive single-threaded tasks while otherwise maintaining optimal efficiency in everything else, which is the real selling point of E-cores. However, so long as Intel doesn't want to "glue dies together", the only way they'll ever compete with AMD's core count is with E-cores. But modern AMD cores are better overall. Intel could simply compensate by making more E-cores, but then dies are going to get really expensive again.
    Well yeah, of course they would, and they should. Nvidia is dominating the AI server market. AMD is dominating the cloud and big data markets. ARM is an obvious choice for web servers. Intel's only selling point is legacy. AVX-512 will help them in certain workloads, but it takes up a lot of die space for something most workloads don't need. So Intel needs something that can compete with the huge number of cores all of their competitors have while remaining efficient, and an all-E-core Xeon is the obvious solution. It is inevitable that AMD will do the same, and when they do, that once again puts Intel in a bit of a predicament: AMD's chiplet approach means they could fit hundreds of E-cores on a single package. If Intel doesn't come up with a more cost-effective solution, they're only going to get maybe 1 or 2 years of sales.
     
  17. winning.exe

    winning.exe Member

    Messages:
    22
    Likes Received:
    17
    GPU:
    Nvidia
    Intel has announced they are now perfectly happy to use glue :cool:. Sapphire Rapids (the upcoming server CPU) has four dies on a substrate, plus four HBM dies. Intel's newest roadmap says they are moving towards an SOC-style multi-chip-module architecture with an IO die (like AMD has now).

    Especially in the enterprise segment, Intel doesn't have to compete on core count (and many customers wouldn't want them to in the first place). Enterprise-grade software is licensed per core, and the licensing bill can dwarf the hardware: OracleDB can easily run into the millions of dollars on high-core-count systems :eek:. Even Windows Server is licensed per core; Microsoft thinks the Windows Server 2022 Datacenter installation I run on my 64-core machine should cost over $25,000 because it is licensed on a per-core basis. This is a big reason that Intel and AMD are both planning both P-core and E-core server processors (licensing fewer, more powerful cores is relatively a better deal).
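
    To put toy numbers on that last point (the per-core price below is a hypothetical placeholder, not an actual Microsoft or Oracle quote; the point is only that the bill scales with core count, not with per-core speed):

        /* license_math.c - per-core licensing arithmetic with a made-up price. */
        #include <stdio.h>

        int main(void) {
            const int    cores          = 64;     /* e.g. a single EPYC 7742 socket */
            const double price_per_core = 400.0;  /* hypothetical list price in USD */
            printf("%d cores  x $%.0f/core = $%.0f\n", cores, price_per_core, cores * price_per_core);
            printf("16 cores x $%.0f/core = $%.0f for the same software\n",
                   price_per_core, 16 * price_per_core);
            return 0;
        }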

    Intel isn't having any trouble in the server market though, despite topping out at 56 cores per socket, and it would be a huge stretch to say AMD is dominating in any enterprise segment. I'm sure Intel is perfectly comfortable with 90% market share, which doesn't show any sign of changing :p.
     
    GoldenTiger likes this.
  18. Minjin13

    Minjin13 Guest

    Messages:
    9
    Likes Received:
    3
    GPU:
    Geforce RTX 2080
    With Intel's 13th-gen Raptor Lake CPU lineup now leaked/revealed, does that mean Intel has abandoned its HEDT lineup or consolidated it into the mainstream lineup? Intel last launched an HEDT CPU in 2019 with 10th-gen Cascade Lake-X.
     
  19. TheDigitalJedi

    TheDigitalJedi Ancient Guru

    Messages:
    3,986
    Likes Received:
    3,216
    GPU:
    2X ASUS TUF 4090 OC
    From reading some of the posts in this thread and from the information I've gathered from tech journals, it seems the 13th gen was designed more for business. It will do well in games with the nice overclock boost, but the emphasis on E-cores demonstrates a focus on multitasking and multi-threaded tasks. In my line of work this is essential for efficiency. Sometimes I must retrieve areas as large as small cities to look for outages or troubles, if you will. It is an extremely detailed replica of where our outside plant equipment (OSPE) is placed. While retrieving the blueprints we must have several applications and databases open. We already have some systems with 12900K processors and enormous amounts of RAM that run much faster than our older computers.

    In 2022 we still have information being transferred from the Trunk Inventory Records Keeping System (TIRKS) due to the huge amount of data that was built within it. The transfer process is much smoother with the 12900K systems that have DDR5 RAM.

    The 13900K is coming with 8 performance cores and 16 efficient cores. In the future I see Intel increasing both the performance cores and the efficient cores to benefit gamers and companies like mine.

    Isn't it possible for E-cores to be beneficial for future gaming software?
     
  20. Krizby

    Krizby Ancient Guru

    Messages:
    3,097
    Likes Received:
    1,775
    GPU:
    Asus RTX 4090 TUF
    Well, only when games are designed around a huge number of cores can E-cores be beneficial, and Intel probably isn't funding game development as much as AMD or Nvidia do.

    I think most game developers are optimizing their games around 6-core/12-thread CPUs at the moment; 8-core/16-thread CPUs still have years before they are not enough for games.

    Well, if you are doing a whole bunch of background tasks while playing single-player games, I think E-cores can be of use :D
     
    Last edited: Aug 24, 2022
    TheDigitalJedi likes this.
