Review: Core i9 10980XE processor (Intel 18-core galore)

Discussion in 'Frontpage news' started by Hilbert Hagedoorn, Nov 25, 2019.

  1. TLD LARS

    TLD LARS Master Guru

    Messages:
    780
    Likes Received:
    366
    GPU:
    AMD 6900XT

    A lot can still be done with more cache, better instructions like AVX at full speed, an SMT boost, better-threaded software, something like asynchronous compute, built-in HBM, a built-in GPU, more memory speed, more bus speed, a lower price....
    Everything is a dead end if you look far enough into the future, and so will the silicon replacement be when it arrives, but until that happens we can enjoy what we have until the next big thing shows up.
     
  2. schmidtbag

    schmidtbag Ancient Guru

    Messages:
    8,018
    Likes Received:
    4,396
    GPU:
    Asrock 7700XT
    What you said only applies to a relatively small group of people. Catering to the minority doesn't make a company money. Most of the things you mentioned will not yield an all-around improvement in most software. Of the ones that will, either you have to compromise performance elsewhere, or they're already pushing the limits of physics. Remember, the products need to be mass-producible, so even if they could in theory be improved, that doesn't mean it'll be cost-effective to do so. That's why monolithic dies with 18+ cores are so expensive, even though it's overall a better approach than chiplets.
    Most of the things you mentioned require devs to actually implement them. Take a hard look at all the programs you run and notice how few of them actually take advantage of modern instructions or subsystems. Clear Linux is a perfect example of the immense potential software has if devs bothered to use the very things you said would help.

    So yes, there are plenty of ways you could improve x86, but none of it matters if you want to retain good performance with old software and if devs don't adapt (and most of them won't). Both Intel and AMD have tried to improve x86, and it takes years until anyone decides to take advantage of what they offer. Why do you think AMD has been so slow with AVX, for example? They know nobody is going to use it, even though it offers tremendous power.
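
    As a rough illustration of what "taking advantage of modern instructions" means in practice (a minimal sketch with arbitrary data and flags, not taken from any real benchmark): the exact same C source only ends up using AVX if it is built with the right target flags, which is the step most projects skip. Clear Linux essentially just turns flags like these on across the whole distro.
    Code:
        /* saxpy.c - trivially vectorizable loop, used only to illustrate the point.
         * Built as-is (e.g. gcc -O2 saxpy.c) an x86-64 compiler sticks to baseline
         * SSE2; built with e.g. gcc -O3 -march=native saxpy.c the same source can
         * use AVX/AVX2 if the CPU supports it. Array size and values are arbitrary. */
        #include <stdio.h>

        #define N 1000000

        static float x[N], y[N];

        void saxpy(float a, const float *restrict px, float *restrict py, int n)
        {
            for (int i = 0; i < n; i++)      /* the auto-vectorizer can turn this  */
                py[i] = a * px[i] + py[i];   /* into 128/256-bit SIMD, if allowed  */
        }

        int main(void)
        {
            for (int i = 0; i < N; i++) { x[i] = 1.0f; y[i] = 2.0f; }
            saxpy(3.0f, x, y, N);
            printf("y[0] = %f\n", y[0]);     /* expect 5.0 */
            return 0;
        }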
     
  3. fry178

    fry178 Ancient Guru

    Messages:
    2,078
    Likes Received:
    379
    GPU:
    Aorus 2080S WB
    @TLD LARS
    It doesn't even always have to be faster clocks.
    E.g. if "efficiency" (IPC) goes up, you can gain enough that you don't need higher clock speeds to see a performance increase.
    That's one reason why you don't always have to increase VRAM bandwidth vs. previous gens.
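
    A quick back-of-the-envelope version of that (the IPC and clock numbers below are made up, just to show the arithmetic): useful throughput is roughly IPC times clock, so ~10% more IPC at the same clock buys about as much as a ~10% higher clock would.
    Code:
        /* ipc_math.c - illustrative only; the IPC and clock figures are invented. */
        #include <stdio.h>

        int main(void)
        {
            double base_ipc = 2.0, base_ghz = 4.0;   /* hypothetical baseline    */
            double eff_ipc  = 2.2, eff_ghz  = 4.0;   /* +10% IPC, same clock     */

            double base_perf = base_ipc * base_ghz;  /* ~instructions per ns     */
            double eff_perf  = eff_ipc  * eff_ghz;

            printf("baseline: %.1f  improved: %.1f  gain: %.0f%%\n",
                   base_perf, eff_perf, (eff_perf / base_perf - 1.0) * 100.0);
            return 0;
        }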
     
  4. user1

    user1 Ancient Guru

    Messages:
    2,782
    Likes Received:
    1,305
    GPU:
    Mi25/IGP
    There hasn't been a "true" CISC x86 CPU for a very long time; all modern x86 CPUs use a decoder to break most instructions up into uops and fuse some instruction pairs into macro-ops. There is plenty of room for innovation on x86; the instruction set is not really a limiting factor.

    It's the patent hell that's the problem.
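
    For anyone wondering what that decode step means in practice, a tiny sketch (the C is runnable; the assembly and uop notes in the comments describe typical behaviour on recent Intel/AMD cores, not an exact trace of any particular chip):
    Code:
        /* decode_demo.c - the code runs as plain C; the comments describe how a
         * modern x86 core typically handles the instructions it compiles to. */
        #include <stdio.h>

        long sum(const long *a, long n)
        {
            long s = 0;
            for (long i = 0; i < n; i++)
                s += a[i];   /* an "add reg, [mem]"-style op is decoded into a load
                                uop plus an add uop (often kept micro-fused), while
                                the loop's cmp/jne pair is usually macro-fused into
                                a single compare-and-branch uop. */
            return s;
        }

        int main(void)
        {
            long v[4] = {1, 2, 3, 4};
            printf("%ld\n", sum(v, 4));   /* prints 10 */
            return 0;
        }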
     
    jura11 and Carfax like this.

  5. Carfax

    Carfax Ancient Guru

    Messages:
    3,972
    Likes Received:
    1,462
    GPU:
    Zotac 4090 Extreme
    Funny you should mention this, because last month Intel finally began ramping up its 10nm production to high volume for the first time.

    They've already been shipping mobile parts for several months, but now they will be producing server grade CPUs and GPUs on their 10nm+ process.

    This has been the case for several decades already. As user1 said, there are no true x86 CPUs in production. Today's variants are all hybrids.

    The Sunny Cove core (aka Ice Lake) has an 18% average increase in IPC over Skylake, and even more for FP/SIMD. Zen 3 is also rumored to have a significant IPC increase over Zen 2.

    Point is, there is plenty of performance still left on the table, and it won't stop coming anytime soon.

    Bulldozer was God-awful. They sacrificed far too much single-threaded performance in trying to make it viable against Intel's Core series.

    This is exactly what Sapphire Rapids is supposed to offer. Supposedly it will be more of a clean break from the restraints of contemporary x86-64 designs.
     
    Deleted member 213629 likes this.
  6. JethroTu11

    JethroTu11 Member

    Messages:
    43
    Likes Received:
    7
    GPU:
    GTX 1050ti
    I'm glad somebody mentioned this. I've been skeptical that people could find these CPUs for the price Intel has quoted.

    How long will it take before people can actually buy this CPU for $979? It took many weeks, if not months, before I could have bought a 3900X if I'd wanted one. If I wanted a new Threadripper or a 3950X right now, I couldn't buy one. Now I wonder: could I get a 9900K for $500 if I wanted one? I've never checked their availability or price.
     
  7. TLD LARS

    TLD LARS Master Guru

    Messages:
    780
    Likes Received:
    366
    GPU:
    AMD 6900XT
    If someone invents the next silicon replacement, it will still take years before it finds its way to the rest of the PC, just like Intel went back to an older-generation fabrication process for the chipset because of cost and machine time.
    So the CPU or GPU will just run away in frequency to give better single-core speed, or more speed in sloppy 4-8 core programs, while the silicon-based memory, chipset and storage will still limit the PC for years after the CPU changes to a new material.

    The AMD AVX problem is not 100% AMD's fault; stuff like Matlab does not even use AVX on AMD CPUs, it falls back to a code path that is about 10 years old, either out of laziness or on purpose.
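
    For context on how that fallback happens (a minimal sketch; the actual dispatch logic inside Matlab/MKL is not public): AMD chips report AVX through the same CPUID feature bit Intel chips do, so a library only lands on the old code path if it dispatches on the vendor string instead of the feature flag.
    Code:
        /* cpu_dispatch.c - shows the vendor-string vs feature-bit checks via CPUID.
         * Build with gcc/clang on x86-64; __get_cpuid comes from <cpuid.h>. */
        #include <stdio.h>
        #include <string.h>
        #include <cpuid.h>

        int main(void)
        {
            unsigned eax, ebx, ecx, edx;
            char vendor[13] = {0};

            /* Leaf 0: vendor string ("GenuineIntel" / "AuthenticAMD"). */
            if (!__get_cpuid(0, &eax, &ebx, &ecx, &edx)) return 1;
            memcpy(vendor + 0, &ebx, 4);
            memcpy(vendor + 4, &edx, 4);
            memcpy(vendor + 8, &ecx, 4);

            /* Leaf 1: ECX bit 28 = AVX (same bit on Intel and AMD; a full check
             * would also verify OSXSAVE/XGETBV, omitted in this sketch). */
            if (!__get_cpuid(1, &eax, &ebx, &ecx, &edx)) return 1;
            int has_avx = (ecx >> 28) & 1;

            printf("vendor: %s, AVX: %s\n", vendor, has_avx ? "yes" : "no");
            /* Dispatching on has_avx uses the fast path wherever it exists;
             * dispatching on the vendor string is how AMD parts end up on the
             * decade-old SSE2 path. */
            return 0;
        }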

    HBM used as on-chip cache and faster PCIe storage could remove the need for DDR memory completely.
    SLI and Crossfire could be reinvented and fixed for a lot more performance.

    If the next chip material is invented and gives us 10 GHz, developers will still be lazy about it and unable to feed the CPU.
     
    jura11 likes this.
  8. sykozis

    sykozis Ancient Guru

    Messages:
    22,492
    Likes Received:
    1,537
    GPU:
    Asus RX6700XT
    Go repeat that in an AMD GPU thread....where power draw is attacked constantly. Seems that power consumption only matters when it's an AMD product.
     
  9. schmidtbag

    schmidtbag Ancient Guru

    Messages:
    8,018
    Likes Received:
    4,396
    GPU:
    Asrock 7700XT
    Right, hence my point: there's not much that can be done about x86 that will satisfy a wide audience. Intel can probably one-up AMD, but not by a substantial margin in everyday tasks.
    It could, but I don't think too many people would be agreeable to the idea of an expensive socketed CPU with a fixed amount of memory.
    On the note of faster PCIe though: stuff like ReRAM (keyword here is "like") could eliminate the need for RAM entirely. If storage can be made as fast as modern SDRAM, there's no point in loading anything into memory; you could just run the software directly from the disk. Keep in mind the whole reason RAM exists is to quickly feed the CPU data. So, if your storage is as fast as RAM, you don't need RAM anymore.
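
    The programming model for that already exists, for what it's worth (a minimal sketch; the file name is made up, and today the page cache still stages the data in DRAM, but with storage as fast as RAM the same call would amount to direct access):
    Code:
        /* map_demo.c - read a file "in place" via mmap instead of copying it
         * into a buffer first. The file name is just an example. */
        #include <stdio.h>
        #include <fcntl.h>
        #include <sys/mman.h>
        #include <sys/stat.h>
        #include <unistd.h>

        int main(void)
        {
            int fd = open("dataset.bin", O_RDONLY);
            if (fd < 0) { perror("open"); return 1; }

            struct stat st;
            fstat(fd, &st);
            if (st.st_size == 0) { close(fd); return 0; }

            /* The file's contents become directly addressable memory. */
            const unsigned char *p = mmap(NULL, st.st_size, PROT_READ,
                                          MAP_PRIVATE, fd, 0);
            if (p == MAP_FAILED) { perror("mmap"); return 1; }

            unsigned long sum = 0;
            for (off_t i = 0; i < st.st_size; i++)
                sum += p[i];                 /* no explicit read()/load step */

            printf("checksum: %lu\n", sum);
            munmap((void *)p, st.st_size);
            close(fd);
            return 0;
        }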
    However, none of that is really an evolution of x86, just an evolution in computing. Keep in mind, binary computers have a tremendous amount of new potential, but as far as I'm concerned, x86 is realistically almost as good as it's ever going to get.
    Absolutely, but again, that's not something x86 is responsible for.
    Yes, I very much agree. However, I don't necessarily see that as a bad thing. To me, CPUs should be used for highly complex serial/linear computations. I don't care for applications becoming more multi-threaded, I think they need to be ported to OpenCL or CUDA instead. GPUs offer so much compute power and they're not that hard to program for.

    What difference does any of that make? They're still backward compatible with software from the 90s, and that's kinda the point of everything I've been saying: what will cause x86 to stagnate is an attempt to retain that backward compatibility (which also means preventing performance regressions).
    Also, I've said several times already that of course there is room to innovate and improve x86, but devs need to actually make use of it. There are dozens of instruction-set extensions available that nobody uses. Many of these instructions have actually been abandoned because nobody used them and it's too costly to maintain the hardware that implements them. rdrand is a good example of an instruction that hardly anyone uses, and it has been around for years. Apparently, only Destiny 2 and systemd used it, and we only discovered that because of how the instruction broke on Zen 2.
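
    For reference, this is roughly all it takes to use it (a minimal sketch; needs an Ivy Bridge-or-newer / Zen-class CPU and a compiler that ships <immintrin.h>), which makes the near-zero adoption all the more telling:
    Code:
        /* rdrand_demo.c - build with: gcc -mrdrnd rdrand_demo.c
         * Falls back to a non-hardware source if RDRAND keeps failing. */
        #include <stdio.h>
        #include <stdlib.h>
        #include <immintrin.h>

        /* RDRAND can transiently fail (carry flag clear), so retry a few times. */
        static int hw_random64(unsigned long long *out)
        {
            for (int i = 0; i < 10; i++)
                if (_rdrand64_step(out))
                    return 1;          /* success */
            return 0;                  /* give up; caller should fall back */
        }

        int main(void)
        {
            unsigned long long r;
            if (hw_random64(&r))
                printf("rdrand: %llu\n", r);
            else
                printf("rdrand unavailable, falling back: %d\n", rand());
            return 0;
        }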
    I feel like a broken record here. I really don't know how else to emphasize that theoretical performance improvements do absolutely nothing for most software out there, especially software that has already been released.
     
  10. schmidtbag

    schmidtbag Ancient Guru

    Messages:
    8,018
    Likes Received:
    4,396
    GPU:
    Asrock 7700XT
    I'm aware... that doesn't change my point. Intel has already started looking beyond 10nm.
    I'm talking about binary compatibility here, not the literal 80386 architecture. That being said, x86 was steadily improving up until Skylake. That's pretty much when it peaked in terms of per-core IPC.
    Considering everything else I've been saying, an 18% improvement doesn't mean very much by itself. Like I said before, we're at a point where that kind of performance means something else must be sacrificed: lower clocks, higher latency, a higher TDP, a dependence on developer implementation, a higher price tag, and so on. No, I'm not saying all of those things will happen; they're just a list of possibilities. If Intel accomplishes an overall 18% IPC improvement across a wide range of tests, feel free to come back here and rub it in my face that I was wrong.
    As for Zen 3, that's a little different. Seems to me Zen is suffering from some bottleneck and latency issues, so any IPC improvements it gets will come from reducing those issues. In other words, I don't think Zen 3 is necessarily going to get a whole lot "faster"; it's just going to get closer to reaching its true potential.
    Seems you completely ignored the reason why I brought up Bulldozer...
    Supposed to offer what? Being a clean break doesn't change my point. Zen was a clean break and today, its IPC is barely better than Intel's. I have no doubt Sapphire Rapids will offer substantial improvements, but like I said for the bajillionth time: some of those improvements will involve compromises, and some of them will need developers to actually use them. In most software, I'm sure Sapphire Rapids won't yield impressively better results.
     

  11. user1

    user1 Ancient Guru

    Messages:
    2,782
    Likes Received:
    1,305
    GPU:
    Mi25/IGP
    My point is that the cost of "legacy" instruction support is not as significant as you think, because the instructions are essentially translated into a RISC-like internal format anyway; the decoder doesn't necessarily need to change all that much, even if the core is mostly redesigned.
    Intel has even done a pure software-emulation implementation of x86 on some of its ill-fated Itanium CPUs, and that worked OK-ish.

    And you are mistaken about rdrand: it's a newer instruction, and chips older than Ivy Bridge lack it altogether. It just so happens that it was broken on Zen 2 and later patched in the AGESA. I should also mention it is by no means a very important instruction; it's really just an optimization, and depending on it is silly to begin with.

    Old instructions breaking can be a problem, but a very minor one, since if the instruction being used is super old it can be emulated quite easily on modern CPUs and you probably wouldn't notice the difference performance-wise.

    Trust me, if there is anything stifling performance in x86 land, it's the patents, not the technology.
     
  12. D3M1G0D

    D3M1G0D Guest

    Messages:
    2,068
    Likes Received:
    1,341
    GPU:
    2 x GeForce 1080 Ti
    @schmidtbag A toaster would be enough for the average person - smartphone, tablet or low-end laptop. The average Joe isn't going to benefit from a 5 GHz or 16-core CPU - that's for gamers, enthusiasts and professionals. This is what the whole "post-PC" thing was all about - the typical person doesn't need a PC anymore. If there is a benefit to improvements in x86 going forward for the average user, it would be in power efficiency - more efficient / longer-lasting laptops. Otherwise, advancements will be solely for power users and businesses.
     
    sykozis likes this.
  13. sykozis

    sykozis Ancient Guru

    Messages:
    22,492
    Likes Received:
    1,537
    GPU:
    Asus RX6700XT
    If the average user still wants a PC, they can easily get by on a Chromebook.....
     
  14. schmidtbag

    schmidtbag Ancient Guru

    Messages:
    8,018
    Likes Received:
    4,396
    GPU:
    Asrock 7700XT
    I don't get it... how do you not see the issue revolving around binary compatibility? It is everything to the vast majority of consumers out there, and even to a substantial number of servers. Vista, Windows RT, Windows CE, and all 64-bit versions of Windows prior to 7 were unpopular because of broken software compatibility. Of the things I mentioned, only Vista caused breakages strictly due to software. Even the ARM support for Windows 10 isn't so great, because the x86 compatibility layer is very CPU-taxing on an architecture not known for high performance.
    Y'know why Apple was able to get away with their transition from PPC to Intel? Because they had a competent compatibility layer that allowed people to run their old software smoothly.

    As a Linux enthusiast, I'd say the #1 reason Windows users don't switch is binary compatibility. The Linux experience is actually pretty great regardless of which CPU architecture you use, and it has compatibility layers for just about any architecture too. But Linux's cross-compatibility doesn't mean anything to Windows users, because their software isn't so "graceful".
    The takeaway from all of this is that whether or not modern AMD and Intel CPUs are "true" x86 is completely irrelevant. What matters is software compatibility, and the vast majority of userland software that people actually care about is written for x86. Intel and AMD are not in a position to break that compatibility with their CPUs, and there's very little they can do to improve performance for x86 software in a way that developers will actually use.
    Uh... how am I mistaken? Ivy Bridge has been around for several years... But you just proved my point anyway: the fact that AMD didn't even notice it was broken when they released their product shows how little it was used. You also said yourself it isn't an important optimization. That's literally the synopsis of everything I've been trying to say: Intel and AMD have plenty of room to improve their architectures, but hardly anybody is ever going to bother to use such things. Literally just 2 pieces of software were known to use rdrand after all these years. That's a pathetically small number for an architectural improvement. rdrand is not hard to use, so it's even more unrealistic to expect devs to use something more complex.
    So to reiterate:
    Adding instructions to improve a CPU is futile if devs don't use them, and 99% of the software out there proves they won't.
    Also, last I checked, the AGESA update simply disabled the instruction entirely, which forces software that depends on it to use a software-based alternative.
    Patents on what? I don't see anything Intel has that AMD doesn't, or vice versa, that would offer either of them a substantial performance gain.

    I agree with pretty much all of that. But that doesn't change the fact that x86 doesn't have much room to grow in real-world applications. Most performance enhancements for future software will most likely happen on the GPU.
     
    Last edited: Nov 29, 2019
  15. MegaFalloutFan

    MegaFalloutFan Maha Guru

    Messages:
    1,048
    Likes Received:
    203
    GPU:
    RTX4090 24Gb
    We are talking about CPUs here, not GPUs.
     

  16. sykozis

    sykozis Ancient Guru

    Messages:
    22,492
    Likes Received:
    1,537
    GPU:
    Asus RX6700XT
    If power consumption doesn't matter on a CPU, it shouldn't matter on a GPU either.
     
  17. user1

    user1 Ancient Guru

    Messages:
    2,782
    Likes Received:
    1,305
    GPU:
    Mi25/IGP
    First of all, Windows software backwards compatibility has literally nothing to do with x86. That is a completely different game; CPUs just generally don't change nearly as much as software can.

    Emulation has been done with OK performance, and especially now with recompilation techniques it's much faster than it used to be; it's no longer an order of magnitude slower. In fact, if you read about the Itaniums, you'll know that Intel's original hardware translation of IA-32 to IA-64 on the early Itaniums was actually slower than their later software emulation layer.
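
    A rough sketch of why recompilation closes the gap (a toy, invented ISA, nothing to do with the actual IA-32 layer): a plain interpreter pays fetch/decode/dispatch overhead on every single instruction, which is exactly what a dynamic recompiler removes by translating whole blocks to native code once and re-running them.
    Code:
        /* toy_interp.c - minimal interpreter loop for an invented 3-op ISA.
         * Every iteration pays fetch + decode + dispatch overhead; a dynamic
         * binary translator would emit native code for the block instead. */
        #include <stdio.h>

        enum { OP_ADDI, OP_JNZ_BACK, OP_HALT };

        typedef struct { int op, arg; } Insn;

        int main(void)
        {
            /* "Program": add -1 to r0 until it hits zero, then halt. */
            Insn prog[] = {
                { OP_ADDI,     -1 },
                { OP_JNZ_BACK,  1 },   /* jump back 1 insn while r0 != 0 */
                { OP_HALT,      0 },
            };
            long r0 = 1000000;
            int pc = 0, executed = 0;

            for (;;) {
                Insn in = prog[pc++];            /* fetch + decode ...          */
                executed++;
                switch (in.op) {                 /* ... + dispatch, every time  */
                case OP_ADDI:     r0 += in.arg; break;
                case OP_JNZ_BACK: if (r0 != 0) pc -= in.arg + 1; break;
                case OP_HALT:     printf("r0=%ld after %d interpreted insns\n",
                                         r0, executed);
                                  return 0;
                }
            }
        }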

    I don't know what time scale you operate on, but rdrand is IMO a newer instruction, since instructions are not added that frequently (once every 2 years? maybe less frequently than that for Intel lately) and often don't even get compiler support for many years, so it is recent in that sense.
    Deprecating old instructions like 3DNow! does in fact break compatibility in a significant way, but rdrand is not one of those (at least it shouldn't be) and could be emulated with little issue.

    People will use a new instruction if they need it. The AVX-512 instructions are very useful for certain workloads, so more people use them; it's that simple.
    rdrand can be useful too, but again it's the market. Frankly, anyone who wants that kind of entropy knows that relying purely on hardware generators is a security risk, so they don't use it, and I'm pretty sure the only reason Destiny 2 actually used it is that ICC is very aggressive with optimizations.
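
    For what it's worth, this is roughly what using AVX-512 by hand looks like at the source level (a minimal sketch; needs an AVX-512F-capable CPU and gcc/clang with -mavx512f, and the array contents are arbitrary):
    Code:
        /* avx512_sum.c - build with: gcc -O2 -mavx512f avx512_sum.c
         * Sums 16 floats per iteration using 512-bit registers. */
        #include <stdio.h>
        #include <immintrin.h>

        float sum_avx512(const float *a, int n)      /* n assumed multiple of 16 */
        {
            __m512 acc = _mm512_setzero_ps();
            for (int i = 0; i < n; i += 16)
                acc = _mm512_add_ps(acc, _mm512_loadu_ps(a + i));

            float lanes[16];
            _mm512_storeu_ps(lanes, acc);
            float s = 0.0f;
            for (int i = 0; i < 16; i++)             /* horizontal reduction */
                s += lanes[i];
            return s;
        }

        int main(void)
        {
            float data[64];
            for (int i = 0; i < 64; i++) data[i] = 1.0f;
            printf("sum = %f\n", sum_avx512(data, 64));   /* expect 64.0 */
            return 0;
        }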


    So what do I mean by "it is not that important"? It means that, in essence, the silicon/development burden of maintaining hardware compatibility with old x86 instructions is not a significant factor; or rather, the cost of ditching it is way higher than the cost of keeping it. By the time some "real" breakage occurs (like deprecating, say, SSE2), it probably won't matter performance-wise; that is my point.

    To stay on topic, this is the main issue: it's just not true, because again it isn't really x86 under the hood anymore, so you could completely redesign the execution pipeline and maintain x86 compatibility, as has been done several times.

    AFAIK the new AGESA does in fact fix the breakage of the rdrand instruction on Zen 2, but AMD did disable rdrand on the applicable Bulldozer APUs.

    Patents are relevant because CPUs like ARM, POWER, SPARC, x86, etc. have lots of overlap (some ARM CPUs actually have an equivalent of the Meltdown bug, for instance), so when AMD or Intel want to implement a certain feature, they need a team of lawyers to go over the implementation to make sure it doesn't violate somebody else's patent, or else license it from them; it's really a mess, frankly.
    Despite Intel and AMD's cross-licensing deal, there is a tonne of IP litigation going on all the time.


    No need to split hairs over the minor details; the point is there are bigger problems than the ISA itself holding CPUs back.
     
  18. MegaFalloutFan

    MegaFalloutFan Maha Guru

    Messages:
    1,048
    Likes Received:
    203
    GPU:
    RTX4090 24Gb
    The reason people don't like high power consumption on GPUs is that GPUs run much hotter than CPUs and heat up the whole case and even the room.
    I've never had a CPU heat up the room around me, but a GPU? Well, I've had that experience myself; I even enjoyed it during the winter.

    Also, this Intel chip's power consumption at stock is normal, much less than last year's TR2 and on par with TR3. Overclocking is optional, but even when overclocked, 580W is not bad, especially compared to last year's Threadrippers.
     
  19. schmidtbag

    schmidtbag Ancient Guru

    Messages:
    8,018
    Likes Received:
    4,396
    GPU:
    Asrock 7700XT
    So... what exactly are you arguing about at this point? Because retaining software compatibility has been my entire argument this whole time.
    I'm aware emulation can be done with decent performance. Decent emulation is why I pointed out Rosetta (Apple's method of getting PPC programs to work on Intel Macs). So, what exactly is your point here? Or is it related to rdrand being emulated? Because that doesn't really have much of an impact on CPU load - the point of rdrand was to give "better" random numbers.
    Ivy Bridge is from 2012... 7 years is a LONG time for computers, even with Intel slowing R&D thanks to Bulldozer. rdrand in particular has had compiler support since at least December 2012. So, it's definitely not new anymore. And that goes back to my original point: this is just one of dozens of instructions that nobody bothers to use. You can keep adding all the instructions you want, but performance won't improve until devs use them in their software.
    And yet, there are so many applications that could use AVX (just in general, not even AVX-512) or other nice modern instructions like the SSE4 family, but they don't. Take a look at benchmarks of Clear Linux to see how much devs are slacking (that whole distro is optimized to take advantage of various instruction sets). The amount of untapped performance left in a modern Intel or AMD CPU is absolutely insane. But how do you get devs to take this stuff seriously? Until they do, any additional instruction added to CPUs is wasted effort.
    Yes, maintaining it isn't that difficult. Preventing it from regressing without people noticing is. That's why x86 is heading toward a dead end. There's no clear path to improve it:
    * You can't depend on devs to use new instructions
    * You can't make people's existing software run slower; modern optimizations can often cause this
    * You can't break software compatibility without pissing people off
    * We're near the limits of silicon transistors
    I don't understand why you keep pointing that out. It doesn't change the underlying point. Despite how drastically different the execution pipelines are between modern Intel and AMD CPUs, they still perform roughly the same. You could also say the same about the Athlon II and Core 2. Intel could revamp the entire pipeline, but because of trying to retain x86 software compatibility, it isn't going to get a lot faster.
    Makes sense.
     
  20. user1

    user1 Ancient Guru

    Messages:
    2,782
    Likes Received:
    1,305
    GPU:
    Mi25/IGP
    I point it out because it is important. If Intel wanted to put an ARM-like core under the hood, they could; that is the point. The front end (x86 decode) isn't as burdensome as you say.

    You mistake market stagnation for a lack of progress on x86. Preventing the regression of certain instructions isn't a huge issue, and hasn't been for a long time; CPUs these days aren't nearly as picky about which instructions you use as they were in the past. New instructions are added for the people who need them; they aren't needed for generally good performance. Really, very few applications need much more than SSE2/SSE3 to get good utilization out of a modern CPU. Intel has the time to tune Clear Linux; most people don't. Code readability, modularity and developer time are just more important than pure performance. If you want that 10-20% extra performance you can always use Intel's compiler, which enables all of the unsafe optimizations, and then you can enjoy all of the debugging that comes with them.
    Fundamentally, fixing sh** code isn't Intel's or AMD's job, so I'm not sure what you're on about with "untapped potential"; pretty much nobody has the time to learn every quirk and trick needed to fully optimize for a CPU.

    Finding CPU bugs in the execution units and other core constructs has been a much bigger problem to deal with, on the other hand.

    Edit: also, don't assume people update their compilers, because they quite often don't. It is not surprising if an application runs slower because people like using GCC 4.
     
    Last edited: Nov 30, 2019
