Review: Core i9 10980XE processor (Intel 18-core galore)

Discussion in 'Frontpage news' started by Hilbert Hagedoorn, Nov 25, 2019.

  1. wavetrex

    wavetrex Master Guru

    Messages:
    690
    Likes Received:
    372
    GPU:
    Zotac GTX1080 AMP!
    Yet AMD is capable of cramming 64 cores with SMT into one single CPU, and also makes each one of them faster than the equivalent Intel cores.

    No, x86/x64 is not dead.
    Monolithic chips are. And until Intel finally gets that, they will fall further and further behind.
     
    K.S. and angelgraves13 like this.
  2. schmidtbag

    schmidtbag Ancient Guru

    Messages:
    4,573
    Likes Received:
    1,430
    GPU:
    HIS R9 290
    None of that disproves anything I just said.
    I didn't say x86 is dead, I said it's at a dead end. I basically hinted before that x86 isn't going anywhere. Intel is capable of outdoing AMD, but not by much. I think we all know Intel's next major architecture isn't going to be monolithic, because it just isn't cost effective when trying to pump out a lot of cores. But all that's going to do is just get them more cores at more affordable prices. It isn't going to give them a per-core IPC advantage, just as it didn't do for AMD.
    So my point remains:
    Individual x86 cores don't have much room to improve. Silicon transistors are almost as small as they can get. Adding more cores isn't going to make everything run faster and it is unrealistic to expect everything to adapt. x86 isn't going to die off in the foreseeable future, but its progress will stagnate very soon (which is what I meant by "dead end"). AMD's chiplet approach breathed new life into x86, but only for high-end applications. It didn't really do anything for the average person other than lower prices.
    But, perhaps it doesn't need to get any better. More and more workloads are being pushed to GPUs, and quantum computers will soon be handling the heavy tasks that traditional binary processors just aren't fit for. I see x86 to be like the internal combustion engine: decently well refined and pretty much as good as it's ever going to get, but still found everywhere and not effectively obsoleted for the majority of people.
     
    Last edited: Nov 27, 2019
  3. TLD LARS

    TLD LARS Member Guru

    Messages:
    118
    Likes Received:
    33
    GPU:
    Vega 64

    A lot can still be done: more cache, better instructions like AVX at full speed, an SMT boost, better-threaded software, something like asynchronous compute, built-in HBM, a built-in GPU, more memory speed, more bus speed, lower prices...
    Everything is a dead end if you look far enough into the future, and so will the silicon replacement be when it arrives; until then, we can enjoy what we have until the next big thing comes along.
     
  4. schmidtbag

    schmidtbag Ancient Guru

    Messages:
    4,573
    Likes Received:
    1,430
    GPU:
    HIS R9 290
    What you said only applies to a relatively small group of people. Catering to the minority doesn't make a company money. Most of the things you mentioned will not yield an all-around improvement in most software. Of the ones that will, you either have to compromise performance elsewhere or are already pushing the limits of physics. Remember, the products need to be mass-producible, so even if they could in theory be improved, it doesn't mean it'll be cost-effective to do so. That's why monolithic dies with 18+ cores are so expensive, even though they're overall a better approach than chiplets.
    Most of the things you mentioned require devs to actually implement them. Take a hard look at all the programs you run and notice how few of them actually take advantage of modern instructions or subsystems. Clear Linux is a perfect example of the immense potential software has when devs bother to use the very things you said would help.
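    To give a rough sketch of the extra work implied here (assuming GCC or Clang on x86-64, whose `__builtin_cpu_supports` builtin does the feature probing), this is the kind of runtime dispatch a developer has to write before any newer instruction set pays off:

    ```c
    #include <stdio.h>

    /* Pick a code path based on what the running CPU reports.
       __builtin_cpu_supports is a GCC/Clang builtin for x86. */
    static const char *pick_path(void) {
        __builtin_cpu_init();               /* populate CPU feature data */
        if (__builtin_cpu_supports("avx2"))
            return "AVX2 path";
        if (__builtin_cpu_supports("sse4.2"))
            return "SSE4.2 path";
        return "baseline x86-64 path";      /* what most shipped software uses */
    }

    int main(void) {
        printf("dispatch: %s\n", pick_path());
        return 0;
    }
    ```

    Every branch above needs its own tested implementation, which is exactly the effort most projects skip.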

    So yes, there are plenty of ways you could improve x86, but none of it matters if you want to retain good performance with old software and devs don't adapt (and most of them won't). Both Intel and AMD have tried to improve x86, and it takes years before anyone decides to take advantage of what they offer. Why do you think AMD has been so slow with AVX, for example? They know nobody is going to use it, even though it offers tremendous power.
     

  5. fry178

    fry178 Maha Guru

    Messages:
    1,252
    Likes Received:
    148
    GPU:
    MSI 1080 X@2GHz
    @TLD LARS
    Doesn't even always have to be faster clocks.
    E.g. if "efficiency" goes up, you can gain enough not to have to deal with higher speeds to see a perf increase.
    That's one reason why you don't always have to increase VRAM bandwidth vs. previous gens.
     
  6. user1

    user1 Maha Guru

    Messages:
    1,441
    Likes Received:
    472
    GPU:
    hd 6870
    There hasn't been a "true" CISC x86 CPU for a very long time; all modern x86 CPUs use a decoder to break most instructions up into uops and combine some instructions into macro-ops. There is plenty of room for innovation on x86; the instruction set is not really a limiting factor.

    It's the patent hell that's the problem.
     
    jura11 and Carfax like this.
  7. Carfax

    Carfax Ancient Guru

    Messages:
    2,585
    Likes Received:
    280
    GPU:
    NVidia Titan Xp
    Funny you should mention this, because last month Intel finally began ramping up its 10nm production to high volume for the first time.

    They've already been shipping mobile parts for several months, but now they will be producing server-grade CPUs and GPUs on their 10nm+ process.

    This has been the case for several decades already. As user1 said, there are no true x86 CPUs in production. Today's variants are all hybrids.

    The Sunny Cove core, aka Ice Lake, has an 18% average IPC increase over Skylake, and even more for FP/SIMD. Zen 3 is also rumored to have a significant IPC increase over Zen 2.

    Point is, there is plenty of performance left on the table, and it won't be stopping anytime soon.

    Bulldozer was God-awful. They sacrificed far too much single-thread performance in trying to make it viable against Intel's Core series.

    This is exactly what Sapphire Rapids is supposed to offer. Supposedly it will be more of a clean break from the restraints of contemporary x86-64 designs.
     
    K.S. likes this.
  8. JethroTu11

    JethroTu11 Member

    Messages:
    26
    Likes Received:
    6
    GPU:
    GTX 1050ti
    I'm glad somebody mentioned this. I've been skeptical that people will be able to find these CPUs at the price Intel has quoted.

    How long will it take before people can actually buy this CPU for $979? It took many weeks, if not months, before I could have bought a 3900X if I wanted one. If I wanted a new Threadripper or a 3950X, I couldn't buy one right now. Now I wonder: could I get a 9900K for $500 if I wanted one? I never checked their availability or price.
     
  9. TLD LARS

    TLD LARS Member Guru

    Messages:
    118
    Likes Received:
    33
    GPU:
    Vega 64
    If someone invents the next silicon replacement, it will still take years before it finds its way into the rest of the PC, just like Intel went back to an older-generation fabrication process for the chipset because of cost and machine time.
    So the CPU or GPU will just run away in frequency to give better single-core performance or more speed in sloppy 4-8 core programs, while the silicon memory, chipset, and storage will still limit the PC for years after the CPU changes to a new material.

    The AMD AVX problem is not 100% AMD's fault; stuff like Matlab does not even support AVX on AMD CPUs and reverts to a ten-year-old code path, either out of laziness or on purpose.

    HBM used as on-chip cache, plus faster PCIe storage, could remove the need for DDR memory completely.
    SLI and Crossfire could be reinvented and fixed for a lot more performance.

    If the next chip material is invented and gives us 10 GHz, developers will still be lazy about it and unable to feed the CPU.
     
    jura11 likes this.
  10. sykozis

    sykozis Ancient Guru

    Messages:
    21,100
    Likes Received:
    692
    GPU:
    MSI RX5700
    Go repeat that in an AMD GPU thread....where power draw is attacked constantly. Seems that power consumption only matters when it's an AMD product.
     
    K.S. and carnivore like this.

  11. schmidtbag

    schmidtbag Ancient Guru

    Messages:
    4,573
    Likes Received:
    1,430
    GPU:
    HIS R9 290
    Right, hence my point: there's not much that can be done about x86 that will satisfy a wide audience. Intel can probably one-up AMD, but not by a substantial margin in everyday tasks.
    It could, but I don't think there will be many people agreeable to the idea of an expensive socketed CPU with a fixed amount of memory.
    On the note of faster PCIe though: stuff like ReRAM (keyword here is "like") could eliminate the need for RAM entirely. If storage can be made as fast as modern SDRAM, there's no point in loading anything into memory; you could just run the software directly from the disk. Keep in mind the whole reason RAM exists is to quickly feed the CPU data. So, if your storage is as fast as RAM, you don't need RAM anymore.
    However, none of that is really an evolution of x86, just an evolution in computing. Keep in mind, binary computers have a tremendous amount of new potential, but as far as I'm concerned, x86 is realistically almost as good as it's ever going to get.
    Absolutely, but again, that's not something x86 is responsible for.
    Yes, I very much agree. However, I don't necessarily see that as a bad thing. To me, CPUs should be used for highly complex serial/linear computations. I don't care for applications becoming more multi-threaded, I think they need to be ported to OpenCL or CUDA instead. GPUs offer so much compute power and they're not that hard to program for.

    What difference does any of that make? They're still backward compatible with software from the '90s, and that's kinda the point of everything I've been saying: what will cause x86 to stagnate is the attempt to retain that backward compatibility (which also means preventing performance regressions).
    Also, I've said several times already that of course there is room to innovate and improve x86, but devs need to actually implement those improvements. There are dozens of instruction sets available that nobody uses. Many of those instructions have actually been abandoned because nobody used them and it's too costly to maintain the hardware that implements them. rdrand is a good example of an instruction that hardly anyone uses, even though it has been around for years. Apparently only Destiny 2 and systemd used it, and we only discovered that because of how the instruction broke on Zen 2.
    I feel like a broken record here. I really don't know how else to emphasize that theoretical performance improvements do absolutely nothing for most software out there, especially software that has already been released.
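    For what it's worth, using rdrand safely looks roughly like this sketch (the compile-time guard means it builds even without `-mrdrnd`; the PRNG fallback is purely illustrative, not what any particular program ships):

    ```c
    #include <stdint.h>
    #include <stdio.h>
    #include <stdlib.h>
    #if defined(__RDRND__)
    #include <immintrin.h>   /* _rdrand64_step intrinsic */
    #endif

    /* Return 64 random bits: hardware RDRAND when compiled for it,
       otherwise a plain software PRNG -- the kind of fallback software
       needed once the instruction misbehaved on some CPUs. */
    static uint64_t random64(void) {
    #if defined(__RDRND__)
        unsigned long long v;
        if (_rdrand64_step(&v))          /* returns 1 when a value was delivered */
            return (uint64_t)v;
    #endif
        return ((uint64_t)rand() << 32) ^ (uint64_t)rand();
    }

    int main(void) {
        printf("%llu\n", (unsigned long long)random64());
        return 0;
    }
    ```

    The point stands either way: an application needs the fallback path regardless, which is part of why so few bothered with the instruction at all.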
     
  12. schmidtbag

    schmidtbag Ancient Guru

    Messages:
    4,573
    Likes Received:
    1,430
    GPU:
    HIS R9 290
    I'm aware... that doesn't change my point. Intel has already started looking beyond 10nm.
    I'm talking about binary compatibility here, not the literal 80386 architecture. That being said, x86 was steadily improving up until Skylake. That's pretty much when it peaked in terms of per-core IPC.
    Considering everything else I've been saying, an 18% improvement doesn't mean much by itself. Like I said before, we're at a point where that kind of performance gain means something else must be sacrificed: lower clocks, higher latency, higher TDP, required developer implementation, a higher price tag, and so on. No, I'm not saying all of those things will happen; they're just a list of possibilities. If Intel accomplishes an overall 18% IPC improvement across a wide range of tests, feel free to come back here and rub it in my face that I was wrong.
    As for Zen3, that's a little different. Seems to me Zen is suffering some bottleneck and latency issues, so any IPC improvements it gets will be caused by reducing those issues. In other words, I don't think Zen3 is necessarily going to get a whole lot "faster", it's just going to be closer to reaching its true potential.
    Seems you completely ignored the reason why I brought up Bulldozer...
    Supposed to offer what? Being a clean break doesn't change my point. Zen was a clean break, and today its IPC is barely better than Intel's. I have no doubt Sapphire Rapids will offer substantial improvements, but like I said for the bajillionth time: some of those improvements will involve compromises, and some will need developers to actually use them. In most software, I'm sure Sapphire Rapids won't yield impressively better results.
     
  13. user1

    user1 Maha Guru

    Messages:
    1,441
    Likes Received:
    472
    GPU:
    hd 6870
    My point is that the cost of "legacy" instruction support is not as important as you think, because the instructions are essentially translated to RISC-like micro-ops anyway; the decoder doesn't necessarily need to change all that much, even if the core is mostly redesigned.
    Intel has even done a pure software-emulation implementation of x86 on some of its ill-fated Itanium CPUs, and that worked OK-ish.

    And you are mistaken about rdrand: it's a newer instruction, and chips older than Ivy Bridge lack it altogether. It just so happens that it was broken on Zen 2 and later patched in the AGESA. I should also mention it is by no means a very important instruction; it's really just an optimization, and depending on it is silly to begin with.

    Old instructions breaking can be a problem, but a very minor one, since if the instruction being used is super old, it can be emulated quite easily on modern CPUs, and you probably wouldn't notice the difference performance-wise.

    Trust me, if there is anything stifling performance in x86 land, it's the patents, not the technology.
     
  14. D3M1G0D

    D3M1G0D Ancient Guru

    Messages:
    1,994
    Likes Received:
    1,274
    GPU:
    2 x GeForce 1080 Ti
    @schmidtbag A toaster would be enough for the average person - smartphone, tablet or low-end laptop. The average Joe isn't going to benefit from a 5 GHz or 16-core CPU - that's for gamers, enthusiasts and professionals. This is what the whole "post-PC" thing was all about - the typical person doesn't need a PC anymore. If there is a benefit to improvements in x86 going forward for the average user, it would be in power efficiency - more efficient / longer-lasting laptops. Otherwise, advancements will be solely for power users and businesses.
     
    sykozis likes this.
  15. sykozis

    sykozis Ancient Guru

    Messages:
    21,100
    Likes Received:
    692
    GPU:
    MSI RX5700
    If the average user still wants a PC, they can easily get by on a Chromebook.....
     

  16. schmidtbag

    schmidtbag Ancient Guru

    Messages:
    4,573
    Likes Received:
    1,430
    GPU:
    HIS R9 290
    I don't get it... how do you not see the issue with binary compatibility? It means everything to the vast majority of consumers out there, and even to a substantial number of servers. Vista, Windows RT, Windows CE, and all 64-bit versions of Windows prior to 7 were unpopular because of broken software compatibility. Of those, only Vista caused breakage strictly due to software. Even the ARM support in Windows 10 isn't so great, because the x86 compatibility layer is very CPU-taxing on an architecture not known for high performance.
    Y'know why Apple was able to get away with its transition from PPC to Intel? Because it had a competent compatibility layer that let people run their old software smoothly.

    As a Linux enthusiast, pretty much the #1 reason why Windows users don't switch is because of binary compatibility. A Linux experience is actually pretty great regardless of which CPU architecture you use, and it has compatibility layers for just about any architecture too. But Linux's cross-compatibility doesn't mean anything to Windows users, because their software isn't so "graceful".
    The takeaway from all of this is that whether or not modern AMD and Intel CPUs are "true" x86 is completely irrelevant. What matters is software compatibility, and the vast majority of userland software that people actually care about is for x86. Intel and AMD are not in a position to break that compatibility with their CPUs, and there's very little they can do to improve performance for x86 software in a way that developers will actually use.
    Uh... how am I mistaken? Ivy Bridge has been around for several years... But you just proved my point anyway: the fact that AMD didn't even notice it was broken when they released their product shows how little it was used. You also said yourself it isn't an important optimization. That's literally the synopsis of everything I've been trying to say: Intel and AMD have plenty of room to improve their architectures, but hardly anybody is ever going to bother to use such improvements. Literally just two pieces of software were known to use rdrand after all these years. That's a pathetically small number for an architectural feature. rdrand is not hard to use, so it's even more unrealistic to expect devs to adopt something more complex.
    So to reiterate:
    Adding instructions is futile if devs don't use them, and 99% of the software out there proves they won't.
    Also, last I checked, the AGESA update simply disabled the instruction entirely, which forces software that depends on it to use a software-based alternative.
    Patents on what? I don't see anything Intel has that AMD doesn't, or vice versa, that would offer either of them a substantial performance gain.

    I agree with pretty much all of that. But that doesn't change the fact that x86 doesn't have much room to grow in real-world applications. Most performance enhancements for future software will most likely be done on a GPU.
     
    Last edited: Nov 29, 2019
  17. MegaFalloutFan

    MegaFalloutFan Master Guru

    Messages:
    695
    Likes Received:
    87
    GPU:
    RTX 2080Ti 11Gb
    We are talking about CPUs here, not GPUs.
     
  18. sykozis

    sykozis Ancient Guru

    Messages:
    21,100
    Likes Received:
    692
    GPU:
    MSI RX5700
    If power consumption doesn't matter on a CPU, it shouldn't matter on a GPU either.
     
    XS621, K.S., Undying and 2 others like this.
  19. user1

    user1 Maha Guru

    Messages:
    1,441
    Likes Received:
    472
    GPU:
    hd 6870
    First of all, Windows software backward compatibility has literally nothing to do with x86. That is a completely different game; CPUs just generally don't change nearly as much as software does.

    Emulation has been done with OK performance, and especially now with recompilation techniques it's much faster than it used to be; it's no longer an order of magnitude slower. In fact, if you read about the Itaniums, you know that Intel's original hardware translation of IA-32 to IA-64 on the early Itaniums was actually slower than their later software emulation layer.

    I don't know what time scale you operate on, but rdrand is IMO a newer instruction, since instructions are not added that frequently (once every two years? maybe less often than that for Intel lately) and often don't even get compiler support for many years, so it is recent in this sense.
    Deprecating old instructions like 3DNow! does in fact break compatibility in a significant way, but rdrand is not one of those (at least it shouldn't be) and could be emulated with little issue.

    People will use a new instruction if they need it; the AVX-512 instructions are very useful for certain workloads, so more people use them. It's that simple.
    rdrand can be useful too, but again, it's the market: frankly, anyone who wants that kind of entropy knows that hardware generators are a security risk, so they don't use it, and I'm pretty sure the only reason Destiny 2 actually used it is that ICC is very aggressive with optimizations.
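    As a concrete (illustrative) example of the dual-path code AVX-512 demands, here is a sketch that sums floats with the 512-bit intrinsics when the compiler targets AVX-512F, and with plain scalar code otherwise; the function name is made up for the example:

    ```c
    #include <stddef.h>
    #include <stdio.h>
    #ifdef __AVX512F__
    #include <immintrin.h>   /* _mm512_* intrinsics */
    #endif

    /* Sum an array of floats. The AVX-512 path processes 16 lanes per
       iteration; the scalar path keeps the code working everywhere. */
    static float sum_floats(const float *a, size_t n) {
    #ifdef __AVX512F__
        __m512 acc = _mm512_setzero_ps();
        size_t i = 0;
        for (; i + 16 <= n; i += 16)
            acc = _mm512_add_ps(acc, _mm512_loadu_ps(a + i));
        float total = _mm512_reduce_add_ps(acc);
        for (; i < n; i++)           /* handle the tail scalar-wise */
            total += a[i];
        return total;
    #else
        float total = 0.0f;
        for (size_t i = 0; i < n; i++)
            total += a[i];
        return total;
    #endif
    }

    int main(void) {
        float data[5] = {1.0f, 2.0f, 3.0f, 4.0f, 5.0f};
        printf("%.1f\n", sum_floats(data, 5));
        return 0;
    }
    ```

    Both paths have to be written, tested, and maintained, which is exactly why only workloads that really need the throughput bother.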


    So what do I mean by "it is not that important"? In essence, the "silicon/development burden" of maintaining hardware compatibility with old x86 instructions is not a significant factor; or rather, the cost of ditching it is way higher than the cost of keeping it. By the time some "real" breakage occurs (like deprecating, say, SSE2), it probably won't matter performance-wise. That's my point.

    To stay on topic, this is the main issue: it's just not true, because again, it's not really x86 anymore, so you could completely redesign the execution pipeline and maintain x86 compatibility, as has been done several times.

    AFAIK the new AGESA does in fact fix the breakage of the rdrand instruction on Zen 2, though AMD did disable rdrand on the applicable Bulldozer APUs.

    Patents are relevant because CPUs like ARM, POWER, SPARC, x86, etc. have lots of overlap (some ARM CPUs actually have an equivalent of the Meltdown bug, for instance), so when AMD or Intel want to implement a certain feature, they need a team of lawyers to go over the implementation to make sure it doesn't violate somebody else's patent, or else license it from them. It's really a mess, frankly.
    Despite Intel and AMD's cross-licensing deal, there is a ton of litigation over IP going on all the time.


    No need to split hairs over the minor details; the point is that there are bigger problems than the ISA itself holding CPUs back.
     
  20. MegaFalloutFan

    MegaFalloutFan Master Guru

    Messages:
    695
    Likes Received:
    87
    GPU:
    RTX 2080Ti 11Gb
    The reason people don't like high power consumption on GPUs is that GPUs run much hotter than CPUs and heat up the whole case and even the room.
    I've never heard of a CPU heating up the room around me, but a GPU? Well, I've had that experience myself; I even enjoyed it during the winter.

    Also, this Intel chip's power consumption at stock is normal, much less than last year's TR2 and on par with TR3. Overclocking is optional, but even when overclocked, 580W is not bad, especially compared to last year's Threadrippers.
     
