Apple M1 chip outperforms Intel Core i7-11700K in PassMark (single-thread)

Discussion in 'Frontpage news' started by Hilbert Hagedoorn, Mar 26, 2021.

  1. Strijker

    Strijker Active Member

    Messages:
    52
    Likes Received:
    12
    GPU:
    Gigabyte 3060Ti
    Slight note: single (& multi) core performance of the AMD 5xxx processors got slightly better after a BIOS update (where the cache speeds work much faster).
    But damn, that new M1 chip is impressive. Wish we had that chip for PCs (RIP Intel & AMD then, most likely). I wonder if, with kickass cooling & more power, you could get it even higher...
     
    Last edited: Mar 26, 2021
  2. schmidtbag

    schmidtbag Ancient Guru

    Messages:
    7,974
    Likes Received:
    4,342
    GPU:
    Asrock 7700XT
    A trend in what, specifically?
    These days, computer hardware is becoming less and less interesting. The days of overclocking are nearing an end (now it's basically just providing a cooling solution capable of maintaining boost clocks). All motherboards are pretty much the same thing with ever-so-slightly different variations, and even then, they're mostly just black with RGB heatsinks. I wouldn't be surprised if in a few years, all motherboards come with soldered-on CPUs.

    It wouldn't matter if the chip came to PCs - most of the performance enhancements are in Apple's software. The CPU itself is otherwise mediocre.
    ARM chips are also notorious for overclocking poorly.
     
  3. Strijker

    Strijker Active Member

    Messages:
    52
    Likes Received:
    12
    GPU:
    Gigabyte 3060Ti
    Really, just software? I don't buy that part. C'mon, mediocre? It's amazing. I'd rather bash Apple whenever I can, but right now I just can't. MS could make Windows run on ARM CPUs; they already made a build for that recently, if I'm not mistaken. But yeah, all the other software would have to be altered for that as well, though if they take that road it's possible. Could take some time, but Apple is definitely more hardcore about big changes.

    Yeah, ARM chips are mostly in phones, with no good coolers either, and Apple's main concern is silence over heat: let it burn & throttle, but keep it silent (if possible), going by their past history. But this thing is 15 watts.
     
    Last edited: Mar 26, 2021
  4. tsunami231

    tsunami231 Ancient Guru

    Messages:
    14,725
    Likes Received:
    1,854
    GPU:
    EVGA 1070Ti Black
    I hope this leads to desktop parts that use 15 watts with that single-core performance; it would be amazing if it could be applied to multi-core too.

    Gonna hope in one hand and crap in the other and see which fills up first :rolleyes:
     

  5. schmidtbag

    schmidtbag Ancient Guru

    Messages:
    7,974
    Likes Received:
    4,342
    GPU:
    Asrock 7700XT
    Yes, mostly software. I assume Apple basically just copied the instructions from Intel that they actually used, excluded the rest, and ended up with a chip that could do pretty much the same thing at 10% the wattage.
    Software is absolutely critical to how well something performs. I mentioned Clear Linux earlier as a real-world example of how much performance you can squeeze into a CPU if you actually care to optimize it. Apple's closed ecosystem makes it much easier for them to optimize things, because they don't have to retain compatibility with a wide array of platforms.
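
    As a rough illustration of the kind of tuning meant here (my own sketch - generic GCC flags, not Clear Linux's actual build recipe), the same C loop can get noticeably faster just by letting the compiler target the exact CPU it runs on:

        /* saxpy.c - trivial hot loop; how it's compiled matters a lot.
         *
         * Baseline build:             gcc -O2 saxpy.c -o saxpy
         * Tuned build (illustrative): gcc -O3 -march=native -funroll-loops saxpy.c -o saxpy
         *
         * With -march=native the compiler may auto-vectorize the loop using
         * whatever SIMD the local CPU supports (SSE/AVX), which is roughly the
         * per-target tuning a distribution like Clear Linux applies across its
         * whole package set.
         */
        #include <stdio.h>
        #include <stddef.h>

        void saxpy(float a, const float *x, float *y, size_t n)
        {
            for (size_t i = 0; i < n; i++)
                y[i] = a * x[i] + y[i];   /* prime candidate for SIMD auto-vectorization */
        }

        int main(void)
        {
            float x[8] = {1, 2, 3, 4, 5, 6, 7, 8};
            float y[8] = {0};
            saxpy(2.0f, x, y, 8);
            printf("y[7] = %f\n", y[7]);  /* expect 16.0 */
            return 0;
        }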

    Windows does run on ARM. I'm using the Lenovo C630 as I write this - an 8-core ARM-based laptop that comes with Windows 10. Windows gets the job done but Linux has better software support, and Mac is better optimized.
    EDIT:
    Windows also has a compatibility layer to run x86 software, but it's basically just emulation and runs like crap. Mac's Rosetta 2 seems to go a lot further, where you're not going to get native-level performance but it isn't horribly slow either. Linux takes a similar route to Windows but the driver layer is less complex so it seems to take less of a performance hit.

    Again, the M1 isn't any ordinary ARM chip. Even if you port Windows 10 to the new M1 Macs (which in theory should be possible, Linux is already bootable on them), you're not going to get the same level of performance, because Windows isn't built to be optimized.
     
    Last edited: Mar 26, 2021
    HandR likes this.
  6. PrMinisterGR

    PrMinisterGR Ancient Guru

    Messages:
    8,125
    Likes Received:
    969
    GPU:
    Inno3D RTX 3090
    A beefed-up M1 Air is probably the best laptop I've used. Really responsive, zero lag on pretty much anything, ten-plus hours of battery life.
    It also feels much faster than the 2020 13" Air, on clean Big Sur installations. They have definitely done things with I/O on it, and as a package it's great.
    A colleague has the M1 Mini at home, and he only praises it.

    The M1 is anything but mediocre. If anything, macOS is actually slower than Windows.
     
    moo100times and Undying like this.
  7. OldManRiver

    OldManRiver Guest

    Messages:
    44
    Likes Received:
    23
    GPU:
    Nvidia
    Pretty much every modern processor is internally RISC with a front-end CISC decoder. Surprising to see so many believe they are still primarily CISC based. That has not been true for a long time.
     
  8. Raserian

    Raserian Master Guru

    Messages:
    479
    Likes Received:
    344
    GPU:
    1060
    Care to explain more? I think that x86 has not really changed all that much since its invention and remains a CISC system.
     
  9. Astyanax

    Astyanax Ancient Guru

    Messages:
    17,011
    Likes Received:
    7,352
    GPU:
    GTX 1080ti

    To the user, x86 architecture is *very* CISC. Inside a chip it can be a different story.

    A modern high-end Intel chip takes those CISC instructions and translates (decodes) them down into an internal RISC-like instruction set, executes those in parallel, and can often "execute" (retire is Intel's term for when an instruction completes execution) as many as 4 at a time (that was what they could do in 2015 on Haswell-class machines).

    Today, Intel and AMD x86 processors do not execute x86 directly; its instructions get decoded and translated into micro-instructions in the processor frontend, and the backend that executes those micro-instructions looks a lot more like a RISC processor.
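
    As a rough sketch of what that decode step looks like (my own illustration - the real micro-ops vary by core and aren't published in this exact form), one "fat" x86 read-modify-write instruction turns into several simple, RISC-like micro-ops the backend can schedule independently:

        /* One line of C... */
        void bump(int *counter, int delta)
        {
            *counter += delta;   /* gcc -O2, x86-64 SysV ABI: typically "add %esi, (%rdi)" - one CISC instruction */
        }

        /* ...which the frontend decoder splits into micro-ops roughly like:
         *
         *   uop1: load   tmp       <- mem[rdi]     ; read the memory operand
         *   uop2: add    tmp       <- tmp + esi    ; do the arithmetic
         *   uop3: store  mem[rdi]  <- tmp          ; write the result back
         *
         * Each micro-op is simple and fixed-format, so the out-of-order backend
         * can schedule and retire them much like a RISC machine would - which is
         * the point about modern x86 internals made above.
         */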
     
    Last edited: Mar 27, 2021
    geogan and Richard Nutman like this.
  10. fredgml7

    fredgml7 Master Guru

    Messages:
    242
    Likes Received:
    87
    GPU:
    Sapphire RX 7600
    Specifically, SoCs (which implies parts that aren't modular the way they are nowadays on desktops) and non-x86-like processors.
     
    Last edited: Mar 27, 2021

  11. schmidtbag

    schmidtbag Ancient Guru

    Messages:
    7,974
    Likes Received:
    4,342
    GPU:
    Asrock 7700XT
    You say that as though it changes anything discussed here to any significant degree, because as Astyanax pointed out, the end result of many modern architectures is something that is very much CISC.
    By your logic, that's like saying "seaweed isn't a plant, it's a protist, so it's not a vegetable". The things that separate seaweeds from being plants don't separate them from being vegetables.
    All that being said, MIPS is a true RISC architecture that is still fairly modern. Even today, the floating point instructions aren't integral to the architecture, and yet, MIPS can still run a full OS.

    Desktop and laptop PCs have become more and more of a SoC. For one big example, there is no longer a separate northbridge and southbridge. I wouldn't be surprised if within the next decade, there will no longer be a discrete motherboard chipset anymore. I think it'll still be a long while until RAM becomes fully integrated, like it is in many ARM platforms.
    As for non-x86 processors, I think that's very possible. Most tablets run on ARM and have obsoleted the need for desktops or laptops for most people. Apple is moving away from x86. Many tools we use have become cloud-based, where it doesn't matter what architecture you use. The Chinese government is trying to move to MIPS. For Linux users, there's rising interest in POWER, ARM, and RISC-V. MS's 2nd attempt at ARM is actually usable, and I'm sure they're just trying to prepare for the future in case ARM becomes a serious contender.
    So yeah, I think it's very possible x86 may lose quite a lot of popularity. It will basically just remain popular with those who want raw CPU processing power, and that's a dying trend thanks to things like OpenCL and CUDA.
     
    fredgml7 likes this.
  12. HARDRESET

    HARDRESET Master Guru

    Messages:
    890
    Likes Received:
    417
    GPU:
    4090 ZOTAEA /1080Ti
    Magic power of ASUS DARK HERO DOCS (attached screenshot: Capture54377.PNG)
     

  13. Stormyandcold

    Stormyandcold Ancient Guru

    Messages:
    5,872
    Likes Received:
    446
    GPU:
    RTX3080ti Founders
    I've seen loads of videos about the M1, in previews and post-release. As a home recording enthusiast, it's very tempting, especially as more software is starting to support it (although there's still a while to go yet). I'm sure a year from now it will be very compelling. It runs emulated Windows apps pretty decently too. I'll definitely be watching its progress in the coming years.
     
  14. OldManRiver

    OldManRiver Guest

    Messages:
    44
    Likes Received:
    23
    GPU:
    Nvidia
    You meant RISC.
    I said that because of ridiculous backward claims like that. I think it's funny that Sora, whom you cite, for once got it more or less correct - clearly based on my post. You got it wrong. Look elsewhere to incite conflict. Thanks.
     
    Last edited: Mar 27, 2021
  15. rl66

    rl66 Ancient Guru

    Messages:
    3,924
    Likes Received:
    839
    GPU:
    Sapphire RX 6700 XT
    Yes, most benchmarks and programs.
    Everyone was sceptical about it (don't forget that the 1st version is a tablet CPU), but the more cores they put in it for computers, the more I like it...
    The good news is that they will make versions with even more cores and more power.
    The bad point is that it is Apple exclusive... :(
     
    Undying and PrMinisterGR like this.

  16. PrMinisterGR

    PrMinisterGR Ancient Guru

    Messages:
    8,125
    Likes Received:
    969
    GPU:
    Inno3D RTX 3090
    All this talk about CISC and RISC is completely irrelevant. All modern CPUs just translate incoming command streams to whatever they use internally.

    It's really a pity it's an Apple exclusive. I can even see Apple going to RISC-V in 20 years, just so as not to be at the mercy of Nvidia.
     
  17. schmidtbag

    schmidtbag Ancient Guru

    Messages:
    7,974
    Likes Received:
    4,342
    GPU:
    Asrock 7700XT
    I meant what I said.
    Who the hell is Sora?
    How can I be "more or less correct" while also getting it wrong?
    You're the one getting pedantic about what is actually RISC. If you don't want to incite conflict, you're barking up the wrong tree.
     
  18. OldManRiver

    OldManRiver Guest

    Messages:
    44
    Likes Received:
    23
    GPU:
    Nvidia
    You're still arguing with me because? "More or less correct" was not addressed to you. I am not arguing about "what is RISC" at all. Learn to read. You're still seeking conflict, duh.

    Well no, it is not actually irrelevant what the internal processor structure is. If it were, there would be no differences in uarch... 'cause it would be irrelevant.
    Man... poor quality posting around here.
     
  19. Venix

    Venix Ancient Guru

    Messages:
    3,440
    Likes Received:
    1,944
    GPU:
    Rtx 4070 super
    May I suggest you scream at them to get off your lawn?
     
    schmidtbag and moo100times like this.
  20. kapu

    kapu Ancient Guru

    Messages:
    5,418
    Likes Received:
    802
    GPU:
    Radeon 7800XT
    I'd say with at least Zen 2/3 there is a lot of headroom for OC/UV and RAM tweaking - much more than there ever was on the Intel side. Was quite fun for me :) Zen 3 especially is the most tweakable CPU in recent history.
     
