Intel processors: Comet Lake and Elkhart Lake in 2020 (roadmap)

Discussion in 'Frontpage news' started by Hilbert Hagedoorn, Apr 2, 2019.

  1. Koniakki

    Koniakki Ancient Guru

    Messages:
    2,843
    Likes Received:
    443
    GPU:
    ZOTAC GTX 1080Ti FE
    /offtopic

    GEEZUS!!! And here I thought I had too many fans in my TT X9 but then I looked and saw you have a CaseLabs STH10. Ok, 47 fans make sense now! That case is pure pr0n!
     
  2. Petr V

    Petr V Master Guru

    Messages:
    350
    Likes Received:
    106
    GPU:
    Gtx over 9000
    Horrible.
     
  3. TLD LARS

    TLD LARS Member Guru

    Messages:
    197
    Likes Received:
    58
    GPU:
    Asus Vega 64 DIY

If I ever needed that much cooling I would hook the PC up to my house water heater, for a practically endless supply of water for the watercooling loop.

This monster PC is exactly what AMD wants to replace: a 32-core system (or the 64-core one coming), with 4 Quadros or Instinct cards, would be faster than this monster in professional workloads.
    If you have 4 Titans I hope you do professional workloads, because the game support would be very hit or miss.
     
    patteSatan and DeskStar like this.
  4. nevcairiel

    nevcairiel Master Guru

    Messages:
    749
    Likes Received:
    287
    GPU:
    3090
That approach doesn't work so easily with GPUs. GPUs may have thousands of cores, but they need to work very closely together, so splitting them across different chips would present huge bottlenecks.
    That's what you see with CrossFire or SLI. If they can't figure out how to do that automatically, without the application knowing, it won't work. And that's a huge problem for them right now.
     

  5. schmidtbag

    schmidtbag Ancient Guru

    Messages:
    5,821
    Likes Received:
    2,243
    GPU:
    HIS R9 290
I see why you're saying that, but I don't think it's as bad as you might think. There's a slew of reasons why Xfire and SLI didn't work, such as:
    * Consistency: Both GPUs needed to be as similar as possible, which was a problem when so many AIB partners made their own adjustments to the hardware. This becomes an even greater problem when one GPU is overheating because the other one is sitting just below it.
    * Bandwidth: there's just too much data to communicate between the GPUs over PCIe or the SLI link. This is why NVLink is basically a whole discrete set of PCIe lanes.
    * Latency: there's a lot of wasted time synchronizing the GPUs
    * Software: it was just too cumbersome to set up. Although you could force-enable multi-GPU setups and yield overall positive results, this required some manual tweaking, which most people didn't know how to do properly.

Back in the days of GPUs like the R9 295X2 or the GTX 690, that was a step in the right direction, but it wasn't good enough, because it was basically just 2 separate GPUs with separate VRAM slapped onto the same board.
    But I believe GPUs could be designed with something like Infinity Fabric in mind, where you have a central hub that does all the syncing and relays data between each GPU die. There's no "master/primary" GPU, the memory isn't split between each die, and software should be able to handle it as though it were just 1 giant monolithic GPU.

    TL;DR: what AMD did with Ryzen wasn't just (in the words of Intel) gluing a couple dies together and calling it a day. There's a backbone that moderates everything so it runs seamlessly regardless of which core(s) you're using. The same principle can be used with GPUs.
     
    patteSatan likes this.
  6. DeskStar

    DeskStar Maha Guru

    Messages:
    1,159
    Likes Received:
    176
    GPU:
    EVGA 2080Ti/3090FTW
Too many filters would be needed to do such a thing, and I wouldn't want to restrict flow to the rest of the house... HA!!! Not to mention one needs biocides and stabilizers in their loop in order to attain longevity.
     
  7. DeskStar

    DeskStar Maha Guru

    Messages:
    1,159
    Likes Received:
    176
    GPU:
    EVGA 2080Ti/3090FTW
Yes sir... thank you sir.... (Old Gregg)

    And she's got the pedestal to go with the rest of her for that extra room for them rads.

    Case of cases if you ask me.

If anyone could help me out with locating a tempered glass window for this beast, I'd be more than greatly appreciative, that's for sure....
     
    Koniakki likes this.
  8. Denial

    Denial Ancient Guru

    Messages:
    13,326
    Likes Received:
    2,827
    GPU:
    EVGA RTX 3080
    Crossfire/SLI work completely differently than any proposed MCM-GPU setup.

    Crossfire/SLI is dying because the majority of modern shaders use interframe dependencies to speed up the processing - which creates massive overhead and scheduling/synchronization issues in multi-GPU setups.

As far as MCM setups go, it's being worked on by both companies. Nvidia and others have already published several research documents related to it:

    https://research.nvidia.com/sites/default/files/publications/ISCA_2017_MCMGPU.pdf
    https://hps.ece.utexas.edu/people/ebrahimi/pub/milic_micro17.pdf
     
    fantaskarsef and Fox2232 like this.
  9. Jespi

    Jespi Member

    Messages:
    22
    Likes Received:
    7
    GPU:
    8GB
"I have 47 fans" is the "I am vegan" of the IT industry :D
     
  10. user1

    user1 Ancient Guru

    Messages:
    1,636
    Likes Received:
    554
    GPU:
    hd 6870
This is why: https://www.servethehome.com/intel-xeon-platinum-9200-formerly-cascade-lake-ap-launched/

    Intel just launched a 400W TDP server chip, presumably to try and compete with Epyc 2.

    14nm means increasing TDPs in order to try and keep performance up against whatever AMD is going to launch; 14nm is a very old node at this point.

    I wouldn't be surprised if Intel launches desktop parts in the >200W TDP range over the next year, since 10nm is MIA.
     

  11. Aura89

    Aura89 Ancient Guru

    Messages:
    8,161
    Likes Received:
    1,274
    GPU:
    -
    ....lol

It's posts like this that make me wish these forums had a downvoting system; there's so much facepalm here.

I mean, sure, you're free to do as you want, but there's no situation, ever, where 47 fans would be needed in a system. You could have a dual-socket system at 250 watts per CPU with 3 Titans and still not need 47 fans, no matter how slow or how fast they run. Plus 47 fans, in any configuration, would make audible noise no matter what, due to the inherently mish-mash airflow you'd have to have.

    Then there's the validity of your statement. How, pray tell, do you fit 47 fans in a PC? The space 47 fans would take up would mean you have no case; instead, your case is just one conglomerate of fans.

    I'm having a hard time even finding a case with more than 10 fan mounts. There are some, but let's say you got one with 10 fan mounts, plus 4 GPUs with 3 fans each and two CPUs with 2 fans each. That gives you... 26 fans. Where are the other 21 fans? Even if you bumped the case up to 20 fan mounts (like the Thermaltake Core X9 listed above, or is it 23 fan mounts on that one? I'm reading 20 according to the description, but it could be 23), that's 36 fans total, so you'd still be 11 fans short (or 8 if it is indeed 23 on the Thermaltake Core X9).

But I guess I can't rule out the possibility that what you're saying is true, as this guy has 66 fans:

[image]

    Very um.....practical.....and useful.....not overkill for the sake of overkill at all, definitely not useless

And this isn't even mentioning that, depending on the wattage your fans run at, you're drawing even more power, which again I get you say you don't care about, but that's another 47-94 extra watts right there, supposedly, to cool things down......

Hey guys, look at my car! Most cars have 4 wheels, but mine..... it's got 8!

[image]

    Or better yet, look at my bicycle!

[image]

    OH YEAH BABY LOOK AT ME!
     
    Last edited: Apr 3, 2019
  12. Fox2232

    Fox2232 Ancient Guru

    Messages:
    11,804
    Likes Received:
    3,359
    GPU:
    6900XT+AW@240Hz
    I have no idea why people have something against fantastic cases.
     
  13. slyphnier

    slyphnier Master Guru

    Messages:
    813
    Likes Received:
    71
    GPU:
    GTX1070
Many SoCs/chips still use bigger nodes.
    Yeah, it yields a lower amount per wafer, but other than that it shouldn't make the pricing different or higher on previous/older nodes; as production matures, the yield rate on an older node is actually better than on a new node.

    On the other hand, like you already mentioned, new nodes mean they need to invest in building the production line.
    Even if they eventually get better yields, that's only once the node matures; in the early stages the yield will be low.
    That's what always makes new nodes more expensive.

    So saying 14nm will be more expensive than 10nm is somewhat incorrect.
     
  14. nevcairiel

    nevcairiel Master Guru

    Messages:
    749
    Likes Received:
    287
    GPU:
    3090
If you abstract a principle to a high enough level, you can use it for anything. But the details needed to make it work are exponentially more complex with a GPU, since the cores interact far more closely there.
     
  15. nevcairiel

    nevcairiel Master Guru

    Messages:
    749
    Likes Received:
    287
    GPU:
    3090
I would be surprised if they weren't working on it; making several small chips is always better than one big one. However, that doesn't mean they are anywhere close to making it work seamlessly, because the bottlenecks from the overhead of keeping multiple chips synchronized and talking to each other are quite real.
     

  16. Silva

    Silva Maha Guru

    Messages:
    1,468
    Likes Received:
    618
    GPU:
    Asus RX560 4G
The problem with CrossFire and SLI is in the software: the developer has to program for the feature.
    Some games scale really well, up to 90%, while others just run worse. It's not a hardware-related issue.
    If the "Crossfire" is done internally at the GPU level, you could split the load (like the software already does) on the board but only show one GPU to the software. Eventually we will have multi-die GPUs; Nvidia simply can't keep making them bigger, as not everyone has deep pockets.
    When top-of-the-line performance isn't in question, old nodes work just fine. They're efficient to produce and cheap to sell.

You don't understand how yields function, do you?
    Using the same process node, if you make a bigger chip you will have lower yields. The chip takes more space on the wafer and has a bigger chance of being a faulty one.
    Plus, if the chip takes more space on the wafer, you will have fewer working chips from that wafer to divide the cost over, making the product more expensive (even if the yield is 90%+).

    Bigger chips will always be more expensive, even if fabricated on a mature fabrication process; it's math.
    This is one of the reasons Nvidia is so expensive: they're not getting better, they're just getting bigger.
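The yield math above can be sketched with a simple Poisson defect model. All the numbers here (wafer cost, defect density, die sizes) are illustrative assumptions, not real foundry figures:

```python
import math

def dies_per_wafer(die_area_mm2, wafer_diameter_mm=300.0):
    """Rough gross-die count for a round wafer, using the classic
    approximation that subtracts partial dies lost along the edge."""
    wafer_area = math.pi * (wafer_diameter_mm / 2) ** 2
    edge_loss = math.pi * wafer_diameter_mm / math.sqrt(2 * die_area_mm2)
    return int(wafer_area / die_area_mm2 - edge_loss)

def poisson_yield(die_area_mm2, defects_per_mm2=0.001):
    """Poisson defect model: yield falls exponentially with die area."""
    return math.exp(-defects_per_mm2 * die_area_mm2)

def cost_per_good_die(die_area_mm2, wafer_cost=5000.0, d0=0.001):
    """Amortize an (assumed) wafer cost over the dies that survive."""
    good_dies = dies_per_wafer(die_area_mm2) * poisson_yield(die_area_mm2, d0)
    return wafer_cost / good_dies

# One 600 mm2 monolithic die vs four 150 mm2 chiplets of the same total area.
big = cost_per_good_die(600)
small = 4 * cost_per_good_die(150)
print(f"monolithic: ${big:.0f}  4x chiplets: ${small:.0f}")
```

With these made-up inputs the monolithic die costs roughly $101 in good silicon versus about $56 for four chiplets, because the big die both fits fewer times on the wafer and is far more likely to catch a defect.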
     
  17. nevcairiel

    nevcairiel Master Guru

    Messages:
    749
    Likes Received:
    287
    GPU:
    3090
The point is that this isn't very easy. Crossfire doesn't only have scaling problems because the software sucks, but also because more advanced graphics features are just fundamentally incompatible with its approach.
    If you could just make a chip that does all the Crossfire/SLI work and scales 90-100% all the time, then someone would have done that already. But there is no easy solution to this.

    A future multi-die GPU wouldn't work like Crossfire/SLI works today (no matter if management is in software or hardware). It would have to be much smarter than that, and allow the chips to work together much more closely, sharing caches, VRAM and all that stuff. This is an exceptionally hard topic to solve, and most importantly, much harder than for CPUs, because CPU cores are designed to work relatively independently.
     
    Denial likes this.
  18. Denial

    Denial Ancient Guru

    Messages:
    13,326
    Likes Received:
    2,827
    GPU:
    EVGA RTX 3080
Yeah, I strongly suggest people read the first white paper I linked here:

    https://research.nvidia.com/sites/default/files/publications/ISCA_2017_MCMGPU.pdf

    It's definitely more complex to do MCM on a GPU than CPU due to extremely complex scheduling requirements. Also like you said it operates nothing like modern SLI/Crossfire.

    It's probably a few generations out and even in their theoretical approaches it still doesn't scale 100% and requires an incredibly high bandwidth bus - far more than what infinity fabric is currently capable of.
     
  19. Warrax

    Warrax Member Guru

    Messages:
    131
    Likes Received:
    17
    GPU:
    Gigabyte 970 G1
Almost 5 years of pretty much rebranding the same thing. Good job, Intel /s
     
    Silva likes this.
  20. tunejunky

    tunejunky Maha Guru

    Messages:
    1,240
    Likes Received:
    440
    GPU:
    RadeonVII RTX 2070
    exactly.
     
