Dummy Dies? ThreadRipper examined [der8auer]

Discussion in 'Frontpage news' started by BigMaMaInHouse, Sep 15, 2017.

  1. varkkon

    varkkon Member Guru

    Messages:
    140
    Likes Received:
    24
    GPU:
    Geforce 1080 Ti
    Makes no difference to me, the ThreadRipper CPUs own.

    Cool article though, nice to see what is under the hood for the heck of it.
     
    Silva likes this.
  2. BlueRay

    BlueRay Guest

    Messages:
    278
    Likes Received:
    77
    GPU:
    EVGA GTX 1070 FTW
    Beyond simple curiosity about the how and why, is there any other serious reason this reveal may be important?
    Does it hinder performance or cripple the product somehow?
    Why do I see some people write "ha, AMD lied to us, gotcha" and such?
     
  3. wavetrex

    wavetrex Ancient Guru

    Messages:
    2,464
    Likes Received:
    2,574
    GPU:
    ROG RTX 6090 Ultra
    It's very likely that these are actually EPYC CPUs straight from the factory, with the bad dies lasered off and firmware flashed so the chip shows up as TR.
    Minimum work, reduced waste, everyone wins!

    Except Intel of course, which is rapidly losing market share on all fronts...
    I really hope those foundries AMD is using can keep up with the demand!
     
  4. airbud7

    airbud7 Guest

    Messages:
    7,833
    Likes Received:
    4,797
    GPU:
    pny gtx 1060 xlr8
    Any way to unlock that stuff in there, like back in the Phenom II X3-to-X4 days?
     

  5. Venix

    Venix Ancient Guru

    Messages:
    3,472
    Likes Received:
    1,972
    GPU:
    Rtx 4070 super
    My only speculation about it is that it was easier/cheaper to use the same socket, the same PCB design, and the same assembly line for soldering etc. than to create a new socket or a new assembly line for a different Threadripper CPU package. Right now that is just my own speculation, without any proof or evidence, about what seems to be the logical reason. And it leaves them with the window open to release higher core counts... possibly!
     
    Evildead666 likes this.
  6. akbaar

    akbaar Master Guru

    Messages:
    426
    Likes Received:
    55
    GPU:
    ASUS TUFF 3080 12Gb
    Good News
     
  7. r00lz

    r00lz Member

    Messages:
    20
    Likes Received:
    3
    GPU:
    NVIDIA RTX 3090
    Or maybe two bad Ryzen dies with bad cache and two bad Zeppelins with bad cores. But I don't think it's possible to make a working Threadripper that way. :)
     
  8. DARKSF

    DARKSF Active Member

    Messages:
    61
    Likes Received:
    17
    Instead of making useless comments about why they did it (the answer is very clear: production optimisation), we should ask whether there are actually intact 32-core CPUs being sold as 16-core CPUs, and whether there is a way to activate the rest of the cores. :) Since the platform is clearly almost the same as the EPYC platform, it would be no surprise to see up to a 32-core Threadripper in the future put up against the i9 series.
     
  9. Evildead666

    Evildead666 Guest

    Messages:
    1,309
    Likes Received:
    277
    GPU:
    Vega64/EKWB/Noctua
    Don't forget, people, that the Zeppelin dies are not just cores; there is also an SoC part on there.
    Some dies may have defects in the SoC part and would be totally lost even if the cores worked.
    This is a re-use and waste-minimising procedure that is extremely cost effective.
     
    Loophole35 likes this.
  10. Evildead666

    Evildead666 Guest

    Messages:
    1,309
    Likes Received:
    277
    GPU:
    Vega64/EKWB/Noctua
    Not really.
    There are "probably" no traces in the substrate for them.

    edit: even if there were traces in the chip substrate, the motherboard doesn't support EPYC chips, so it wouldn't be activatable.
     
    Last edited: Sep 16, 2017

  11. Evildead666

    Evildead666 Guest

    Messages:
    1,309
    Likes Received:
    277
    GPU:
    Vega64/EKWB/Noctua
    Exactly. Same production line as EPYC, much lower costs than a separate line.

    I'd say that a new revision of the board, with the same socket, could enable a quad-die TR to work.
    Next gen probably, when they hit 10/7nm and get the TDPs down enough to have all 32 cores in the same thermal envelope.

    edit: But by then, EPYC will be on 64 cores, and the Zeppelin "II" dies could be 16 cores each.
     
  12. Neo Cyrus

    Neo Cyrus Ancient Guru

    Messages:
    10,793
    Likes Received:
    1,396
    GPU:
    黃仁勳 stole my 4090
    I seriously doubt that. It's most likely 4 bad dies with 4 cores on each disabled or even mixed and matched to get 16 passable cores. Those things are enormous; their defect rate is probably through the roof.
    Why would it need official support for the board to recognize it? It's the same architecture, there shouldn't be much of an issue. At least that's how it's always been from what I've seen. Someone correct me if I'm wrong.
     
  13. Aura89

    Aura89 Ancient Guru

    Messages:
    8,413
    Likes Received:
    1,483
    GPU:
    -
    I'd say put this idea to bed.

    There's too many factors that would change to unlock the additional cores. A 12 core to 16 core, sure. But you'd be asking a motherboard and bios and possibly the CPU itself to work in a way it was never designed to. For instance, EPYC CPUs will not work on threadripper motherboards, even though they are physically the same. Why is that? Is it because the EPYC CPUs have additional DIMM slots it expects? Additional PCI-Express lanes that they expect? If a motherboard tells the CPU it has less of either of those, does the motherboard tell the EPYC CPU that they same way that a threadripper motherboard would?

    This isn't like the old days where you could "unlock" CPU cores on a CPU. Those motherboards understood the additional cores and information to control them, because they supported the CPUs they had essentially become, already. AKA if a 6 core unlocked to an 8 core, that motherboard already knew how to handle an 8 core.

    This is not like that. Threadripper motherboards only understand up to 16 cores, not 32. Not 17. As well as only understands how to utilize up to what a 16 core processor can give it, PCI-E lanes and such.

    Not to mention the fact that you'd be trying to unlock 16 additional cores and you would have no idea how badly they failed. If they are BOTH failed chips, you're essentially doubling the risk, if not much much more, compared to how it was done before. "back in the day" unlocks were cores PART of the CPU package. This is entirely different, and would be essentially "waking up" 2 full CPU packages that are deemed non-working, to the point that they didn't even want to use them as 4-core Ryzen r3 chips.
     
    Last edited: Sep 16, 2017
    yasamoka likes this.
  14. D3M1G0D

    D3M1G0D Guest

    Messages:
    2,068
    Likes Received:
    1,341
    GPU:
    2 x GeForce 1080 Ti
    That's the main reason why Threadripper exists right now. AMD described Threadripper as an experiment by their engineers in their spare time. It was originally not in the plans for 2017 but got squeezed in at the last minute, and it was only by utilizing the EPYC production line that it saw the light of day this year.

    IMO, Threadripper is the way CPUs should be. It was not created by management trying to increase the bottom line, but by the passion and drive of the engineers - created by enthusiasts, for enthusiasts. The fact that it was so well-received and is seeing great sales provides further validation.
     
    chispy likes this.
  15. Evildead666

    Evildead666 Guest

    Messages:
    1,309
    Likes Received:
    277
    GPU:
    Vega64/EKWB/Noctua
    The Zeppelin dies aren't that big, and they announced 70% yield quite a while ago. ;)
     

  16. wavetrex

    wavetrex Ancient Guru

    Messages:
    2,464
    Likes Received:
    2,574
    GPU:
    ROG RTX 6090 Ultra
    Don't forget that with NUMA it's possible to access memory and PCI Express lanes linked to another die, via Infinity Fabric.
    So a 32-core CPU is definitely possible on current X399, with two of the dies not directly linked to the motherboard. Latency will suck, yes, but if such a CPU is built it will be purely for content creators, where core power is more important than memory or I/O latency.
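
    For anyone curious what that actually looks like, here's a rough sketch (nothing Threadripper-specific, just the standard Linux sysfs node interface) of how the OS exposes that NUMA layout: which CPUs sit on which die and the relative cost of reaching another node's memory over the fabric. The interpretation of the output is my own.

    Code:
    # Illustrative sketch only: list NUMA nodes, their CPUs, and the kernel's
    # relative access-cost table (10 = local, higher = remote over the fabric).
    from pathlib import Path

    def numa_topology():
        for node in sorted(Path("/sys/devices/system/node").glob("node[0-9]*")):
            cpus = (node / "cpulist").read_text().strip()
            # "distance" holds one cost value per node in the system
            dists = (node / "distance").read_text().split()
            print(f"{node.name}: cpus {cpus}, distances {dists}")

    if __name__ == "__main__":
        numa_topology()

    On a current 2-die Threadripper this prints two nodes; a hypothetical 4-die part would just add nodes whose every memory and I/O access shows up as remote, which is exactly the latency hit described above.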
     
    Last edited: Sep 17, 2017
  17. DARKSF

    DARKSF Active Member

    Messages:
    61
    Likes Received:
    17
    Most of your arguments are invalid. The memory lines and the PCI-E lanes are just unused; if what you are saying were a problem, then mini-ITX form factors would require special CPUs, and any dual/quad-channel modern CPU would be unable to POST with only one DIMM installed. :)

    On the more-cores question, the mobo is not interested in how many cores the CPU has at a hardware level; it is all handled at the BIOS level. Just a reminder from almost a decade ago: the AM3 Phenom II X6 worked fine on old AM2+ mobos that were manufactured in the quad-core era and didn't know about a 6-core part until the X6 came along and a BIOS update was provided.

    About the package: it is still a single CPU package; the dies don't communicate with each other through the mobo but over the Infinity Fabric inside the package.

    About the bad silicon, well, here I agree, but no catastrophic failure will happen. The machine simply won't POST because of CPU errors with the faulty areas activated, like it always did with the older architectures.

    The only real problem will be power delivery: you would need to downclock the CPU in order not to fry the mobo's power delivery.
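
    A quick back-of-envelope sketch of why that downclocking would be needed; the 180 W socket budget, the 40 W uncore share, and the f·V² scaling rule are my assumptions for illustration, not AMD's numbers:

    Code:
    # Rough illustration: doubling the core count inside the same socket/VRM
    # budget roughly halves the per-core power, which forces clocks (and
    # voltage) down, since dynamic power scales roughly with f * V^2.
    TDP_W = 180.0      # assumed budget of the 16-core part
    UNCORE_W = 40.0    # assumed fabric / memory controller / I/O share

    def per_core_budget(cores):
        return (TDP_W - UNCORE_W) / cores

    for cores in (16, 32):
        print(f"{cores} cores -> ~{per_core_budget(cores):.1f} W per core")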
     
  18. Aura89

    Aura89 Ancient Guru

    Messages:
    8,413
    Likes Received:
    1,483
    GPU:
    -
    That's not correct. Those motherboards know how to tell the CPU how to handle them, and the CPU knows how to handle less, because both the CPU and the motherboard were designed to work with each other. So your idea doesn't hold up.

    By that I mean: if I were to release a chipset with fewer features than the CPU can provide, or allow manufacturers to decide that with said chipset (for mini-ITX etc.), then obviously I would be designing both the CPU and the chipset to work with each other on that. That does not mean that if you did NOT design a chipset and/or CPU to work with each other, either part will understand how to handle more than it was designed for.

    That depends on whether all 4094 pins are actually being used, or only half of them. Quite frankly, it doesn't make much sense to wire up all the pins if all the CCXs aren't being used. The pins could be dummy pins, inactive, going nowhere, on both the CPU and the motherboard. But we do not know how it is set up fully.
    I didn't even think of that.

    Overall, my point is that it's not really worth trying to figure out whether it's possible: there are too many factors, too many things the motherboard was never designed to understand or handle, too many unknowns to think that what happened in "the old days" would even be possible now.
     
    Last edited: Sep 17, 2017
  19. Neo Cyrus

    Neo Cyrus Ancient Guru

    Messages:
    10,793
    Likes Received:
    1,396
    GPU:
    黃仁勳 stole my 4090
    Are we talking about the same thing? Either I'm thinking of something else, or they're enormous for something that lacks an integrated GPU. And I have a hard time believing that 70% figure. Is AMD on a larger process?
     
  20. Evildead666

    Evildead666 Guest

    Messages:
    1,309
    Likes Received:
    277
    GPU:
    Vega64/EKWB/Noctua
    213mm² isn't that big for an 8c/16t chip.
    Intel's 8-core (based on the 10-core chip) is 308mm².
    It has more memory channels etc., but it's still much bigger. :)

    AMD is on a larger process than Intel currently, yes.
    I think it's 14nm for AMD, and 14 or 12/10nm for Intel.
    That might change with AMD's 7nm vs Intel's 10nm next year, since I don't believe Intel will be moving to 7nm or below just yet...
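
    For anyone wondering how that die size ties into the 70% yield mentioned earlier, here's a rough sketch using the textbook Poisson yield model; the defect density it backs out is illustrative only, not an official figure:

    Code:
    # Poisson yield model: Y = exp(-A * D), with A = die area, D = defect density.
    import math

    ZEPPELIN_CM2 = 2.13     # ~213 mm^2 Zeppelin die
    BIG_DIE_CM2 = 3.08      # ~308 mm^2 die for comparison
    QUOTED_YIELD = 0.70     # figure quoted upthread

    # Back out a defect density from the quoted yield and die area
    defects_per_cm2 = -math.log(QUOTED_YIELD) / ZEPPELIN_CM2
    print(f"Implied defect density: {defects_per_cm2:.2f} per cm^2")

    # Apply the same defect density to the larger die
    print(f"Same process, 308 mm^2 die: ~{math.exp(-BIG_DIE_CM2 * defects_per_cm2):.0%} yield")

    With those assumptions the bigger die lands around 60% yield, which is the whole point of stitching several small dies together instead of building one huge one.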
     
