AMD Gives Statement on the PCI-Express Overcurrent Problems

Discussion in 'Frontpage news' started by Hilbert Hagedoorn, Jul 2, 2016.

  1. HeavyHemi

    HeavyHemi Guest

    Messages:
    6,952
    Likes Received:
    960
    GPU:
    GTX1080Ti
    I have not seen any example where a 480 was shown to be drawing a continuous 100 watts from the PCIe slot. Not even close. A budget board would be irrelevant to the 24-pin connector burning up. Further, he adds:

    Were you dual mining? YES
    Overclocking? NO, actually I had the clock lower than standard
    Increase Power Limit? NO

    and he's running the Asus P7P55-LX motherboard.
     
    Last edited: Jul 3, 2016
  2. alanm

    alanm Ancient Guru

    Messages:
    12,232
    Likes Received:
    4,435
    GPU:
    RTX 4080
    Sorry, my bad; I estimated based on OC'd 480s, which can draw up to 200W.

    So despite the heavy mining load, three cards should not draw 300W from the mobo when not OC'd. But sustained power draw over the PCIe spec from three cards on a lower-budget board is not a good idea, I think we can agree.
     
    Last edited: Jul 3, 2016
  3. blahsaysblah

    blahsaysblah Guest

    Messages:
    13
    Likes Received:
    0
    GPU:
    GTX 750 Ti 2GB
    Just Say No to out-of-spec cards.

    ATX12V 2.2 (the 2.3 spec was 2007) changed the ATX pins to HCS, allowing 9A per pin instead of 6A. The change to 20+4-pin ATX power, which added a second 12V line to the motherboard, came about because of high-power graphics cards.

    It is not shared with the CPU, but it is shared with all PCI-E slots, M.2, fans, ...

    New motherboards and power supplies using High Current Series pins:
    24-pin: 2x12Vx9A is 216W.
    20-pin: 1x12Vx9A is 108W.

    Older motherboards and power supplies using standard pins:
    24-pin: 2x12Vx6A is 144W.
    20-pin: 1x12Vx6A is 72W.

    Edit: for reference:
    New PSU:
    6-pin: 2x12Vx9A is 216W vs the 75W spec (if the wire allows).
    8-pin: 3x12Vx9A is 324W vs the 150W spec.

    Old PSU:
    6-pin: 2x12Vx6A is 144W vs the 75W spec (if the wire allows).
    8-pin: 3x12Vx6A is 216W vs the 150W spec.

    The PCI-E and ATX specs allow you to run 24x7 without any risk at the 150W, 225W, and 300W configurations per card (10W of each is via the 3.3V rail; the rest is 12V from the PCI-E slot and directly from the PSU). Edit: forgot the base 75W configuration (10W at 3.3V and 66W at 12V) via just the PCI-E slot.

    You don't need to run out of spec. Why would you ever run out of spec?
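
    For reference, a minimal sketch that recomputes the connector budgets above from the per-pin ratings quoted (6A standard, 9A HCS) and the +12V pin counts this post assumes; real limits also depend on wire gauge and connector condition.

        # Recompute the 12V budgets quoted above: watts = pins x 12V x amps.
        # Pin counts and per-pin ratings are the ones assumed in this post.
        PIN_AMPS = {"standard": 6.0, "hcs": 9.0}  # amps per pin
        V12 = 12.0
        TWELVE_VOLT_PINS = {                      # +12V pins per connector
            "24-pin ATX": 2,
            "20-pin ATX": 1,
            "6-pin PCI-E": 2,
            "8-pin PCI-E": 3,
        }

        for name, pins in TWELVE_VOLT_PINS.items():
            for kind, amps in PIN_AMPS.items():
                watts = pins * V12 * amps
                print(f"{name} ({kind}): {pins}x12Vx{amps:.0f}A = {watts:.0f}W")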
     
    Last edited: Jul 3, 2016
  4. Clouseau

    Clouseau Ancient Guru

    Messages:
    2,841
    Likes Received:
    508
    GPU:
    ZOTAC AMP RTX 3070
    Hold on there, that is not correct. That's bat crazy. Get your head straight. That grain of sand belongs over there, not there. It has a tone of color more consistent with those over there.

    But what about if one tilts it to a point where the color changes hues?

    FFS, let's beat this horse till nothing of the horse is left and we can start talking about the dirt that was underneath the horse. If that is not sufficient, then there is always the space between the particles of dirt that can be dissected.

    Slap AMD on the wrist and be done with it. If that is not satisfying enough, write them a letter stating how irresponsible they were. Elaborate on how bat sh|t crazy they were.
     

  5. Fox2232

    Fox2232 Guest

    Messages:
    11,808
    Likes Received:
    3,371
    GPU:
    6900XT+AW@240Hz
    I'll just leave it here: he burnt pins 1, 2, 3, 14 (+3.3V, +3.3V, GND, -12V).
    But the actual source of damage looks to be between pins 2 and 3, so +3.3V.
    Not that it matters in any way, because that -12V is not used by the GPU, and everyone everywhere says that the +3.3V current going to the RX 480 through PCIe is pretty standard.

    @alanm: I call you out: if you actually care, go and find what that 3.3V pin is actually used for.
     
  6. Clouseau

    Clouseau Ancient Guru

    Messages:
    2,841
    Likes Received:
    508
    GPU:
    ZOTAC AMP RTX 3070
    The PCI-E slot draws current from the 12V and the 3.3V pins for a total of 75 watts.

    What will this discussion unearth from this point on? To what avail? This horse has been dead for a while now. Let it rest in peace. Bow to Nvidia and acknowledge their humility and humbleness. Kick AMD in the nuts and be done with it.
     
  7. Bleib

    Bleib Guest

    Messages:
    374
    Likes Received:
    1
    GPU:
    MSI RX 480 8GB
    I'm actually glad now that the price was a bit on the high side, so I didn't purchase one. I'll even wait for the 1060 to see how AMD responds to it; it's just quite hard to believe that a major corporation messes up this badly.

    The bad cooler is not the biggest problem, I'll replace it with something better, but to not be in spec for something this important is just mind-bogglingly incompetent. Kudos to the websites that test these sorts of things.

    AMD really cannot afford these types of mess-ups; they have already had a reputation for bad drivers for more than a decade (which is not true).
     
  8. blahsaysblah

    blahsaysblah Guest

    Messages:
    13
    Likes Received:
    0
    GPU:
    GTX 750 Ti 2GB
    According to the Tom's Hardware review that kind of started all this: the 3.3V is the 7th phase of the card's 6+1 config. It is used to power the memory.

    They saw an average of 4W and a peak of 7W during power profiling in games.

    I would assume mining is a much heavier user of memory?

    PCPer reported that their motherboard contact said it won't be the traces that give first, it will be the PCI-E contacts. They probably melted and created a short.

    The -12V pin is probably only involved in that it's next to the contact that failed first.

    However, haven't other sites over the years clearly said that the VRAM on cards is a significant consumer of watts? That 4W average / 7W peak is not exactly much higher than regular DDR, so maybe that information is incorrect. The ASIC (GPU die) is 110W and the whole card is 150W; I thought a big chunk of that difference was the RAM, until I read that the 3.3V was for the RAM.
     
  9. HeavyHemi

    HeavyHemi Guest

    Messages:
    6,952
    Likes Received:
    960
    GPU:
    GTX1080Ti
    Typical usage for GDDR5 at these speeds and this amount of memory is closer to 20-25 watts per card. So with three cards, theoretically you're looking at pulling close to 22 amps on the 3.3V rail.
    Edit... revised numbers based on further research, i.e. memory bandwidth :)
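
    For reference, that figure is just I = P / V. A quick sketch, assuming roughly 25W of memory power per card and the three-card rig discussed earlier (estimates from this thread, not measurements):

        # Rough 3.3V rail current: I = P / V.
        # Assumes ~25W of GDDR5 power per card and three cards on one board.
        watts_per_card = 25.0
        cards = 3
        volts = 3.3
        amps = cards * watts_per_card / volts
        print(f"{cards} x {watts_per_card:.0f}W / {volts}V = {amps:.1f}A")  # -> 22.7A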
     
    Last edited: Jul 3, 2016
  10. alanm

    alanm Ancient Guru

    Messages:
    12,232
    Likes Received:
    4,435
    GPU:
    RTX 4080
    I actually checked that before posting the mining link and did see it as 3.3V, but I also saw others showing it as 12V. I believe it's wire side vs pin side that is causing the confusion.


    So the burnt motherboard connector pins were 12V, not 3.3V.
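
    To illustrate that wire-side vs pin-side mix-up, a minimal sketch assuming the standard ATX 24-pin pinout: each row of 12 mirrors left-to-right between the two views, so pins 1, 2, 3, and 14 read from the wrong side land on pins 12, 11, 10, and 23.

        # How a 24-pin ATX diagram mirrors between wire side and pin side.
        # Assumes the standard ATX 24-pin assignments; the two rows of 12
        # mirror left-to-right when viewed from the opposite side.
        ATX24 = {
            1: "+3.3V", 2: "+3.3V", 3: "GND", 4: "+5V", 5: "GND", 6: "+5V",
            7: "GND", 8: "PWR_OK", 9: "+5VSB", 10: "+12V", 11: "+12V", 12: "+3.3V",
            13: "+3.3V", 14: "-12V", 15: "GND", 16: "PS_ON#", 17: "GND", 18: "GND",
            19: "GND", 20: "NC", 21: "+5V", 22: "+5V", 23: "+5V", 24: "GND",
        }

        def mirrored(pin: int) -> int:
            # Pin number at the same physical spot seen from the opposite side.
            return 13 - pin if pin <= 12 else 37 - pin

        for pin in (1, 2, 3, 14):  # the four burnt positions
            other = mirrored(pin)
            print(f"pin {pin} ({ATX24[pin]}) <-> pin {other} ({ATX24[other]})")
        # -> pins 1/2/3/14 read as 12 (+3.3V), 11 (+12V), 10 (+12V), 23 (+5V)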
     
    Last edited: Jul 3, 2016

  11. VENGEANCE

    VENGEANCE Guest

    Messages:
    172
    Likes Received:
    0
    GPU:
    GALAX GTX 1070 HOF @2164
    This card is already doomed.
     
  12. Fox2232

    Fox2232 Guest

    Messages:
    11,808
    Likes Received:
    3,371
    GPU:
    6900XT+AW@240Hz
    I'll tell ye a secret and you will spread it to others, yeah?
    - - - -
    All those connectors are squares, but some have corners cut to prevent misalignment.
    And now the secret: all the cuts face the same direction. Please let everyone know how to identify a board-side diagram and its orientation.
    (and that -12V does not even have enough current to burn anything)
    - - - -
    And for laughs: if you managed to turn it around, those burned pins would be
    GND, +5V, +5V, +12V, weirdly enough with the source of the burning between the two +5V pins.
    That does not even relate to anything other than wishful thinking that it was a GPU-caused problem.

    Edit: He is actually right, but sadly. It means that the pins which burned are +3.3V, +12V, +12V, +5V.
    And electrical basics will tell you: "Current which goes IN must go OUT."
    So, where are the burned GNDs?

    And in all honesty, I would believe burned traces, not a burned connector.
     
    Last edited: Jul 3, 2016
  13. alanm

    alanm Ancient Guru

    Messages:
    12,232
    Likes Received:
    4,435
    GPU:
    RTX 4080
    I only see the 12V pin as burned, which may have spilled over into the nearby pin sockets.

     
    Last edited: Jul 3, 2016
  14. H83

    H83 Ancient Guru

    Messages:
    5,465
    Likes Received:
    3,002
    GPU:
    XFX Black 6950XT
    I don't know what's happening for sure, because I'm not an electrical engineer or anything like that; what I do know is that it's a shame the reputation of a very good card is being destroyed by a stupid mistake from AMD...
    C'mon AMD, put your **** together and start releasing great products so Intel and Nvidia have some real competition!
     
  15. The Laughing Ma

    The Laughing Ma Ancient Guru

    Messages:
    4,691
    Likes Received:
    1,078
    GPU:
    Gigabyte 4070ti
    Then why is AMD admitting there is an issue and saying they are working on a driver to fix it? Yes, the trolls have over-inflated the potential damage, but you don't "deal with the trolls" by admitting you are working on a fix.

    Market share? Seems about right.

    What I love about the statement is that they couldn't just give a flat-out "we are fixing it, sorry for any issues some users may be having"; they had to prefix the statement regarding the fix with that marketing BS.

    Mind you, it's good for the new owners that they can fix the issue with software; the last thing AMD needs now would be some sort of recall. Then again, the last thing AMD needed was any kind of bad press regarding their new card.
     

  16. alanm

    alanm Ancient Guru

    Messages:
    12,232
    Likes Received:
    4,435
    GPU:
    RTX 4080
    Yep, I would have no problem buying or recommending the card; it's a damn good deal, as long as AMD corrects the PCIe overdraw on the single 6-pin card, or an AIB does with an 8-pin/dual 6-pin version. And yes, it's a shame, as I think it's even better built than a GTX 1080, at least with the beefier power phases on it.
     
  17. Athlonite

    Athlonite Maha Guru

    Messages:
    1,358
    Likes Received:
    52
    GPU:
    Pulse RX5700 8GB
    Look at the age of that mobo: it has a PATA connector on it. Geez, how stupid are some people? That mobo was never going to last mining with 3x RX 480s in it, and it would never have lasted with 3x R9 390s either. It probably also only had a single 4-pin ATX power plug as well.
     
  18. Reddoguk

    Reddoguk Ancient Guru

    Messages:
    2,660
    Likes Received:
    593
    GPU:
    RTX3090 GB GamingOC
    I'm starting to question now why AMD or Nvidia even need to make cards at all. We all know that reference Nvidia or AMD cards are made as cheaply as possible: plain, with standard cooling that's not great and poor power delivery, yet very similar in price to third-party cards.

    I've never owned a reference card made by AMD or Nvidia, and the reason is that MSI, EVGA, Gigabyte, Asus, Palit, KFA2, Zotac, Inno3D, PNY, Gainward, and any I missed all sell reference cards as well, but they also all do non-reference cards with better everything, even the PCB.

    Why do all of them buy the GPU chips separately and make many versions of the same card? Is it because reference cards are made poorly and they think they can improve on them? Yes is the answer.

    Reference cards aren't bad, btw; it's just that third-party cards are better: better components, better power delivery, better cooling, a better PCB with more copper and even more layers to help spread the heat out, overall just better design, and usually a better warranty.

    I'm sorry, but as a PC builder I could never recommend reference anything, and it'll be up to these third parties to save this card from becoming a long-standing joke.

    The names I mentioned above will all make a very good RX 480; it's just a shame that AMD couldn't.
     
  19. mjorgenson

    mjorgenson Guest

    Messages:
    257
    Likes Received:
    0
    GPU:
    EVGA GTX 1070 FTW-RX 480
    I had an Nvidia reference 980 Ti that I installed for a client. I felt the card was very well made. The AMD Fury Nano and Fury X I have used were also well made. The RX 480... I have no idea, as I have not even opened the box.

    I see reference cards like a '68 Hemi Dart: just a base for builders to work on and a proof of concept.
     
  20. Clouseau

    Clouseau Ancient Guru

    Messages:
    2,841
    Likes Received:
    508
    GPU:
    ZOTAC AMP RTX 3070
    It was a P7P55-LX, circa 2010. The slots are PCI-E 2.0, and look where the damage was. A PSU at fault would be my guess. Also, in the thread on the next page there is an individual who states they are running three 390s on the same mainboard, and that power draw is much, much more than the RX 480's. So the culprit most likely was the PSU.

    Let's all form a cult based on how bad the RX 480 is and picket AMD headquarters until they cough up a redesigned card that states the actual power draw and the effect it may have on older equipment. Better yet, let's lobby Congress for a new warning label to be placed on the packaging, and disclaimers to be mentioned in any and all advertising events like CES.
     
