Discussion in 'Frontpage news' started by Hilbert Hagedoorn, Sep 24, 2020.
Indeed, I made a post above with a couple of other shots of available AIB boards.
Seems even the Zotac 3090 uses POSCAPs, not even SP-CAPs.
All six caps read 330 (330 µF) on them.
To be fair, POSCAPs are not garbage, just not the best fit for this use case.
Some cool info here for those that care about details.
This is a preproduction TUF; the production models are full MLCC.
Nvidia should impose more stringent measures on board partners so they don't cheap out or go below what Nvidia uses on its own reference boards. Zotac and Gigabyte should be shamed, but Nvidia's brand also gets hurt by these lax standards with AIBs. Blame therefore goes equally to Nvidia for allowing it to happen.
Buildzoid basically blames Nvidia, not the board partners, for this. According to him, Nvidia sets the design guidelines for the AIBs to follow and must approve their designs, so ultimately it is Nvidia's responsibility that the end product functions properly.
I'm surprised about the Asus quality complaints; so many people seem to complain about Asus. But honestly, I've had like three Asus mobos and none ever died.
Hopefully this will teach some people the risk of early adoption. I'd look into a return option asap.
Shame everyone at Asus is deaf, and ends up making loud coolers.
I'll be honest, I didn't watch the video, don't have time at the moment, but... is Buildzoid stating that Nvidia "has" to approve a design before an AIB is allowed to use their GPU in it? Because I don't think that's true. Nvidia gives them the reference specifications, the bare minimum, which include the key aspect that is causing the GPUs to have issues here. Why would Nvidia, after handing this to the AIBs to follow, need to "approve" a design for something they've already spelled out? And realistically, what is stopping the AIBs from just bypassing all of that? Heck, we see custom Chinese boards all the time that have never come from any (well, known) AIB, which just plop in Nvidia and AMD GPUs (and often fake them, etc.). So what's really stopping AIBs from just looking at that reference design and following it?
Seems really odd to blame Nvidia when we have seen that their reference, bare-minimum design clearly shows what these cards don't have. It's the AIBs who decided to cheap out.
And let's just say Nvidia does, somehow, have to and is able to authorize every single PCB and component choice for every single AIB and every different version of the AIBs' products... What exactly would Nvidia's rationale be for setting a standard and then saying, "Nah, that's fine, you don't have to follow it, even though you using cheaper parts than we said were required doesn't help our profits at all"?
There's no logical rationale, reasoning, etc. for Nvidia to do that. So I don't buy it. AIBs cheaped out; that's on them.
At around the 17:50 mark, he does say: "as far as I'm aware, that if you are making an Nvidia GPU, you have to get the PCB design approved by Nvidia". Not sure if that's entirely accurate, I wouldn't know. That's just what he's saying.
Basically, everyone who complains about Asus is complaining about the pricing, not the quality.
You usually get what you pay for... hence why I ordered the 3090 Strix OC.
That's not his intended message nor his final conclusion, though. He concludes that he doesn't know, because there's not enough info. Here, I timestamped your link:
Same as Igor:
NVIDIA, by the way, cannot be blamed directly, because the fact that MLCCs work better than POSCAPs is something that any board designer who hasn’t taken the wrong profession knows.
TLDR; NOT ENOUGH INFO
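On Igor's point about MLCCs vs. POSCAPs: the short version is that small ceramics have much lower ESL and ESR, so an array of them filters the high-frequency noise on the GPU rail far better than one big polymer cap can. A quick back-of-the-envelope sketch, and to be clear the capacitance/ESR/ESL numbers below are generic ballpark values I picked for illustration, not the actual parts on any of these boards:

```python
import math

def cap_impedance(c_farads, esr_ohms, esl_henries, freq_hz):
    """Magnitude of a capacitor's impedance from the simple series R-L-C model."""
    w = 2 * math.pi * freq_hz
    reactance = w * esl_henries - 1.0 / (w * c_farads)
    return math.sqrt(esr_ohms ** 2 + reactance ** 2)

# One 330 uF polymer cap (POSCAP/SP-CAP class) vs. an array of ten 47 uF MLCCs
# on the same pad. For parallel parts: capacitance adds, ESR and ESL divide by n.
f = 100e6  # 100 MHz, roughly the frequency range where ESL starts to dominate
poly = cap_impedance(330e-6, 6e-3, 1.5e-9, f)
mlcc = cap_impedance(10 * 47e-6, 2e-3 / 10, 0.5e-9 / 10, f)

print(f"polymer: ~{poly * 1000:.0f} mOhm, MLCC array: ~{mlcc * 1000:.0f} mOhm at 100 MHz")
```

With those assumed values the MLCC array comes out well over an order of magnitude lower in impedance at high frequency, while the big polymer still wins on bulk charge storage, which is presumably why the better boards mix both types instead of going all one or the other.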
Realistically, unless the AIB(s) pulled a fast one, of course it's Nvidia's fault. And even then, this is not Nvidia's first rodeo - they should know by now that AIBs will try to cheap out. Nvidia's validation program (which, despite AMD's famous "overengineering", is directly responsible for Nvidia's consistently lower RMA rates) should be stricter and more streamlined to prevent something like this.
Also, one doesn't have to be a genius to figure out that a 350-watt GPU peaking in 500-watt territory will have a higher probability of malfunctioning than, say, my sexy, cool, undervolted 2070, as seen in historical RMA rates, which clearly correlate with TDP.
So especially if you're going into somewhat new territory (a 350/450-watt GPU launch), you'd better have those guidelines in place.
All this provided there is an actual problem with 3080s, and it's not just a matter of OC-happy owners.
So now we have to worry not only about A chips vs. non-A chips and the right RAM chips, but also about the caps on the boards? This is... surprising, but not in a good way. A buyer has to check so many things these days that you usually can't see unless you have ample testing and, at best, two months of internet R&D to see what people complain about.
I hope you guys get that sorted out, but if it's an issue with the caps on the board itself, what could a customer do? RMA the card to just get another one with the same caps?
Fair enough, long vid, hadn't finished it yet. But I did get to the part (17:50) where he does not "necessarily" blame the GPU makers but does imply the responsibility lies with Nvidia, by saying they have to approve the PCB designs. How the responsibility ultimately splits between AIBs and Nvidia, I don't know.
But as far as I'm concerned, I put the blame on Nvidia for not ensuring that whatever parts the AIBs use perform within the expected parameters of the products sold. Meaning if an OC'd AIB card is released, it must be proven or tested to be stable before release. Nvidia's own rep is at stake when it isn't, not to mention the inconvenience customers have to go through.
What baffles me most is that, prior to reviews, AIBs couldn't even test with a driver; they only had stress-testing software from Nvidia!
It wasn't possible to do any real-world validation of their designs.
But why? Is it because they were keeping it a secret? Buildzoid is right; this is Nvidia's fault.