Discussion in 'Frontpage news' started by Rich_Guy, Mar 15, 2019.
Market normalization was fully expected.
This is not some slight normalization; that's 50% of your value. There's no such adjustment.
RTX sales are weak, the GPU market is saturated, and crypto turned into nothing.
It was expected normalization; any stock watcher knew all too well the value was artificial and it wasn't a good time to buy. Sucks to be anyone that did, but that was their own foolishness.
That's wrong. Read what I wrote above.
The slow sales of GPUs had nothing to do with the normalization.
Quit with the conspiracy agenda.
Conspiracy? When you call a 50% drop in share value "normalization", that's conspiracy.
Nvidia admitted it themselves: https://www.tomshardware.com/news/nvidia-rtx-2080-2070-earnings,38618.html
Let's see: 50% loss of value - fact.
Gaming revenue down 45% - fact.
And you talk about "conspiracy". Haha.
I don't care if the top Ampere card is worse than a 1080 Ti and costs $2k. I'll buy at least two of them.
Nvidia's ideal target market.
He's a billionaire because he owns shares of the company; those aren't liquid assets.
You're wrong about GP100. Ignoring the HBM, it's the only Pascal chip to use the FPx2 cores that do two FP16 operations per cycle per core (which was used for training in the datacenter). Volta is not older than GP102. Volta doesn't clock as high as Turing or Pascal because it has so many CUDA cores. GDDR6 doesn't perform better than HBM2.
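The FP16x2 advantage shows up directly in the peak-throughput arithmetic (a rough Python sketch; the helper name is mine, the core count and boost clock are the public Tesla P100 specs):

```python
# Rough sketch: peak throughput behind GP100's FP16x2 ("FPx2") path.
# Each CUDA core issues one FP32 FMA per cycle (2 FLOPs); the FP16x2
# path packs two FP16 operations into the same core, doubling that.

def peak_tflops(cores, clock_ghz, flops_per_core_per_cycle=2):
    """Peak TFLOPS = cores * clock (GHz) * FLOPs issued per core per cycle."""
    return cores * clock_ghz * flops_per_core_per_cycle / 1000

# Tesla P100 (GP100): 3584 CUDA cores, ~1.48 GHz boost clock
fp32 = peak_tflops(3584, 1.48)      # FP32: ~10.6 TFLOPS
fp16 = peak_tflops(3584, 1.48, 4)   # FP16x2 doubles it: ~21.2 TFLOPS
```

That doubling is exactly why the chip mattered for datacenter training, where FP16 is usually good enough.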
It's almost like Nvidia's primary market isn't gamers these da... oh wait, it isn't.
All the more reason for him to want a payout
I retract my statement regarding GP100 and GP102... I misread it as the new Turing chips /doh
If it was simply chip size that keeps Volta from clocking higher, then Turing ought to clock even lower, as the chip is even bigger... but it doesn't. I think it has a lot to do with the architecture rather than the number of CUDA cores.
For general purpose you are absolutely right that HBM2 performs better, but games tend to prefer high clock speeds on VRAM over a wide bus. Of course I can't find the article, but the AMD Fury X was actually performance-limited by the low clock speeds of its HBM. HBM2 is clocked twice as high, yes, but GDDR6 is clocked MUCH higher. It would be interesting to see a comparison between an HBM2-equipped Quadro and a 2080 Ti using GDDR6, with both GPUs clocked to the same number of TFLOPS. My guess is that the 2080 Ti would win in most situations (a few games really love bandwidth, and HBM2 might win out there).
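The wide-and-slow vs narrow-and-fast trade-off is just bus width times per-pin data rate (a rough Python sketch; note the HBM2 Quadro here is the Volta-based GV100, since no Turing Quadro shipped with HBM2, and the figures are the public specs):

```python
# Peak memory bandwidth = (bus width in bits / 8) * effective data rate per pin.

def bandwidth_gb_s(bus_width_bits, data_rate_gbps):
    """Peak bandwidth in GB/s from bus width (bits) and per-pin rate (Gb/s)."""
    return bus_width_bits / 8 * data_rate_gbps

# HBM2: very wide bus, modest per-pin rate (Quadro GV100: 4 stacks x 1024-bit)
hbm2 = bandwidth_gb_s(4096, 1.7)    # ~870 GB/s

# GDDR6: narrow bus, much higher per-pin rate (RTX 2080 Ti: 352-bit at 14 Gb/s)
gddr6 = bandwidth_gb_s(352, 14.0)   # 616 GB/s
```

So on paper the HBM2 part still has more total bandwidth; the question the poster raises is whether games actually use it, or whether latency and clock speed matter more to them.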
Which is exactly the problem... they fail to deliver in the gaming market because of their focus on emerging markets like AI and datacenters.
What do you think "Tensor cores" are? They're Nvidia's AI acceleration solution; they just try to stick them into the gaming market so everyone uses their AI cores, which are very expensive to develop.
It's bigger due to the Tensor/RT cores, which work can be dispatched to concurrently but which don't actually run concurrently. If you fire up a graphics workload while running PyTorch (or vice versa), you can watch the performance of both drop.
You might be right that it's simply the number of CUDA cores, but as far as I've understood, chip instability (and thus inability to clock high) stems from the overall chip size being too big, rather than the size of any specific component of the chip. But my knowledge of how the chips work at a low level is rather limited, so yeah, you might be right. It just seems odd to me, as it has never been an issue in the past: the biggest chips always clocked as high as the smaller ones, granted sufficient cooling.
At least he knows, he wears the green with pride.