NVIDIA Could Tease Its Next-Gen Ampere GPU on 7nm at GTC 19

Discussion in 'Frontpage news' started by Rich_Guy, Mar 15, 2019.

  1. Astyanax

    Astyanax Ancient Guru

    Messages:
    17,040
    Likes Received:
    7,381
    GPU:
    GTX 1080ti
It was expected normalization; any stock watcher knew all too well that the value was artificial and it wasn't a good time to buy. Sucks to be anyone who did, but that was their own foolishness.
     
  2. HardwareCaps

    HardwareCaps Guest

    Messages:
    452
    Likes Received:
    154
    GPU:
    x
That's wrong. Read what I wrote above.
     
  3. Astyanax

    Astyanax Ancient Guru

    Messages:
    17,040
    Likes Received:
    7,381
    GPU:
    GTX 1080ti
The slow sales of GPUs had nothing to do with the normalization.

Quit with the conspiracy agenda.
     
  4. HardwareCaps

    HardwareCaps Guest

    Messages:
    452
    Likes Received:
    154
    GPU:
    x

  5. HardwareCaps

    HardwareCaps Guest

    Messages:
    452
    Likes Received:
    154
    GPU:
    x
Let's see: a 50% loss of value - fact.
Gaming revenue down 45% - fact.
And you talk about "conspiracy". Haha.
     
  6. Killian38

    Killian38 Guest

    Messages:
    312
    Likes Received:
    88
    GPU:
    1060
    fantaskarsef and Rich_Guy like this.
  7. alanm

    alanm Ancient Guru

    Messages:
    12,274
    Likes Received:
    4,477
    GPU:
    RTX 4080
Nvidia's ideal target market :D.
     
  8. HardwareCaps

    HardwareCaps Guest

    Messages:
    452
    Likes Received:
    154
    GPU:
    x
     
    RzrTrek and Rich_Guy like this.
  9. Denial

    Denial Ancient Guru

    Messages:
    14,207
    Likes Received:
    4,121
    GPU:
    EVGA RTX 3080
He's a billionaire because he owns shares of the company; those aren't liquid assets.

You're wrong about GP100. Ignoring the HBM, it's the only Pascal chip to use the FPx2 cores that do two FP16 operations per cycle per core (which was used for training in the datacenter). Volta is not older than GP102. Volta doesn't clock as high as Turing or Pascal because it has so many CUDA cores. GDDR6 doesn't perform better than HBM2.
     
    yasamoka and fantaskarsef like this.
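The FPx2 point above can be sanity-checked with simple arithmetic. This sketch uses the published Tesla P100 (GP100) specs purely for illustration; the clock and core count are the reference boost figures, not guaranteed sustained rates.

```python
# Illustrative throughput math for GP100 (Tesla P100 published specs).
cuda_cores = 3584           # FP32 CUDA cores on Tesla P100
boost_clock_ghz = 1.48      # ~1480 MHz reference boost clock
flops_per_core_fp32 = 2     # one fused multiply-add per cycle = 2 FLOPs

fp32_tflops = cuda_cores * boost_clock_ghz * flops_per_core_fp32 / 1000
# FPx2: each core retires two FP16 FMAs per cycle, doubling the rate
fp16_tflops = fp32_tflops * 2

print(f"FP32: {fp32_tflops:.1f} TFLOPS")  # ~10.6
print(f"FP16: {fp16_tflops:.1f} TFLOPS")  # ~21.2
```

That doubled FP16 rate is what made GP100 (and not GP102, which runs FP16 at a tiny fraction of FP32 rate) attractive for datacenter training.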
  10. Astyanax

    Astyanax Ancient Guru

    Messages:
    17,040
    Likes Received:
    7,381
    GPU:
    GTX 1080ti
It's almost like NVIDIA's primary market isn't gamers these da..... oh wait, it isn't.
     
    fantaskarsef and Dragam1337 like this.

  11. Dragam1337

    Dragam1337 Ancient Guru

    Messages:
    5,535
    Likes Received:
    3,581
    GPU:
    RTX 4090 Gaming OC
All the more reason for him to want a payout ;)

I retract my statement regarding GP100 and GP102... I misread it as the new Turing chips /doh

If it were simply due to chip size that Volta doesn't clock higher, then Turing ought to clock even lower, as the chip is even bigger... but it doesn't. I think it has a lot to do with the architecture rather than the number of CUDA cores.
For general purpose you are absolutely right that HBM2 performs better, but games tend to like high clock speeds on VRAM more than a wide bus. Of course I can't find the article, but the AMD Fury X was actually limited performance-wise by the low clock speed of its HBM. HBM2 is clocked twice as high, yes, but GDDR6 is clocked MUCH higher. It would be interesting to see a comparison between the Quadro Turing GPU using HBM2 and the 2080 Ti using GDDR6, with both GPUs clocked to the same number of TFLOPS. My guess is that the 2080 Ti would win in most situations (there are a few games that like bandwidth a whole lot, and HBM2 might win there).
     
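The clock-versus-bus-width trade-off in the post above comes down to one product: peak bandwidth is bus width times per-pin data rate. A minimal sketch, using the published specs of the cards mentioned (treat the per-pin rates as nominal figures, not measured values):

```python
# Peak bandwidth (GB/s) = bus width (bits) / 8 * per-pin data rate (Gbps)
def bandwidth_gbps(bus_width_bits, data_rate_gbps):
    return bus_width_bits / 8 * data_rate_gbps

# Published nominal specs, for illustration:
fury_x_hbm1  = bandwidth_gbps(4096, 1.0)   # HBM1: 500 MHz DDR -> 1 Gbps/pin
p100_hbm2    = bandwidth_gbps(4096, 1.43)  # HBM2 on Tesla P100: ~715 MHz DDR
rtx2080ti_g6 = bandwidth_gbps(352, 14.0)   # GDDR6 at 14 Gbps/pin

print(f"Fury X HBM1:   {fury_x_hbm1:.0f} GB/s")   # 512
print(f"P100 HBM2:     {p100_hbm2:.0f} GB/s")     # 732
print(f"2080 Ti GDDR6: {rtx2080ti_g6:.0f} GB/s")  # 616
```

The point of the comparison: HBM reaches its bandwidth through a very wide (4096-bit) bus at low clocks, while GDDR6 gets close with a narrow (352-bit) bus at much higher per-pin rates - so which one "wins" in a game can depend on access patterns, not just the headline GB/s.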
  12. HardwareCaps

    HardwareCaps Guest

    Messages:
    452
    Likes Received:
    154
    GPU:
    x
Which is exactly the problem.... they fail to deliver in the gaming market because of their focus on emerging markets like AI and datacenters.
What do you think "Tensor cores" are? They're NVIDIA's AI acceleration solution; they just try to stick it into the gaming market so everyone uses their AI cores, which are very expensive to develop.
     
    Dragam1337 likes this.
  13. Denial

    Denial Ancient Guru

    Messages:
    14,207
    Likes Received:
    4,121
    GPU:
    EVGA RTX 3080
It's bigger due to the Tensor/RT cores, which can be dispatched to concurrently but don't actually run concurrently. If you fire up a graphics workload while running PyTorch (or vice versa), you can watch the performance of both drop.
     
  14. Dragam1337

    Dragam1337 Ancient Guru

    Messages:
    5,535
    Likes Received:
    3,581
    GPU:
    RTX 4090 Gaming OC
You might be right that it's simply the number of CUDA cores, but as far as I've understood, chip instability (and thus the inability to clock high) stems from the overall chip size being too big rather than from the size of any specific component on the chip. My knowledge of how the chips work at a low level is rather limited, so yeah, you might be right - it just seems odd to me, as it has never been an issue in the past. The biggest chips always clocked as high as the smaller ones, granted they had sufficient cooling.
     
  15. fantaskarsef

    fantaskarsef Ancient Guru

    Messages:
    15,759
    Likes Received:
    9,649
    GPU:
    4090@H2O
At least he knows; he wears the green with pride. ;)
     