Nvidia Turing GeForce 2080 (Ti) architecture review

Discussion in 'Frontpage news' started by Hilbert Hagedoorn, Sep 14, 2018.

  1. Fox2232

    Fox2232 Guest

    Messages:
    11,808
    Likes Received:
    3,371
    GPU:
    6900XT+AW@240Hz
@tunejunky : Nvidia has been working on MCM for some time. The fact that you don't see it means they have some kind of hiccup.
AMD mentioned some issues too. So I wonder which of them will get the 1st MCM GPU to us.
     
  2. tunejunky

    tunejunky Ancient Guru

    Messages:
    4,451
    Likes Received:
    3,071
    GPU:
    7900xtx/7900xt
"chiplets" are SoCs, just re-branded to seem higher end.
and no, they couldn't have made a big Pascal w/ 6k CUDA cores.

they are a secondary or tertiary client for chip fabs.
period.

Nvidia had refused an initial partnership with TSMC, the best and largest chip fab on earth (Intel is #2 now) with the most advanced process nodes. Right now TSMC is booked solid for over six months just for Apple, Qualcomm, and AMD...they ponied up the capitalization and reap the benefit.
all other fabs are chasing the node shrink...and GloFo can't do the numbers Nvidia would need at a price that would allow profitability.

as I've said before, RTX is a rush job; the additional delay and the lack of NDA'd developers with full ray tracing just underscore my point. this is very unlike Nvidia, which prefers the details buttoned up.

Pascal should have been a refresh at 12-14nm, but with a further node shrink and a new architecture coming from AMD, Nvidia could not resist competitive sparring and beat AMD to the punch.
     
  3. tunejunky

    tunejunky Ancient Guru

    Messages:
    4,451
    Likes Received:
    3,071
    GPU:
    7900xtx/7900xt
and "FreeSync" is AMD's, which left it open source just SO it could become part of the standards.
     
  4. fr33k

    fr33k Ancient Guru

    Messages:
    2,260
    Likes Received:
    132
    GPU:
    MSI 4080 VENTUS OC
I'd like to know why people keep calling the 2080 a mid-range card.
     
    yasamoka likes this.

  5. tunejunky

    tunejunky Ancient Guru

    Messages:
    4,451
    Likes Received:
    3,071
    GPU:
    7900xtx/7900xt

oh I know Nvidia has been working on MCM; AMD is just months, if not years, ahead. their R&D roadmap has been adhered to...even ahead of schedule in some regards.

as you probably know, MCM will see industrial penetration 1st...probably for 1.5-2 years before the consumer market, mainly because games don't need this level of power (yet).
     
  6. Robbo9999

    Robbo9999 Ancient Guru

    Messages:
    1,858
    Likes Received:
    442
    GPU:
    RTX 3080
    I went 8600M GT (whatever architecture that's called), Fermi (GTX 560M), Kepler (GTX 670MX), Pascal (GTX 1070 - desktop). I imagine Turing I will skip, like I did Maxwell. As you can see I've converted from laptop to desktop gaming!
     
    tunejunky likes this.
  7. fr33k

    fr33k Ancient Guru

    Messages:
    2,260
    Likes Received:
    132
    GPU:
    MSI 4080 VENTUS OC
I'm trying to follow what you are saying, but wiki claims the 580 is a GF110 and the 560 was a GF114. So I'm not really following you.
     
  8. alanm

    alanm Ancient Guru

    Messages:
    12,272
    Likes Received:
    4,474
    GPU:
    RTX 4080
Don't read too much into the 106 vs 104 designation for Turing parts; it doesn't necessarily follow the same relationship as previous gens' 104s vs 106s. A full TU106 with the same SPs, SMs, bus width, etc. as a cut-down 104 with the same units would perform the same. So it's a bit of a silly argument if you ask me.
     
  9. -Tj-

    -Tj- Ancient Guru

    Messages:
    18,103
    Likes Received:
    2,606
    GPU:
    3080TI iChill Black
Why, this time around they made it bad; a 106 SKU for a 2070 has greed all over it.. Those used to be mainstream midrange chips.. Kepler was the first to use midrange, now Turing. OK, they're trying to say look how powerful these chips are, but we all know that it's not really true.

Even TU104 is not that special, especially compared to the bigger TU102. It looks tiny; TU106 looks like some smear spec-wise xD


I know I would never buy that for 500€+. The only OK chip is TU102, but that's not worth that money either, 1140€ nope.
     
    tunejunky likes this.
  10. Dragondale13

    Dragondale13 Ancient Guru

    Messages:
    1,527
    Likes Received:
    244
    GPU:
    GTX 1070 AMP! • H75
I have to ask...seeing as the 1080 is 180W TDP and the 2080 is 225W TDP (the 2070 being closer to prev. gen at 185W TDP):

Is the performance boost more power driven than architecture driven?
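For what it's worth, the TDP figures quoted above put a ceiling on the "more power" explanation. A quick back-of-envelope sketch (the dictionary and variable names are just for illustration):

```python
# TDP figures as quoted in the post above.
tdp_watts = {"GTX 1080": 180, "RTX 2070": 185, "RTX 2080": 225}

# If the 2080's gains came purely from burning more power, the best case
# would scale with the TDP increase (and real performance rarely scales
# linearly with power, so even this overstates it).
power_headroom = tdp_watts["RTX 2080"] / tdp_watts["GTX 1080"] - 1
print(f"2080 vs 1080 TDP increase: {power_headroom:.0%}")  # prints "25%"
```

So any performance gain beyond that roughly 25% ceiling has to come from somewhere other than the power budget.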
     

  11. XenthorX

    XenthorX Ancient Guru

    Messages:
    5,059
    Likes Received:
    3,439
    GPU:
    MSI 4090 Suprim X
    Was a great read, thanks Hilbert.
     
  12. Robbo9999

    Robbo9999 Ancient Guru

    Messages:
    1,858
    Likes Received:
    442
    GPU:
    RTX 3080
I think the increased power requirement is to run the Ray Tracing (RT) cores and Tensor cores - they're an addition we did not have before, on top of the 'conventional' GPU core that we know from Pascal & previous generations. So to answer your question, it's architecture related.
     
    Dragondale13 likes this.
  13. PrMinisterGR

    PrMinisterGR Ancient Guru

    Messages:
    8,129
    Likes Received:
    971
    GPU:
    Inno3D RTX 3090
    Why not?

    You're wrong, unless TSMC makes custom reticles for random clients.
    NVIDIA Corp.'s Relationship With Taiwan Semiconductor Manufacturing Is Deepening
    Nvidia and TSMC to increase production of 12nm Volta GPUs later this year
    TSMC and NVIDIA Reaffirm Partnership
The last link is from TSMC themselves. According to them, they have shipped more than 200 million NVIDIA GPUs over the last five years. And these are big, expensive chips (even the smaller ones), compared to the tiny, crappy mobile ones. It's a lot of money, and everything indicates that NVIDIA is much more than a "tertiary" partner for TSMC.

Where do you see all these refusals? See above, their collaboration seems to be going fine. Also, the larger the volumes you run, the more profitable you are. GloFo has produced millions and millions of chips over the last years; they are not exactly inexperienced. When you parrot Adored videos, please keep the whole video in mind, not just the spot you want to hear. AMD's (even recent) stock rise cannot even compare to what NVIDIA has made in pure profit over the last couple of years. If they needed that 7nm in the future, they would have had it. It was obvious that Turing was scheduled to be this chip at this time, instead of waiting another year for 7nm.
Also, NVIDIA was burned twice when they tried to introduce a completely new GPU architecture on a smaller node at the same time. The "new arch" + "new node" combo has been particularly painful.

    Rush job with a completely new architecture (the largest departure since Fermi, with stuff like mixed precision, better hardware scheduling etc), new DirectX revision to go with it, and all the assorted NVIDIA-only libraries, without mentioning the actual RTX hardware, lol.

It's quite obvious that Pascal is capped at 2GHz, and if this were just a big Pascal with 6k+ cores it would have wrecked AMD again, if not more.

    No matter the words around it, it is royalty free and part of both the DisplayPort and HDMI specs. You write it in a way as if it was a criminal conspiracy or something.

Because they saw a couple of old Adored videos about Pascal, and they believe that a 545mm² GPU is midrange, lol. For the record, the 1080 is the midrange one; it is 314mm².
     
  14. Dragondale13

    Dragondale13 Ancient Guru

    Messages:
    1,527
    Likes Received:
    244
    GPU:
    GTX 1070 AMP! • H75
Makes sense. With these models being the first of their kind, I'm assuming power tweaks may accompany whatever cards come after. Cool!
     
  15. wavetrex

    wavetrex Ancient Guru

    Messages:
    2,465
    Likes Received:
    2,578
    GPU:
    ROG RTX 6090 Ultra
    "Mid-range" literally means in the middle of the range (not to be confused with "center").
    As long as there is another product (e.g. 2080 Ti) which is "high-end" (observe the word: END - meaning there is nothing else higher than it - currently), anything below that "End" chip (but not at the other end) will be mid-range.

    This has nothing to do with chip size. It could be 1000 mm2 or 5000 mm2, when there is another product with the same role which is bigger and more powerful than itself, then it will be mid-range.

So yea, TL;DR - the 2080 is "mid-range", and so are the 2070, 1080, 1070, 1060, 1050 and everything else down to the bottom-of-the-barrel GT 710, which is "low END".
     
    Last edited: Sep 17, 2018

  16. PrMinisterGR

    PrMinisterGR Ancient Guru

    Messages:
    8,129
    Likes Received:
    971
    GPU:
    Inno3D RTX 3090
    No, mid-range is when you can produce it cheaply enough to hit whatever is the mainstream at the time.

The 1080, hardware-wise, was a medium-sized chip sold for a very high price. The chip itself was cheap and small, so mid-range.

    The 2080 is huge and expensive to make.
     
    Maddness likes this.
  17. Robbo9999

    Robbo9999 Ancient Guru

    Messages:
    1,858
    Likes Received:
    442
    GPU:
    RTX 3080
    Yeah, the next architecture after Turing I think will be on a smaller process (7nm), so it will have more performance and also performance per Watt will be increased.
     
    Dragondale13 likes this.
  18. Fox2232

    Fox2232 Guest

    Messages:
    11,808
    Likes Received:
    3,371
    GPU:
    6900XT+AW@240Hz
I think we are all pretty guilty here, exchanging midrange and mainstream as we see fit.
Mainstream apparently stands for the product(s) that get into the hands of the most end users. And so, when someone says midrange, they usually mean mainstream.
     
    wavetrex likes this.
  19. Embra

    Embra Ancient Guru

    Messages:
    1,601
    Likes Received:
    956
    GPU:
    Red Devil 6950 XT
    So NV will not have a high-end GPU with the 20xx series? Got it. o_O
     
  20. PrMinisterGR

    PrMinisterGR Ancient Guru

    Messages:
    8,129
    Likes Received:
    971
    GPU:
    Inno3D RTX 3090
I don't believe that any of us meant that.
     
