NVIDIA Next-Gen Ampere GPUs to arrive in 2020 - based on Samsung 7nm EUV process

Discussion in 'Frontpage news' started by Hilbert Hagedoorn, Jun 5, 2019.

  1. Mpampis

    Mpampis Master Guru

    Messages:
    249
    Likes Received:
    231
    GPU:
    RX 5700 XT 8GB
    Well, the next release from nVidia is going to be a tricky one.
    Right now, an RTX 2080 Ti can only do 60+ fps with RT on at 1080p (and decent 1440p), and that's paired with an i9-9900K.
    Since nVidia will certainly price Ampere way above Turing, they have to deliver 60 fps at 4K. I don't think people would spend ~$1,500 just to get 75 fps at 1440p instead of 60 fps (quick math below).
    Plus, Samsung's 7nm EUV process hasn't been proven yet (besides their own test samples), so that's a big question too.
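
    As a quick sanity check on that value argument, here's the math in Python (the ~$1,200 current price and the ~$1,500 / 75 fps next-gen figures are rough assumptions from the scenario above, not real prices):

        # Rough value comparison; all numbers are hypothetical, from the post above.
        current = {"price": 1200.0, "fps_1440p": 60}   # ~2080 Ti today (approx.)
        nextgen = {"price": 1500.0, "fps_1440p": 75}   # speculated Ampere flagship

        fps_gain = nextgen["fps_1440p"] / current["fps_1440p"] - 1  # +25% frames
        price_gain = nextgen["price"] / current["price"] - 1        # +25% price
        print(f"+{fps_gain:.0%} fps for +{price_gain:.0%} price")   # fps per $ barely moves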
     
  2. BuildeR2

    BuildeR2 Ancient Guru

    Messages:
    3,208
    Likes Received:
    437
    GPU:
    ASUS 4090 TUF OG OC
    My thoughts exactly.
     
  3. illrigger

    illrigger Master Guru

    Messages:
    340
    Likes Received:
    120
    GPU:
    Gigabyte RTX 3080
    Yeah, that's not going to happen until someone competes with them at the high end, and it doesn't look like that's happening anytime soon. The mid-range and lower is going to see a huge shake-up in the next year, but the 2080 Ti is going to remain untouchable. There probably isn't enough of a market in the above-$700 range for AMD to even bother trying (unless the part also has some sort of datacenter application they can make real money from).

    As for Intel Xe, I'll believe it when it's in reviewers' hands and shown to be faster than a 1080p@60-capable card when they ship in a year or two. My bet is that's about all they'll manage, and even that depends on them getting a process shrink working that can actually turn out enough wafers. They already dedicate a pretty big chunk of silicon on their current CPUs to graphics, and their top-end Iris is an order of magnitude slower than even the lowest-end dedicated GPUs from AMD and nVidia. It's possible they somehow have a miracle waiting in the wings, but it just seems... unlikely.
     
  4. shamus21

    shamus21 Member Guru

    Messages:
    144
    Likes Received:
    25
    GPU:
    0
    20 to 30%, maybe more, is possible, but steering the chip design toward ray tracing improvements will cut that percentage down a lot. The real problem with this year's new RTX generation is pricing.
     

  5. yasamoka

    yasamoka Ancient Guru

    Messages:
    4,875
    Likes Received:
    259
    GPU:
    Zotac RTX 3090
    Why would that be the case?
     
  6. Jawnys

    Jawnys Master Guru

    Messages:
    225
    Likes Received:
    55
    GPU:
    asus tuf oc 3090
    Honestly, if they had made a GTX 2080 Ti at the price of the RTX 2080, I would probably have upgraded from my 1080 Ti, but the pricing of the 2080 Ti makes it a no-go.
     
    HARDRESET likes this.
  7. HARDRESET

    HARDRESET Master Guru

    Messages:
    890
    Likes Received:
    417
    GPU:
    4090 ZOTAEA /1080Ti
    My Gigabyte 1080 Ti Gaming OC cost me $629.00 new. Double the price for a 30 to 40% gain with the 2080 Ti? No thanks.
     
  8. Petr V

    Petr V Master Guru

    Messages:
    358
    Likes Received:
    116
    GPU:
    Gtx over 9000
    If nVidia keeps this nonsense pricing, they will reliably kill their new-gen products.
    750 EUR for a Lightning is a good offer.
     
  9. Kaotik

    Kaotik Guest

    Messages:
    163
    Likes Received:
    4
    GPU:
    Radeon RX 6800 XT
    Just to nitpick, but wasn't IBM NVIDIA's main manufacturer for the infamous FX series back in 2003?
     
  10. Astyanax

    Astyanax Ancient Guru

    Messages:
    17,016
    Likes Received:
    7,355
    GPU:
    GTX 1080ti
    I wonder if this ridiculous, impossible idea will ever stop being postulated.
     

  11. Stefem

    Stefem Active Member

    Messages:
    50
    Likes Received:
    7
    GPU:
    16
    The laws of physics may suck, but they're there and can't be overcome; better to make peace with them...
     
  12. Stefem

    Stefem Active Member

    Messages:
    50
    Likes Received:
    7
    GPU:
    16
    Yep, and they got burned...
     
  13. DrKeo

    DrKeo Guest

    This is very exciting. Turing was a huge architectural change for NVIDIA; it made the die huge and expensive, so performance per $ wasn't great, but it's a very advanced piece of hardware that begs for a die shrink. The last time we got a real die shrink (TSMC's 12nm and 16nm are basically the same) was the 1000 series, and performance roughly doubled. For instance, the GTX 970, which did 3.49 TFLOP/s on a 398mm^2 die at 28nm, became the GTX 1070 at 16nm, doing 6.46 TFLOP/s on a smaller 314mm^2 die. That's 85% more performance on a 21% smaller die. So I have very high hopes for the 3000 series: the RTX 3070 could have 2080 Ti levels of performance, and I'm expecting lower prices considering the 2000 series hasn't sold well. An RTX 3070 with 2080 Ti performance for $400-$500? I'm in.
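
    A quick back-of-the-envelope check of those figures in Python (numbers exactly as quoted above):

        # GTX 970 (28nm) vs. GTX 1070 (16nm), figures from the post above.
        gtx970_tflops, gtx970_mm2 = 3.49, 398
        gtx1070_tflops, gtx1070_mm2 = 6.46, 314

        perf_gain = gtx1070_tflops / gtx970_tflops - 1  # ~0.85 -> 85% more throughput
        area_drop = 1 - gtx1070_mm2 / gtx970_mm2        # ~0.21 -> 21% smaller die
        print(f"{perf_gain:.0%} more TFLOP/s on a {area_drop:.0%} smaller die")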
     
    Stefem likes this.
  14. Astyanax

    Astyanax Ancient Guru

    Messages:
    17,016
    Likes Received:
    7,355
    GPU:
    GTX 1080ti
    People claiming that RT made the die huge are wrong and need to stop speaking.
     
  15. Undying

    Undying Ancient Guru

    Messages:
    25,358
    Likes Received:
    12,756
    GPU:
    XFX RX6800XT 16GB
    Isn't the Turing die much larger than Pascal's? I wonder why, if it's not the RT.
     

  16. Astyanax

    Astyanax Ancient Guru

    Messages:
    17,016
    Likes Received:
    7,355
    GPU:
    GTX 1080ti
    Google it.
     
  17. Denial

    Denial Ancient Guru

    Messages:
    14,206
    Likes Received:
    4,118
    GPU:
    EVGA RTX 3080
    I did and they said it's the RT.

    FWIW, I don't think it's the RT; I think it's the Tensor cores, which aren't required by RT. But telling people to shut up and google the answer when 99.9% of the internet thinks the answer is RT is pretty dumb. There have been multiple threads and posts, and even AnandTech more or less concludes that a major part of the die-size difference between the 1660 and 2060 is RT/Tensor. So I'm not sure what you expect them to find but RT.

    The shrink will help with yields, but the cost per transistor on 7nm EUV is higher than on 16/14/12nm, which will drive up the cost of the chips regardless. I don't think 7nm is going to be as big a jump as people think it is, at least not without the price inflating significantly.
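
    The yield half of that trade-off is easy to sketch. Here's a minimal dies-per-wafer estimate in Python using a simple Poisson yield model; the wafer cost and defect density are invented placeholders, not real 7nm figures:

        import math

        # All numbers hypothetical, for illustration only.
        wafer_cost = 10_000.0   # $ per 300mm wafer (placeholder)
        defect_density = 0.2    # defects per cm^2 (placeholder)
        die_area_cm2 = 5.45     # ~545 mm^2, roughly a TU104-sized die

        wafer_area_cm2 = math.pi * (30.0 / 2) ** 2             # 300mm wafer
        dies_per_wafer = int(wafer_area_cm2 / die_area_cm2)    # ignores edge loss
        yield_rate = math.exp(-defect_density * die_area_cm2)  # Poisson yield model
        cost_per_good_die = wafer_cost / (dies_per_wafer * yield_rate)

        print(f"{dies_per_wafer} dies/wafer, {yield_rate:.1%} yield, "
              f"${cost_per_good_die:.0f} per good die")

    A shrink raises dies-per-wafer linearly and yield exponentially, but if the per-wafer price of 7nm EUV climbs faster, the cost per transistor can still go up, which is exactly the point above.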
     
    Last edited: Jun 6, 2019
    Maddness, yasamoka and Fox2232 like this.
  18. Astyanax

    Astyanax Ancient Guru

    Messages:
    17,016
    Likes Received:
    7,355
    GPU:
    GTX 1080ti
    RT cores are tiny fixed-function units.

    The size comes from the increased cache, the Tensor block (which doubles as the dedicated FP16 cores), and a hell of a lot more SMs per GPC.
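
    To put rough numbers on that, here is a purely hypothetical area budget for a Turing-class die; the shares below are invented for illustration, not measured:

        # Invented area shares, only to illustrate the relative sizes argued above.
        area_share = {
            "SMs (FP32/INT32, schedulers)": 0.45,
            "L1/L2 cache": 0.18,
            "Memory controllers / IO / NVLink": 0.17,
            "Tensor blocks (double as FP16 units)": 0.12,
            "RT cores (fixed function)": 0.08,
        }
        for block, share in sorted(area_share.items(), key=lambda kv: -kv[1]):
            print(f"{block:40s} {share:.0%}")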
     
    Last edited: Jun 6, 2019
  19. Denial

    Denial Ancient Guru

    Messages:
    14,206
    Likes Received:
    4,118
    GPU:
    EVGA RTX 3080
    I agree, but I've had this argument before with people on this forum, and I took your position: changes outside just RT are what mostly inflate the die size. It wasn't really googleable, though; you had to do your own research and come to your own conclusion. I found that most "tech review" sites actually conclude the opposite, which you can see in various articles on the 1660. I think this is mostly because everyone believes the Tensor cores are used for RT, but as of right now not a single game uses them outside of DLSS. So theoretically they could build an RTX GPU sans Tensor/DLSS and save some space.
     
  20. Astyanax

    Astyanax Ancient Guru

    Messages:
    17,016
    Likes Received:
    7,355
    GPU:
    GTX 1080ti
    It's funny that people conclude this, since BFV's RT doesn't use the Tensor cores at all. But yes, they can be used in both RT and non-RT cases without being specifically in Tensor/denoise mode, since they double as the FP16 units on RTX GPUs.
     
