Next Nvidia GPU architecture?

Discussion in 'Videocards - NVIDIA GeForce' started by IhatebeingAcop, Aug 27, 2015.

  1. IhatebeingAcop

    IhatebeingAcop Master Guru

    Messages:
    263
    Likes Received:
    3
    GPU:
    Gigabyte RTX 2080
    Is the next release going to be the huge leap in architecture? If I remember right, the next GPU is supposed to be 40% better or some crazy number like that. Also, I'm not a cop anymore! **** THAT JOB!

    Best
    Adam
     
  2. JonasBeckman

    JonasBeckman Ancient Guru

    Messages:
    17,564
    Likes Received:
    2,961
    GPU:
    XFX 7900XTX M'310
    It should be a jump from the current 28 nm node down to, I think, 20 nm or 16 nm FinFET, which will also allow for further enhancements and better power usage.

    Should also be a jump from GDDR5 to HBM, and I think it's HBM gen 2 at that, allowing for up to 8 GB instead of the current 4 and at higher bandwidth, so that could bring some nice performance boosts too.
    (Though I believe GPU clock speed is still more important, this will alleviate any memory bandwidth bottlenecks nicely.)
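    Just for a rough sense of scale, here's a back-of-envelope bandwidth comparison (a minimal sketch; the GDDR5 and HBM figures are the published per-pin rates, and the 4-stack HBM2 configuration is an assumption rather than any confirmed card spec):
    Code:
    // Peak memory bandwidth = (bus width in bytes) x (per-pin data rate).
    // Figures below are published specs / assumptions, not confirmed card configs.
    #include <cstdio>

    double peak_gbs(int bus_width_bits, double data_rate_gbps) {
        return bus_width_bits / 8.0 * data_rate_gbps;  // result in GB/s
    }

    int main() {
        std::printf("GDDR5, 256-bit @ 7 Gbps (GTX 980):  %.0f GB/s\n", peak_gbs(256, 7.0));
        std::printf("HBM1, 4 stacks @ 1 Gbps (Fury X):   %.0f GB/s\n", peak_gbs(4 * 1024, 1.0));
        std::printf("HBM2, 4 stacks @ 2 Gbps (assumed):  %.0f GB/s\n", peak_gbs(4 * 1024, 2.0));
        return 0;
    }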
     
  3. nhlkoho

    nhlkoho Guest

    Messages:
    7,754
    Likes Received:
    366
    GPU:
    RTX 2080ti FE
    HBM2 will be available in capacities up to 32 GB, but that would probably only be for Quadro users and whatever the next Titan model will be.
     
  4. fantaskarsef

    fantaskarsef Ancient Guru

    Messages:
    15,759
    Likes Received:
    9,652
    GPU:
    4090@H2O
    I'm curious to see Nvidia's implementation AND performance of HBM2 cards. Also, Pascal will bring several steps towards more unified and accessible hardware on the green team too, opening up some chances for low-level access for devs.

    Then again, what I'm really looking forward to is Volta.
     

  5. thatguy91

    thatguy91 Guest

    It's at least 16 nm, if not 14 nm. It could be that AMD will be on 14 nm and Nvidia on 16 nm, or vice versa; the information available regarding this isn't definitive. It's most certainly not 20 nm though. The current Nvidia and AMD cards were supposed to be 20 nm, but the process was apparently a failure, and it wouldn't have been economical for them to produce cards on it. By the time the process had been perfected it would have been time for the transition to 14/16 nm, which is why I believe that instead of wasting resources and development on improving 20 nm they decided to go straight to 14/16 nm.

    The 32 GB of RAM etc. is likely only for the super-enthusiast cards; the mainstream cards, which I would classify as the GTX 970 and the R9 380/390/390X etc., will likely be, say, 4 GB, with 8 GB versions available at extra cost. It all depends on the card's memory system; it could be 6 GB/12 GB, or with HBM2 maybe something a little less conventional like 5 GB/10 GB. It would simply be cost prohibitive for standard cards to have 32 GB of HBM2 memory, at least for the time being.

    It would be nice to have HBM2 as system memory; DDR4 doesn't seem to be very effective, and Skylake doesn't seem to make much better use of anything faster than around 2400 MHz at low latency. If HBM2 came to system RAM it wouldn't have to be as fast, so I think it could be cost effective in that respect.
     
    Last edited by a moderator: Aug 28, 2015
  6. Thalyn

    Thalyn Guest

    The various manufacturers use different measurements for their process nodes. As best I've been able to find out, the 14 nm and 16 nm FinFET designs are actually both the same physical size, and thus have near-identical electrical attributes. Of course, that won't stop the marketing of whichever company happens to be using the one branded as 14 nm from pushing it as a major selling point, even though it's meaningless.

    HBM would need to be built into the CPU to have any appreciable benefit. As soon as it becomes user-upgradable, the signalling tolerances have to be increased to such a point that the extra bandwidth's benefit is lost to latency and redundancy.

    Of course, the concept of an i7 6700K with 16 GB of HBM2 on-die is very appealing. It would also offer a much-needed boost to on-die GPU performance for those who happen to be using such a thing.
     
  7. Barry J

    Barry J Ancient Guru

    Messages:
    2,803
    Likes Received:
    152
    GPU:
    RTX2080 TRIO Super
    An i7 with HBM and no onboard GPU would be a win for me.

    Example:
    i7 4790K with 16 GB of HBM

    I believe all K models should have no onboard GPU; they're enthusiast CPUs, and enthusiasts will have a dedicated GPU.
     
  8. thatguy91

    thatguy91 Guest

    Absolutely. If they didn't have the GPU on board, they could fit more cores :). AMD are doing that with Zen: on the same socket you can have either a 4-core, 8-thread CPU with an APU, or a full 8-core CPU. Or so the rumour is, anyway :). Intel could have done this with Skylake; even if socket 1151 as it is doesn't support it, it's a new socket and chipset, and they could have added it easily enough. The main reason you do not see a 6-core Skylake on socket 1151 is socket 2011: socket 2011 is kind of redundant if you can have a 6-core socket 1151 CPU. Having 6 cores on the CPU wouldn't be that much more expensive to make either, since you aren't spending the money on the graphics core, onboard graphics memory, etc. Of course, if such a CPU did exist they'd charge through the roof for it, because they can.

    Now back to AMD: you might be wondering why the APU version is still an attractive option. This depends on the rumour of full driver-level load balancing for all graphics workloads, not just those specifically programmed via DirectX 12. That means you could couple an APU with a mid-level AMD Radeon card and get high-end graphics performance. The benefit of this diminishes with faster discrete cards, but it's still present. This will be Nvidia's main competitive threat.
     
  9. 0blivious

    0blivious Ancient Guru

    Messages:
    3,301
    Likes Received:
    824
    GPU:
    7800 XT / 5700 XT

    To be honest, if it's only 40% better (across the range of cards), I'll be disappointed. We used to get nearly double the performance with a new card series. Pascal (the next architecture) should be in that neighborhood to meet the hype.

    Last time I bought a new iteration of a video card that doubled my framerate was 7900GT-->8800GT (2008?). Hopefully those days are coming back. :)
     
  10. signex

    signex Ancient Guru

    Messages:
    9,071
    Likes Received:
    313
    GPU:
    RTX 4070 Super
    I hated those days tbh, my 7900GT and 8800GTX became outdated way too fast.

    Now older cards can still hold their own.

    I get it, PC gaming IS an expensive hobby, but if you can save money, all the better.
     

  11. Barry J

    Barry J Ancient Guru

    Messages:
    2,803
    Likes Received:
    152
    GPU:
    RTX2080 TRIO Super
    Intel deserves to lose the performance crown. Their lack of innovation is disappointing and the small increases in performance are a joke. I really want Zen to be an Intel killer; maybe then we will get a CPU that is worth having, rather than having to skip a few generations to get a worthwhile upgrade.
     
  12. Turanis

    Turanis Guest

    Messages:
    1,779
    Likes Received:
    489
    GPU:
    Gigabyte RX500
    Last edited: Aug 30, 2015
  13. Spets

    Spets Guest

    Messages:
    3,500
    Likes Received:
    670
    GPU:
    RTX 4090
  14. nhlkoho

    nhlkoho Guest

    Messages:
    7,754
    Likes Received:
    366
    GPU:
    RTX 2080ti FE
    Intel pushes out incremental updates to their CPUs because AMD just can't compete with them. If Intel used their funds and R&D team to their full potential, they would put AMD out of business. That's not something consumers or Intel wants to happen.
     
  15. Barry J

    Barry J Ancient Guru

    Messages:
    2,803
    Likes Received:
    152
    GPU:
    RTX2080 TRIO Super


    I agree, AMD going under would be very bad for consumers; Intel would get even slower.
     

  16. Loophole35

    Loophole35 Guest

    Messages:
    9,797
    Likes Received:
    1,161
    GPU:
    EVGA 1080ti SC
    Naw, it fits his narrative too well to stop using it.
     
  17. -Tj-

    -Tj- Ancient Guru

    Messages:
    18,103
    Likes Received:
    2,606
    GPU:
    3080TI iChill Black
    Why such a butthurt response? If a GPU doesn't support async compute then it is bad by default; many other engines could use this feature too, and NV will be left behind in the dust.
    They support only a basic, halfway async method (no mixed mode; only Maxwell 2 does something), at least that's what I see from the table comparison below in the AnandTech article...

    @ async compute
    http://www.anandtech.com/show/9124/amd-dives-deep-on-asynchronous-shading


    Same BS as NV pulled with Fermi and Kepler, saying those DX11 features were not needed, just so they could cut corners to save some extra $$$ and still sell them as $$$$.
     
    Last edited: Aug 30, 2015
  18. Barry J

    Barry J Ancient Guru

    Messages:
    2,803
    Likes Received:
    152
    GPU:
    RTX2080 TRIO Super
    On a side note, part of the reason for AMD's presentation is to explain their architectural advantages over NVIDIA, so we checked with NVIDIA on queues. Fermi/Kepler/Maxwell 1 can only use a single graphics queue or their complement of compute queues, but not both at once – early implementations of HyperQ cannot be used in conjunction with graphics. Meanwhile Maxwell 2 has 32 queues, composed of 1 graphics queue and 31 compute queues (or 32 compute queues total in pure compute mode). So pre-Maxwell 2 GPUs have to either execute in serial or pre-empt to move tasks ahead of each other, which would indeed give AMD an advantage.

    GPU Queue Engine Support
    GPU                                   Graphics/Mixed Mode      Pure Compute Mode
    AMD GCN 1.2 (285)                     1 Graphics + 8 Compute   8 Compute
    AMD GCN 1.1 (290 Series)              1 Graphics + 8 Compute   8 Compute
    AMD GCN 1.1 (260 Series)              1 Graphics + 2 Compute   2 Compute
    AMD GCN 1.0 (7000/200 Series)         1 Graphics + 2 Compute   2 Compute
    NVIDIA Maxwell 2 (900 Series)         1 Graphics + 31 Compute  32 Compute
    NVIDIA Maxwell 1 (750 Series)         1 Graphics               32 Compute
    NVIDIA Kepler GK110 (780/Titan)       1 Graphics               32 Compute
    NVIDIA Kepler GK10x (600/700 Series)  1 Graphics               1 Compute

    http://www.anandtech.com/show/9124/amd-dives-deep-on-asynchronous-shading
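    To make the queue counts above a bit more concrete, here's a minimal D3D12 sketch of how an application requests async compute: it creates a separate compute queue next to the usual direct (graphics) queue, and whether work on the two actually overlaps is down to the driver and hardware, which is what the table is comparing. (Illustrative only; `device` is assumed to be an already-created ID3D12Device.)
    Code:
    #include <d3d12.h>
    #include <wrl/client.h>
    using Microsoft::WRL::ComPtr;

    // Creates the usual graphics ("direct") queue plus a separate compute queue.
    // Submitting compute work on its own queue is what lets the GPU scheduler
    // overlap it with graphics work, if the hardware supports mixed mode.
    void CreateQueues(ID3D12Device* device,
                      ComPtr<ID3D12CommandQueue>& gfxQueue,
                      ComPtr<ID3D12CommandQueue>& computeQueue)
    {
        D3D12_COMMAND_QUEUE_DESC gfxDesc = {};
        gfxDesc.Type = D3D12_COMMAND_LIST_TYPE_DIRECT;       // draw + compute + copy
        device->CreateCommandQueue(&gfxDesc, IID_PPV_ARGS(&gfxQueue));

        D3D12_COMMAND_QUEUE_DESC computeDesc = {};
        computeDesc.Type = D3D12_COMMAND_LIST_TYPE_COMPUTE;  // compute + copy only
        device->CreateCommandQueue(&computeDesc, IID_PPV_ARGS(&computeQueue));
    }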
     
  19. Alessio1989

    Alessio1989 Ancient Guru

    Messages:
    2,952
    Likes Received:
    1,244
    GPU:
    .
    Hope to see no more of that embarrassing SLI bridge...
     
  20. Spets

    Spets Guest

    Messages:
    3,500
    Likes Received:
    670
    GPU:
    RTX 4090
    What a Hunter-like thing to say; because I have a differing opinion on the matter, it's a butthurt response?
    A pre-alpha benchmark from a company that was willing to sabotage results in a prior benchmark shouldn't be held up as an end-all result; it's perfectly fine to take into consideration, but that's about it at this stage. Personally I prefer multiple tests and sources before making my conclusions. I guess that's why we have always had different opinions, since you like to jump on the one result.

    For the record, Maxwell 2 cards do have mixed queues.
     
