Nvidia LiveStream Event Later today

Discussion in 'Frontpage news' started by Hilbert Hagedoorn, May 6, 2016.

  1. Ieldra

    Ieldra Banned

    Messages:
    3,490
    Likes Received:
    0
    GPU:
    GTX 980Ti G1 1500/8000

Funnily enough, what you think doesn't affect performance metrics.

Using game performance as a metric would lead to differing perf/watt values per game...

Think it through...
     
  2. Robbo9999

    Robbo9999 Ancient Guru

    Messages:
    1,528
    Likes Received:
    277
    GPU:
    GTX1070 @2050Mhz
I think you're trying very hard not to see the obvious point right in front of you: Image 8 plots Watts on the X-axis against Relative Gaming Performance on the Y-axis. It's that simple - gaming performance vs. watts. That's the metric of most interest to gamers, and my calculations were based on it. You can stick to your love of ALU numbers if that's what's important to you.
     
    Last edited: May 7, 2016
  3. Ieldra

    Ieldra Banned

    Messages:
    3,490
    Likes Received:
    0
    GPU:
    GTX 980Ti G1 1500/8000

I see what you're referring to very clearly; now explain to me how that's a useful performance metric when it will vary from game to game...

Say Pascal has 1.5x the geometry performance of GM204.

In a geometry-bound game/scene/benchmark it will be 50% faster at power parity.

In a shader-bound scene it will be twice as fast at power parity.

Your definition of perf/watt is as useful as a third nipple on the elbow.

While you're thinking about that, explain how the Fury X is so much faster than the 390X if its geometry performance is identical, its memory bandwidth is very similar, and its ROP count is identical.

Maybe, just maybe, the people who decided to use that as a performance metric thought it through a little more than you.
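To make that concrete, here's a minimal Python sketch using only the made-up 1.5x / 2x figures above and an identical power draw for both cards:

    # Toy numbers only: assume the new GPU has 1.5x geometry throughput and 2x
    # shader throughput versus the old one, and that both draw the same power.
    POWER_WATTS = 180              # identical for both cards (power parity)
    BASE_FPS = 60                  # arbitrary baseline frame rate on the old GPU

    speedup_per_scene = {
        "geometry-bound scene": 1.5,
        "shader-bound scene": 2.0,
    }

    for scene, factor in speedup_per_scene.items():
        old_perf_per_watt = BASE_FPS / POWER_WATTS
        new_perf_per_watt = (BASE_FPS * factor) / POWER_WATTS
        print(scene, round(new_perf_per_watt / old_perf_per_watt, 2))
        # prints 1.5 for the geometry-bound case and 2.0 for the shader-bound one:
        # a game-FPS-based "perf/watt gain" is a different number for every workload.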
     
    Last edited: May 7, 2016
  4. Robbo9999

    Robbo9999 Ancient Guru

    Messages:
    1,528
    Likes Received:
    277
    GPU:
    GTX1070 @2050Mhz
Well, I see your point, but Nvidia created that graph, and presumably they selected a broad range of games to arrive at those average figures. It's their choice which games they used, and I'm sure they wouldn't have picked a suite of games that shows them in a bad light. At this point we can only go off the slides they've shown us, and my extrapolation from that graph in Image 8 only shows about a 65% increase in gaming performance per Watt vs Maxwell - I don't think that's very impressive.
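For clarity, the arithmetic behind that kind of extrapolation is just the ratio of the two perf-per-watt values; the numbers below are placeholders I've picked purely for illustration, not values read from the slide:

    # Placeholder values, NOT read from Nvidia's Image 8 slide.
    maxwell_relative_perf, maxwell_watts = 1.0, 165
    pascal_relative_perf, pascal_watts = 1.7, 180

    maxwell_perf_per_watt = maxwell_relative_perf / maxwell_watts
    pascal_perf_per_watt = pascal_relative_perf / pascal_watts

    gain = pascal_perf_per_watt / maxwell_perf_per_watt - 1
    print(f"perf/Watt increase: {gain:.0%}")   # ~56% with these placeholder inputs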
     

  5. Fox2232

    Fox2232 Ancient Guru

    Messages:
    11,741
    Likes Received:
    3,329
    GPU:
    6900XT+AW@240Hz
This goes exactly as I predicted. A bit smaller than the GTX 980 Ti (in transistor count), with performance gained through higher clocks. A quick release before AMD shows their GPUs.
nVidia went with "better safe than sorry". And people call it a win before they see the competition.

I want to see those people here who replace a GTX 980 Ti with a GTX 1080 just a week before they see Polaris.

And the presentation was entirely "circumstantial"... showing numbers and graphs and comparing under unknown scenarios. Not even the resolution of any comparison was known.
What everyone sane here should do is wait for HH to do his standard and trusted set of tests.

Hell, AMD told more about Polaris in that 3-minute video I linked yesterday (an old video from January) than nVidia did in this super long stream.
     
  6. Solfaur

    Solfaur Ancient Guru

    Messages:
    7,475
    Likes Received:
    942
    GPU:
    MSI GTX 1080Ti Ga.X
Interesting, I suspected it had something to do with better clocks, ha.

I'm just curious if every vendor will do this - for example those expensive AIB cards; those should be the ones that get binned.
     
  7. Fox2232

    Fox2232 Ancient Guru

    Messages:
    11,741
    Likes Received:
    3,329
    GPU:
    6900XT+AW@240Hz
Performance per Watt is only important to mobile gamers, like yourself. It is important to me in a notebook. But on desktop... as everyone says: "If it stays cool, then it can eat 300W."

And down the road, people will get those 250W+ Pascals/Polarises.
     
  8. Robbo9999

    Robbo9999 Ancient Guru

    Messages:
    1,528
    Likes Received:
    277
    GPU:
    GTX1070 @2050Mhz
Yep, Watts are more important to mobile gamers for sure, but desktop cards are still limited by a maximum design wattage too, so it still applies to desktop GPUs, just perhaps to a lesser extent. Performance per Watt is still an important metric for understanding what kind of performance you can hope to see from desktop cards.
     
  9. Ieldra

    Ieldra Banned

    Messages:
    3,490
    Likes Received:
    0
    GPU:
    GTX 980Ti G1 1500/8000
I'll tell you why I'm personally calling it a win. I recognize AMD *could* surprise everyone, but the likelihood of that happening is minimal.

I'm gonna try and be concise at the expense of accuracy, because I could be here all day.

AMD has been selling cards to compete with Nvidia on desktop.

The GPU on an AMD card is always significantly larger (die size) and more complex (transistor count).

Nvidia have economies of scale on their side, and their chips are smaller, less complex, far more energy efficient, fewer in number, dominant in the mobile market (efficiency), and they have a ridiculously large R&D budget, obviously.

AMD literally have to pull off a *feat of engineering* to rework their design and improve on all these things.

We know AMD only has two Polaris GPUs lined up until 2017.

We know Polaris 10 is 230mm².

We know GloFo has an 8% feature-size advantage for transistors.

We know TSMC has an efficiency advantage, but I don't know the number, so ignore it.

AMD has larger dies for many reasons - more ALUs, more hardware redundancy, more complex schedulers, etc. - and the increased power usage that goes with it (it all ties in, you see, it's all linked).

We know it can be put to good use: look at recent performance, look at the potential for concurrency, programmable dispatchers, etc.

Why doesn't Nvidia have this?

This is their logic.

We can add all this **** and get an X% advantage under certain conditions, or we can forgo it altogether and spend a lot of time and money on perfecting a less complex solution, focusing our design on speed and high utilization under a wide range of loads.

Nvidia has an immense architectural advantage, which they just flaunted, again, with extremely aggressive pricing.

Two. Gigahertz.

The only thing that remains to be seen is how the memory bandwidth fares - we were told nothing technical, really. Is there improved memory compression? A more efficient IMC? Etc.

Now Nvidia has a 50% die area advantage over Polaris, the cut-down die is at $379, it clocks to 1.8GHz and likely has 2048 SPs.

It's a ****ing 980 with 8GB of RAM at 1.8GHz, plus arch improvements.

AMD need to best Nvidia on die area *and* frequency *and* price.

Good luck.
     
  10. moab600

    moab600 Ancient Guru

    Messages:
    6,293
    Likes Received:
    202
    GPU:
    Galax 3080 SC
Let us not forget that it all sounds beautiful on paper and is going to vary in the real world; Nvidia and AMD have overhyped us way too many times.

I will see how Pascal survives the test of time, as Nvidia cards are not that good value vs their current AMD counterparts, with the 980 Ti being the exception.
     

  11. PhazeDelta1

    PhazeDelta1 Moderator Staff Member

    Messages:
    15,616
    Likes Received:
    14
    GPU:
    EVGA 1080 FTW
    I'm just going to leave this here. :D

[image]
     
  12. Ieldra

    Ieldra Banned

    Messages:
    3,490
    Likes Received:
    0
    GPU:
    GTX 980Ti G1 1500/8000
You know, I don't think JHH enjoys doing the gaming shows in particular, because it's all marketing and nothing technical, and the guy is an engineer, not a marketing man. I think it's profoundly uncomfortable for him.

You can tell he's struggling to sort of play by a script and mention those key buzzwords for buffoons to parrot - like you saw him pause and awkwardly, hastily add that this was the fastest *GDDR* memory yet.

Tom played it well initially - mistakes happen, make it a joke - but then he started keeping the show from moving along. If I were in that situation I'd get pretty irritated and frustrated, and I'd go tell him to get his **** together because I don't want to be on stage any longer than I have to.

Tom should get his **** together and just focus on the ****ing announcement.
     
  13. alanm

    alanm Ancient Guru

    Messages:
    10,097
    Likes Received:
    2,242
    GPU:
    Asus 2080 Dual OC
So release dates = May 27 for the 1080 and June 10 for the 1070. I wonder how limited stock will be for the 1080, given that few people expected anything with GDDR5X to be available so soon. Expect back-orders extending to several weeks when it goes on sale, especially with the hype it's gotten now.

2x 980 performance? Maybe optimistic... but it'll turn realistic 6 months later when Maxwell driver support is neglected. :D
     
  14. Stormyandcold

    Stormyandcold Ancient Guru

    Messages:
    5,626
    Likes Received:
    330
    GPU:
    MSI GTX1070 GamingX
Hey, we're all hoping AMD has a "secret weapon" card lined up (to drive prices down). Unfortunately, people aren't going to wait a year for that card if it's a 2017 release. If AMD have "something", they need to let us know before the GTX 1080 releases; otherwise, people won't care and will just buy Nvidia. The ball really is in AMD's court now; it's up to them how they play this round.

I'm hoping within 48hrs we hear something from AMD, otherwise it's done.

    http://forums.guru3d.com/showthread.php?t=407307
     
    Last edited: May 7, 2016
  15. oGow89

    oGow89 Maha Guru

    Messages:
    1,213
    Likes Received:
    0
    GPU:
    Gigabyte RX 5700xt
I was actually expecting the card to cost around $550/€550, but the benchmarks will tell the real story. If the card's performance in DX12 is off the charts, then the 1080 may well be my next GPU.
     

  16. Fox2232

    Fox2232 Ancient Guru

    Messages:
    11,741
    Likes Received:
    3,329
    GPU:
    6900XT+AW@240Hz
Forget about die size for a moment; a bigger die at lower transistor density may improve clocks, that's all. GPUs are made of those mentioned transistors.
GTX 980 has 5.2B transistors (2048SP ~ 394 SP/BT); GTX 980 Ti has 8.0B transistors (2816SP ~ 352 SP/BT).
R9 290X has 6.2B transistors (2816SP ~ 454 SP/BT); Fury X has 8.9B transistors (4096SP ~ 460 SP/BT).

GTX 1080 has 7.2B transistors (2560SP ~ 355 SP/BT).
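(Those SP/BT figures are simply shader count divided by transistor count; a quick Python check reproduces them from the numbers quoted above:)

    # Shader processor and transistor counts as quoted above (transistors in billions).
    gpus = {
        "GTX 980": (2048, 5.2),
        "GTX 980 Ti": (2816, 8.0),
        "R9 290X": (2816, 6.2),
        "Fury X": (4096, 8.9),
        "GTX 1080": (2560, 7.2),
    }

    for name, (shaders, transistors_billion) in gpus.items():
        print(name, round(shaders / transistors_billion))
        # -> roughly 394, 352, 454, 460 and 356 SP per billion transistors,
        #    matching the ratios quoted above (give or take rounding).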

What you see is basically a slightly smaller GTX 980 Ti (10% fewer transistors), with a much higher clock. Now think about this for a moment: what does that mean? What does nVidia compare in its graphs?
Is that really a reference GTX 980 (1126/1216MHz) against the 25% bigger (in shader count) GTX 1080 at 1607/1733MHz (a 42% higher clock)?
If you take this as the basis of comparison, then the shown performance difference is to be expected. That's why people around here could nearly match it with their OC'd 1450MHz GTX 980 Tis in 3DMark scores.
Because nV made it 10% smaller, but 20% faster in clock. (There will definitely be OC headroom allowing the 1080 to outperform the GTX 980 Ti by another 10%.)

But what do you take from this? My lovely metric: performance per transistor per clock. There was quite a jump from Kepler to Maxwell. I do not see a significant improvement here between Maxwell and Pascal.

What nVidia talked about mostly is scene warping and internal frame management. Good for certain specific applications like multi-screen gaming/VR.
Have you seen the list of things AMD changed from GCN 1.2 (Tonga/Fiji) to GCN 1.3 (Polaris)?

Do you know that GCN 1.2 is actually ~10% ahead of Maxwell in the performance per transistor per clock metric? It does not look like nVidia improved much there, but AMD seems sure they improved drastically. And they say it quite loudly and often.

So, what advantage can Pascal hold over GCN 1.3? Clock. That's the only thing nVidia has in hand. But does that mean GCN can't clock in a similar way on 14nm? There is no indication of that at all. Quite the contrary: AMD stated they increased clocks quite a bit.
That's why I questioned those 800MHz benchmarks of Polaris.

Do I think a 4.8~5.6B transistor (2304SP) Polaris will be competitive in performance with the 7.2B GTX 1080? No, not in the slightest. Does AMD intend to compete in that price category where people do not buy much? No again.
Polaris is there to take over the biggest user base there is, and to give them R9 390X-level performance at a low price.

Now think about prices. What made the GTX 970 so price competitive, and is that something that will carry over to the GTX 1070?
The GTX 970 is a cut-down GTX 980 (5.2B transistors), so its base was not so expensive. The GTX 980 Ti is a cut-down Titan X (8B transistors), and that dictates its price range.
Now, the GTX 1080 is a 7.2B-transistor GPU on a new process, and cutting it down will not decrease its manufacturing cost. So the GTX 1070 can be expected to land close (within 25%).
nVidia has to come out with another ~5B transistor GPU to serve as that price/efficiency point. But where does that land it in performance and performance per $?

With Maxwell, nVidia differentiated GM204 from GM200 by 5.2B vs 8B transistors. That's 35% fewer. (To prevent competing with itself.)
So, what can we expect as the GTX 1060? Something like 4.6B transistors? With 1728SP? And there is your Polaris competition. Something like this will be facing it.

TL;DR:
There is only one piece of the puzzle present, showing a very poor (in quality of information) approximation of Maxwell vs. Pascal. And anyone sane should wait to get more puzzle pieces.

Because, as some members here already know... while a high clock looks awesome, it is real-world performance that matters in the end. Do not let yourself be swayed by the 2.1GHz vision. Wait and see whether 320GB/s on 256-bit GDDR5X is even enough to feed it at 1733MHz, as we do not know if and how GDDR5X overclocks.
     
  17. Fox2232

    Fox2232 Ancient Guru

    Messages:
    11,741
    Likes Received:
    3,329
    GPU:
    6900XT+AW@240Hz
I do not. I do not want a Fury X-level Polaris card. There is no need for it.
I want AMD to crash into the low-end/mainstream desktop GPU market, get their power efficiency right for notebooks - as their presence there is again close to non-existent - and make good cash from it.

Then improve the architecture and power efficiency again and bring some brutal Vega GPU which will double Fury X performance (within 300W) and give me a reason to upgrade in 2017.

Going from big GPUs to small ones over time is good for power-efficient low-end parts. But going from small to large is good for those Big Boys...

And right now, with both 14/16nm there is a very big performance/Watt improvement, and notebook GPUs should be refreshed as fast as they can be (boost that entire market's performance). I see no reason for AMD to wait, as they can only gain there.
     
  18. Ieldra

    Ieldra Banned

    Messages:
    3,490
    Likes Received:
    0
    GPU:
    GTX 980Ti G1 1500/8000
Dude, performance (ALU throughput) is a function of clock.

Performance per transistor I somewhat understand, but per clock? That makes no ****ing sense. It rewards large-ALU-array designs and simply takes clocks out of the equation; the only situation in which you would use such a performance metric is when you want to make one architecture seem better than the other by taking its main advantage away.

I don't think redesigning the front-end to match Maxwell's geometry output will come for free; modifying their uarch to allow for higher clocks will not come for free either, and matching Nvidia on die area won't come for free.

The main thing Polaris had going for it was the time frame, but Nvidia ended up announcing first; it remains to be seen how availability turns out.
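To put rough numbers on that objection, here's a deliberately simplified Python thought experiment (same transistor budget for both designs, which is unrealistic, and throughput modelled as nothing more than ALUs x clock):

    # Hypothetical designs with the same transistor budget and the same real throughput.
    chips = {
        "wide & slow": {"alus": 4096, "clock_ghz": 1.0, "transistors_b": 8.0},
        "narrow & fast": {"alus": 2048, "clock_ghz": 2.0, "transistors_b": 8.0},
    }

    for name, c in chips.items():
        throughput = c["alus"] * c["clock_ghz"]                      # identical: 4096
        per_xtor_per_clock = throughput / (c["transistors_b"] * c["clock_ghz"])
        print(name, throughput, per_xtor_per_clock)
        # Both deliver the same throughput, yet "perf per transistor per clock"
        # scores the wide, low-clocked design (512.0) twice as high as the
        # fast one (256.0) - the clock advantage is divided away.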
     
  19. isidore

    isidore Ancient Guru

    Messages:
    6,276
    Likes Received:
    58
    GPU:
    RTX 2080TI GamingOC
Oh for the love of God. Just wait for the card to see how it performs in real games. There will probably be a 15-20% performance increase over the 980 Ti - that's for the 1080.
I also want to see how these new Nvidia GPUs perform in DX12 titles. I hope they made some improvements to the architecture and that we won't see AMD's 390X performing at the level of a 1070 in DX12 games :)))). That would be some crazy ****.
     
    Last edited: May 7, 2016
  20. Fox2232

    Fox2232 Ancient Guru

    Messages:
    11,741
    Likes Received:
    3,329
    GPU:
    6900XT+AW@240Hz
It is a very simple metric to understand, and easy to use to compare Maxwell to Pascal. What matters is the performance delivered. Transistor count and clock are the means by which that performance was delivered (that's why both are important).
By using all three of those most defining values, you can see how big or small the changes are.

Hypothetical scenario: the GTX 1080 has the same 8B transistors as the GTX Titan X. Then you clock both to the same 1400MHz and compare real-world performance.

Now you know how much Pascal improved. And that's what performance per transistor per clock is about. It is the best normalized approximation of changes in architecture you can get without knowing how many transistors each architecture needs per ROP/SP/... and without making it so complicated that people just ignore the metric because they don't understand how it was made.

IIRC, you've mentioned it for the 3rd time already: "takes clocks out of the equation".
And you are wrong here. This creates a value for easy comparison of architectures. But since the values are normalized, you have to multiply them again by transistor count and clock to get a particular chip's performance. Nothing is lost as long as you understand how you got there and what it means.
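A minimal sketch of that round trip, using the hypothetical 8B-transistor / 1400MHz scenario above and made-up benchmark scores (the 100 and 112 are invented purely for illustration):

    # Made-up scores for the hypothetical scenario: both chips at 8B transistors, 1400MHz.
    chips = {
        "Titan X (Maxwell)": {"score": 100.0, "transistors_b": 8.0, "clock_mhz": 1400},
        "hypothetical 8B Pascal": {"score": 112.0, "transistors_b": 8.0, "clock_mhz": 1400},
    }

    for name, c in chips.items():
        normalized = c["score"] / (c["transistors_b"] * c["clock_mhz"])  # per transistor per clock
        recovered = normalized * c["transistors_b"] * c["clock_mhz"]     # multiply back
        print(name, round(normalized, 5), recovered)
        # The normalized values compare the architectures directly (a 12% gap here),
        # and multiplying back by transistor count and clock recovers each chip's
        # original score - nothing is lost in the normalization.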
     
