Nvidia Pascal Specs?

Discussion in 'Videocards - NVIDIA GeForce' started by Shadowdane, Mar 17, 2016.

  1. Turanis

    Turanis Guest

    Messages:
    1,779
    Likes Received:
    489
    GPU:
    Gigabyte RX500
    Last edited: Mar 25, 2016
  2. Ieldra

    Ieldra Banned

    Messages:
    3,490
    Likes Received:
    0
    GPU:
    GTX 980Ti G1 1500/8000
    That's still speculation. Let's assume for a minute that Nvidia doesn't do sh*t about async. They completely ignore async.

    The Witcher 4 comes out on DX12, and the GTX Pascal Ti (no async) performs equal to the Radeon R9 Polaris XT (async).

    How many sh*ts are given?

    Right now, the best available evidence suggests that when AMD and Nvidia talk about asynchronous compute, they are talking about two very different capabilities. “Asynchronous compute,” in fact, isn’t necessarily the best name for what’s happening here. The question is whether or not Nvidia GPUs can run graphics and compute workloads concurrently. AMD can, courtesy of its ACE units.

    It’s been suggested that AMD’s approach is more like Hyper-Threading, which allows the GPU to work on disparate compute and graphics workloads simultaneously without a loss of performance, whereas Nvidia may be leaning on the CPU for some of its initial setup steps and attempting to schedule simultaneous compute + graphics workload for ideal execution. Obviously that process isn’t working well yet. Since our initial article, Oxide has since stated the following:

    “We actually just chatted with Nvidia about Async Compute, indeed the driver hasn’t fully implemented it yet, but it appeared like it was. We are working closely with them as they fully implement Async Compute.”

    Here’s what that likely means, given Nvidia’s own presentations at GDC and the various test benchmarks that have been assembled over the past week. Maxwell does not have a GCN-style configuration of asynchronous compute engines and it cannot switch between graphics and compute workloads as quickly as GCN. According to Beyond3D user Ext3h:

    “There were claims originally that Nvidia GPUs wouldn’t even be able to execute async compute shaders in an async fashion at all; this myth was quickly debunked. What became clear, however, is that Nvidia GPUs preferred a much lighter load than AMD cards. At small loads, Nvidia GPUs would run circles around AMD cards. At high load, well, quite the opposite, up to the point where Nvidia GPUs took such a long time to process the workload that they triggered safeguards in Windows, which caused Windows to pull the trigger and kill the driver, assuming that it got stuck.

    “Final result (for now): AMD GPUs are capable of handling a much higher load, about 10x what Nvidia GPUs can handle. But they also need about 4x the pressure applied before they get to play out their capabilities.”

    Ext3h goes on to say that preemption in Nvidia’s case is only used when switching between graphics contexts (1x graphics + 31 compute mode) and “pure compute context,” but claims that this functionality is “utterly broken” on Nvidia cards at present. He also states that while Maxwell 2 (GTX 900 family) is capable of parallel execution, “The hardware doesn’t profit from it much though, since it has only little ‘gaps’ in the shader utilization either way. So in the end, it’s still just sequential execution for most workload, even though if you did manage to stall the pipeline in some way by constructing an unfortunate workload, you could still profit from it.”

    Nvidia, meanwhile, has represented to Oxide that it can implement asynchronous compute, and that this capability was simply not fully enabled in drivers. Like Oxide, we’re going to wait and see how the situation develops. The analysis thread at Beyond3D makes it very clear that this is an incredibly complex question, and much of what Nvidia and Maxwell may or may not be doing is unclear.

    Earlier, we mentioned that AMD’s approach to asynchronous computing superficially resembled Hyper-Threading. There’s another way in which that analogy may prove accurate: When Hyper-Threading debuted, many AMD fans asked why Team Red hadn’t copied the feature to boost performance on K7 and K8. AMD’s response at the time was that the K7 and K8 processors had much shorter pipelines and very different architectures, and were intrinsically less likely to benefit from Hyper-Threading as a result. The P4, in contrast, had a long pipeline and a relatively high stall rate. If one thread stalled, HT allowed another thread to continue executing, which boosted the chip’s overall performance.

    GCN-style asynchronous computing is unlikely to boost Maxwell performance, in other words, because Maxwell isn’t really designed for these kinds of workloads. Whether Nvidia can work around that limitation (or implement something even faster) remains to be seen.
    http://www.extremetech.com/extreme/...ading-amd-nvidia-and-dx12-what-we-know-so-far
     
    Last edited: Mar 25, 2016
  3. PrMinisterGR

    PrMinisterGR Ancient Guru

    Messages:
    8,125
    Likes Received:
    969
    GPU:
    Inno3D RTX 3090
    Async compute is just a fancy name for concurrent execution of compute and graphics. NVIDIA architectures can't do it. I suspect that this will change with Pascal, or if they didn't foresee it they will try to brute-force a way out of it.
     
  4. Ieldra

    Ieldra Banned

    Messages:
    3,490
    Likes Received:
    0
    GPU:
    GTX 980Ti G1 1500/8000
    It isn't, though. Asynchronous execution enables concurrent execution in this case, but one does not imply the other.


    As I always do, I can restate my argument analogously: if the execution times for any thread weren't so high on GCN, there'd be no 'space' for a performance boost from concurrency.
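A toy sketch of that 'space' argument, with entirely made-up numbers (the `frame_time` model and the utilization figures are hypothetical, just to illustrate the shape of the claim): if the graphics workload already keeps the shaders busy, async compute has almost no idle gaps to fill; if utilization is lower, concurrency recovers the idle slices.

```python
# Toy model: a frame is a sequence of time slices, each either busy
# (graphics shader work) or an idle gap.
def frame_time(graphics_busy, compute_work, concurrent):
    """graphics_busy: list of 1 (busy) / 0 (idle gap) per time slice.
    compute_work: number of slices of compute to execute.
    Returns total slices needed to finish both workloads."""
    if not concurrent:
        # Sequential: compute runs only after graphics finishes.
        return len(graphics_busy) + compute_work
    # Concurrent: compute fills the idle gaps "for free".
    gaps = graphics_busy.count(0)
    leftover = max(0, compute_work - gaps)
    return len(graphics_busy) + leftover

# High shader utilization (the Maxwell-style case from the quote): few gaps.
tight = [1] * 19 + [0]          # 95% busy
# Longer per-thread execution times leaving idle slices (the GCN-style case).
gappy = [1] * 12 + [0] * 8      # 60% busy

print(frame_time(tight, 8, False), frame_time(tight, 8, True))  # 28 vs 27
print(frame_time(gappy, 8, False), frame_time(gappy, 8, True))  # 28 vs 20
```

With the same 8 slices of compute, the high-utilization frame gains almost nothing from concurrency, while the gappy one hides nearly all of it, which is the 'space' being argued about.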
     
    Last edited: Mar 25, 2016

  5. TrousersnakeXL

    TrousersnakeXL Guest

    Messages:
    6
    Likes Received:
    0
    GPU:
    980ti sli water
    The only thing we can confirm is general availability: since the chips are in production right now, everything's been finalized and they're preparing for shipment. GTX 1070 SLI for about $800, with full overclocking headroom, might finally deliver 60 FPS at max settings in 4K at an affordable price. We're going to be seeing the sale of 4K displays go through the roof.
     
  6. kegastaMmer

    kegastaMmer Guest

    Messages:
    326
    Likes Received:
    40
    GPU:
    strix 1070 GTX
    gahahaha, 9000 FPS, we may wish for it in our imaginations

    But by that butthurt statement, aren't you asserting that even mid-range is a minority? I never stated which cards people are gonna pair up, and

    also, won't two cards, a 960 (class: mid-range) with another 370 or 380, plus a decent setup, require more than a 500 W PSU?

    That was what my post was referring to.
    Peace out :infinity:
     
  7. Ieldra

    Ieldra Banned

    Messages:
    3,490
    Likes Received:
    0
    GPU:
    GTX 980Ti G1 1500/8000
    If there's one thing I feel has been more overhyped than async, it's explicit multi-GPU.

    I'm quite confident I'll be able to count the games that support it on one hand.
     
  8. Shadowdane

    Shadowdane Maha Guru

    Messages:
    1,464
    Likes Received:
    91
    GPU:
    Nvidia RTX 4080 FE
  9. cowie

    cowie Ancient Guru

    Messages:
    13,276
    Likes Received:
    357
    GPU:
    GTX
    so what was all that bs with the "leaked" 3DMark 11 benches then? laptop parts or what?
    oh, mostly bs probably.
    I hope it's some cheap lower-end part we can play with first, you know... cus then the software guys can get all the BIOS flashing and other tools ready for the bigger dies.
     
  10. Turanis

    Turanis Guest

    Messages:
    1,779
    Likes Received:
    489
    GPU:
    Gigabyte RX500
    Could be only marketing PR bias. Meaning dreams.

    That's why Pascal will be HPC only and Volta for gaming.
     

  11. Ieldra

    Ieldra Banned

    Messages:
    3,490
    Likes Received:
    0
    GPU:
    GTX 980Ti G1 1500/8000
    huh?
     
  12. cowie

    cowie Ancient Guru

    Messages:
    13,276
    Likes Received:
    357
    GPU:
    GTX
    oh, definitely marketing bs

    but i do think we will get Pascal for gaming some freaking time.

    new CUDA release in June for Pascal, you think it could be the timeline for some new cards?
     
  13. Ieldra

    Ieldra Banned

    Messages:
    3,490
    Likes Received:
    0
    GPU:
    GTX 980Ti G1 1500/8000
    Yeah, but the GTX 1080 looks to be only slightly faster than a stock 980 Ti if the core ratio remains as it was for GM204:GM200.

    Unless the consumer lineup will have totally different GPUs, with some of those FP64 units replaced with FP32.
     
  14. cowie

    cowie Ancient Guru

    Messages:
    13,276
    Likes Received:
    357
    GPU:
    GTX
    but then wouldn't the power use go sky high if FP64 was on consumer cards, even with HBM2?
    we don't need that crap for benches anyway

    not only that, but they could just clock up 28nm and get that type of increase
     
  15. Ieldra

    Ieldra Banned

    Messages:
    3,490
    Likes Received:
    0
    GPU:
    GTX 980Ti G1 1500/8000
    The FP64 cores would be disabled, or removed entirely; depends on what they end up releasing. Either way, I'm calling it:

    Titan X/980 Ti and Fury X are the undisputed performance kings until GP100 launches in Titan form.
     

  16. cudarenderer

    cudarenderer Guest

    GTX Titan Pascal will be back to full-speed FP64 :D but with 8GB HBM2, since the new memory type is expensive, while Tesla gets 16GB HBM2. ;)
     
  17. Shadowdane

    Shadowdane Maha Guru

    Messages:
    1,464
    Likes Received:
    91
    GPU:
    Nvidia RTX 4080 FE

    Yah, games don't really make use of FP64 at all... it would honestly be a waste of silicon for the consumer platform. With Tesla, each SM has 64 FP32 cores & 32 FP64 cores. If they change the FP32/FP64 ratio for the consumer platform, it could be a pretty amazing chip for games.

    I really wouldn't be surprised if the GTX 1080 uses a different design entirely. Maybe 4 GPCs total, still using 10 SMs/GPC, with 88 FP32 cores & 8 FP64 cores each. I don't think they would kill the FP64 cores completely, just scale them back. Would keep the 4 texture units per SM.

    Something like this would put us at these specs:
    3520 FP32 Cuda Cores
    160 Texture Units
    80 ROPs (guessing a 2:1 ratio of texture units to ROPs)
    ~1500MHz Boost Clock
    240 GTexels/sec (Texture Fill Rate)
    120 GPixels/sec (Pixel Fill Rate)

    SP Floating Point Performance: 10.56 TFLOPS
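The arithmetic behind those guesses is easy to check; a quick sketch, where every input is one of the speculative numbers from this post rather than a confirmed spec, and the 2 FLOPs/clock factor assumes FMA:

```python
# Back-of-envelope check of the speculative GTX 1080 specs above.
# All inputs are guesses from this post, not confirmed numbers.
gpcs, sms_per_gpc = 4, 10
fp32_per_sm, tex_per_sm = 88, 4
boost_ghz = 1.5

cuda_cores = gpcs * sms_per_gpc * fp32_per_sm   # 4 * 10 * 88 = 3520
tex_units = gpcs * sms_per_gpc * tex_per_sm     # 4 * 10 * 4  = 160
rops = tex_units // 2                           # 2:1 texture:ROP guess -> 80
texel_rate = tex_units * boost_ghz              # 240 GTexels/s
pixel_rate = rops * boost_ghz                   # 120 GPixels/s
sp_tflops = cuda_cores * 2 * boost_ghz / 1000   # 2 FLOPs/clock (FMA) -> 10.56

print(cuda_cores, tex_units, rops, texel_rate, pixel_rate, sp_tflops)
```

So the listed figures are internally consistent: 3520 cores at ~1.5GHz really do work out to 10.56 TFLOPS single-precision.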


    Of course that is all just pulling numbers out my ass... But given that Nvidia didn't even mention GeForce once during the keynote, I kinda get the feeling the GP100 (Tesla P100) won't be aimed at consumers at all. I wouldn't be surprised in the least if Nvidia announces another Pascal GPU in the coming months for the GeForce cards.
     
    Last edited: Apr 6, 2016
  18. PrMinisterGR

    PrMinisterGR Ancient Guru

    Messages:
    8,125
    Likes Received:
    969
    GPU:
    Inno3D RTX 3090
    Pascal will initially be GDDR5(X). That's the best-case scenario. The worst case is that it's a paper launch.
     
  19. Turanis

    Turanis Guest

    Messages:
    1,779
    Likes Received:
    489
    GPU:
    Gigabyte RX500
    Unfortunately, the Pascal GPU for gaming will not come so soon.
    They still have trouble with 16nm and the new GDDR5X (HBM2 is too expensive for consumers). In Q3 '16 we will have some news.
    The question is: why don't they show us a real card, not a wooden one?

    That level of FP64 for consumers? No way. The new GPU will need to run cool as ice with a low TDP.
     
    Last edited: Apr 6, 2016
  20. cowie

    cowie Ancient Guru

    Messages:
    13,276
    Likes Received:
    357
    GPU:
    GTX
    well, we are getting something in June; what, who knows?
    I am sure GDDR5X will suit gamers' needs easily.
    it's ok with the wooden-screw crack, they deserve that, but I did not see any yesterday... would you show yet-unreleased hardware that cost big money to R&D?
    that said, how about that bs AMD showed about Polaris? they did not show the card at all, but everyone thinks you can run it on a 9V battery lol. and where is that small little chip that should be easy to release by now? it's been months since that bs they showed. so wooden screws and ghost cards... I don't want to go there tbh, because even you know that whatever comes from Pascal will probably really hurt the competition hard, whether on GDDR5, GDDR5X or HBM2.

    like I said, I don't really care about the bs
    it's just been so blah with VGAs for almost a year for me, and May/June always brings me joys of new toys
    I don't mean full big-boy Titan X $200,000.00 cards for gamers, just wee little new-chip toys
     
    Last edited: Apr 6, 2016
