NVIDIA Will Fully Implement Async Compute Via Driver Support, Oxide Confirms

Discussion in 'Frontpage news' started by (.)(.), Sep 5, 2015.

  1. Clouseau

    Clouseau Ancient Guru

    Messages:
    2,841
    Likes Received:
    508
    GPU:
    ZOTAC AMP RTX 3070
    Point taken.
     
  2. eGGroLLiO

    eGGroLLiO Master Guru

    Messages:
    241
    Likes Received:
    108
    GPU:
    EVGA 3080ti FTW3
    This is just my opinion, but there's just no way I can take the Ashes of the Singularity developers seriously when you have to spend $45.00 to get this benchmark for testing. This is obviously a marketing ploy to stir up controversy and get attention for this game.

    If they want credibility, they'll need to release the benchmark free of charge to everyone. Otherwise I'm not listening to some game developer who might well be shilling for AMD. I wasn't born yesterday, and I can smell PR at work here.
     
  3. Vbs

    Vbs Guest

    Messages:
    291
    Likes Received:
    0
    GPU:
    Asus Strix 970, 1506/7806
    This pretty much sums it up. :)

    About the software vs. hardware issue: two years ago, AMD released software-implemented frame pacing for GCN 1.0 in CrossFire. It worked out much better than anticipated, delivering a feature that nVidia had had at the hardware level since Kepler.
     
  4. cleverman

    cleverman Guest

    Messages:
    14
    Likes Received:
    0
    GPU:
    16
    Self-defeating, but let them try;
    it will be fun to watch them fail.
    Software can't replace hardware.
     

  5. Noisiv

    Noisiv Ancient Guru

    Messages:
    8,230
    Likes Received:
    1,494
    GPU:
    2070 Super

    A software scheduler has replaced Fermi's hardware one since Kepler. Just look how badly they're doing :banana:
     
  6. Srsbsns

    Srsbsns Member Guru

    Messages:
    192
    Likes Received:
    54
    GPU:
    RX Vega 64 Liquid
    So all this does is confirm what was said previously: context switching will need to be used because Nvidia hardware lacks async shaders... If software has to be used, this is just a workaround. The article is written as if there's some new revelation.

    This is still software emulation of true async shaders. Nvidia will probably need to create custom scheduling for each game, which I don't see happening. What if the game changes?... Sounds like a driver nightmare.

    You can guarantee that if there were no performance penalty for this, then AMD wouldn't have made it a hardware feature. FCAT will be very telling once the latency numbers go through the roof. John Carmack even came out and said GCN was the way to go for VR and that Nvidia is a non-starter.
     
    Last edited: Sep 6, 2015
  7. ---TK---

    ---TK--- Guest

    Messages:
    22,104
    Likes Received:
    3
    GPU:
    2x 980Ti Gaming 1430/7296
    “Maxwell 2: Queues in software, work distributor in software (context switching), asynchronous warps in hardware, DMA engines in hardware, CUDA cores in hardware.
    GCN: Queues/work distributor/asynchronous compute engines (ACEs/Graphics Command Processor) in hardware, copy (DMA engines) in hardware, CUs in hardware.”
    http://www.dsogaming.com/news/nvidi...nc-compute-via-driver-support-oxide-confirms/
    It does not appear everything is software-based.
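    For reference, here is a minimal sketch (illustrative only, not Oxide's or anyone's shipping code) of what "async compute" means at the D3D12 API level: the application creates a dedicated compute queue alongside the direct (graphics) queue and submits work to both. Whether the two queues actually execute concurrently is decided by the driver and hardware scheduler, which is exactly what's in dispute here.

        // Minimal D3D12 sketch: expose graphics and compute work on separate
        // queues. Concurrent execution is up to the driver/hardware scheduler.
        #include <d3d12.h>
        #include <wrl/client.h>
        using Microsoft::WRL::ComPtr;

        void CreateQueues(ID3D12Device* device,
                          ComPtr<ID3D12CommandQueue>& gfxQueue,
                          ComPtr<ID3D12CommandQueue>& computeQueue)
        {
            // Direct queue: accepts graphics, compute, and copy commands.
            D3D12_COMMAND_QUEUE_DESC gfxDesc = {};
            gfxDesc.Type = D3D12_COMMAND_LIST_TYPE_DIRECT;
            device->CreateCommandQueue(&gfxDesc, IID_PPV_ARGS(&gfxQueue));

            // Dedicated compute queue: the API-side handle for "async compute".
            D3D12_COMMAND_QUEUE_DESC cmpDesc = {};
            cmpDesc.Type = D3D12_COMMAND_LIST_TYPE_COMPUTE;
            device->CreateCommandQueue(&cmpDesc, IID_PPV_ARGS(&computeQueue));
        }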
     
  8. fry178

    fry178 Ancient Guru

    Messages:
    2,067
    Likes Received:
    377
    GPU:
    Aorus 2080S WB
    @Denial
    +1

    @cleverman
    Right...
    But maybe tell that to all the people on the planet playing console ports in software emulators on a PC...
    or, as another example, the legal/illegal (depending on the country) use of software to decrypt DVDs/BDs.
     
  9. BedantP

    BedantP Guest

    Messages:
    220
    Likes Received:
    0
    GPU:
    1660Ti

    Feeling better :p
    So, NVIDIA's gonna feel the same boost as AMD?
    We gotta stay ahead of the consoles, man!
     
  10. theoneofgod

    theoneofgod Ancient Guru

    Messages:
    4,677
    Likes Received:
    287
    GPU:
    RX 580 8GB
    Async compute is there to improve efficiency and performance, not make it worse by being reliant on the CPU again.
     

  11. artikot

    artikot Guest

    Messages:
    15
    Likes Received:
    0
    GPU:
    R9 295x2
    Then just think rationally about why ARK's DX12 patch got delayed and why we still don't have a DX12 3DMark.
    Maybe because those would put one team even deeper in sh*t?...
     
  12. guskline

    guskline Guest

    Messages:
    14
    Likes Received:
    0
    GPU:
    Zotac GTX1080 FE EK block
    I own both a single GTX 980 Ti in my 5960X rig and twin R9 290s (CF) in my 4790K rig, so I guess I'm not going to panic either way.

    Here's what I see. First, I agree with Denial's comments.
    Second, the cold hard reality is that Nvidia has ~80% market share and AMD has 20%. I expected AMD to come out with all guns blazing, and they have. Will it have much effect on the average buyer? Not sure. Will it turn around AMD's graphics division? Not sure, but I doubt it.

    In the higher-end market, Fiji appears to be in demand, but perhaps that's because supply is LOW. And the BUZZ about the Nano sure gets quiet when you see the price.

    I think the async shader "issue" was a PR idea created by AMD to sell their lower-end cards at a higher price than before, and an effort to cut into Nvidia's bread and butter. I give them credit; it sure is getting play.
    We'll see how it turns out.
     
    Last edited: Sep 6, 2015
  13. cowie

    cowie Ancient Guru

    Messages:
    13,276
    Likes Received:
    357
    GPU:
    GTX
    Come on, I think I can answer both questions. I would say it is... money.
    If you don't think it reeks of PR, I feel bad for you.
    See, over at that site they act smart and just spew all day, but here these guys are smart, stay calm, and don't buy into everything so easily.
    We have older folk here, and over there they have 25-year-old know-it-alls.
    I love you smart guys here, even Denial, who said what I would have said if I were smart.
     
    Last edited: Sep 6, 2015
  14. Webhiker

    Webhiker Master Guru

    Messages:
    751
    Likes Received:
    264
    GPU:
    ASRock Radeon RX 79
    Really?
    What makes hardware run?
     
  15. Fox2232

    Fox2232 Guest

    Messages:
    11,808
    Likes Received:
    3,371
    GPU:
    6900XT+AW@240Hz
    At this time, I would say the only people who look stupid are those who were accusing the Oxide team of having a one-sided bond with AMD.

    They were falsely accused. They came out with clean, unbiased information. And now you have confirmation from nV that their 'implementation' was just on paper. So I would kindly ask people here to reflect on their own actions first before posting any more accusations, ridicule, or other comments meant only to harm and not to help anything at all.

    PS: Anyone is free to express himself/herself, just remember... 'you are what you do', and some comments here are really toxic.
     

  16. Turanis

    Turanis Guest

    Messages:
    1,779
    Likes Received:
    489
    GPU:
    Gigabyte RX500
    Nvidia still doesn't have an answer about this, shall we say, issue.

    If, as that Oxide dev says, the driver will make Async Compute work on Maxwell "natively", why then did Nvidia push Oxide to disable Async in that game?
    "Oxide’s developer also revealed that NVIDIA’s Maxwell does not support natively Async Compute, and that NVIDIA asked Oxide to disable it for its graphics cards."

    Isn't it obvious that the driver will make Async Compute work only in software, not hardware? So Nvidia still doesn't respond.

    “Personally, I think one could just as easily make the claim that we were biased toward Nvidia as the only ‘vendor’ specific code is for Nvidia where we had to shutdown async compute. By vendor specific, I mean a case where we look at the Vendor ID and make changes to our rendering path. Curiously, their driver reported this feature was functional but attempting to use it was an unmitigated disaster in terms of performance and conformance so we shut it down on their hardware. As far as I know, Maxwell doesn’t really have Async Compute so I don’t know why their driver was trying to expose that. The only other thing that is different between them is that Nvidia does fall into Tier 2 class binding hardware instead of Tier 3 like AMD which requires a little bit more CPU overhead in D3D12, but I don’t think it ended up being very significant. This isn’t a vendor specific path, as it’s responding to capabilities the driver reports.” Oxide dev.
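    For what it's worth, the "vendor specific path" the dev describes boils down to something like the sketch below, assuming the standard DXGI adapter query; the helper name and structure are illustrative, not Oxide's actual code:

        // Illustrative sketch: gate a feature on the adapter's PCI vendor ID.
        #include <dxgi.h>

        bool ShouldDisableAsyncCompute(IDXGIAdapter1* adapter)
        {
            DXGI_ADAPTER_DESC1 desc = {};
            adapter->GetDesc1(&desc);

            const UINT kNvidiaVendorId = 0x10DE; // PCI vendor ID for NVIDIA
            // Per the quote, the driver reported async compute as functional,
            // so a capability query alone wasn't enough; the engine fell back
            // to an explicit vendor check to shut the path down.
            return desc.VendorId == kNvidiaVendorId;
        }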
     
    Last edited: Sep 6, 2015
  17. Denial

    Denial Ancient Guru

    Messages:
    14,206
    Likes Received:
    4,118
    GPU:
    EVGA RTX 3080
    Nvidia doesn't have to try. They literally have to do nothing. Their current implementation, without Async, performs just as well as AMD's hardware-based Async one. Them trying is just a bonus for Nvidia users.

    AMD's latency numbers with their Async implementation are already through the roof. Nvidia's go up that high too, but only after millions of calls; AMD's numbers start high and never go up from there.

    Also, I can't find a single place where John Carmack said that. There was a recent post where some random guy said he heard someone from Oculus say that, but there is no official statement from Oculus. Further, I have a DK2 that works fine on my 980, so saying it's a "non-starter" is just bull****.

    Lol, or maybe the ARK team that can't get their ****ty-looking game to run above 10 FPS doesn't have the talent to convert it to DX12 in a week.

    I personally think Oxide handled the entire thing well. They exposed that there was an issue with Nvidia's handling of Async and essentially forced Nvidia to take a look at it. My problem is with the tech media and fanboy bull**** that cherry-picked the results from Oxide's benchmarks in order to stir up non-existent controversy.

    They didn't come out and answer anything, no. But they are clearly working with Oxide on coming up with an Async solution. The bottom line is, Nvidia currently ties the Fury X's Async implementation without it. And I'm pretty positive that any solution Nvidia comes up with isn't going to negatively affect performance, regardless of whether it's "software" or hardware based.
     
  18. Fox2232

    Fox2232 Guest

    Messages:
    11,808
    Likes Received:
    3,371
    GPU:
    6900XT+AW@240Hz
    If you happened to see the graphs I made from that user-made benchmark on Beyond3D, you could see that AMD did something wrong with the Fury X driver/HW.
    The R9 290/390(X) has nearly 100% efficiency and its execution times have fine granularity. But with the Fury X, each frame has either 0/25/75/100% efficiency, so there are like four slots into which it can fall. Maybe the Fury X owner doing the tests has something bad in his system, or AMD's drivers are still not good for the Fury X.

    If I had a download link for the test, I would do quite a few tests using different code paths in the driver.
     
  19. Lane

    Lane Guest

    Messages:
    6,361
    Likes Received:
    3
    GPU:
    2x HD7970 - EK Waterblock
    Again, don't treat this test code as a benchmark; it is not. The code path's instructions are not optimized for any GPU... there were 4 versions of this test in 1 day, with 2 additional ones from Jawed for testing on GCN 1.0.

    But again, this test is not intended to run well or badly, just to run (just to see whether async compute was happening or not). And indeed, GCN GPUs were the only ones that showed async compute (well, for the moment, until Nvidia sorts it out).

    At base, MDolenc and the other devs on Beyond3D wanted to check whether async was working on Maxwell or not. The rest came from those devs' curiosity to see what happens at the GCN level....

    The load on the Fury is so low that it is practically impossible to see what happens internally... it seems only 1 wavefront out of 64 is used, which could mean the code is too small and the schedulers pack everything into 1 wave. Seriously, this would be a nice discussion to have with an AMD engineer who has worked on this architecture, because we're missing too much information on how it works at this deep level.
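    The overlap logic behind a test like that can be sketched roughly as follows (names are mine, not the actual Beyond3D test code): time a graphics batch alone, a compute batch alone, and then both submitted together on separate queues. If the combined time lands near max(gfx, compute), the queues ran concurrently; if it lands near gfx + compute, the work was serialized.

        // Rough sketch: time a submission by waiting on a fence after it.
        // Comparing combined vs. separate timings hints at queue concurrency.
        #include <windows.h>
        #include <d3d12.h>
        #include <chrono>

        double TimeSubmissionMs(ID3D12CommandQueue* queue,
                                ID3D12CommandList* const* lists, UINT count,
                                ID3D12Fence* fence, UINT64& fenceValue,
                                HANDLE completionEvent)
        {
            auto t0 = std::chrono::high_resolution_clock::now();
            queue->ExecuteCommandLists(count, lists);
            queue->Signal(fence, ++fenceValue);             // mark end of batch
            fence->SetEventOnCompletion(fenceValue, completionEvent);
            WaitForSingleObject(completionEvent, INFINITE); // block until GPU is done
            auto t1 = std::chrono::high_resolution_clock::now();
            return std::chrono::duration<double, std::milli>(t1 - t0).count();
        }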
     
    Last edited: Sep 6, 2015
  20. stereoman

    stereoman Master Guru

    Messages:
    884
    Likes Received:
    181
    GPU:
    Palit RTX 3080 GPRO
    The way I see it, I'd rather just stand by and watch all this unfold. I'm sure Nvidia's solution to async will be just as elegant as AMD's, and if not, they will make it up in another area, so ultimately the performance difference will be negligible. But people love to create controversy where there is none. The fact is, there are hardly any DX12-capable applications out there at the moment, and we've yet to see a card that offers full hardware support for DX12 from either camp. Hell, we've only just started using Windows 10. Drivers will take time to mature, but it's like some people expect things to just work overnight.
     
