
NVIDIA Will Fully Implement Async Compute Via Driver Support, Oxide Confirms

Discussion in 'Frontpage news' started by (.)(.), Sep 5, 2015.

  1. thesebastian

    thesebastian Active Member

    Messages:
    83
    Likes Received:
    11
    GPU:
    GTX1080 + H90
    IMO, DX12 is still a new concept. Everyone's worried about current Maxwell or Fury cards...

    They'll suck compared to the "REAL DX12" cards that will be released in 1-2 years, when there are a lot of games supporting native DX12.
     
  2. Clouseau

    Clouseau Ancient Guru

    Messages:
    2,329
    Likes Received:
    219
    GPU:
    ASUS STRIX GTX 1080
    Point taken.
     
  3. eGGroLLiO

    eGGroLLiO Member Guru

    Messages:
    172
    Likes Received:
    57
    GPU:
    EVGA 2080ti FTW3 UG
    This is just my opinion, but there's just no way I can take the Ashes of the Singularity developers seriously when you have to spend $45.00 to get this benchmark for testing. This is obviously a marketing ploy to stir up controversy and get attention for the game.

    If they want credibility, they'll need to release the benchmark free of charge to everyone. Otherwise I'm not listening to some game developer who might well be shilling for AMD. I wasn't born yesterday and I can smell PR at work here.
     
  4. Vbs

    Vbs Master Guru

    Messages:
    291
    Likes Received:
    0
    GPU:
    Asus Strix 970, 1506/7806
    This pretty much sums it up. :)

    About the software vs hardware issue: two years ago AMD released software-implemented frame pacing for GCN 1.0 in CrossFire. It worked out much better than anticipated, delivering a feature that nVidia had supported in hardware since Kepler.
     

  5. cleverman

    cleverman Banned

    Messages:
    14
    Likes Received:
    0
    GPU:
    16
    Self-defeating, but let them try;
    it will be fun to watch them fail.
    Software can't replace hardware.
     
  6. Noisiv

    Noisiv Ancient Guru

    Messages:
    6,652
    Likes Received:
    494
    GPU:
    2070 Super

    A SW scheduler has replaced Fermi's HW scheduler since Kepler. Just look how badly they're doing :banana:
     
  7. Srsbsns

    Srsbsns Member Guru

    Messages:
    144
    Likes Received:
    34
    GPU:
    RX Vega 64 Liquid
    So all this does is confirm what was said previously: context switching will need to be used because Nvidia hardware lacks ASYNC shaders... If software has to be used, this is just a workaround. The article is written like there is some new revelation.

    This is still software emulation of true ASYNC shaders. Nvidia will probably need to create custom scheduling for each game, which I don't see happening. What if the game changes? Sounds like a driver nightmare.

    You can guarantee that if there were no performance penalty for this, AMD wouldn't have made it a hardware feature. FCAT will be very telling once the latency numbers go through the roof. John Carmack even came out and said GCN was the way to go for VR and that Nvidia is a non-starter.
     
    Last edited: Sep 6, 2015
  8. ---TK---

    ---TK--- Ancient Guru

    Messages:
    22,112
    Likes Received:
    2
    GPU:
    2x 980Ti Gaming 1430/7296
    "Maxwell 2: Queues in software, work distributor in software (context switching), Asynchronous Warps in hardware, DMA Engines in hardware, CUDA cores in hardware.
    GCN: Queues/Work distributor/Asynchronous Compute Engines (ACEs/Graphics Command Processor) in hardware, Copy (DMA Engines) in hardware, CUs in hardware."
    http://www.dsogaming.com/news/nvidi...nc-compute-via-driver-support-oxide-confirms/
    It does not appear that everything is software-based.
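    The split TK quotes above can be illustrated with a toy model (a purely hypothetical sketch with made-up timing numbers, not real driver or scheduler code): a software work distributor that context-switches has to serialize graphics and compute work and pay the switch cost, while hardware queues can overlap the two.

```python
# Toy model of the software vs. hardware scheduling split.
# All timings are made-up illustration values, not measurements.

def serialized_time(graphics_ms, compute_ms, switch_cost_ms):
    """Software path: run graphics, pay a context switch, then run compute."""
    return graphics_ms + switch_cost_ms + compute_ms

def overlapped_time(graphics_ms, compute_ms):
    """Idealized hardware async path: both queues execute concurrently."""
    return max(graphics_ms, compute_ms)

if __name__ == "__main__":
    g, c, switch = 10.0, 4.0, 0.5
    print(serialized_time(g, c, switch))  # 14.5 ms
    print(overlapped_time(g, c))          # 10.0 ms
```

    The point of the sketch: whether "software" scheduling hurts depends entirely on the switch cost and how much of the work can actually overlap, which is why measured numbers, not the hardware/software label, will settle the argument.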
     
  9. fry178

    fry178 Maha Guru

    Messages:
    1,096
    Likes Received:
    119
    GPU:
    MSI 1080 X@2GHz
    @Denial
    +1

    @cleverman
    Right...
    But maybe tell that to all the people on the planet playing console ports in software/emulators on a PC...
    Or the legal/illegal (depending on the country) use of software to decrypt DVDs/BDs, as another example.
     
  10. BedantP

    BedantP Master Guru

    Messages:
    220
    Likes Received:
    0
    GPU:
    STRIX 960 1500Mhz/2009Mhz

    Feeling better :p
    So, NVIDIA's gonna feel the same boost as AMD?
    We gotta stay ahead of the consoles man!
     

  11. theoneofgod

    theoneofgod Ancient Guru

    Messages:
    4,056
    Likes Received:
    45
    GPU:
    RX 580 8GB
    Async compute is there to improve efficiency and performance, not to make things worse by becoming reliant on the CPU again.
     
  12. artikot

    artikot Member

    Messages:
    15
    Likes Received:
    0
    GPU:
    R9 295x2
    Then just think rationally about why the ARK DX12 patch got delayed and why we still don't have a DX12 3DMark.
    Maybe because those would put one team even deeper in sh*t?...
     
  13. guskline

    guskline Member

    Messages:
    14
    Likes Received:
    0
    GPU:
    Zotac GTX1080 FE EK block
    I own both a single GTX980TI in my 5960x rig and twin R9-290s (CF) in my 4790k rig so I guess I'm not going to panic either way.

    Here's what I see. First, I agree with Denial's comments.
    Second, the cold hard reality is that Nvidia has ~80% market share and AMD has 20%. I expected AMD to come out with all guns blazing, and they have. Will it have much effect on the average buyer? Not sure. Will it turn around AMD's graphics division? Not sure, but I doubt it.

    In the higher-end market, Fiji appears to be in demand, but perhaps that's because the supply is LOW. And the BUZZ about the Nano sure gets quiet when you see the price.

    I think the ASYNC shader "issue" was a PR idea created by AMD to sell their lower-end cards at a higher price than before, and an effort to cut into Nvidia's bread and butter. I'll give them credit, it sure is getting play.
    We'll see how it turns out.
     
    Last edited: Sep 6, 2015
  14. cowie

    cowie Ancient Guru

    Messages:
    13,190
    Likes Received:
    281
    GPU:
    GTX
    Come on, I think I can answer both questions. I would say it is... money.
    If you don't think it reeks of PR, I feel bad for you.
    See, over at that site they act smart and just spew all day, but here these guys are smart, stay calm, and don't buy into everything so easily.
    We have older folk here, while over there they have 25-year-old know-it-alls.
    I love you smart guys here, even Denial, who said what I would have said if I were smart.
     
    Last edited: Sep 6, 2015
  15. Webhiker

    Webhiker Master Guru

    Messages:
    525
    Likes Received:
    102
    GPU:
    EVGA GTX 1080i SC2
    Really?
    What makes hardware run ?
     

  16. Fox2232

    Fox2232 Ancient Guru

    Messages:
    9,687
    Likes Received:
    2,159
    GPU:
    5700XT+AW@240Hz
    At this time I would say that the only people who look stupid are those who accused the Oxide team of having a one-sided bond with AMD.

    They were falsely accused. They came out with clean, unbiased information. And now you have confirmation from nV that their 'implementation' was just on paper. So I would kindly ask people here to reflect on their own actions first before posting any more accusations or ridiculous comments meant only to harm and not to help anything at all.

    PS: Anyone is free to express himself/herself, just remember... 'you are what you do', and some comments here are really toxic.
     
  17. Turanis

    Turanis Maha Guru

    Messages:
    1,418
    Likes Received:
    153
    GPU:
    Gigabyte RX500
    Nvidia still doesn't have an answer about this, say, issue.

    If, as that Oxide dev says, a driver will make Async Compute work on Maxwell "natively", why did Nvidia push Oxide to disable Async in that game?
    "Oxide’s developer also revealed that NVIDIA’s Maxwell does not support natively Async Compute, and that NVIDIA asked Oxide to disable it for its graphics cards."

    It's obvious that the driver will make Async Compute work only in software, not hardware. So Nvidia still hasn't responded.

    “Personally, I think one could just as easily make the claim that we were biased toward Nvidia as the only ‘vendor’ specific code is for Nvidia where we had to shutdown async compute. By vendor specific, I mean a case where we look at the Vendor ID and make changes to our rendering path. Curiously, their driver reported this feature was functional but attempting to use it was an unmitigated disaster in terms of performance and conformance so we shut it down on their hardware. As far as I know, Maxwell doesn’t really have Async Compute so I don’t know why their driver was trying to expose that. The only other thing that is different between them is that Nvidia does fall into Tier 2 class binding hardware instead of Tier 3 like AMD which requires a little bit more CPU overhead in D3D12, but I don’t think it ended up being very significant. This isn’t a vendor specific path, as it’s responding to capabilities the driver reports.” Oxide dev.
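    The pattern the Oxide dev describes (trusting the driver's capability report, then adding a vendor-ID override when the reported feature fails in practice) can be sketched roughly like this. The vendor IDs are the real PCI IDs; the function and flag names are hypothetical, not Oxide's actual code.

```python
# Hypothetical sketch of a capability check with a vendor-ID override.
# 0x10DE and 0x1002 are the real PCI vendor IDs for NVIDIA and AMD;
# everything else here is illustrative.

NVIDIA_VENDOR_ID = 0x10DE
AMD_VENDOR_ID = 0x1002

def use_async_compute(vendor_id, driver_reports_async):
    # If the driver doesn't even report the feature, don't use it.
    if not driver_reports_async:
        return False
    # Per the quoted post: the NVIDIA driver reported the feature as
    # functional, but using it hurt performance/conformance, so it was
    # shut down specifically on that vendor's hardware.
    if vendor_id == NVIDIA_VENDOR_ID:
        return False
    return True
```

    Note the distinction the dev draws: the second check keys on who made the GPU (a vendor-specific path), while the first responds only to what the driver claims it can do (a capability-driven path).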
     
    Last edited: Sep 6, 2015
  18. Denial

    Denial Ancient Guru

    Messages:
    12,314
    Likes Received:
    1,501
    GPU:
    EVGA 1080Ti
    Nvidia doesn't have to try. They literally have to do nothing. Their current implementation, without Async, performs just as well as AMD's hardware-based Async one. Them trying is just a bonus for Nvidia users.

    AMD's latency numbers with their Async implementation are already through the roof. Nvidia's go up that high too, but only after millions of calls; AMD's numbers start high and never go up.

    Also, I can't find a single place where John Carmack said that. There was a recent post where some random guy said he heard someone from Oculus saying that, but there is no official statement from Oculus. Further, I have a DK2 that works fine on my 980, so saying it's a "non-starter" is just bull****.

    Lol, or maybe the ARK team, who can't get their ****ty-looking game to run above 10 FPS, don't have the talent to convert it to DX12 in a week.

    I personally think Oxide handled the entire thing well. They exposed that there was an issue with Nvidia's handling of Async and essentially forced Nvidia to take a look at it. My problem is with the tech media and fanboy bull**** that cherry-picked the results from Oxide's benchmarks in order to stir up non-existent controversy.

    They didn't come out and answer anything, no. But they are clearly working with Oxide on coming up with an Async solution. The bottom line is, Nvidia currently ties the Fury X's Async implementation without one of their own. And I'm pretty positive that any solution Nvidia comes up with isn't going to negatively affect performance, regardless of whether it's "software" or hardware based.
     
  19. Fox2232

    Fox2232 Ancient Guru

    Messages:
    9,687
    Likes Received:
    2,159
    GPU:
    5700XT+AW@240Hz
    If you happen to see the graphs I made from that user-made benchmark on Beyond3D, you could see that AMD did something wrong with the Fury X driver/HW.
    The R9 290/390(X) has nearly 100% efficiency, and its execution times have fine granularity. But with the Fury X, each frame has either 0/25/75/100% efficiency, so there are like 4 slots into which it can fall. Maybe the Fury X owner doing the tests has something bad in his system, or AMD's drivers are still not good for the Fury X.

    If I had a download link for the test, I would do quite a few tests using different code paths in the driver.
     
  20. Lane

    Lane Ancient Guru

    Messages:
    6,361
    Likes Received:
    3
    GPU:
    2x HD7970 - EK Waterblock
    Again, don't treat this test code as a benchmark; it is not. The code path instructions are not optimized for any GPU... there were 4 versions of this test in one day, plus 2 additional ones from Jawed for testing on GCN 1.0.

    But again, this test is not intended to run well or badly, just to run (just to see whether async compute was being generated or not). And indeed, those were the only GPUs that have async compute (well, for the moment, unless Nvidia sorts it out).

    At the base, MDolenc and other devs on Beyond3D wanted to check whether async was working on Maxwell or not. The rest came from those devs' curiosity to see what happens at the GCN level...

    The load on the Fury is so low that it is practically impossible to see what happens internally... it seems only 1 wavefront out of 64 is used, which could mean the code is too small and the schedulers pack everything into 1 wave. Seriously, this would be a nice discussion to have with an AMD engineer who has worked on this architecture, because we are missing too much information on how it works at this deep level.
     
    Last edited: Sep 6, 2015
