NVIDIA Will Fully Implement Async Compute Via Driver Support, Oxide Confirms

Discussion in 'Frontpage news' started by (.)(.), Sep 5, 2015.

  1. (.)(.)

    (.)(.) Banned

    Messages:
    9,094
    Likes Received:
    0
    GPU:
    GTX 970
    DSO

    http://www.guru3d.com/news-story/nvidia-will-fully-implement-async-compute-via-driver-support.html
    Will be interesting to see how this compares to AMD.

    Though I did find this little bit from a separate post rather interesting (apologies if old):
     
    Last edited: Sep 5, 2015
  2. ---TK---

    ---TK--- Ancient Guru

    Messages:
    22,112
    Likes Received:
    2
    GPU:
    2x 980Ti Gaming 1430/7296
    Excellent.
     
  3. Undying

    Undying Ancient Guru

    Messages:
    11,869
    Likes Received:
    1,542
    GPU:
    Aorus RX580 XTR 8GB
  4. Denial

    Denial Ancient Guru

    Messages:
    12,389
    Likes Received:
    1,631
    GPU:
    EVGA 1080Ti
    Just going to increase latency like the AMD one, perhaps even worse since it's partially software based. I doubt this will yield any real benefits. Then again, it's not like it matters. AoS is essentially the same thing as the 3DMark draw-call test, and the Fury X and 980 Ti tie in performance. Why people care about this stuff so much is beyond me. The console guys see 30% increases because those systems are already CPU-starved. Go look at low-end processor benchmarks of AoS with fast GPUs (PCPer has a good example): the worse the processor, the bigger the difference DX12 makes.
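
    To make that last point concrete: the CPU-side gain comes from DX12 letting every core record draw calls into its own command list, where DX11 funnels submission through a single driver thread. A minimal sketch of the pattern, assuming a device, queue, and PSO already exist; the function name, worker count, and draw counts are made up for illustration, and all render-state setup is omitted:

    Code:
    // Sketch: multithreaded draw-call recording in D3D12.
    #include <d3d12.h>
    #include <thread>
    #include <vector>
    #include <wrl/client.h>
    using Microsoft::WRL::ComPtr;

    // Each worker records its slice of the scene into its own command list;
    // the main thread then submits everything in one call. (Root signature,
    // viewport, and other state setup omitted for brevity.)
    void RecordSceneInParallel(ID3D12Device* device, ID3D12CommandQueue* queue,
                               ID3D12PipelineState* pso, unsigned workers)
    {
        std::vector<ComPtr<ID3D12CommandAllocator>> allocs(workers);
        std::vector<ComPtr<ID3D12GraphicsCommandList>> lists(workers);
        std::vector<std::thread> threads;

        for (unsigned i = 0; i < workers; ++i) {
            device->CreateCommandAllocator(D3D12_COMMAND_LIST_TYPE_DIRECT,
                                           IID_PPV_ARGS(&allocs[i]));
            device->CreateCommandList(0, D3D12_COMMAND_LIST_TYPE_DIRECT,
                                      allocs[i].Get(), pso, IID_PPV_ARGS(&lists[i]));
            threads.emplace_back([&, i] {
                // Recording scales across cores here; in D3D11 this work
                // funnels through one driver thread, which is why slow CPUs
                // see the biggest DX12 gains.
                for (int d = 0; d < 10000; ++d)
                    lists[i]->DrawInstanced(3, 1, 0, 0); // placeholder draws
                lists[i]->Close();
            });
        }
        for (auto& t : threads) t.join();

        std::vector<ID3D12CommandList*> raw;
        for (auto& l : lists) raw.push_back(l.Get());
        queue->ExecuteCommandLists(static_cast<UINT>(raw.size()), raw.data());
    }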

    But in the meantime you have people posting stupid bull****, including technical review sites, like the Ars Technica article that compares the 290X to the 980 Ti and circle-jerks over the fact that a $300 card performs the same as a $650 one in the AoS benchmark. What they fail to mention is that it also performs the same as the Fury X. But that doesn't fit the current narrative, so they don't mention it.

    Similarly, Nvidia is also ****ing stupid for not just getting an engineer to explain it at all. Have Tom Petersen sit down with some slides so people can understand what goes on with this stuff. AMD should also probably put a leash on some of their employees. The technical marketing guy who made all those posts on Reddit is looking pretty stupid right now.
     

  5. (.)(.)

    (.)(.) Banned

    Messages:
    9,094
    Likes Received:
    0
    GPU:
    GTX 970
    +1.

    If Nvidia comes out on top with this driver, AMD is going to look ridiculous, at least until heavier async games come out, as I assume a software implementation will only get Nvidia so far.

    If this driver doesn't make all that much of a difference, Nvidia is going to have a hard time selling cards if AMD plays its hand right.
     
  6. Lane

    Lane Ancient Guru

    Messages:
    6,361
    Likes Received:
    3
    GPU:
    2x HD7970 - EK Waterblock
    Increasing latency? Where do you get that? I'm hoping it's not from the little test posted on Beyond3D, because that code was absolutely not intended to be a benchmark; it was written to check whether async is on or off.

    It's not code written for GCN; in fact, it runs really badly on any AMD GPU, but that wasn't its purpose.

    I don't even understand what latency you're talking about. If latency had increased, it would mean the task takes longer to complete, so where you get that, I don't know.

    The whole point of async is precisely to decrease the latency of serial compute + graphics commands.

    That said, I wish them the best of luck simulating a hardware scheduler in software, aka the driver.
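
    For anyone wondering what async compute actually looks like at the API level, here is a minimal sketch of the mechanism described above, assuming a D3D12 device already exists and the command lists were recorded elsewhere; SubmitAsync and both list names are made up for illustration:

    Code:
    // Sketch: the two-queue setup async compute relies on.
    #include <d3d12.h>
    #include <wrl/client.h>
    using Microsoft::WRL::ComPtr;

    void SubmitAsync(ID3D12Device* device,
                     ID3D12CommandList* gfxList,
                     ID3D12CommandList* computeList)
    {
        ComPtr<ID3D12CommandQueue> gfxQueue, computeQueue;

        D3D12_COMMAND_QUEUE_DESC desc = {};
        desc.Type = D3D12_COMMAND_LIST_TYPE_DIRECT;  // graphics + compute + copy
        device->CreateCommandQueue(&desc, IID_PPV_ARGS(&gfxQueue));

        desc.Type = D3D12_COMMAND_LIST_TYPE_COMPUTE; // compute + copy only
        device->CreateCommandQueue(&desc, IID_PPV_ARGS(&computeQueue));

        // Independent work goes to both queues. The API only *allows* overlap;
        // whether the work actually runs concurrently is up to the hardware
        // and driver, which is exactly what this whole debate is about.
        computeQueue->ExecuteCommandLists(1, &computeList);
        gfxQueue->ExecuteCommandLists(1, &gfxList);

        // If a later graphics pass consumes the compute results, a GPU-side
        // fence wait orders the two queues without blocking the CPU:
        ComPtr<ID3D12Fence> fence;
        device->CreateFence(0, D3D12_FENCE_FLAG_NONE, IID_PPV_ARGS(&fence));
        computeQueue->Signal(fence.Get(), 1);
        gfxQueue->Wait(fence.Get(), 1); // later graphics submissions wait here
        // ...ExecuteCommandLists for the dependent pass would follow...
    }

    The GPU-side Signal/Wait pair is the piece that keeps the queues independent: only the dependent graphics pass stalls, and only until the compute fence is reached.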
     
    Last edited: Sep 5, 2015
  7. pharma

    pharma Maha Guru

    Messages:
    1,095
    Likes Received:
    123
    GPU:
    Asus Strix GTX 1080
    I have little idea why you would choose Madigan's comments (a former ATI employee) to explain something regarding Oxide and Nvidia's architecture on the main Guru3D frontpage news. It's like asking AMD to do Nvidia's marketing ...


    Why not provide the complete Oxide developer's comments instead?
     
    Last edited: Sep 5, 2015
  8. (.)(.)

    (.)(.) Banned

    Messages:
    9,094
    Likes Received:
    0
    GPU:
    GTX 970

    I haven't linked any of Madigan's posts here, only Oxide's. The two quotes in the OP are from Kollock of Oxide.

    Second: I did; it's in the link within the article I quoted from DSO. Look for "As Oxide's developer 'Kollock' wrote on Overclock.net..". The link to overclock.net is embedded there. Besides, I'm not going to paste an entire post like that into a new thread as the OP; people can go to the source if they want more info.

    So can you edit your post, please? What you've quoted is already linked above.
     
    Last edited: Sep 5, 2015
  9. Caesar

    Caesar Master Guru

    Messages:
    786
    Likes Received:
    288
    GPU:
    GTX 1070Ti Titanium
    NVidiA .... meant to be PLAYED....

    Love U /// GEFORCE
     
  10. pharma

    pharma Maha Guru

    Messages:
    1,095
    Likes Received:
    123
    GPU:
    Asus Strix GTX 1080
    Sorry, I was referring to the comment on the main Guru3D news page. I thought these comments were related to that ...
     

  11. SamW

    SamW Master Guru

    Messages:
    540
    Likes Received:
    0
    GPU:
    8800GTX
    My only question is why Nvidia exposes a feature as available in its drivers when it is not ready to be used.
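
    Part of the answer may be that there is nothing to expose: D3D12 has no "async compute" capability flag for the driver to report, so an application can always create a compute queue and has no way to ask whether its work will genuinely overlap the graphics queue. A minimal sketch of what the API does let you query; QueryOptions is a made-up name, and the tier check at the end is only an arbitrary example:

    Code:
    // Sketch: what D3D12 does (and does not) let an app query.
    #include <d3d12.h>

    bool QueryOptions(ID3D12Device* device)
    {
        // There is no capability bit for "async compute": creating a COMPUTE
        // queue succeeds on every D3D12 device, and whether its work actually
        // overlaps the graphics queue is invisible to the API.
        D3D12_FEATURE_DATA_D3D12_OPTIONS opts = {};
        if (FAILED(device->CheckFeatureSupport(D3D12_FEATURE_D3D12_OPTIONS,
                                               &opts, sizeof(opts))))
            return false;

        // What CheckFeatureSupport does report: resource binding tiers, tiled
        // resources, ROVs, conservative rasterization tiers, and so on.
        return opts.ResourceBindingTier >= D3D12_RESOURCE_BINDING_TIER_2;
    }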
     
  12. (.)(.)

    (.)(.) Banned

    Messages:
    9,094
    Likes Received:
    0
    GPU:
    GTX 970
    Perhaps an accident, an oversight by the driver team? Who knows but Nvidia, which is another reason why they need to make a statement on this.

    Staying silent isn't going to help, but Nvidia tends to keep wars of words to a minimum, while AMD plays the look-at-us card all the time.

    Wow, I'm so confused. The comments in here are linked to that article on the main page: completely different OPs but using the same comments. :infinity:
     
    Last edited: Sep 5, 2015
  13. Fender178

    Fender178 Ancient Guru

    Messages:
    3,782
    Likes Received:
    93
    GPU:
    GTX 1070 | GTX 1060
    Maybe the feature is supported in hardware (the cards support it) but not yet in software, hence the driver update to enable it. That would make sense to me in this situation. Either way, it will take game developers time to get these features into their games anyway.
     
  14. Denial

    Denial Ancient Guru

    Messages:
    12,389
    Likes Received:
    1,631
    GPU:
    EVGA 1080Ti
    The Beyond3D thread for that test has completely changed from an Nvidia async thread to an AMD latency thread. There are multiple posts from developers talking about AMD's async latency in that thread. Is it a problem? No, I don't think it is, but neither is Nvidia's solution to the same problem. You can sit here and say "Nvidia's implementation of async is wrong, it's serial, etc., blah blah," but it's irrelevant, because in a test that sends 1000x more draw calls than you'll see in any real game, Nvidia's solution matches AMD's in performance.
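
    For context, what a test like the Beyond3D one boils down to is timing each workload alone and then together: if running graphics and compute on separate queues takes about as long as the slower of the two, they overlapped; if it takes about the sum, the driver serialized them. A minimal sketch of the timing half, assuming a device, queues, and pre-recorded command lists already exist; TimeSubmissionMs is a made-up helper:

    Code:
    // Sketch: timing one submission to completion, as an overlap test would.
    #include <windows.h>
    #include <d3d12.h>
    #include <chrono>
    #include <wrl/client.h>
    using Microsoft::WRL::ComPtr;

    double TimeSubmissionMs(ID3D12Device* device, ID3D12CommandQueue* queue,
                            ID3D12CommandList* list)
    {
        ComPtr<ID3D12Fence> fence;
        device->CreateFence(0, D3D12_FENCE_FLAG_NONE, IID_PPV_ARGS(&fence));
        HANDLE evt = CreateEvent(nullptr, FALSE, FALSE, nullptr);

        auto t0 = std::chrono::high_resolution_clock::now();
        queue->ExecuteCommandLists(1, &list);
        queue->Signal(fence.Get(), 1);
        fence->SetEventOnCompletion(1, evt); // block until the GPU is done
        WaitForSingleObject(evt, INFINITE);
        CloseHandle(evt);

        return std::chrono::duration<double, std::milli>(
                   std::chrono::high_resolution_clock::now() - t0).count();
    }

    // Time graphics alone, compute alone, then both on separate queues.
    // Combined time near max(tGfx, tCompute) means the work overlapped;
    // near tGfx + tCompute means the driver serialized it.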

    And let's be real: if Nvidia/Oxide are claiming that Nvidia is going to fix this in the driver, does it really matter if it's software or hardware? I highly doubt Nvidia is going to release a driver that either cripples performance or side-grades it.

    Again, people keep talking about the implementation and the method being used, but they completely ignore the final result, which is what everyone should care about: the performance. In AoS, from 1080p through 4K, the difference in performance between a 980 Ti and a Fury X is negligible; they essentially perform the same. And the game is literally designed to be a draw-call benchmark; it's not real insight into how DX12 will perform in an actual title. Proof of this is that the 290X and the 980 perform nearly identically to a Fury X/980 Ti. Obviously this game isn't testing a graphics card's performance, it's testing the bottleneck of the scheduler.

    What annoys me is that aside from PC Perspective's podcast and Hilbert (through multiple posts), no one is talking about this. Everyone is just circle-jerking each other about a bunch of irrelevant bull****. Three days ago there was a post on Reddit that was like "CONFIRMED, NVIDIA DOESN'T SUPPORT DX12 ASYNC SHADERS"; now today there is "OXIDE CONFIRMS NVIDIA SUPPORTS ASYNC". Or, the example I already gave: Ars Technica writes an entire article whose conclusion is that AMD's $300 card outperforms Nvidia's $650 one, yet they don't even ****ing investigate why, or compare it to AMD's $650 offering.

    Like I said in my other post, Nvidia is to blame too. They should be way more open and upfront about this stuff. Not only this, but about the Kepler bugs in The Witcher 3, about the SLI memory issues (now fixed), about why their drivers ****ing suck on Windows 10, etc. They don't even post changelogs in their drivers anymore. The Nvidia guy posted something like "I'd have to talk to the docs team"; like, wtf is that? They have an entire team dedicated to documentation and they release a ****ing driver with zero ****ing documentation. It was literally copied and pasted from the previous release with a find-and-replace to change the driver number.

    Tech review sites really need to step up too. I get that some don't focus on the low-level, technical stuff, but the fact that I have to go to some forum and read through 70 pages of AMD/Nvidia fanboy nonsense just to understand what is happening is bull****. I really miss Anand from AnandTech. The guy would have had a 10-page breakdown of the entire architecture, what's occurring, why it's occurring, with actual developers commenting on it. But since Ryan or whoever sold it to Tom's Hardware, that site has literally turned into a giant ****ing billboard of advertising garbage; there is more ad space than article space on that site now. Anyway, I'm ranting, but it's frustrating. People keep posting their uneducated, uninformed opinions based on what some armchair-googling nerd thinks is happening. I'm not going to pretend to understand how the async designs of AMD's and Nvidia's architectures differ, and I'm not going to pretend that I know why AMD's or Nvidia's implementation is better or worse. What I do know and can see is the performance of both in one specific title that tests a specific part of the card, and they are EQUAL. So the fanboy circle jerk of "Nvidia is better" or "AMD is better" can end now.
     
    Last edited: Sep 5, 2015
  15. Khronikos

    Khronikos Master Guru

    Messages:
    722
    Likes Received:
    53
    GPU:
    EVGA SC2 1080ti
    Haha, they own 85% of the market, bud, and have a huge leg up with drivers and the graphical capabilities in GameWorks. They aren't ever going to have problems selling cards.
     

  16. riardon

    riardon Active Member

    Messages:
    65
    Likes Received:
    0
    GPU:
    Nvidia GTX760
    I wanna hug and kiss you. #nohomo But what you said is the absolute truth, beyond fanboyism and jumping to conclusions. Everyone is rushing to conclusions nowadays. I also agree 100% about Nvidia's failings lately; their communication totally failed. It began with the GTX 970 memory fiasco, and after that they did what you described. And yes, they stopped even posting a changelog inside their drivers' release-notes PDF. I suspect that almost everything is broken now and they decided to conceal the weak spots by not publishing them. I feel you can hardly find the truth on any tech site these days.
     
  17. Clouseau

    Clouseau Ancient Guru

    Messages:
    2,375
    Likes Received:
    250
    GPU:
    ASUS STRIX GTX 1080
    With a comment like "the worse a CPU is, the more difference DX12 makes," isn't it obvious that comparisons will be made between less expensive gear and top enthusiast gear? Leaving AMD and Nvidia aside, DX12, along with G-Sync and FreeSync, lets mainstream gear appear to deliver the same visual experience as top enthusiast gear. If this trend keeps going, top gear like Fury Xs and Titans will not have a place in the market. Phones and tablets may even have a shot at becoming the new must-have gaming rigs. The only thing left to differentiate the experience will be the display device used. GPUs and CPUs will be rendered moot; one will just end up needing "good enough."
     
  18. pharma

    pharma Maha Guru

    Messages:
    1,095
    Likes Received:
    123
    GPU:
    Asus Strix GTX 1080
    Even though Nvidia's documents are lacking with regard to the specifics of async compute and other DX12 features, the link below gives a good idea of the concept. It is currently assumed this feature works the same on Maxwell 2 hardware as it does on Kepler/Maxwell 1 hardware, so keep in mind it may or may not function the same once official documentation is released. The article (written three days ago) should at least provide a good starting point if you are interested in the concepts.

    http://wccftech.com/nvidia-amd-directx-12-graphic-card-list-features-explained/4/
     
  19. HeavyHemi

    HeavyHemi Ancient Guru

    Messages:
    6,284
    Likes Received:
    604
    GPU:
    GTX1080Ti
    Uh... DX12 somewhat helps alleviate the CPU 'bottleneck'. It doesn't eliminate it, nor does this single benchmark likely reflect the future of gaming. Phones and tablets cannot give you the immersive experience of a desktop gaming rig; Angry Birds and Arkham Knight are different gaming paradigms and will remain that way. "Good enough" is a subjective term. My 'good enough' is probably a bit higher than others'.
     
  20. Ryu5uzaku

    Ryu5uzaku Ancient Guru

    Messages:
    6,773
    Likes Received:
    110
    GPU:
    5700 XT UV 1950~
    Tbh, I've always thought you were the most sensible guy here, and that thought still goes strong. Very good post, man, very good.
     
