Discussion in 'Frontpage news' started by (.)(.), Sep 5, 2015.
This is just my opinion, but there's just no way I can take the Ashes of the Singularity developers seriously when you have to spend $45.00 to get this benchmark for testing. This is obviously a marketing ploy to stir up controversy and get attention for the game.
If they want credibility they'll need to release the benchmark free of charge to everyone. Otherwise I'm not listening to some game developer who might well be shilling for AMD. I wasn't born yesterday and I can smell PR at work here.
This pretty much sums it up.
About the software vs. hardware issue: two years ago AMD released software-implemented frame pacing for GCN 1.0 in CrossFire. It worked out much better than anticipated, delivering a feature that nVidia had had at the hardware level since Kepler.
Self-defeating, but let them try;
it will be fun to watch them fail.
Software can't replace hardware.
A software scheduler has replaced Fermi's hardware scheduler since Kepler. Just look how badly they're doing :banana:
So all this does is confirm what was said previously: context switching will need to be used because Nvidia hardware lacks ASYNC shaders. If software has to be used, this is just a workaround. This article is written like there's some new revelation.
This is still software emulation of true ASYNC shaders. Nvidia will probably need to create custom scheduling for each game, which I don't see happening. What if the game changes? Sounds like a driver nightmare.
You can guarantee that if there were no performance penalty for this, AMD wouldn't have made it a hardware feature. FCAT will be very telling once the latency numbers go through the roof. John Carmack even came out and said GCN was the way to go for VR and that Nvidia is a non-starter.
“Maxwell 2: Queues in software, work distributor in software (context switching), asynchronous warps in hardware, DMA engines in hardware, CUDA cores in hardware.
GCN: Queues/work distributor/asynchronous compute engines (ACEs/graphics command processor) in hardware, copy (DMA engines) in hardware, CUs in hardware.”
So it does not appear everything is software-based.
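To make the hardware-vs-software scheduling argument above concrete, here's a toy model of why it matters (my own illustration, not real GPU code — the workload numbers, function names, and switch cost are all invented for the sketch):

```python
# Toy model (not real GPU code): compare per-frame times when graphics and
# compute workloads overlap on independent hardware queues (GCN-style)
# versus being serialized by software-driven context switches.

def frame_time_async(gfx_ms, compute_ms):
    # Hardware queues let compute fill idle shader cycles, so the frame
    # takes roughly as long as the larger of the two workloads.
    return max(gfx_ms, compute_ms)

def frame_time_context_switch(gfx_ms, compute_ms, switch_cost_ms):
    # Software scheduling runs the workloads back to back and pays a
    # context-switch penalty in between.
    return gfx_ms + compute_ms + switch_cost_ms

gfx, compute = 10.0, 4.0  # hypothetical per-frame workloads in ms
print(frame_time_async(gfx, compute))                 # 10.0
print(frame_time_context_switch(gfx, compute, 0.5))   # 14.5
```

Under these made-up numbers the overlapped path finishes a frame in 10 ms while the serialized path needs 14.5 ms — which is the gap the whole thread is arguing about.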
But maybe tell that to all the people on the planet playing console ports in software emulators on a PC...
or consider the legal/illegal (depending on country) use of software to decrypt DVDs/BDs as another example.
So, NVIDIA's gonna feel the same boost as AMD?
We gotta stay ahead of the consoles man!
Async compute is there to improve efficiency and performance, not make it worse by again being reliant on the CPU.
Then just think rationally about why the ARK DX12 patch got delayed and why we still don't have a DX12 3DMark.
Maybe because those would put one team even deeper in sh*t?...
I own both a single GTX 980 Ti in my 5960X rig and twin R9 290s (CF) in my 4790K rig, so I guess I'm not going to panic either way.
Here's what I see. First, I agree with Denial's comments.
Second, the cold hard reality is that Nvidia has ~80% market share and AMD has 20%. I expected AMD to come out with all guns blazing, and they have. Will it have much effect on the average buyer? Not sure. Will it turn around AMD's graphics division? Not sure, but I doubt it.
First, in the higher-end market, Fiji appears to be in demand, but perhaps that's because supply is LOW. Second, the BUZZ about the Nano sure gets quiet when you see the price.
I think the ASYNC shader "issue" was a PR idea created by AMD to sell their lower-end cards at a higher price than before, and an effort to cut into Nvidia's bread and butter. I give them credit, it sure is getting play.
We'll see how it turns out.
Come on, I think I can answer both questions. I would say it is... money.
If you don't think it reeks of PR, I feel bad for you.
See, over at that site they act smart and just spew all day, but here these guys are smart, stay calm, and don't buy into everything so easily.
We have older folks here, while over there they have 25-year-old know-it-alls.
I love you smart guys here, even Denial; if I were smart, I would have said what he did.
What makes hardware run?
At this point, I would say the only people who look stupid are those who accused the Oxide team of having a one-sided bond with AMD.
They were falsely accused. They came out with clean, unbiased information. And now you have confirmation from nV that their 'implementation' was just on paper. So I would kindly ask people here to reflect on their own actions first before posting any more accusations or ridiculous comments meant only to harm and not to help anything at all.
PS: Anyone is free to express himself/herself; just remember, 'you are what you do', and some comments here are really toxic.
Nvidia still doesn't have an answer about this, shall we say, issue.
If, as that Oxide dev says, a driver will make Async Compute work on Maxwell and it will work "natively", why then did Nvidia push Oxide to disable Async in that game:
"Oxide’s developer also revealed that NVIDIA’s Maxwell does not support natively Async Compute, and that NVIDIA asked Oxide to disable it for its graphics cards."
It's obvious that the driver will make Async Compute work only in software, not hardware. So Nvidia still hasn't responded.
“Personally, I think one could just as easily make the claim that we were biased toward Nvidia as the only ‘vendor’ specific code is for Nvidia where we had to shutdown async compute. By vendor specific, I mean a case where we look at the Vendor ID and make changes to our rendering path. Curiously, their driver reported this feature was functional but attempting to use it was an unmitigated disaster in terms of performance and conformance so we shut it down on their hardware. As far as I know, Maxwell doesn’t really have Async Compute so I don’t know why their driver was trying to expose that. The only other thing that is different between them is that Nvidia does fall into Tier 2 class binding hardware instead of Tier 3 like AMD which requires a little bit more CPU overhead in D3D12, but I don’t think it ended up being very significant. This isn’t a vendor specific path, as it’s responding to capabilities the driver reports.” Oxide dev.
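The pattern that quote describes — trusting the driver's reported capability only after checking how it behaves in practice, and keying the fallback off the Vendor ID — could be sketched roughly like this (purely illustrative Python; the PCI vendor IDs are real, but the function and flag names are invented and this is not Oxide's actual code):

```python
# Illustrative sketch of a vendor-specific fallback path. Only the PCI
# vendor IDs below are real; everything else is a made-up stand-in for
# what an engine might do when a driver-reported capability misbehaves.

NVIDIA_VENDOR_ID = 0x10DE  # real PCI vendor ID for NVIDIA
AMD_VENDOR_ID = 0x1002     # real PCI vendor ID for AMD

def choose_render_path(vendor_id, driver_reports_async, runtime_check_passed):
    # The driver said the feature is functional, but using it was an
    # "unmitigated disaster" -> shut it down on that vendor's hardware.
    if vendor_id == NVIDIA_VENDOR_ID and driver_reports_async and not runtime_check_passed:
        return "async_disabled"
    return "async_compute" if driver_reports_async else "async_disabled"

# The Maxwell case as described in the quote: reported, but broken in use.
print(choose_render_path(NVIDIA_VENDOR_ID, True, False))  # async_disabled
print(choose_render_path(AMD_VENDOR_ID, True, True))      # async_compute
```

Note the dev's distinction: the engine responds to what the driver reports, and the Vendor ID check only exists to override a capability that turned out not to hold up.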
Nvidia doesn't have to try. They literally have to do nothing. Their current implementation, without Async, performs just as well as AMD's hardware-based Async one. Them trying is just a bonus for Nvidia users.
AMD's latency numbers with their Async implementation are already through the roof. Nvidia's go up that high too, but only after millions of calls; AMD's numbers start high and never go up.
Also, I can't find a single place where John Carmack said that. There was a recent post where some random guy said he heard someone from Oculus say that, but there is no official statement from Oculus. Further, I have a DK2 that works fine on my 980, so saying it's a "non-starter" is just bull****.
Lol, or maybe the ARK team, which can't get its ****ty-looking game to run above 10 FPS, doesn't have the talent to convert it to DX12 in a week.
I personally think Oxide handled the entire thing well. They exposed that there was an issue with Nvidia's handling of A-Sync and essentially forced Nvidia to take a look at it. My problem is with the tech media and fanboy bull**** that cherry-picked the results from Oxide's benchmarks in order to stir up non-existent controversy.
They didn't come out and answer anything, no. But they are clearly working with Oxide on an Async solution. The bottom line is, Nvidia currently ties the Fury X's Async implementation without it. And I'm pretty positive that any solution Nvidia comes up with isn't going to negatively affect performance, regardless of whether it's "software" or hardware based.
If you happen to see the graphs I made from that user-made benchmark on Beyond3D, you can see that AMD did something wrong with the Fury X driver/hardware.
The R9 290/390(X) has nearly 100% efficiency and its execution times have fine granularity. But with the Fury X, each frame has either 0/25/75/100% efficiency, so there are about four slots into which it can fall. Maybe the Fury X owner running the tests has something wrong in his system, or the AMD drivers are still not good for the Fury X.
If I had a download link for the test, I would run quite a few tests using different code paths in the driver.
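For anyone wondering where an "efficiency" percentage like that could come from, here is one plausible way to derive it from raw timings (my own formula as a sketch, not necessarily the exact method behind those Beyond3D graphs): run graphics alone, compute alone, then both together, and measure how much of the compute time was hidden inside the graphics time.

```python
# Sketch of an async-compute efficiency metric derived from three timings:
# graphics alone, compute alone, and both submitted together. This is an
# illustrative formula, not the exact method used for the Beyond3D graphs.

def async_efficiency(gfx_ms, compute_ms, combined_ms):
    # 1.0 -> compute fully hidden behind graphics (combined == gfx alone)
    # 0.0 -> fully serialized (combined == gfx + compute)
    hidden = (gfx_ms + compute_ms) - combined_ms
    return max(0.0, min(1.0, hidden / compute_ms))

print(async_efficiency(10.0, 4.0, 10.0))  # 1.0  -> perfect overlap
print(async_efficiency(10.0, 4.0, 14.0))  # 0.0  -> serialized
print(async_efficiency(10.0, 4.0, 13.0))  # 0.25 -> a coarse "slot" value
```

A card landing only on a few discrete values of this metric (0/25/75/100%) would look exactly like the Fury X behavior described above, whereas fine granularity means the overlap can take almost any value.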
Again, don't treat this test code as a benchmark; it is not. The code-path instructions are not optimized for any GPU... there were 4 versions of this test in 1 day, plus 2 additional ones from Jawed for testing on GCN 1.0.
But again, this test is not intended to run well or badly, just to run (just to see whether async compute is generated or not), and to see which GPUs actually have async compute (well, for the moment, until Nvidia sorts it out).
At base, MDolenc and the other devs on Beyond3D wanted to check whether async was working on Maxwell or not. The rest came from those devs' curiosity to see what happens at the GCN level...
The load on Fury is so low that it's practically impossible to see what happens internally. It seems only 1 wavefront out of 64 is used; this could mean the code is too small and the schedulers pack everything onto 1 wave. Seriously, this would be a nice discussion to have with an AMD engineer who has worked on this architecture, because we're missing too much information about how it works at this deep level.
The way I see it, I'd rather just stand by and watch all this unfold. I'm sure Nvidia's solution to Async will be just as elegant as AMD's, and if not, they will make it up in another area, so ultimately the performance difference will be negligible. But people love to create controversy where there is none. The fact is there are hardly any DX12-capable applications out there at the moment, and we've yet to see a card from either camp that offers full hardware support for DX12. Hell, we've only just started using Windows 10; drivers will take time to mature, but it's like some people expect things to just work overnight.