Discussion in 'Frontpage news' started by (.)(.), Sep 1, 2015.
That much I know, but given what we know now about Maxwell 2.0, that table is pretty damn spot on.
What did I say in the other thread?
Just ignore the junk from both sides. Sad they have to resort to petty crap. Or it's just trolls making **** up and posting benchmarks and tables.
We've seen it before. Hardly original.
Did someone really think that current GPUs would be fully DX12 capable? I thought it was pretty clear that there are features missing here and there, considering that all of these GPUs were made well before DX12 was finished.
Have you read what you just said in context?
Not quite sure what you mean; I think he has a point, to be honest...
This whole DX12 thing smacks of unsubstantiated claims and trolling, then people jumping on the bandwagon thinking they can give credibility to what is most likely made-up garbage.
There may be some issues, but until DX12 launches officially I could claim anything...
Wait, are they trying to influence buying habits, or is it just petty "your GPU can't do this, so there... my games are gonna get a 5 fps improvement over yours"
or "my hair/dust particles are gonna look amazing"?
****, wrong thread. loooool.
Lol ... noobs keep changing the chart. The primary DX12 feature level chart can be found at the link below and is modified daily by people in the field.
After reading this: https://www.reddit.com/r/nvidia/comments/3j5e9b/analysis_async_compute_is_it_true_nvidia_cant_do/ and running the DX12 async compute test program, I would say Maxwell can do async compute, and do it well, as long as the command list depth stays at 31 or below (GCN can go to 128, so that's the difference, but up to 31 the 980 Ti is way faster).
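For what it's worth, results from a test like that one are usually read by comparing timings: if running graphics and compute together takes about as long as the slower of the two, the work overlapped; if it takes about their sum, it was serialized. A minimal sketch of that interpretation, with made-up illustrative timings (not real benchmark data, and not the actual test program's logic):

```python
# Hypothetical helper for interpreting an async-compute timing test.
# All timing numbers below are invented for illustration.

def classify_async(t_graphics_ms, t_compute_ms, t_combined_ms, tol=0.15):
    """Classify whether graphics and compute work overlapped.

    Combined time near max(graphics, compute) suggests concurrent
    execution; combined time near their sum suggests serialization.
    """
    overlap = max(t_graphics_ms, t_compute_ms)
    serial = t_graphics_ms + t_compute_ms
    if abs(t_combined_ms - overlap) / overlap <= tol:
        return "concurrent"
    if abs(t_combined_ms - serial) / serial <= tol:
        return "serialized"
    return "partial/unclear"

print(classify_async(10.0, 8.0, 10.5))  # near the max -> concurrent
print(classify_async(10.0, 8.0, 18.2))  # near the sum -> serialized
```

So the "works up to 31" claim would show up as combined times staying near the max until the queue depth limit is hit, then jumping toward the sum.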
AMD and Nvidia are not comparable this time at ALL!
AMD came out and honestly explained everything, disclosing their own missing features before anyone even went looking for them.
And AMD isn't trying to spin their lack of a few features as an "advantage" now, or at any time before.
While it is inconvenient that some active fella keeps changing it, he only adds stuff; he is not altering the originally present information.
He added async shaders, cross-node sharing, UAVs at every stage, maximum sample count for UAV-only rendering, and logical blend operations.
Well, and he removed MS's reference. But looking at what he added, the entries are mostly the same for AMD and nV, so it does not look like he is taking sides.
(Though I would mark async shaders on nV as partial, since it works, just not as intended => which may mean a poor implementation.)
Yeah, I'm a troll now.
Prove me wrong then on the specs of Maxwell 2.0 cards. As far as everyone knows this ***** is real: pretty much every card below the Titan X and 980 Ti (suspected to have 2 DMAs) is gimped for sure.
If that is a troll post, then by all means report it.
There is a reason everyone is going mad on the Nvidia forums, and no, it's not because of trolls.
Something to look at: there's a bit of a problem on the MS front.
Many features that were optional for 12_1 have suddenly become required, and some 12_0 features that were optional have suddenly moved back to required, or vice versa (let alone the problem with Tiers 1/2/3, which keep moving up and down between required and optional).
Maybe MS should have looked at 12_0/12_1 a bit differently: take all the features supported by everyone, and put the rest on the optional side.
Even today, there's not one driver from AMD, Intel or Nvidia that exposes all the features available on their architecture.
With each driver release we find new cap bits that were not exposed, or only partially exposed.
Example, Cat 15.8 beta: the Geometry Shader bypass performance cap bit finally exposed.
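Cap bits are just per-feature flags a driver reports, so "finally exposed" amounts to a new flag appearing in a later driver's set. A toy sketch of that idea, with invented flag names (these are not real D3D12 cap names):

```python
# Toy model of driver cap bits: each driver release exposes a set of
# feature flags. Flag names here are invented for illustration only.

OLD_DRIVER_CAPS = {"typed_uav_load", "conservative_raster"}
NEW_DRIVER_CAPS = {"typed_uav_load", "conservative_raster",
                   "gs_bypass_perf"}  # newly exposed, as in the Cat 15.8 example

# Diffing the two sets shows what the new driver started reporting.
newly_exposed = NEW_DRIVER_CAPS - OLD_DRIVER_CAPS
print(sorted(newly_exposed))  # ['gs_bypass_perf']
```

This is why the Wiki table keeps shifting: each driver release can change what the hardware appears to support, even though the silicon hasn't changed.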
Currently there is only one source for the table, and that is the Wiki. A slew of experts from MS, Intel, Nvidia and AMD modify the Wiki table on a daily basis. Do you really think his "cut & paste" interpretation of what the table should look like is in some way superior to theirs?
Considering his resulting table, it's understandable why he did not include a link to the source.
It is precisely this sort of thing that makes me appreciate the stupidity of the typical consumer. You buy something and expect it to 100% support a feature that goes live a year later, and even then it's insignificant because the games won't even exist for some time after that. But the "green team" and "red team" go on the defensive/offensive, muddying the issue with feelings and uneducated opinions. Yeah, not many quantitative facts in my post either, but seriously, can your GPU play current-gen games perfectly? Yes. Does AMD show an advantage in one single AMD-preferred demo? Absolutely. Does it matter at all?
Really? I thought I read that Maxwell is meant to be able to do 32, while GCN 1.0 is 2, GCN 1.1 is 4 and GCN 1.2 is 8.
Lol, that reddit thread. Did you guys even read the comments? It seems like no one knows how to interpret the results, including the guy who wrote the program.
Interesting read. DX12 is going to be interesting.
It is 1 + 2 mixed (GCN 1.0), 1 + 8 (GCN 1.1) and 1 + 8 (GCN 1.2), and for compute-only it's 2, 8 and 8 respectively (though the 260-series GCN 1.1 parts seem to be 1 + 2, while the 290s are 1 + 8). But yes, Maxwell should have 1 + 31 mixed and 32 for compute-only.
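Assuming work beyond the hardware queue depth simply has to wait for a free slot, here's a rough sketch of what those depths would mean for a large batch of compute kernels. The depths are the compute-only figures claimed in this thread, and the "waves" model is a deliberate simplification (real GPU scheduling is far more involved):

```python
import math

def dispatch_waves(num_kernels, queue_depth):
    """Number of back-to-back waves needed if at most queue_depth
    kernels can be in flight at once (simplified model)."""
    return math.ceil(num_kernels / queue_depth)

# Compute-only queue depths as claimed in the posts above.
for name, depth in [("GCN 1.0", 2), ("GCN 1.1", 8),
                    ("GCN 1.2", 8), ("Maxwell 2", 32)]:
    waves = dispatch_waves(128, depth)
    print(f"{name}: {waves} waves for 128 kernels")
```

Under this toy model, a depth of 2 needs 64 waves for 128 kernels while a depth of 32 needs only 4, which is why the depth numbers get argued about even though per-wave speed matters just as much.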
The program was an attempt to produce some data; it is not definitive, and I believe it was written by a graphics developer (a lot of the folks there, where the program originated, are game developers). It's basically a work in progress and may continue to be so well after the first DX12 game (Fable Legends) is released in Oct. 2015.