Windows 10 build 10586 with WDDM 2.0 support for 400/500-series?

Discussion in 'Videocards - NVIDIA GeForce Drivers Section' started by RzrTrek, Nov 6, 2015.

  1. Yxskaft

    Yxskaft Maha Guru

    Messages:
    1,438
    Likes Received:
    97
    GPU:
    GTX Titan Sli
    AMD has never said the HD 5000 and 6000 series fulfill all the requirements for DX12.

    Fermi and Kepler have similarities, and that's probably the biggest reason Fermi is also getting DX12 support: providing it for Kepler should give Fermi a piggyback ride.

    There are benefits for Nvidia and game studios in supporting DX12 on Fermi even if it's low-end.
    Game studios can make their games DX12-only and still have them work on the five-year-old Fermi chips, lowering the need for fallbacks for older hardware.

    Nvidia can stick to DX12 optimizations for all its architectures.

    And reputation among consumers should never be underestimated. You already see people pointing out that Nvidia supports DX12 on older products than AMD does (whether it's actually possible is another story).
     
    Last edited: Nov 8, 2015
  2. km52

    km52 Member Guru

    Messages:
    101
    Likes Received:
    0
    GPU:
    EVGA GTX 1070 FTW
    No I wasn't.

    Xeon X3470 @ 3.84 GHz + GTX 470 (Quadro 5000 Bios Mod) @ 772/1544/1804

    Why don't you download the 358.70 and see for yourself, bigmouth?
     
  3. PrMinisterGR

    PrMinisterGR Ancient Guru

    Messages:
    7,108
    Likes Received:
    183
    GPU:
    Sapphire 7970 Quadrobake
    Aren't the DX12 draw calls way too low? (Not saying you're lying, I can't test it myself anyway.)
     
  4. dr_rus

    dr_rus Ancient Guru

    Messages:
    2,985
    Likes Received:
    333
    GPU:
    RTX 2080 OC
    To fulfill the requirements for a DX12 driver, your h/w must be able to support the lowest feature level provided by DX12 - that's FL11_0, which both the 5000 and 6000 series Radeons support just fine. The only reason AMD decided not to bother with DX12 drivers for them is the lack of resources and the generally low performance of these parts even in DX11. That's basically the same reason NV could have used to avoid supporting DX12 on Fermi.
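    The "lowest feature level" check above could be sketched like this. The hex values mirror the real ordered `D3D_FEATURE_LEVEL` encoding (FL11_0 is `0xb000`, FL12_1 is `0xc100`); the enum and function names are hypothetical stand-ins, not the actual D3D12 API:

    ```cpp
    #include <cstdint>
    #include <cstdio>

    // Stand-ins mirroring the real D3D_FEATURE_LEVEL values, which are
    // ordered integers: a higher value implies the lower ones.
    enum FeatureLevel : uint32_t {
        FL_10_1 = 0xa100, // DX10-class h/w: below the D3D12 floor
        FL_11_0 = 0xb000, // the lowest level D3D12 accepts
        FL_11_1 = 0xb100,
        FL_12_0 = 0xc000,
        FL_12_1 = 0xc100,
    };

    // A GPU qualifies for a D3D12 driver if its maximum supported
    // feature level is at least FL11_0.
    bool qualifies_for_d3d12(FeatureLevel max_supported) {
        return max_supported >= FL_11_0;
    }

    int main() {
        // Fermi (FL11_0) and the VLIW Radeons (FL11_0) both pass.
        std::printf("Fermi qualifies: %d\n", qualifies_for_d3d12(FL_11_0));
        return 0;
    }
    ```

    Because the enum values are ordered, a single `>=` comparison is all the real runtime needs to gate driver support.
    
    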

    Fermi and Kepler have far fewer similarities than Kepler and Maxwell, for example. They are probably closer architecturally than VLIW5/4 and GCN, if that's what you're talking about, but still: Fermi is very different from Kepler. NV postponing the Fermi DX12 driver while putting out the Kepler and Maxwell driver at the same time should illustrate this to you.

    This is a misconception which I think will be proven wrong rather soon. There is no common "DX12"; there are several feature levels inside the API which are rather different in their capabilities. DX12 requires a lot of hand work on the part of the developer, and thus supporting each additional FL in an application means a lot of additional hand work. This is why I fully expect D3D12 renderers to use FL12_x h/w only and support lower-tier h/w via "compatibility" D3D11 renderers.

    I mean, think about it - there is no incentive of any kind for anyone to support FL11_x h/w in a D3D12 renderer. A dev will create an FL12_0 renderer for XBO and port it to FL12_x h/w on PC. They'll use the much simpler D3D11 for compatibility with older h/w, as this means less money spent for them. An IHV is interested in selling new videocards and won't push for FL11_x usage, as this means that to get DX12 running, people will have to buy new FL12_x videocards.
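    The split being predicted here, a D3D12 path for FL12_0+ hardware and a "compatibility" D3D11 path for everything older, boils down to one probe at startup. A minimal sketch, with hypothetical names; in a real engine the capability report would come from `D3D12CreateDevice`/`D3D11CreateDevice` probes rather than a filled-in struct:

    ```cpp
    #include <cstdio>
    #include <string>

    // Hypothetical capability report for the installed GPU/OS combo.
    struct GpuCaps {
        bool has_d3d12_runtime;      // does the OS expose D3D12 at all?
        unsigned max_feature_level;  // real encoding: 0xc000 == FL12_0
    };

    // The D3D12 renderer targets FL12_0+ hardware only; anything older
    // is handed to the "compatibility" D3D11 renderer.
    std::string pick_renderer(const GpuCaps& caps) {
        if (caps.has_d3d12_runtime && caps.max_feature_level >= 0xc000)
            return "d3d12";
        return "d3d11_compat";
    }

    int main() {
        GpuCaps fermi{true, 0xb000};    // FL11_0: D3D12-capable, pre-FL12
        GpuCaps maxwell2{true, 0xc100}; // FL12_1
        std::printf("%s / %s\n", pick_renderer(fermi).c_str(),
                    pick_renderer(maxwell2).c_str());
        return 0;
    }
    ```

    Under this policy a Fermi card ends up on the D3D11 path even though it technically qualifies for a D3D12 driver, which is exactly the incentive problem described above.
    
    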

    Highly unlikely if you consider that Fermi and Kepler are DX11 and single-queue while Maxwell is DX12 and multi-queue. I'm 100% sure that they will optimize Maxwell, and Maxwell only, for DX12.

    The cost of reputation is an important thing as well.
     

  5. siriq

    siriq Master Guru

    Messages:
    788
    Likes Received:
    14
    GPU:
    Evga GTX 570 Classified
    Well it won't run here at all. Win ver: 10586
    http://postimg.org/image/hufk9ujub/

    Just tried Ashes bench. Same thing.
     
    Last edited: Nov 8, 2015
  6. Yxskaft

    Yxskaft Maha Guru

    Messages:
    1,438
    Likes Received:
    97
    GPU:
    GTX Titan Sli

    DX11 feature level 11_0 and DX12 feature level 11_0 aren't the same. The HD 5000 and 6000 series supporting DX11 feature level 11_0 in no way proves that they'd be able to support DX12 feature level 11_0.

    If it weren't for Windows 10 still having to gain enough market share, studios would move to the DX12 API directly and target the older hardware by using feature level 11_0. For Nvidia, a game supporting feature level 11_0 would cover Fermi to Maxwell 1. For AMD and Intel, feature level 11_1 would cover Haswell, Broadwell and GCN 1.0.
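    The coverage claim above can be put in a small table. This is a sketch, assuming the maximum feature levels per architecture exactly as the post states them (the table and function names are illustrative, not from any real API):

    ```cpp
    #include <cstdio>
    #include <map>
    #include <string>
    #include <vector>

    // Max feature level per architecture, as claimed above
    // (values use the real ordered D3D_FEATURE_LEVEL encoding).
    const std::map<std::string, unsigned> kMaxFL = {
        {"Fermi",     0xb000}, // FL11_0
        {"Kepler",    0xb000},
        {"Maxwell 1", 0xb000},
        {"GCN 1.0",   0xb100}, // FL11_1
        {"Haswell",   0xb100},
        {"Broadwell", 0xb100},
    };

    // Which architectures can run a game targeting min_fl?
    std::vector<std::string> covered_by(unsigned min_fl) {
        std::vector<std::string> out;
        for (const auto& [arch, fl] : kMaxFL)
            if (fl >= min_fl) out.push_back(arch);
        return out;
    }

    int main() {
        // An FL11_0 game covers every architecture in the table;
        // an FL11_1 game drops the three FL11_0-only NV parts.
        std::printf("FL11_0 covers %zu, FL11_1 covers %zu\n",
                    covered_by(0xb000).size(), covered_by(0xb100).size());
        return 0;
    }
    ```
    
    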

    It is simpler to have a single API and do the necessary work to support older feature levels than to support two different APIs. And when it comes to adoption of engines supporting DX12, ensuring compatibility across as many GPU generations as possible is extremely important.

    DX12's biggest enemy is its Windows 10 exclusivity. Its hardware support dates back to Fermi and HD 7000 series.

    You bring up cost. Those of us outside Nvidia obviously don't know what the cost of providing DX12 support for Fermi is, though Nvidia evidently deemed it acceptable.
    Nvidia generally provides long support for both operating systems and its hardware; it seems to treat long support time as an investment.
     
  7. km52

    km52 Member Guru

    Messages:
    101
    Likes Received:
    0
    GPU:
    EVGA GTX 1070 FTW
    You're using an older 3DMark build: yours is v1.5.884 while mine is v1.5.915 (the latest Steam release at the time).
    Also, the render resolution should be left at the default 720p; it's a CPU/API test, not a GPU test.
     
  8. PrMinisterGR

    PrMinisterGR Ancient Guru

    Messages:
    7,108
    Likes Received:
    183
    GPU:
    Sapphire 7970 Quadrobake
    This is the second thread in which you state things in an authoritative style and expect everyone to believe them, with no proof to back them up. The entire sentence above is completely wrong.

    The 2/3/4/5/6000 series from AMD are part of the Terascale architecture, which is a VLIW design. The difference between that and Fermi/GCN couldn't be greater.
    There's a schematic in the article that explains it.

    Furthermore:
    AMD isn't skipping DX12 support out of laziness. DX12 has compute requirements that a specialized VLIW architecture like Terascale physically cannot meet. It can't schedule wavefronts of instructions in parallel, and it cannot switch between tasks. Fermi is a SIMD architecture with a hardware scheduler, and it can. Simple as that.

    This is more or less correct, although it seems that Kepler is a cut-down-compute Fermi tuned for graphics work.
    There are no "compatibility" renderers. You're confusing Feature Levels, which have existed since the days of DX11, with DX point releases like DX11.1. If a game supports DX12, it has to run on all feature levels. That's it. Unless the programmers specifically request a feature level, the DX compiler adjusts it to the hardware it finds.
    Furthermore, XboxOne developers have access to Feature Level 11_1. Everything is in the link above.

    Maxwell 2.0 is multi-queue, but it can only run one queue at a time, unlike GCN, which can run them simultaneously. By your way of thinking, developers shouldn't optimize for Maxwell either and should just wait for Pascal.
     
  9. siriq

    siriq Master Guru

    Messages:
    788
    Likes Received:
    14
    GPU:
    Evga GTX 570 Classified
    Doesn't matter, since not a single DX12 program is able to run with that driver on my rig. I tested a few.
     
  10. siriq

    siriq Master Guru

    Messages:
    788
    Likes Received:
    14
    GPU:
    Evga GTX 570 Classified
    I gave this driver one more chance. I updated 3DMark and it went through, but I was able to run zero DX12 programs. Tried Zelda, Star Wars, Sun Temple and of course Ashes of the Singularity as well. All gave me the same error message. This driver is far from ready.
    http://www.3dmark.com/3dm/9211957

    I would assume this is an early alpha driver. Nothing to see here yet. Now I'll go back to the latest one. I got a lower score in the DX11 API test.
     
    Last edited: Nov 9, 2015

  11. Reddoguk

    Reddoguk Ancient Guru

    Messages:
    1,865
    Likes Received:
    180
    GPU:
    Guru3d GTX 980 G1
    Windows 10 got an update to 10590; I'm looking for info on the update but not finding anything yet.

    I hate that with Win10 you have no idea what you're installing when you do a Cumulative Update.

    I mostly think these updates involve patching security holes and CPU weaknesses in Hyper-V.
     
  12. tsunami231

    tsunami231 Ancient Guru

    Messages:
    10,289
    Likes Received:
    527
    GPU:
    EVGA 1070Ti Black
    At least there is info of some kind on the updates when they're released. At the beginning there was no info at all.
     
  13. dr_rus

    dr_rus Ancient Guru

    Messages:
    2,985
    Likes Received:
    333
    GPU:
    RTX 2080 OC
    All of this is completely right. You're quoting stuff you don't understand. Instruction scheduling has nothing to do with API job submission, and all DX11 hardware can theoretically support DX12 under FL11_x. There is no requirement anywhere in DX12 to be able to run several jobs concurrently - and this is easy to see if you think about Fermi and Kepler, which cannot do this but are still getting DX12 drivers. AMD's VLIW architectures can most certainly calculate wavefronts in parallel, although the granularity of such calculation is admittedly coarse because of inherent VLIW limitations. And they can most definitely switch between tasks; otherwise you wouldn't be able to run anything but a pixel shader on them. Stop talking about things you don't understand. The only reason AMD did not provide DX12 drivers for the 5000/6000 series of Radeons is that they don't have the resources for it.

    Kepler is not a cut-down Fermi in any way. Kepler is the first complete rebuild of NV's double-pumped architecture, which was introduced in Tesla G80. Fermi is closer to Tesla than to Kepler. Maxwell is to Kepler what Fermi was to Tesla, more or less: a feature update with a smaller rebuild of the basic SIMD structures.

    A renderer created for the DX11 API alongside a DX12 renderer is a compatibility renderer, because it is created for systems where the DX12 API is unavailable. If a game requests a feature which isn't present in the h/w, it won't run on that h/w. Nowhere does DX12 say that "if a game supports DX12, it has to run on all feature levels." That's simply bull****. A renderer written for FL12_1 won't run on FL12_0 h/w, because some features will simply be absent from that h/w.
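    The point that feature levels are cumulative batches of features, and that a renderer using the higher batch can't run on the lower one, can be sketched with plain sets. The two extra FL12_1 entries (conservative rasterization, rasterizer-ordered views) are real FL12_1 requirements; the rest of the set contents are simplified placeholders:

    ```cpp
    #include <cassert>
    #include <set>
    #include <string>

    using FeatureSet = std::set<std::string>;

    // Each feature level is the lower level's batch plus extras.
    // FL12_1 genuinely adds conservative rasterization and ROVs;
    // the FL12_0 batch here is simplified for illustration.
    FeatureSet features_of(const std::string& fl) {
        FeatureSet f = {"tiled_resources", "typed_uav_load"};
        if (fl == "12_1") {
            f.insert("conservative_rasterization");
            f.insert("rasterizer_ordered_views");
        }
        return f;
    }

    // A renderer runs only if the h/w exposes every feature it uses.
    bool can_run(const FeatureSet& renderer_needs, const FeatureSet& hw_has) {
        for (const auto& feat : renderer_needs)
            if (!hw_has.count(feat)) return false;
        return true;
    }

    int main() {
        // An FL12_1 renderer fails on FL12_0 h/w; the reverse works.
        assert(!can_run(features_of("12_1"), features_of("12_0")));
        assert(can_run(features_of("12_0"), features_of("12_1")));
        return 0;
    }
    ```
    
    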

    Of course they have. Why wouldn't they? Feature levels are just batches of features, provided to make it easier for developers to target groups of h/w instead of figuring out what is supported by each GPU on the market. An XBO developer can easily write a DX12 FL11_1 renderer which will run on XBO and all FL11_1+ PC h/w. This can be an option for simpler games which don't require advanced features.

    Maxwell is multi-queue and can run several queues (up to 32) at a time. This is the official information we have right now. It may turn out not to be true if NV doesn't release the driver they've promised, but for now this is their official position.
    If you're talking about async compute, then you can't "optimize for Maxwell" in general, as all GPUs in the lineup will behave differently here. The same is true for the current GCN lineup. Async compute on PC is a big question mark at the moment, because you really have to hand-tune for each GPU to get a benefit from it; otherwise you risk stuttering, hitching and an actual loss of performance, even on GCN cards. I'm not sure how many developers will actually be willing to go that far into PC-specific optimizations.
     
    Last edited: Nov 11, 2015
  14. Reddoguk

    Reddoguk Ancient Guru

    Messages:
    1,865
    Likes Received:
    180
    GPU:
    Guru3d GTX 980 G1
    Yeah, true. I noticed that they're now giving better info, because like you said there was NO info at the start.

    Maybe the difference between the two Nvidia architectures means they may need a separate driver in the future, or maybe they're struggling to implement all of the DX12 features in one single driver package.
     
    Last edited: Nov 11, 2015
  15. Stormyandcold

    Stormyandcold Ancient Guru

    Messages:
    5,327
    Likes Received:
    177
    GPU:
    MSI GTX1070 GamingX
    Couldn't this be done automatically with some kind of system profiling app?
     

  16. dr_rus

    dr_rus Ancient Guru

    Messages:
    2,985
    Likes Received:
    333
    GPU:
    RTX 2080 OC
    What would be done automatically? You have to hand-tune the exact load of each shader running concurrently to the exact number of h/w resources available alongside the main graphics thread. This is basically a programming task, not something that can be "profiled" on the fly. You still need sets of shaders optimized for each GPU to select from based on the profiling performed. This will take a lot of resources, which I don't think many devs will be willing to spend on PC versions.
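    The "sets of shaders to select from" idea reduces to a lookup keyed by GPU, with a conservative fallback when no tuned variant exists. A minimal sketch; the GPU codenames are real chips but the table, variant names and function are entirely hypothetical:

    ```cpp
    #include <cstdio>
    #include <map>
    #include <string>

    // Hypothetical per-GPU tuning table: each entry maps a GPU to the
    // shader set whose concurrent-compute load was hand-tuned for it.
    const std::map<std::string, std::string> kTunedShaders = {
        {"GM204",  "maxwell2_variant"},
        {"Hawaii", "gcn2_variant"},
        {"Tonga",  "gcn3_variant"},
    };

    // Pick a tuned set if one exists; otherwise fall back to a safe
    // single-queue path rather than risk stuttering from a bad fit.
    std::string select_shader_set(const std::string& gpu_id) {
        auto it = kTunedShaders.find(gpu_id);
        return it != kTunedShaders.end() ? it->second
                                         : "single_queue_fallback";
    }

    int main() {
        std::printf("%s\n", select_shader_set("GM204").c_str()); // tuned
        std::printf("%s\n", select_shader_set("GF100").c_str()); // fallback
        return 0;
    }
    ```

    The table itself is the expensive part: every entry represents hand-tuning work on real hardware, which is exactly the cost being questioned above.
    
    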
     
  17. Terepin

    Terepin Master Guru

    Messages:
    715
    Likes Received:
    33
    GPU:
    MSI 2070S GAMING X
    I got an update to 10601.
     
  18. RzrTrek

    RzrTrek Ancient Guru

    Messages:
    2,461
    Likes Received:
    676
    GPU:
    RX 580 ❤ MESA 20.0+
    Are you an insider or are you simply receiving those updates automatically through Windows update?
     
    Last edited: Nov 11, 2015
  19. dr_rus

    dr_rus Ancient Guru

    Messages:
    2,985
    Likes Received:
    333
    GPU:
    RTX 2080 OC
    Here's an interesting result for Radeon 380 btw:


    As you can see, it falls outside the general behavior of Radeons in this benchmark and actually loses some performance when running under DX12 - much like the GeForces.

    I'm not saying this is the result of async compute in this benchmark not being optimized for Tonga GPUs, but that may be a possibility.

    (Interesting benchmark in general btw)
     
  20. myshkin

    myshkin Member

    Messages:
    23
    Likes Received:
    0
    GPU:
    GTX 560 ti
    Just posting to say thanks for making this thread, and big thanks also to km52 for testing the 358.70 driver with the Insider preview of 10586, and for actually letting people know WDDM 2.0 works with Fermi on it. :)

    Now that the fall update (10586) has been released and installed, I can confirm that other D3D12 applications work aside from the 3DMark test. I quickly compiled and ran a selection of the official D3D12 sample applications from Microsoft (using the machine in my specs on this Guru3D account). All the sample apps I tried ran fine.
     
    Last edited: Nov 13, 2015
