You're only getting some features of DX12. Some features require new hardware, like Nvidia's 900 series.
As before with DX11 (remember we had some DX10.1 cards from AMD with hardware-accelerated tessellation, making them more DX11-compatible than other DX10 cards). At this point I would imagine that Maxwell supports most of the features, perhaps not the unannounced ones (although I'd imagine it has those as well). As for older Nvidia cards like Kepler and Fermi: each will support low-level access, since that's not a feature, it's a promise, and without it there is nothing that special about DX12 (keep in mind the new features have not been demoed in any way, shape or form until now). Fermi will probably support only low-level access and none of the exclusive features DX12 offers (just DX12 itself plus the features that were already present in DX11), while Kepler should theoretically support at least a few hardware features of DX12.
I get that. The same thing will happen when DX13 comes out, or whatever the newest is. I've seen everyone get excited the same way from DX8 through 12 now. lol
Yes... even USB 1.0 or 2.0 or 2.1... simply try googling for how, but since you are asking, I would not advise it... it is absolutely _NOT_ bug free and MS will push quite a few updates. If you are brave, try it, but be aware that anything you have might get lost. I dropped the test because the December build made it unworkable for me (and there was no way to stop it from updating to it). Way too many bugs for day-to-day usage. On the plus side though, it felt a bit snappier and faster than Win7 or 8.1 - nothing one can really put a finger on.
The article is a quite misleading attempt to describe the end effect of Direct3D 12 in layman's terms.

First, the version number of the API/runtime has no relevance to the hardware features of the GPU anymore, because Direct3D 10.1 introduced the concept of feature levels, and Direct3D 11 extended feature levels down to D3D9-class hardware. All GPUs produced since 2003 - cards like the GeForce 6000 or Radeon X600, laughable by today's performance standards - are supported by the main Direct3D 11 API and runtime. You can program these older cards with a subset of the Direct3D 11 API and scale performance from low end to high end without reverting to Direct3D 9.

What is relevant is the GPU's feature level, which encapsulates things like the number of hardware registers, the shader language model and processing stages, etc. Each upper level is a strict superset of any lower level and often includes features which were once optional on lower levels. There are currently 7 such feature levels - 9_1, 9_2 and 9_3 for various 9.0a/b/c cards (10level9), and levels 10_0, 10_1, 11_0 and 11_1 for more modern cards. Again, the API is still the same - Direct3D 11, a superset of Direct3D 10. There is no level 11_2, but confusingly the Direct3D 11.2 API/runtime introduces CAPs (capability bits) again for some important optional features which are not made part of any level. (This is how marketing folks at Nvidia always get away with not really supporting the 10_1 and 11_1 feature levels - they claim full support for the Direct3D 10.1, 11.1 and 11.2 APIs instead, which is actually true for any card as old as 2003, so it's a clever way of saying nothing informative.)

Direct3D 12 is expected to be very similar in this respect - though the 12 API and runtime require level 11_0 hardware, starting with the Radeon HD 7000 (GCN 1.x) and GeForce 400 (Fermi, then Kepler and Maxwell).
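The feature-level mechanism described above boils down to "request a list of levels, highest first, and take the best one the hardware can do". Here's a minimal sketch of that negotiation in plain C++ - the enum values and the FakeGpu type are made-up stand-ins, since in the real API this work is done by D3D11CreateDevice and its pFeatureLevels array:

```cpp
#include <vector>

// Hypothetical stand-ins for the real D3D_FEATURE_LEVEL constants
// (the real enum lives in the Windows SDK headers).
enum FeatureLevel {
    FL_9_1 = 0x9100, FL_9_2 = 0x9200, FL_9_3 = 0x9300,
    FL_10_0 = 0xA000, FL_10_1 = 0xA100,
    FL_11_0 = 0xB000, FL_11_1 = 0xB100,
};

// Simulated GPU: reports the highest level it supports.
struct FakeGpu { FeatureLevel maxLevel; };

// Walk the requested levels from highest to lowest and take the first
// one the hardware can do -- the same negotiation D3D11CreateDevice
// performs with its pFeatureLevels array.
FeatureLevel NegotiateLevel(const FakeGpu& gpu,
                            const std::vector<FeatureLevel>& requested) {
    for (FeatureLevel fl : requested)
        if (fl <= gpu.maxLevel)
            return fl;
    return FL_9_1;  // lowest common denominator
}
```

A 2004-era Radeon X600 class card would land on 9_2 from a full request list, while a Kepler card lands on 11_0 - same API, different feature level, which is exactly the point of the post above.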
How the proposed new hardware features will be presented is unknown yet - probably as a feature level 11_3 (since there will be a Direct3D 11.3 runtime, more on that later), but it could be another set of optional CAPs. So all this talk about "you certainly need D3D12-level hardware to fully exploit blah blah blah" is misinformed, and of course marketing people wouldn't mind you buying a new graphics card. The truth is, current Direct3D 11 cards should work very well with Direct3D 12 (including the XBox One, which has a Radeon HD 77x0-level GPU inside). There are certainly no 12_0-level cards on the horizon - no SM6.0 with additional shader stages, no realtime raytracing, nothing like that - and the planned additional hardware features in Direct3D 12 hardly warrant a new feature level 12_0. So today the only reason for an upgrade would be improvements in graphics performance, not some minor new features.

Now, the Direct3D 12 API is quite revolutionary in itself, since Microsoft basically takes the latest 11_0 GPUs and makes them the lowest common denominator, throwing away most of the hardware abstraction layers present since the early days of Direct3D. So now there are no generalized surface formats, no type checking, no automatic resource/memory management in the API, etc. The driver and the runtime are as close to the actual shader ALUs, the scheduler and the onboard memory as is practically possible, with far fewer intermediate layers. This should free up a lot of additional CPU time to spend on your actual game logic, if you can handle the low-level details. Which means two things.

First, there is a completely new driver model, WDDM 2.0, which seems to feature only a user-mode driver. The kernel-mode component is either eliminated completely, or generalized and moved into the OS kernel. The driver and the API are "immutable" - i.e.
basically stripped of any read-modify-write operations, such as texture format conversion, which can stall inter-process synchronization - and that allows true multicore rendering, where each of multiple CPU cores can process its own command stream, unlike in Direct3D 11.

Second, the existing Direct3D 10/11 runtime is still there for compatibility purposes and basically becomes a higher-level API for those who do not want or need the low-level access provided by Direct3D 12 and still prefer the convenience of automatic resource management, however non-optimal from a performance point of view. Which means that any new hardware features will need to be implemented at the Direct3D 11 level as well, since many developers will still be working with it for the foreseeable future. Hence Windows 10 will have a Direct3D 11.3 API/runtime, and my expectation is there will either be a level 11_3 or more optional CAPs at level 11_1, and not a separate level 12_0, at least initially. That again makes all this talk about non-existing "DX12 hardware" (i.e. "level 12_0") seem like total bul****. Hope it makes sense.
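The multicore rendering point can be sketched without any D3D at all - the essential shape is "each thread records into its own command list with no shared state, then the main thread submits them in a fixed order". Everything here (CommandList as a vector of strings, RecordScenePart) is a toy stand-in for the real D3D12 command-list objects:

```cpp
#include <string>
#include <thread>
#include <vector>

// Toy "command list": each worker thread records into its own list,
// touching no shared state, and the main thread submits them in order
// -- the model D3D12 enables, versus D3D11's single immediate context.
using CommandList = std::vector<std::string>;

CommandList RecordScenePart(int part) {
    CommandList cl;
    cl.push_back("set_state_part" + std::to_string(part));
    cl.push_back("draw_part" + std::to_string(part));
    return cl;
}

// Record `parts` command lists in parallel, then "execute" them in
// submission order on a stand-in for the GPU queue.
std::vector<std::string> RenderFrame(int parts) {
    std::vector<CommandList> lists(parts);
    std::vector<std::thread> workers;
    for (int i = 0; i < parts; ++i)
        workers.emplace_back([&lists, i] { lists[i] = RecordScenePart(i); });
    for (auto& t : workers) t.join();

    std::vector<std::string> submitted;  // the "GPU queue"
    for (const auto& cl : lists)
        submitted.insert(submitted.end(), cl.begin(), cl.end());
    return submitted;
}
```

The recording order across threads is arbitrary, but the submission order is deterministic - which is why stripping the API of read-modify-write operations (so recording threads never contend) is what makes this scale.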
Bring on the beta driver AMD/Nvidia. I am downloading now. The testing will commence, ETA 16 ::Macrosoft:: minutes.
Windows 8 drivers run fine in 10. As for DX12, there are no apps that support the API yet, so I don't think there is much of a point.
I've in the past been very much against DX12 being Win10-only. I'm heavily in favor of mass adoption of DX12; I would love a PC gaming future where we get the most out of our hardware just like devs do with consoles. It still feels frustrating that the huge user base of Win7 won't be getting DX12, which will slow devs' use of the API, but the fact that the new OS will be a free upgrade for these people is at least a good middle ground for what I want. I'll take it. Not quite a full 360, more like a 180.
A 360 would have you going in exactly the same direction as you were before. A 180 exactly the opposite. So, maybe you're somewhere in between 90 and 180 degrees.
Did you read the entire thread from start? Or did you just read the last 2 posts? Your answer is in the thread.
Of course they are, it's the same architecture as the XBox One, which is getting Direct3D 12 as well. The language used makes me really doubt that the GTX 980 supports any of these new proposed "hardware" features, and they are only minor tweaks on top of level 11_1 anyway:

- conservative rasterization
- volume tiled resources
- rasterizer ordered views
- typed UAV loads

See also:

Nvidia’s ace in the hole against AMD: Maxwell is the first GPU with full DirectX 12 support
http://www.extremetech.com/computin...is-the-first-gpu-with-full-directx-12-support

Most DirectX 12 features won’t require a new graphics card
http://www.extremetech.com/gaming/198204-most-directx-12-features-wont-require-a-new-graphics-card
The Maxwell (9xx) series supports those features at the hardware level through DX11.3: http://www.anandtech.com/show/8526/nvidia-geforce-gtx-980-review/4 Some of the features are supposed to be available at a software level like you suggested - tiled resources, for example, was demonstrated on a 770. I'm curious to know whether all of 12's features will be available to 11.3 (except the low overhead), or whether they plan on adding more hardware-oriented feature levels beyond that (and whether those could still be supported on DX11 cards). Only time will tell.
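For what it's worth, the way an engine actually finds out about these optional features at runtime is a caps query, along the lines of ID3D12Device::CheckFeatureSupport with D3D12_FEATURE_DATA_D3D12_OPTIONS. The struct fields, tier values, and the two "GPU profiles" below are hypothetical stand-ins to show the shape of the check, not real reported values for any card:

```cpp
// Sketch of a caps-query result, modeled loosely on the options struct
// a CheckFeatureSupport-style call would fill in. Field names and tier
// numbers here are illustrative assumptions, not the real API.
struct FeatureOptions {
    int  conservativeRasterTier;        // 0 = unsupported
    int  tiledResourcesTier;            // higher tiers add volume tiled resources
    bool rasterizerOrderedViews;
    bool typedUavLoadAdditionalFormats;
};

// Two made-up hardware profiles, per the thread's claims: a card that
// reports all four new features vs. one that only has basic tiled
// resources. Check real hardware with the real API, not this.
FeatureOptions QueryCaps(bool newerCard) {
    if (newerCard)
        return {1, 3, true, true};
    return {0, 2, false, false};
}

bool SupportsAllFourNewFeatures(const FeatureOptions& o) {
    return o.conservativeRasterTier > 0 &&
           o.tiledResourcesTier >= 3 &&
           o.rasterizerOrderedViews &&
           o.typedUavLoadAdditionalFormats;
}
```

The point is that the answer comes per-feature from the driver, not from the DX version number on the box - which is exactly the CAPs-vs-feature-level distinction DmitryKo described above.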
Bravo DmityKo, I never knew all of that! =] That's very interesting to know about DX12, even funnier that it all just seems like a marketing ploy from Nvidia and AMD now to push new DX level cards.
Other reviews, like the ExtremeTech piece above, do not really make such bold statements, so I'd leave some room for errors of judgement. Software emulation is not possible here - tiled resources require virtual memory page tables and TLBs in each texturing unit, and if the hardware support is not there, it can't be implemented in the WDDM driver. Since Direct3D 11 and Direct3D 12 will go in parallel for some time, I'd actually expect any new hardware features to remain available in Direct3D 11. Microsoft can introduce feature level 12_0 in D3D11 or level 11_3 in D3D12 - the numbers don't really matter, as current 11_x cards are fine with either version of the API. It's not about a marketing ploy from GPU makers, it's about how the media never gets the little details right.
Can't pretty much any feature be done through software, albeit at a great performance cost? I also hope the Xbox One will drive DX12 adoption, although I guess the same could have been said about DX11.2 (since the Xbox One supposedly had a specialized DX11.2).
Nope. At the lowest level, you have a few thousand SIMD (ALU) units with their own instruction set and register file, and a few dozen memory lookup units (TMUs). The user-mode driver compiles shader bytecode into machine code for that particular architecture. If a particular ALU instruction doesn't support a required combination of registers, or a required number of operands, or a needed data format which is critical for some shader-language feature, there is nothing you can do about it. The same goes for virtual memory - you have to have virtual page tables and memory caches in the TMU. Theoretically you can do data conversion on the fly and emulate missing instructions with inline macros, but it would be a dozen times slower - think of emulating AVX instructions on older x86 CPUs, or the numerous x87 emulators of the i80386 era. It's far more effective to let the application choose an optimal code path based on the actual capabilities of the hardware.
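That last point - branch on the actual caps instead of emulating - is the standard pattern. A toy sketch in plain C++; the Caps struct and the 1-vs-3 per-element operation counts are illustrative assumptions to show the cost gap, not measured numbers for any GPU:

```cpp
// Choose a code path from queried capabilities: a "typed load" done
// natively (one operation) vs. emulated through a conversion step
// (several operations), the way emulating a missing instruction with
// an inline macro multiplies the cost.
struct Caps { bool typedUavLoad; };  // hypothetical capability bit

// Process `elements` items; report how many "GPU operations" the
// chosen path would issue via opsOut. Returns elements processed.
int ProcessBuffer(const Caps& caps, int elements, int* opsOut) {
    int ops = 0;
    for (int i = 0; i < elements; ++i) {
        if (caps.typedUavLoad)
            ops += 1;   // fast path: native typed load
        else
            ops += 3;   // fallback: raw load + format convert + store
    }
    *opsOut = ops;
    return elements;
}
```

Both paths produce the same result for the application; only the cost differs - which is why exposing the capability bit and letting the app decide beats transparent emulation in the driver.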