Discussion in 'Frontpage news' started by Hilbert Hagedoorn, Sep 10, 2018.
https://videocardz.com/77895/the-new-features-of-nvidia-turing-architecture (most likely fake)
I agree 100% with you. If this is going mainstream, it needs both vendors to support it at some level, and most certainly not only on high-end to enthusiast class cards.
All true... but you are speaking of initial orders, not supplementary ones. EVGA, Asus, MSI, and Gigabyte were all apoplectic after the initial run of Vega sold out and miners were dipping down to RX 580s. Each and every one of them put in supplemental orders, to the point that Nvidia ordered additional production runs of Pascal GPUs.
The foundry order is what got Nvidia into trouble... previously Nvidia kept a lid on fab runs for both price and exclusivity. Mining blew that up, and now Nvidia is doing something stupid: launching a new architecture while sitting on excess inventory of the previous one.
The following is pure speculation with no source of record.
I deeply suspect that, due to Navi and 7nm RX Vega (later, but still coming), Nvidia will node-shrink GP104 from 16nm to 7nm. This accomplishes several goals at the same time:
1) a proven architecture on a new node, so less power and higher clock speeds;
2) GP104 is a great design and can achieve economies of scale on 7nm, reducing manufacturing cost by around 40%;
3) a node-shrunk GP104 can outperform GP102 with judicious clocks, at far less power;
4) a node-shrunk GP104 would dominate gaming laptop sales... right now a niche, but at 7nm laptops have a whole new outlook for their power envelope.
And during this whole refresh they can still hold the high ground with RTX... until 7nm RX Vega at the end of next year.
The reason I mention a non-vendor-specific way is that raytracing isn't just some random gimmick like HairWorks. It's simply... lighting... done with one "real" source, the sun. It's something so fundamentally basic to a 3D environment that, in my opinion, having vendor-specific implementations for it is idiotic.
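To back up the "fundamentally basic" claim: the core math of tracing a ray really is just a few lines. Here's a minimal ray-sphere intersection sketch in Python (names and structure are mine for illustration, not from any graphics API):

```python
import math

def ray_sphere_hit(origin, direction, center, radius):
    """Return the distance t along the ray to the nearest sphere hit, or None.

    Solves |origin + t*direction - center|^2 = radius^2 for t,
    a plain quadratic in t.
    """
    ox, oy, oz = (origin[i] - center[i] for i in range(3))
    dx, dy, dz = direction
    a = dx * dx + dy * dy + dz * dz
    b = 2.0 * (ox * dx + oy * dy + oz * dz)
    c = ox * ox + oy * oy + oz * oz - radius * radius
    disc = b * b - 4.0 * a * c
    if disc < 0:
        return None  # the ray misses the sphere entirely
    t = (-b - math.sqrt(disc)) / (2.0 * a)  # nearest of the two roots
    return t if t > 0 else None  # hits behind the origin don't count

# a ray from the origin straight down -z toward a sphere centered at z = -5
print(ray_sphere_hit((0, 0, 0), (0, 0, -1), (0, 0, -5), 1.0))  # -> 4.0
```

The hard part, as the next reply points out, isn't this math; it's doing billions of these tests per second against full scenes, which is where the vendor-specific acceleration structures come in.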
The description of raytracing is that simple; the hardware implementation is not, especially with what's required to get it running in real time at the moment. A complete, 100% universal implementation is not going to happen, but I don't know why you'd need that anyway. Almost nothing works like that under the hood currently: most engines already have multiple code paths, and compilers are adding all kinds of shader intrinsic functions for specific hardware. Microsoft's DXR is defined in a way that keeps the end result near universal (it defines a feature set), but the backend is entirely up to the vendor, so at some level the code is going to diverge.
I do not think it should diverge at the game level. Remember the HAL and what it's for? How is DX any different? DX-to-game communication is where everything should be 1:1, no special magic. That's what DX is for: DX defines the methods, and AMD, nVidia, Intel, etc. connect their drivers to it and adhere to its rules of imports/exports.
Differences should start at the driver level.
It should not matter that DXR was co-developed by MS and nVidia. The DX part should still expose certain features to the game developer and request certain actions from the graphics driver. Saying that exactly this hardware is needed for a feature is not, and never was, part of DX. Mind you, if that were the case, nVidia would not be in business today. (null = dummy implementation)
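The HAL argument above boils down to: the API fixes one interface, and each vendor plugs its driver in behind it. A toy sketch of that shape (all class and method names here are hypothetical, not actual DX types):

```python
class RaytracingBackend:
    """The 'DX side': one fixed interface every driver must implement."""

    def dispatch_rays(self, ray_count):
        raise NotImplementedError


class GreenDriver(RaytracingBackend):
    """Stand-in for a vendor with dedicated RT hardware."""

    def dispatch_rays(self, ray_count):
        return f"RT cores traced {ray_count} rays"


class RedDriver(RaytracingBackend):
    """Stand-in for a vendor emulating the same feature on compute."""

    def dispatch_rays(self, ray_count):
        return f"compute shaders traced {ray_count} rays"


def render(backend):
    # the game only ever talks to the interface, never to the vendor;
    # divergence lives below this call, not above it
    return backend.dispatch_rays(1024)


print(render(GreenDriver()))
print(render(RedDriver()))
```

Both drivers satisfy the same contract, so the game code stays 1:1 with the API while the implementations differ freely underneath; that is the separation being argued for here.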
So, there is a need to calculate some rays based on vectors given by the game/DX, and then do some temporal cleanup. There is no reason to say that even current GPUs can't do that, with certain limitations like lower quality.
(AMD's tessellation slider, nVidia's ±4 texture LOD slider...)
DXR is out there and AMD has access. It should be AMD's call whether they support it on current hardware or not.
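The "temporal cleanup" mentioned above is, at its core, just blending each noisy new frame into an accumulated history per pixel. A toy sketch of that idea (an exponential moving average; a deliberately simplified stand-in for real temporal denoisers, with names of my choosing):

```python
def temporal_accumulate(history, current, alpha=0.25):
    """Blend a noisy new sample into the running history, per pixel.

    alpha is the weight of the new frame: lower alpha means smoother
    results but more ghosting when the scene moves.
    """
    return [h * (1.0 - alpha) + c * alpha for h, c in zip(history, current)]


# noisy 1-ray-per-pixel estimates of a pixel whose true value is 1.0
frames = [[1.4], [0.6], [1.2], [0.8]]
history = frames[0]
for f in frames[1:]:
    history = temporal_accumulate(history, f)
print(history)  # converges toward 1.0 as frames accumulate
```

With few rays per pixel the raw image is noisy; accumulation like this (plus motion reprojection, which is omitted here) is what makes low ray counts usable, and it degrades gracefully, which fits the "lower quality on weaker hardware" point.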
I mean, all of this is fine as an opinion, but it's clearly not the way the guys developing these architectures see it, nor the game developers. Everyone, including AMD, is pushing to bypass the APIs; that was the entire point of Shader Intrinsics, Mantle/DX12/NVAPI/etc. I don't know if that's because it's an optimization problem (for example, Microsoft/Nvidia/AMD don't know how game developers are going to integrate certain features, so a "catch all" with multiple abstraction layers is far less optimal than the devs handling the implementation themselves) or what the reasoning is, but they are all pushing for it and using it (in NVAPI's case, it's been used for years now, even in DX11/OpenGL).
I don't see how DXR is any different. The vendors' implementations to accelerate it are going to be widely different, and the hardware changes made to support DXR can be used for a multitude of other functions.
That's not to mention that Microsoft does have a compute-based fallback layer (which DICE isn't supporting, probably because it's too slow).
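The fallback situation described here follows a standard check-then-dispatch pattern: query what the hardware reports, then pick the fastest supported path. A generic Python sketch of that decision (in real DXR this is a `CheckFeatureSupport` query for the raytracing tier; the function and string names below are hypothetical):

```python
def pick_raytracing_path(hw_tier):
    """Choose a code path based on the hardware's reported RT support.

    hw_tier mimics the idea of DXR tiers: >= 1 means the driver exposes
    hardware-accelerated raytracing, 0 means it does not.
    """
    if hw_tier >= 1:
        return "hardware_dxr"      # vendor's accelerated backend
    return "compute_fallback"      # slower compute-shader emulation


print(pick_raytracing_path(1))  # -> hardware_dxr
print(pick_raytracing_path(0))  # -> compute_fallback
```

A developer can also just skip the fallback branch entirely and gate the feature off, which is effectively what DICE chose to do.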