Discussion in 'Frontpage news' started by Hilbert Hagedoorn, Jan 17, 2019.
Feel free to provide links at any point instead of just shitposting.
Yeah, it's Vega 64 cranked up, so yes, there is more. Its 1:16 FP64 performance is greater than the 2080 Ti's, and FP16 half precision is nearly a tie with the 2080 Ti. AMD cards have had great compute for years that hasn't necessarily been leveraged appropriately in games, which is exactly why they have performed well in other GPU use cases. If they can leverage their extra compute they might deliver something DLSS-like nicely. We'll see, but to say that there is no extra compute in Radeon VII because it is very much Vega is very wrong. Where do you think all that extra performance is coming from, anyway?
FP64 performance is useless outside of supercomputer applications. RVII won't be used in supercomputers for sure.
Nearly a tie with what? All Turings support double-rate FP16 on the main SIMDs, and all Turings so far have tensor cores which perform FP16 matrix multiplications for DL.
No, they can't. Turing uses tensor cores for DLSS, and the 2080 Ti has 114 TOPS of FP16 DL performance on tensor cores alone. That's about four times more than RVII's peak FP16 performance, even before accounting for the fact that RVII would need those same SIMDs for general compute too.
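A back-of-the-envelope sketch of where that "about four times" figure lands (the clock and stream-processor counts here are assumptions for illustration, not measured numbers):

```python
# Rough peak-throughput math (assumed specs, purely illustrative):
# Radeon VII: 3840 stream processors, ~1.8 GHz boost,
# FMA counts as 2 ops, packed FP16 doubles the rate.
rvii_fp16_tflops = 3840 * 1.8e9 * 2 * 2 / 1e12   # ~27.6 TFLOPS

# RTX 2080 Ti tensor cores: ~114 TOPS FP16 (the figure quoted above).
turing_tensor_tops = 114

ratio = turing_tensor_tops / rvii_fp16_tflops
print(f"RVII packed FP16 peak: {rvii_fp16_tflops:.1f} TFLOPS")
print(f"Tensor-core advantage: ~{ratio:.1f}x")
```

With those assumed numbers the ratio comes out at roughly 4x, matching the claim above.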
For gaming there's nothing really new in RVII compared to Vega 56/64, neither in graphics features nor in compute.
Clocks and memory bandwidth, of course.
You might be surprised what Radeon 7 ends up in. The Vega 20 design is actually very much for professional use. AMD decided they could score some extra money selling it to gamers as well.
Guess we'll see how AMD performs DLSS-type duties in the future. We're both only speculating at this point. I bet they can leverage something in the existing architecture. Sure, Nvidia is using tensor cores, but that doesn't mean AMD can't use DirectML with the existing architecture successfully. A lot of performance and visual changes/improvements happen outside of hardware; a lot of what a GPU can do is defined by drivers and software. It wasn't even a year ago that we thought tensor cores were for deep learning duties, not something to be leveraged by gamers, and now look how deep learning has been applied to gaming. Sure, Radeon VII is nothing new on the surface, but that doesn't mean it can't be utilized differently.
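For what it's worth, the core operation a DLSS-style network boils down to is FP16 matrix multiplication, which any FP16-capable hardware can execute; tensor cores just make it much faster. A toy sketch (NumPy on the CPU standing in for the GPU, with made-up layer sizes):

```python
import numpy as np

# Toy "upscaler layer": a half-precision matrix multiply plus bias,
# the kind of building block a DirectML-style inference path dispatches.
# The sizes here are arbitrary; real networks are far larger.
rng = np.random.default_rng(0)
activations = rng.standard_normal((64, 128)).astype(np.float16)
weights     = rng.standard_normal((128, 256)).astype(np.float16)
bias        = np.zeros(256, dtype=np.float16)

out = activations @ weights + bias   # FP16 in, FP16 out
print(out.shape, out.dtype)
```

Nothing here requires tensor cores; the argument above is only about how fast such layers run, not whether they can run at all.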
Higher clocks resulted in more compute...
Radeon VII won't end up anywhere but in Radeon VII.
Nvidia should invest more time in developing tools to improve game AI; this is also something AMD could do, though I'm not sure how they compare to Nvidia in tech and knowledge. AMD needs to make something in the future. They could work together on this, since they could both benefit from it, though I'm guessing Nvidia would make it proprietary to their hardware, like PhysX etc. Then again, these two companies working together on this is probably very unrealistic.
I do think Nvidia is missing an opportunity though. Instead of mainly focusing on raytracing or DLSS they should use their know-how on this matter. There could be a revolution in gaming AI, and if it weren't locked 100% to Nvidia hardware it could be awesome.
It's off topic, but is there any info regarding next-gen AI that can be used in games that's an interesting read? I haven't paid much attention to this matter for some time now, and for all I know they have something in the works now that tensor cores etc. are available to us gamers. Even a low-end RTX card must have some potential regarding AI?
Radeon VII will very likely find its way into non-gaming systems, as it offers very good compute at $700 for those that can utilize it. I guess you just like trolling.
Additionally, Radeon VII is Vega 20, and Vega 20 will be used for more than Radeon VII. Radeon VII will also very likely find its way into non-gaming systems, as it offers very good compute at the $700 price point for those that can utilize it.
Radeon VII can find its way anywhere, but it will still be Radeon VII, and because of this it won't be used in HPC applications where its FP64 performance would be usable. I guess you're just not too bright.
I guess you just think you have the world figured out... Conversation with you was pointless from the beginning.
Sure, me saying obvious things about Radeon VII means that I have the world figured out. L for logic.
@dr_rus & @Jayp : Guys, that "L" thing you have here... kind of reminds me of "The L Word". Read names of Episodes as a hint.
Hey people. Stop it. Only warning.
Is it possible with MS DXR to assign one GPU to ray-tracing calculations while the primary GPU handles the gaming, like PhysX in the old days?
I'm asking whether the MS DXR API can support this task.
So in the future you could get a GPU like a Vega 64/1080 Ti and add an RX 580 for ray-tracing to reduce the performance impact?
Sure. An RX 580 won't reduce anything though; it's not capable of running any noticeable amount of raytracing in real time.
OK, so let's say 2080 Ti + 2080 Ti for 4K high-FPS gaming, if you just won the lottery.
Just asking from the code perspective, to see if in the future we're going back to multi-GPU rigs.
You'd lose BVH performance, as it shares some data with raster, so you'd either need to redo everything on the 2nd GPU or copy that data over (essentially starting RT later in the frame), but you'd gain some trace performance (you start later, but the actual cost of RT in terms of total time is less).
If you have an entirely raytraced scene you'd probably see some benefit, as the time to frame-out depends entirely on the ray calcs and less so on the BVH, but as it currently stands BVH generation seems to take a significant portion of frame time (a lot of DICE's optimizations focused on simplifying the BVH and starting it earlier).
I think it would be a lot of work from a development standpoint for relatively little gain in performance on current workloads. I also agree with dr_rus that you wouldn't see much on a 580 as the time to sync the data outweighs any performance advantage you'd see in RT calc with that card.
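To make the tradeoff above concrete, here's a toy frame-time model; every millisecond figure is an invented assumption, purely for illustration of how the copy cost shifts the break-even point:

```python
# Toy frame-time model for offloading RT to a second GPU.
# All costs below are made-up numbers, not measurements.
raster_ms    = 8.0   # raster work on GPU 1
bvh_build_ms = 2.5   # BVH construction (shares data with raster)
trace_ms     = 6.0   # ray traversal + shading
copy_ms      = 3.0   # transfer of scene/BVH input data to GPU 2

# Single GPU: raster runs, then BVH + trace share the same chip
# (ignoring any async-compute overlap, to keep the model simple).
single_gpu = raster_ms + bvh_build_ms + trace_ms

# Two GPUs: GPU 2 waits for the copy, then builds and traces while
# GPU 1 rasters; the frame ends when the slower side finishes.
two_gpu = max(raster_ms, copy_ms + bvh_build_ms + trace_ms)

print(f"single GPU: {single_gpu:.1f} ms, dual GPU: {two_gpu:.1f} ms")
```

With these numbers the second GPU wins, but raise `copy_ms` (a slow card or link) and `two_gpu` quickly exceeds `single_gpu`, which is exactly the RX 580 objection above.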
Not with fake raytracing, which is shader-based. With real raytracing done purely through compute? Easy.
But those pixels which are to be raytraced are part of the scene, not some special list.
But SLI/CFX will work. The thing is that with this "hybrid raytracing" all cards need the same scene and data, which may run into memory restrictions (and card-to-card data transfer limitations).
@dr_rus : The thing here is that while tensor cores are not part of the shaders, RT is. Therefore offloading the task to another GPU would help.
It's a lot easier to parallelize raytracing than rasterization, so the more raytracing a game uses, the better it will be able to utilize several GPUs.
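The reason it parallelizes so well is that rays are independent of each other. A toy illustration, partitioning scanlines across workers (a thread pool standing in for multiple GPUs; the "tracing" itself is a dummy per-pixel formula):

```python
from concurrent.futures import ThreadPoolExecutor

WIDTH, HEIGHT = 8, 8

def trace_row(y):
    # Stand-in for tracing one scanline: each pixel ("ray") is computed
    # independently, with no data shared between rows.
    return [(x * y) % 256 for x in range(WIDTH)]

def render(workers):
    # Each worker stands in for one GPU taking a slice of the screen.
    with ThreadPoolExecutor(max_workers=workers) as pool:
        # map() preserves row order, so stitching slices back is trivial.
        return list(pool.map(trace_row, range(HEIGHT)))

image = render(workers=2)
print(len(image), len(image[0]))
```

Because no row depends on any other, the result is identical however many workers split the screen; rasterization, by contrast, funnels all geometry through shared framebuffer state, which is what makes it harder to divide.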
Not sure what you mean by "shaders". Both RT and tensors are used through shaders.
Tensors are additional, separate transistors outside the shaders => shader code and tensor code can be executed simultaneously. RT is part of the shader units and can't be executed at the same time.
Secondly, DXR is not true raytracing; it is just more advanced shader code.
Both RT cores and tensor cores are present inside the same streaming multiprocessor as main SPs/SIMDs (you really shouldn't call them "shaders") and all three are built from separate transistors. Theoretically all three units should be able to run in parallel but on Turing this is limited (by available intra-SM bandwidth and data paths most likely) in such a way that RT and SIMDs can run in parallel but tensors and SIMDs can't (hence this frame graph).
Shader code is any code, and any code can be "true" raytracing code or not. DXR can be used for full path tracing (which is what I assume you mean by "true raytracing") just as well as for hybrid raster+RT solutions (like that of BFV); it all depends on the code. DXR has no limitations here.