Discussion in 'Frontpage news' started by Hilbert Hagedoorn, Feb 27, 2020.
Why no Quadros?
HBM isn't owned by AMD. It was developed with SK Hynix. AMD has no control over who uses it.
FreeSync is just the name AMD gave to their implementation of Variable Refresh Rate. VESA calls it Adaptive-Sync. Adaptive-Sync was adopted by VESA and added to the DP 1.2a standard. If you want to get technical, NVidia doesn't support AMD's FreeSync, nor any FreeSync monitor. NVidia is supporting VESA's DP 1.2a interface standard.
AMD doesn't benefit directly from Adaptive-Sync. Conversely, NVidia does benefit directly from CUDA.
AMD isn't a memory manufacturer. They needed one to actually get things done outside of a laboratory. I never said they have control. If they did, would Nvidia be using it? However, Nvidia seems to have no compunctions about using something AMD developed.
Reread what I wrote. And of course they both benefit from adaptive sync. It's a recognised technology valued by lots of gamers, and gamers need good GPUs. The huge pool of adaptive sync screens would be a small pool without AMD, because Nvidia wants you to pay 100 bucks extra for the small module inside, in addition to the other technology the screen must contain (like a sufficient panel). AMD doesn't. Consequently, if a gamer wanted adaptive sync, they needed to either pay significantly more for an Nvidia video card + expensive G-Sync screen, or less for an AMD GPU + any random adaptive sync screen. Since AMD GPUs have been less desirable for a while now, the price difference on an adaptive sync package would have worked to compensate. Now Nvidia finally allows people to use a non-G-Sync screen for adaptive sync as well, so people can go the Nvidia way without paying as much.
I'm using an RX470 and the setting is there. (COMPUTE GPU WORKLOAD AMD)
Not sure about Vega though.
If you go to the cog on the right side, click it, then click Graphics and scroll down to Advanced; it should be there under GPU Workload.
Worth a look.
There is less chance he will miss your post if you reply to him.
That setting isn't available to everyone.
I have an RX5700 and the setting doesn't exist for me.
It's quite possible that the setting only exists for certain GPUs....
There are dozens of technologies Nvidia shared and contributed that AMD utilizes. There are also examples of technologies that AMD/ATi developed that Nvidia couldn't use. I don't think pointing the finger at one and saying it's worse means much when you start stacking everything up.
Unity targets a different audience than Blender, V-Ray, Indigo Renderer and other renderers. As you know, Unity is mostly used for game development, although it can also be used for archviz or product visualisation.
There is no standard test scene for Unity that most reviewers could use, and the same applies to Unreal.
Not sure about the GTX 1080 test; I can only comment on my own tests, all rendering the Blender Classroom scene:

Blender Cycles:
- Asus RTX 2080 Ti Strix, with OptiX: 75 seconds
- Asus RTX 2080 Ti Strix, without OptiX: 2 minutes 19 seconds
- 4 GPUs (RTX 2080 Ti Strix, GTX 1080 Ti, 2x GTX 1080), without OptiX: 42-45 seconds

E-Cycles, same scene:
- Asus RTX 2080 Ti Strix, without OptiX: 1 minute 44 seconds
- 4 GPUs (RTX 2080 Ti Strix, GTX 1080 Ti, 2x GTX 1080), without OptiX: 32-36 seconds
- Two GTX 1080s: 1 minute 29 seconds
Hope this helps
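For a rough sense of scale, the single-card Classroom times quoted above work out to the following speedups (a quick sketch using only the numbers in the post, converted to seconds):

```python
# Speedup comparison for the Blender Classroom times quoted above
# (single RTX 2080 Ti Strix; times converted to seconds).
times = {
    "Cycles, no OptiX": 139,    # 2 min 19 s
    "Cycles, OptiX": 75,
    "E-Cycles, no OptiX": 104,  # 1 min 44 s
}

baseline = times["Cycles, no OptiX"]
for name, t in sorted(times.items(), key=lambda kv: kv[1]):
    print(f"{name}: {t} s ({baseline / t:.2f}x vs plain Cycles)")
```

So OptiX alone is worth roughly a 1.85x speedup on this scene, and E-Cycles about 1.34x, going by these figures.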
A lot of indie developers, who use Blender for example, also use Unity. All of these are just dev tools with GPU accelerated lighting. There is no reason to exclude Unity from the comparison.
Benchmarking Unity's GPU lightmapper is not hard. Just download a free scene (or create a simple one from boxes) and make sure the lighting bake takes at least 30 seconds. In my experience the internal lighting statistics are pretty good for benchmarking. My GTX 1080's peak performance is always 280 Mrays/s ± 2 Mrays/s.
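If you want to judge run-to-run stability the same way (like the ± 2 Mrays/s figure above), a tiny helper like this works. The readings in the example are made up for illustration, not real measurements:

```python
from statistics import mean

def summarize_peaks(peaks_mrays):
    """Mean peak throughput and worst-case deviation across repeated bakes.

    peaks_mrays: list of peak Mrays/s readings, one per bake run,
    read off the lightmapper's internal lighting statistics.
    """
    m = mean(peaks_mrays)
    spread = max(abs(p - m) for p in peaks_mrays)
    return m, spread

# Example with illustrative readings:
avg, spread = summarize_peaks([278, 280, 282])
print(f"{avg:.0f} Mrays/s +/- {spread:.0f}")
```

A small spread across several bakes of the same scene is a good sign the number is usable for comparing GPUs.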
I am not so sure it makes sense for Nvidia to allow anyone else to use the CUDA API because I do not know where they would see a benefit from doing so. The only thing that would happen is they could potentially lose market share. As I wrote, if Google wins their case against Oracle that becomes a moot point. Then the question becomes does AMD (or a third party) want to support CUDA on their GPUs? Conversion tools are not the same as supporting it because the conversion is not 100%. It is a one-time thing and then you have to tweak things here and there. This means, in the end, a developer has to choose a direction. There is a LOT of infrastructure involved for full support and it will be very interesting to see what AMD does.
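To make the point about conversion tools concrete: AMD's hipify tools do a largely mechanical source-to-source rename of CUDA runtime calls to their HIP equivalents. The toy sketch below shows only that renaming step (it is not the real tool, which handles far more cases); the API-name pairs are real hipify mappings:

```python
# Toy illustration of CUDA -> HIP source conversion. The real tools are
# AMD's hipify-perl / hipify-clang; this sketch only performs the
# mechanical renaming step on a source string.
CUDA_TO_HIP = {
    "cuda_runtime.h": "hip/hip_runtime.h",
    "cudaMalloc": "hipMalloc",
    "cudaMemcpy": "hipMemcpy",
    "cudaFree": "hipFree",
    "cudaDeviceSynchronize": "hipDeviceSynchronize",
}

def toy_hipify(source: str) -> str:
    """Rename known CUDA runtime identifiers to their HIP equivalents."""
    for cuda_name, hip_name in CUDA_TO_HIP.items():
        source = source.replace(cuda_name, hip_name)
    return source
```

Anything a converter cannot map still has to be ported by hand afterwards, which is exactly why conversion is a one-time migration rather than ongoing support.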
Sure. I wasn't really writing the comment from such a rational point of view. AMD could have kept HBM under a heavy protection and licensing scheme, but it's possible it would never have progressed anywhere like that, becoming another RDRAM. Hynix and others probably wouldn't have started producing it if there weren't enough customer potential (which AMD alone isn't). Nvidia could only go for closed technologies because of its highly dominant position. But even so, there seemed to be a limit with adaptive sync.
The really big difference between the scenarios here is that HBM is a hardware thing, and hardware requires even more infrastructure support, especially if you want affordable parts. What is rather ironic is that software is probably the key to Nvidia's success in the HPC and AI arenas at the moment, and they give it all away for free. Several reports say AMD's GPUs are superior in OpenCL and OpenACC performance, so obviously there is a little more to it.
HBM was introduced by AMD and Hynix, so I doubt any licensing scheme or protection could have materialized, since both companies belong to JEDEC. Though Hynix did agree to let AMD produce the first GPU with HBM in 2015. Samsung introduced HBM2 in 2016, and it was first used in Nvidia's Tesla line.
Ah, OK. I'm using Polaris 20 (RX580); maybe Navi doesn't need this switch and just does it automatically.