Discussion in 'Videocards - AMD Radeon Drivers Section' started by PrMinisterGR, Aug 17, 2015.
Find the link
Full article: http://www.pcper.com/reviews/Graphics-Cards/DX12-GPU-and-CPU-Performance-Tested-Ashes-Singularity-Benchmark
That's another Gaming Evolved title, so the point is well taken, but we get the undertones.
One question: Does AMD even consider adding proper Vsync/TripleBuffering/RenderAhead/FrameLimiter controls, or not? It will really influence my next purchase. Thanks.
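For reference, one of the controls asked about (a frame limiter) is conceptually simple: pace frame presents to a fixed interval. A minimal sketch below; the 60 FPS target is an example value, and a real driver-level limiter works on GPU flip queues rather than a sleep loop like this.

```python
# Toy frame limiter: sleep out the rest of each frame's time budget.
# Illustrative only; not how AMD's (or any) driver actually implements it.
import time

TARGET_FPS = 60
FRAME_TIME = 1.0 / TARGET_FPS

frames = []  # timestamps of each "presented" frame

def render():
    frames.append(time.perf_counter())  # stand-in for actual rendering work

next_deadline = time.perf_counter()
for _ in range(5):
    render()
    next_deadline += FRAME_TIME
    sleep_for = next_deadline - time.perf_counter()
    if sleep_for > 0:
        time.sleep(sleep_for)  # wait out the remainder of the frame budget
```

Pacing against an absolute deadline (rather than sleeping a fixed amount after each frame) keeps slow frames from permanently shifting the schedule.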
Every new game out there has treated this CPU better and better. It won't be faster than overclocked i5's, but it will be faster than the crippled i5 you might get at the same price. The total price of the platform is lower too.
Nvidia has had "Game Ready" drivers for AotS for a week now. These are the drivers everybody tested with. As for the rest, the overhead thread is where you need to go.
Nvidia knows that GameWorks is a pain in the ass for AMD users, but now they've been bitten hard in the ass themselves, and Nvidia is saying Ashes of the Singularity is not a real DX12 benchmark. LMAO.
Wait for their "real benchmark" with hacked results to look cool for more info.
PS: Was it back in the "ATI 9800 vs nVidia 580" days the last time they cheated on benchmarks?
You mean like quack.exe and the FP16 demotion? Both sides cheat. Get over it.
Nvidia calling AotS a bad test is dumb, I agree, but the results aren't even impressive. All they show is that AMD's DX11 driver is garbage under huge draw-call loads, and that Nvidia can't find any additional performance in DX12.
In most titles DX12 won't even have an impact unless you're on a low-end processor.
I actually wonder if they can "cheat" now in the benchmarks.
What is cheating? Optimizing for the specific game? As long as they aren't impacting image quality in a negative way, it should be fine, no?
I would be incredibly surprised if the DX11 Nvidia drivers were at the level that DX12 offered little to no benefit.
Why would Nvidia be spending any time/money on it?
The whole point of the DX12 driver is that it is "thin": the manufacturer is not supposed to intervene directly in game code. That's the whole point of low-level programming in the end. If you read the Oxide blog, they said that NVIDIA suggested some changes to their shader code, and Oxide incorporated them. In the olden days, NVIDIA would simply have replaced the shaders in the driver.

The problem with that kind of "cheating" is that different developers get different treatment from the driver. If you make app A, which is famous, and NVIDIA optimizes your shader code in their driver, that screws the developer of the less famous app B, who will never have a team of experts writing code for them. The driver would behave differently towards each application; it created two different classes of apps.
DX12 and Vulkan are supposed to be the end of this.
Optimizing the driver itself (scheduling within the GPU etc), is necessary and good, but the rest...
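The driver-side shader replacement described above can be caricatured in a few lines. This is purely illustrative: a real driver matches compiled shader bytecode inside its compiler, and every name here is made up.

```python
# Toy model of app-specific shader swaps: shaders the driver recognizes
# (famous app A) get a hand-tuned replacement; everything else (app B)
# falls through unchanged. Hypothetical names throughout.
import hashlib

OPTIMIZED = {  # hash of a known shader -> hand-tuned replacement
    hashlib.sha256(b"app_a_shader_v1").hexdigest(): b"app_a_shader_tuned",
}

def compile_shader(source: bytes) -> bytes:
    key = hashlib.sha256(source).hexdigest()
    return OPTIMIZED.get(key, source)  # unknown shaders get no special treatment

assert compile_shader(b"app_a_shader_v1") == b"app_a_shader_tuned"
assert compile_shader(b"app_b_shader") == b"app_b_shader"
```

The lookup table is exactly the "two classes of apps" problem: whether your code gets the fast path depends on whether the vendor bothered to put you in the table.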
That's a testament to the speed of the NVIDIA DX11 driver. I don't understand why NVIDIA and some NVIDIA fanbois are feeling bad about this.
You were getting 100% of your hardware from DAY ONE. If anything, the people with Kepler cards and AMD users should be the ones who are upset.
I definitely don't think 100% would be a bad thing, I'm just not convinced it is 100%.
As previously mentioned, it would be nice to see some CPU/GPU usage stats.
I was referring to what can be done with DX11. It is nice as an NVIDIA owner to know that you get most of what your card can do, the moment you get it (see 980 DX11 performance). It's not so nice when NVIDIA changes architecture with every iteration, and you suddenly find that your old card could have been 35% faster, but nobody optimized the driver efficiently for it (see GTX 770).
Changing architecture is much better than releasing the 300 series as respins imo. I had 2 sets of Kepler cards, 680 sli and 780ti sli and never noticed anything going on performance wise with drivers.
It's a maintenance nightmare, and when you have a more or less stable arch, you can support older hardware much better. Weren't you surprised that the 770 got a 35% increase under DX12? The Maxwells didn't, doesn't that ring any bells?
I was pessimistic, but I still cautiously hoped these hyped new APIs might bring some real, notable improvements on the GPU front as well (similar to how advanced CPU instruction sets [SSE, AVX, FMA, ...] or more architecture-specific optimizations [especially hand-tuned ones, but even automatic compiler optimizations] can bring really nice speedups in certain kinds of CPU tasks). That may still happen in the future, but I am more pessimistic after seeing some Mantle and early DX12 results. It seems it's really just about the CPU, not the GPU at all (considering only real, significant differences, on a SINGLE GPU).

And I think that's a problem, because GPU evolution (in terms of raw horsepower) will slow down along with CPU evolution (just look at a Sandy Bridge vs. Skylake clock-for-clock benchmark, and how the move to 6 or 8 mainstream cores keeps stalling). It takes more and more time to get a new fabrication process working as expected, and even those bring diminishing returns in performance. Today's top VGAs are already "monsters" (300W+ beasts that should be tamed by a watercooler, with prices creeping up from generation to generation), so I don't really want several of them in a single gaming PC...
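The CPU-side analogy above (SSE/AVX-style speedups) can be illustrated even in plain Python: the same reduction run once as an interpreted scalar loop and once through a C-level path, which stands in for a vectorized one. Timings vary by machine, so none are hard-coded here.

```python
# Same sum two ways: element-by-element in the interpreter vs. a
# compiled fast path. Analogy only; real SIMD gains come from SSE/AVX
# instructions, not from Python built-ins.
import time

N = 1_000_000
data = list(range(N))

t0 = time.perf_counter()
total = 0
for x in data:            # interpreted, one element at a time
    total += x
t_scalar = time.perf_counter() - t0

t0 = time.perf_counter()
total_fast = sum(data)    # C-level loop, the "vectorized" stand-in
t_fast = time.perf_counter() - t0

assert total == total_fast == N * (N - 1) // 2
print(f"scalar: {t_scalar:.4f}s  fast path: {t_fast:.4f}s")
```

The point of the analogy: the hardware is identical in both runs; only how well the software exploits it changes, which is exactly the promise (and the limit) of a lower-level API.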
Some noteworthy things I still remember from funny unofficial (fanboy) AMD marketing:
1: "You should buy the HD4xxx because you will need DX 10.1 for anti-aliasing in games with deferred shading"
2a: "You should buy the HD5xxx because it WILL BE faster in tessellation"
2b: "You should buy the HD6xxx because now it's really faster in tessellation (faster than the earlier-gen Geforce which emulated the tessellation unit...)"
3: "You should buy the HD7xxx because you will need GCN for real DX11 support and GPU computing in games [GPU-accelerated AI, physics, etc.]"
Several years and VGA generations later (with a 290X, which could be called an HD8xxx for comparison), I play tons of hours of DAI with Mantle (which is comparable to DX12) and:
1: MSAA still doesn't work (the fps is significantly lower, but the aliasing looks virtually the same with 4x MSAA, so I obviously turn it off :3eyes:)
2ab: I still turn tessellation off completely, because the overall quality/performance ratio is still miserable in my opinion, even at the Low setting (it makes the ground a little more bumpy, but that's all [and it's not even that nice, and it's still weird around the edges of objects lying on the ground], yet it eats ~1/3 of the framerate even when there are barely any tessellated surfaces on screen and it's set to Low :3eyes:).
3: Still none of that is happening, and the techno-babble has been replaced: now I will need an APU (and its integrated GPU with access to the CPU's memory) to do that kind of magic (even though PhysX could run just fine on "ancient" GeForce cards like the 8800GT, and most other physics engines are far from that level, but whatever...) and, of course, DX12 (or Mantle) instead of DX11 (before I forget that, in this topic). :infinity:
Does the 3D Mark api overhead test match up with the results from this?
That's still a raw CPU speed benchmark (and a validation that the API technically works). This is a CPU+GPU graphics test. Apples and sharks.
There is something weird with FX CPU performance in Ashes of the Singularity.
It seems like these CPUs are not fully utilized, as the difference between the FX-6300 and FX-8370 should be much bigger (25% from core count and about 10% from frequency).
The devs also stated before that the FX-8350 is close to an i7-4770 in this test...
But I'm happy about the 290/390(X) performance; it's finally what it should have been from day one.
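The expected gap can be put into numbers. The base clocks here are assumed (FX-6300 around 3.5 GHz, FX-8370 around 4.0 GHz; turbo behavior is ignored), so treat this as a back-of-envelope upper bound, not a measured figure.

```python
# Back-of-envelope: how far behind an FX-6300 "should" be vs. an FX-8370
# under perfect core scaling. Clock values are assumptions, not specs I
# verified against a datasheet.
cores_small, cores_big = 6, 8        # FX-6300 vs FX-8370
clk_small, clk_big = 3.5, 4.0        # assumed base clocks, GHz

core_deficit = 1 - cores_small / cores_big     # 25% fewer cores
clk_deficit = 1 - clk_small / clk_big          # 12.5% lower clock
ideal_gap = 1 - (cores_small * clk_small) / (cores_big * clk_big)

print(f"cores: -{core_deficit:.1%}, clock: -{clk_deficit:.1%}, "
      f"ideal combined gap: -{ideal_gap:.1%}")
```

Under these assumptions the ideal combined gap is roughly a third, so two chips landing nearly on top of each other in a benchmark does suggest the extra cores are idle.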
Actually, FX CPUs are faster under DX12 than under DX11, but they're still a pile of junk.
In the Star Swarm test there was 100% scaling, and the FX performed at i7 levels.
I don't think this test is limited to six cores only; see this picture.
This could easily just be a badly tested platform or something.
Also, draw calls in 3DMark scale with more cores under DX12.
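That kind of scaling is plausible because DX12-style command recording is parallel by design: each thread records its own command list without fighting over a driver lock, and the lists are submitted afterwards. A toy sketch (the names and the work split are illustrative, not a real graphics API):

```python
# Each worker "records" its own command list independently, mimicking
# one DX12 deferred-recording thread; lists are then merged for submission.
from concurrent.futures import ThreadPoolExecutor

N_DRAWS = 10_000
N_THREADS = 4

def record_command_list(draws):
    # Stand-in for building one thread's command list.
    return [("draw", i) for i in draws]

# Round-robin split of the draw calls across threads.
chunks = [range(i, N_DRAWS, N_THREADS) for i in range(N_THREADS)]

with ThreadPoolExecutor(max_workers=N_THREADS) as pool:
    lists = list(pool.map(record_command_list, chunks))

submitted = [cmd for cl in lists for cmd in cl]
assert len(submitted) == N_DRAWS
```

Under DX11 the equivalent work is largely serialized in the driver's immediate context, which is why draw-call throughput there barely moves with extra cores.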