Discussion in 'Videocards - NVIDIA GeForce Drivers Section' started by ChaosPhoenix, Aug 24, 2016.
True again. Bitterness subsiding... :nerd:
Well, here's the thing. If DX12 could give the Fury X 20% extra performance and Pascal gets, let's say, 5%, then the numbers look more like this:
FuryX - 93fps
GTX1080 - 92fps
Totally BS numbers from me, of course, but it's a possible scenario. Hitman had some nice improvements from DX11 to DX12, so, while only theoretical, there's a good chance that with DX12 both cards could end up about on par.
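The arithmetic behind that scenario can be sketched in a few lines. The DX11 baselines here are made up (backed out from the admittedly hypothetical 93/92 fps figures above), not measurements:

```python
# Hypothetical back-of-the-envelope for the DX12 scaling scenario above.
# Baseline DX11 fps values are invented to reproduce the numbers in the post.
def dx12_fps(dx11_fps: float, gain_pct: float) -> float:
    """Apply a percentage uplift to a DX11 frame rate."""
    return dx11_fps * (1 + gain_pct / 100)

fury_x = dx12_fps(77.5, 20)    # +20% uplift -> 93.0 fps
gtx1080 = dx12_fps(87.6, 5)    # +5% uplift  -> ~92.0 fps
print(round(fury_x), round(gtx1080))
```

The point of the exercise: a card that trails by ~10 fps under DX11 can land level or slightly ahead if its API gain is a few times larger.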
Both the examples I have, from the Nitrous engine and this one, show that the "AMD-favored" engines more or less scale the cards according to the chart I made, while the "NVIDIA-favored" engines simply don't seem to use AMD GPUs properly. The only cards that I've seen have an actual problem with AMD-favored engines have been Kepler cards with this specific engine. As for Unreal 4, it's made by really competent people and it has to run mainly on consoles, so GCN optimizations are unavoidable.
The fact is that if you look at the "AMD optimized" games, they don't really seem to "favor" AMD, merely use AMD properly. That's AMD's problem, but you can't really call favorites.
This depends on patches, but I don't expect the performance relationships of the cards to change much. We'll see I guess. As for the ROP bound, it's not just the ROPs, but graphics performance in general that seems to keep NVIDIA afloat in most modern games.
Well, if the Fury X gets 20% from DX12, then I expect the Pascal cards to gain too. Not 20%, but at least 10% imo, just from the submission changes and the better usage of the CPU in the game in general (there is a limit to what NVIDIA can do with their driver and frontend). The gap between the Fury X and the 1080 will close, but I can't see it getting that close.
These are my thoughts too. I believe that the performance will be even closer to that 9/8.6 ratio, but I can't see the Fury X overtaking the 1080. Keep in mind, though, that just as NVIDIA's graphics subsystem seems to be much better, GCN's overall compute design is stronger. It's more or less the modern compute platform for games, so there might be architectural factors that make it more efficient in a lot of situations.
It sounds like the MSAA option in the game is not actual MSAA but rather a form of supersampling. I have not seen MSAA have such an insane impact in a game, ever. We have some AA experts in this forum who might know whether this is true.
I fail to see how this struggles on a 1080 when GTA V has so much more going on while looking far superior.
As usual, you push your agenda without thinking or actually looking at the data available. Again: the power of a graphics card is not only in its rated peak FLOPS, it's in how the system performs overall. And since this game doesn't follow the usual pattern, it is doing something bad to NV cards; it's that easy, really. The choice of optimizing the game for ~20-30% of the market instead of optimizing it for 70-80% or for 100% is what matters here, not your faulty theories on how something "should" perform. Everything performs exactly like it is programmed to perform. It is a human choice, not some inherent h/w characteristic.
Your repeated theories on Fury being ROP limited are highly dubious as well because I haven't seen any tests which would actually prove it.
And Kepler is perfectly fine in this game btw:
The difference between this benchmark and Guru3D's is in the run used - Guru's benchmark is using the built-in sequence while PCGH is testing the actual gameplay.
This is another point where AMD is blatantly lying to the likes of you, by creating a built-in benchmark with graphics fidelity nowhere to be seen in-game (I went to the same scene in the game; it's way less detailed even on Ultra settings than in the built-in benchmark, and it doesn't run anywhere near as slow as in the benchmark).
"Biased" is calling something "proper" when really "proper" means a good optimization for all h/w on the market instead of just one vendor. There are no indications that these results are somehow more "proper" than, for example, TW3's, which runs well across all h/w on the market while being an NV GW title.
Which would mean that the DX12 path will be even more badly optimized for NV h/w, as even PrMinisterGR's theory of cards sitting on their respective FLOPS numbers wouldn't work in this case.
But tbh I'm not expecting much from DX12 in this game. It's the same engine as Hitman, where DX12 gains are very small on Radeons, and the fact that they've delayed the renderer means that its performance is really subpar at the moment, even on AMD's h/w - which can be clearly seen in GameGPU tests of the pre-release version.
This game and benchmark are using DX11. No idea what any "low level API" has to do with it.
I wouldn't say it struggles. I'm at 60 FPS the vast majority of the time, with Vsync on at 1440p. Without Vsync, I can get well into the 70s.
The problem is that the game's engine lacks polish and refinement, which is to be expected I suppose considering it's brand new.
By lacking polish and refinement, I mean there are tons of graphical glitches, like settings and effects not working properly. Also, the game obviously has some optimization issues, in that it is prone to micro stuttering, which I think is either because the asset streaming is too slow, or because there is some kind of problem with their occlusion culling system.
As for GTA V, that game has nothing on The Division. The Division is the best looking, and best running DX11 game ever made imo...
It's hardly brand new, as it's a fork of the Glacier 2 engine, which itself isn't new, as can be guessed from the "2" there.
The perceived lack of polish is due to them using subpar AMD Gaming Evolved effects, which are buggy and don't work as intended most of the time. If you turn the settings down to the console level, the graphics are a lot more "stable" and "polished", as this is what the devs were working on primarily. Whatever was added on top of this was an afterthought.
Your chart pic is also an eye-opener; it helps that we can see the disabled settings. I was actually amazed that the Fury X beat my GTX 1070 (looking at HH's numbers using the built-in benchmark). However, the chart you posted is more in line with how I thought my card performed. It's nearly a 10 fps difference (80.6 vs 71.9). Weird things going on with this game.
I'm going to sit on the fence and wait this out.
dr_rus actually used the Prague test (he knows some parts of the game are CPU/GPU bound), smart tho.
Here are the Gardens and Prague tests in-game; the Fury X tends to beat the 1070 not only in the built-in benchmark test.
I get decent frame rates (50+) with my current setup. Graphics settings are set to Ultra with MSAA OFF and CHS OFF, resolution 4K. With MSAA ON, my performance takes a massive hit, and at this resolution I can't notice the difference.
Something is incredibly wrong here. I have a similar card OC'd to +105 core and +530 mem, and I run around 40fps+ with the settings set to Very High and everything else the same. I actually thought that with another 1080 the game should run at a minimum of 60fps constantly. Something is very unoptimized and incorrect with this game. I DO NOT recommend buying until the DX12 patch or even more performance patches (there have already been 2) have been released.
What are you saying here? All things being equal, isn't the differentiating characteristic each GPU and its driver? No matter how you try to attribute it to the "rest of the system" or whatever (making no sense in the process), the fact is that the game performs almost 1:1 according to each card's compute performance, with NVIDIA cards actually looking more efficient than AMD in it. What's your problem with that? That AMD hardware is getting utilized?
Techreport for the rest.
Yes, Kepler is great. In Hilbert's test the 780Ti is slower than the 380x; in the one you posted, the 280x is only 53% faster than the GTX 770. It's great alright.
The difference is that the test you posted is not an actual Ultra settings test.
Hilbert ran the internal benchmark on actual Ultra settings; the test you posted has no CHS, no Temporal AO, forced anisotropic filtering, and it's not the same run on top. Getting different results on a different run is completely ok, but this isn't even the actual Ultra settings.
Your issue with the benchmark is that it's more detailed than the actual game and that your card doesn't cope as well as you thought it would? Is your issue your frame rate with the 980Ti, or that the 390x seems to be around it? Because those are two different issues.
Where is the bad optimization for NVIDIA in this title? Even in the benchmark that you call biased, NVIDIA cards seem to be more efficient than AMD cards. What is your problem exactly? That the almost-equal-in-compute 980Ti and 390x are neck and neck? That the 1080 is destroying everything (as it should)?
A similar chart for The Witcher, since NVIDIA ports are nice and good and use all of the available hardware:
It's not a theory; the numbers are almost 1:1, with NVIDIA cards winning the efficiency prize. The gains in Hitman are big for everyone, not only AMD. Not everyone has a top-level CPU, and DX12 helps a ton with that, with in-engine asset streaming, frame latency... Unless there is a multi-GPU problem, I can't imagine a game that offers DX12/Vulkan where ANYONE selects DX11/OpenGL over it.
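The "almost 1:1" claim boils down to dividing frame rate by rated peak compute. A minimal sketch, using the 9/8.6 TFLOPS ratio already mentioned in this thread for the 1080 and Fury X; the fps figures here are placeholders, not benchmark results:

```python
# Rough sketch of the fps-per-TFLOP efficiency comparison being argued here.
# TFLOPS values follow the 9 / 8.6 ratio cited above; fps values are
# hypothetical stand-ins, not measurements from any review.
def fps_per_tflop(fps: float, tflops: float) -> float:
    """Frames per second normalized by rated peak FP32 compute."""
    return fps / tflops

cards = {
    "GTX 1080": (92.0, 9.0),   # (hypothetical fps, rated peak TFLOPS)
    "Fury X":   (85.0, 8.6),
}
for name, (fps, tf) in cards.items():
    print(f"{name}: {fps_per_tflop(fps, tf):.2f} fps/TFLOP")
```

If the two ratios come out nearly equal, the cards are "sitting on their respective FLOPS numbers"; a higher ratio on one vendor is what the post calls winning the efficiency prize.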
Let me summarize: Your first paragraph about things not being equal while being equal simply made no sense, neither logically nor grammatically, and it didn't present any kind of extra fact.
The fact you presented was from a benchmark using a custom run that we can't replicate, while not even using the actual Ultra settings, but a custom preset.
All GPUs perform within each other's compute bracket, with NVIDIA cards edging out AMD cards once more.
Your conclusion: "This game is biased because AMD cards are not eating dirt along with Kepler". Your problem seems to be that the 390x is doing well, not that your card has a problem (which you admit it actually doesn't when you play, which is hilarious and shows us what your ACTUAL problem is about this game).
Those numbers look terrible. Really off-putting. I wouldn't buy the game based on those numbers. In fact, since it's such a mixed bag, I don't think I will bother. Cheers anyway.
I'm going to play this game no matter what, but waiting until it gets a few patches and a price drop.
Looking at your post, I think you are a person with double standards, and this is the reason AMD is left with no market share in high-end products. You bash GameWorks, but you want people to enable AMD CHS, which is a turd that does nothing in-game while impacting performance that much - that's how blind you are.
I don't know if you're aware, but CHS and temporal AO are both currently broken in Deus Ex MD.
CHS has reduced shadow draw distance, and temporal AO actually introduces shimmering rather than eliminating it.
The benchmark doesn't reflect actual gameplay though, as it dramatically reduces or outright eliminates the CPU factor, which is an important part of the main game.
So why is he calling the game a bad port that favors only AMD then? If NVIDIA hardware is working as it's supposed to, is the problem that AMD hardware is working ok too?
Also the DX12 patch will most likely eliminate the CPU factor.
It is a bad port because it is not optimized for 80% of PC gamers. This is the reason why Hitman, Deus Ex, Ashes of the Singularity and QB did not make a profit on PC. Deus Ex is dead on PC right now because it is not optimized for 80% of PC users.
Just Cause 3 and Rise of the Tomb Raider are the only games that Square Enix made a profit from on PC, because they were targeted at 80% of PC users, and I know Square Enix has learned their lesson well, just like MS.
QB was a bad port, just less bad on AMD.
Any actual numbers for that, or just the usual ooma sources?