Discussion in 'Videocards - NVIDIA GeForce Drivers Section' started by ChaosPhoenix, Aug 24, 2016.
There is no such option.
agreed.. the visuals look so-so, the face textures look like some PS3 game
For me MGS V: PP looks a lot better than this & is a lot better optimized for PC.
Low quality assets smothered in GCN's secret shader sauce. Gosh, I've criticized nvidia for years of gameworks bull and now when AMD pulls the same shenanigans, everyone's mum. Consistency plz.
Annnnd not 'cause it's a built-for-consoles deferred rendering engine?
If you look at the actual bench results, they more or less correlate with the compute power of each card.
I think that it might have been a bad idea to even give the option of MSAA considering how much it hurts performance. This may be a case of choice being a bad thing considering that TAA as you've said is pretty good.
Well I can use MSAA in other games and not cut my fps in half. So it's about the game itself too. I think.
If only "compute power" were just the peak flops rating and not the whole system. Also, they don't: I've seen a benchmark where a 1060 was slower than a 970. It's like AMD purposefully slowed down Pascal cards and forgot about Maxwell, lol
Disable msaa, taa, sharpening and use Reshade instead
I require at least 80-90fps, but since I can't have that with this game I'll settle for 65-85fps at 2560x1440: no MSAA, Ultra, CHS set to On instead of Ultra, VL set to On instead of Ultra.
As many have said TAA is more than enough since MSAA doesn't do a very good job.
"Not always 144 but won't go below 40" - that's a pretty broad spectrum there. So you mean you actually get 144fps in some cases with 2xMSAA?
ReShade will mess up too many menu elements.
I'm using this preset: http://sfx.thelazy.net/games/preset/6023/
It hurts performance even further, but the DoF doesn't seem to interfere with the interface at all.
This kind of post makes me laugh at those who say the GTX 1080 is overkill for anything less than 4K.
Nothing's overkill for game(don't)works and gaming(de)volved titles.
I wonder how much of that is due to the eDRAM arrangement that your CPU has.
As usual, when you don't like something, you don't know what you're talking about. If you take the throttling of the Nano into consideration, the frame rate results are almost 1:1 with the teraflop ratings of the cards. The small differences are cards like the 380X, which has a lot of compute but a smaller/less efficient front end (32 ROPs), and the 1070 edging out the Fury again due to better ROP performance. The only thing getting the shaft is Kepler, which has been getting it since NVIDIA-sponsored titles like The Witcher. It is also the NVIDIA architecture with the least ROP performance (only 48 ROPs, and at a much lower frequency at that). That would make it an NVIDIA problem.
The game needs performance improvements and hopefully the DX12 patch will deliver, but calling this "bad" because you're not used to AMD hardware getting utilized properly is biased.
^Good point, even if it makes me feel a bit bitter about my experience with AMD. 4.5 years after buying my first 7970, the hype about lower-level APIs bringing true performance is finally beginning to materialize. Basically 4.5 years that my hardware wasn't being utilized properly.
PrMinisterGR, what you've done there is appreciated. For me though, it's making the GTX 1080 look like a beast of a card vs. the Fury X: 10fps higher with only 0.4 TFLOPs more. That's an eye-opener.
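For reference, those teraflop figures come from the usual shader count × 2 ops (FMA) × clock back-of-the-envelope math. A quick sketch, using the public reference boost clocks (real in-game clocks differ, so the exact gap may not match):

```python
# Rough FP32 throughput estimate: shaders x 2 ops (fused multiply-add) x clock.
# Clocks are public reference boost/engine clocks in MHz, not in-game clocks.
def tflops(shaders, clock_mhz):
    return shaders * 2 * clock_mhz / 1e6  # MFLOPs -> TFLOPs

cards = {
    "GTX 1080": tflops(2560, 1733),   # 2560 shaders at reference boost
    "Fury X":   tflops(4096, 1050),   # 4096 shaders at 1050 MHz
}
for name, tf in cards.items():
    print(f"{name}: {tf:.1f} TFLOPs")
```

At reference clocks that works out to roughly 8.9 vs 8.6 TFLOPs, i.e. the gap is small on paper even though the benchmark delta isn't.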
On the other hand, they sell them at prices that are comparable to their DX11 performance at the time. Which means that in the end, you get better than what you pay for. My 7970 was $70 cheaper than the equivalent 680. I didn't expect it to be faster, everything else has been a nice bonus.
On the actual subject now: the game does seem to need optimization, but I'm betting that whatever improvements come will come with the DX12 patch. Games since AC: Unity, more or less, do too much stuff on screen at the same time.
Oh yes. NVIDIA's frontend seems to be doing wonders for the cards. One reason that you get that good single-submit performance is that frontend. I just don't like it when people call developers and games "paid", just because they don't run like sh*t on hardware of other manufacturers.
And on the other hand you could say that the Fury X, with something like half the ROP performance of the 1080 and half the memory (which matters in this game), is quite close to it if properly utilized.
EDIT: Also because Pascal is so aggressively clocked, it gets extra ROP performance, don't forget that. That explains really well why the 390x and the 980Ti are basically tied. They seem to be much more "balanced" designs overall in contrast to the Fury cards (too much compute, slow graphics), or Pascal cards (too much graphics, not so much compute unless it's the Titan X).
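To put rough numbers on the ROP point above, here's a sketch multiplying ROP count by reference clock as a pixel-fill proxy (a simplification: real ROP throughput also depends on bandwidth, blending, and compression, and the clocks below are spec-sheet figures, not sustained in-game clocks):

```python
# Crude pixel fill-rate proxy: ROPs x clock (MHz) -> GPixels/s.
# Reference clocks only; actual boost behavior varies per card.
def gpix_per_s(rops, clock_mhz):
    return rops * clock_mhz / 1000

for name, rops, mhz in [
    ("GTX 1080",             64, 1733),  # Pascal's high clocks boost ROP throughput
    ("GTX 980 Ti",           96, 1075),
    ("R9 390X",              64, 1050),
    ("R9 Fury X",            64, 1050),
    ("GTX 780 Ti (Kepler)",  48,  928),  # fewest ROPs at the lowest clock
]:
    print(f"{name}: ~{gpix_per_s(rops, mhz):.0f} GPix/s")
```

By this crude measure the 1080's clocks put it well ahead of the Fury X despite identical ROP counts, and Kepler lands at the bottom, which lines up with the "balanced vs lopsided designs" argument.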
I don't think developers are necessarily paid, but some engines do favor certain vendors or architectures because the engine's developer might have a strong relationship with a certain IHV and so their engine is optimized heavily towards those GPUs.
Some examples are the Nitrous Engine for AMD, and Unreal Engine 4 for NVidia..
So when a game is first released on the market, it might favor AMD or NVidia heavily.. But a few months later, the performance landscape might shift as driver optimizations and patches have an effect.
Deus Ex: MD might favor AMD now, but who's to say it'll be the same three months from now?
I don't think modern games are ROP bound at all. Games have been becoming increasingly ALU and compute bound over the years, and this doesn't seem to be changing..