Discussion in 'Videocards - AMD Radeon Drivers Section' started by oGow89, Jan 27, 2016.
Yeah, at 1920x1200 I can play for around an hour with the very high texture setting before VRAM maxes out (Afterburner reports around 3900 to 3950 MB used, meaning Windows and other processes hold the remaining 50 - 100 MB). After that, total system RAM usage climbs upwards of 10 GB or so (all in all, not just for this game), and as mentioned, cutscenes in particular can stutter badly once that VRAM limit is exceeded. (At 2880x1800, which I downsample via a custom resolution, that time drops to about 20 - 30 minutes before the buffer fills and the game starts spilling into system RAM instead.)
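To put rough numbers on why the downsampled resolution fills the buffer so much faster, here's a back-of-the-envelope sketch. The bytes-per-pixel and buffer counts are my own illustrative assumptions, not the engine's actual allocation:

```python
# Rough estimate of resolution-dependent VRAM (G-buffer, depth, post targets).
# bytes_per_pixel and buffers are assumed values for illustration only.
def render_target_mb(width, height, bytes_per_pixel=16, buffers=6):
    """Estimate MB used by resolution-dependent render targets."""
    return width * height * bytes_per_pixel * buffers / 1024**2

native = render_target_mb(1920, 1200)       # ~211 MB
downsampled = render_target_mb(2880, 1800)  # ~475 MB
print(f"1920x1200: {native:.0f} MB, 2880x1800: {downsampled:.0f} MB")
# The difference comes straight out of the texture streaming budget,
# which is why the 4 GB card hits the ceiling in a fraction of the time.
print(f"extra VRAM vs native: {downsampled - native:.0f} MB")
```

Even with conservative assumptions, 2.25x the pixels costs a couple hundred extra MB of fixed overhead before a single texture is streamed in.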
Pretty sure the R9 390 with its 8 GB VRAM buffer would do better for longer play sessions, plus of course the 980 Ti with 6 GB and the Titan X with 12 GB on the Nvidia side.
Going by the GeForce.com guide comparison images, Lara's player model sees the greatest difference in texture quality between high and very high; most of the terrain differs mainly in normal and specular map clarity, with diffuse (color) detail largely unchanged.
I've played the game for about 6 hours now and can say it runs pretty well on an all-AMD system. I play with everything on very high, including Pure Hair and HBAO+, at 2560x1440 via VSR, and it never drops under 30 fps, with a 45-50 fps average.
Back in 2006-2007, in the late PS2 era and early PS3/360 era, the newer consoles delivered better-looking games than most PCs; back then we lacked the hardware. That is where "Can it run Crysis?" came from. The game was both a benchmark and a technical wonder for years, and remains so. It was a pretty game; even now, if you launch it, you will still lose hours gazing at the water, the sun reflections, and the beach, among other scenes that paint an unforgettable image in your mind, one that remains even after closing the game. When it came out, no hardware was capable of running it at acceptable framerates on the highest quality. That was understandable for two reasons:
1. Hardware didn't cost as much as it does today, over-hyped hardware didn't exist, and no one had to pay over $1000 for a single GPU. The 8800 GT, which would be the equivalent of the GTX 970 or R9 390 now, was around $200 on release.
2. Efficiency existed only on consoles, and the software wasn't there either. We had no GameWorks crap, and yet developers jumped in and tried to use the latest API, which at the time was DX10, with the likes of Call of Juarez, Crysis, Assassin's Creed, Lost Planet, Gears of War, BioShock, and so on. Sure, it was partly advertising for Windows Vista, the visual difference wasn't that big, and Crysis was later unlocked to use very high settings on Windows XP, but the developers started implementing it quickly.
This, on the other hand, is as bad as Batman: Arkham Knight in terms of performance. We have both the hardware and the software, and the game takes advantage of neither. When a game loads one core and makes no use of the others, it is junk. When a developer has the tools to implement a more efficient API and chooses not to, for whatever reason, that is stupid.
You shouldn't be happy. Your GPU was advertised to run games above 1440p with max details and all the eye candy at high performance, and it does in almost every well-developed, well-optimized game. If a GPU that costs over $1000, like the Titan X, runs into problems at 1080p with everything maxed out, then we have a problem. Your GPU costs 500+ euros/bucks; you shouldn't be getting less than 60 fps in any game at 1440p, no matter how good it looks. If the game runs on a sub-$300 console and even on a 10-year-old Xbox 360, albeit at low settings and resolution, a GTX 750 Ti shouldn't be struggling at 13 fps. Games that run like crap shouldn't sell well. This is becoming a fashion trend, with developers betting on how broken a game can be before it stops selling. In a few years they will just sell us their plans for the game, and we will have to develop it ourselves. DX12 exists and is already complete; why not use it? Mantle existed before it; why not use it? It's free. So really, I don't know what you are happy about.
Actually, for an FX CPU + AMD GPU rig, that is fine performance nowadays.
OK, OK, I always set my games to run between 30 and 60 fps with the maximum quality/AA/downsampling possible to achieve that. If I set it to 1080p with FXAA only, I get 60 fps almost all the time.
And with max settings it does look a good bit better than on an Xbox One, and I will remind you that an optimized driver is in the works for this game. For now I'm glad to see this performance at launch.
FU*CK THEM! They ALWAYS delay it until Saturday, and then bingo, it's weekend vacation! F*** YOU AMD and ALL your staff!
Preload, pre-purchase, pre-order, playing before the others... I really don't know the meaning of these words these days. I pre-ordered the game six months ago, yet I'm no better off than someone who buys it a week after release. Thanks to you, AMD. And f*** you again!
LOL no. Where do you get this from?
I'm happy because we got one of the best, if not the best looking game. And I'm buying it as soon as AMD finds the time to fix it.
Why shouldn't I(we)?
BTW... I am surprised that 4GB+ (8GB on the R9 390/X) is starting to make sense even below 4K. I had expected 4GB to stay "future-proof" longer than this.
Well, have you ever compared your VRAM usage to that of the 980 Ti in other games? I know that in games like Shadow of Mordor, GTA V, and even Arkham Knight, our Fury for the most part uses quite a bit less VRAM than cards with more GDDR5. And it has often been said that our 4GB of HBM is equal to 6, and in some cases 8, GB of GDDR5 due to the speed at which it can move data in and out.
I don't know; this is the first game I've had RAM issues with. I mean, SE recommends a 970 and 8GB of RAM, and yet I am using 14GB of RAM...
Hi guys, I've been evaluating the performance for days.
To start off, the AMD framerate problem is basically the AMD DX11 driver overhead problem.
I saw a lot of comments here that annoyed me, like blaming GameWorks and Denuvo.
First, Denuvo: not sure how many Denuvo games you've played, but while I'm no fan of it, we have seen much worse DRM, SecuROM for example, and I played both the original and cracked versions of Mad Max and both ran the same.
Second, GameWorks.
I don't think that ****ty Nvidia GameWorks is doing anything to the performance here; the only GameWorks effect present is HBAO+, and the rest is mostly their Crystal Engine rendering and AMD's Pure Hair solution.
Third, Digital Foundry's review is bad. They claim the PC version has async compute, which is a big lie, since that would need DX12 or Mantle. It's not the first time I've seen them talk about things they don't know; the last mistake I noticed was Leadbetter saying that H170/H110 and B150 boards couldn't run RAM above 2133 MHz, and that you needed a Z170 to run RAM at higher speeds. LOL.
He repeated that at least five times, and it's completely false.
Fourth, if you read DF's review carefully, they said it's impossible to exactly match the Xbox One performance/settings, since shadows and tessellation are different.
The only place the Xbox One uses tessellation is the snow footprints, while on PC tessellation is used massively all over the place, yet DF turned tessellation on as part of getting "close" to the Xbox One.
Shadows on the Xbox One have the same resolution as the PC's high setting, but the Xbox One is missing shadows from a lot of objects that are present on PC. Those two reasons are why a GTX 960 is needed to "match" the Xbox One.
Fifth, the game is heavy because it relies on a lot of heavy effects. Matching Xbox One quality is actually easy, but pushing soft contact shadows, tessellation all over the place, Pure Hair at very high with ~30,000 strands of hair instead of a handful, plus far more view distance and objects, it's normal that it requires massive hardware.
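For a sense of scale on the hair alone, here's a rough sketch of how per-frame geometry cost grows with strand count. The vertices-per-strand and bytes-per-vertex figures are my own assumptions for illustration, not Pure Hair's actual numbers:

```python
# Rough vertex-data estimate for a strand-based hair system.
# verts_per_strand and bytes_per_vert are assumed values, not Pure Hair's.
def hair_vertex_mb(strands, verts_per_strand=32, bytes_per_vert=32):
    """Estimate MB of vertex data for a given number of hair strands."""
    return strands * verts_per_strand * bytes_per_vert / 1024**2

print(f"30,000 strands: ~{hair_vertex_mb(30_000):.0f} MB of vertex data")
print(f"   500 strands: ~{hair_vertex_mb(500):.2f} MB")
```

The memory itself is modest; the real cost is that every one of those strands is simulated and rasterized each frame, which is why strand count scales the GPU load so steeply.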
Sixth, the only drawback of the port in my opinion is the VRAM usage: the Xbox One can match the very high textures, while on PC 4GB of VRAM doesn't seem to be enough for them.
Seventh, **** Nvidia and Square Enix if they release a DX12 patch for this game that doesn't work on AMD. That would be awful, because DX11 performance on AMD is poor, and it's urgent that more DX12 titles come to PC; AMD needs that much more than Nvidia does.
If AMD can't benefit from DX12 because of Nvidia's marketing, then AMD is ****ed not only in this game but also in the next games that follow this strategy.
4GB of VRAM cannot equal anything other than 4GB of VRAM. As for marketing and hype, that's another story, but it's just that: a story.
Of course we know it doesn't literally equal more VRAM, but I do know that in some games a Fury will only use 3GB where a 980 Ti would use 5.
I didn't argue that. Different hardware, different drivers, etc. It also goes the other way in the case of this Tomb Raider, as evident from testing. Though it doesn't seem to be too much of a difference.
It will be interesting to see whether memory consumption changes once AMD releases 16.1.1, or whatever they end up calling it. Apparently it might be released tomorrow, but it could take longer, since there's no official confirmation of the schedule. It appears it was originally intended for a Friday release last week, but some last-minute changes must have been implemented, so it got delayed a bit.
(Perhaps a larger 16.2 beta could follow shortly afterwards, assuming this next driver is primarily about this game; shrinking down the list of known issues from 16.1 would be welcome, for example.)
If nomenclature logic means anything to AMD, a driver released in February 2016 should be named 16.2.(x), not 16.1.1.
I know for me personally, playing Tomb Raider at 1440p uses around 3.4-3.5GB of VRAM on average, but my system will use as much as 14GB of RAM in certain areas >.<
My (gaming) life got a lot simpler and easier once I stopped using the AB overlay to feed my head with information I shouldn't really care about.
The Soviet Base is the most hardware-demanding area of the game. On my humble rig I can get above 60 fps in closed areas like caves, but in that Soviet base the fps sometimes drops to 15. Not to mention a lot of stuttering when the game saves, heavy stuttering in some cutscenes, and even a BSOD.
I don't know why, because this game runs even on 10-year-old hardware, the Xbox 360. The only explanation is very poor optimization.
Can you monitor your GPU usage in that Soviet area, please?
That's not possible. If a card supports DirectX 12, the game will launch in DirectX 12 regardless of feature level. The missing feature level 12_1 on AMD will, at most, disable some specific in-game settings, and even that is unlikely; so far the optional features have been about optimizing certain effects, not disabling them altogether. Compare Battlefield with 11.1, or Assassin's Creed with 10.1, which only had some anti-aliasing optimizations but didn't hide any AA modes from 10.0 cards.
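To illustrate the point: engines typically require only a minimum feature level to create the device, then query optional capabilities per effect rather than refusing to run. A simplified sketch of that selection logic (the level ordering mirrors D3D12's, but the functions and the minimum are hypothetical, not any real game's code):

```python
# Simplified model of feature-level gating in a DX12 title: the game needs
# only its minimum level to launch; higher levels just unlock extra effects.
FEATURE_LEVELS = ["11_0", "11_1", "12_0", "12_1"]  # ascending capability

def launches_in_dx12(card_level, game_minimum="11_0"):
    """A DX12 title launches as long as the card meets the game's minimum
    feature level; 12_1 is not required just because the API is DX12."""
    return FEATURE_LEVELS.index(card_level) >= FEATURE_LEVELS.index(game_minimum)

def optional_effect_enabled(card_level, effect_requires):
    """Optional effects tied to a higher level are simply toggled off on
    lesser cards, not fatal to the renderer."""
    return FEATURE_LEVELS.index(card_level) >= FEATURE_LEVELS.index(effect_requires)

# A 12_0 card (e.g. GCN) still launches the DX12 path...
print(launches_in_dx12("12_0"))                 # True
# ...it just skips any effect that would require 12_1.
print(optional_effect_enabled("12_0", "12_1"))  # False
```

This matches the Battlefield 11.1 and Assassin's Creed 10.1 examples above: the lower-level cards ran the same renderer, minus a few optimized paths.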