Sure, but we are still months away from release, so this can be considered an alpha build, and for an alpha it was looking and running pretty good. It would be a disappointment if it ends up as bad as AC Odyssey in framerates (3440x1440@28fps on a 1080 Ti at launch).
Pretty much the same story as with Witcher 3. I'm guessing they counted on GPU power progressing more over the past 5 years they've been developing the game than has actually been the case. There were great improvements with Maxwell and Pascal, but then 2.5 years went by and we only got a ~30% improvement with the 2000 series. By the usual cadence we would have had two generations in that timeframe, each giving 40-70% more GPU power than the previous one. Obviously, when the hardware isn't as powerful as they anticipated it would be, they need to dial things back.
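Just to put numbers on that argument: here's a toy calculation comparing the compounded gain from two hypothetical generations at 40-70% each against the single ~30% uplift the 2000 series actually delivered. The percentages are the ones quoted in the post, not measurements of mine.

```python
# Compare expected compounded GPU gains vs what Turing actually delivered.
# Figures are the post's estimates, used purely for illustration.

def compounded_gain(per_gen_gain: float, generations: int) -> float:
    """Total speedup factor after `generations` gens of `per_gen_gain` each."""
    return (1.0 + per_gen_gain) ** generations

expected_low  = compounded_gain(0.40, 2)  # two gens at +40% each -> ~1.96x
expected_high = compounded_gain(0.70, 2)  # two gens at +70% each -> ~2.89x
actual        = compounded_gain(0.30, 1)  # one gen at +30% -> 1.30x

print(f"expected: {expected_low:.2f}x to {expected_high:.2f}x, actual: {actual:.2f}x")
```

So developers planning around the historical cadence would have budgeted for roughly 2-3x the GPU power and got about 1.3x instead.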
It's like you folks forgot the 'Can it run Crysis?' meme. What do you mean by 'usual standards'? If you were expecting a 100% performance increase every gen, that's silly. Maxwell was ~30% over Kepler, Pascal another ~30%. And those weren't generally new uarchs so much as increased shader counts, efficiency, and memory throughput.
Did you just pull those numbers out of your arse? If we do an apples-to-apples comparison, i.e. compare GPUs using the biggest chip of each gen (1080 Ti vs 980 Ti, etc.), then it looks like this:
Titan vs GTX 580: GK110 was roughly 100% faster than GF110. https://www.guru3d.com/articles_pages/geforce_gtx_titan_review,19.html
980 Ti vs 780 Ti: GM200 was ~70% faster than GK110. https://www.guru3d.com/articles_pages/msi_geforce_gtx_980_ti_gaming_oc_review,19.html
1080 Ti vs 980 Ti: GP102 was roughly 70% faster than GM200. https://www.guru3d.com/articles_pages/geforce_gtx_1080_ti_review,13.html
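Chaining those per-generation uplifts shows how quickly they compound. This is just arithmetic on the percentages quoted above (sourced from the linked Guru3D reviews); the label strings are mine.

```python
# Cumulative big-chip speedup from the GTX 580 era to the 1080 Ti,
# chaining the per-gen gains quoted in the post.

gains = [
    ("GTX 580 -> Titan (GK110)",   1.00),  # ~100% faster
    ("780 Ti -> 980 Ti (GM200)",   0.70),  # ~70% faster
    ("980 Ti -> 1080 Ti (GP102)",  0.70),  # ~70% faster
]

total = 1.0
for step, gain in gains:
    total *= 1.0 + gain
    print(f"{step}: cumulative {total:.2f}x")
```

That works out to roughly a 5.8x cumulative gain across three generations, versus ~1.3x from Turing's one generation, which is the whole point of the disagreement above.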
These days 4 GB is the minimum for 1440p; some games can use up to 5 GB, though most devs try to keep usage under 4 GB where they can.
I haven't seen any games go over 7 GB at 1440p with max settings. 8 GB is more than enough for current games, and you can easily get away with 6 GB.
8 GB of VRAM is the sweet spot. https://www.guru3d.com/articles-pag...-graphics-performance-benchmark-review,8.html https://i.**********/qMP8V2BH/vram.png
Below are the raw values. It doesn't matter whether SLI/CF is used; the single GPU is always what counts, because the frame buffer is mirrored across cards.
1-2 GB VRAM is recommended for 720p ultra gaming.
2-4 GB VRAM is recommended for 1080p ultra gaming.
4-8 GB VRAM is recommended for 1440p ultra gaming.
6-12 GB VRAM is recommended for 4K ultra gaming.
12-24 GB VRAM is recommended for 8K ultra gaming.
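The list above is basically a lookup table, so here's a minimal sketch of it as one. The resolution names and GB ranges are exactly the post's recommendations; the helper function name is mine.

```python
# The post's ultra-gaming VRAM recommendations as a lookup table:
# resolution -> (low, high) recommended GB.
VRAM_RECOMMENDED_GB = {
    "720p":  (1, 2),
    "1080p": (2, 4),
    "1440p": (4, 8),
    "4K":    (6, 12),
    "8K":    (12, 24),
}

def enough_vram(resolution: str, vram_gb: int) -> bool:
    """True if `vram_gb` meets at least the low end of the recommendation."""
    low, _high = VRAM_RECOMMENDED_GB[resolution]
    return vram_gb >= low

print(enough_vram("1440p", 6))  # True: 6 GB clears the 4 GB floor
print(enough_vram("8K", 8))     # False: below the 12 GB floor
```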
There is also a thing called streaming pre-allocation, where the game buffers as much VRAM as it can, so it doesn't necessarily need the full 8 GB or 10 GB it grabs on a 1080 Ti / 2080 Ti, e.g. CoD BO4 or BF5. For now 6 GB is still plenty, although a 3 GB buffer is a bit too low for modern games. Hardware Unboxed's system RAM review of 2018 games shows it well: a GTX 1060 with 3 GB of VRAM really needs 16 GB of system RAM. That's a must-have with any modern GPU now, since most current games push a 9-12 GB system RAM requirement.
Maybe 10 years ago. Even at 1080p, never mind 1440p, a 1080 Ti with 11 GB needs a lot less system RAM than a 290X CFX 4 GB setup; it depends on the hardware and settings. As for my system, I don't run 32 GB of RAM any more, just 16 GB at the fastest speed. https://www.youtube.com/edit?video_id=PO4SBSMBBcU&video_referrer=watch
Great insight on all the useful information pertaining to Nvidia cards. I really like Nvidia cards and the graphics they produce in fun games.
No game needs more than 8 GB for its frame buffer at 4K, and most will operate just fine with 4 GB, so any stuttering would be due to an inefficient texture streaming implementation from the game programmer. At worst, you should see lower FPS as the GPU constantly waits on data being transferred or streamed from the hard drive and system memory. When the frame buffer is constantly exceeded, i.e. with a lower-end graphics card, system memory operates much like VRAM, which should likewise only result in lower FPS.

Most games are designed with a frame buffer budget in mind, which is why you have setting headings like medium, high, ultra, etc. In addition, many games detect how much VRAM is available, so they should know exactly how best to handle the situation and avoid stuttering.

Finally, if using SLI or Crossfire, the amount of system RAM used will depend on the multi-GPU rendering method in play. With alternate frame rendering, the most common method, the amount of data being stored or streamed needs to be much higher than with a single graphics card, which is why multi-GPU system requirements always ask for more memory.

A frame buffer should be seen like an L1 cache for the CPU: there's no point in having it larger or faster than the GPU can make use of. The one caveat is that a CPU has an L1 cache backed by L2, L3, etc., so it is more evenly proportioned for efficient data streaming; the much larger memory size on a graphics card mostly makes up for this, but it still requires decent programming from the game designer.
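To make the texture-streaming point concrete, here's a hedged toy sketch of the idea: the game keeps resident textures within a VRAM budget and evicts the least-recently-used ones when a new request would exceed it. This is an illustrative model under my own assumptions (class and texture names are made up), not any engine's actual implementation.

```python
# Toy texture streamer: stay within a VRAM budget by evicting
# least-recently-used (LRU) textures when a new one won't fit.
from collections import OrderedDict

class TextureStreamer:
    def __init__(self, budget_mb: int):
        self.budget_mb = budget_mb
        self.resident = OrderedDict()  # name -> size_mb, oldest first

    def used_mb(self) -> int:
        return sum(self.resident.values())

    def request(self, name: str, size_mb: int) -> None:
        """Make `name` resident, evicting LRU textures to stay in budget."""
        if name in self.resident:
            self.resident.move_to_end(name)  # mark as recently used
            return
        # Evict the oldest textures until the new one fits the budget.
        while self.resident and self.used_mb() + size_mb > self.budget_mb:
            self.resident.popitem(last=False)
        self.resident[name] = size_mb

s = TextureStreamer(budget_mb=4096)      # pretend 4 GB card
s.request("terrain", 2048)
s.request("buildings", 1536)
s.request("npcs", 1024)                  # would exceed 4096 -> evicts "terrain"
print(sorted(s.resident))                # ['buildings', 'npcs']
```

A streamer that does this well just trades VRAM pressure for background transfers (lower FPS at worst); one that does it badly stalls mid-frame, which is the stuttering described above.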