https://docs.nvidia.com/gameworks/c...s/d3d_samples/d3d11deferredcontextssample.htm
https://developer.nvidia.com/sites/...dev/docs/GDC_2013_DUDASH_DeferredContexts.pdf

There's some info on how deferred contexts work (there's more out there, but those two cover the implementation well). It does help overall performance, but this particular game creates several threads (up to 8 from what I've been reading), so it can hit lower-core CPUs harder. It reduces overhead and can increase performance, but if all the cores are already busy with other tasks the work has to be shuffled around as cores become available, and overall CPU utilization and performance can suffer slightly. There's a rough code sketch of the pattern at the end of this bit.

I don't know of a good way to properly test this since it can't really be disabled, but it's one reason NVIDIA GPUs in benchmarks see CPU0 hit more frequently, often sitting around 90 - 100% usage. That on its own isn't bad, but combined with the number of threads created and the game's own reliance on cores, spawning multiple threads for tasks of all types, it can become a bit of a bottleneck.

For AMD, as far as I know they don't/can't utilize deferred render contexts in the same way: https://github.com/GPUOpen-LibrariesAndSDKs/AGS_SDK/issues/20 It might have been resolved by now, although I don't think anything has changed. The situation is more or less the same either way since the game itself relies a lot on these CPU threads, though outside of cities CPU0 is going to measure a bit lower than on NVIDIA systems, ranging up to 80% or so. Inside cities there's a ton of draw call commands and other tasks, and it's likely going to sit at 100% even on the higher-end Intel 8th-gen (Coffee Lake) and AMD Ryzen 2000 Zen+ CPU models.

The game does similar things on consoles but uses the lower-level PS4 and Xbox APIs, from my very limited understanding, and it also benefits from the hardware architecture being fixed and a 30 FPS framerate (with dips on the base models, but even so), which gives it a 33.3 ms frame time budget. On PC at 60 FPS you get 16.7 ms, so it executes, or tries to execute, the same work faster, which takes more out of the hardware or you get stuttering, hitches or stalling. Final Fantasy XV seems to really have a thing for that last bit, and it might also have a slight resource or memory leak; I need to check up on a lot of things really. The faster things run, the more data is being moved, and the hardware has to keep up or there will be consequences. It doesn't have to be terrible, but depending on the program itself and other system-specific bottlenecks it can manifest in various ways, from framerate drops to stalls to just loading for a longer period of time.

I don't have a perfect understanding of this sort of thing either, I'm trying to improve and learn, but it's an incredibly complex subject. Seeing something like RAM causing a 20+ percent framerate increase was really surprising when I compared DDR3-1333 memory modules against DDR4-2133 and higher, especially for open world titles. Fallout 4 was an early title showing some really curious gains scaling up to 2400 MHz and probably even higher, but kits weren't quite up to that speed yet. Then Watch_Dogs 2 and Wildlands and many other games, including Assassin's Creed, also showed impressive gains, so that was quite an eye opener.
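To go back to the deferred contexts bit: here's a minimal sketch of what that pattern looks like with the plain D3D11 API, assuming worker threads each record a command list that the immediate context then replays. This is just the general shape of the technique from those NVIDIA links, not how the game itself is structured, and error handling is skipped.

[CODE]
// Minimal sketch of the D3D11 deferred-context pattern: worker threads record
// command lists on deferred contexts, the render thread replays them on the
// immediate context. Thread count is a made-up number for the example.
#include <d3d11.h>
#include <thread>
#include <vector>
#pragma comment(lib, "d3d11.lib")

int main()
{
    ID3D11Device*        device    = nullptr;
    ID3D11DeviceContext* immediate = nullptr;
    D3D_FEATURE_LEVEL    level;

    // Create the device and the single immediate context.
    D3D11CreateDevice(nullptr, D3D_DRIVER_TYPE_HARDWARE, nullptr, 0,
                      nullptr, 0, D3D11_SDK_VERSION,
                      &device, &level, &immediate);

    const int workerCount = 4; // hypothetical; the game reportedly spawns more
    std::vector<ID3D11CommandList*> commandLists(workerCount, nullptr);
    std::vector<std::thread> workers;

    for (int i = 0; i < workerCount; ++i)
    {
        workers.emplace_back([&, i]
        {
            // Each worker gets its own deferred context and records into it.
            ID3D11DeviceContext* deferred = nullptr;
            device->CreateDeferredContext(0, &deferred);

            // ... record draw calls, state changes, etc. here ...

            // Close the recording into a command list for later replay.
            deferred->FinishCommandList(FALSE, &commandLists[i]);
            deferred->Release();
        });
    }
    for (auto& t : workers) t.join();

    // The immediate context replays everything in order on one thread.
    for (auto* cl : commandLists)
    {
        immediate->ExecuteCommandList(cl, FALSE);
        cl->Release();
    }

    immediate->Release();
    device->Release();
    return 0;
}
[/CODE]

The replay on the immediate context is single-threaded, which is a big part of why one core (usually CPU0, the render thread) ends up pegged even when the recording work is spread across the others.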
Then there's SSDs, which really reduce the I/O bottleneck. Modern M.2 NVMe drives can move data at gigabytes per second, and at that point software can actually see an increase in hardware demand since data is now being fed that much faster to and from memory and the rest of the system, whereas a HDD at some 100 MB/s under perfect circumstances, with frequent dips to 20 - 40 MB/s, is effectively throttling how quickly that pressure builds up. How much this matters depends entirely on the game or software, and overall an SSD will improve many other things, from the OS itself to initial start and load times and more. (Final Fantasy XV comes up as an example often here: it really reduces load times but also triggers stuttering and stalls even more often, plus there's that possibility of a resource leak or memory usage growing over time without being freed properly, so it's a bit of a worst case in some situations. With the amount of objects and other data handled by Origins and Odyssey they also show some impact from this, but it's not going to stutter constantly just because the rest of the system is falling behind somewhat... until it's city time at least, heh, those are just not fun places for the hardware.)

Then there's the lovely little mess of whatever the display driver "compatibility" profiles and numerous more or less outright hacks might be doing, OS issues, and other software saying hello (Steam and Uplay overlays, for example). It's quite a complex little mess going on under the hood. You can use various tools and overlays to monitor it, but you'll usually just feel how performance or frame time fluctuates a bit as the game struggles in some areas, because there's so much going on in them.

There have been comprehensive performance tests for the game, but I don't know if they go much further than comparing average frame rate and frame time. Going into multi-threading and CPU behaviour is a bit complex, and then there are vendor differences between AMD and Intel, and GPU-wise AMD and NVIDIA also handle certain things slightly differently. (Must be lovely for low-level API usage with D3D12 and Vulkan, even with extensions on Vulkan's side helping draw the most from the hardware.)

Lengthy post, but yeah, it's a demanding title alright. It's probably still a decent indicator of next-gen game and engine demands in a way, though there might be certain areas where the engine just isn't doing a great job, or it's not quite as scalable on older hardware, especially CPU-wise, where even at minimum settings it still puts a high workload on the processor and wants a good number of available cores. The GPU still shows immense gains from reducing things like the volumetric clouds and shadows, but that raises the framerate, and at worst you now see a wider range as it fluctuates or dips whenever it runs into some problem or bottleneck. Going from 60 to 30 or lower periodically is not very fun, and checking the frame time and seeing it spike way above normal can also happen, although even without measuring it the user would still notice the hitching and stalls, even if they last less than a second, especially when they repeat frequently as the game loads more data.
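On the frame time side, this is roughly what the overlays are doing under the hood: time the gap between presented frames and flag anything well above the target budget. Just a bare-bones illustration; the 60 FPS budget and the 2x spike threshold are numbers I picked for the example, not from any particular tool.

[CODE]
// Rough sketch of overlay-style frame-time monitoring: measure the time
// between frames and report spikes well above the ~16.7 ms budget at 60 FPS.
#include <chrono>
#include <cstdio>

int main()
{
    using clock = std::chrono::steady_clock;
    const double targetMs = 1000.0 / 60.0;   // ~16.7 ms per frame at 60 FPS
    auto previous = clock::now();

    for (int frame = 0; frame < 1000; ++frame)
    {
        // ... simulate / render / present the frame here ...

        auto now = clock::now();
        double frameMs =
            std::chrono::duration<double, std::milli>(now - previous).count();
        previous = now;

        // A spike to several times the budget is what the user feels as a
        // hitch or stall, even if the average FPS still looks fine.
        if (frameMs > targetMs * 2.0)
            std::printf("frame %d spiked: %.1f ms\n", frame, frameMs);
    }
    return 0;
}
[/CODE]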
This should happen on consoles too, including Xbox One X and PS4 Pro, but with the 30 FPS cap the dips aren't as severe, though I don't know if there are tools, software or even hardware to properly measure it in more detail there. Framerate itself might be possible, and maybe a frame time graph estimated via capture hardware or similar. Origins and Odyssey are certainly pushing AnvilNext and the systems they run on, that's pretty clear. Good visuals and all, but there might be some areas the engine isn't handling entirely optimally, which comes into play especially on PC given how long quad cores dominated as the affordable option, until Intel started offering six-core i5s and AMD with Zen brought more competition. Not everyone is going to upgrade immediately either, and hardware pricing isn't the best, to name just one factor. Looking forward to seeing what Italy will bring, but if they're targeting 2020 and possibly the next gen of console hardware, it might be quite a step up even from these two games in terms of hardware demand. (Possibly, we shall see, and there might be other additions to the Anvil engine too, or improvements to existing tech.)

EDIT: Yeah, that's a lot of text. Wasn't quite planning on an A4 page or several when writing this reply, it just kinda happened as I kept adding and trying to cover more and more.