Haven't cards provided significantly better features and image quality since DX7? I mean, DX 8.0 introduced shaders - a huge game changer in achievable IQ. 8.1 improved on it a lot again. DX 9.0 a/b/c moved that bar a lot. DX10 brought unlimited shader length, new improved transparency, and the unified shader model (no more separate PS and VS). DX11 added Compute Shaders and Tessellation. And then DX12, with all those feature level revisions passed down to DX11.

And on top of that: in DX7 times you had GPUs with 32/64MB of VRAM, and over the years that was quite often increased by a factor higher than 2. That extra memory was used to improve texture IQ and other details, like adding multiple buffers for shadows, lighting, reflections, ... And GPUs improved drastically to actually handle several magnitudes more data per second. This aspect has not changed much in recent years at all - I mean, there was the HD 7970 with 6GB of VRAM, the R9 290 with 8GB of VRAM.

In the last 3 years you were getting those improved features, but GPUs themselves did not really improve in the number of architectural blocks (increasing brute force). Did you know that the Maxwell GTX 980 Ti has 50% more ROPs than the RTX 2080? The 2080 compensates for it with a ~70% higher clock. The number of TMUs on the RTX 2080 is just 4.5% higher than the 980 Ti has. And the rest of the improvement simply comes from clock. nVidia did not really design a bigger GPU, they reaped clock improvements.

So no, those features are poop in comparison to delivering 32x AF. We have sat on 16x AF for so long that at 1080p texture IQ no longer improves as you increase texture detail. Yet the texture detail being delivered still leaves much to be desired. If you render the image at 1440p or 4K and downsample it to 1080p, you get the texture IQ that 32x AF would give. But 32x AF in HW would cost you only 5~8% of fps (if even that), while downsampling from 1440p/4K costs you around 25/60% of fps.
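Just to put numbers on that ROP/TMU/clock comparison, here is a quick back-of-envelope script. The unit counts (96 vs 64 ROPs, 176 vs 184 TMUs) are from public spec sheets; the boost clocks (~1075 MHz for the 980 Ti, ~1800 MHz for the 2080) are my rough assumptions, so treat the "~70% higher clock" figure as approximate:

```python
# Back-of-envelope check of the "clock did the work" claim.
# ROP/TMU counts are from public spec sheets; the boost clocks
# below are approximate assumptions, not official figures.

gtx_980ti = {"rops": 96, "tmus": 176, "clock_mhz": 1075}
rtx_2080  = {"rops": 64, "tmus": 184, "clock_mhz": 1800}

def pct_more(new, old):
    """Percent increase of new over old."""
    return (new / old - 1) * 100

print(f"ROPs: 980 Ti has {pct_more(96, 64):.0f}% more than the 2080")
print(f"TMUs: 2080 has {pct_more(184, 176):.1f}% more than the 980 Ti")
print(f"Clock: 2080 runs {pct_more(1800, 1075):.0f}% higher (assumed boost clocks)")

# Effective texel fill rate (TMUs * clock), in GTexel/s - this is where
# almost all of Turing's gain over Maxwell shows up:
for name, gpu in [("980 Ti", gtx_980ti), ("2080", rtx_2080)]:
    print(f"{name}: {gpu['tmus'] * gpu['clock_mhz'] / 1000:.0f} GTexel/s")
```

With these assumed clocks the 2080's texel rate lands roughly 75% above the 980 Ti's, and nearly all of that delta is clock, which is the point being made above.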
You may not realize it as much, but the number of new features and their actual impact on IQ is almost nothing in comparison to previous generations. The GPUs delivering DX8/8.1 made really huge improvements in IQ and in the amount of performance they added! Why? Look at the resolutions people played games at. What resolutions were used in the times of the last DX7 GPUs like the GF4 MX (a rebrand)? 800x600 and 1024x768. Yet right after, you got the GF4 Ti and Radeon 9x00, and those were good 1280x1024 cards. So no, having practically the same performance per $ as the last generation and just bringing a few new features does not cut it. Especially since the RTX 20x0 generation comes 2.5 years after the GTX 10x0. People simply expect more, as they should. The only real improvement RTX cards have is FP16 performance.