The average user doesn't need NVMe; they'll never see enough of a difference between SATA 3 and NVMe to justify the extra cost. SSDs are still more expensive than they should be, and NVMe adds even more on top. If it cost less, it wouldn't matter. So I'm not really sure how it's going to replace SATA 3 - they've been saying SSDs are going to replace HDDs for years, but price and capacity are still big reasons HDDs are still used. I switched my OS drive from HDD to SSD years back and that was a huge jump in performance. I just recently changed my data/game drive from HDD to SSD and I see almost no difference between the two, at least not as far as most of my games go.
It's not just access time, because you still have to issue the commands to read the files one after the other. With NVMe, I think the OS can issue one command to move *this many* files and start moving them all at once instead of sequentially. NVMe drives can have multiple queues, and each queue has more depth.
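To make the idea concrete, here's a minimal sketch (not real NVMe queue handling, just an analogy at the file level) of the difference between waiting for each read to finish before issuing the next one and keeping many reads in flight at once. All file names and sizes are made up for the demo.

```python
import concurrent.futures
import os
import tempfile

def make_files(directory, count=8, size=4096):
    # Create some throwaway files to read back (demo data only).
    paths = []
    for i in range(count):
        p = os.path.join(directory, f"file{i}.bin")
        with open(p, "wb") as f:
            f.write(os.urandom(size))
        paths.append(p)
    return paths

def read_file(path):
    with open(path, "rb") as f:
        return len(f.read())

def read_sequential(paths):
    # One command at a time: wait for each read before issuing the next,
    # like a single shallow AHCI-style queue.
    return [read_file(p) for p in paths]

def read_parallel(paths, workers=8):
    # Many reads in flight at once, loosely like deep NVMe queues that
    # the device can service in parallel.
    with concurrent.futures.ThreadPoolExecutor(max_workers=workers) as ex:
        return list(ex.map(read_file, paths))

if __name__ == "__main__":
    with tempfile.TemporaryDirectory() as d:
        paths = make_files(d)
        # Both strategies return the same data; only the issue pattern differs.
        assert read_sequential(paths) == read_parallel(paths)
```

On a fast NVMe drive the parallel version is where the extra queues pay off; on SATA the device can't take advantage of the deeper pipeline anyway.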
I have my 2080 Ti in x8 just so I could connect my Optane directly to the CPU instead of going through the chipset, and there is no difference. Also, we have more PCIe lanes now - I have two x4 PCIe NVMe drives installed and space for one more. I did that after reading these benchmarks: https://www.techpowerup.com/reviews/NVIDIA/GeForce_RTX_2080_Ti_PCI-Express_Scaling/3.html Basically there is no real-world difference, especially at 4K, and especially if you have a 60 Hz monitor.
Did you even look at what you posted before making that statement? First of all, they only tested one bandwidth-heavy game - AssCreed Origins - where it makes a 5% difference to average FPS. But what you need to keep in mind is that what bandwidth limitations affect most is the minimum FPS, so with limited PCIe bandwidth you will see bigger drops in FPS. It's even more apparent in a game like Witcher 3, which is very bandwidth-heavy. Second, have a look at what happens in Civilization - the reason for it is that bandwidth requirements increase drastically as resolution increases. I will state what I said again: I would personally never sacrifice actual game performance for slightly faster load times.
I bet you rushed to reply without reading my post to the end. I repeat: basically there is no real-world difference, especially at 4K, and especially if you have a 60 Hz monitor. Why do you care if it does 115 FPS instead of 120 FPS? I don't; I game on an OLED TV and it's 60 Hz. Based on that chart, even PCIe 3.0 x2 (they tested PCIe 2.0 at x4 and got 78 FPS) with a 2080 Ti is playable at 4K. I mean, I won't use it and it's unrealistic, but PCIe 3.0 x8 is far above that. Only games that reach super high frame rates are hit; regular games that do below 80 FPS are identical. Their last graph says the difference is 2% between x8 and x16, and minimum FPS are not affected by this at all. Here is the Titan X tested: https://www.pugetsystems.com/labs/articles/Titan-X-Performance-PCI-E-3-0-x8-vs-x16-851/
There absolutely is a difference. Assuming your rig can pull 4K 120 FPS but you only have a 60 Hz monitor, a smart person would be using downsampling, which increases bandwidth requirements even further, meaning an even bigger bandwidth limitation posed by PCIe. That test you link to again tests no bandwidth-heavy games... I just tested Witcher 3, and indeed I do see fairly significant differences between 2.0 x16 (which has the exact same bandwidth as 3.0 x8) and 3.0 x16: an 8.5% difference in FPS, and that's just in a static scene - during heavy action, the difference is even greater.
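For anyone questioning the "2.0 x16 equals 3.0 x8" claim, the theoretical numbers back it up. A quick back-of-the-envelope calculation (per-direction bandwidth, ignoring everything except line encoding):

```python
def pcie_bandwidth_gb_s(gen, lanes):
    # Per-lane transfer rate (GT/s) and line-encoding efficiency:
    # PCIe 2.0 uses 8b/10b encoding, PCIe 3.0 uses 128b/130b.
    specs = {2: (5.0, 8 / 10), 3: (8.0, 128 / 130)}
    rate, eff = specs[gen]
    return rate * eff * lanes / 8  # divide by 8: gigabits -> gigabytes

print(pcie_bandwidth_gb_s(2, 16))  # 8.0 GB/s
print(pcie_bandwidth_gb_s(3, 8))   # ~7.88 GB/s
```

So PCIe 2.0 x16 tops out at 8.0 GB/s and PCIe 3.0 x8 at about 7.88 GB/s, which is why the two configurations benchmark essentially identically.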
Why would you say that when there is a factual performance hit at high resolutions using top-tier GPUs at x8 vs x16? Why is this even being disputed?
Yes, and the test by TechPowerUp shows that the games that use a lot of bandwidth do get limited by a lack of PCIe bandwidth.
Care to share the link? GN tested it very recently and there was no difference at all. Hopefully you're not talking about the 2010 article that featured GPUs like the GTX 280...
Problem is, most motherboards only have one M.2 slot. I migrated from a SATA SSD to an M.2 drive, and I can only plug one M.2 drive into the mobo.
TPU's results show x16 consistently providing higher numbers, even when there's no apparent limitation, so there may be other factors influencing the results. Of course, that doesn't mean there isn't a bandwidth limitation, just that those particular results can't be read as anything other than margin of error. If they ran the same tests with a 2080, the comparison could provide more insight into why games like Hellblade show such a difference despite not being very VRAM-intensive, or why Wolfenstein only saturates x4 (FPS) for a 2080 but requires x16 for a 25%-faster 2080 Ti.
Huh? You mean the 2% difference? That's basically margin of error and can come down to clocks or the run itself. Looks like you can't read graphs...
This is going to be my last reply to you, as you are obviously trolling. The differences vary a lot depending on how much bandwidth a game uses. Averaged over all resolutions the difference might only be 2%, but that in no way tells the whole story: a game that uses very little GPU bandwidth, such as Battlefield, sees minor differences with PCIe bandwidth, while games such as AssCreed Odyssey and Witcher 3, which use a lot of GPU bandwidth, see significant performance differences depending on the available PCIe bandwidth. But you're probably going to claim that 10% is also just margin of error...