PCIe SSDs slowly replacing SATA3 SSDs

Discussion in 'Frontpage news' started by Hilbert Hagedoorn, Mar 22, 2019.

  1. tsunami231

    tsunami231 Ancient Guru

    Messages:
    14,725
    Likes Received:
    1,855
    GPU:
    EVGA 1070Ti Black
    The common user doesn't need NVMe; they will never see a difference between SATA3 and NVMe big enough to justify the extra cost. SSDs are still more expensive than they should be, and NVMe costs extra on top of that. If it cost less, it wouldn't matter. So I'm not really sure how it's going to replace SATA3. They've been saying SSDs are going to replace HDDs for years, but price and capacity are still big reasons why HDDs are still used.

    I switched my OS drive from HDD to SSD years back and that was a huge jump in performance. I just recently changed my data/game drive from HDD to SSD and I see almost no difference between the two, at least not as far as most of my games go.
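    For anyone curious where the interface actually shows up, here is a rough sketch you could run against your own drive (the test file path is a placeholder, and the OS page cache will inflate repeated runs): large sequential reads are where SATA3's roughly 550 MB/s ceiling appears, while small random reads at queue depth 1 sit far below either interface's limit, which is closer to how games actually load.

    Code:
import os, random, time

PATH = "testfile.bin"   # placeholder: point this at a large existing file
BLOCK = 1024 * 1024     # 1 MiB sequential chunks
SMALL = 4096            # 4 KiB random reads

def seq_read_mb_s(path):
    # Sequential read: this is where SATA3's ~550 MB/s ceiling shows,
    # and where NVMe drives pull far ahead.
    total, start = 0, time.perf_counter()
    with open(path, "rb", buffering=0) as f:
        while chunk := f.read(BLOCK):
            total += len(chunk)
    return total / (time.perf_counter() - start) / 1e6

def rand_read_mb_s(path, count=20000):
    # Small random reads, one at a time: both SATA and NVMe SSDs land far
    # below their interface limit here.
    size = os.path.getsize(path)
    total, start = 0, time.perf_counter()
    with open(path, "rb", buffering=0) as f:
        for _ in range(count):
            f.seek(random.randrange(0, size - SMALL))
            total += len(f.read(SMALL))
    return total / (time.perf_counter() - start) / 1e6

print(f"sequential: {seq_read_mb_s(PATH):.0f} MB/s")
print(f"random 4K : {rand_read_mb_s(PATH):.0f} MB/s")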
     
  2. Glidefan

    Glidefan Don Booze Staff Member

    Messages:
    12,481
    Likes Received:
    51
    GPU:
    GTX 1070 | 8600M GS
    It's not just access time, because over SATA you still have to issue the commands to read files one after the other.
    With NVMe, I think the OS can issue one batch of commands to move *this many* files and start moving them all at once, instead of sequentially.
    NVMe drives can have multiple queues, and each queue has much more depth.
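    A minimal sketch of that idea, assuming a hypothetical folder of asset files: the only difference between the two functions is how many read requests are in flight at once, which is what NVMe's command queues (up to 64K queues of 64K commands each, versus AHCI's single 32-command queue) are built to absorb.

    Code:
from concurrent.futures import ThreadPoolExecutor
from pathlib import Path
import time

# Placeholder folder full of files to read; swap in any real directory.
FILES = list(Path("game_assets").glob("*.bin"))

def read_all_sequential():
    # One request outstanding at a time: effective queue depth stays at 1.
    return sum(len(p.read_bytes()) for p in FILES)

def read_all_parallel(workers=32):
    # Many requests in flight at once, so the drive can service them
    # from its deep command queues concurrently.
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return sum(pool.map(lambda p: len(p.read_bytes()), FILES))

for name, fn in (("sequential", read_all_sequential), ("parallel", read_all_parallel)):
    start = time.perf_counter()
    total = fn()
    print(f"{name}: {total / 1e6:.0f} MB in {time.perf_counter() - start:.2f} s")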
     
  3. The Goose

    The Goose Ancient Guru

    Messages:
    3,057
    Likes Received:
    375
    GPU:
    MSI Rtx3080 SuprimX
    Same way you migrated to M.2.
     
  4. MegaFalloutFan

    MegaFalloutFan Maha Guru

    Messages:
    1,048
    Likes Received:
    203
    GPU:
    RTX4090 24Gb
    I have my 2080 Ti running at x8 just so I could connect my Optane directly to the CPU instead of going through the chipset, and there is no difference.
    Also, we have more PCIe lanes now: I have two x4 PCIe NVMe drives installed and have space for one more.

    I did that after reading these benchmarks
    https://www.techpowerup.com/reviews/NVIDIA/GeForce_RTX_2080_Ti_PCI-Express_Scaling/3.html

    Basically there is no real-world difference, especially at 4K and especially if you have a 60 Hz monitor.
     

  5. Dragam1337

    Dragam1337 Ancient Guru

    Messages:
    5,535
    Likes Received:
    3,581
    GPU:
    RTX 4090 Gaming OC
    Did you even look at what you posted, before making that statement?

    First of all, they only tested one bandwidth-heavy game, Assassin's Creed Origins, where it makes a 5% difference to average fps. But what you need to keep in mind is that bandwidth limitations affect minimum fps the most, so with limited PCIe bandwidth you will see bigger drops in fps. It's even more apparent in a game like Witcher 3, which is very bandwidth-heavy.
    Second, have a look at what happens in Civilization: the reason for it is that bandwidth requirements increase drastically as resolution increases.

    I will state what I said again: I would personally never sacrifice actual game performance for slightly faster load times.
     
    Last edited: Mar 24, 2019
  6. MegaFalloutFan

    MegaFalloutFan Maha Guru

    Messages:
    1,048
    Likes Received:
    203
    GPU:
    RTX4090 24Gb
    I bet you rushed to reply without reading my post to the end :(
    I repeat: basically there is no real-world difference, especially at 4K and especially if you have a 60 Hz monitor.

    Why do you care if it does 115 fps instead of 120 fps? I don't; I game on an OLED TV and it's 60 Hz.
    Based on that chart, even PCIe 3.0 x2 [they tested PCIe 2.0 at x4 and got 78 fps] with a 2080 Ti is playable at 4K.
    I mean, I won't use it and it's unrealistic, but PCIe 3.0 x8 is far above that.

    Only games that hit super-high frame rates are affected; regular games that run below 80 fps are identical.
    Their last graph says the difference between x8 and x16 is 2%.

    Minimum fps is not affected by this at all.

    Here is the Titan X tested:
    https://www.pugetsystems.com/labs/articles/Titan-X-Performance-PCI-E-3-0-x8-vs-x16-851/
     
  7. Dragam1337

    Dragam1337 Ancient Guru

    Messages:
    5,535
    Likes Received:
    3,581
    GPU:
    RTX 4090 Gaming OC
    There absolutely is a difference. Assuming your rig can pull 4K at 120 fps but you only have a 60 Hz monitor, a smart person would be using downsampling, which increases bandwidth requirements even further, meaning an even bigger bandwidth limitation posed by PCIe.

    That test you link to, again, tests no bandwidth-heavy games...

    I just tested Witcher 3, and indeed, I do see fairly significant differences.

    2.0 x16 (which has the exact same bandwidth as 3.0 x8)

    [screenshot]

    3.0 x16

    [screenshot]

    An 8.5% difference in fps, and that is just in a static scene... during heavy action, the difference is even greater.
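    For reference on the lane math, a quick back-of-the-envelope calculation using the theoretical per-lane rates after line encoding (other protocol overhead ignored) shows why 2.0 x16 and 3.0 x8 are effectively the same link, and why 3.0 x16 doubles it:

    Code:
# PCIe 2.0: 5 GT/s per lane, 8b/10b encoding    -> 0.5 GB/s per lane
# PCIe 3.0: 8 GT/s per lane, 128b/130b encoding -> ~0.985 GB/s per lane
def pcie_gb_s(gen, lanes):
    rate_gt_s, efficiency = {2: (5.0, 8 / 10), 3: (8.0, 128 / 130)}[gen]
    return rate_gt_s * efficiency / 8 * lanes

for gen, lanes in [(2, 16), (3, 8), (3, 16)]:
    print(f"PCIe {gen}.0 x{lanes}: {pcie_gb_s(gen, lanes):.1f} GB/s")
# -> PCIe 2.0 x16: 8.0 GB/s, PCIe 3.0 x8: 7.9 GB/s, PCIe 3.0 x16: 15.8 GB/s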
     
  8. HardwareCaps

    HardwareCaps Guest

    Messages:
    452
    Likes Received:
    154
    GPU:
    x
    Seems like a software issue more than a PCIe limit.
     
  9. HeavyHemi

    HeavyHemi Guest

    Messages:
    6,952
    Likes Received:
    960
    GPU:
    GTX1080Ti
    Why would you say that when there is a factual performance hit at high resolutions using top-tier GPUs at x8 vs x16? Why is this even being disputed?
     
    Dragam1337 likes this.
  10. HardwareCaps

    HardwareCaps Guest

    Messages:
    452
    Likes Received:
    154
    GPU:
    x
    Because it was tested already.
     

  11. Dragam1337

    Dragam1337 Ancient Guru

    Messages:
    5,535
    Likes Received:
    3,581
    GPU:
    RTX 4090 Gaming OC
    Yes, and the test by TechPowerUp shows that the games that use a lot of bandwidth do get limited by a lack of PCIe bandwidth.
     
  12. HardwareCaps

    HardwareCaps Guest

    Messages:
    452
    Likes Received:
    154
    GPU:
    x
    Care to show the link? GN tested it very recently and there was no difference at all.

    Hopefully you're not talking about the 2010 article which featured GPUs like the GTX 280...
     
  13. Dragam1337

    Dragam1337 Ancient Guru

    Messages:
    5,535
    Likes Received:
    3,581
    GPU:
    RTX 4090 Gaming OC
    It is already posted in this thread...
     
  14. Richard Nutman

    Richard Nutman Master Guru

    Messages:
    268
    Likes Received:
    121
    GPU:
    Sapphire 7800XT
    Problem is, most motherboards only have one M.2 slot.

    I migrated from a SATA SSD to an M.2 drive. I can only plug one M.2 drive into my mobo.
     
  15. BlackZero

    BlackZero Guest

    TPU results show x16 consistently providing higher numbers, even when there’s no apparent limitation. So, there may be other factors influencing the results.

    Of course, that doesn't mean there isn't a bandwidth limitation, just that those particular results can't be distinguished from margin of error.

    If they ran the same tests with a 2080, the comparison could provide more insight into why games like Hellblade show such a difference despite not being very VRAM-intensive, or why Wolfenstein only saturates x4 (in fps) for a 2080 but requires x16 for a 25% faster 2080 Ti.
     

  16. HardwareCaps

    HardwareCaps Guest

    Messages:
    452
    Likes Received:
    154
    GPU:
    x
    That's how you avoid facts.... well done.
     
  17. Dragam1337

    Dragam1337 Ancient Guru

    Messages:
    5,535
    Likes Received:
    3,581
    GPU:
    RTX 4090 Gaming OC
    Are you really unable to scroll up? It's literally on the same page... talk about lazy...
     
  18. HardwareCaps

    HardwareCaps Guest

    Messages:
    452
    Likes Received:
    154
    GPU:
    x
    Huh? You mean the 2% difference? That's basically margin of error and can come down to clocks or the run itself.

    Looks like you can't read graphs...

    [graph]
     
    MegaFalloutFan likes this.
  19. HardwareCaps

    HardwareCaps Guest

    Messages:
    452
    Likes Received:
    154
    GPU:
    x
    To add, the GN test shows a similar trend of a basically margin-of-error difference:

    [graphs]
     
    MegaFalloutFan likes this.
  20. Dragam1337

    Dragam1337 Ancient Guru

    Messages:
    5,535
    Likes Received:
    3,581
    GPU:
    RTX 4090 Gaming OC
    This is going to be my last reply to you, as you are obviously trolling.

    The differences vary a lot depending on how much bandwidth a game uses. Overall, the difference across all resolutions might only be 2%, but that in no way tells the whole story: a game that uses very little GPU bandwidth, such as Battlefield, sees minor differences with PCIe bandwidth, while games such as Assassin's Creed Odyssey and Witcher 3, which use a lot of GPU bandwidth, see significant differences in performance depending on the amount of available PCIe bandwidth.

    [graph]

    But you are probably going to claim that 10% is also just margin of error...
     
