PCIe 6.0 Specification to be finalized in 2021 and 4 times faster than PCIe 4.0

Discussion in 'Frontpage news' started by Hilbert Hagedoorn, Jun 11, 2020.

  1. Fox2232

    Fox2232 Guest

    Messages:
    11,808
    Likes Received:
    3,371
    GPU:
    6900XT+AW@240Hz
    Guys, I think your fight overlooks the simple fact of artificial obsolescence.
    Let's say that PCIe 6.0 x4 will provide sufficient bandwidth between CPU and GPU to handle memory transfers.
    But what about all those CPU+MB combinations that are going to be stuck at PCIe 3.0/4.0?
    Their CPUs will be perfectly fine for gaming on GPUs that are not starved for PCIe bandwidth.
    And I am sure we agree that even PCIe 4.0 x4 is not enough for a GPU today.
    In the future, VRAM demands will go up. Mainstream display resolutions are climbing slowly, but they are climbing, and DXR is said to require more VRAM too.

    So, what manufacturer would go and save pennies per card, only to lose all the sales to owners of older CPUs and MBs?
     
  2. DmitryKo

    DmitryKo Master Guru

    Messages:
    447
    Likes Received:
    159
    GPU:
    ASRock RX 7800 XT
    We're not overlooking anything. Backward/forward compatibility is the primary reason for the continued existence of PCIe x16 links and slots - this point was made a dozen times in this thread, to no avail.

    https://forums.guru3d.com/posts/5798357
    https://forums.guru3d.com/posts/5798417
    https://forums.guru3d.com/posts/5799203
    https://forums.guru3d.com/posts/5799863
    etc.
     
    Last edited: Jul 1, 2020
  3. schmidtbag

    schmidtbag Ancient Guru

    Messages:
    8,016
    Likes Received:
    4,395
    GPU:
    Asrock 7700XT
    Wait... so you insist PCIe 6.0 @ x16 will be necessary, yet you seriously think HBM2 is going to remain relevant by the time that amount of bandwidth is needed?
    Make up your mind.
    It's like a paging file relative to the GPU. But seeing as you take things so literally, this is a waste of time to discuss.
    I never said it was going to overwhelm the GPU, I said it was going to increase load. The lower-level mipmaps (or in some cases, games that just load in lower-res textures) are exactly why such textures don't cause such performance issues. You're arguing for the sake of arguing.
    I still don't get how this relates to the conversation. And you accuse me of intentional fallacies?
    Right, so even in an impossibly best-case scenario where the GPU can use 100% of the PCIe and system memory bandwidth, even a mid-range GPU is going to be severely bottlenecked. Adding more PCIe lanes isn't going to fix the problem. Adding more VRAM fixes the problem.
    And Carfax showed the measured results: they're insignificant, thereby proving my point that you don't need the extra lanes.

    Again with the intentional fallacies? Don't play stupid here. By reducing the total bandwidth that goes over the PCIe bus, you can thereby limit how many lanes you actually need.

    That's because it's facts against your argument, which you just simply can't accept.
    The TL;DR: if a GPU is starved for more memory, it doesn't matter how many PCIe lanes you add, you're going to suffer significant performance losses. If a GPU only needs a little bit of extra memory (like in the benchmark Carfax showed), PCIe 4.0 @ x8 has already proved to be fast enough to offer a minimal performance loss, where adding more lanes most likely wouldn't improve results since the bottleneck isn't the PCIe bus.
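    To put some rough numbers on that, here's a minimal sketch (Python). The 616 GB/s VRAM figure is 2080 Ti-class, the PCIe rates are approximate x16 figures per generation, and the 10% spill fraction is purely an assumption for illustration:

        # Effective memory bandwidth when a fraction of GPU memory traffic has to
        # go over PCIe to system RAM instead of hitting VRAM (harmonic-mean model).
        def effective_bandwidth(vram_gbps, pcie_gbps, spill_fraction):
            return 1.0 / ((1.0 - spill_fraction) / vram_gbps + spill_fraction / pcie_gbps)

        VRAM = 616.0                              # GB/s, 2080 Ti-class card (assumed)
        for pcie in (16.0, 32.0, 63.0):           # ~3.0 x16, 4.0 x16, 5.0 x16
            print(pcie, round(effective_bandwidth(VRAM, pcie, 0.10), 1))
        print("no spill:", VRAM)
        # ~130, ~218, ~328 GB/s vs 616 GB/s: more lanes narrow the gap,
        # but only more VRAM (no spill) removes it.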
    I'll take that as "I don't know what I'm talking about and making accusations where I have no evidence".

    You do realize that you can downscale PCIe, right? Backward compatibility isn't relevant to this discussion, because an x16 card will work in an x8 slot. So your point is "to no avail" because it's moot.
     
  4. Fox2232

    Fox2232 Guest

    Messages:
    11,808
    Likes Received:
    3,371
    GPU:
    6900XT+AW@240Hz
    This discussion should have ended a long time ago. x16 is here to stay. Look, there was that AMD/3DMark demo showing that in a certain scenario PCIe 4.0 x16 has a big advantage over PCIe 3.0 x16.
    While that's a borderline case with no practical use for now, it may become helpful in time.

    But the main issue here comes from the fact that I do not have a sh*tty CPU, and neither do all those people with many-core Intel CPUs stuck on PCIe 3.0.
    We can't expect per-thread CPU performance to double in any reasonable time frame, which is what it would take to make us throw away our perfectly good gaming CPUs.
    This means that by the time PCIe 5.0/6.0 arrive, a large number of people will be stuck on PCIe 3.0 x16. I really want to see the GPU manufacturer who releases a GPU 2~4 times as powerful as the RTX 2080 Ti and cuts PCIe bandwidth for older-generation CPUs down to PCIe 3.0 x4/x8.
    The 5500 XT says otherwise. While its main issue in the PCIe 3.0 vs 4.0 comparison, and between the 4GB and 8GB variants, comes from a lack of VRAM in certain scenarios, it is a damn low-end card and a performance difference between 8GB on PCIe 3.0 and 8GB on PCIe 4.0 has still been measured.

    I have no doubt that the RTX 2080 Ti would benefit from PCIe 4.0. The next generation is expected to have it and, as with each generation, it will be tested.
    The best test for this is the actual frametime during a 360-degree turn. General benchmarks tend to move forward through a tunnel or slowly strafe sideways, but in gameplay it often happens that resources have to be loaded as the player does a quick 180. In that memory-constrained scenario, PCIe 4.0 x8 (equivalent to 3.0 x16) will result in worse frametimes.
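    For what it's worth, here's a minimal sketch of how such a quick-turn run could be summarised (Python; 'frametimes.csv' is a hypothetical one-column log of milliseconds per frame, e.g. exported from a frametime capture tool):

        import statistics

        def summarise(frametimes_ms):
            """Average FPS, 1% low FPS and worst frame from a frametime capture."""
            worst_first = sorted(frametimes_ms, reverse=True)
            one_percent = worst_first[: max(1, len(worst_first) // 100)]
            return {
                "avg fps": 1000.0 / statistics.mean(frametimes_ms),
                "1% low fps": 1000.0 / statistics.mean(one_percent),
                "worst frame (ms)": worst_first[0],
            }

        with open("frametimes.csv") as f:          # hypothetical capture file
            times = [float(line) for line in f if line.strip()]
        print(summarise(times))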
     

  5. schmidtbag

    schmidtbag Ancient Guru

    Messages:
    8,016
    Likes Received:
    4,395
    GPU:
    Asrock 7700XT
    I know it will stay, my point is it doesn't need to.
    The advantages you speak of are because of the differences in PCIe 4.0, not because of 4.0 @ x16.
    Not necessarily (and definitely not when it comes to 6.0; that's too many years away from now). If x16 slots continue to persist, people with good CPUs on a 3.0 platform will be able to take advantage of 4.0 and 5.0 GPUs for a few more years. GPU technology isn't improving quickly enough for 3.0 @ x16 to become a bottleneck.
    Meanwhile, if the other benefits of newer generations of PCIe (such as latency) prove to have a significant advantage, people will just simply upgrade their outdated systems.
    I really don't understand how many times I have to state that my argument is not about PCIe 3.0, or even 4.0. I'm not saying "x16 slots aren't and never were necessary" I'm saying for 5.0 and 6.0 they're no longer necessary. The reason PCIe continues to evolve is because we desperately need more bandwidth per-lane, because that benefits x1, x4, and M.2 slots. If we just simply needed more total bandwidth, servers would have been pushing for more x16 slots years ago. AMD finally delivered that, but by the time they did that, PCIe 3.0 became obsolete.
    Why would it benefit? PCIe runs at the lowest common denominator. Since the 2080 Ti is a 3.0 card, the slot will operate at gen 3.0, thereby negating any of the other benefits 4.0 has to offer.
    If you plug a 2080 Ti into a 4.0 slot and operate it at x8 lanes, you're really getting x8 on gen 3.0. So yeah, you absolutely will lose performance. Though like the article I linked to earlier, the performance difference is minimal (if not negligible) in most cases.
     
  6. Fox2232

    Fox2232 Guest

    Messages:
    11,808
    Likes Received:
    3,371
    GPU:
    6900XT+AW@240Hz
    I wrote almost the same thing at the beginning of this thread. But it still remains true that people will not like artificial obsolescence. When the 3080 Ti comes with PCIe 4.0, it will have x16 wiring, and people will test it in 3.0 vs 4.0 mode. There will be a difference. It may be minor in many cases, but it will be there. At that point everyone will agree that running such a card at PCIe 4.0 x8 is not desirable.
    As PCIe 5.0 comes and there are GPUs with it, the same will happen again, until some generation proves that even the mightiest of GPUs no longer needs x16. Then people with a CPU+MB supporting such a PCIe revision will be OK with x8 wiring on cards. And only after that will they be OK with motherboards having x8 wiring in PCIe slots.

    But GPU evolution is fast too; after all, we remained on PCIe 3.0 for a very long time.
    I'll change the wording: if the RTX 2080 Ti had PCIe 4.0, it would benefit from it. It would be a better card than the existing RTX 2080 Ti with PCIe 3.0.
     
  7. schmidtbag

    schmidtbag Ancient Guru

    Messages:
    8,016
    Likes Received:
    4,395
    GPU:
    Asrock 7700XT
    It's not obsoletion if you don't lose anything. I've argued early on that axing electrical x16 slots would actually improve things, since there's less EMI to deal with and more lanes that can be used for other slots. Or motherboards can be made cheaper, without really any loss to the consumer.
    As for a 3080Ti on 3.0, yes, there will be a difference, even on an x16 slot. This is because 4.0 has other benefits besides more bandwidth. Run the GPU on 4.0 @ x8 and I am quite confident the performance loss will be even less apparent than the 2080Ti on 3.0 @ x8. Synthetic tests will obviously show a difference but real-world results ought to see little to no performance loss. To put it another way: the 2080 Ti is nowhere near saturating 3.0 @ x16 (it's barely saturating x8). The 3080Ti is a healthy upgrade, but an incremental one. PCIe 4.0 offers double the bandwidth per-lane, so, a 3080Ti is very unlikely to saturate 3.0 @ x16 and therefore won't saturate 4.0 @ x8 either.
    Based on results I've seen over the years of benchmarking different slots, there doesn't seem to be much correlation between how much bandwidth is used and how powerful your GPU is. Beyond loading in new assets, it seems frame rate and the API is what affects frame-by-frame PCIe bandwidth the most.
    Right... hence my point from the very beginning.
    I'd argue it's pretty slow. VR and 4K have been widely accessible for years and we still don't have a reasonably priced GPU that can readily handle 4K @ 60FPS or an all-around pleasant VR experience. A lot of people around here can't settle for 60FPS. Maxwell and Pascal were both major leaps in performance but we haven't seen something like that in a while.
    It would, for the same reason the 3080Ti will benefit: because 4.0 has other advantages besides more bandwidth. To reiterate: the benefit is PCIe 4.0, not 4.0 @ x16.
     
    Last edited: Jul 1, 2020
  8. Astyanax

    Astyanax Ancient Guru

    Messages:
    17,037
    Likes Received:
    7,378
    GPU:
    GTX 1080ti
    There will never be a point to axing x16 slots.
     
    Alessio1989 and DmitryKo like this.
  9. Fox2232

    Fox2232 Guest

    Messages:
    11,808
    Likes Received:
    3,371
    GPU:
    6900XT+AW@240Hz
    Sure, the "reasonably priced" argument is true. But those powerful GPUs of the future will need to move data from system memory to graphics memory from time to time. That's when practically any GPU saturates all available PCIe bandwidth.

    And I do not believe people will jump ship from a 9900K or better CPU with PCIe 3.0 just because a GPU manufacturer decided to go for PCIe x8 wiring.
    Take a top GPU from two generations in the future; it may be 80% faster than a 2080 Ti. Once put on PCIe 3.0 x16 it may be only 75% faster. But then what if it is reduced to PCIe 3.0 x8 and gives only a 65% benefit?
    I can see people buying the first example GPU, because their system was the limitation and the performance upgrade is reasonable. But the second case, where the system underperforms even more because the manufacturer went for x8, I would have trouble with. If the GPU underperformed even by 10% on PCIe 3.0 x16, it would be acceptable, because that kind of expensive GPU had everything it could have. But I would feel cheaped out on by a GPU with x8 wiring.

    If we had CPUs with something like 60% higher performance per core/thread, it would be reasonable to upgrade. But if Intel/AMD deliver only some 20~25% above what exists now in top CPUs, those people will really feel that they are being pushed to throw away perfectly powerful CPUs which outperform mainstream PCIe 5.0/6.0 systems in every aspect except gaming, purely due to x8 wiring.

    While I do not expect to care much myself, as I plan to turn my current system into a workstation/server when I go for AM5, I do realize that the target audience for those top-end GPUs is people whose CPUs are not going to be limiting factors for gaming at 4K/VR. And I can write it 100 times over and over again: when such a person looks at a $1000+ GPU, he/she will expect PCIe x16, even if it is not needed on the newest generation of CPU+MBs.
     
  10. schmidtbag

    schmidtbag Ancient Guru

    Messages:
    8,016
    Likes Received:
    4,395
    GPU:
    Asrock 7700XT
    We're talking tens of gigabytes per second here. PCIe is not a bottleneck in such a situation.
    Nor should they, yet. I'm only arguing the future slots don't need x16. The cards themselves may need to keep x16 for a little while longer, since PCIe 3.0 is still relevant, despite its obsolescence. Though, by the time PCIe 6.0 comes out, the cards themselves probably won't need x16 slots either.
    All that being said, I agree with your points about buying something like a 3080Ti that only has x8 lanes. For now, such a GPU is better off with x16.
    Well yeah, but even Sandy Bridge users can still play most modern games at good frame rates, so, when will they have a reason to upgrade? The gaming industry hasn't evolved much in the past decade, in terms of how the CPU is utilized. Hopefully we see a shift in games caring more about threads than Hz.
    Remember, even where a 2080 Ti loses performance on x8, the framerate was already in the hundreds. Sure, nobody wants to needlessly sacrifice performance, but if you're running a 3080Ti on PCIe 3.0, even at x16, you're sacrificing performance because you no longer get the latency benefit. If you want to unleash the full potential, you need a new system regardless.
    And such a person is also willing to replace their 9900K for the sake of an imperceptible performance improvement. For anyone buying a lesser GPU, the framerate will drop, as will the need for more bandwidth.
     
    Fox2232 likes this.

  11. DmitryKo

    DmitryKo Master Guru

    Messages:
    447
    Likes Received:
    159
    GPU:
    ASRock RX 7800 XT
    Yes. The cost/performance ratio is different enough.
    Doubling the amount of high-performance HBM2 VRAM can double the price of a high-end videocard. Doubling system RAM is a fraction of that cost.

    No, it's not. Far memory can be paged to near memory but it's still accessible at all times with minimal latency, unlike paging file on the external storage.

    It won't 'increase load' because it's the exact same number of texels and pixels as any other low-res texture.

    You argue that x16 is not needed because SSDs will load game data directly into the video memory - that's only possible with expensive PCIe bridges and M.2 slots on the video card.

    They are on the same scale as adding more VRAM.

    You are not "reducing" the bandwidth, you are artificially limiting it to the read performance of the SSD drive, which is ~1.6 orders of magnitude lower than system memory.


    Nope, still doesn't make sense.

    You are proving my point without even realising it.

    Only if you keep repeating the "PCIe lanes don't matter" argument no matter what. Every peripheral is based on PCIe these days, there's little point in saving on x16 slots.
     
    Last edited: Jul 16, 2020
  12. schmidtbag

    schmidtbag Ancient Guru

    Messages:
    8,016
    Likes Received:
    4,395
    GPU:
    Asrock 7700XT
    We're talking many years down the road here. I am sure HBM2 is going to be obsolete by the time PCIe 6.0 becomes mainstream.
    Yeah, and if a GPU warrants HBM2, system RAM is nowhere near fast enough to be compared to.
    Again, you're taking things way too literally. This is pointless.
    Number of texels and pixels are not the only things that determine performance. If I was as wrong about this as you think I am, why else do games use noticeably worse textures for objects in the distance? Remember, we're not talking about AF here.
    Each pixel of the texture has to be calculated for its position in 3D space. This isn't computationally expensive for a modern GPU, but it isn't free either. Each bit of that texture is still taking up the same amount of space in memory regardless of texels or rendered pixels, and even on the blazing fast hardware we have today, a 128x128 texture will take less time to be read from memory than a 4096x4096 texture. So, when you've got hundreds or thousands of objects in the distance being rendered with high-res textures (remember, the only polygons you need to render are the ones in-view), that will take a toll on performance. Not a dramatic one, but a measurable one.
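    As a rough illustration of that cost, here's a minimal sketch (Python) assuming uncompressed RGBA8 textures and a 2080 Ti-class 616 GB/s VRAM bus; real games use compressed formats, so actual sizes are several times smaller:

        def texture_bytes(size, bytes_per_texel=4, with_mips=True):
            """Approximate footprint of a square texture, optionally with its full mip chain."""
            total = 0
            while size >= 1:
                total += size * size * bytes_per_texel
                if not with_mips:
                    break
                size //= 2
            return total

        VRAM_BANDWIDTH = 616e9  # bytes/s, assumed
        for size in (128, 4096):
            b = texture_bytes(size)
            print(f"{size:4d}x{size:<4d}: {b / 2**20:7.2f} MiB, "
                  f"full read ~{b / VRAM_BANDWIDTH * 1e6:6.1f} us")
        # 128x128: ~0.08 MiB, ~0.1 us;  4096x4096: ~85 MiB, ~145 us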
    Not quite. I'm saying that on-card M.2 storage reduces the need for more lanes. Again - we're talking about many years in the future. Today's high-end luxuries are tomorrow's commodities.
    Right, and? Whether you pay to have more RAM or add more PCIe lanes, it doesn't make a difference: as you said, it's the same scale.

    Where are you getting that number from? You keep moving the goal post (and again, you accuse me of argument fallacies?). If a PCIe 5.0 or 6.0 GPU has on-card storage, it's most likely going to be ready to support a M.2 drive of the same generation. You don't need that much bandwidth for frame-by-frame rendering, so even on x8 lanes you still have plenty of bandwidth to read from system memory. And all of this is assuming you paid good money for a reasonably powerful GPU that is somehow cripplingly starved for memory.
    Nope.
    Where did I ever say that? Again with the fallacies.... hypocrite.
     
  13. Richard O Slingerland

    Richard O Slingerland Guest

    Messages:
    1
    Likes Received:
    0
    GPU:
    1080ti
    The AORUS Gen4 AIC adaptor already reaches 15,000 MB/s. Thunderbolt 3 at 40 Gbps drives dual 4K at 120 fps or 8K at 60 fps, so I'm sure Thunderbolt over faster PCIe would benefit.

    Aside from 8K TVs moving fast, I think it has more to do with where the money is: rendering, AI, medical imaging, data crunching. The fact that it exists is a reason someone will find a use for it. They didn't move to PCIe 6.0 just because they want to see if they can finally run CRYSIS at 4K 240fps. Be glad that they are happy to give you any semblance of what you ask for, while bilking you for thousands of dollars so you can meet the minimum requirements for SinVR.

    Anyone know the actual PCIe throughput on a twin 20Gb LAN add-in card, PCIe 3.0 vs 4.0? If things like Frontier's residential 10G internet here in Cali get cheaper, WAY cheaper, streaming will dominate gaming. How many 4K cameras can you live-record at 60 fps on your home security system right now? Try 4K video live in HDR10, then see how many you can set up and keep up with. Not many, if any! I have the Gigabyte Designare 10G with dual Thunderbolt 3 and dual 10G LAN, but it's still PCIe 3.0. Go figure. My 500Gb internet download and upload speeds are 50 MB/s. How about 5G with 1-gig download speeds? If it downloads at 1 gig, why call it 5G when it isn't?
     
    Last edited: Jul 17, 2020
  14. bobblunderton

    bobblunderton Master Guru

    Messages:
    420
    Likes Received:
    199
    GPU:
    EVGA 2070 Super 8gb
    People who say we haven't saturated PCI-E 3.0 x16 yet need to show themselves to the door.
    You'll easily use that much bandwidth if you have a rather high-end GPU and overrun what the VRAM can hold.

    While the only PCI-E 4.0 things we have are SSDs and adapters for them (for bifurcation of a PCI-E 4.0 slot), plus some AMD video cards supporting PCI-E 4.0, even a "lowly" 5500 XT will happily use PCI-E 4.0 x8 to its fullest when paging to system RAM - something very easy to do with modern software or games on a 4GB card today.

    A PCI-E 4.0 x8 slot has the same bandwidth as a PCI-E 3.0 x16 slot (PCI-E 4.0 lanes double in throughput, so half as many of them works out the same, while using only half the lanes you would have needed from a processor capable of only PCI-E 3.0).
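    A quick back-of-the-envelope check of that equivalence (Python; the per-lane figures are approximate effective one-way rates after encoding overhead, not official spec numbers):

        # Approximate effective one-way bandwidth per PCIe lane, in GB/s.
        PER_LANE_GBPS = {"3.0": 0.985, "4.0": 1.969, "5.0": 3.938, "6.0": 7.563}

        def link_bandwidth(gen, lanes):
            """Approximate one-way bandwidth of a PCIe link in GB/s."""
            return PER_LANE_GBPS[gen] * lanes

        print(f"3.0 x16: {link_bandwidth('3.0', 16):5.1f} GB/s")   # ~15.8
        print(f"4.0 x8 : {link_bandwidth('4.0', 8):5.1f} GB/s")    # ~15.8 -- same as 3.0 x16
        print(f"4.0 x16: {link_bandwidth('4.0', 16):5.1f} GB/s")   # ~31.5
        print(f"6.0 x16: {link_bandwidth('6.0', 16):5.1f} GB/s")   # ~121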

    A MONTH ago in this very thread, someone said they're tired of getting short-changed on PCI-E lanes from the processor...
    Simple: buy an AMD Ryzen 3000 non-G series processor plus a 5xx-series chipset and you get twice the PCI-E throughput that was available in previous gens, while even the newest consumer Intel chips are still on PCI-E 3.0 until late fall, when new parts arrive with PCI-E 4.0 from Intel (that actually stick to the spec the PCI-SIG dictates); the Ryzen 3000 series has it already. While you may not use it all now, you'll have it in the future. I myself hated the lane-juggling messes that were the Intel Z-series chipsets after Nehalem, unless you went HEDT. Not wanting to spend HEDT money, I decided to vote with my dollar and get the AMD option. Again, Intel users will be able to have PCI-E 4.0 on the 11xxx-series chips late this fall, on most (possibly all) currently available Z-series 4xx motherboards.
    If the mainstream desktop does not give you what you want, go HEDT, where you have LOADS of lanes to communicate over.

    EDIT: I do content creation. You may recognize my name from the BeamNG Drive (state-of-the-art driving simulator / soft-body vehicle physics sim) forums; I am a hard-core map modder / 3D artist / modeler / texture and sound artist. I regularly make models for things around the map, which can be a few KB for a simple chunk of Jersey barrier or a light post, both common to roads the world over... all the way up to 2.85GB for an AAA-grade studio tunnel I purchased off a pro modeling site, not including the 0.56GB in textures for JUST the tunnel.
    The biggest thing holding us back is VRAM limits. 8GB is a paltry sum when just the highest detail level of the tunnel itself, not including collision meshes or LODs (levels of detail, which get simpler as you go further from the object), is using 3.4GB JUST for the tunnel and its textures!
    No, not kidding, I mean it. Sure, I can make scenes out of many similar models, re-using signs and light posts and re-coloring textures at run-time using shaders to save on video memory there... but when you run out of VRAM your performance will face-plant on PCI-E 3.0 x16.
    So while I have a (seemingly stupid) 2070 Super (due to Radeon driver bugs), which has said paltry 8GB of VRAM and is only PCI-E 3.0 x16 and NOT PCI-E 4.0 like the new AMD Radeon 5000 series and professional-level cards such as the Radeon VII and Instinct models, I can't stress enough how nice it would be, and how much more in-depth I could make a scene, if I wasn't limited to 8GB of VRAM and instead had more like 32~64GB. I regularly run out of VRAM just editing things when I'm doing modeling in Maya, for example - paging to memory stinks, especially when Winblows gets angry and the task bar flicks off and back on now and then because it can't figure out what to do while paging memory like crazy, all while introducing random micro-stutter in any other 3D apps running, including the modeling program itself.
    Embrace PCI-E 4.0; it's one more nail in the coffin of what's preventing us from having ultra-realistic, mega-detailed open worlds. Now we just have to wait for the tech to USE it.
    I figured I'd add this. Sure, I should have gotten a Quadro, but if you think Titans and Ti's are expensive, these Quadro things are almost as expensive as having a wife. After paying out for a 3950X, a cheapo X570 motherboard, NVMe drives, the 2070 Super, a sound card and an expensive top-dollar sound system (gotta have our creature comforts if we're stuck at the PC, right?), I just don't feel like opening my wallet this very minute for a Quadro for some reason - and the beautiful Radeon VII 16GB model was totally out of the question after the AMD driver snafus I dealt with before, though I really would have loved it and it was my first choice if drivers weren't part of the equation.
    That being said, come to the BeamNG Drive forums and see us some time if you fancy a driving sim; I'll be lurking not too far from my Los Injurus City Map mod project as usual.
    --That is all, sorry for the really bad grammar and run-on sentences.
     
    Last edited: Jul 19, 2020
  15. Aura89

    Aura89 Ancient Guru

    Messages:
    8,413
    Likes Received:
    1,483
    GPU:
    -
    Is this thread still going on? My goodness people...
     
    theoneofgod likes this.

  16. TalentX

    TalentX Master Guru

    Messages:
    210
    Likes Received:
    99
    GPU:
    Inno3D RTX4090 AIO
    What's the matter? Let them discuss, that's the good thing about forums after all.
     
  17. DmitryKo

    DmitryKo Master Guru

    Messages:
    447
    Likes Received:
    159
    GPU:
    ASRock RX 7800 XT
    They don't, it's standard mipmapping. Low-res mipmap textures are just not good enough to provide the fine details - anything below 32x32 is going to be blurry with any filtering algorithm you can practically implement with pixel shaders.

    This is the reason why Unreal 5 uses millions of small triangles instead of textures even for large outdoor areas.

    Only if you assume that all polygons in the distance are strictly parallel to the screen plane.

    For distant objects, 128x128 texture looks exactly the same as 4096x4096 texture and takes the same time to filter, because you are using the exact same mip level (unless you crazily twist the scaling factor like it's the year 1996 with 2 MByte videocards, so that each texel is a giant spot on your screen).
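    The mip-selection point can be sketched with the standard LOD formula (level is roughly log2 of the screen-space texel footprint). The footprint values below are hypothetical, just to show both base resolutions ending up at the same sampled mip size for the same distant surface:

        import math

        def sampled_mip(base_size, texels_per_pixel):
            """Mip level chosen for a given texel-to-pixel footprint, clamped to the chain."""
            level = max(0.0, math.log2(texels_per_pixel))
            return min(round(level), int(math.log2(base_size)))

        # Same distant surface: the 4096 texture maps 32x more texels per pixel.
        for base, rho in ((128, 4.0), (4096, 128.0)):
            lvl = sampled_mip(base, rho)
            print(f"base {base:4d}: mip level {lvl}, sampled at {base >> lvl}x{base >> lvl}")
        # Both cases end up sampling a 32x32 mip.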


    It doesn't make sense. By your logic, dedicated VRAM (1 TByte/s or higher) is required and DDR5 system RAM (125 GByte/s) absolutely cannot keep up for frame-by-frame rendering, but (at the same time) PCIe 6.0 M.2 disk at 31.5 GByte/s and system RAM at 63 GByte/s would be perfectly OK.
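    To make that hierarchy concrete, here's a minimal sketch of how long moving a hypothetical 8 GB working set would take over each link at the peak figures quoted above (real transfers are slower):

        # Time to move an 8 GB working set at the peak rate of each link.
        LINKS_GBPS = {
            "GDDR6/HBM2 VRAM (~1 TB/s)":   1000.0,
            "dual-channel DDR5-8000":       125.0,
            "system RAM over PCIe 6.0 x8":   63.0,
            "PCIe 6.0 x4 M.2 SSD":           31.5,
            "PCIe 3.0 x4 NVMe SSD":           3.5,
        }
        WORKING_SET_GB = 8.0
        for name, gbps in LINKS_GBPS.items():
            print(f"{name:30s} {WORKING_SET_GB / gbps * 1000:8.1f} ms")
        # ~8 ms, ~64 ms, ~127 ms, ~254 ms, ~2286 ms respectively.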


    Not sure why you have to repeat it over and over. Nobody ever promised onboard storage for gaming GPUs. This is not how DirectStorage on the Xbox is designed to work either.

    It will become cheaper, so it will move to the mid-range.

    It's fast enough compared to PCIe storage or unavailable HBM2 RAM.

    What's pointless is making bold claims based on broad assumptions.

    Comparing performance of dual-channel DDR5-8000 (~125 GByte/s) to NVMe SSDs (3.5 GByte/s), that's 35.71 times or log10(35.71)=1.55 orders of magnitude faster.
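    That arithmetic checks out (a quick verification in Python):

        import math
        ratio = 125.0 / 3.5                 # DDR5-8000 dual channel vs a PCIe 3.0 x4 NVMe SSD, GB/s
        print(round(ratio, 2))              # 35.71
        print(round(math.log10(ratio), 2))  # 1.55 orders of magnitude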



    Still does not make any sense.

    Fast VRAM is more expensive than system RAM, and PCIe lanes are essentially free.

    Isn't it hilarious how you don't even understand the meaning of the words you are using?
    Take some time to look up the definitions in a dictionary.
     
    Last edited: Jul 25, 2020
  18. schmidtbag

    schmidtbag Ancient Guru

    Messages:
    8,016
    Likes Received:
    4,395
    GPU:
    Asrock 7700XT
    You are impossibly arrogant. By "worse textures" I meant lower resolution. That's what mipmapping does - it uses lower-res textures.
    Right, and water is wet.
    It does make sense, but it won't as long as you keep twisting my logic in absurd ways. That being said, where did I ever imply 1TB/s VRAM is required or that DDR5 can't keep up with frame-by-frame rendering? I said you don't need that much bandwidth to do such a thing, so your misinterpretation is due to your own reading comprehension issues. You are proving to be a real waste of time.
    I would gladly drop it but you keep arguing about it.
    If you insist it's "fast enough" (whatever that means) for that then you don't need the full x16 lanes.
    Most people have the common sense to understand a generalization when they see one.
    Again... why are you comparing a PCIe 3.0 SSD to a system with DDR5? If you're so sure I'm wrong, you wouldn't have to make such ridiculous comparisons.
    You also seem to keep forgetting that you're not going to get all that performance from DDR5 whether you have x16 lanes for PCIe 5.0 or even 6.0. So your comparison grows more moot.
    So... you're willing to cripple your GPU's performance by reading from system memory just so you can save a little bit of cash? Low-end GPUs are not going to demand more than x8 lanes even if they run out of VRAM. That's how it is for PCIe 4.0 and that will only be made more true in the foreseeable future.
    EDIT:
    Also, PCIe lanes aren't "essentially free", especially not when you're talking 16 lanes. They're definitely cheap when you're talking 3.0, but that's to be expected for a roughly decade old standard. The tolerances are more strict for newer versions, and that will be more costly.
    I suggest you do the same and look up the word "irony".
     
    Last edited: Jul 25, 2020
  19. schmidtbag

    schmidtbag Ancient Guru

    Messages:
    8,016
    Likes Received:
    4,395
    GPU:
    Asrock 7700XT
    You are woefully misinformed if you think more PCIe bandwidth is going to fix your problems. They'll just make your problems less severe. What you need is more VRAM; a lot more. Or, perhaps, a more efficient approach to your workload. The whole reason VRAM exists is so you don't have to feed your GPU from system memory, because doing so is cripplingly slow and inefficient.

    Fix the VRAM issue and suddenly, PCIe 3.0 @ x16 won't be saturated. So take your needless aggression out that door with you.

    There's a difference between a cheap GPU like a 4GB 5500XT needing to sip from system memory because it can't fit all game assets in VRAM, vs your extreme scenario where even PCIe 6.0 @ x16 won't be enough bandwidth to help you. The reason I emphasize this is because depending on PCIe to make up for insufficient VRAM is not a sensible solution.
     
    Last edited: Jul 25, 2020
  20. bobblunderton

    bobblunderton Master Guru

    Messages:
    420
    Likes Received:
    199
    GPU:
    EVGA 2070 Super 8gb
    Never intended aggression in a text box on an internet forum, too old for that.
    However, I also never stated it would fix the issue; I intended it more as 'kicking the can down the road'. I merely said it's a good way to saturate x16 bandwidth on PCI-E 3.0. I do understand that not just the speed alone but the latency penalties would cripple one's fps, even if the PCI-E slot wasn't the limiting factor.
    ...and yes, you need a lot of VRAM here when working on objects with 1~3 million vertices, especially when generating LODs and collision meshes. Running out of VRAM stinks, but it is not always avoidable unless budget is no object. Too bad it's about 20~22 years too late to buy a new video card with expansion connectors for a daughterboard with more VRAM. That used to be a standard feature (though never with a standard pin-out/interface for the VRAM upgrade). Heck, you even used to be able to put expandable RAM on sound cards!
     
