GeForce RTX 30 series - Gaming Videos with RTX ON

Discussion in 'Frontpage news' started by Hilbert Hagedoorn, Sep 3, 2020.

  1. XenthorX

    XenthorX Ancient Guru

    Messages:
    3,681
    Likes Received:
    1,650
    GPU:
    3090 Gaming X Trio
    I have an Nvidia developer account :eek:
    I even contributed to some GameWorks integration in Unreal Engine.
     
    fantaskarsef and PrMinisterGR like this.
  2. Denial

    Denial Ancient Guru

    Messages:
    13,323
    Likes Received:
    2,823
    GPU:
    EVGA RTX 3080
    You can just sign up for it... they don't have a human approve it or anything; you just give them your email and a company name and you get access to everything.
     
    PrMinisterGR likes this.
  3. PrMinisterGR

    PrMinisterGR Ancient Guru

    Messages:
    7,675
    Likes Received:
    603
    GPU:
    Inno3D RTX 3090
    NVIDIA SHILL NVIDIA SHILL!

    ( :p )

    Seriously, do you get access to more specific documentation about features? I don't want to do it myself because I will autism-dive into it and stop having any semblance of a life :p
     
    XenthorX likes this.
  4. Fox2232

    Fox2232 Ancient Guru

    Messages:
    11,803
    Likes Received:
    3,359
    GPU:
    6900XT+AW@240Hz
    Even I have had one for years. They never made that sign-up a barrier; they made it to know who is interested in what. (The usual analytics.)

    So maybe you can make one for yourself and let that autism burn out in a few hours.
     
    Maddness and PrMinisterGR like this.

  5. nicugoalkeper

    nicugoalkeper Master Guru

    Messages:
    896
    Likes Received:
    23
    GPU:
    ASUS GTX 1060 DUAL OC 6GB
    I really love this talk about VRAM, but I think many of those who say it is too little are wrong because they don't see one simple fact: very few people are playing games at over 1080p.
    So what is the point for NVIDIA in making a card even more expensive if people don't have PC screens to play it on?
    Still, few people play games at over 1080p (I think 25-30%, and not all of them at 4K or above, but I'm optimistic).
    Nvidia needs to build cards for poor people too, or else they will not make more money.
    See here:
    https://gs.statcounter.com/screen-resolution-stats/desktop/worldwide
     
  6. PrMinisterGR

    PrMinisterGR Ancient Guru

    Messages:
    7,675
    Likes Received:
    603
    GPU:
    Inno3D RTX 3090
    Isn't this a matter of architecture too? If games move to streaming a single ultra-high-quality asset that the engine scales on the spot (like the UE 5.0 demo, or the things coming with DirectStorage), then the size of the VRAM will determine the quality of the final presentation of the asset, if I understand correctly.

    Also, getting a 3080 and using it at 1080p is completely beside the point. Someone who wants to game at that resolution should wait for the 3060/3050 and their AMD equivalents.

    10GB is too little VRAM for a card that boasts being at least 50% faster than the hardware in the next-gen consoles, which already have 16GB.
     
  7. Fox2232

    Fox2232 Ancient Guru

    Messages:
    11,803
    Likes Received:
    3,359
    GPU:
    6900XT+AW@240Hz
    And we did talk about this before, extensively, multiple times. The PS5 can't do more than 7GB of memory accesses (the sum of all reads and writes) per frame or it will not achieve 60 fps. The same limitation applies to the RX 5700 XT, except the RX 5700 XT has that memory bandwidth all to itself and does not share it with the CPU.

    Then you have the 3080 which, while having just 2GB more than the RX 5700 XT, has enough bandwidth to read+write over 12GB of data per frame from+into memory and still stay above 60fps.

    In other words: even if the PS5 did nothing but read 7GB of data from VRAM and had no need to write anything back, the 3080 can read the same 7GB, process it, write another 2GB, then read another 1GB and process it together with that written 2GB. And still be above 60 fps.
    And that's unrealistic, because the PS5 will write quite a lot of data back into GDDR6, which again reduces the actual available footprint for data being read each frame.
    I expect that the peak volume of graphical assets read per frame will be around 4~4.5GB. The rest (up to the 7GB limit for a 60 fps target) will be writes of data processed by the GPU, re-reads of it when needed within a given frame, plus CPU accesses for handling all the engine-related things.

    The only achievable thing with the 3080 is to give it more VRAM where most of the increase will not be accessible on a per-frame basis, which would serve as a cache at best.

    If I had to pick between a smaller working set on which I can do a lot of processing, or a large working set that I have no way of squeezing through the IMC, I'd pick the smaller one. But the 3080 is not in either situation. It can process a rather large working set in comparison to the PS5 or most older GPUs, and it can do a bit more with it because the IMC can pull/push more data than actually fits into VRAM. Sure, the total optimum would be something like 12~14GB of VRAM for the given bandwidth. But 20GB would be wasteful in all cases except the few where the user needs that extra cache.

    The same goes for people who want a 3070 with 16GB of VRAM: it can't fully handle even 8GB, but they want the cache. Maybe time will prove that VRAM as cache is important. But then I expect GPU manufacturers will opt to create a separate IMC chip handling a high-capacity, low-bandwidth, cheap cache which would still have much faster access than the GPU <=> system memory <=> storage chain, or GPU <=> storage via DirectStorage.
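    The per-frame figures above follow from dividing total memory bandwidth by the target frame rate. A minimal sketch of that arithmetic, assuming the public spec bandwidth numbers (~448 GB/s GDDR6 for the PS5 and RX 5700 XT, ~760 GB/s GDDR6X for the RTX 3080):

    ```python
    # Per-frame memory traffic budget: total bandwidth / target fps.
    # Bandwidth values are assumed spec numbers, not measurements.
    def gb_per_frame(bandwidth_gbs: float, fps: int) -> float:
        """How many GB of combined reads+writes fit into one frame."""
        return bandwidth_gbs / fps

    for name, bw in [("PS5 (shared with CPU)", 448),
                     ("RX 5700 XT", 448),
                     ("RTX 3080", 760)]:
        print(f"{name}: ~{gb_per_frame(bw, 60):.1f} GB/frame at 60 fps")
    ```

    That yields roughly 7.5 GB/frame for the 448 GB/s parts (hence the "7GB per frame or no 60 fps" limit once CPU sharing is counted) and about 12.7 GB/frame for the 3080.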
     
  8. PrMinisterGR

    PrMinisterGR Ancient Guru

    Messages:
    7,675
    Likes Received:
    603
    GPU:
    Inno3D RTX 3090
    Correct, but what about games that might be 1440p30 on a console when you want them to be 4K60 on the PC?

    "Then you have 3080 which, while having just 2 more GB than RX 5700 XT, has enough bandwidth to read+write over 12GB of data per frame from+into memory and still be above 60fps."

    I don't know, man. We have an idea of how this might work, but consider this: as a developer you have 14GB free, and an SSD that can do 9-22GB/sec. Depending on the character's movement, you can literally stream things in behind their back, like 12GB of textures for example. A GPU would still need to be able to fit those in its VRAM to get the same quality, right?

    I see your point, but that GPU is already much faster than the next-generation consoles, so it will have to be able to do higher native resolutions and better assets. All of that requires more VRAM. To me 20GB sounds like the sweet spot, to be honest.

    That's already in Ampere, Turing and RDNA 2.0. The GPU talks directly to NVMe storage without going via the CPU or system memory. I think this will actually make higher VRAM even more meaningful than before. We'll see, I guess, but I won't buy a "next gen" card with less than 12GB of VRAM.
     
  9. XenthorX

    XenthorX Ancient Guru

    Messages:
    3,681
    Likes Received:
    1,650
    GPU:
    3090 Gaming X Trio
    Nvidia developer accounts are mandatory for access to the Nvidia GitHub repositories, alongside download access to a bunch of tools and plugins for all kinds of purposes (off the top of my head, there's a cubemap plugin for Photoshop I usually grab).

    Nvidia has been implementing their latest tech in their own branch of Unreal Engine, all available on GitHub given you have a linked developer account.

     
  10. Fox2232

    Fox2232 Ancient Guru

    Messages:
    11,803
    Likes Received:
    3,359
    GPU:
    6900XT+AW@240Hz
    The 1st problem is that the increase in VRAM reads/writes from higher resolutions is not that critical. It is there, and raytracing may even add to it, but in most cases games use exactly the same assets at 4K as they do at 1080p. Therefore, if a game ran at 1080p 60fps not due to a GPU processing limitation on the console but due to a memory bandwidth limitation, they could increase the resolution without drastically increasing bandwidth requirements, and the limiting factor would become GPU power.
    On the same note, one could say a game developer may target 1080p@30fps with assets that are limited by VRAM bandwidth. Sure, that would be a problem, but it would be a technical problem for the console. And let's be honest again: that 30 fps (if caused by a bandwidth limitation) comes around the point where the game loads+writes 14GB of data per frame. (Excluding the OS, as that's usually pretty idle.)
    And the question is: can we even come up with a sensible scenario in which this happens on the PS5? And can we find a developer that would go for it? (I hope you remember my post about memory allocation by the OS and what's left for the game, and that part of that is the usual data handled by the CPU while part of it belongs to the GPU.)
    Suddenly, the PS5 having 16GB of total memory does not leave much room for 14GB of GPU<=>VRAM access per frame, unless you reprocess the same assets, or the products of asset processing, something like 3 times over.

    Had MS released the XSX with a full 20GB of VRAM, as they should have IMHO, a large share of your arguments would immediately have a lot of ground.
    We did talk about this too. You can't stream 12GB of textures without the user noticing that he now has no textures on the objects right in front of him. Realistically, given the limitations for good gameplay:
    - The GPU uses at most 4.5GB of assets per frame and writes and re-reads another 1GB (which requires its own memory region), totaling 6.5GB of R+W traffic and 5.5GB of memory allocation. Some of the rest is eaten by the CPU, in terms of both allocation and bandwidth.
    - Then you have, let's say, another 5GB of free memory for GPU caching behind the user's back. And this caching can't happen at a rate per frame that would exceed the 7GB limit needed to keep 60fps.
    - A reasonable limit for storage speed is 10GB/s. (Let's settle on the mean, as the raw rate is 5.5GB/s, the mean with decompression is 9GB/s, and the peak is close to your 20GB/s.)
    - This means storage can read the required 5GB in around 0.5s, which is good, and it will be loaded across 30 frames. That's some 171MB/frame loaded into memory from storage, and it does not obstruct the PS5's performance in any noticeable way.
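    The 171MB/frame figure above can be sketched as: 5GB of cache refilled at 10GB/s takes 0.5s, which at 60fps spans 30 frames. A quick check, using those assumed numbers (5GB cache, 10GB/s effective storage speed, 60fps target):

    ```python
    # Streaming-cache refill budget under the assumed figures above.
    cache_gb = 5        # free memory to refill behind the player's back
    storage_gbs = 10    # assumed mean storage throughput with decompression
    fps = 60            # frame-rate target

    refill_seconds = cache_gb / storage_gbs       # 0.5 s to read 5 GB
    frames = refill_seconds * fps                 # 30 frames in that window
    mb_per_frame = cache_gb * 1024 / frames       # ~171 MB landing in VRAM per frame

    print(f"refill: {refill_seconds}s over {frames:.0f} frames, "
          f"~{mb_per_frame:.0f} MB/frame")
    ```

    So the streaming load is a small, steady trickle per frame rather than a burst that competes with rendering bandwidth.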

    Given this reasonable memory partitioning and use, I see no big problem with the volume of VRAM on a 10GB GPU. A bigger problem would be how to consistently get 171MB per frame into VRAM when this caching happens on a PC. But we did discuss that too, right?
    See the underlined part a bit above :)

    I am not against people who want more VRAM. But I think it is a waste for most use cases, which here is mostly gaming, except for the few content creators here who could really use the 24GB of VRAM the 3090 has.
    By all means, if people are willing to pay for it. But to me it's analogous to buying an i5-2500K and OCing it to 4.5GHz instead of buying an 8C/8T Bulldozer. I did it because, for many years from purchase, I had better performance from the i5. At the very end I suffered and would have had a better experience from the 8C/8T Bulldozer.
    But all those years before the 1st game that suffered on a 4C/4T CPU made it worth it, and it provided a much better experience than any Bulldozer ever could.

    If I had to choose between a 16GB 3070 and a 10GB 3080, it would not even be a choice. I would take the 3080 without regrets. And I really wonder what the price difference between those 2 will be.
    And by the time I finally started to see the 3080's performance degrade due to insufficient VRAM (and it performed worse than a 16GB 3070), I would just move on to a 2~4 times as powerful GPU, or reduce some detail from UltraMaximum to UltraHigh.

    Saying that 10GB of VRAM is not going to be enough to play console ports well is the same as saying that 99.5% of GPUs currently in use are not enough to play those ports.
    Then who would be porting such a game to PC, when publishers know people are not going to buy a game that "Just Doesn't Work"?

    Anyway, we're going to see soon enough where games land. We can revisit this topic when that happens, or in case MS announces that the XSX has 20GB of GDDR6.
     
    Last edited: Sep 7, 2020
    Aura89 and PrMinisterGR like this.

  11. Spets

    Spets Ancient Guru

    Messages:
    3,070
    Likes Received:
    166
    GPU:
    RTX 3090
    I think that depends on whether it's enabled or disabled on a per-card basis in the game's code?
    Purely from a DXR standpoint, though, it technically will work. I think Vulkan-based games with RT would be difficult though, since they use specific extensions that will need extra work.
     
  12. nicugoalkeper

    nicugoalkeper Master Guru

    Messages:
    896
    Likes Received:
    23
    GPU:
    ASUS GTX 1060 DUAL OC 6GB
    You can compensate for the lack of VRAM with other optimizations and improvements to get that 50%. But on the other hand, how many times have you seen NVIDIA release a new card 50% faster? Maybe 30-40%.
    Still, looking at the card, they may actually pull off this 50% this time. We will see.
     
  13. PrMinisterGR

    PrMinisterGR Ancient Guru

    Messages:
    7,675
    Likes Received:
    603
    GPU:
    Inno3D RTX 3090
    Every time I hear "VRAM optimisations" I remember the Fury X and I shudder.

    If anything, in trying to justify how 10GB is OK, Nvidia has shown that you needed an 8GB card to play games well in the 8GB console generation.

    I don't understand why nobody takes this one step further, unless they plan to keep a $700 GPU for only a couple of years.
     
  14. kapu

    kapu Ancient Guru

    Messages:
    4,705
    Likes Received:
    407
    GPU:
    Radeon 6800
    Most people here think 6 gigs are totally fine. Everyone forgets we are not talking about NOW; once next-gen games start popping up, you will see a drastic rise in VRAM usage, justified by the new consoles having enough. 8GB will only cut it at 1080p/ultra detail.
     
    PrMinisterGR likes this.
  15. Fox2232

    Fox2232 Ancient Guru

    Messages:
    11,803
    Likes Received:
    3,359
    GPU:
    6900XT+AW@240Hz
    Why? Fiji ended the moment Polaris came. That's because no matter how many CUs GCN has, there is certain shared infrastructure, and there are parts that can scale only with clock speed. And Polaris simply had a sufficient clock boost to make Fiji look bad regardless of available VRAM.
    When a card runs out of horsepower before it runs out of VRAM, it is not exactly the example one wants to use to justify the claim that 10 or even 8GB of VRAM is not enough.

    And by now it is a composition fallacy. A few games having VRAM problems at maximum texture/shadow detail does not mean all games have that problem on a given card.
    The opposite is true. And this problem is non-critical unless the user insists on self-harm by refusing to reduce the given detail. And then it is their choice.
    On top of that, as closely as we are able to identify the edge where this problem happens for certain games, it is just above 4GB. And it took quite a few years for this edge to move from 2GB to 4GB.

    Neither AMD nor nVidia has shown a total lack of foresight in terms of available VRAM vs. performance as far as cards' longevity goes. If anything, they have historically turned high-end GPUs into entry-level GPUs within a few years.
    (Do you remember the times when denser transistor manufacturing enabled both nV and AMD to double shader counts?)
    How long did it take to get from 320SP to 1600SP?

    Back then, someone with an entry-level GPU could only dream of having performance still as relevant 4 years later as the 1060 3GB has. And to be honest, if either nV or AMD solves MCM for gaming GPUs within the next few generations, anything we have now will be done for right after (in terms of value, like the 2080Ti now).
     
    Last edited: Sep 8, 2020

  16. PrMinisterGR

    PrMinisterGR Ancient Guru

    Messages:
    7,675
    Likes Received:
    603
    GPU:
    Inno3D RTX 3090
    All I'm saying is:

    8GB Consoles = 8GB High End GPUs, therefore

    16GB Consoles = 16GB High End GPUs.
     
    Undying likes this.
  17. nicugoalkeper

    nicugoalkeper Master Guru

    Messages:
    896
    Likes Received:
    23
    GPU:
    ASUS GTX 1060 DUAL OC 6GB
    I did not say VRAM optimisations, but GPU optimisations. VRAM is not the only thing that drives a GPU.
    And that's wrong: the 16GB on the consoles is not only VRAM. It is a custom design, and some of that 16GB is used by the CPU for things other than graphics computations, and so on.
     
  18. kapu

    kapu Ancient Guru

    Messages:
    4,705
    Likes Received:
    407
    GPU:
    Radeon 6800
    Still much more than 8gigs :D
     
  19. nicugoalkeper

    nicugoalkeper Master Guru

    Messages:
    896
    Likes Received:
    23
    GPU:
    ASUS GTX 1060 DUAL OC 6GB
    The PS4 offered 5GB to game developers, and the PS4 Pro had 5.5GB available to game developers.
    So nearly 40% of the entire memory on a PS4 was used for other tasks.
    If we take a PS5 and do the same, then we can say that only 10GB is available to game developers.
    So maybe 10GB of VRAM on a GPU is OK.
    We will have to wait and see.
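    The extrapolation above just applies the PS4's OS reservation ratio to the PS5's total. A quick sketch, using the figures from the post (8GB/5GB for the PS4, 16GB total for the PS5):

    ```python
    # PS4: 8 GB total, ~5 GB exposed to games -> fraction reserved by the OS etc.
    ps4_total, ps4_games = 8, 5
    reserved_fraction = 1 - ps4_games / ps4_total   # 0.375, i.e. "nearly 40%"

    # Apply the same reservation fraction to the PS5's 16 GB total.
    ps5_total = 16
    ps5_games = ps5_total * (1 - reserved_fraction) # 10 GB left for games

    print(f"{reserved_fraction:.1%} reserved -> ~{ps5_games:.0f} GB for games on PS5")
    ```

    Whether the PS5's OS actually reserves the same proportion is an open assumption; the real reservation could well be a fixed amount rather than a fixed fraction.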
     
  20. kapu

    kapu Ancient Guru

    Messages:
    4,705
    Likes Received:
    407
    GPU:
    Radeon 6800
    I really hope so. The 3070 looks really good to me; I'm planning to stay at 1080p or go 1440p at best.
    I'm thinking about AMD also, but DLSS seems like a big feature that can make a card more future-proof.
    My 1060 turned out quite future-proof :)
     
