Gears of War Ultimate Edition. DX12 exclusive, requirements inside

Discussion in 'Videocards - AMD Radeon Drivers Section' started by PrMinisterGR, Feb 24, 2016.

  1. Horus-Anhur

    Horus-Anhur Master Guru

    Messages:
    378
    Likes Received:
    223
    GPU:
    GTX 1070
How can a game from the Xbox One need such high-end hardware?
Is there such a massive graphical improvement, or is this just another bad port?

    And an exclusive for Windows Store?
     
  2. Truder

    Truder Maha Guru

    Messages:
    1,368
    Likes Received:
    316
    GPU:
    Sapphire Fury Nitro
A lot of people don't realise just how good GCN 1.2 is at compression, and that's understandable, because you really need a 285/380 or a Fury-range card to see how it behaves.

In my case, I've very rarely found myself exceeding my card's VRAM capacity (granted, I'm playing at 1080p). A good example for me is GTA V: while the card can't exactly run the game at the highest settings, I can crank them up fairly high, and when I do, the game estimates it would use at least 4GB of VRAM (I have to bypass the memory limit). In some situations it does exceed the capacity and has to swap, but for the most part, memory usage would hover around 1600MB to 1800MB; it very rarely went past 1900MB, let alone maxed out at 2048MB.

AMD reports that GCN 1.2 can compress up to 40% of the content stored in VRAM, and that's a believable figure based on what I've observed.

Another factor is that the games industry has become very lazy in its rush to ship titles: instead of using lossless compression techniques, developers just dump raw textures and rely on the brute force of the hardware to render the game, which inflates the hardware requirements to compensate.

Either way, we'll have to see the results when the game comes out. Very annoyed, though, that it's only on the Windows Store -_-
     
  3. Dygaza

    Dygaza Master Guru

    Messages:
    536
    Likes Received:
    0
    GPU:
    Vega 64 Liquid
    I ran the test myself.

I don't remember where I downloaded the compiled version of the program (I downloaded it a year ago), but here is the address for the source:

    https://github.com/duzenko/OpenclMemBench
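For anyone curious what a test like that actually does, here is a minimal sketch of an OpenCL read-bandwidth measurement in Python with pyopencl. This is not the OpenclMemBench code, just an illustration of the idea: time a kernel that streams a large buffer and divide the bytes read by the elapsed time. The buffer size, work size and kernel name are arbitrary choices for the example.

[CODE]
# Minimal OpenCL read-bandwidth sketch (pyopencl + numpy).
# NOT the OpenclMemBench source; sizes and names are arbitrary examples.
import time
import numpy as np
import pyopencl as cl

KERNEL = """
__kernel void read_sum(__global const float4 *src, __global float *out, int n)
{
    int gid = get_global_id(0);
    int stride = get_global_size(0);
    float4 acc = (float4)(0.0f);
    // Each work-item strides through the buffer so reads stay coalesced.
    for (int i = gid; i < n; i += stride)
        acc += src[i];
    // Write something out so the reads can't be optimised away.
    out[gid] = acc.x + acc.y + acc.z + acc.w;
}
"""

ctx = cl.create_some_context()
queue = cl.CommandQueue(ctx)

n_float4 = 32 * 1024 * 1024                             # 32M float4 = 512 MiB read per launch
host = np.random.rand(n_float4, 4).astype(np.float32)   # random data ~ the "random texture" case
src = cl.Buffer(ctx, cl.mem_flags.READ_ONLY | cl.mem_flags.COPY_HOST_PTR, hostbuf=host)

global_size = 256 * 1024
out = cl.Buffer(ctx, cl.mem_flags.WRITE_ONLY, size=global_size * 4)

prg = cl.Program(ctx, KERNEL).build()

# One warm-up launch, then a timed one.
prg.read_sum(queue, (global_size,), None, src, out, np.int32(n_float4))
queue.finish()
t0 = time.time()
prg.read_sum(queue, (global_size,), None, src, out, np.int32(n_float4))
queue.finish()
elapsed = time.time() - t0

gb = host.nbytes / 1e9
print(f"Read {gb:.2f} GB in {elapsed * 1000:.1f} ms -> {gb / elapsed:.1f} GB/s")
[/CODE]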

Edit: Take a look at TechReport's memory bandwidth test.


Seems the 290X is also having efficiency problems.
     
    Last edited: Feb 24, 2016
  4. Noisiv

    Noisiv Ancient Guru

    Messages:
    7,600
    Likes Received:
    1,025
    GPU:
    2070 Super
That is the difference between theoretical and measured numbers.

And it seems that even when the measured quantity is itself a theoretical one (bandwidth), as opposed to a practical one (fps, frametime), the obtained data does not match the theoretical projection.

The 290X is the only one that fetches random textures just as efficiently as black ones.
     

  5. Dygaza

    Dygaza Master Guru

    Messages:
    536
    Likes Received:
    0
    GPU:
    Vega 64 Liquid
My point was that the 290X's theoretical maximum speed is 320GB/s. However, you raise a valid point.
     
    Last edited: Feb 24, 2016
  6. RexOmnipotentus

    RexOmnipotentus Master Guru

    Messages:
    796
    Likes Received:
    4
    GPU:
    Vega 64
    What are black textures?
     
  7. Alessio1989

    Alessio1989 Ancient Guru

    Messages:
    1,915
    Likes Received:
    526
    GPU:
    .
You can have 1TB/s of bandwidth, but PCI-E is still the bottleneck, from both a latency and a bandwidth point of view.
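Some rough numbers to put that in perspective; these are theoretical peaks for illustration, assuming a PCIe 3.0 x16 link:

[CODE]
# Theoretical peak bandwidths, one direction (illustrative comparison only).
pcie3_x16 = 15.75      # GB/s, PCIe 3.0 x16 after 128b/130b encoding
fury_x_hbm = 512.0     # GB/s, Fiji HBM
titan_x_gddr5 = 336.5  # GB/s, GM200 GDDR5

print(f"HBM is ~{fury_x_hbm / pcie3_x16:.0f}x the PCIe link")       # ~33x
print(f"GDDR5 is ~{titan_x_gddr5 / pcie3_x16:.0f}x the PCIe link")  # ~21x
[/CODE]

So anything that has to be fetched over the bus every frame costs well over an order of magnitude more than the same data already sitting in VRAM, before latency even enters the picture.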
     
  8. Rich_Guy

    Rich_Guy Ancient Guru

    Messages:
    12,622
    Likes Received:
    620
    GPU:
    MSI 2070S X-Trio
Drop something down from very high to high on the Furys (I don't have the game, so I'm not sure which setting it is; could be textures), or it stutters like buggery, as very high needs more than 4GB of VRAM.
     
    Last edited: Feb 24, 2016
  9. Fox2232

    Fox2232 Ancient Guru

    Messages:
    11,528
    Likes Received:
    3,204
    GPU:
    6900XT+AW@240Hz
A black texture is as good as a texture made of just a single black pixel: it's the trivial, best-case load. It's a question of being stupid or smart at development.

The random-texture result is what matters; it's close to a worst-case scenario. And it says the Fury X has 40% higher data transfer speed than the Titan X, which means the Titan X will start to suffer from high memory bandwidth requirements much sooner than the Fury X.

Now, imagine you need to read exactly 4GB from VRAM to draw a single frame. The Titan X can then read that amount of data 238 / 4 = 59.5 times per second.
If there were a game that really required reading 10GB per frame, that would be 238 / 10 = 23.8 fps at best.
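The same back-of-the-envelope arithmetic as a tiny script, for anyone who wants to plug in their own numbers (the 238GB/s figure is the measured one quoted above; the per-frame read sizes are hypothetical):

[CODE]
# Upper bound on frame rate if a fixed amount of data must be read from VRAM every frame.
def max_fps(bandwidth_gb_s, gb_read_per_frame):
    return bandwidth_gb_s / gb_read_per_frame

titan_x_measured = 238.0                # GB/s, random-texture result quoted above
print(max_fps(titan_x_measured, 4.0))   # 59.5 fps if 4 GB must be read per frame
print(max_fps(titan_x_measured, 10.0))  # 23.8 fps if 10 GB must be read per frame
[/CODE]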

Apparently, most games reuse textures across multiple objects, so allocating 2GB of VRAM may still mean reading 4GB of data per frame.

That points to a simple fact: a big part of VRAM utilization is precached data that is not used on a per-frame basis, unless you want your game to run at sub-60fps on a Titan X, or the developer decided to use high-resolution textures unique to each object.
Again, I say a developer must be stupid, or have a special agenda, to use resources this improperly.
     
  10. sammarbella

    sammarbella Ancient Guru

    Messages:
    3,929
    Likes Received:
    178
    GPU:
    290X Lightning CFX (H2O)
    I guess they recommend the 390X because it's the best AMD GPU with enough VRAM to play this game at 4K with a minimum sustained FPS.

    Thanks for the heads up!

    Finally a good reason to test the Vulkan driver. :D

I agree the people who bought it should know that "little" detail, but the Fury X was promoted as a 4K GPU, overkill for playing at 1080p.

PC gamers in here should know that, but average casual gamers out there are easy prey for PR and ads.

It can be "shocking" (a fancy term from another thread) for some, but this is ROTTR at Very High, at only 1080p (under a spoiler to protect sensitive people):

    http://kotaku.com/rise-of-the-tomb-raider-pc-benchmarks-steep-demands-1756350619

IDK about you, but I bought my 290X cards in the first half of 2014; back then 4GB of VRAM was plenty.

The last 290X models have 8GB of VRAM (MSI Gaming), but it was too late for me; I had already bought two 290X Lightnings with 4GB.

Buying a Fury X with only 4GB of VRAM in the second half of 2015, when its direct competitor (the 980 Ti) had 6GB and lower-tier AMD cards (390X) already had 8GB, was a bad idea even when the Fury X launched.

Buying a Fury X now is simply very ill-advised.

Wait for Polaris, or buy a GPU of a different color now...
     

  11. Goiur

    Goiur Maha Guru

    Messages:
    1,097
    Likes Received:
    378
    GPU:
    ASUS TUF RTX 3080
I chose the Fury X over the 980 Ti because I know what happens with NVIDIA: "last-gen" gets rekt when the new one is released.

The first games at the Fury X's release used less VRAM than the 980 Ti did, so I hoped for the best and wished 4GB would be enough for 1440p. Guess I was wrong. :banana:

When the Fury X is done, I'll sell it and go for another GPU :)
     
  12. RexOmnipotentus

    RexOmnipotentus Master Guru

    Messages:
    796
    Likes Received:
    4
    GPU:
    Vega 64
@sammarbella, I bought my R9 290s in March 2015. I paid 269 euros each. That was very cheap, and they have much better coolers than the reference cards.

Buying a card with more VRAM was simply not worth it to me. It would have cost a lot more money, at least 100+ euros more per card. At that price difference I would rather go for the 4GB version than the 8GB version.

I might buy new cards next year, if I really feel the need for it. I usually buy a new GPU every 2 - 3 years.

About ROTTR: something definitely changed. It seems the CrossFire profile has improved. Before, I would get (micro)stuttering at random places at 4K, and the only solution was to lower the resolution to 1440p. With the Vulkan driver I can keep playing at 4K and there is no (micro)stuttering anymore. However, in very rare cases the (micro)stuttering is still there.
     
    Last edited: Feb 24, 2016
  13. sammarbella

    sammarbella Ancient Guru

    Messages:
    3,929
    Likes Received:
    178
    GPU:
    290X Lightning CFX (H2O)
    That was a good deal!

Your GPUs still demolish a single Fury X in any game (when CFX works!). :D

IMHO the worst aspect of the first GCN 1.1 GPUs' (290(X)) reference design was its ineffective heat dissipation; the MSI and Sapphire custom PCBs and coolers are way better.
     
  14. Despoiler

    Despoiler Master Guru

    Messages:
    244
    Likes Received:
    0
    GPU:
    Sapphire TRI-X R9 Fury

AMD isn't doing any compression with HBM. They just have better memory management for what resides in VRAM (only actively used items) vs. what resides in system memory (streamed in as needed).
     
  15. Ryu5uzaku

    Ryu5uzaku Ancient Guru

    Messages:
    7,008
    Likes Received:
    230
    GPU:
    6800 XT
TBH, 4GB is enough for most gamers out there for a good while, be it 1080p or 1440p; only on rare occasions won't it be. It's sometimes enough for upcoming games at 4K settings too. Not that I'm worried, as I game at 1200p. And that ROTTR result at 1080p is just weird from the Fury cards, considering other 4GB cards don't have such an issue with the minimum fps... neither the 970 nor the 980. Most likely even the 290 and 290X wouldn't have that issue, but those aren't tested in that one.
     
    Last edited: Feb 25, 2016

  16. oGow89

    oGow89 Maha Guru

    Messages:
    1,213
    Likes Received:
    0
    GPU:
    Gigabyte RX 5700xt
The PowerColor PCS+ is the best of all in terms of cooling. It's so aggressive that it tends to keep the core at 67°C max at all times, resulting in much cooler VRMs.
     
  17. PrMinisterGR

    PrMinisterGR Ancient Guru

    Messages:
    7,529
    Likes Received:
    524
    GPU:
    Inno3D RTX 3090
     
  18. Szaby59

    Szaby59 Active Member

    Messages:
    84
    Likes Received:
    0
    GPU:
    Sapphire RX Vega56
  19. SimBy

    SimBy Member Guru

    Messages:
    189
    Likes Received:
    0
    GPU:
    R9 290
    Well it seems 4GB of VRAM is not an issue at all. At least not for AMD.

    http://www.hardocp.com/article/2016...graphics_features_performance/14#.VtWtTSgrLWI

    VRAM Usage

    VRAM utilization was one of the more interesting aspects of Rise of the Tomb Raider. It appears to be very different depending on what GPU you are running. There is no question this game can consume large amounts of VRAM at the right settings. With the GeForce GTX 980 Ti we maxed out its 6GB of VRAM just at 1440p with No AA and maximum settings. The TITAN X revealed that it was actually using up to 7GB of VRAM for that setting. In fact, when we pushed the TITAN X up to 4K with SSAA it used up to almost 10GB of VRAM.

    However, for the AMD Radeon R9 390X utilization was a bit odd when you first see it, never exceeding 4.2GB and remaining "flat" with "Very High" textures and SSAA. We did see the proper decrease in VRAM using lower settings, but the behavior was odd indeed. This didn't seem to negatively impact the video card however. The VRAM is simply managed differently with the Radeon R9 300 series.

The AMD Radeon R9 Fury X kind of backs that statement up since it was able to allocate dynamic VRAM past its 4GB of dedicated VRAM capacity. We saw up to 4GB utilization of dynamic VRAM. That allowed the Fury X to keep its 4GB of dedicated VRAM maxed out and then use system RAM for extra storage. In our testing, this did not appear to negatively impact performance. At least we didn't notice anything in terms of choppy framerates or "micro-stutter." The Fury X seems to be using the dynamic VRAM as a cache rather than as a direct pool of instant VRAM. This would make sense, since it did not cause a performance drain and system RAM is obviously a lot slower than local HBM on the Fury X. If you remember, a good while ago AMD was making claims to this effect, but this is the first time we have actually been able to show results in real-world gaming. It is awesome to see some actual validation of these statements a year later. This is what AMD said about this in June of 2015.

Note that HBM and GDDR5 memory sizes can’t be directly compared. Think of it like comparing an SSD’s capacity to a mechanical hard drive’s capacity. As long as both capacities are sufficient to hold local data sets, much higher performance can be achieved with HBM, and AMD is hand tuning games to ensure that 4GB will not hold back Fiji’s performance. Note that the graphics driver controls memory allocation, so it’s incorrect to assume that Game X needs Memory Y. Memory compression, buffer allocations, and caching architectures all impact a game’s memory footprint, and we are tuning to ensure 4GB will always be sufficient for 4K gaming. Main point being that HBM can be thought of as a giant embedded cache, and is not directly comparable to GDDR5 sizes.

Now specifically, that statement backs up "4K gaming," and we will give AMD (and NVIDIA for that matter) a pass at this moment as neither produces a "4K gaming" GPU. The important statement here is this: "AMD is hand tuning games to ensure that 4GB will not hold back Fiji’s performance." We have said over and over again that this statement by AMD did not ring true in terms of needed HBM capacity, and this is actually the first time we have seen AMD’s statement make sense to us in real-world gaming with Triple A titles. So kudos to AMD for being able to show us that this statement has finally come to fruition. The downside we see to this statement is that AMD will have to "hand tune" games in order to make this work. What games will get hand tuned in the future? That said, AMD seems to have done an excellent job hand tuning Rise of the Tomb Raider for its HBM architecture.
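A rough way to see why this only works if the spilled data is rarely touched: average the per-byte cost by where each read actually comes from. The figures below are illustrative theoretical peaks (Fiji HBM, PCIe 3.0 x16), not measurements from the article:

[CODE]
# Effective read bandwidth when a fraction of per-frame reads spill over PCIe
# into system RAM. All figures are illustrative theoretical peaks.
hbm_bw  = 512.0   # GB/s, Fury X local HBM
pcie_bw = 15.75   # GB/s, PCIe 3.0 x16, one direction

def effective_bw(fraction_from_hbm):
    # Time to move 1 GB, weighted by source, then inverted back to GB/s.
    time_per_gb = fraction_from_hbm / hbm_bw + (1.0 - fraction_from_hbm) / pcie_bw
    return 1.0 / time_per_gb

for frac in (1.0, 0.99, 0.95, 0.90):
    print(f"{frac:.0%} of reads from HBM -> ~{effective_bw(frac):.0f} GB/s effective")
[/CODE]

Even a few percent of per-frame traffic coming over the bus cuts the effective figure sharply, which is why the overflow has to behave like a rarely-read cache rather than live working set for this scheme to avoid stutter.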
     
  20. SimBy

    SimBy Member Guru

    Messages:
    189
    Likes Received:
    0
    GPU:
    R9 290
    "Surely the performance gets even worse as you make your way down the Radeon product stack, right? Oddly enough, no. I tested an Asus Strix R7 370 under the same demanding 4K benchmark, and it turned in only a 13% lower average framerate. Crucially, no stuttering or artifacting was present."

So let me get this straight: this guy is saying the R7 370 is only 13% slower than the Fury X at the same settings, and exhibits no stuttering or corruption, while having the same supposedly problematic 4GB of VRAM?

There's definitely something seriously wrong with this game, and it's not the 4GB of VRAM.
     
