Quantum Break is Coming to PC on April 5th and it will be a DX12 title

Discussion in 'Videocards - AMD Radeon Drivers Section' started by PrMinisterGR, Feb 12, 2016.

  1. PrMinisterGR

    PrMinisterGR Ancient Guru

    Messages:
    8,125
    Likes Received:
    969
    GPU:
    Inno3D RTX 3090
    DX12 only.

    Here are the requirements.

    Code:
    [B]Minimum Requirements[/B]
    
    OS: Windows 10 (64-bit)
    DirectX: DirectX 12
    CPU: Intel Core i5-4460, 2.70GHz or AMD FX-6300
    GPU: NVIDIA GeForce GTX 760 or AMD Radeon R7 260x
    VRAM: 2 GB
    RAM: 8 GB
    HDD: 55 GB available space
    
    
    [B]Recommended Requirements[/B]
    
    OS: Windows 10 (64-bit)
    DirectX: DirectX 12
    CPU: Intel Core i7-4790, 4GHz or AMD equivalent
    GPU: NVIDIA GeForce GTX 980 Ti or AMD Radeon R9 Fury X
    VRAM: 6 GB
    RAM: 16 GB
    HDD: 55 GB available space
    
    Some weird points. There is no AMD equivalent to the 4790, and I'm not sure why it would be needed, since the game will be DX12 and Remedy are known for optimizing their titles. Meanwhile for the low spec they seem to equate a six core AMD to a Haswell-generation i5. :3eyes:
    In theory, the GTX 760 should be quite a bit faster than the R7 260x. It has double the memory bandwidth, more shader processors and double the ROPs. For all intents and purposes it should be the equivalent of the 270, not the 260x. The 260x is literally almost half the card the 760 is. I don't know if that's telling us something about the performance of Kepler under DX12, the state of the NVIDIA driver, both, or whether the game simply has an AMD bias.
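    For a ballpark sense of that gap, here's a quick back-of-the-envelope comparison. The spec figures are approximate numbers pulled from public spec sheets, so treat the output as rough estimates rather than anything authoritative.
    Code:
    // Rough comparison of the two minimum-spec GPUs.
    // Spec numbers are approximate, taken from public spec sheets.
    #include <cstdio>
    
    struct GpuSpec {
        const char* name;
        int    shaders;       // shader / stream processors
        int    rops;          // raster output units
        double coreMHz;       // typical core clock
        int    busWidthBits;  // memory bus width
        double memGbps;       // effective memory data rate per pin
    };
    
    int main() {
        const GpuSpec cards[] = {
            { "GeForce GTX 760", 1152, 32,  980.0, 256, 6.0 },
            { "Radeon R7 260X",   896, 16, 1100.0, 128, 6.5 },
        };
        for (const GpuSpec& g : cards) {
            double bandwidthGBs = g.busWidthBits / 8.0 * g.memGbps; // GB/s
            double fillrateGPs  = g.rops * g.coreMHz / 1000.0;      // Gpixels/s
            std::printf("%-16s %6.1f GB/s %6.1f Gpix/s %5d shaders\n",
                        g.name, bandwidthGBs, fillrateGPs, g.shaders);
        }
        return 0;
    }
    That works out to roughly 192 vs 104 GB/s of bandwidth and about 31 vs 18 Gpixels/s of fill rate, which is why the pairing looks so lopsided on paper.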
    Final note: for the recommended spec they say Fury X or 980 Ti, yet they also list a minimum of 6GB of VRAM. I don't know if AMD's memory management team is actually doing something, or if they're just telling people with Furies to lower settings like textures. We'll see.

    All in all, I don't know what the recommended spec is supposed to represent. Maybe it means 60fps at 4K or something, or it might mean the game will run like sh*t. Knowing Remedy, I hope it's the former.
     
    Last edited: Feb 12, 2016
  2. KyleStilkey

    KyleStilkey Master Guru

    Messages:
    499
    Likes Received:
    38
    GPU:
    Sapphire 6800 XT N+
    I hope this turns out great. Remedy do a great job with their PC titles. I'm looking forward to seeing how it performs and to some DX12 benchmarks. Now it's just the waiting game.
     
  3. fantaskarsef

    fantaskarsef Ancient Guru

    Messages:
    15,636
    Likes Received:
    9,512
    GPU:
    4090@H2O
  4. Blackfyre

    Blackfyre Maha Guru

    Messages:
    1,384
    Likes Received:
    387
    GPU:
    RTX 3090
    I'm really looking forward to this game. Should be around the time we get the full details for Polaris too.
     

  5. sammarbella

    sammarbella Guest

    Messages:
    3,929
    Likes Received:
    178
    GPU:
    290X Lightning CFX (H2O)
    I read their "recommended" as: an NVIDIA 980 Ti with 6 GB is enough and optimal for our game, and as for AMD, well... give us the best single GPU you have, even if your best only has 4 GB, the Fury X.

    It would be a real slap in AMD's face (and in Fury (X) owners') to ask for the 8 GB 290X/390X models instead of the Fury X.

    If this is another game like ROTTR that needs 6 GB of VRAM to play at >=1440p on very high, and the Fury has to drop to a lower texture quality because it falls short on VRAM, it could start a very bad VRAM-requirements trend for Fury (X) sales.

    Edit: How is your baked GPU doing? :)
     
    Last edited: Feb 12, 2016
  6. Blackfyre

    Blackfyre Maha Guru

    Messages:
    1,384
    Likes Received:
    387
    GPU:
    RTX 3090
    AMD just confirmed Hitman will have DX12 and it'll be a "Gaming Evolved" title with exclusive DX12 features.
     
  7. sammarbella

    sammarbella Guest

    Messages:
    3,929
    Likes Received:
    178
    GPU:
    290X Lightning CFX (H2O)
    It's good that AMD is sponsoring an AAA DX12 title; let's see how a DX12 game performs on AMD cards without the "help" of GameWorks!
     
  8. Bloodred217

    Bloodred217 Master Guru

    Messages:
    356
    Likes Received:
    1
    GPU:
    2x GTX 1080 G1 Gaming 8GB
    Here's hoping for some multi-GPU DX12 goodness as well, not just in Quantum Break but in all the upcoming DX12 titles. I hope DX12 won't bring about an age where CF/SLI don't work because it's no longer the driver's job and devs don't give a f*ck.
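    On the "no longer the driver's job" point: under DX12 the engine is expected to enumerate every GPU itself and split the work across them explicitly. A minimal sketch of just that first step, with error handling stripped out and the function name purely illustrative:
    Code:
    #include <d3d12.h>
    #include <dxgi1_4.h>
    #include <wrl/client.h>
    #include <vector>
    
    using Microsoft::WRL::ComPtr;
    
    // One D3D12 device per physical adapter; how the frame is split across
    // them afterwards is entirely up to the engine, not the driver.
    std::vector<ComPtr<ID3D12Device>> CreateDevicePerAdapter()
    {
        ComPtr<IDXGIFactory4> factory;
        CreateDXGIFactory1(IID_PPV_ARGS(&factory));
    
        std::vector<ComPtr<ID3D12Device>> devices;
        ComPtr<IDXGIAdapter1> adapter;
        for (UINT i = 0; factory->EnumAdapters1(i, &adapter) != DXGI_ERROR_NOT_FOUND; ++i) {
            ComPtr<ID3D12Device> device;
            if (SUCCEEDED(D3D12CreateDevice(adapter.Get(), D3D_FEATURE_LEVEL_11_0,
                                            IID_PPV_ARGS(&device))))
                devices.push_back(device);
        }
        return devices;
    }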
     
  9. fantaskarsef

    fantaskarsef Ancient Guru

    Messages:
    15,636
    Likes Received:
    9,512
    GPU:
    4090@H2O
    So... do we have the right to complain if the game runs better on AMD cards than on Nvidia just because it's a Gaming Evolved title? This is a serious question, no flaming intended, since I expect it to be that way :D
     
  10. sammarbella

    sammarbella Guest

    Messages:
    3,929
    Likes Received:
    178
    GPU:
    290X Lightning CFX (H2O)
    That's a rhetorical question... right?

    :D
     

  11. fantaskarsef

    fantaskarsef Ancient Guru

    Messages:
    15,636
    Likes Received:
    9,512
    GPU:
    4090@H2O
    No, it's just that I see the environment shifting, presumably in favour of AMD's more parallel architecture. Then again, that's pretty much the reverse of how it used to be, and I wondered whether it will be acceptable to whine just because people bought Nvidia hardware.
    And with the first native DX12 game that's released as one (I don't count AoS, it's way too niche to show us much about anything but scheduler performance) we'll now get a first real impression of how the hardware can perform.

    But since the moaning has started in the past at the mere announcement of any game/dev going for GameWorks or being a TWIMTBP title, I wonder if I'll get to do the same soon :eyes:
     
  12. Dygaza

    Dygaza Guest

    Messages:
    536
    Likes Received:
    0
    GPU:
    Vega 64 Liquid
    Why are you saying there is no AMD equivalent to the 4790K? That's true for DX11, but under DX12 they should be closer.
     
  13. Goiur

    Goiur Maha Guru

    Messages:
    1,339
    Likes Received:
    629
    GPU:
    ASUS TUF RTX 4080
    Yep, that one could end up being the biggest high-end fail card I can remember, but so far HBM has managed to get by with less VRAM than GDDR5.
    
    I really hope ROTTR was a bad port (or Nvidia's hand) rather than an actual VRAM problem at 1440p.
    
    But well, if 4 GB of HBM isn't enough, I guess next gen we'll have to sell our Furies for 400-500€ (if possible) and get a new card.
     
  14. Bloodred217

    Bloodred217 Master Guru

    Messages:
    356
    Likes Received:
    1
    GPU:
    2x GTX 1080 G1 Gaming 8GB
    To be fair, a 4790K is generally faster than an FX-9590 even in heavily multithreaded tasks, not only in games. The gap should be considerably smaller though, if what we're hearing about DX12 is true.
     
  15. Krteq

    Krteq Maha Guru

    Messages:
    1,129
    Likes Received:
    764
    GPU:
    MSI RTX 3080 WC
    Well, there are some major differences in approach.

    nVidia's GameWorks libraries are black-boxed trash with no access to the source code (only under a special licence)...
    
    ...on the other hand, DX12 async compute is a well-documented, standardised feature, with source code samples provided by Microsoft.
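    To illustrate how standard it is: the starting point for async compute is nothing more exotic than a second command queue of the COMPUTE type, created through the documented API. A minimal sketch, assuming `device` is already a valid ID3D12Device, with error handling omitted:
    Code:
    #include <d3d12.h>
    #include <wrl/client.h>
    
    using Microsoft::WRL::ComPtr;
    
    // A dedicated compute queue that lives alongside the usual DIRECT
    // (graphics) queue; work submitted here can overlap with rendering.
    ComPtr<ID3D12CommandQueue> CreateComputeQueue(ID3D12Device* device)
    {
        D3D12_COMMAND_QUEUE_DESC desc = {};
        desc.Type  = D3D12_COMMAND_LIST_TYPE_COMPUTE;
        desc.Flags = D3D12_COMMAND_QUEUE_FLAG_NONE;
    
        ComPtr<ID3D12CommandQueue> queue;
        device->CreateCommandQueue(&desc, IID_PPV_ARGS(&queue));
        return queue;
    }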
     
    Last edited: Feb 12, 2016

  16. fantaskarsef

    fantaskarsef Ancient Guru

    Messages:
    15,636
    Likes Received:
    9,512
    GPU:
    4090@H2O
    That doesn't change much if the game is optimized for one vendor through heavy use of async compute. Heavy async compute makes a game optimized not for both architectures, but only for one. The difference might matter for the dev, true, but not for me as a gamer. I couldn't care less whether the dev can see the code; as long as they lean on a feature that doesn't run well on one vendor's GPU architecture, the game is practically optimized for the other GPU.
    
    Again, I'm not saying AMD is evil; their architecture might finally get the use and the boost to show what it was capable of all along. I just want to see what the new rules of tomorrow's environment are. The future might just be around the corner :)
     
  17. theoneofgod

    theoneofgod Ancient Guru

    Messages:
    4,677
    Likes Received:
    287
    GPU:
    RX 580 8GB
    It's true that DX12 can make use of more threads, but from what I can see, games are still very reliant on per-core performance, just like with DX11.
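    For context on the "more threads" part, this is roughly the model DX12 exposes: each worker thread records its own command list and the main thread submits them all in one go. A simplified sketch; the names are illustrative rather than taken from any real engine:
    Code:
    #include <d3d12.h>
    #include <wrl/client.h>
    #include <thread>
    #include <vector>
    
    using Microsoft::WRL::ComPtr;
    
    struct WorkerContext {
        ComPtr<ID3D12CommandAllocator>    allocator;
        ComPtr<ID3D12GraphicsCommandList> list;
    };
    
    void RecordAndSubmit(ID3D12Device* device, ID3D12CommandQueue* queue, int workers)
    {
        // Each worker gets its own allocator + command list so it can record
        // without locking against the others.
        std::vector<WorkerContext> ctx(workers);
        for (auto& c : ctx) {
            device->CreateCommandAllocator(D3D12_COMMAND_LIST_TYPE_DIRECT,
                                           IID_PPV_ARGS(&c.allocator));
            device->CreateCommandList(0, D3D12_COMMAND_LIST_TYPE_DIRECT,
                                      c.allocator.Get(), nullptr, IID_PPV_ARGS(&c.list));
        }
    
        std::vector<std::thread> threads;
        for (auto& c : ctx)
            threads.emplace_back([&c] {
                // ...record this thread's slice of the frame here...
                c.list->Close();
            });
        for (auto& t : threads) t.join();
    
        // Submit everything in one batch on the main thread.
        std::vector<ID3D12CommandList*> lists;
        for (auto& c : ctx) lists.push_back(c.list.Get());
        queue->ExecuteCommandLists(static_cast<UINT>(lists.size()), lists.data());
    }
    The recording can spread across cores, but every draw call still costs CPU time somewhere, which fits with games remaining sensitive to per-core speed.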
     
  18. Dygaza

    Dygaza Guest

    Messages:
    536
    Likes Received:
    0
    GPU:
    Vega 64 Liquid
    AMD's 8-core CPUs do hit memory bandwidth problems when all the cores are under load. The AotS benchmark in particular showed much better scaling when the RAM speed was raised.
    
    It's quite hard to say for real, but the truth is it should be quicker than the i5 option, and that might be what this recommended list is after.
     
  19. PrMinisterGR

    PrMinisterGR Ancient Guru

    Messages:
    8,125
    Likes Received:
    969
    GPU:
    Inno3D RTX 3090
    I posted this from my phone last night. I didn't see that it was on the front page; I have changed the link accordingly. I also posted here because I would prefer a more focused conversation than what's happening on the front page right now.

    They haven't been very clear on what "recommended" really means. Is that what you need for everything maxed out at 1080p60? If so, then it had better have considerably better visuals than the Xbox One version. The 6GB either implies better texture quality than the console version, or simply bad optimization where the game's data needs to be mirrored. The Xbox One gives games roughly 5GB of total memory (for both GPU and CPU use), so in reality a 3-4GB GPU should be fine.
    A lot of people forget that the Fury X has the same number of ROPs as the 390X, and roughly 45% more shading units and TMUs. On top of that there are the GCN 1.2 tessellation optimizations, and of course the gigantic bandwidth of HBM, but the differences are not that great unless those shading units and the tessellator get good use.
    My system has never been better. I used to get some intermittent freak-outs about once a month, which I attributed to the humidity and heat here, but apparently the GPU was a bit dodgy from the beginning. Now it's truly rock stable.

    This is my issue with these specs. If the FX-6300 is good enough for the minimum, then why not recommend a lower-end Intel CPU? Unless an i3 simply doesn't have enough threads, so you recommend the lower-end i5 because you need at least four hardware threads for the game. So at the low end, the AMD CPUs seem to be performing far better than we are used to seeing.
    My question is about the high end. If an FX-6300 is equal to an i5, why not recommend the 8350 for the high end? If the performance relationship from the minimum spec holds, it should be right there.
    The same thing happens on the GPU side. The 260x is basically the Xbox One GPU: 16 ROPs, 896 shader units, GCN 1.1. That makes sense for the lower spec, but then look at the NVIDIA card. The GTX 760 should be up against the 270/7870, and definitely NOT against the 260x. The 760 has double the ROPs, more texture units, many more shading units and double the memory bandwidth. The 260x is in the same ballpark as the 750 Ti/650 Ti.
    So at the low end, the AMD hardware seems to be doing much better than usual. It's almost paired against hardware with double the performance.

    What makes this weird is applying that information to the higher end of the spectrum. I know it's a strange comparison, but if the 260x is equal to the GTX 760, then there is no way the 980 Ti should be in the same category as the Fury X.

    All of the above makes me believe that the "minimum" spec means Xbox One performance, that this is most likely a straight port, and that's why AMD's parts get this boost. A ridiculously optimized console game should run well on AMD hardware, unless things are changed, and why would they be. If that is the case, then Intel/NVIDIA are "paying" the price for the different architecture.

    That changes at the higher end either because the game has some settings that bottleneck hardware from all manufacturers in the same way, or because of politics, and possibly both. NVIDIA and Intel are the biggest manufacturers with the greatest market share, and they could use their PR machines to trash the game, so concessions had to be made in the recommended specifications.

    As mentioned before, async compute is not an AMD architecture feature, it's part of DX12. The whole point of it is that as a developer you can tap into all that power in the GPU in any way you want. NVIDIA currently does this badly, and that's that. Their current architecture has given them almost total market domination and excellent performance under DX11; there is no magic, they made some concessions hardware-wise. I don't believe developers will change their whole engines (especially for console ports) because Maxwell can't switch between tasks fast enough. If they are kind-hearted they might try a different path for NVIDIA hardware, but that misses the whole point of a common API. Async compute is what makes games that look like this one possible on the consoles.
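    To make the "tap into that power" part concrete, the usual pattern is a fence between the graphics and compute queues so compute work overlaps with rendering instead of waiting for it. A hedged sketch; the queues, fence and command lists are assumed to already exist (see the queue-creation snippet earlier in the thread), and the function name is just illustrative:
    Code:
    #include <d3d12.h>
    
    // The compute queue waits for a point signalled by the graphics queue,
    // then both queues run concurrently until the next sync point.
    void KickAsyncCompute(ID3D12CommandQueue* graphicsQueue,
                          ID3D12CommandQueue* computeQueue,
                          ID3D12Fence* fence,
                          UINT64& fenceValue,
                          ID3D12CommandList* const* computeLists,
                          UINT numLists)
    {
        graphicsQueue->Signal(fence, ++fenceValue); // graphics marks the dependency point
        computeQueue->Wait(fence, fenceValue);      // the wait happens on the GPU, not the CPU
        computeQueue->ExecuteCommandLists(numLists, computeLists);
    }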
     
  20. sammarbella

    sammarbella Guest

    Messages:
    3,929
    Likes Received:
    178
    GPU:
    290X Lightning CFX (H2O)
