DX12 and Mantle combine CrossFire memory and have built-in CFX support!

Discussion in 'Videocards - AMD Radeon Drivers Section' started by nzweers, Feb 20, 2015.

  1. nzweers

    nzweers Master Guru

    Messages:
    241
    Likes Received:
    0
    GPU:
    Inno3D 980Ti iChill Ultra
    That means my two 3GB HD 7950 cards now have 6GB to use! And new DX12/Mantle games will have day-one CrossFire support, instead of a month later, or not at all!
    I was planning to go for a single-card solution as my next upgrade, but reading this I think I'll go CFX again!


    Sources:

    http://wccftech.com/geforce-radeon-gpus-utilizing-mantle-directx-12-level-api-combine-video-memory/


    http://www.hardwarecanucks.com/foru...65244-catalyst-14-1-amd-unleashes-mantle.html
     
  2. nhlkoho

    nhlkoho Ancient Guru

    Messages:
    7,707
    Likes Received:
    329
    GPU:
    RTX 2080ti FE
    It's not automatic, though, so it's not guaranteed. Devs would still have to program CrossFire support in; it would just be easier for them to do. Lazy devs (ahem... Ubisoft) may not give a $hit anyway.
     
  3. PhantomGamers

    PhantomGamers Active Member

    Messages:
    79
    Likes Received:
    2
    GPU:
    NVIDIA GTX 1080 EVGA FTW
    I just want to point out that they'll only combine CrossFire memory if they're programmed to do so. I'd imagine most games will still just use mirroring, so only one card's VRAM total will be available. Battlefield 4 and Dragon Age: Inquisition, for example, use Mantle and still use mirroring.
     
  4. crz

    crz Member Guru

    Messages:
    186
    Likes Received:
    0
    GPU:
    GeForce GTX 1070 ARMOR 8G
    This has been known since even before Mantle launched. The main thing to remember is that it needs explicit control by the game engine: the programmers need to explicitly specify how everything is handled. With Mantle and DX12 it is their responsibility to pass the proper resources to the proper card and to tell each card what to do with them. For example, they can explicitly send half the scene to one card and half to the other, and have each card load only what it needs to render rather than duplicating the entire scene on both. Or they can do even more exotic stuff, like using the otherwise idle second card for extra calculations/simulations while the first card renders something more intensive that cannot be split across cards due to the overhead of the transfers between them.
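    A toy sketch of the memory difference described above, comparing mirroring everything (classic AFR CrossFire) against explicitly splitting resources between cards as Mantle/DX12 allow. This is a made-up accounting model in Python, not real API code; the resource names and sizes are invented for illustration.

    ```python
    # Toy model: per-GPU memory footprint under mirroring vs. explicit splitting.
    # Sizes are in MB and purely illustrative.

    def mirrored_footprint(resources):
        """Classic mirroring: every card holds a full copy of every resource."""
        total = sum(resources.values())
        return {"gpu0": total, "gpu1": total}

    def split_footprint(resources, gpu0_set):
        """Explicit control: each resource lives only on the card that uses it."""
        gpu0 = sum(sz for name, sz in resources.items() if name in gpu0_set)
        gpu1 = sum(sz for name, sz in resources.items() if name not in gpu0_set)
        return {"gpu0": gpu0, "gpu1": gpu1}

    scene = {"left_half_textures": 1400, "right_half_textures": 1400,
             "shared_geometry": 600}

    print(mirrored_footprint(scene))  # both cards hold 3400 MB
    print(split_footprint(scene, {"left_half_textures", "shared_geometry"}))
    ```

    With mirroring, both cards carry the full 3400 MB; with an explicit split, one card carries 2000 MB and the other 1400 MB, which is where the "combined memory" headline comes from.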
     

  5. Fox2232

    Fox2232 Ancient Guru

    Messages:
    9,728
    Likes Received:
    2,188
    GPU:
    5700XT+AW@240Hz
    I broke someone's toys before and I have to break yours too...
    If you fill each of your two cards with 3GB of data for rendering, and for every frame you need just 512MB of data from the other card, how long will it take to transfer that data to the other GPU via PCIe?

    You'll have to repeat this cycle again and again for every frame, and it will limit your fps to 1/(time in seconds it takes to pull that additional data) at best.
    It's the same problem the GTX 970 has: you simply want to avoid using that 512MB at all costs.
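    That 1/(transfer time) ceiling can be put into rough numbers. A quick sketch, assuming nominal theoretical PCIe bandwidth figures (real-world throughput is lower, which makes the ceiling worse):

    ```python
    # Back-of-envelope fps ceiling if each frame must pull 512 MB across PCIe.
    # Bandwidth values are nominal theoretical maxima, used only for illustration.

    MB = 1.0 / 1024.0  # one MB expressed in GB

    def fps_ceiling(transfer_mb, bandwidth_gb_s):
        """Upper bound on fps = 1 / (seconds spent moving the per-frame data)."""
        seconds = (transfer_mb * MB) / bandwidth_gb_s
        return 1.0 / seconds

    print(fps_ceiling(512, 16.0))  # PCIe 3.0 x16 (~16 GB/s): 32.0 fps at best
    print(fps_ceiling(512, 8.0))   # PCIe 2.0 x16 (~8 GB/s):  16.0 fps at best
    ```

    Even before any actual rendering work, the bus transfer alone caps you at around 32 fps on PCIe 3.0 x16, which illustrates why per-frame cross-card pulls are so costly.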

    NVIDIA may solve this with that bridge for Pascal, if it's fast enough, but it would still be smarter to have more on-board memory and mirror it. In the next 2 years we still won't need more than 4GB of VRAM.

    And the VRAM question is affected only by this:
    - the pixel density of the screen versus the texture fidelity needed for the closest objects to have 1:1 clarity.
    - If you have a 4K screen, what resolution does one texture need to cover the entire screen 1:1? How big is that texture (even without compression)?
    - If you have a thousand smaller objects on screen, what total texture resolution do you need on them to get that same 1:1 clarity?
    - Can textures be replaced by smart shader effects to provide even higher fidelity/photorealism?
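    For the 4K question above, a quick back-of-envelope calculation, assuming an uncompressed RGBA8 texture (4 bytes per texel) with no mipmaps:

    ```python
    # Size of one uncompressed texture covering a 4K screen at 1:1 texel-to-pixel.
    # Assumes RGBA8 (4 bytes per texel), no mipmaps, no compression.

    def texture_bytes(width, height, bytes_per_texel=4):
        return width * height * bytes_per_texel

    size = texture_bytes(3840, 2160)
    print(size, "bytes")                  # 33177600 bytes
    print(round(size / 2**20, 1), "MiB")  # ~31.6 MiB
    ```

    So a single full-screen 4K texture is roughly 32 MiB uncompressed; a hundred such unique textures would already eat over 3GB, which is why texture fidelity dominates the VRAM budget.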

    Studios and people using ultra-high-resolution textures have kind of forgotten about good old materials.
     
  6. fantaskarsef

    fantaskarsef Ancient Guru

    Messages:
    10,702
    Likes Received:
    2,873
    GPU:
    2080Ti @h2o
    Maybe I'm wrong, but if SFR worked that great, why don't CFX users have it by default, gaining that potential performance? I don't want to bash, I'm just not sure it's that easy to have at the moment.

    And with CrossFire support being up to the devs, you'll only see this in games optimized for the PC. No simple console port will get it, because no console has two GPUs. So you're free to guess how many devs will do that, which games will profit from it the most, and which devs will even care. Not too different from multi-core support for CPUs: PCs long ago surpassed console core counts and still aren't properly supported.
     
  7. dox_aus

    dox_aus Master Guru

    Messages:
    301
    Likes Received:
    0
    GPU:
    MSi 290 Gaming CF
    Yeah, the OP is drinking the Kool-Aid. Even in games heavily sponsored by AMD, Mantle CrossFire has been spotty. BF4 Mantle CrossFire was dodgy for months at the start, then later broken with CF memory leaks for 4+ months (and probably still leaking on cards with less than 4GB). Thief had to wait at least a month+ for CF Mantle. The Sniper Elite 3 devs claimed they could do amazing things with CF Mantle, yet no CF Mantle appeared (we'll see if they even bother using Mantle at all in Zombie Army Trilogy). Even the BF Hardline beta 2 decided not to bother with Mantle at all for some reason. And these are 'sponsored' Mantle games... heaven help you with console ports from the likes of Ubisoft etc., aka FC4 crossfire lol
     
    Last edited: Feb 21, 2015
  8. The Mac

    The Mac Ancient Guru

    Messages:
    4,408
    Likes Received:
    0
    GPU:
    Sapphire R9-290 Vapor-X
    I don't know why it's an issue; it works fine in DAI, and it's the same engine.
     
  9. stevvie

    stevvie Member Guru

    Messages:
    119
    Likes Received:
    0
    GPU:
    Vtx3D R9 280x
    More empty grasping-at-straws promises from AMD. Like Fox2232 says, transferring from one card to the other is too slow for gaming; that's the main reason 2x3GB = 3GB usable. And we're all familiar enough with Mantle by now to know the main beneficiaries are users with low-powered AMD CPUs. And there was me looking forward to the million frames a second Mantle was going to bring me, silly me ;)
     
    Last edited: Feb 23, 2015
