back on the red team

Discussion in 'Videocards - AMD Radeon' started by xvcardzx, Jul 4, 2015.

  1. Extraordinary

    Extraordinary Ancient Guru

    Messages:
    19,558
    Likes Received:
    1,636
    GPU:
    ROG Strix 1080 OC
    Hopefully it is popular with devs and becomes the norm for DX12 games. It would be better as an engine feature rather than a per-game feature IMO: devs do the work once, and all their games in that engine support it.

    Then I'll buy another 980 :)
     
  2. sammarbella

    sammarbella Ancient Guru

    Messages:
    3,929
    Likes Received:
    178
    GPU:
    290X Lightning CFX (H2O)
    Let me rephrase and expand:

    In GTA V, if you have at least 4 GB of VRAM, the limiting factor at 1080p, 1440p or even 4K is DX11 itself... NOT THE AMOUNT OF VRAM.

    If you enable every in-game graphics setting, including advanced settings like view distance, with MSAA and grass maxed, DX11 is the bottleneck.

    These DX11-intensive settings drop GPU usage to as low as 40% in dual, tri or quad CrossFire, no matter if you own 8 GB of VRAM, even with the CPU not maxed, even on a 5960X Extreme Edition.

    So better graphics (like far-distance rendering) CAN'T be combined with high FPS, because even a quad setup can't use its GPU power (low GPU usage...) due to DX11...

    When graphics settings are lowered, DX11 no longer bottlenecks, but the visuals are almost the same in tri or quad CFX; the only change is more FPS.

    Not more graphical detail at the same resolution...
     
  3. ---TK---

    ---TK--- Ancient Guru

    Messages:
    22,106
    Likes Received:
    3
    GPU:
    2x 980Ti Gaming 1430/7296
    Sure, there would need to be an SLI profile with that support too. Nvidia is pretty good about SLI support with its Game Ready drivers.
     
  4. vase

    vase Ancient Guru

    Messages:
    1,652
    Likes Received:
    2
    GPU:
    -
    If I don't cap FPS I mostly get 100-120 FPS, and in some passages/map areas of the game it drops to 80. Those drops from 120 down to 80 are not "smooth".

    But when I'm not running the game just to check max FPS, I use RadeonPro to limit the FPS to around 90 and get a 100% smooth gaming experience,
    with the settings in the screenshot above (everything maxed except MSAA and the advanced graphics section).

    And to be 100% accurate: I even force 8xEQ AA in RadeonPro. It seems to cost less performance than the in-game MSAA setting.


    The only thing I can think of is that GTA V somehow has memory management that works well with a CFX setup (of course not double the VRAM, but maybe some other algorithm that does better than a single 2 GB card),

    but it also has to do with the 1023.3 modded drivers, which seem to work pretty well with GCN 1.0 CFX cards in GTA V, and no worse in other games anyway.

    Of course, if I run at 1080p I have no problems at all with any driver...
    then I get a stable 110-120 FPS.
     

  5. ---TK---

    ---TK--- Ancient Guru

    Messages:
    22,106
    Likes Received:
    3
    GPU:
    2x 980Ti Gaming 1430/7296
    Yeah, like I said, it isn't all that easy to run out of VRAM. It took max settings with 8xAA at 1440p to do it.
     
  6. Fox2232

    Fox2232 Ancient Guru

    Messages:
    11,808
    Likes Received:
    3,370
    GPU:
    6900XT+AW@240Hz
    Please stop repeating this over-the-top marketing nonsense. There is not a single graphics card on the market today where it would work.
    Do not expect one GPU to read even 1 GB of data from the other card's VRAM.

    It takes 1 GB / (x GB/s) of time to do one such read cycle [x stands for the PCIe bandwidth].
    And even if just one card reads from the other at full speed while the other reads nothing, it will butcher the frame rate the same way as reading that data from system memory over the same PCIe bus.
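
    A quick back-of-envelope sketch of that formula in Python (the bandwidth figure is an assumption, roughly the effective rate of a PCIe 3.0 x16 link):

        # Time for one GPU to pull 1 GB from the other card's VRAM
        # over PCIe. Figures are assumptions, not measurements.
        data_gb = 1.0              # data read per frame, in GB
        pcie_gb_per_s = 15.75      # ~PCIe 3.0 x16 effective bandwidth
        read_time_s = data_gb / pcie_gb_per_s
        print(f"read time per frame: {read_time_s * 1000:.1f} ms")  # ~63.5 ms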
     
  7. Extraordinary

    Extraordinary Ancient Guru

    Messages:
    19,558
    Likes Received:
    1,636
    GPU:
    ROG Strix 1080 OC
    You're developing DX12, are you?

    No. So why not wait and see instead of 'knowing everything' about something not released yet?

    My posts are based on info previously published about DX12, nothing more.
     
  8. theoneofgod

    theoneofgod Ancient Guru

    Messages:
    4,669
    Likes Received:
    285
    GPU:
    RX 580 8GB
    Wouldn't it be possible for both cards' VRAM to be combined into one large pool of memory, like what happens with RAM?
    Both are directly connected to the CPU, so I don't see why not.
     
  9. Fox2232

    Fox2232 Ancient Guru

    Messages:
    11,808
    Likes Received:
    3,370
    GPU:
    6900XT+AW@240Hz
    That's the original idea behind this nonsense. It is easy to do; both AMD and nVidia can already share VRAM between cards.
    But why would you read data over a 16 GB/s PCIe bus when the GPU is bandwidth-limited even on a 120 GB/s local VRAM bus?
    Consider why high-end cards have had 384-bit buses until now, and why only compression let nVidia drop to a 256-bit bus.

    If you read data via PCIe from the other card, you may as well read it from system memory, because PCIe bandwidth from the graphics card to the CPU is lower than the bandwidth between the CPU and system memory (at least on modern Intel CPUs).
    And most of us know what happens when a GPU has to fetch data from system memory.
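
    To put rough numbers on that hierarchy, a minimal Python sketch (all bandwidth figures are representative 2015-era assumptions):

        # Any read that crosses PCIe is capped by the PCIe link, so
        # the other card's VRAM is no faster than system RAM from
        # the reading GPU's point of view. Figures are assumptions.
        local_vram = 320.0   # GB/s, e.g. a 290X's 512-bit GDDR5 bus
        system_ram = 25.6    # GB/s, dual-channel DDR3-1600
        pcie_x16   = 15.75   # GB/s, PCIe 3.0 x16 effective
        remote_vram_read = min(local_vram, pcie_x16)  # -> 15.75 GB/s
        system_ram_read  = min(system_ram, pcie_x16)  # -> 15.75 GB/s
        print(remote_vram_read, system_ram_read)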
     
  10. theoneofgod

    theoneofgod Ancient Guru

    Messages:
    4,669
    Likes Received:
    285
    GPU:
    RX 580 8GB
    By your logic, VRAM is slower than RAM?
     

  11. Fox2232

    Fox2232 Ancient Guru

    Messages:
    11,808
    Likes Received:
    3,370
    GPU:
    6900XT+AW@240Hz
    Once more: you want the 1st GPU to read data from the 2nd graphics card's VRAM. At what speed will that data be read/accessed?
    Answer: the speed will be limited by the PCIe link.

    At what speed can a GPU get data from system RAM?
    Answer: the memory controller sits in the CPU, just like the PCIe controller, and the CPU accesses system RAM faster than it can send data over PCIe. So PCIe is again the limiting factor.

    So, is there an actual benefit to reading data from the 2nd card instead of from system memory?
    Answer: no, unless you need to free up bandwidth between the CPU and system memory for other tasks.

    Now for some actual thinking:
    How long will it take the 1st GPU to read 1 GB of data from the 2nd card's VRAM?
    (In case you did not notice the little hint I wrote above: it has to go via PCIe, which has a certain maximum speed.)
    Give us the best-case scenario.

    Since you have to read that 1 GB every single frame, what will the resulting frame rate be?
    (Another hint: FPS and frame time, the time to render one frame, are inversely related.)
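
    Working that exercise through in Python (the PCIe figure is an assumption, a best case for a PCIe 3.0 x16 link):

        # If 1 GB must cross PCIe every frame, the transfer alone
        # bounds the frame rate. Bandwidth figure is an assumption.
        pcie_gb_per_s = 15.75               # PCIe 3.0 x16, best case
        frame_time_s = 1.0 / pcie_gb_per_s  # seconds spent per frame on the copy
        max_fps = 1.0 / frame_time_s        # fps = 1 / frame time
        print(f"upper bound from the copy alone: {max_fps:.2f} fps")  # ~15.75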
     
  12. theoneofgod

    theoneofgod Ancient Guru

    Messages:
    4,669
    Likes Received:
    285
    GPU:
    RX 580 8GB
    If GPU VRAM is so much slower than RAM, why don't GPUs use RAM instead of VRAM?
     
  13. lexer98

    lexer98 Master Guru

    Messages:
    660
    Likes Received:
    2
    GPU:
    GTX 1070 - WC
    :3eyes:
    VRAM is WAAAAY faster than system RAM...
     
  14. vase

    vase Ancient Guru

    Messages:
    1,652
    Likes Received:
    2
    GPU:
    -
  15. Extraordinary

    Extraordinary Ancient Guru

    Messages:
    19,558
    Likes Received:
    1,636
    GPU:
    ROG Strix 1080 OC
    Mantle is the first graphics API to transcend this behavior and allow that much-needed explicit control. For example, you could do split-frame rendering with each GPU and its respective framebuffer handling 1/2 of the screen. In this way, the GPUs have extremely minimal information, allowing both GPUs to effectively behave as a single large/faster GPU with a correspondingly large pool of memory.

    Ultimately the point is that gamers believe that two 4GB cards can’t possibly give you the 8GB of useful memory. That may have been true for the last 25 years of PC gaming, but that's not true with Mantle, and it's not true with the low-overhead APIs that follow in Mantle's footsteps. – @Thracks (Robert Hallock, AMD)

    http://*************/geforce-radeon-gpus-utilizing-mantle-directx-12-level-api-combine-video-memory/
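
    A toy Python sketch of the memory accounting behind that claim (all numbers are hypothetical; this is an idealized best case under explicit multi-GPU control, not how any shipping game works):

        # AFR duplicates the whole working set on both cards, so two
        # 4 GB cards still act like 4 GB. Idealized SFR with explicit
        # control duplicates only shared assets and splits the rest
        # between the two cards' memories.
        card_vram_gb = 4.0
        shared_gb = 1.0   # assets both screen halves need (assumption)
        afr_usable = card_vram_gb                                # 4.0 GB
        sfr_usable = shared_gb + 2 * (card_vram_gb - shared_gb)  # 7.0 GB
        print(f"AFR: {afr_usable} GB, idealized SFR: {sfr_usable} GB")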
     

  16. Fox2232

    Fox2232 Ancient Guru

    Messages:
    11,808
    Likes Received:
    3,370
    GPU:
    6900XT+AW@240Hz
    Then apply critical thinking.

    I gave all the needed information here twice, and in other threads in detail, with exact best-case calculations.
    theoneofgod has already proven that he either lacks basic knowledge of the parts of a PC system or the reasoning required to solve a simple problem, one that 10-year-old children solve daily in school without needing to know how PC buses work.
     
  17. Extraordinary

    Extraordinary Ancient Guru

    Messages:
    19,558
    Likes Received:
    1,636
    GPU:
    ROG Strix 1080 OC
    My critical thinking is this: you don't know, no one knows. I am providing info from AMD; you are trying to convince us that you are right with nothing to back your claims up other than yourself.

    Personally I'll wait until DX12 arrives rather than take the word of someone who thinks they know better than the GPU manufacturers.

    Cheers
     
  18. sammarbella

    sammarbella Ancient Guru

    Messages:
    3,929
    Likes Received:
    178
    GPU:
    290X Lightning CFX (H2O)
    There is no "special memory management" in a CFX setup; there is a simple fact that makes it perform better:

    You are using two CrossFired GPUs in place of a single GPU; if scaling is good, you have almost double the GPU power.

    :)

    This simple fact explains the difference, because the doubled total amount of VRAM in CFX is not an advantage over a single GPU's VRAM in DX11 (only a promise for DX12...).
     
  19. ---TK---

    ---TK--- Ancient Guru

    Messages:
    22,106
    Likes Received:
    3
    GPU:
    2x 980Ti Gaming 1430/7296
    Yeah, I think I'll wait for the DX12/Windows 10 release; no sense in arguing over DX12.
     
  20. sammarbella

    sammarbella Ancient Guru

    Messages:
    3,929
    Likes Received:
    178
    GPU:
    290X Lightning CFX (H2O)
    I agree.

    Until DX12's theoretical advantages are proven real in games we can play, or proven wrong because game devs don't use them (multi-GPU profiles, unified multi-GPU memory usage, etc.), the DX12 discussion is useless.
     
