Both Mantle and DX12 can combine video memory

Discussion in 'Frontpage news' started by Hilbert Hagedoorn, Feb 3, 2015.

  1. A M D BugBear

    A M D BugBear Ancient Guru

    Messages:
    3,679
    Likes Received:
    477
    GPU:
    4 GTX 970-Quad Sli

    5GB will be good. I tried running some stuff at 5K resolution; it ran slow but looked AWESOME. Some UE4 tech demos look OUTSTANDING, but running with a full 7-8GB of VRAM in use? A 970 SLI setup would be down on its good-for-nothing knees, begging for extra mercy.

    Our cards as of now, like I have stated once before, aren't fast enough to fully utilize 8GB of VRAM, and even if they did, how fast would the card actually run? Dirt slow. Slow as molasses.

    Even with the new Radeon 3xx series, 8K resolution with maxed-out settings? I think even three of them at full scaling speed would still have a hard time, but this is just my own opinion. I would love to see 8K benchmarks with that card. Sorry for going off-topic here.
     
    Last edited: Feb 3, 2015
  2. A M D BugBear

    A M D BugBear Ancient Guru

    Messages:
    3,679
    Likes Received:
    477
    GPU:
    4 GTX 970-Quad Sli

    AGREED. Maybe not since the introduction of the first GPU to use a unified shader architecture:

    The GEFORCE 8800 GTX!!

    It's the only card in existence that I can remember coming close to, or even going over, double the performance of its predecessor, the GeForce 7.
     
    Last edited: Feb 3, 2015
  3. reix2x

    reix2x Master Guru

    Messages:
    415
    Likes Received:
    71
    GPU:
    HIS 4870 1GB
    As I see it, the issue is in the game development community; it isn't unified at all. Each game studio makes its games however it wants, even optimizing a game for whoever pays more money (AMD or Nvidia), or "unoptimizing" it for PC in favor of consoles.

    Year after year the hardware vendors make better and better GPUs, but the new games we see are not well optimized at all (sure, there are exceptions).
     
  4. (.)(.)

    (.)(.) Banned

    Messages:
    9,094
    Likes Received:
    0
    GPU:
    GTX 970
    Unless it's easy to implement, doesn't require an engine to be built specifically for it, and can be applied to any game, it won't be massively adopted. Just like DX11 features.
     

  5. A M D BugBear

    A M D BugBear Ancient Guru

    Messages:
    3,679
    Likes Received:
    477
    GPU:
    4 GTX 970-Quad Sli

    A game won't move, period, with that amount of VRAM in use; whether at 8GB or a crazy 16GB, it wouldn't even budge.
     
    Last edited: Feb 3, 2015
  6. EspHack

    EspHack Ancient Guru

    Messages:
    2,694
    Likes Received:
    127
    GPU:
    ATI/HD5770/1GB
    ^ Maybe 3x 4K monitors in Eyefinity.
     
  7. DeskStar

    DeskStar Maha Guru

    Messages:
    1,279
    Likes Received:
    219
    GPU:
    EVGA 3080Ti/3090FTW
    Not sure where a lot of your information comes from, if anything other than opinions... And you still haven't touched on anything I've stated, only contradicted yourself in some ways. The TITANs are fully capable of performing well with their 6GB, and it would be pretty damn impressive to have 24GB of VRAM truly accessible. Seeing the information panel in a game like Max Payne telling you your computer has 24GB of VRAM is impressive as it is, but I couldn't imagine it being true... :puke2: I always thought it was weird how Max Payne does that.

    I'm completely satisfied :banana: with how well my quad SLI setup runs with my TITANs OC'd through the roof (I'm not sure what you're running or using to make such bold judgments). If the scaling doesn't do well, I simply switch off a card, slap on a bridge, and go with triple... I'm just waiting to see if I want a 4K G-Sync monitor, and whether they'll offer one with a potentially higher refresh rate. And I'm always looking to see if the Asus IPS version is up to snuff for gaming...

    With the newer APIs sounding like they will allow amazing things through optimization and better throughput, everything could be just on the horizon. 4K at 60Hz has only recently become relevant, so I'm sure coding for it hasn't been a true concern altogether. With future driver optimizations, scaling and performance will always improve over time.

    I just hope they don't let us old dogs fall by the wayside with future updates...

    If there's information behind your statements, then please provide it.
     
    Last edited: Feb 3, 2015
  8. A M D BugBear

    A M D BugBear Ancient Guru

    Messages:
    3,679
    Likes Received:
    477
    GPU:
    4 GTX 970-Quad Sli
    Just saying that with such high amounts of VRAM, our cards today won't push frames as fast as people hope, currently. I can understand 5-6GB of VRAM, but 8GB or higher? That requires a tremendous amount of GPU power, and keeping the frame rate at 60+ fps at all times? We won't be seeing that for quite some time.

    Do you use Nvidia Inspector to further enhance your AA on top of the game's own as well?

    I usually do that too, lol. It drops the fps even further.

    Although I love your TITANs, mainly because of the VRAM :). I could use some extra here.
     
    Last edited: Feb 4, 2015
  9. brunopita

    brunopita Banned

    Messages:
    611
    Likes Received:
    0
    GPU:
    MSI Gaming R9 270X 2GB
    I hope this works for 3D render engines that use GPUs; it's going to be amazing.
     
  10. -Tj-

    -Tj- Ancient Guru

    Messages:
    17,214
    Likes Received:
    1,941
    GPU:
    Zotac GTX980Ti OC
    I would say the latter...

    And we all know how much game devs like to fiddle with APIs just to make some extra GPU features possible... yeah, not much.
     

  11. Ryu5uzaku

    Ryu5uzaku Ancient Guru

    Messages:
    7,085
    Likes Received:
    288
    GPU:
    6800 XT
    Tbh, considering the 512MB segment is still faster than the PCIe bus, I would be amazed if it slowed anything down much. Maybe some people just have a defective 512MB segment.
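    For reference, here is a rough comparison of the numbers involved, in Python. The segment bandwidths are ballpark figures from third-party analysis of the 970's memory layout, not official specs:

        # GTX 970 memory layout: approximate bandwidth figures.
        # All numbers are ballpark estimates, not official specs.
        fast_segment_gb_s = 196.0   # 3.5GB segment, 7 of 8 memory controllers
        slow_segment_gb_s = 28.0    # 0.5GB segment, single 32-bit controller
        pcie_3_x16_gb_s   = 15.8    # PCIe 3.0 x16 theoretical one-way peak

        ratio = slow_segment_gb_s / pcie_3_x16_gb_s
        print(f"slow segment is ~{ratio:.1f}x the PCIe 3.0 x16 rate")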
     
  12. ScoobyDooby

    ScoobyDooby Ancient Guru

    Messages:
    7,114
    Likes Received:
    86
    GPU:
    1080Ti & Acer X34
    It could also be a problem with certain engines, or even with the game Shadow of Mordor itself. I found I had wildly varying framerates at different places within the game:

    a solid 100fps in some areas, then I'd look at nothing in particular to the left or right and it would shoot down to 40fps... ??? And my memory usage was below 3.5GB during these times. It happened with settings both high and low.
     
  13. -Tj-

    -Tj- Ancient Guru

    Messages:
    17,214
    Likes Received:
    1,941
    GPU:
    Zotac GTX980Ti OC
    ^
    Bigger rock cliffs, or rocks and their textures (something with rocks in general), have strange latency issues, at least that's what I saw. But it didn't drop that much here: from ~70-80fps to the mid-55s for a split second, and only if I panned the camera really slowly (ultra textures for "6GB").
     
  14. KissSh0t

    KissSh0t Ancient Guru

    Messages:
    9,743
    Likes Received:
    3,628
    GPU:
    ASUS RX 470 Strix
    It took me a few moments to get it...

    xD

    I really hope game developers push games out with DX12 support much more quickly than we have seen with previous iterations.

    And I hope Mantle gains more ground.
     
  15. SamW

    SamW Master Guru

    Messages:
    540
    Likes Received:
    0
    GPU:
    8800GTX
    I guess it depends on how many shaders you have, but is cutting the frame buffer in half going to save you that much memory?

    Unless you actually have full memory sharing, you are still going to have to cache textures and geometry on both cards separately. The memory saved by being able to load some data into one card but not the other seems inconsequential. I don't see a case (other than completely different scenes being rendered by each card) where a texture is sent to one GPU but the other GPU somehow doesn't need it and can cache something else instead. If both GPUs are rendering the same scene, each will inevitably need that texture at some point, so each GPU will need a copy and two copies will still be required.

    Also, split-frame rendering seems like it could break a lot of fullscreen shaders, unless you overdraw enough across the split boundaries to get the data each shader needs.
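    To put rough numbers on that, here is a back-of-the-envelope sketch in Python. The render-target count and texture-pool size are my own assumptions, purely for illustration:

        # Rough estimate (assumed figures): how much memory does
        # split-frame rendering actually save per GPU at 4K?
        WIDTH, HEIGHT = 3840, 2160
        BYTES_PER_PIXEL = 4          # RGBA8
        RENDER_TARGETS = 6           # assumed: color + depth + a few G-buffer targets

        full_rt_mb = WIDTH * HEIGHT * BYTES_PER_PIXEL * RENDER_TARGETS / 2**20
        half_rt_mb = full_rt_mb / 2  # each GPU renders half the frame

        texture_pool_mb = 3000       # assumed: ~3GB of textures, duplicated on both GPUs

        print(f"render targets, full frame : {full_rt_mb:7.1f} MB")
        print(f"render targets, half frame : {half_rt_mb:7.1f} MB")
        print(f"saving per GPU             : {full_rt_mb - half_rt_mb:7.1f} MB")
        print(f"duplicated texture pool    : {texture_pool_mb:7.1f} MB")

    Under these assumptions, halving the render targets saves under 100MB per GPU, while the duplicated texture pool is measured in gigabytes.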
     

  16. Fox2232

    Fox2232 Ancient Guru

    Messages:
    11,809
    Likes Received:
    3,366
    GPU:
    6900XT+AW@240Hz
    So many people don't get it at all and just ride the hype train.
    It has to be derailed with reality!

    What is the memory bandwidth of your card: 150GB/s? 220GB/s? 320GB/s?
    And how much data can one card get from the second via PCIe every second, while the second is accessing data from the first at the same time?
    PCIe 3.0 x16 ≈ 16GB/s; if we omit all communication other than between the two GPUs, one GPU can access data from the second's VRAM at 8GB/s.
    If it needs to fetch just 512MB from there to render each frame, you will get 16fps at best.

    So, dual-GPU cards: how much faster is their interconnect on the PCB? Even if it were some miraculous 64GB/s link, that means 32GB/s in each direction, and 512MB of accessed data per frame would allow for 64fps at best.
    And we are talking about quite small data chunks (512MB), given that many seem to be hyping happily using 2x3GB to 2x6GB of VRAM.

    This is good news, but for hardware designed in the future with this in mind.
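    A quick sanity check of that arithmetic in Python, using the same figures assumed in the post above:

        # Frame-rate ceiling when each frame must pull data from the other
        # GPU's VRAM over the link. Figures are those assumed in the post
        # above, not measurements.
        def max_fps(link_gb_s, remote_mb_per_frame):
            # MB/s available on the link divided by MB needed per frame
            return (link_gb_s * 1024.0) / remote_mb_per_frame

        # PCIe 3.0 x16: ~16GB/s shared, so ~8GB/s per GPU when both fetch.
        print(max_fps(8, 512))    # 16.0 -> 16fps at best
        # Hypothetical 64GB/s on-PCB link, 32GB/s per direction.
        print(max_fps(32, 512))   # 64.0 -> 64fps at best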
     
  17. TheDeeGee

    TheDeeGee Ancient Guru

    Messages:
    7,355
    Likes Received:
    1,528
    GPU:
    NVIDIA GTX 1070 8GB
    I wonder why this took so long.
     
  18. DeskStar

    DeskStar Maha Guru

    Messages:
    1,279
    Likes Received:
    219
    GPU:
    EVGA 3080Ti/3090FTW
    I get what you are trying to say, but from my knowledge of GPUs and their VRAM (and I could still be wrong here): doesn't the RAM control the flow of information (textures, cache and other stuff) coming over the mainboard's RAM/CPU communication?

    Theoretically, with a fast enough CPU and mainboard RAM, you should be able to talk to the graphics cards all day long with no issues, and having more of any of these items in the equation should only help.

    I get that there will always be different speeds at which things communicate, and therefore the potential for a bottleneck. But even as it sits today, our SSDs are still the bottleneck for throughput to the rest of the system, which is why I'll never go with anything less than a RAID 0 config for performance. But if things are done right and the system RAM is utilized properly to hold and send data to and from the CPU/GPU, we end up with no performance issues at all. I would just think it would all work similarly, albeit using more RAM properly.

    Just my thoughts on how I perceive the system talking to itself.

    I don't think I'll upgrade my system config until something like Nvidia's NVLink or something similar becomes mainstream, so the CPU and GPU can talk directly with nothing interfering, like the bus-speed limits or other hardware limitations we have today.
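    For context, a rough sketch of the approximate peak bandwidths of each link in a circa-2015 system (commonly cited theoretical figures, not measurements from this thread):

        # Approximate peak bandwidths of each link in a ~2015 gaming PC.
        # Theoretical figures; real-world throughput is lower.
        links_gb_s = {
            "SATA III SSD":            0.6,
            "2x SSD RAID 0":           1.2,
            "PCIe 3.0 x16":           15.8,
            "DDR3-1866 dual channel": 29.9,
            "GDDR5 (GTX TITAN)":     288.4,
        }
        for link, bw in links_gb_s.items():
            print(f"{link:24s} {bw:6.1f} GB/s")

    The storage link sits orders of magnitude below VRAM, which is why the SSD shows up as the bottleneck long before PCIe or system RAM does.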
     
  19. Ryu5uzaku

    Ryu5uzaku Ancient Guru

    Messages:
    7,085
    Likes Received:
    288
    GPU:
    6800 XT
    A possible culprit as well. No way it is the 512MB section, lol, taking in all the facts about it. They gave out wrong specs, nothing else.
     
  20. A M D BugBear

    A M D BugBear Ancient Guru

    Messages:
    3,679
    Likes Received:
    477
    GPU:
    4 GTX 970-Quad Sli
    So when do you all think this will go into full effect?

    Going by when they first mentioned it, probably as early as this summer??? Do you all think this will go into full effect once the Radeon 3xx series is released? I think it's very possible.
     
