Hopefully it is popular with devs and becomes the norm for DX12 games. It would be better if it were an engine feature rather than a per-game feature, IMO: devs can do the work once, and all their games in that engine support it. Then I'd buy another 980.
Let me rephrase and expand: in GTA V, if you have at least 4 GB of VRAM, the graphical limiting factor at 1080p, 1440p or even 4K is DX11, NOT the amount of VRAM. If you enable all in-game graphics settings, including advanced settings like draw distance, with MSAA and grass settings maxed, DX11 is the bottleneck. These DX11-intensive settings drop GPU usage to as low as 40% in tri/quad CrossFire, no matter if you own 8 GB of VRAM, even with the CPU not maxed, even with a 5960X Extreme Edition. So better graphics (like far distance rendering) CAN'T be used concurrently with high FPS, because even quad CFX can't use its full GPU power (low GPU usage) due to DX11. When graphics settings are lowered, DX11 is no longer the bottleneck, but the visuals are almost the same in tri or quad CFX; the only change is more FPS, not more graphical detail at the same resolution.
Sure, there would be a need for an SLI profile with that support too. Nvidia is pretty good about SLI support with Game Ready drivers.
If I don't cap FPS I get mostly 100-120 FPS, and in some passages/map areas it drops to 80. Those drops from 120 down to 80 are not smooth. But when I'm not running the game to check max FPS, I use RadeonPro to limit the FPS to around 90 and I get a 100% smooth gaming experience with the settings in the screenshot above (everything maxed except MSAA and the advanced graphics section). To be 100% accurate, I even force 8xEQ AA in RadeonPro; it seems to drain less performance than the in-game MSAA setting. The only explanations I can think of are that either GTA V somehow has memory management that works well with a CFX setup (of course not double VRAM, but maybe some other algorithm that is better than a single 2GB card), or it has to do with the 1023.3 modded drivers, which seem to work pretty well with CFX GCN 1.0-architecture cards in GTA V and no worse in other games. Of course, if I run at 1080p I have no problems at all with any driver; then I get a stable 110-120 FPS.
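A quick sketch of why those drops feel rough while the 90 FPS cap feels smooth: what the eye notices is frame-time consistency, not peak FPS. The numbers below just mirror the ones described above.

```python
# Frame time is the inverse of fps; big fps swings mean big frame-time swings.
def frame_time_ms(fps):
    return 1000.0 / fps

for fps in (120, 90, 80):
    print(f"{fps:>3} fps = {frame_time_ms(fps):.1f} ms per frame")

# Swinging between 8.3 ms and 12.5 ms is a ~50% jump in frame time, which
# reads as stutter; a ~90 fps cap holds a steady ~11.1 ms instead.
```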
Please stop repeating this over-the-top marketing nonsense. There is not a single graphics card on the market today where it would work. Do not expect one GPU to read even 1 GB of data from the other card's VRAM: it takes 1 GB / (x GB/s) to complete that one read cycle, where x is the PCIe bandwidth. And even if just one card reads from the other at full speed and the other reads nothing, it will butcher the frame rate the same way as reading that data from system memory over the same PCIe bus.
You're developing with DX12, are you? No? So why not wait and see instead of "knowing everything" about something not released yet. My posts are based on info previously published about DX12, nothing more.
Wouldn't it be possible for both cards' VRAM to be combined into one large pool of memory, like what happens with RAM? Both are directly connected to the CPU, so I don't see why not.
That's the original idea behind this nonsense. It is easy to do; both AMD and Nvidia can even now share VRAM between cards. But why would you read data over a 16 GB/s PCIe bus when the GPU is limited even by a 120 GB/s local VRAM bus? Think about why high-end cards have had 384-bit buses until now, and why only thanks to compression could Nvidia go down to a 256-bit bus. If you read data over PCIe from the other card, you might as well read it from system memory, because PCIe bandwidth from the graphics card to the CPU is lower than the bandwidth between the CPU and system memory (at least on modern Intel CPUs). And most of us know what happens when a GPU needs data from system memory.
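To put rough numbers on that bandwidth hierarchy: the figures below are assumptions for a typical high-end system of this era, but the ordering is the point, local VRAM is an order of magnitude faster than anything reachable over PCIe.

```python
# Rough bandwidth comparison; exact figures vary by card and platform.
bandwidth_gb_s = {
    "local VRAM (384-bit GDDR5 @ 7 Gbps)": 384 / 8 * 7,  # ~336 GB/s
    "system RAM (dual-channel DDR3-1600)": 2 * 12.8,     # ~25.6 GB/s
    "PCIe 3.0 x16 (theoretical)": 15.75,                 # ~16 GB/s
}

for bus, bw in bandwidth_gb_s.items():
    print(f"{bus}: {bw:.1f} GB/s")
```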
Once more. You want the 1st GPU to read data from the 2nd graphics card's VRAM. At what speed will it be read/accessed? Answer: the speed will be limited by PCIe. At what speed can the GPU get data from system RAM? Answer: the memory controller is in the CPU, the same as the PCIe controller, and the CPU accesses system RAM faster than it can send data over PCIe, so PCIe is the limiting factor again in this case. So is there any actual benefit to reading data from the 2nd card instead of from system memory? Answer: no, unless you need to free up bandwidth between the CPU and system memory for other tasks. Now for some actual thinking: how long will it take the 1st GPU to read 1 GB of data from the 2nd card's VRAM? (Here is a little hint, if you did not notice what I wrote above: it has to go over PCIe, which has a certain maximum speed.) Work out the best-case scenario. Since you have to read that 1 GB every single frame, what will the resulting frame rate be? (Another hint: there is an inverse relationship between FPS and frame time, which is the time to render one frame.)
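Working the exercise, best case, assuming PCIe 3.0 x16 at its full theoretical throughput and nothing else on the bus:

```python
# Best-case scenario: full theoretical PCIe 3.0 x16 bandwidth, no contention.
pcie_gb_s = 15.75          # PCIe 3.0 x16, theoretical
data_per_frame_gb = 1.0    # read from the 2nd card's VRAM every frame

transfer_ms = data_per_frame_gb / pcie_gb_s * 1000  # ~63.5 ms
max_fps = 1000 / transfer_ms                        # fps = 1 / frame time

print(f"transfer alone: {transfer_ms:.1f} ms -> at most {max_fps:.1f} fps")
# ~15.7 fps ceiling before the GPU has done any actual rendering work.
```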
Mantle is the first graphics API to transcend this behavior and allow that much-needed explicit control. For example, you could do split-frame rendering with each GPU and its respective framebuffer handling 1/2 of the screen. In this way, the GPUs have extremely minimal information, allowing both GPUs to effectively behave as a single large/faster GPU with a correspondingly large pool of memory. Ultimately the point is that gamers believe that two 4GB cards can't possibly give you the 8GB of useful memory. That may have been true for the last 25 years of PC gaming, but that's not true with Mantle and it's not true with the low overhead APIs that follow in Mantle's footsteps. – @Thracks (Robert Hallock, AMD) http://*************/geforce-radeon-gpus-utilizing-mantle-directx-12-level-api-combine-video-memory/
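A conceptual sketch of the split-frame rendering (SFR) idea Hallock describes; this is not a real graphics API, all the names are hypothetical, and it only illustrates how the memory accounting changes between the implicit AFR model and explicit SFR.

```python
# Hypothetical sketch: under explicit SFR each GPU owns a slice of the frame
# and, ideally, holds mostly the resources that slice needs.
SCREEN_W, SCREEN_H = 3840, 2160

class GPU:
    def __init__(self, name, vram_gb):
        self.name = name
        self.vram_gb = vram_gb
        self.region = None  # portion of the frame this GPU owns

def split_frame(gpus):
    """Assign each GPU a horizontal slice of the frame."""
    slice_h = SCREEN_H // len(gpus)
    for i, gpu in enumerate(gpus):
        gpu.region = (0, i * slice_h, SCREEN_W, slice_h)

gpus = [GPU("card0", 4), GPU("card1", 4)]
split_frame(gpus)

# Under AFR (the implicit DX11 model) every card duplicates all resources,
# so two 4 GB cards behave like one 4 GB pool. Under explicit SFR each card
# can, in principle, keep mostly its own half's data, pushing the usable
# pool toward the 8 GB total the quote describes.
print("AFR-style usable pool:", min(g.vram_gb for g in gpus), "GB")
print("Ideal SFR usable pool:", sum(g.vram_gb for g in gpus), "GB")
```

Whether real games get anywhere near that ideal depends on how much data (shadow maps, shared textures, etc.) both halves of the frame still need, which is exactly what the PCIe-bandwidth objection above is about.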
Then apply critical thinking. I have given all the needed information here twice, and in other threads in detail, with exact calculations for best-case scenarios. theoneofgod has already proven that he either lacks basic knowledge of the parts of a PC system or the intelligence required to solve a simple problem, a problem that 10-year-old children solve daily in school without needing to know how PC buses work.
My critical thinking is this: you don't know, no one knows. I am providing info from AMD; you are trying to convince us that you are right with nothing to back up your claims other than yourself. Personally, I'll wait until DX12 arrives rather than take the word of someone who thinks they know better than the GPU manufacturers. Cheers
There is no such "special memory management" in a CFX setup; a simple fact makes it perform better: you are using two crossfired GPUs in place of a single GPU, and if scaling is good you have almost double the GPU power. This simple fact explains the difference, because the doubled total amount of VRAM in CFX is no advantage over a single GPU's VRAM in DX11 (only a promise for DX12...).
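A minimal sketch of that point, with purely illustrative numbers: in DX11 AFR CrossFire the win comes from GPU scaling, not from pooled VRAM.

```python
# Illustrative numbers only; real scaling varies per game and driver.
single_gpu_fps = 45.0
scaling = 0.9             # assumed 90% CFX scaling efficiency
per_card_vram_gb = 2.0

cfx_fps = single_gpu_fps * (1 + scaling)  # "almost double GPU power"
usable_vram_gb = per_card_vram_gb         # AFR mirrors data on both cards

print(f"CFX fps: {cfx_fps:.0f}, usable VRAM: {usable_vram_gb} GB (not 4 GB)")
```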
I agree. Until DX12's theoretical advantages are proven real in games we can play, or proven moot because game devs don't use them (multi-GPU profiles, unified multi-GPU memory usage, etc.), the DX12 discussion is useless.