Why should I? Check more reviews yourself, you have hands that can type. I'm not going to waste my time with you 3 anymore.
We're all wrong because you're right then, yes? I can see why you no longer want to waste energy. It's hard work making things up.
I didn't read this HH article, thanks for sharing. I'll do it later when I can. No need to re-open the 3.5gate "affair" in this discussion!
My last response, read carefully. A 390X with 50MHz more on the core, a ~250MHz higher memory OC and an extra 4GB of VRAM uses the same power as a stock 290X, give or take 10-15W. Now OC that 290X by that much, if you can: I'm 1000% sure it will use a lot MORE than just 15W extra, unlike the custom OC'd 390X. No rocket science here. 1. http://www.techspot.com/review/1019-radeon-r9-390x-390-380/page7.html 2. http://hexus.net/tech/reviews/graphics/84194-sapphire-radeon-r9-390x-tri-x/?page=12 3. http://www.hardwarecanucks.com/foru...646-amd-r9-390x-8gb-performance-review-3.html 4. http://www.overclock3d.net/reviews/gpu_displays/msi_r9_390x_gaming_8g_review/4 5. http://www.kitguru.net/components/graphic-cards/zardon/sapphire-r9-390x-tri-x-8gb-review/22/ Yes, this is the improved-efficiency part of Grenada XT, whether you like it or not.
Wait, are those DirectX 12 charts true? I mean, will AMD cards really be that bad at DirectX 12? If that's so, why would I even think of going AMD? Can someone confirm that?
It's your opinion and I respect it, even if I don't share it. The "untrusted" source wrote this in a 390X review, in reference to the lower performance we got from the "old" 290X with the 15.6 drivers (back then): http://www.eurogamer.net/articles/digitalfoundry-2015-radeon-r9-390x-review To be fair, I still want to see a review using the UNIFIED 15.7 drivers, 290X 8GB vs 390X 8GB at the same core and VRAM clocks, using AB at +50% power limit. MSI uses the same PCB for the MSI GAMING 290X 8GB and the MSI GAMING 390X 8GB. It would be a nice comparison.
I couldn't care less about 10 watts difference. I was talking about paying £120 more for a "newer card" to get the same performance.
The 980 stands out in the tests, but overall AMD performance is massively improved over DX11: 115ms batch submission times down to 4.8ms... Maybe the benchmark was more optimized for Maxwell, or for the 980 (it was the flagship then, I think).
Not my intent, other than to show that Mordor overstates memory requirements. It allocates like mad; if you look up a bunch of reviews on it, you'll find such varied results it's unreal.
My point is that if we run Shadow of Mordor with UHD textures on a GPU with 6GB+ of VRAM (a powerful one, like a 6GB 980 Ti), that amount of VRAM will actually be used (filled/used... not merely allocated) and will be a real advantage over GPUs with less VRAM. You can say that is not the case when the GPU only has 3GB, like the 780, or 4GB (3.5+0.5) like the 970, and the game still works. I think that on those GPUs the Ultra texture setting falls back to High when the UHD textures can't be loaded into VRAM, and the High-quality textures are used instead. I could be wrong, of course, but testing the game only with a 3GB GPU is not proof of it.
A lot of people confuse being able to refresh/process the amount of data in the card's memory, with the inherent advantage of not having to access RAM that the extra vram gives you. Shadow of Mordor is a good example, since VRAM is simply filled with "dumb" texture data, and yet because it is filled, you have much less stutter with much higher texture quality. Extra VRAM is always nice.
This is more or less what I tried to express. You should teach me English grammar (and some vocabulary as well); it could really help me express my thoughts! Sometimes it's very difficult for me to explain what I want to say with the adequate words, and without it being taken as the start of a flame war...
OK, I said I wouldn't reply anymore :D lol, but just to be clear, I was only comparing a custom 290X with 8GB VRAM vs a custom 390X. Well, I don't see such a difference in EU shops. 30€ cheaper (sometimes) is not much, against 64GB/s faster RAM out of the box. IMO the only worthy 290X models: the Vapor-X (coolest VRM) for ~480€, temps from 66 to 70°C; 2nd, the Tri-X OC for ~420€. Vs. the 390X: the Tri-X for ~450€ on average; 2nd, the HIS IceQ2 for ~435€, and this one is actually OK now http://www.techspot.com/articles-info/1019/bench/Temps.png
The XFX 290x was £199 the other week. One of the best 290x cards, the Sapphire Tri-X (blue, with backplate, etc.), was £225 at Amazon the other day. The 390x here is £350. 290x prices go up once they become discontinued; if you're too late for that, then the discussion is pointless. I would pay the 30 euro for the extra 4GB of VRAM. My point was: if you can still get hold of a cheap 290x, buy it over the 390x.
Yeah, what's kind of disappointing is the idea of paying 400 bucks for a 2013 card... I don't know if I should wait for the Fury Nano, or just buy a 390X or GTX 980... I just want the best performer... The GTX 980 looks faster, but the story about Nvidia's bad support for Kepler just freaks me out.
The Nano's price range will be interesting. The Fury still doesn't have prices here; the Nano can't cost more than that. The 390X and Nano can only be priced similarly, or the 390X will drop.
Actually, it is proof of it. If it weren't going to run properly on those cards with ultra textures (thus using more VRAM), it would show problems, which it doesn't. Cards with more memory "using" more memory are just allocating it. The lower frame rates you see between benchmarks are due to GPU power; they're not attributable to VRAM outside of a couple of rare scenarios.
Wow, what have I done? Anyway:

VRAM, 4 vs 8GB: mostly pointless. You may get the occasional micro-freeze from time to time as textures are cached into 4GB of VRAM, but you are not likely to even notice, and that's in 1 out of hundreds of games.

290x vs 390x: both eat the same amount of power and make the same amount of heat, but the 390x has been tweaked and can clock higher without throttling. In some cases even a stock 290x throttled. So the 390x performs better.

390x vs 980 longevity: at this moment nothing indicates that the 980 will be left in the dust when Nvidia releases its next architecture, but it is not entirely impossible.

DX12: the only benchmark you have is the number of draw calls (the number of primitive tasks you can throw at the card). And that only tells you that the drivers can push 10-15 times more work to the GPU per frame without the CPU being the limiting factor. Therefore, if you take any DX11 game and rewrite it for DX12, it will not give you 10-15 times more fps; it will place a 10-15 times lower workload per core. And you have 2 scenarios: 1) the GPU was already saturated under DX11, so under DX12 you get the same fps; 2) the GPU was underutilized under DX11 due to a poor feed, so DX12 will see a boost. (But understand that DX12 will only let you touch potential that goes unused because of the poor driver path on DX11.)

GCN vs Kepler cores and tessellation: Nvidia said that the tessellation in The Witcher's HairWorks is the reason for the possible poor performance on Kepler. But Kepler has the same or better tessellation than the GCN chips that caught up to it. And finally, the low blow: Kepler is down in performance even with HairWorks OFF, so the argument that it's caused by tessellated hair is not entirely right. There is something else in Nvidia GameWorks that caused great Kepler chips to fall to the midrange/entry-level performance of Maxwell (GTX 960). But even with this GameWorks weirdness where Kepler behaves badly, those 780 (Ti)s are still great performers in all other games. And I am sure that if the same happens to the 980, it will again be isolated to those few games.
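The two DX12 scenarios above boil down to a simple bottleneck model: per-frame time is set by whichever side is slower, the CPU submitting draw calls or the GPU rendering them. Here's a toy sketch of that reasoning; all millisecond figures and the 12x speedup factor are made-up illustration numbers, not measurements.

```python
# Toy model: CPU submission and GPU rendering overlap, so frame time is
# bounded by the slower of the two. All numbers are hypothetical.

def fps(cpu_submit_ms: float, gpu_render_ms: float) -> float:
    """Frames per second when the slower side dictates frame time."""
    return 1000.0 / max(cpu_submit_ms, gpu_render_ms)

DX12_CPU_SPEEDUP = 12  # "10-15 times more work per frame" -> ~12x cheaper submission

# Scenario 1: GPU already saturated under DX11 (CPU 5 ms, GPU 20 ms).
# DX12 shrinks the CPU side, but the GPU still needs 20 ms -> same fps.
print(fps(5, 20), fps(5 / DX12_CPU_SPEEDUP, 20))    # 50.0 -> 50.0

# Scenario 2: GPU starved by a slow CPU feed (CPU 30 ms, GPU 20 ms).
# DX12 removes the CPU bottleneck -> fps jumps up to the GPU limit.
print(fps(30, 20), fps(30 / DX12_CPU_SPEEDUP, 20))  # ~33.3 -> 50.0
```

This is why more draw calls per frame doesn't translate directly into more fps: the speedup only appears when the CPU side was the limiting factor to begin with.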
(Objectivity mode OFF: I think it's disturbing to be an Nvidia customer and have to avoid Nvidia GameWorks games.)