As others mentioned, the Radeon VII is likely just a repurposed MI50, not something designed from the ground up for gaming. Navi is the next big hope for gaming, not Vega. The moment they mentioned Vega it should have been obvious what this was: a stopgap until Navi.
Two-year-old performance for the same two-year-old price as the 1080 Ti. The 1080 Ti was the best buy for years!
I expected a little more out of this :/ Sadly it's still a Vega GPU that more or less matches a GTX 1080 Ti and an RTX 2080. It seems Navi is far, far away... I was hoping for a replacement for my GTX 1080 Ti, but this is not it. I'm torn between selling my 1080 Ti and buying a 2080 Ti or not, and I thought I would never spend more than $1,000 on a video card! Decisions, decisions...
The die size increased by 60% for the 2080 Ti (754 mm²) over the 1080 Ti (471 mm²), while bringing only around 20% more CUDA cores (4352 vs. 3584). A larger die also means worse yields, which results in higher cost. They added both the tensor cores and the integer cores, so those add to the size of the die too.
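To put rough numbers on the "larger die means worse yields" point, here's the textbook Poisson yield model sketched in Python. The defect density used is an assumed, purely illustrative figure, not actual TSMC or Nvidia data:

```python
import math

def poisson_yield(die_area_mm2: float, defects_per_cm2: float) -> float:
    """Fraction of dies expected to be defect-free: Y = exp(-D * A)."""
    area_cm2 = die_area_mm2 / 100.0
    return math.exp(-defects_per_cm2 * area_cm2)

D = 0.1  # assumed defect density (defects/cm^2), illustrative only

y_1080ti = poisson_yield(471, D)  # ~62% of dies good
y_2080ti = poisson_yield(754, D)  # ~47% of dies good

# A 60% larger die doesn't just cost 60% more silicon: fewer dies per
# wafer work at all, so cost per good die rises faster than area does.
print(f"1080 Ti yield ~{y_1080ti:.0%}, 2080 Ti yield ~{y_2080ti:.0%}")
```

Whatever the real defect density is, the shape of the curve is the same: yield falls exponentially with die area, which is why big dies are disproportionately expensive.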
The 1080 Ti is not a good example because it doesn't support 2x FP16 per cycle, which requires dedicated hardware. GP100 and Volta/Turing do, which is why I used GP100 for the comparison. Tensor cores aren't necessary for RT, as shown by BF5, which doesn't use them.

The separated INT32 pipeline improves performance per SM on non-RT workloads: https://images.anandtech.com/doci/1...8_Updated090318_1536034900-compressed-011.png Also, INT32 capability wasn't added; it was just separated from the FP32 path for concurrency (Turing has twice the number of dispatch units per SM). Transistor cost per SM should be roughly the same, and AFAIK it doesn't have much to do with RT other than improving scheduling (it was split in Volta too, which doesn't have RT).

So yeah, you can argue that the chip is too big, costs too much, whatever, but the size of the chip has nothing to do with accelerating raytracing. Most of the extra size comes from the cache, which was doubled and typically takes up a fair amount of space, along with the scheduler changes and tensor cores.
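For anyone wondering what the "fixed hardware" for 2x FP16 means: two half-precision values fit in the same 32 bits as one single-precision value, so a GPU that can operate on both halves of a 32-bit register at once doubles FP16 throughput. A quick illustration of the packing in Python/NumPy (just the bit layout, not GPU code):

```python
import numpy as np

# Two FP16 values occupy exactly one FP32 register's worth of bits.
pair = np.array([1.5, -2.25], dtype=np.float16)  # 2 x 16 bits

assert pair.nbytes == np.dtype(np.float32).itemsize  # both are 4 bytes

# The raw bit patterns of the two halves (IEEE 754 binary16):
halves = pair.view(np.uint16)
print([hex(h) for h in halves])  # ['0x3e00', '0xc080']
```

GP100, Volta, and Turing have ALUs that execute one instruction over both packed halves per cycle; Pascal consumer parts like the 1080 Ti lack that datapath, so FP16 runs at (or below) FP32 rate there.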
Any confirmation on what type of drivers the Radeon VII will receive? Does anyone know if it will match the format of the Vega Frontier Edition, for example, considering its very high price? Example: Radeon Pro Software™
My Gigabyte Gaming OC 1080 Ti, new for $629, was a steal back in September, and it still is! AMD would have had me at $599. Good till next generation.
If this is an MI50 card, wouldn't that mean you could connect two GPUs via Infinity Fabric? That was pretty much everyone's dream.
@Hilbert Hagedoorn I hope you will include a good section with content-creation benchmarks (Blender, Premiere, etc.) on this card when reviews are available. It will also be interesting to see whether this card is a good overclocker.

Seems to me a really good price here when you consider it has 16 GB of HBM and 60 CUs. For a compute card that can also game at 4K 60+ fps, it is a good deal for people who want the best of both worlds, tbh; I do not get the outcry over the price tag.

As for pure gaming (and pricing), if one wants AMD then maybe wait for the actual Navi products to emerge, or for the 11xx Nvidia series. I think most of us have too high expectations for AMD at this point in time, but hopefully they'll have the time (and money) now to properly mature the Navi segment so that we'll see proper competition to NGreedia.
"AMD will be employing some mild product segmentation here to avoid having the Radeon VII cannibalize the MI50 – the Radeon VII does not get PCIe 4.0 support, nor does it get Infinity Link support" - Source AnandTech. https://www.anandtech.com/show/13832/amd-radeon-vii-high-end-7nm-february-7th-for-699
I would definitely go for more RAM over RTX.

1) I personally think RTX performance is 2-3 generations away from being really usable at decent resolutions and framerates, and in enough titles to be worth it.
2) Games currently easily use 6-8 GB of VRAM, but the OS and background programs can easily use an additional 2-3 GB. Plus, from time to time I run two games at once, for multiple reasons.
No idea what you are even trying to say with this or how it's even related to the subject. Nvidia had RT tools for offline rendering years before AMD; they hired RT experts years ago and developed dedicated silicon for RT. Obviously Nvidia is taking RT much more seriously than AMD. Running RT on your GPU is easy; all you need to do is enable it, because DXR and Vulkan already do all the work for you. The thing is, a full Vega 64 can't do what the RTX 2060's RT cores do even if it uses the whole GPU for RT (and it still needs to do everything else too, like shading and lighting).
Many games will use more VRAM if more is available. Absolute usage figures are hard to compare, and how using more VRAM translates into a performance advantage is hard to judge. Ultimately the benchmarks should tell you if it really matters. Of course if you have special needs that warrant more VRAM for you, then that may be a good option. But for the average person, the answer "more VRAM is always better" is not quite so clear cut.
I don't find it that expensive; $650 would be ideal. IMO it's a great competitor for Turing in general. Yeah, it's not a 2080 Ti rival, but hey, it's also not frickin' $1,200+... get real, people. I'm pretty sure it will be a fine 1440p card, better than expected. I will personally hold off on a new RTX 2080 for now; not if I can get the same or better performance for less money. The only deal breaker for me now would be thermals: if it stays around 70C and is a good OC'er, let's say at least 1950-2000 MHz, I don't care about TDP, never did. Then I'm sold. Bye bye Nvidia, you've been a love-hate relationship for far too long xD
Tom's is contradicting AnandTech somewhat on PCIe 4.0. They are also saying AMD boards will allow PCIe 4.0 upgrades via BIOS. https://www.tomshardware.com/news/amd-ryzen-pcie-4.0-motherboard,38401.html
Crap, ok, good to know, as Tom's is a known-good source (not that AnandTech isn't). It means this isn't settled until I can find an AMD spec sheet or maybe something on their Twitter feed... might just ask them myself. Thanks @Srsbsns.

That was a really cool article about different OEMs saying their tests concluded that only the PCIe x16 slot closest to the CPU could operate at 4.0 speeds just fine, while the rest would revert to 3.0. So cool, and all via a BIOS update! I really hope AMD approves that, because apparently the OEMs were all saying they needed AMD's approval? :/ Ahh, please AMD, approve it even if you don't officially "support" it, like back in the X79 / Sandy Bridge-E days.
Technically, ATI introduced tessellation back in the DX8 era; it didn't take off, and only a handful of games released with or patched in support for it. ATI's tessellation engine has never been their strong point under DX11 either, but there is never a "proper time" to introduce new capabilities. RTX is for those who want to dabble, and for the developers who want to start bringing DXR to their games now. There is nothing new here about offering graphical options intended for future graphics hardware.