That's also a flawed line of thought, one a genuinely educated and hard-working person should never follow. (FYI, titles and degrees never made anyone a truly educated person; education is a way of living.) On topic: so you'd rather buy an inferior and overpriced product for the sake of competition, just to keep the runner-up alive? No. I would never do that, sorry. I respect the power of my money. I will always buy the best product I can afford.
Only, the performance isn't equal or better. https://www.techpowerup.com/reviews/AMD/Radeon_VII/28.html So now we have a GPU that comes out months after the 2080, runs loud and is power hungry. So yes, if AMD wanted this to be competitive, it should be cheaper, because it's an overall worse product that's playing catch-up.
When there are no 1080 Tis in stock, and Nvidia can comfortably sit on their hands for another year with the current RTX series (AMD out of the picture), what other option is there in that price range than an 8 GB 2080?
To everyone commenting here that the RTX 2080 is way better because it consumes less power at the same performance: think again. Gamers Nexus tested that, and the two are 1:1 in that regard.
You need to take your blinkers off. The Radeon VII uses more power. That's not up for debate, and your own graph proves it. And nobody is saying the 2080 is better just because it consumes less power; power consumption is only one factor. Currently, the 2080 is cheaper, performs better, runs cooler and is a LOT quieter. All of that makes it the better card.
JayzTwoCents thought they might be pre-OC'd. He didn't find much OC headroom, and thermals are near the throttling point.
"We'll grant a recommended award under the condition that our cooler acoustics levels are erroneous and based on an issue with the cooler." Meh, you grant an award to literally everything you review, and we all know the reasons; might as well not mention it.
Overclocking is apparently bugged, but it works OK if you use the driver's auto-OC feature. That said, AMD needs to do some driver work. It goes up to 2.15 GHz.
Some people never acknowledge your work, experience and effort. As if you lack experience after doing this for two decades, lol. Anyway, people claim that 16 GB could be useful at 8K, but I disagree. The card is 4K capable at best. What do you think, Hilbert? I believe it will not manage an average of 30 frames per second at 8K.
If AMD had come out with this before RTX, it would have been a clear-cut buy. Typically for AMD/ATI, they are a bit late and the card needs polishing, so in the first tests the negative sides once again come out. They always release a card that needs further polish, even when it's a good card. Nvidia releases the same but first and really expensive, though it gives bigger numbers to ease the pain a bit. It would be nice to see a cheaper 8 GB VRAM version of this with aftermarket coolers, if they ever make one, and finished drivers. Will the public be prepared to wait, or are they already looking at Navi / GTX 1160 lower down, or the current 2080 higher up?
I am really looking forward to finding out how much clock speed it can gain from overclocking and overvolting (or undervolting) on air cooling.
The only uneducated one here is you. Or perhaps you just got hit hard by Nvidia's marketing rays. These lines are the proof of you being either uneducated or hooked on Nvidia's marketing tricks: "so you preferably buy an inferior and overpriced" and "I will always buy the best product I can afford". You throw around loud comparative words without any technically reasoned arguments to back them up. You judge by one thing only: the great game of the FPS counter (and only in Nvidia-favoured games). Yeah, there are a lot of people like you; for example, those who say a Ford Focus ST is better than a Mercedes C-Class or a BMW 3 Series because it costs less and drives faster. You just pretend not to understand that it's about more than speed and cost.

8 GB of VRAM is small for 4K. I play Rainbow Six Siege with the official Ultra HD texture pack and I cannot crank everything up, because I hit the 8 GB VRAM cap on my Vega 64. And this is a three-year-old game, by the way! So the 8 GB of VRAM on the RTX 2080 is genuinely inferior, because it cannot handle max in-game settings in a three-year-old competitive esports shooter (and that's the official version, without third-party mods).

I can foresee your counter-argument, as a hyped Nvidia fan: "it is a competitive shooter, you don't need graphics, you need tons of FPS!" And I will tell you that I'm not a pro gamer and I like detailed, sharp textures; they look awesome on a 4K screen, way more awesome than your green rays on a 1080p screen. So the 16 GB of VRAM alone constitutes a valid, technically reasoned argument for calling the RTX 2080 inferior to the Radeon VII. And if you personally do not need 16 GB of VRAM, that does not make other people uneducated in comparison to you. You have to understand that, even if it is difficult for you, and respect that your needs do not always correspond to the needs of other people.
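As a rough sketch of why an ultra texture pack can blow past 8 GB of VRAM (illustrative figures only; the texture count and sizes here are assumptions, not Siege's actual asset budget):

```python
# Back-of-envelope VRAM estimate for an "ultra HD" texture set.
# Illustrative numbers only -- not real Rainbow Six Siege asset data.

def texture_bytes(width, height, bytes_per_pixel=4, mipmaps=True):
    """Size of one texture; a full mip chain adds roughly 1/3 on top."""
    base = width * height * bytes_per_pixel
    return base * 4 // 3 if mipmaps else base

# Assume a scene streaming in 120 uncompressed 4096x4096 RGBA8 textures.
# (Real engines use BC-compressed formats, which cut this 4-8x, but
# ultra packs also ship far more unique, higher-resolution textures.)
per_texture = texture_bytes(4096, 4096)   # ~85 MiB each with mips
scene_total = 120 * per_texture           # ~10 GiB, past an 8 GB card

print(f"one 4096x4096 RGBA8 texture + mips: {per_texture / 2**20:.0f} MiB")
print(f"120 such textures: {scene_total / 2**30:.1f} GiB")
```

Even with block compression, once you add render targets, geometry and other game assets at 4K, it is easy to see how an 8 GB card ends up capping texture settings.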
I wonder if this product would have been better served by slapping two more CU clusters on the die instead of the two extra stacks of HBM2, using the space freed up by the node shrink. Imagine two more CU clusters added for a total of 96 CUs (Vega 20 has 4 clusters of 16 CUs, 64 in total, but each cluster has a CU disabled), while keeping just two stacks of HBM2 for 8 GB, and then clocking the core lower, say around 1450 MHz, where Vega seems much more efficient. It would have cost about the same or less to produce, and the performance would likely take the 2080 Ti to the cleaners in DX12 and match it in DX11.
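Peak FP32 throughput for that hypothetical configuration can be sketched from GCN's usual 64 stream processors per CU and 2 FLOPs per clock (FMA). This is raw compute only; it ignores ROP, geometry and scheduling limits, and the 96-CU part is of course imaginary:

```python
# Rough FP32 throughput comparison: shipping Radeon VII (60 active CUs)
# vs. the hypothetical wider, lower-clocked 96-CU chip described above.
def fp32_tflops(cus, clock_ghz, sp_per_cu=64, flops_per_clock=2):
    """Peak FP32 TFLOPS = CUs * SPs/CU * FLOPs/clock * clock (GHz) / 1000."""
    return cus * sp_per_cu * flops_per_clock * clock_ghz / 1000

radeon_vii   = fp32_tflops(60, 1.75)  # ~13.4 TFLOPS near boost clock
hypothetical = fp32_tflops(96, 1.45)  # ~17.8 TFLOPS despite lower clock

print(f"Radeon VII   (60 CU @ 1.75 GHz): {radeon_vii:.1f} TFLOPS")
print(f"Hypothetical (96 CU @ 1.45 GHz): {hypothetical:.1f} TFLOPS")
```

So even at 1450 MHz, the wider chip would have substantially more raw compute on paper; whether games could actually use it is exactly the balance question raised in the reply.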
Vega seems rather unbalanced as it is. I'm not sure 96 CUs would really benefit them much with only 64 ROPs, and presumably those 96 CUs would still sit under 4 shader engines. They have trouble keeping the 4 SEs loaded even now, despite Vega's load-balancing improvements. Increasing the number of CUs per SE would exacerbate that problem, and increasing the number of SEs would basically require a lot of rework on the dispatch/scheduler side.

And then there's the money problem: doing all this would require them to essentially spin an entirely different chip, whereas this is basically the MI50. In datacenter/HPC, 16 GB of HBM is more useful than 96 CUs with only 8 GB. A lot of AMD's decisions are clearly forced by its limited budget. If they had the money to freely spin multiple architectures like Nvidia does, I don't think many of these chips would look the way they do; but the upfront cost of spinning a separate SKU specifically for gaming probably far outweighs just pivoting datacenter products to gaming at lower margins. Perhaps when that Zen 2 money starts rolling in things will change, but with Navi being yet another GCN iteration, that may be further off than most think.
What is the point of these posts? Hilbert has the TimeSpy and Ultra results for a 1080 Ti in his review. Would all these games run fine on a Fury X, since we all know 4 GB of HBM exceeds the capability of 12 GB of GDDR?