Discussion in 'Frontpage news' started by Hilbert Hagedoorn, Apr 26, 2017.
Where'd it go?
This made me laugh.
Not going to say you're wrong, but what Nvidia did recently with the 1080Ti and Titan Xp is proof of a very greedy company.
We've seen videos of it getting close to the same performance as the 1080 months ago... I don't see why not.
Actually, Intel is not as guilty as they look. Their CPUs are brutally priced relative to manufacturing costs.
But they pump most of that into other divisions/projects.
Some time ago there was a discussion about it and I did the full calculation. Intel simply can't afford to drop CPU prices much, as they would have to make cuts elsewhere.
As far as nVidia's pricing goes, they do what any company does when they are alone in a certain market segment. (And while Intel did exactly the same thing, nVidia reinvests those additional funds back into the same segment that earns them.)
The situation would be much different if the RX 480 clocked around the same as the GTX 1070/1080. But first, 14nm simply proved to be worse than 16nm, and second, AMD still did not have power gating on nVidia's level, so their GPUs waste more energy and therefore can't clock as high, since that waste increases the chance that a transistor will end up in an uncertain state.
First, if people look back at the older Vega tech demos, they were beating the GTX 1080 already. The real question, ever since the 1080Ti came out, is how good Vega will be compared to that, and more importantly, at what price.
But now, looking at this news objectively, I don't see any link to the famous Q&A article. And I guess the Internet rule is: if there is no proof (where's the damn link), then it's lies!
It seems Economics 101 could come in handy.
But for Thomas Aquinas, Saint Augustine and a few other people, EVERYBODY is greedy. Everybody. This is a fact of life, not a problem with capitalism, much less with the GPU/CPU market anno 2017.
Therefore, all companies will maximize their profits, which means milking their customers as much as they can. If there is serious competition, then there are limits to how much they can milk us, because our pockets - or our tits - have limits as well: we have to share our budget - or our milk - with everybody.
Intel and nVidia are just maximizing their profits, the way they should be. Of course prices would be lower if there was some serious competition, but it just has not been the case for close to 10 years.
Next subject, please.
It's funny that so many people today say "this is sooo expensive, that is sooo overpriced, companies are milking us".
Yet we are getting these multi-teraflop supercomputers in a tiny 2x2cm square, with absolutely amazing graphics in photo-realistic games at Ultra-HD resolutions that were unthinkable just a few years back.
I started my "accelerated" 3D gaming with an "S3 Trio 3D" on AGP... which was basically displaying a slideshow of flat-shaded triangles and calling it a game...
And I paid a $hitload of ca$h for something that looked like this:
AND I LOVED IT. It was my amazing gaming VGA card, and it could do this amazing 3D thing in a world of Mario-like sidescroller games.
Please, stop it with the milking... seriously. E'nuff is e'nuff!
That's great, but your *if* is exactly my point. They didn't release, and thus missed that window. And not just since the release of the 1080 Ti, but also since the release of Pascal. You're almost repeating Vase's argument, which is faulty for reasons already explained.
As much as I like Guru3D, an article on such an obscure quote? I'm used to better than this. "Could" might mean raw performance, could mean performance per dollar, could mean anything.
Yawn. As usual with AMD: I'll believe it when I see it. Hopefully that will be next month when I go to Computex.
If AMD is being hysterical and wants to be picked up by the media, it will be. Go outside and shout; someone might notice you.
It's not really news or anything; it's AMD sending a message. The media picks it up and shows it. Whether it's useful for you or not depends on your critical mind and vision.
I'm not really sure about AMD's communication lately, but whatever...
#BETTERRED #DEATHTONOVIDIA #LULNOVIDIA #AMD4LIFE ... edit: wait I saw this somewhere...
It's not delayed. We've known it's supposed to be out in 1H 2017 since at least June 2016 and the AMD investor calls.
This is just wishful thinking from AMD fans until Volta comes out. Truth be told, NVIDIA seems to have tackled too many things at the same time with Volta, and unlike Vega it is quite delayed at this point, but we'll see.
Doesn't the Vega memory controller make this point moot? I mean, they have already experimented with mixed storage in their professional line, and one of the very few concrete things we know about Vega's controller is that it can work with everything, even network storage, as long as it has some fast local cache. I kinda think the reason Vega is taking its time is proper clock modulation and drivers, seeing how much importance they put on the Linux driver this time, and they're introducing a whole new driver platform with it.
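For anyone wondering what "works with everything, as long as it has a fast local cache" means in practice, here's a toy sketch of the general concept - my own illustration, not AMD's actual HBCC; the class name and page granularity are made up:

```python
# Toy sketch: a small fast pool (the "HBM cache") in front of any slow
# dict-like backing store (VRAM spill, disk, even network storage).
from collections import OrderedDict

class CachedStore:
    def __init__(self, backing, cache_pages=4):
        self.backing = backing          # slow store, can be anything addressable
        self.cache = OrderedDict()      # small fast pool, LRU-ordered
        self.cache_pages = cache_pages

    def read(self, page):
        if page in self.cache:          # hit: serve at local speed
            self.cache.move_to_end(page)
            return self.cache[page]
        data = self.backing[page]       # miss: fetch from the slow store
        self.cache[page] = data
        if len(self.cache) > self.cache_pages:
            self.cache.popitem(last=False)  # evict least-recently-used page
        return data
```

As long as the working set mostly fits in the fast pool, the slowness of the backing store barely shows; that's the whole pitch.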
There is no way this will happen with either the Ti or Vega. When people say 4K 60 fps, they don't really mean 60 fps gameplay. 60 fps gameplay means every frame delivered within 16.67 ms, evenly paced. By that criterion, these cards are really 1440p60 hardware.
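To put numbers on that, a quick sanity check with a made-up frame-time trace shows how an "average 60+ fps" run can still blow the 16.67 ms budget:

```python
# Why "average 60 fps" is not "60 fps gameplay": frame times are invented.
budget_ms = 1000 / 60                      # 16.67 ms per frame for true 60 fps
frame_times_ms = [10] * 9 + [66.7]         # nine fast frames, one big hitch

avg_fps = 1000 / (sum(frame_times_ms) / len(frame_times_ms))
over_budget = [t for t in frame_times_ms if t > budget_ms]
print(f"average: {avg_fps:.1f} fps")                 # ~63.8 fps on paper
print(f"frames over 16.67 ms: {len(over_budget)}")   # 1 hitch = visible stutter
```

The averaged counter says "over 60 fps" while one frame in ten is a 15 fps-class stutter, which is exactly what uneven pacing looks like.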
I'm not sure that HBM2 is that much more expensive than GDDR5X, especially for AMD, since they hold most of the HBM patents. Vega looks like a very nice compute GPU, and AMD has had a tradition of great compute performance since GCN 1.0. I believe that along with their new memory controller, they'll do great in the AI market. They are behind in software, but GPUOpen is already paying off and they are focusing a lot of people on AI.
I would argue that the tiled rasterization, the polygon culling stuff, and the new command processor and memory controller aren't really power-saving features. The memory controller and the command processor are the most intriguing parts of Vega to me, just because of the potential. A Fury X with 8GB of VRAM at 1.5GHz would also demolish the 1080, and probably reach 1080Ti levels, so Vega competing with them isn't out of the question at all. I just prefer to temper my expectations, especially for the initial release benchmarks, where AMD will disappoint, as is tradition.
Pretty much this. If it's within ~5-10% of the Ti initially, with a $600 price tag, I'm buying it just for the price/performance. I honestly see NVIDIA's DX11 lead eroding and mattering less and less, though.
Most likely only at ultra-high res, but the architecture is interesting. They claim the HBM cache boosted performance in some areas in DE:MD.
If this delivers performance pretty comparable to the 1080Ti, I'm totally buying it. I've been waiting to see which one to buy. I'd actually rather use AMD's drivers, and GeForce Experience seems horrible with its added OSD.
Most sensible post in here so far. QFT.
As for Vega, I'll believe it when I see it. Don't fall for the marketing BS... real numbers are important. Also, no cherry-picked AOTS benchmarks, please, AMD, thank you.
The thing is, if it's at that performance level and $600, the updated 1080s are a better $/FPS than Vega will be. The 1080+ is around 15% slower than the reference Titan X and Ti, and I'm seeing them for about $510.
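Rough math behind that, using the numbers from this thread (my estimates, normalized to reference Ti performance = 100; none of this is a benchmark):

```python
# $/perf comparison with the thread's numbers: price, relative performance.
cards = {
    "GTX 1080+ ($510, ~15% slower than Ti)": (510, 85),
    "Vega ($600, assumed ~5% below Ti)":     (600, 95),
}
for name, (price, perf) in cards.items():
    print(f"{name}: ${price / perf:.2f} per perf point")
# 1080+: $6.00/point vs Vega: $6.32/point. If Vega lands exactly at Ti
# level (100 points), it drops to $6.00/point and the two roughly tie.
```

So the conclusion hinges entirely on where Vega actually lands relative to the Ti and what it ends up costing.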
AMD said 50% more performance just from the cache; I don't really understand how this card is going to work.
I think it's going to be all over the map, and since the PS4 and Xbox Scorpio are AMD, that bodes well for the future.
Not worth speculating IMO.
That was only in a specific demo of Mankind Divided though, from what I can find: a "50%" increase in average framerate and a "100%" increase in minimums. But I can't find what the actual framerate was, so it might not be all that impressive; the game is pretty demanding on the current Fury GPU even without going 2560x1440+ on the display resolution. Percentage-wise it's still a pretty good gain, but it was a specific demo for showing off Vega's HBM cache feature, so I doubt every game is going to see such results.
(It was probably with DX12 too, and might have had some game-specific tweaks as well.)
Because AMD has done some hype in the past, with:
-GPUs benched with GDDR5... but only DDR3 in shops
-OC-BIOS GPUs in disguise as standard GPUs (not the only one doing it, I agree, but more than a habit for AMD)
-full GPUs sent to the press while the cut-down one is the only one in shops (more in Asia for that trick)
-and of course demos with non-commercial mods that advantage the results.
For all of that, and even though AMD is way better than 5 years ago... I will wait for the review from HH.
Which part? Splitting the product line? I don't think the memory controller has anything to do with it - it's the delay in getting HBM2.
The GP100, for example, was announced at the start of last year, yet they were only able to ship a handful by the end of the summer due to delays in producing the GP100 chips. Had Nvidia decided to go HBM2 on its 1080, it also would have either been delayed and/or the supply would have been horrible. They were splitting the chip anyway for the packed-math FP16v2 cores that the GP100 has, so I guess they figured they might as well split HBM2 off so it doesn't affect the sales of the gaming lineup.
AMD's situation is different. I don't know if they could have shipped a Vega chip last year even had HBM2 availability been where it needed to be; I think they were still finishing the design. But even if they could ship it, they wouldn't have been able to supply enough for the gaming crowd, and if they had decided not to use HBM, they wouldn't have been competitive in the server/datacenter markets. I also don't think they could afford to do what Nvidia did and spin a completely separate design for server/datacenter - they just don't have that kind of cash on hand, and I think most of it was focused on Ryzen supply.
I think realistically AMD knew HBM2 wasn't going to be ready by the Polaris launch and that they needed money for Ryzen anyway. So they made the decision to use Polaris to cover 90% of the gaming market and delay launching high-end cards - instead spending an extra year furthering the architecture and launching them when HBM2 availability/yields were sufficient - which is now, with Vega.
It's not so much the HBM2 material that's expensive; it's the manufacturing and validation. Mounting a GDDR5X module is straightforward - something that's been done for decades. Mounting a die/HBM2 module to an interposer and then growing tens of thousands of connections through the vias is significantly more complex. Now you have the yields of three different components, including one that's fairly new (HBM), and you significantly increase the complexity of the equipment you need to fabricate it, etc. And while I'm sure the yields of HBM and the mounting process have improved since Fury, there is no way the cost is close to GDDR5/X. It's a significantly more complex process.
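To show why those yields stack up the way I'm describing - toy numbers here, not real fab data:

```python
# Illustrative compound-yield math (percentages invented): an HBM2 package
# only works if the GPU die, each HBM stack, the interposer AND the
# assembly step all come out good, so the yields multiply.
gpu, hbm_stack, interposer, assembly = 0.90, 0.95, 0.95, 0.95
n_stacks = 2                                 # e.g., two HBM2 stacks on package
package_yield = gpu * hbm_stack ** n_stacks * interposer * assembly
print(f"package yield: {package_yield:.0%}")  # ~73%, even with decent parts
# With GDDR5/X there's no such multiplication: a bad memory chip gets
# swapped out before it ever meets a good GPU.
```

Every extra component on the interposer is another factor below 1.0 in that product, and a known-good die can still be scrapped by a bad mount - that's where the cost really comes from.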
Well, the main purpose of tiled rasterization is essentially to boost effective memory bandwidth - allowing you to use a smaller, lower-power bus but get more effective bandwidth out of it. It actually lowers shader performance when it's enabled, and AnandTech spoke about some of its potential pitfalls:
Tom on the PC Perspective Podcast mentioned that Nvidia actually dynamically enables/disables it depending on the game.
The rest of the things you mentioned could improve effective performance - it depends on where the bottleneck is. Either way, in terms of raw power it will be close enough to the Ti; all the extras will just boost utilization of its shader performance. It's definitely going to be competitive.
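For anyone who hasn't seen where the bandwidth saving falls out, here's a toy sketch of the binning idea behind tile-based rasterization - my own illustration of the general technique, not NVIDIA's actual hardware; the tile size and data structures are made up:

```python
# Toy tile-based rasterizer skeleton: sort triangles by screen tile first,
# then shade one tile at a time, so each tile's slice of the framebuffer
# lives in fast local storage and is written out to DRAM only once.

TILE = 16  # hypothetical tile size in pixels

def tiles_touched(tri):
    """Tiles overlapped by a triangle's bounding box (tri = (x, y) tuples)."""
    xs = [v[0] for v in tri]
    ys = [v[1] for v in tri]
    for ty in range(int(min(ys)) // TILE, int(max(ys)) // TILE + 1):
        for tx in range(int(min(xs)) // TILE, int(max(xs)) // TILE + 1):
            yield tx, ty

def tiled_raster(triangles):
    # Pass 1: bin every triangle into the tiles it may cover.
    bins = {}
    for tri in triangles:
        for tile in tiles_touched(tri):
            bins.setdefault(tile, []).append(tri)
    # Pass 2: process one tile at a time. Intermediate framebuffer traffic
    # stays in this small local buffer (standing in for on-chip cache);
    # DRAM sees one write-back per tile instead of one per triangle.
    for tile, tris in sorted(bins.items()):
        local = [[0] * TILE for _ in range(TILE)]  # tile-local "framebuffer"
        for tri in tris:
            ...  # rasterize tri into `local` here (omitted)
        # single write-back of the finished tile goes here
```

The re-use of that local buffer is why a narrower bus can still deliver more effective bandwidth, and it also hints at why toggling it per-game makes sense: when the binning pass costs more than the traffic it saves, you turn it off.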
The cache boost is from when the card was out of available VRAM (they used a card with only 2GB enabled or something) - not an "all the time" thing.