Which part? Splitting the product line? I don't think the memory controller has anything to do with it - it's the delay in getting HBM2. The GP100, for example, was announced at the start of last year, yet Nvidia was only able to ship a handful by the end of the summer due to delays in producing the GP100 chips. Had Nvidia decided to go HBM2 on its 1080, it also would have been delayed and/or the supply would have been terrible. They were splitting the chip anyway for the packed-math FP16v2 cores that the GP100 has, so I guess they figured they might as well split HBM2 off so it doesn't affect the sales of the gaming lineup.

AMD's situation is different. I don't know if they could have shipped a Vega chip last year even if HBM availability had been where it needed to be - I think they were still finishing the design. But even if they could have, they wouldn't have been able to supply enough for the gaming crowd, and if they had skipped HBM, they wouldn't have been competitive in the server/datacenter markets. I also don't think they could afford to do what Nvidia did and spin a completely separate design for server/datacenter - they just don't have that kind of cash on hand, and I think most of it was focused on Ryzen supply.

I think realistically AMD knew HBM2 wasn't going to be ready by the Polaris launch and that they needed the money for Ryzen anyway. So they made the decision to use Polaris to cover 90% of the gaming market and delay launching high-end cards - instead spending an extra year furthering the architecture and launching when HBM2 availability/yield was sufficient, which is now, with Vega.

It's not so much the HBM2 material that's expensive - it's the manufacturing and validation. Mounting a GDDR5X module is straightforward, something that's been done for decades. Mounting a die and HBM2 modules to an interposer, then growing tens of thousands of crystals through the vias, is significantly more complex.
Because now you have the yields of three different components, including one that's fairly new (HBM), and you significantly increase the complexity of the equipment you need to fabricate it, etc. And while I'm sure the yields of HBM and the mounting process have improved since Fury, there is no way the cost is close to GDDR5/X - it's a significantly more complex process.

Well, the main purpose of tiled rasterization is essentially to boost memory bandwidth - it lets you use a smaller, lower-power bus but get more effective bandwidth out of it. It actually lowers shader performance when it's enabled, and AnandTech spoke about some of its potential pitfalls. Tom on the PC Perspective Podcast mentioned that Nvidia actually dynamically enables/disables it depending on the game.

The rest of the things you mentioned could improve effective performance - it depends on where the bottleneck is. Either way, in terms of raw power it will be close enough to the Ti; all the extras will just boost utilization of its shader performance. It's definitely going to be competitive.

The cache boost was when the card was out of available VRAM (they used a card with only 2GB enabled or something) - not an "all the time" thing.
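For a rough intuition of why tiling saves external bandwidth, here's a toy model. Everything in it is made up for illustration (screen size, tile size, counting "pixel writes" as the traffic metric) - real GPUs batch primitives, compress, and cache in far more complicated ways - but it shows the basic trade: keep each tile's framebuffer work in an on-chip buffer and flush it to DRAM once, instead of touching DRAM for every overlapping primitive.

```python
# Toy model of tiled rasterization's bandwidth savings.
# All numbers are illustrative assumptions, not real hardware figures.

TILE = 16      # assumed tile size in pixels (16x16 on-chip buffer)
W = H = 64     # tiny 64x64 "screen"
LAYERS = 8     # 8 overlapping full-screen primitives (overdraw of 8)

# Immediate mode: every covered pixel hits external memory once per
# primitive (blending's read-modify-write would make it even worse;
# we count writes only to keep the model simple).
immediate_traffic = LAYERS * W * H

# Tile-based: each tile is fully shaded in the on-chip buffer and
# flushed to DRAM once, regardless of how many primitives touched it.
tiles = (W // TILE) * (H // TILE)
tiled_traffic = tiles * TILE * TILE  # one flush per tile

print(immediate_traffic)  # 32768 pixel writes to DRAM
print(tiled_traffic)      # 4096 pixel writes -> 8x less external traffic
```

In this toy case the savings equal the overdraw factor, which is why the technique effectively multiplies a narrow bus - and also why its benefit varies by workload, consistent with Nvidia toggling it per game.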