Radeon RX Vega to Compete with GTX 1080 Ti and Titan Xp

Discussion in 'Frontpage news' started by Hilbert Hagedoorn, Apr 26, 2017.

  1. Denial

    Denial Ancient Guru

    Messages:
    13,161
    Likes Received:
    2,650
    GPU:
    EVGA RTX 3080
    Which part? Splitting the product line? I don't think the memory controller has anything to do with it - it's the delay of getting HBM2.

    The GP100, for example, was announced at the start of last year, yet they were only able to ship a limited number by the end of the summer due to delays in producing the GP100 chips. Had Nvidia decided to go HBM2 on its 1080, it also would have either been delayed and/or the supply would have been horrible. They were splitting the chip anyway for the packed-math FP16v2 cores that the GP100 has, so I guess they figured they might as well split HBM2 off so it doesn't affect the sales of the gaming lineup.

    AMD's situation is different. I don't know whether they could have shipped a Vega chip last year even if HBM2 availability had been where it needed to be; I think they were still finishing the design. But even if they could have shipped it, they wouldn't have been able to supply enough for the gaming crowd, and if they had decided not to use HBM, they wouldn't have been competitive in the server/datacenter markets. I also don't think they could afford to do what Nvidia did and spin a completely separate design for server/datacenter - they just don't have that kind of cash on hand, and I think most of it was focused on Ryzen supply.

    I think realistically AMD knew HBM2 wasn't going to be ready by the Polaris launch and that they needed money for Ryzen anyway. So they made the decision to use Polaris to cover 90% of the gaming market and delay the high-end cards - instead spending an extra year furthering the architecture and launching them when HBM2 availability/yield was sufficient - which is now, with Vega.


    It's not so much the HBM2 material that's expensive - it's the manufacturing and validation. Mounting a GDDR5X module is straightforward, something that's been done for decades. Mounting a die and HBM2 stacks on an interposer and then running tens of thousands of connections through the vias is significantly more complex. Now you have the yields of three different components, including one that's fairly new (HBM), and you significantly increase the complexity of the equipment you need to fabricate it. And while I'm sure the yields of HBM and the mounting process have improved since Fury, there is no way the cost is close to GDDR5/X. It's a significantly more complex process.
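    To put rough numbers on the yield point, here's a tiny back-of-the-envelope sketch in Python. Every percentage is a made-up placeholder, not real fab data; the only point is that the package is good only if every piece is good, so the individual yields multiply.

    Code:
        # Illustrative only: placeholder yields, not real fab data.
        # The finished package is good only if the GPU die, every HBM stack,
        # the interposer and the mounting/bonding step all come out good,
        # so the individual yields multiply.

        gpu_die_yield    = 0.80   # assumed
        hbm_stack_yield  = 0.85   # assumed, per stack
        interposer_yield = 0.95   # assumed
        assembly_yield   = 0.90   # assumed (mounting/bonding step)
        hbm_stacks       = 2      # a Vega-style package with two stacks

        package_yield = (gpu_die_yield
                         * hbm_stack_yield ** hbm_stacks
                         * interposer_yield
                         * assembly_yield)

        print(f"combined package yield: {package_yield:.1%}")  # ~49% with these numbers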


    Well, the main purpose of tiled rasterization is essentially to boost effective memory bandwidth - allowing you to use a smaller, lower-power bus but get more effective bandwidth out of it. It actually lowers shader performance when it's enabled, and AnandTech spoke about some of its potential pitfalls.

    Tom on PC Perspective Podcast mentioned that Nvidia actually dynamically enables/disables it depending on the game.
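    To illustrate why tiling helps bandwidth, here's a toy Python calculation - not Nvidia's actual pipeline, and the overdraw figure is an assumption. The idea is that binning geometry into screen tiles lets each tile be resolved in on-chip memory and flushed to DRAM once, instead of every overlapping layer touching DRAM.

    Code:
        # Toy model of why tiling saves external bandwidth (no vendor's real pipeline).
        WIDTH, HEIGHT = 1920, 1080
        BYTES_PER_PIXEL = 4
        OVERDRAW = 3   # assumed average number of overlapping layers per pixel

        pixels = WIDTH * HEIGHT

        # Immediate-mode style: every shaded layer writes to the framebuffer in DRAM.
        immediate_traffic = pixels * OVERDRAW * BYTES_PER_PIXEL
        # Tiled style: each screen tile is resolved on-chip and flushed to DRAM once.
        tiled_traffic = pixels * BYTES_PER_PIXEL

        print(f"immediate-mode framebuffer writes: {immediate_traffic / 2**20:.1f} MiB")
        print(f"tiled framebuffer writes:          {tiled_traffic / 2**20:.1f} MiB")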

    The rest of the things you mentioned could improve effective performance - it depends on where the bottleneck is. Either way, in terms of raw power it will be close enough to the Ti; all the extras will just boost utilization of its shader performance. It's definitely going to be competitive.

    The cache boost applies when the card is out of available VRAM (they used a card that only had 2GB enabled or something) - not an "all the time" thing.
     
  2. pharma

    pharma Ancient Guru

    Messages:
    1,573
    Likes Received:
    417
    GPU:
    Asus Strix GTX 1080
    Last edited: Apr 27, 2017
  3. Fox2232

    Fox2232 Ancient Guru

    Messages:
    11,213
    Likes Received:
    3,003
    GPU:
    5700XT+AW@240Hz
    But it still remains that the window of opportunity is just a bit smaller.
    It is exactly the same with Ryzen. Intel has had CPUs with performance equal to any Ryzen processor on the market for a long time.
    Yet people still buy Ryzen because it costs less than an equally powerful Intel CPU.

    Imagine someone with a GTX 980 Ti/Fury who wants to upgrade, but considers only GTX 1080 Ti-level performance a solid upgrade since they want to move to 1440p @ 144Hz. But the GTX 1080 Ti is too expensive for them.
    Then Vega pops in with similar performance at that resolution, available at a lower price this person can afford.
     
  4. PrMinisterGR

    PrMinisterGR Ancient Guru

    Messages:
    7,461
    Likes Received:
    485
    GPU:
    Sapphire 7970 Quadrobake
    The cheapest Ti I can find in the EU is 716 euros without delivery, and that's for the ****ty one-fan model. Total cost is probably close to 740 euros.
    The fact is, I don't really believe the 1080 has the chops for good long-term performance. It's basically a 2560-core card with a 256-bit memory interface. It's nice, but it's essentially a midrange card sold at exorbitant prices because there is no competition at that level.

    They said that in cases where the GPU runs out of memory, performance is 50% higher than it would have been if a traditional memory controller were used.

    I'm not sure that NVIDIA and AMD face the same delays in HBM2 access. Do we happen to have anything remotely concrete on the actual HBM vs GDDR5 production/implementation cost? Because there are Furys being sold at ~$260, with obvious (though not great) profit involved.

    Ryzen supply would make sense, but if you look at their Linux driver commits over the past year, they are trying something very specific. They want to change their driver model so that it's as cross-platform and open as possible, and at the same time provide at least acceptable performance through day-one driver releases. If you combine that with the HBM2 factor and production capacity being taken over by a different popular product, it makes sense.

    As for design spins, I think that's the reason they emphasize the memory controller so much. If I have understood the diagram correctly, they call it the High Bandwidth Cache Controller because it always needs some fast HBM on board to do its thing, and then it can connect to everything that's on the bus. That means mixed models would also have been possible (a small amount of HBM plus GDDR5). Why would they need to split designs if that is the case? It's clear that Pascal's memory controller can't do that and that they need separate silicon for HBM and GDDR5X, so it would make sense to split there.
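    For what it's worth, here's a toy Python sketch of that "HBM as a cache in front of a bigger address space" idea as I read AMD's public description. The class name, the 4 KB page size and the LRU policy are my own illustration, not documented HBCC behaviour; the only point is that a miss pulls the page in from slower memory behind the bus instead of hard-failing like a fixed VRAM allocation.

    Code:
        # Toy page-cache sketch; names, page size and policy are illustrative only.
        from collections import OrderedDict

        PAGE_SIZE = 4096            # assumed page granularity

        class ToyHighBandwidthCache:
            def __init__(self, hbm_pages):
                self.capacity = hbm_pages          # how many pages fit in on-card HBM
                self.resident = OrderedDict()      # page -> True, ordered by recency
                self.misses = 0

            def access(self, address):
                page = address // PAGE_SIZE
                if page in self.resident:
                    self.resident.move_to_end(page)        # hit: served at HBM speed
                else:
                    self.misses += 1                       # miss: fetch over the bus
                    if len(self.resident) >= self.capacity:
                        self.resident.popitem(last=False)  # evict least recently used page
                    self.resident[page] = True

        cache = ToyHighBandwidthCache(hbm_pages=4)
        for addr in [0, 4096, 8192, 0, 65536, 4096]:
            cache.access(addr)
        print(f"misses: {cache.misses}")   # pages land in "HBM" on demand, no hard failure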

    Seeing how Polaris performs now compared to launch, I'm not so certain that, even if the Vega design had been finalized, GloFo could have produced it with good yields at the required frequencies. "Polaris 20" is basically a GloFo respin, not an actual redesign of the chip. Now things look quite different on that front.

    We have to wait and see, because I have this feeling that AMD generally takes less refined approaches to hardware issues like that. I.e., I don't believe they'll enable/disable it depending on the application like NVIDIA does. That's just an uneducated guess on my part.

    AMD's bottleneck has always been graphics performance and their DX11 driver. That's still there, but the changes in Vega might at least improve the graphics performance part a bit. I'm curious to see the effect of the new memory controller on microstuttering, because I don't believe it will only matter in out-of-VRAM situations. The next huge question mark is the command processor and how it can be used by the DX11 driver. AMD's DX12 driver is actually quite nice, despite some recent sh*ttiness (like Forza Horizon 3 having artifacts and immense loading times with the latest versions *cough* *cough*).

    I want to see what that controller does to frametimes in general. I wish they could provide even a hidden switch to turn it on or off, just to see the effect.
     

  5. Loophole35

    Loophole35 Ancient Guru

    Messages:
    9,764
    Likes Received:
    1,123
    GPU:
    EVGA 1080ti SC
    The 1080's memory bandwidth is higher than the 980 Ti's and just a tick behind the Fury X's. Don't fall for the bus-width shenanigans.
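    The arithmetic behind that, for anyone curious: peak bandwidth is just bus width times per-pin data rate, so bus width alone tells you little. The data rates below are the commonly quoted figures for these cards; treat the exact numbers as illustrative rather than gospel.

    Code:
        # Peak memory bandwidth = bus width (bits) * per-pin data rate (Gbps) / 8.
        def peak_bandwidth_gb_s(bus_width_bits, data_rate_gbps):
            return bus_width_bits * data_rate_gbps / 8

        cards = {
            "GTX 1080 (GDDR5X, 10 Gbps)": (256, 10.0),
            "GTX 1080 (GDDR5X, 11 Gbps)": (256, 11.0),
            "GTX 980 Ti (GDDR5, 7 Gbps)": (384, 7.0),
            "Fury X (HBM, 1 Gbps/pin)":   (4096, 1.0),
        }

        for name, (width, rate) in cards.items():
            print(f"{name}: {peak_bandwidth_gb_s(width, rate):.0f} GB/s")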
     
  6. Valken

    Valken Ancient Guru

    Messages:
    1,630
    Likes Received:
    149
    GPU:
    Forsa 1060 3GB Temp GPU
  7. RandomDriverDev

    RandomDriverDev Banned

    Messages:
    111
    Likes Received:
    0
    GPU:
    1080 / 8GB and 1060 / 6GB
    Do NOT use the inclusion of an AMD core in a console - which runs native machine code and is built against product-specific SDKs - as any sort of basis that a part will do well.
     
  8. GeniusPr0

    GeniusPr0 Maha Guru

    Messages:
    1,249
    Likes Received:
    15
    GPU:
    RTX 2080
    As an indicator of how far graphics get pushed, it works quite well. This has been proven multiple times with "console killer" review builds. Optimization is a different story.
     
  9. RandomDriverDev

    RandomDriverDev Banned

    Messages:
    111
    Likes Received:
    0
    GPU:
    1080 / 8GB and 1060 / 6GB
    On the PC front, AMD keeps half-arsing the front end of the scheduler and then pulling the 'Nvidia is a cheat' card when their hardware cannot cope with particular games.

    You won't get that on a console; the game is made for the specifications available.

    On PC, games are made to take advantage of a wide range of hardware.
     
  10. PrMinisterGR

    PrMinisterGR Ancient Guru

    Messages:
    7,461
    Likes Received:
    485
    GPU:
    Sapphire 7970 Quadrobake
    I don't. But it should have had that memory with at least a 384-bit bus. Being higher than the 980 Ti and a tick under the Fury X in raw specs simply reinforces that, hardware-wise, it's a mid-range card.
     

  11. Fox2232

    Fox2232 Ancient Guru

    Messages:
    11,213
    Likes Received:
    3,003
    GPU:
    5700XT+AW@240Hz
    I would not be so critical here. They use APIs for ease of development.
    And it may be a surprise to you, but the Xbox One uses DirectX 11 & 12, as Microsoft wants good compatibility with the PC.
    And GNMX (the high-level PS4 API) is close to DX11.
     
  12. RandomDriverDev

    RandomDriverDev Banned

    Messages:
    111
    Likes Received:
    0
    GPU:
    1080 / 8GB and 1060 / 6GB
    All of the Xboxes have used a DirectX-like API for graphics, with the first and the latest of them using x86 processors.

    That doesn't mean zot for how they work on an actual PC, though.

    One day AMD might have the finances to put together a driver team that knows how to read old, uncommented code and fix the issues in their OpenGL and Direct3D 11 drivers.
     
  13. TheF34RChannel

    TheF34RChannel Master Guru

    Messages:
    277
    Likes Received:
    3
    GPU:
    Asus GTX 1080 Strix
    This is so funny! 1. The AMD guy works in a completely unrelated department, and 2. if you read his post, he did not say that at all - yet the Internet is going mental over this, rotfl. Guru3D has been becoming more of an unchecked rumour mill since last year, and that's a real shame, as it lowers the outstanding quality this site has always had, I feel :/
     
  14. pharma

    pharma Ancient Guru

    Messages:
    1,573
    Likes Received:
    417
    GPU:
    Asus Strix GTX 1080
  15. GeniusPr0

    GeniusPr0 Maha Guru

    Messages:
    1,249
    Likes Received:
    15
    GPU:
    RTX 2080

  16. Denial

    Denial Ancient Guru

    Messages:
    13,161
    Likes Received:
    2,650
    GPU:
    EVGA RTX 3080
    1200 MHz isn't the final clock speed. That's why the score is so low.
     
  17. GeniusPr0

    GeniusPr0 Maha Guru

    Messages:
    1,249
    Likes Received:
    15
    GPU:
    RTX 2080
    Well, a 1200 MHz Vega is roughly at GTX 1070 level, so 1.25x the clock (25% more), assuming it scales linearly since memory won't be the bottleneck, would put it right at GTX 1080 level, or around a 7K GPU score.

    Eh
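    Working backwards from those numbers, here's a quick sanity check of the linear-scaling assumption; the 1500 MHz final clock and the ~5600 baseline score are my placeholders, not confirmed figures.

    Code:
        # Sanity check of the linear clock-scaling assumption above.
        leaked_clock_mhz  = 1200
        assumed_final_mhz = 1500   # assumption: retail clock around 1.5 GHz
        baseline_score    = 5600   # assumption: GPU score of the 1200 MHz leak

        scaling = assumed_final_mhz / leaked_clock_mhz   # 1.25x
        projected = baseline_score * scaling             # ~7000, i.e. "around 7K GPU"

        print(f"{scaling:.2f}x clock -> projected GPU score ~{projected:.0f}")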
     
  18. Ryu5uzaku

    Ryu5uzaku Ancient Guru

    Messages:
    6,979
    Likes Received:
    209
    GPU:
    980
    Last edited: Apr 30, 2017
  19. Denial

    Denial Ancient Guru

    Messages:
    13,161
    Likes Received:
    2,650
    GPU:
    EVGA RTX 3080
    Which I guess is right around the Doom/BF1 performance that was shown a few months ago. So yeah, it might be accurate for the given clock speed.
     
  20. kx11

    kx11 Ancient Guru

    Messages:
    3,263
    Likes Received:
    335
    GPU:
    RTX 3090
    WCCFTech was late to steal the article from Guru3D.
     
