Rumor: AMD Radeon 490 and 490X Polaris graphics cards launch end of June

Discussion in 'Frontpage news' started by Hilbert Hagedoorn, Apr 7, 2016.

  1. Hilbert Hagedoorn

    Hilbert Hagedoorn Don Vito Corleone Staff Member

    Messages:
    48,582
    Likes Received:
    18,935
    GPU:
    AMD | NVIDIA
  2. Denial

    Denial Ancient Guru

    Messages:
    14,207
    Likes Received:
    4,121
    GPU:
    EVGA RTX 3080
    Matches what RTG said in the Reddit AMA. I wouldn't even call it a rumor at this point.

    I expect Nvidia will have something of similar performance in the few months following that. I doubt any of the first-wave cards will be much faster than what we have already -- just more efficient.
     
  3. thatguy91

    thatguy91 Guest

    If we knew the wattage of the card we could come to a reasonable estimate of its performance. The claim is 2.5 times the performance per watt, so a new 200 W card would be 2.5 times faster than a previous-generation 200 W card.
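    That perf-per-watt arithmetic can be sketched in a couple of lines. The only figure from the thread is the claimed 2.5x perf/watt; the wattages below are hypothetical examples:

```python
# Relative performance from a perf/watt claim. The 2.5x figure is the
# claim quoted above; the TDP numbers are made-up illustrations.
def relative_performance(perf_per_watt_gain, new_tdp_w, old_tdp_w):
    # perf_new / perf_old = (perf/W gain) * (W_new / W_old)
    return perf_per_watt_gain * (new_tdp_w / old_tdp_w)

print(relative_performance(2.5, 200, 200))  # 2.5x at equal 200 W TDPs
print(relative_performance(2.5, 120, 200))  # 1.5x if the new card draws only 120 W
```

    So the full 2.5x only shows up as raw speed if the new card's power budget matches the old one's; a lower-wattage card trades some of it for efficiency.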
     
  4. Redemption80

    Redemption80 Guest

    Messages:
    18,491
    Likes Received:
    267
    GPU:
    GALAX 970/ASUS 970
    That seems like wishful thinking; I'm sure Maxwell had similar efficiency claims.

    I'm looking to build a bedroom PC in the next few months and might pick something up from this range. It would get me back checking out AMD stuff while still having the comfort zone of the main Nvidia setup.
     

  5. Denial

    Denial Ancient Guru

    Messages:
    14,207
    Likes Received:
    4,121
    GPU:
    EVGA RTX 3080
    It doesn't seem that far-fetched, honestly. Nvidia is basically claiming double performance per watt, and AMD was already slightly behind in efficiency with Fiji (although arguably they are ahead in some of the newer titles), so they have more room to catch up.

    I'm just not sure they are launching a 200 W card. We know Polaris 11 is sub-100 W, and we know Polaris 10 is capable of Hitman at QHD @ 60 fps. We also kind of maybe know they are only launching those two cards initially, and we know Nvidia isn't doing anything gaming-wise with GP100 until next year.

    I don't know; I imagine Polaris 10 will be ~165 W and about 15-20% faster than a Fury X, and I imagine Nvidia will launch a similar card shortly after, probably the same chip that's going into the PX2.
     
  6. Kaotik

    Kaotik Guest

    Messages:
    163
    Likes Received:
    4
    GPU:
    Radeon RX 6800 XT
    Micron has not said it's in volume production; they've said they've started shipping samples to customers and expect to begin volume production during the summer.
     
  7. hapkiman

    hapkiman Guest

    Messages:
    66
    Likes Received:
    4
    GPU:
    MSI GTX 1080Ti Gaming X
    Very interesting! This is AMD's first solid product in a while that I'm curious about. On a side note: use spell check, man ("srping"?). And you said "HBM2 based product no sooner then early 2017" - "then" should be "than."

    @Denial may be right about the first wave of these AMD cards; they may not surpass the Furys and 980 Tis.
     
  8. PrMinisterGR

    PrMinisterGR Ancient Guru

    Messages:
    8,132
    Likes Received:
    974
    GPU:
    Inno3D RTX 3090
    Listening to a Raja Koduri interview, I got the impression that AMD doesn't want to deal in huge GPUs any longer. These cards might have the same configuration as a Fiji card (4096 shaders, etc.), but run at 50% higher frequency and be overclockable on top.

    Judging by NVIDIA's Pascal announcement, they managed to cram in more than double the hardware at 25%+ higher frequencies. All the small indicators we have suggest AMD probably has a more mature process to work with, so a Polaris 10 with 4096 shaders at 1.5-1.8 GHz wouldn't surprise me at all.
     
  9. Denial

    Denial Ancient Guru

    Messages:
    14,207
    Likes Received:
    4,121
    GPU:
    EVGA RTX 3080
    Well, I think AMD does want large GPUs; they just don't like to lead with them.

    In fact, Anandtech wrote an article about their "Small Die Strategy" all the way back in 2008.

    http://www.anandtech.com/show/2556/2

    And it makes sense. On a new process, when yields are poor, a giant die is just going to drive the cost per chip way up. Nvidia can justify it with GP100 because it's going into a $100,000+ box, where companies don't think twice about spending $15,000+ per GPU because it's still way cheaper than the alternative.

    In gaming, though, that additional manufacturing cost would turn a normally priced $650 chip into a $2,000 one. Very few gamers would actually buy it; most people would just complain. That, and Nvidia would have no product for next year. So they are better off reaping the high margins off GP100 units in the compute/HPC space and doing a half-size chip for gamers that's normally priced.
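    The "new process + giant die = bad economics" point can be made concrete with the classic exponential (Poisson) defect model. Every defect density below is an assumed illustration, not real fab data:

```python
import math

# Classic Poisson yield model: die yield ~ exp(-defect_density * die_area).
# D0 values are illustrative assumptions, not sourced fab numbers.
def die_yield(defects_per_cm2, die_area_mm2):
    return math.exp(-defects_per_cm2 * die_area_mm2 / 100.0)

for d0 in (0.1, 0.5):  # mature process vs. fresh process
    big, small = die_yield(d0, 600), die_yield(d0, 200)
    print(f"D0={d0}/cm^2: 600mm2 die {big:.0%}, 200mm2 die {small:.0%}")
```

    On the assumed mature process, the big die still yields reasonably; on the assumed fresh process its yield collapses far harder than the small die's, which is exactly why leading a new node with a small chip keeps cost per good die sane.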
     
    Last edited: Apr 7, 2016
  10. Koniakki

    Koniakki Guest

    Messages:
    2,843
    Likes Received:
    452
    GPU:
    ZOTAC GTX 1080Ti FE
    Can't wait! Go AMD, GO!
     

  11. PrMinisterGR

    PrMinisterGR Ancient Guru

    Messages:
    8,132
    Likes Received:
    974
    GPU:
    Inno3D RTX 3090
    A lot of things have changed since 2008. I believe that if they are cornered performance-wise, they would most likely make another big die. But read the article and listen to his interview at PC Perspective.

    They want to invest in fast, small, good-yield dies and scale them using an interposer instead of making one huge die. He even speaks about "beyond CrossFire," where that becomes the default configuration. Judging by their roadmap, this will come with Vega.
     
  12. Lane

    Lane Guest

    Messages:
    6,361
    Likes Received:
    3
    GPU:
    2x HD7970 - EK Waterblock
    Well, I don't think that they don't want to deal with huge GPUs; they just seem to start with the small ones, as Nvidia does.

    Nvidia has announced first delivery of GP100 to selected supercomputer centers in June, but availability for OEMs is Q1 2017 (don't even count on seeing a consumer, gaming version of GP100 before 2017, as this release roadmap is only for Tesla).

    It's exactly what Nvidia did with GK110, if you remember: first delivery to a supercomputer center (Titan). The difference is that the 680 was already out at that time.

    That said, both brands need big, high-performance chips for the HPC market really soon. Maxwell and Fury were not a big deal in this market, as they intentionally weren't made for it. Now that they can move on from the 28 nm process and architecture, they will show their muscle there.

    For Nvidia, it is already done with GP100, but as the release roadmap shows, real availability will be in 2017...

    I don't know if AMD plans to push into it before then (Polaris) or with Vega (2017, which matches the GP100 roadmap). AMD's roadmap suggests Vega will go against GP100.

    Potentially we could end up with a GP102/GP104 from Nvidia vs. small Polaris, and get the big-die versions in 2017, so GP100 vs. Vega (for gaming and consumer workstation cards). But there, I'm just comparing the two roadmaps.
     
    Last edited: Apr 7, 2016
  13. Denial

    Denial Ancient Guru

    Messages:
    14,207
    Likes Received:
    4,121
    GPU:
    EVGA RTX 3080
    I don't think there is a point in dealing with large ones unless it's a high-margin market.

    Yields are most likely poorer than on 28 nm, and transistor costs didn't go down much. It makes much more sense to do a small chip and clock it higher for performance than to just make a large chip. And I think AMD suggests this is the future, because we probably won't have another four-year 28 nm debacle. TSMC is entering 10 nm volume this year, which means we will probably have 10 nm GPUs in 2018, resetting the process all over again.

    @PrMinisterGR

    Yeah, I agree with what you're saying; I just think that was AMD's strategy all along. 28 nm was different because it was around forever and matured, so they could afford to make larger GPUs on it. Unless they skip another node again, I don't think we will see larger GPUs, especially in low-margin markets.
     
  14. toxzl1

    toxzl1 Guest

    Messages:
    160
    Likes Received:
    0
    GPU:
    Asus 390X Strix
    I want to sell my 2x 390X.

    Crossfire and SLI are definitely so bad...

    Looking for a beast of a single GPU; I hope AMD makes another Fury X with Polaris.
     
  15. PrMinisterGR

    PrMinisterGR Ancient Guru

    Messages:
    8,132
    Likes Received:
    974
    GPU:
    Inno3D RTX 3090
    In the interview he specifically says that big GPUs are pretty much unsustainable beyond a point. Also, from all these years of following things I've learned that soft launches mean nothing. Wait for in-depth architecture reviews and actual benchmarks. At this point only AMD has displayed working silicon. The AMD roadmap also suggests that Vega might be multi-GPU on an interposer.

    I don't believe we'll see anything complex at 10 nm yet. And no, mobile chips are not really complex compared to Intel/AMD CPUs and NVIDIA/AMD GPUs. Even Intel had to abandon tick-tock because of the 10 nm process. TSMC said the same about their 20 nm process, and it never happened for complex processors. My gut feeling is that we'll be on 14 nm as long as we were on 28 nm. The Pascal Tesla tells me that NVIDIA is going for the standard approach of huge GPUs. AMD's roadmap and that interview tell me that they will try to leverage their interposer tech to present a larger GPU composed of smaller, cheaper GPUs.
     

  16. Denial

    Denial Ancient Guru

    Messages:
    14,207
    Likes Received:
    4,121
    GPU:
    EVGA RTX 3080
    Nah, the initial Nvidia gaming GPUs will be the same size as AMD's. We aren't getting a 600 mm2 gaming GPU this year. Nvidia can afford to build GP100s because they are selling them for over $10,000 per unit. Nvidia's margins in HPC/cloud are almost 25% higher than in gaming, and their datacenter revenue has increased 40% over the last three years while gaming has only increased 30%. Financially it makes far more sense for Nvidia to build a larger chip for that purpose; all it does is expand margins even further.

    Samsung plans on having 10 nm Exynos chips in 2017, same with Qualcomm and the 830, so a 10 nm GPU in 2018 doesn't seem like much of a stretch. If GPUs hang on at 14/16 nm as long as they did at 28 nm and yields hit 28 nm numbers, there is no point in using an interposer for multi-GPU purposes. 28 nm yields are over 90%, and the added complexity of the interposer offers no advantage over just having a larger chip, unless you're going over the reticle limit of ~600 mm2.
     
  17. chispy

    chispy Ancient Guru

    Messages:
    9,997
    Likes Received:
    2,722
    GPU:
    RTX 4090
    Nice one bring it on AMD , let's hope it delivers this time and i am actually looking forward to this release. Crossing my fingers that it will be a good overclocker :D
     
  18. Embra

    Embra Ancient Guru

    Messages:
    1,601
    Likes Received:
    956
    GPU:
    Red Devil 6950 XT
    I have heard that 7 nm might be the wall. If true, we will be very close to it soon, especially with 10 nm coming.
     
  19. PrMinisterGR

    PrMinisterGR Ancient Guru

    Messages:
    8,132
    Likes Received:
    974
    GPU:
    Inno3D RTX 3090
    2019 sounds much more plausible. Intel is the best of the best at die shrinks, and this is the first time in their recent history that they will be so late; that's the 14 nm to 10 nm transition for Intel. Don't compare small mobile chips to anything we have on the desktop; it's almost futile. As for the interposer, it makes sense even so. To hit those nice yields you need actual production, and experience with that production, which means a lot of wasted money. Why not simply interpose four chips that you get almost 100% yields from on day zero, and which will also have better thermals?
     
  20. Denial

    Denial Ancient Guru

    Messages:
    14,207
    Likes Received:
    4,121
    GPU:
    EVGA RTX 3080
    Intel's 14 nm is also on a whole different level than any other fab's, and the specs on their 10 nm are even better.

    https://www.semiwiki.com/forum/content/3884-who-will-lead-10nm.html

    That's not even to mention that Knights Hill is launching on 10 nm in 2018, and it's ~700 mm2.

    Regardless, even if it is 2019, that's still 2.5 years instead of 4.5, and you still have all the cost-reduction issues (well, the lack of cost reduction). These nodes aren't getting cheaper; they are getting more expensive. It makes sense for AMD to just avoid that by making smaller chips.

    I'm also not sure how you think they will get near 100% yields out of that. The chips themselves will have a yield rate, the interposer will have a yield rate (although at 65 nm it's probably really high), and the stacking/bonding process itself will have a yield rate. I'm also not sure how the thermals would work out: the surface area would increase, but the total power requirement would also increase, as there are longer routes and extra connections. Plus, what's the maximum size of an interposer? It just seems like a lot of added complexity that would be unnecessary if yields were good; but again, they probably won't be, so I see why AMD would go that route.
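    The compound-yield point is just multiplication: every assembly step has to succeed, so the rates multiply. All the numbers below are assumed illustrations, not real packaging data:

```python
# Yields multiply across assembly steps; every rate here is an assumed
# example for illustration, not a sourced figure.
def assembled_yield(chip_yield, n_chips, interposer_yield, bond_yield):
    return (chip_yield ** n_chips) * interposer_yield * bond_yield

# Four 95%-yielding dies on a 99% interposer with a 98% bonding step:
print(f"{assembled_yield(0.95, 4, 0.99, 0.98):.1%}")  # comes out near 79%
```

    So even with individually healthy steps, the assembled package yields noticeably below any single step, which is why "almost 100% from day zero" is optimistic.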
     