AMD VEGA now scheduled for October launch

Discussion in 'Frontpage news' started by Hilbert Hagedoorn, May 11, 2016.

  1. Fox2232

    Fox2232 Guest

    Messages:
    11,808
    Likes Received:
    3,371
    GPU:
    6900XT+AW@240Hz
There is no information on Vega. For Polaris we at least have some overview (which does not guarantee that there aren't other changes).

Warning, image is huge:
[Image: Polaris architecture overview]

But what can change, and what is AMD working on? Look at page 3 onward here. Anything is subject to modification, even the smallest part.
    http://developer.amd.com/wordpress/media/2013/06/2620_final.pdf
     
  2. Aelder

    Aelder Guest

    Messages:
    37
    Likes Received:
    0
    GPU:
    **** you
    Typically slippery of you fox, you're like a fish :p

How much do you expect they will change between Polaris and Vega? Considering AMD's financial situation, I expect them not to churn out a radically different design just a few months after Polaris.

Polaris is GCN 1.3, i.e. 3rd-gen GCN, and so is Vega.
     
  3. Fox2232

    Fox2232 Guest

    Messages:
    11,808
    Likes Received:
    3,371
    GPU:
    6900XT+AW@240Hz
It is not evasive. It is a fact that we do not know whether there will be changes, or what they will be. But AMD has that particular pattern.
Originally there was meant to be a ~7-month time frame between Polaris and Vega.
So it is rational to conclude there will be changes, even if minor.

From the moment AMD considers the building blocks for a given generation final, it may take less than a month to design a GPU and send it to the fab for tape-out.

Saying that Polaris and Vega are exactly the same is to say AMD sits on their hands in the meantime, doing nothing. And if that were the case, there would be no Vega architecture planned on the roadmap as a separate revision.

    And that roadmap is interesting because it does compare performance.
    It states "Performance per Watt".
[Image: AMD GPU roadmap chart ("Performance per Watt")]
Polaris and Vega are both 14nm, that's for sure, so how would Vega get such an additional performance-per-watt boost over Polaris?

Is it because Vega is a bigger GPU running at a lower clock? 4096 vs 2560 SPs is already a 60% increase in size. What would the clock need to be to improve performance per watt by 50% over a 2560-SP Polaris?
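
For reference, here is a rough back-of-the-envelope sketch of that question. It assumes the crude scalings performance ∝ SPs × clock and power ∝ SPs × clock × voltage², with voltage roughly tracking clock; the 1.15 GHz Polaris clock plugged in is purely hypothetical, not a known spec.

```python
# Toy model (assumption, not AMD data): perf ~ SPs * clock,
# power ~ SPs * clock * V^2, with voltage roughly tracking clock,
# so perf/W ~ 1 / clock^2 and the SP count cancels out entirely.

def perf_per_watt(sp_count, clock_ghz):
    """Relative perf/W (arbitrary units) under the toy model above."""
    perf = sp_count * clock_ghz
    power = sp_count * clock_ghz ** 3          # clock * V^2, with V ~ clock
    return perf / power                        # reduces to 1 / clock^2

polaris_clock = 1.15                           # hypothetical 2560-SP Polaris clock (GHz)
target = 1.5 * perf_per_watt(2560, polaris_clock)   # +50% perf/W target for Vega

vega_clock = (1.0 / target) ** 0.5             # solve 1 / clock^2 = target
print(f"4096-SP part would need ~{vega_clock:.2f} GHz "
      f"({vega_clock / polaris_clock:.0%} of the Polaris clock)")
```

Under that toy model the extra SPs do not change perf/W at all; the whole +50% would have to come from clocking (and volting) roughly 18% lower, or from genuine architectural and HBM2-related changes.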
     
    Last edited: May 11, 2016
  4. Denial

    Denial Ancient Guru

    Messages:
    14,207
    Likes Received:
    4,121
    GPU:
    EVGA RTX 3080
The Fury Nano is what, 45% lower power usage for a 10-15% performance loss? Couldn't they just "Fury Nano" Vega, with the HBM2 die savings on the memory controller/power making up the rest?

I mean, I think there will be some slight upgrades between Polaris and Vega, but not to the same degree as between Fiji and Polaris.
     

  5. Stormyandcold

    Stormyandcold Ancient Guru

    Messages:
    5,872
    Likes Received:
    446
    GPU:
    RTX3080ti Founders
    Dude, you could have easily shrunk that pos pic down and uploaded it to any number of free hosting sites.
     
  6. Fox2232

    Fox2232 Guest

    Messages:
    11,808
    Likes Received:
    3,371
    GPU:
    6900XT+AW@240Hz
It is not that simple. The Nano has a tendency to throttle its GPU clock down under certain loads as it goes over TDP.
And the thing is, the Nano has a 175W TDP vs. the Fury X's 275W. That's roughly 36% lower TDP.
But the Fury X does not throttle even with a mild OC, while the Nano does not even hold stock clocks at times.
I could have, but I gave fair warning instead.
     
    Last edited: May 11, 2016
  7. Aelder

    Aelder Guest

    Messages:
    37
    Likes Received:
    0
    GPU:
    **** you
What's stopping AMD is that large dies on 14/16nm FinFET are expensive, and they want to make up for their total absence from the mobile market, fulfill orders for new console hardware, focus on the fledgling VR market, and provide ~390X performance at ~380X prices. This is usually the norm: you get X-tier performance for an X-1-tier price.

Whatever changes there may be in Vega compared to Polaris (other than one being bigger) will be minor, so performance per watt should be similar once you account for the glaring expected difference, which is HBM and its IMC.

If Polaris' projected perf/W was altered to reflect altered out-of-the-box clocks, to be competitive with the 1070, then it follows naturally that Vega (clocked lower) would carry an efficiency improvement (once you remove the HBM savings from the equation).

This is what I said: any perf/W improvements for Vega over Polaris will be due to HBM + clock differences. The GCN core, the scheduling, the essence of the uarch will remain the same, because Polaris is to Vega as GP104 is to GP102.
     
  8. thatguy91

    thatguy91 Guest

    Just did a quick google and came across this. I guess it refers to real world performance:
I won't state the source, although you can copy and paste it into Google... lol. It's a good leap, but not as big as I thought would be the case. Obviously, with Nvidia's earlier release, AMD needs to bring out their new GPUs to remain competitive. I think Polaris 10 will be a decent match (if not faster) for the GTX 1080 in both price and performance.
     
    Last edited by a moderator: May 11, 2016
  9. kaz050

    kaz050 Active Member

    Messages:
    72
    Likes Received:
    4
    GPU:
    GTX 1070 FTW RGB
Why is everyone so meh? The roadmap to any card always ends in funny ways: Nvidia 960, 970 (gimped), 980, Titan, 980 Ti. I'd like to know why Nvidia picked two 980s in SLI to compare against the 1080 and didn't post 980 Ti vs 1080. In the end everyone will buy whatever card they want, so quit fighting.
     
  10. Fox2232

    Fox2232 Guest

    Messages:
    11,808
    Likes Received:
    3,371
    GPU:
    6900XT+AW@240Hz
    Mate! Whatever! "Because Polaris is to Vega as GP104 is to GP102"
That's your entire attitude. Why do you write as if you know anything at all about Vega?
None of us here knows anything of real importance even about Polaris as of now. At least not in any exact manner.
    Yes, we know it is more power efficient than last generation. But we do not know how much.
There were changes in the architecture, but we do not know what they affect in real-world scenarios, or by how much.

People around the web have calculated that Pascal is very close to Maxwell in terms of performance per clock and per TMU, ROP, ...
I even saw some websites claim that Pascal may as well be a just-shrunk Maxwell.
When I wondered about actual architectural changes affecting real-world performance, someone here nearly hired an assassin on me.

I can give you good advice: play Minesweeper. It is a great game that everyone can learn something from.
It teaches you how to use the information you have and how to deal with the things you do not know.
     

  11. Aelder

    Aelder Guest

    Messages:
    37
    Likes Received:
    0
    GPU:
    **** you
    lol you do not know that Vega will be radically different from Polaris. I am saying it is unlikely because of X, Y, Z.

    The discussion here is about the placement of Vega above Polaris in the perf/w chart with the funky starry background.

First of all, Polaris is now claimed to be 2x perf/W compared to the last-gen midrange (Tonga?), not 2.5x.

    Second, Vega has HBM and the attached benefits for power efficiency. There is absolutely no reason to assume there are big changes from Polaris to Vega.

    Tonga is to Fiji as Polaris is to Vega, as GM204 is to GM200, as GK104 is to GK110 etc etc.
     
  12. Fox2232

    Fox2232 Guest

    Messages:
    11,808
    Likes Received:
    3,371
    GPU:
    6900XT+AW@240Hz
I can wait a few weeks instead of throwing baseless statements around...
Try to do the same next time. This time around it is 4 times too late. :)
     
  13. Aelder

    Aelder Guest

    Messages:
    37
    Likes Received:
    0
    GPU:
    **** you
    4 times too late for what ?

    It's not baseless... It's speculative but it's much more reasonable than

    "hurr durr play minesweeper, wait few weeks before throwing baseless statements around, look at this document, it says anything is subject to modification"

AMD could throw away GCN altogether, build a radically different uarch from the ground up, and launch it as Vega. Wait a few weeks and see!

    NV could surprise everyone and not launch a big Pascal GPU, instead launching a big 16nm FF GPU on a radically different architecture.

    Will they ? Probably not.
     
  14. BlueRay

    BlueRay Guest

    Messages:
    278
    Likes Received:
    77
    GPU:
    EVGA GTX 1070 FTW
lol, that AMD company. I hope this is not true, because they would effectively be trolling themselves. This is just ridiculous. And they want to be taken seriously? I thought they had got their brains into the correct position after so many fails, but somehow they always manage to screw things up.
    Anyways still waiting for real reviews from both camps.
     
  15. eclap

    eclap Banned

    Messages:
    31,468
    Likes Received:
    4
    GPU:
    Palit GR 1080 2000/11000
    I don't know. A lot of people with 960/970/980 will most likely have an itch to upgrade, same for those with 290/290x/380/390/390x.

I personally almost bought a 980 Ti quite a few times but in the end decided to wait. Now with the 1070/1080 coming out, many will buy these. If the 1070 is around 980 Ti performance (both OC'd) and sells for around £350, many will buy it. AMD are shooting themselves in the foot by not being able to release now.

    By the time they come out with their next gen cards, nVidia will most likely have a counter to those. I just wish AMD had something to counter 1070/1080 in a timely manner, would make the market a lot more competitive.
     

  16. Clouseau

    Clouseau Ancient Guru

    Messages:
    2,844
    Likes Received:
    514
    GPU:
    ZOTAC AMP RTX 3070
From the appearance of things, Polaris is not in the same league as the 1080. There was a video demo of Polaris at medium settings at 1080p, shown against what an AMD employee described as the competition's comparable card. That comparable card was a 950. Nvidia would have sunk themselves if the 1070 or 1080 were at the same level as a 950. That is why AMD's AIB partners would be more than a little concerned about the look of things.

    EDIT: AMD shot themselves in the head this time. Roulette has its risks.
     
    Last edited: May 11, 2016
  17. zer0_c0ol

    zer0_c0ol Ancient Guru

    Messages:
    2,976
    Likes Received:
    0
    GPU:
    FuryX cf
Um, dude, that card was at CES and it was the smallest P11, not P10... carry on.
     
  18. zer0_c0ol

    zer0_c0ol Ancient Guru

    Messages:
    2,976
    Likes Received:
    0
    GPU:
    FuryX cf
LOL

I hope it is true. And "troll"? LOL again, dude, we have zero info on Polaris... we have rumors and two pictures from GDC.
     
  19. Aelder

    Aelder Guest

    Messages:
    37
    Likes Received:
    0
    GPU:
    **** you
    No...

    That's Polaris 11

    This is why AMD's nomenclature is stupid. Polaris 10 and 11 are midrange and entry-level respectively

    Polaris 10 should provide ~390x/980 levels of performance for 380x/960 prices

    Similarly Vega 11 should provide FuryX/980Ti performance for 390x/980 prices

The 1070 is rated at ~6.5 TFLOPS stock; that's roughly ~1900 SPs at ~1700 MHz (1920 seems the likely number).

Now let's assume it overclocks similarly to the 1080 that was shown, so 2.1 GHz.

That's ~8 TFLOPS,

about 5% shy of an overclocked 980 Ti.
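
As a quick sanity check of the arithmetic above, here is a sketch using the standard FP32 throughput formula FLOPS = 2 × shaders × clock; the 1920-SP count and both clocks are this post's speculative figures, not confirmed specs.

```python
# FP32 throughput: 2 ops per shader per cycle (FMA).
# Shader count and clocks are the post's guesses, not confirmed specs.

def tflops(shaders, clock_ghz):
    return 2 * shaders * clock_ghz / 1000.0    # shaders * GHz * 2 -> GFLOPS, /1000 -> TFLOPS

print(f"stock (~1.70 GHz):      {tflops(1920, 1.70):.2f} TFLOPS")   # ~6.5
print(f"overclocked (2.10 GHz): {tflops(1920, 2.10):.2f} TFLOPS")   # ~8.1
```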

    980Ti prices are now being slashed, so keep an eye for those deals.
     
    Last edited: May 11, 2016
  20. Fox2232

    Fox2232 Guest

    Messages:
    11,808
    Likes Received:
    3,371
    GPU:
    6900XT+AW@240Hz
That comparison where the entire system used 150W with Nvidia graphics and 85W with AMD?
How much of that is the CPU, motherboard, RAM, HDD, fans?

If 25W, then the Nvidia GPU was eating 125W and AMD's 60W. That does not fit, right?
Because the GTX 950 is a card with a 90W TDP.
So let's say the system ate 60W, leaving 90W for the Nvidia GPU and 25W for AMD's. Is that OK? Not really, as the Nvidia card could not have been saturated; otherwise it would have had fps drops, but it was a clean 60 fps on both systems.

Why is that wrong? Because it was measured after PSU efficiency losses took place.
I presume that at such low power consumption PSU efficiency is around 85%.
So the Nvidia system's components could be eating 127.5W and the AMD system's 72.25W.
My humble guess for a not-fully-loaded, power-efficient Intel CPU + motherboard is 50W.
So the Nvidia card ate 77.5W on average and the AMD card 22.25W on average.
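
The same estimate as a short sketch: the 150W/85W wall readings are from the demo, while the 85% PSU efficiency and the 50W platform draw (CPU + board + RAM + drives + fans) are this post's assumptions, not measurements.

```python
# Wall readings come from the demo; PSU efficiency and platform draw are assumed.

def gpu_watts(wall_watts, psu_efficiency=0.85, platform_watts=50.0):
    """Estimate GPU draw from an at-the-wall power reading."""
    dc_watts = wall_watts * psu_efficiency     # power actually delivered to components
    return dc_watts - platform_watts           # what is left for the graphics card

print(gpu_watts(150))   # Nvidia system -> 77.5 W for the GPU
print(gpu_watts(85))    # AMD system    -> 22.25 W for the GPU
```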

The fun part is that this is something we can expect in notebooks :) I may get a gaming notebook again.
     
    Last edited: May 11, 2016
