AMD Greenland Vega10 Silicon To Have 4096 Stream Processors?

Discussion in 'Frontpage news' started by Hilbert Hagedoorn, Mar 28, 2016.

  1. -Tj-

    -Tj- Ancient Guru

    Messages:
    18,107
    Likes Received:
    2,611
    GPU:
    3080TI iChill Black
So this topic went all south... who cares how much power it will consume... all I'm "interested" in is performance, and that doesn't look like it will be anything special, since HBM2.0 is now reserved for Vega.
     
  2. Denial

    Denial Ancient Guru

    Messages:
    14,207
    Likes Received:
    4,121
    GPU:
    EVGA RTX 3080
I don't see how HBM2.0 is a requirement for "special performance". AMD could easily put out a card 40-50% faster than the current gen on HBM1, and it would be fine for the most part.
     
  3. Fox2232

    Fox2232 Guest

    Messages:
    11,808
    Likes Received:
    3,371
    GPU:
    6900XT+AW@240Hz
And nVidia in turn can easily go with 512-bit GDDR5, or maybe early 384-bit GDDR5X. In either case I do not think the GPUs will be eating much more than 150W, so there is plenty of room for power-hungry memory at high clocks.
     
  4. Noisiv

    Noisiv Ancient Guru

    Messages:
    8,230
    Likes Received:
    1,494
    GPU:
    2070 Super
For all of you who missed the memo:
There will be no HBM1 cards in the near future, and probably never.
Just plain ole GDDR5.

    Raja Koduri spelled it out for AMD.

And for Nvidia, where do I begin...
Hell would freeze over sooner than NV eating into their own margins by using something other than GDDR5, unless it was absolutely necessary. And it probably won't be, because they're used to living on low bandwidth compared to AMD.
Texture compression v4.0 or something. HBM2 only for the high end. Maybe HBM1 v2.
     

  5. Ieldra

    Ieldra Banned

    Messages:
    3,490
    Likes Received:
    0
    GPU:
    GTX 980Ti G1 1500/8000
I don't see HBM making it anywhere either; between the complexity of the interposer and the need to segment your designs into HBM/GDDR, I see GDDR5 on a large bus and GDDR5X on a smaller bus as the way forward. It'll be interesting to see how they handle GDDR5X availability; we know for certain that anything coming out this summer won't have GDDR5X, or if it does, it'll be in very limited availability.
     
  6. KissSh0t

    KissSh0t Ancient Guru

    Messages:
    13,968
    Likes Received:
    7,800
    GPU:
    ASUS 3060 OC 12GB
Maybe they will use HBM in CPUs with integrated graphics?
     
  7. Ieldra

    Ieldra Banned

    Messages:
    3,490
    Likes Received:
    0
    GPU:
    GTX 980Ti G1 1500/8000
May as well use eDRAM, which Intel is doing. I'm more interested in them integrating whatever comes out of their partnership with Micron in developing that crosspoint memory.
     
  8. Fox2232

    Fox2232 Guest

    Messages:
    11,808
    Likes Received:
    3,371
    GPU:
    6900XT+AW@240Hz
Well, the cost of HBM1 and the interposer is a bit overrated. You can get the cheapest Nano (read: full Fiji with poor cooling) for the price of an R9 390X. That's 30% less than the Fury X costs (read: exactly the same card and GPU with good cooling). And the Nano is sold because it is still profitable at that price.

While it may not be as economical as using GDDR5 where power consumption is not a problem (probably any 14/16nm GPU), AMD has already developed everything needed to use it and pushed its price down to a reasonable level. And HBM1 is capable of doubling its capacity per "block" by doubling the capacity of the stacks (slices). A block is limited to 4 stacks, so the only increase in bandwidth can come from increased clocks. But it is quite possible that HBM1 will be dropped in favor of HBM2, which doubles capacity and bandwidth at the same time by using up to 8 stacks per block. Or just using 8 stacks for bandwidth and halving VRAM density, still providing 1GB of VRAM per block (or any other combination, as HBM2 made quite a few improvements over HBM1).

And on this note, there were rumors in January that Samsung is ready for mass production of HBM2 with 8Gb = 1GB per stack (layer), therefore 4GB per 4-layer "block" (rough arithmetic below).

I do not think Samsung would go into mass production of something there is no demand for. And March is filled with similar rumors about Hynix, who should have mass production in Q3 2016, and therefore decent volumes in Q2 2016.
I wonder which companies are going to use all that HBM.
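
To put rough numbers on that, here is a minimal back-of-the-envelope sketch. The die sizes, stack heights, bus width and per-pin rates below are the commonly quoted HBM1/HBM2 figures, not vendor-confirmed specs:

```python
# Rough HBM capacity/bandwidth arithmetic for the figures discussed above.
# All values are the commonly cited ones, assumed for illustration only.

def stack_capacity_gb(die_gbit, dies_per_stack):
    """Capacity of one HBM stack in GB (8 gigabits = 1 gigabyte)."""
    return die_gbit * dies_per_stack / 8

def stack_bandwidth_gbs(bus_width_bits, data_rate_gbps):
    """Bandwidth of one stack in GB/s: bus width (bits) * per-pin rate (Gb/s) / 8."""
    return bus_width_bits * data_rate_gbps / 8

# HBM1 as used on Fiji: 2Gb dies, 4 dies per stack, 1024-bit bus, 1 Gb/s per pin.
hbm1_cap = stack_capacity_gb(2, 4)          # 1 GB per stack
hbm1_bw = stack_bandwidth_gbs(1024, 1.0)    # 128 GB/s per stack
print(f"HBM1: {hbm1_cap:.0f} GB / {hbm1_bw:.0f} GB/s per stack, "
      f"{4 * hbm1_cap:.0f} GB / {4 * hbm1_bw:.0f} GB/s with 4 stacks")

# Rumored Samsung HBM2: 8Gb dies, 4 dies per stack, 1024-bit bus, ~2 Gb/s per pin.
hbm2_cap = stack_capacity_gb(8, 4)          # 4 GB per stack
hbm2_bw = stack_bandwidth_gbs(1024, 2.0)    # 256 GB/s per stack
print(f"HBM2: {hbm2_cap:.0f} GB / {hbm2_bw:.0f} GB/s per stack, "
      f"{4 * hbm2_cap:.0f} GB / {4 * hbm2_bw:.0f} GB/s with 4 stacks")
```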
     
  9. Fender178

    Fender178 Ancient Guru

    Messages:
    4,194
    Likes Received:
    213
    GPU:
    GTX 1070 | GTX 1060
Yeah, and have a card that can overclock near what the Maxwell cards are doing now. If AMD can do that, it would be a huge plus for me.
     
  10. Ryu5uzaku

    Ryu5uzaku Ancient Guru

    Messages:
    7,552
    Likes Received:
    609
    GPU:
    6800 XT
Polaris will have either GDDR5 or that new version of it. It does not have the same memory controller as Fiji, but it has improved over Tonga.


It will be interesting to see what Vega is in the end. Polaris I can skip from the get-go, I think. Polaris 10 should be fast enough to beat the Fury X and 980 Ti.
     

  11. PrMinisterGR

    PrMinisterGR Ancient Guru

    Messages:
    8,132
    Likes Received:
    974
    GPU:
    Inno3D RTX 3090
Isn't the Fury X a tiny bit faster with the latest drivers on average? And usually much faster where lower-level APIs are concerned? Fiji gives me the underutilized-Hawaii vibe very strongly, but I have the feeling that Fiji owners won't have to wait as long (they are not even waiting now for 980 Ti parity).
     
  12. Ieldra

    Ieldra Banned

    Messages:
    3,490
    Likes Received:
    0
    GPU:
    GTX 980Ti G1 1500/8000
    Are we really gonna have this debate again? :p
     
  13. Noisiv

    Noisiv Ancient Guru

    Messages:
    8,230
    Likes Received:
    1,494
    GPU:
    2070 Super
Yeah... let's not.

One more parity like that, and we'll be downloading our latest driver fixes somewhere between Softpedia and The Pirate Bay.
     
  14. Fox2232

    Fox2232 Guest

    Messages:
    11,808
    Likes Received:
    3,371
    GPU:
    6900XT+AW@240Hz
No, there is no need. AMD makes only one global optimization for all cards, and that is API overhead optimization.
That affects anyone with a strong enough GPU and a slow enough CPU.

But there are no special gains outside this category for old games. If you took an HD 7970 and a game which got GCN optimizations in 2013, and ran a benchmark against this week's 16.3.2 WHQL, there would be no improvement in performance.

The same way, you would not see any special improvement between nVidia's optimized drivers back then and the newest one.

The issue is somewhere else. In those old games where the GTX 780 easily outran the HD 7970 by a lot, it still outruns it in the same way.
But in new games, it's a very different story. And kids running rampant saying that developers learned how to optimize for GCN are just dummies. Half of game developers today do not even have AMD cards. Otherwise M$ would not have released GOW in the state they did (the game was throwing artifacts on any GCN card and had such heavy stutter that you could actually call it a slideshow with gifs).

You have a sweet Maxwell card which, in games around Fiji's release, easily performed 15% better at 1080p, while Fiji managed to match it in some games at 4K.

In a year's time, when Pascal is the main thing around, you can compare it again.
You will see whether you were Keplered or not.
     
  15. Denial

    Denial Ancient Guru

    Messages:
    14,207
    Likes Received:
    4,121
    GPU:
    EVGA RTX 3080
Why is it a different story though? How come a 680, which was beating a 7970 back in 2012, is losing to it heavily now? How come the 7970 back then, a 3.7Tflop/3GB card, was losing to a 2.5Tflop/2GB card? I mean, people love to say "nvidia downgrade", but I think that's just nonsense.

And aren't most of the current issues only with Fiji itself? Most of what I read is that all the DX12 games/launches have had problems on Fiji but not on older GCN cards. Which would support the console optimization argument, since the consoles use GCN 1.1 cards, or 1.0 or whatever.

There are PC titles with issues on both AMD and Nvidia, but I can't imagine that a game developer who's designing a game for Xbox/PS4 would simply ignore all of GCN's strengths. And I'm not sure that the weird little issues (that are usually fixed quickly) are optimization problems and not just configuration problems.

I mean, I think it's pretty clear that developers optimize more heavily on consoles, as evidenced by the graphics shift during the Xbox 360/PS3 generation. Look at launch titles vs EOL titles: the difference in graphics is crazy, and yet the hardware is the same. With GCN being relatively different and new when the consoles launched, I don't see why the same thing wouldn't eventually occur, with developers targeting rendering optimizations and underlying code towards those strengths. On the flipside, if Nvidia had Maxwell units in the PS4/Xbox, I guarantee you'd see the opposite, with developers making heavier use of geometry-based effects in games.
     

  16. Ieldra

    Ieldra Banned

    Messages:
    3,490
    Likes Received:
    0
    GPU:
    GTX 980Ti G1 1500/8000
We really need to test Kepler cards with more VRAM to get to the root of this.

Kepler seems to be doing fine in Battlefront at 1080p:
    http://www.tomshardware.com/reviews/star-wars-battlefront-performance-benchmark,4382.html

980 Ti vs Fury X driver performance:

    http://www.babeltechreviews.com/driver-performance-evaluation-featuring-fury-x-vs-gtx-980-ti/
     
    Last edited: Mar 29, 2016
  17. Fox2232

    Fox2232 Guest

    Messages:
    11,808
    Likes Received:
    3,371
    GPU:
    6900XT+AW@240Hz
nVidia is not downgrading, and I did not say that. I explicitly wrote that the performance advantage in old games did not change, because nVidia did not make global degradation changes for Kepler.

IIRC there were a few cases where a game performed poorly on all nV cards upon launch. Then nV releases an optimized driver and Maxwell improves by miles, Kepler remains in the dust, the internet goes crazy, and a few days later Kepler gains its optimizations too.

It's only nVidia's good will whether Kepler gets them and whether they go as deep as in 2012...
There may be many reasons why the GTX 680/770 is obsolete when it used to beat the HD 7970 game after game. But there are little to no arguments explaining how the GTX 780 could end up the way it did.

Or why the GTX 780 Ti was faster than the GTX 970 by around 10% and is now 10% slower.
Take an old game and the GTX 780 Ti will still have that benefit. Take a new game and the GTX 970 is the leader. The GTX 970 did not get a global boost like the HD 7970 did with Catalyst 12.11b.
Maxwell gets better/deeper optimizations than Kepler these days. Or do you expect nVidia to hire extra software engineers to fully support hardware which they no longer sell?

I did design a very easy test for it. You have probably read the methodology: pick one significant game per 6 months. For each of those games, run a benchmark with the driver from before the game got its optimizations, and then with the newest driver which contains all of them.
Observe how much Kepler benefited from them, and compare that to how much Maxwell benefits. You can do the same for AMD cards (a minimal sketch of the comparison is below).
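
For illustration, here is a minimal sketch of that comparison. The game names and fps figures are hypothetical placeholders, not real benchmark results:

```python
# Compare how much each architecture gained from game-specific driver optimizations.
# Data layout: game -> architecture -> (fps on launch driver, fps on newest driver).
# All numbers below are made up, purely to show the method.

results = {
    "Game-2014-H2": {"Kepler": (52.0, 53.0), "Maxwell": (60.0, 67.0)},
    "Game-2015-H1": {"Kepler": (45.0, 46.5), "Maxwell": (55.0, 62.0)},
}

def gain_pct(before, after):
    """Percentage improvement from the pre-optimization driver to the newest one."""
    return (after - before) / before * 100

for game, archs in results.items():
    for arch, (before, after) in archs.items():
        print(f"{game:>12} {arch:<8} {gain_pct(before, after):5.1f}% gain from driver optimizations")
```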

It seemingly looks like AMD makes huge improvements in the driver on a global level. But I gained nothing in the last 4-5 months, because the improvements AMD delivered are for low-clocked/low-IPC CPUs. And all those sites doing GPU benchmarks run ridiculously strong CPUs, much stronger than my poor i5.
     
  18. Ieldra

    Ieldra Banned

    Messages:
    3,490
    Likes Received:
    0
    GPU:
    GTX 980Ti G1 1500/8000
The BabelTech article I posted above compares performance across drivers for a big set of games. Interesting to see the GTX 680 still kicking it with the 7970.
     
  19. Fox2232

    Fox2232 Guest

    Messages:
    11,808
    Likes Received:
    3,371
    GPU:
    6900XT+AW@240Hz
Exactly, the GTX 680 still performs 10% faster in 2013 games like Crysis 3, and in all the other older games.
But then you get new games like 2015's The Witcher 3, where it barely matches the HD 7970, and that's a game filled with nVidia's stuff.
They took 2015's Batman with all the GameWorks stuff ON, and there it is 44fps for the GTX 680 and 62fps for the HD 7970... (that's a big wtf?!?)

Then you get those November/December 2015 games: AC Syndicate, JC3, R6: Siege, Dirt: Rally...
In all of those the GTX 980 Ti has a 10-20% advantage over the Fury X, but the GTX 680 is behind the HD 7970 by 5 to 16%.

So, are those games (filled with nVidia's technologies) so well optimized for console GCN that Fiji is at an even bigger disadvantage to GM200? And then these very same non-existent optimizations kick in for the HD 7970 to beat the GTX 680?

If you think it's only Fiji which somehow does not get those optimizations, then why can the same disadvantage be seen with the R9 290X losing breath to the GTX 970 by that much?

To me, it looks like those games favor nVidia with their technologies a bit more than is healthy, but old Kepler did not get any of that love.
     
  20. PrMinisterGR

    PrMinisterGR Ancient Guru

    Messages:
    8,132
    Likes Received:
    974
    GPU:
    Inno3D RTX 3090
    Let us not. :infinity:
I am not so sure about that, man. Older games that I have personally played, like Mass Effect and SC2, play MUCH better with drivers since 1018.0. Again, I have no numbers to prove anything; I haven't tested seriously.

    Is there any chart with the 680 in the last link? Did I miss it?
     
