Polaris Validation Failed, Might Launch in October Now

Discussion in 'Frontpage news' started by Hilbert Hagedoorn, May 12, 2016.

  1. Dazz

    Dazz Maha Guru

    Messages:
    1,010
    Likes Received:
    131
    GPU:
    ASUS STRIX RTX 2080
Nope, they didn't. Nvidia patched their driver so it doesn't expose the feature; as such, the game thinks it isn't supported and doesn't use async on Nvidia cards. But we don't know if it's the same for Pascal.

I wonder what counts as a proper benchmark? Because thus far the Nvidia fanboys have disregarded every DX12 game on the market. Are Unreal Engine titles the only ones you consider real benchmarks?
     
  2. kevnb

    kevnb Guest

    Messages:
    365
    Likes Received:
    11
    GPU:
    Zotac 1070
I know AMD really harped on async compute in their premature marketing, but is it really as big a deal as people think it is? It seems to me it might be more of a great marketing term than anything else. I know AMD seems to be doing well in DX12 so far, but I think it's driver related, and AMD is banking on people switching to red in hopes of future performance...
Maybe I'm jaded, but I'm not trusting AMD to fulfill promises of anything in the future again.
     
  3. Denial

    Denial Ancient Guru

    Messages:
    14,206
    Likes Received:
    4,118
    GPU:
    EVGA RTX 3080
Rise of the Tomb Raider isn't Unreal, and the Ti wins in that.

In Quantum Break the Ti loses by 3 fps at stock, but you can overclock almost every single Ti by 35%, and then it just blows past the Fury X. You can tie a Fury X with an overclock in Ashes of the Singularity too.

    Pascal has better pre-emption but no Async afaik.
     
    Last edited: May 12, 2016
  4. kevnb

    kevnb Guest

    Messages:
    365
    Likes Received:
    11
    GPU:
    Zotac 1070
    To an extent yes, but once people are happy with a brand they tend to want to stick with it.
     

  5. Aelder

    Aelder Guest

    Messages:
    37
    Likes Received:
    0
    GPU:
    **** you
    Oh man, you can't say **** like this... They're gonna get you in your sleep for this.

QB results vary hugely depending on the reviewer. The main point is, it runs like ASS at 1080p (with the disgusting upscaling disabled) no matter what card you're using.

    The game should be relegated to the videogaming latrine

RotTR actually really benefited from the move to DX12; it performs far better and without stutters.
     
  6. fantaskarsef

    fantaskarsef Ancient Guru

    Messages:
    15,698
    Likes Received:
    9,575
    GPU:
    4090@H2O
    IF this rumor proves to be true, like I already said, it's very, very bad for AMD.
     
  7. kevnb

    kevnb Guest

    Messages:
    365
    Likes Received:
    11
    GPU:
    Zotac 1070
Quantum Break isn't very good anyway.
     
  8. Texter

    Texter Guest

    Messages:
    3,275
    Likes Received:
    332
    GPU:
    Club3d GF6800GT 256MB AGP
Polaris 10 is using GlobalFoundries' Samsung-licensed 14nm node, but Polaris is also supposed to be on TSMC's 16nm node like Pascal.

    So...errrr...GF having problems with a new node again?

    little edit...apparently Polaris is from both...can't imagine both GF and TSMC struggling, though
     
    Last edited: May 12, 2016
  9. waltc3

    waltc3 Maha Guru

    Messages:
    1,445
    Likes Received:
    562
    GPU:
    AMD 50th Ann 5700XT
The main thing I have against Nvidia is that I don't trust the company. There are so many examples in the past of the company fibbing in its marketing specs--just like they did with Maxwell and D3D12--you can't be "12_1" compliant without async compute hardware. First they said they'd "turn it on" (lol--they've said that more than once about more than one feature), and then they just stopped talking about it altogether--but they are still pushing fraudulent D3D specs for Maxwell even now. They hope the customer will be too stupid to notice--and sadly, he often is. I don't believe much of anything they say anymore. I got a bellyful of Nvidia in the late 90's and early 00's. You should have seen how Nvidia fought tooth and nail against 3dfx's introduction of FSAA in 3D gaming--just because Nvidia couldn't match it. Never forget that stuff. Ever.
     
  10. Denial

    Denial Ancient Guru

    Messages:
    14,206
    Likes Received:
    4,118
    GPU:
    EVGA RTX 3080
Async essentially fills in the gaps of underutilized shaders. If the shaders are going underutilized, async will automagically shift in compute code to run simultaneously, utilizing as close to 100% of the pipeline as possible. It's kind of similar to SMT for CPUs.

Regardless, in most titles it only shows a 4-6% gain in performance. In rare cases it can show far more, up to 20% for example in the Ashes 4K benchmark on a Fury X. But in reality all that tells me is that the Fury X is being heavily underutilized at 4K in terms of graphics commands, probably because it's limited to 64 ROPs.

It basically boosts pipeline efficiency by injecting compute code into the pipeline when it's not functioning at 100%. Which is good, but it comes at the cost of increased chip complexity, die size and power consumption. I'm not sure if those trades are worth a 4-6% increase in performance. That being said, we have no idea how engine development is going to change going forward, or whether there will be a greater emphasis on compute.

    At least this is how I understand it.
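    The gap-filling idea above can be sketched as a toy model (purely illustrative: the slot counts, the per-cycle loads, and the `run_pipeline` helper are made up for this post, and a real GPU scheduler is nothing this simple):

    ```python
    # Toy model of "async compute" gap-filling. Each cycle, graphics work
    # occupies some of the shader slots; async compute absorbs whatever
    # slots are left over, the way SMT fills idle CPU execution ports.

    def run_pipeline(graphics_load, compute_work, slots=64):
        """graphics_load: slots used by graphics each cycle.
        compute_work: total compute units queued for gap-filling.
        Returns (compute units completed, resulting pipeline utilization)."""
        done_compute = 0
        idle_slots = 0
        for used in graphics_load:
            free = slots - used                      # slots graphics left empty
            idle_slots += free
            fill = min(free, compute_work - done_compute)
            done_compute += fill                     # async shifts compute into the gap
        total = slots * len(graphics_load)
        utilization = 1 - (idle_slots - done_compute) / total
        return done_compute, utilization

    # Some cycles are bottlenecked elsewhere (say, ROP-limited) and use
    # few shader slots, so graphics alone leaves gaps:
    load = [64, 40, 64, 20, 64, 50]
    done, util = run_pipeline(load, compute_work=100)
    ```

    In this toy run the lightly loaded cycles (40, 20, 50) leave 82 free slots, all of which the compute queue absorbs, which mirrors the Fury X argument above: the bigger the graphics-side gaps, the bigger the apparent async gain.
    
    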
     

  11. PrMinisterGR

    PrMinisterGR Ancient Guru

    Messages:
    8,125
    Likes Received:
    969
    GPU:
    Inno3D RTX 3090
    How can this be true when the PS4 NEO dev kits have Polaris GPUs at 911MHz?

Also, there is another news item saying that not only is Polaris on time, but Vega is 5 months early :nerd:
     
  12. Aelder

    Aelder Guest

    Messages:
    37
    Likes Received:
    0
    GPU:
    **** you


    Oh boy...

You can, absolutely, be D3D12 feature level 12_1 compliant without 'async compute'.

'Async compute' is actually just concurrent execution of graphics + compute; it is not a requirement, it's simply a capability exposed by the new API. AMD calls it Async Shaders.


Where are you getting the Fury X 20% gain at 4K?


Yeah, that was a mistake; it was just a rumor posted by a random guy on a German forum... It was actually Polaris being late.
     
  13. Denial

    Denial Ancient Guru

    Messages:
    14,206
    Likes Received:
    4,118
    GPU:
    EVGA RTX 3080
    Anandtech

    http://images.anandtech.com/graphs/graph10067/80354.png

To me though, if async is giving a 20% performance gain, that means the shaders are being underutilized for graphics, which makes no sense, as utilization should increase at higher resolutions.

The only thing I can think of is that the 64 ROPs limit the Fury X's 4K performance in certain situations. That's where async comes in: it can fill those gaps and boost performance in other ways.
     
    Last edited: May 12, 2016
  14. Aelders

    Aelders Guest

    Messages:
    870
    Likes Received:
    0
    GPU:
    980Ti G1 @ 1490|8000
That was the beta; the release version presents very different results.

Also, the ROP theory is possible. Combined with 4x the per-pixel load compared to 1080p, that should really saturate the shader array.
     
    Last edited: May 12, 2016
  15. PrMinisterGR

    PrMinisterGR Ancient Guru

    Messages:
    8,125
    Likes Received:
    969
    GPU:
    Inno3D RTX 3090
    Why is this true? AMD has been showing Polaris silicon since January.
     

  16. Aelders

    Aelders Guest

    Messages:
    870
    Likes Received:
    0
    GPU:
    980Ti G1 @ 1490|8000
Well, it's not like they showed you the clocks. They may well have had working ES silicon (at low clocks) since January, but for some reason they are only now going for a new A1 stepping. It's not certainly true, obviously, but it lines up with recent reports we got from AIBs.

This is more credible than Vega releasing in October, which was literally a random guy on a German forum making that statement before it was parroted by everyone and their dog. That guy actually misunderstood a post about Polaris being delayed to October and thought it was Vega.

    It matches with AIB reports that they have nothing for Polaris @ Computex.

    The guy who originated this rumor actually has a track record of accurate leaks
     
  17. Mugsy

    Mugsy Guest

    Messages:
    287
    Likes Received:
    15
    GPU:
    RX 5700 XT Pulse
    So much bull**** people are willing to believe...incredible...
     
  18. Denial

    Denial Ancient Guru

    Messages:
    14,206
    Likes Received:
    4,118
    GPU:
    EVGA RTX 3080
    I think Polaris 10/11 are both 14nm GF.

I think the GPUs going into Zen, based on the Polaris architecture, are being developed on 16nm TSMC, along with the entire Zen processor.
     
  19. Aelders

    Aelders Guest

    Messages:
    870
    Likes Received:
    0
    GPU:
    980Ti G1 @ 1490|8000
Well, can you explain why AMD have been ****ting on their own parade, talking about Polaris left, right, and center since January, painting it as the second coming of Christ... and not a peep heard from them since? They had a similar problem with the R600 series... Took them two quarters to find the problem in their cell libraries.

    TWO QUARTERS
     
  20. Aelders

    Aelders Guest

    Messages:
    870
    Likes Received:
    0
    GPU:
    980Ti G1 @ 1490|8000
    I've been hearing many people tell me Polaris is also TSMC. I find that very weird, and dangerous. AMD has a bad track record with their libs, if I'm not mistaken they claimed they were licensing custom libs for Polaris... If they are having trouble clocking on GF as the rumor claims, I can't imagine splitting their work over different libs for TSMC/GF is a good idea lol

    God I really hope this isn't true... It would be such a damned loss for everyone if Polaris is DOA
     
