Rumor: AMD drops RX Vega Primitive Shaders from its drivers

Discussion in 'Frontpage news' started by NvidiaFreak650, Jan 23, 2018.

  1. Eastcoasthandle

    Eastcoasthandle Guest

    Messages:
    3,365
    Likes Received:
    727
    GPU:
    Nitro 5700 XT
    Hmm, I think what this means is:
    -AMD is not investing any resources into getting this working through the drivers.
    -AMD is telling developers to incorporate it in-game. If that's the case, they may bring driver support back later.

    So on one hand they aren't technically dropping support for it; they are just delegating responsibility to developers to use it.
    The question is what incentive developers have to implement it. I would be shocked if FC5 even thinks twice about implementing it at this point.
     
  2. Neo Cyrus

    Neo Cyrus Ancient Guru

    Messages:
    10,793
    Likes Received:
    1,396
    GPU:
    黃仁勳 stole my 4090
    Really? The last explanation of it I heard seemed to say Maxwell does jackshit as far as actual async compute goes.
     
  3. Alessio1989

    Alessio1989 Ancient Guru

    Messages:
    2,952
    Likes Received:
    1,244
    GPU:
    .
    An asynchronous command flush on top of a static instruction scheduler, what a wonderful optimization... Wait, it is completely useless in real-time applications and it fights against driver optimizations? Never mind...

    [sorry, I couldn't resist..]

    No, it's not a conspiracy theory; "Pascal" was a huge improvement in "mixing" compute and graphics tasks.
     
  4. user1

    user1 Ancient Guru

    Messages:
    2,780
    Likes Received:
    1,302
    GPU:
    Mi25/IGP
    All of this smells like what happens when you don't have enough budget and time to meet performance expectations. It seems like they tried a radical, high-risk approach in order to hit their performance targets. Less than a year ago they were still saying the NGG path had 10x the performance of the old path, so I can see why they tried it.

    Ultimately, based on the hints provided by the AMDVLK code dump, while the hardware is not completely borked, there are bugs that make implementing primitive shaders and NGG difficult.

    Here are some examples
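    The AMDVLK snippets themselves aren't quoted here, but as a rough idea of what a primitive shader / NGG fast path is supposed to do, here's a minimal CPU-side sketch of the kind of per-triangle culling such a stage would run before the fixed-function rasterizer. The struct names, winding convention and thresholds are illustrative assumptions, not AMD's code.

    Code:
    #include <cmath>
    #include <cstdio>

    struct Vec4 { float x, y, z, w; };   // clip-space vertex position

    // Signed area of the projected triangle (counter-clockwise = front-facing here).
    static float SignedArea(const Vec4& a, const Vec4& b, const Vec4& c) {
        float ax = a.x / a.w, ay = a.y / a.w;
        float bx = b.x / b.w, by = b.y / b.w;
        float cx = c.x / c.w, cy = c.y / c.w;
        return 0.5f * ((bx - ax) * (cy - ay) - (cx - ax) * (by - ay));
    }

    // True if the triangle can be thrown away before rasterization:
    // entirely outside one clip plane, back-facing, or (nearly) zero-area.
    static bool CullTriangle(const Vec4& a, const Vec4& b, const Vec4& c) {
        const float signs[2] = { -1.0f, 1.0f };
        for (int axis = 0; axis < 3; ++axis) {
            for (float s : signs) {
                auto outside = [&](const Vec4& v) {
                    float p = (axis == 0) ? v.x : (axis == 1) ? v.y : v.z;
                    return s * p > v.w;   // simplified symmetric clip volume
                };
                if (outside(a) && outside(b) && outside(c)) return true;
            }
        }
        return SignedArea(a, b, c) <= 1e-6f;  // back-facing or degenerate
    }

    int main() {
        Vec4 a{-0.5f, -0.5f, 0.5f, 1.0f};
        Vec4 b{ 0.5f, -0.5f, 0.5f, 1.0f};
        Vec4 c{ 0.0f,  0.5f, 0.5f, 1.0f};
        std::printf("front-facing triangle culled: %d\n", CullTriangle(a, b, c)); // 0
        std::printf("back-facing  triangle culled: %d\n", CullTriangle(a, c, b)); // 1
    }

    The point of doing this in the geometry front end is to discard back-facing, zero-area and off-screen triangles before they consume rasterizer resources; the AMDVLK hints suggest the hardware path for this exists but is awkward to drive.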

     
    Last edited: Jan 24, 2018

  5. Denial

    Denial Ancient Guru

    Messages:
    14,207
    Likes Received:
    4,121
    GPU:
    EVGA RTX 3080
    I never said it was an optimization - I said it supported it. Its support was geared towards specific applications and not gaming - but that's where AMD's marketing team came in.

    Ryan Smith from Anandtech wrote it best:

    But then, after AOTS, the entire community became obsessed with the idea that async compute, when implemented the way AMD does it, increases gaming performance. Which is fine, but remember that the whole complaint was that Nvidia had numerous whitepapers/slides and a statement saying Maxwell supported it (which again is technically true, and even useful for some applications) - but because it didn't work specifically in games, everyone continues to believe to this day that Maxwell never supported it, which is untrue.

    Regardless, TJ's analogy is still bad, because Nvidia didn't go around touting async compute as a performance gain that would be enabled in a future driver update, like AMD did with this. No one even knew what async compute was until the AOTS beta came out nearly a year after Maxwell launched.
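    For anyone following along, the API-level idea being argued about is just this: the application records graphics and compute work on separate queues and lets the driver/hardware overlap them; whether that overlap actually happens concurrently (and whether it helps) is down to the GPU and driver, which is the whole Maxwell/Pascal/GCN argument. Here is a minimal Vulkan-flavoured fragment, assuming the device, command buffers, semaphore and queue family indices already exist; the names are placeholders.

    Code:
    #include <vulkan/vulkan.h>

    // Submit compute work on its own queue, then graphics work that waits on it.
    // Independent graphics work can overlap with the compute queue if the
    // hardware/driver actually executes the two queues concurrently.
    void SubmitGraphicsAndAsyncCompute(VkDevice device,
                                       uint32_t graphicsFamily, uint32_t computeFamily,
                                       VkCommandBuffer graphicsCmd, VkCommandBuffer computeCmd,
                                       VkSemaphore computeDone)
    {
        VkQueue graphicsQueue = VK_NULL_HANDLE;
        VkQueue computeQueue  = VK_NULL_HANDLE;
        vkGetDeviceQueue(device, graphicsFamily, 0, &graphicsQueue);
        vkGetDeviceQueue(device, computeFamily,  0, &computeQueue);   // separate queue family

        // Kick off the compute workload; it signals a semaphore when finished.
        VkSubmitInfo computeSubmit{};
        computeSubmit.sType                = VK_STRUCTURE_TYPE_SUBMIT_INFO;
        computeSubmit.commandBufferCount   = 1;
        computeSubmit.pCommandBuffers      = &computeCmd;
        computeSubmit.signalSemaphoreCount = 1;
        computeSubmit.pSignalSemaphores    = &computeDone;
        vkQueueSubmit(computeQueue, 1, &computeSubmit, VK_NULL_HANDLE);

        // Graphics work that consumes the compute result waits on that semaphore.
        VkPipelineStageFlags waitStage = VK_PIPELINE_STAGE_FRAGMENT_SHADER_BIT;
        VkSubmitInfo graphicsSubmit{};
        graphicsSubmit.sType              = VK_STRUCTURE_TYPE_SUBMIT_INFO;
        graphicsSubmit.waitSemaphoreCount = 1;
        graphicsSubmit.pWaitSemaphores    = &computeDone;
        graphicsSubmit.pWaitDstStageMask  = &waitStage;
        graphicsSubmit.commandBufferCount = 1;
        graphicsSubmit.pCommandBuffers    = &graphicsCmd;
        vkQueueSubmit(graphicsQueue, 1, &graphicsSubmit, VK_NULL_HANDLE);
    }

    On hardware/drivers that can't run the two queues concurrently, the same code still works, it just serializes - which is roughly the "supported but not a gaming win" distinction being made above.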
     
    Last edited: Jan 24, 2018
  6. Venix

    Venix Ancient Guru

    Messages:
    3,472
    Likes Received:
    1,972
    GPU:
    Rtx 4070 super
    Is it just me, or whenever they want a feature to die, do they pass it to developers to implement? Might as well, when that happens, assume AMD/Intel/Nvidia took the feature out to a field and shot it with a shotgun.
     
  7. Neo Cyrus

    Neo Cyrus Ancient Guru

    Messages:
    10,793
    Likes Received:
    1,396
    GPU:
    黃仁勳 stole my 4090
    That makes it a lie on nVidia's part, because async compute in games is the only thing 99.99% of us give a single frack about, and they know it. They knew people would think it referred to games. It's a lie without being a strictly technical lie, even more of a lie than their "4GB" nonsense.
     
  8. sykozis

    sykozis Ancient Guru

    Messages:
    22,492
    Likes Received:
    1,537
    GPU:
    Asus RX6700XT
    Why is AMD going to lose Ryzen users? For most of us who bought into Ryzen, it performs and functions exactly as expected. My RX 470 has performed and functioned exactly as expected as well. This only affects Vega users. It has nothing to do with Ryzen CPUs or non-Vega GPUs.
     
  9. NvidiaFreak650

    NvidiaFreak650 Master Guru

    Messages:
    691
    Likes Received:
    620
    GPU:
    Nvidia RTX 4080 FE
  10. Keesberenburg

    Keesberenburg Master Guru

    Messages:
    886
    Likes Received:
    45
    GPU:
    EVGA GTX 980 TI sc
    Primitive? We don't need 10,000 BC shaders.
     

  11. user1

    user1 Ancient Guru

    Messages:
    2,780
    Likes Received:
    1,302
    GPU:
    Mi25/IGP
    Well, based on that, it might have just been a miscommunication somewhere down the line.

    Interesting that they mention that Wolfenstein's compute shaders can achieve the same effect; perhaps it won't matter anyway.
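    The compute-shader approach mentioned there reportedly achieves a similar effect: a compute pass tests each triangle and compacts the survivors into a new index buffer plus an indirect draw count. Here is a rough CPU-side sketch of that compaction step, with an illustrative visibility test; all names here are assumptions, not the game's actual code.

    Code:
    #include <cstdint>
    #include <cstdio>
    #include <functional>
    #include <vector>

    // Mirrors the kind of struct an indirect draw would read on the GPU.
    struct IndirectDrawArgs {
        uint32_t indexCount    = 0;
        uint32_t instanceCount = 1;
    };

    // Appends the indices of every visible triangle to 'compacted' and returns
    // the indirect-draw arguments describing the compacted buffer.
    IndirectDrawArgs CompactVisibleTriangles(
            const std::vector<uint32_t>& indices,
            const std::function<bool(uint32_t, uint32_t, uint32_t)>& isVisible,
            std::vector<uint32_t>& compacted)
    {
        compacted.clear();
        for (size_t t = 0; t + 2 < indices.size(); t += 3) {
            if (isVisible(indices[t], indices[t + 1], indices[t + 2])) {
                compacted.push_back(indices[t]);
                compacted.push_back(indices[t + 1]);
                compacted.push_back(indices[t + 2]);
            }
        }
        IndirectDrawArgs args;
        args.indexCount = static_cast<uint32_t>(compacted.size());
        return args;
    }

    int main() {
        std::vector<uint32_t> indices = {0, 1, 2,  3, 4, 5,  6, 7, 8};  // three triangles
        std::vector<uint32_t> compacted;
        // Toy visibility rule: pretend the middle triangle fails the culling tests.
        auto isVisible = [](uint32_t i0, uint32_t, uint32_t) { return i0 != 3; };
        IndirectDrawArgs args = CompactVisibleTriangles(indices, isVisible, compacted);
        std::printf("surviving index count: %u of %zu\n", args.indexCount, indices.size());
    }

    On the GPU the compacted buffer and count would feed an indirect draw (vkCmdDrawIndexedIndirect / ExecuteIndirect) instead of being returned to the CPU, so the culling never touches the CPU at all.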
     
  12. McFly121

    McFly121 Member

    Messages:
    23
    Likes Received:
    4
    GPU:
    ASUS RX 470 4gb
    Yes, Nvidia downplayed async compute because their shaders are more efficient than AMD's and can't be compared directly 1:1, shader vs shader. AMD has more shaders for roughly the same performance.
     
  13. Redemption80

    Redemption80 Guest

    Messages:
    18,491
    Likes Received:
    267
    GPU:
    GALAX 970/ASUS 970
    Time Spy gained from async being enabled, as I think it used an implementation that wasn't designed just for AMD hardware.
    Ironically, it was also faster than any other implementation, even on AMD GPUs.

    As for this, I'm sure some might be disappointed, but I don't think it's lawsuit material. It is pretty damaging for AMD image-wise, as any future features that get announced will be met with cynicism. It also fuels the idea that AMD is lazy when it comes to drivers.
     
  14. fantaskarsef

    fantaskarsef Ancient Guru

    Messages:
    15,750
    Likes Received:
    9,641
    GPU:
    4090@H2O
    I'm not sure. Miners don't care about such features. :rolleyes:
     
  15. Redemption80

    Redemption80 Guest

    Messages:
    18,491
    Likes Received:
    267
    GPU:
    GALAX 970/ASUS 970
    TBH, I don't think the mining craze has helped AMD's image either.

    I'm sure it has helped the bank account though.
     
    fantaskarsef likes this.

  16. Evildead666

    Evildead666 Guest

    Messages:
    1,309
    Likes Received:
    277
    GPU:
    Vega64/EKWB/Noctua
    The GPUs with the best margins are the low/midrange cards. They sell a lot more than the high-end cards, and that generates a large amount of the revenue.

    The "Crazy MCM" setup is perfectly viable, and cheaper than a large monolithic GPU.
    Intel has probably paid AMD for the external dGPU used in their chips.

    AMD can then re-use this and pop 4 of them onto a single card.
    Each module has 1,536 shaders, assuming that is all of them; I think there might be 2,048 shaders on the full version, as that would leave some leeway for yields.

    In any case, it would enable a card with roughly 6,144 to 8,192 shaders (4 x 1,536 or 4 x 2,048) and 16GB of VRAM.

    AMD won't be game over. They have investors, and their CPUs are doing very well. They are selling all the GPUs they can make at the moment.
    And things are only going to get better with the APUs arriving, and the shrinks of Zen (and Vega, for the Pro parts).

    It's going to be very interesting how it plays out :) Yes.
     
  17. Evildead666

    Evildead666 Guest

    Messages:
    1,309
    Likes Received:
    277
    GPU:
    Vega64/EKWB/Noctua
    They are selling all they can make. :) That's good.
    The only problem is the price gouging, and that is not their fault.
    There are people out there making 100% profit on the resale of these cards... loony.
     
  18. RooiKreef

    RooiKreef Guest

    Messages:
    410
    Likes Received:
    51
    GPU:
    MSI RTX3080 Ventus
    Well, this is sort of what I knew was coming... AMD normally comes up with great ideas, just like Nvidia. The difference is that developers jump on Nvidia features immediately, while AMD features get pushed to the side. Look at Mantle and AMD TrueAudio. The thing is, why must AMD spend more money and resources on something that developers don't want to use? It doesn't sound shady at all to me, just normal business practice.
     
  19. Crazy Serb

    Crazy Serb Master Guru

    Messages:
    270
    Likes Received:
    69
    GPU:
    270X Hawk 1200-1302
    And if we did not have Mantle, we would never have had the mostly bad DX12 implementations and Vulkan in its glory with Doom. We would be stuck at 4 cores 'till 2030+.

    As for async on Maxwell, nV may have said that Maxwell supports it, but Maxwell was already out, so I don't get why people still whine/talk about it. This was one of the marketing points for Vega, so obviously this is far shadier than Maxwell's async (or even Pascal's, for that matter). And it's not like nV cards need full hardware-level async. I won't buy/recommend AMD cards ever again.

    For MCM, I don't think AMD can deliver, because Vega has insane power draw (since it is literally GCN 4.0 or whatever, with more "dead on arrival" features), and with weaker GPUs they are going to lose perf/$ (in a world without mining), because we all know how well SLI/CrossFire scales. Scaling will probably be better because of a faster link, but until we see changes in how frames are rendered it will still be hit or miss. And if people expect devs to optimize for MCM...
     
  20. Fox2232

    Fox2232 Guest

    Messages:
    11,808
    Likes Received:
    3,371
    GPU:
    6900XT+AW@240Hz
    Would not be the first time AMD/ATi developed something good and it ended up in the trash. TruForm was probably one of the more important technologies. It happens from time to time; you just don't notice, because support gets removed without telling the community.

    Secondly: "I won't buy/recommend AMD cards ever again" is very shortsighted, unless you believe that AMD's GPU division is going to be closed soon. And it says that you are a bit weak-minded.
    People buy hardware based on the performance it delivers at the time of purchase, not on the presumption that it will perform better in a year.

    Your view of MCM as SLI/CF is wrong. Look at it as a big GPU cut in half and then connected again with an interposer. It will work in exactly the same way as the whole chip, with the exception of a tiny latency increase. If it is cut in a similar way to Ryzen CPUs (interconnect bus), then there will be close to no performance impact compared to the whole chip.
    => So, having a big 8192-SP chip or 1 control chip + 4x 2048-SP chips will prove to deliver the same performance within the margin of error. Actually, that glued-together MCM package may have more stable power delivery, better cooling and better clocks.
     
    Evildead666 likes this.
