Intel Arc A380 is plagued with software flaws, often unplayable

Discussion in 'Frontpage news' started by Hilbert Hagedoorn, Jul 20, 2022.

  1. schmidtbag

    schmidtbag Ancient Guru

    Messages:
    8,010
    Likes Received:
    4,383
    GPU:
    Asrock 7700XT
    Intel has been very active with Arc development on Linux. From what I recall, their Xe drivers have performed very well. I expect their Linux performance to be stable and fast, despite the immaturity.
     
    SamuelL421 and PrMinisterGR like this.
  2. SamuelL421

    SamuelL421 Master Guru

    Messages:
    271
    Likes Received:
    198
    GPU:
    RTX 4090 / RTX 5000
    I'm still happy there is a third player again for GPUs. There haven't been 3(+) big, competitive names in the space since the late 90's.

    Pity these aren't better out of the gate, but it is still promising they achieved even this much in one generation of dedicated graphics card development. Maybe next year, next gen of the cards we will see something impressive.

    For me personally, I'm even more interested in a first-gen Arc GPU now - I love collecting obscure tech and janky stuff that requires tinkering to work.
     
    Maddness and PrMinisterGR like this.
  3. ruthan

    ruthan Master Guru

    Messages:
    573
    Likes Received:
    106
    GPU:
    G1070 MSI Gaming
    We knew from the beginning that drivers would be a problem.. no surprise.. even Ryan Shrout, ex-PC Perspective, knows that these cards are a flop.. Intel needed to invest a lot in drivers and they just did not.
     
  4. Crazy Serb

    Crazy Serb Master Guru

    Messages:
    270
    Likes Received:
    69
    GPU:
    270X Hawk 1200-1302
    I will probably buy it and completely replace my 3200G system since HW acceleration always had various issues (depending on driver and app/browser).

    The A770 will probably end up in a worse position, because the drivers should mostly have the same issues in games. Even if framerates are good, I would not buy a card that does 140 fps on average with 60 fps in the 1% lows...
     

  5. PrMinisterGR

    PrMinisterGR Ancient Guru

    Messages:
    8,128
    Likes Received:
    971
    GPU:
    Inno3D RTX 3090
    The 5000 and 6000 generations were completely different beasts from GCN. GCN was indeed the way forward, and the only reason AMD survived the dark times, since it won them the console contracts. The moment the consoles had AMD tech, games would get some optimization at least, but that couldn't work with the DX11 driver, hence the push for a lower-level API. Which was needed, to be fair, and AMD gave a huge push to Vulkan and DX12.

    Funny thing is that among the three, it's Intel that has the most impactful and dramatic implementation.

    If you have a weak CPU, one of the best ways to make up for it is to offload work to a less latency-sensitive processor. Async compute was Cerny's brainchild and was used for exactly that, even for things like audio. It's funny how many things Sony has done for PC gaming, indirectly.
    Nvidia didn't care because their driver and GPU design basically hit 100% utilization almost from the get-go. You had "Fine Wine" because the AMD driver couldn't keep up with feeding the hardware, and was optimized starting from 2015, really really slowly.
    I was here when it happened and had GCN (which I still like as an architecture).

    This is a bit of a semantics game. Although I like the result of what AMD did, I can't give them the innovation credit, because most of their moves are clearly reactions to things they didn't innovate themselves.
    I'm talking strictly software here, to be clear, although I'm not a fan of RDNA up to now, still waiting to see 3.0.

    To me it's obvious that they want to support only modern platforms and modern APIs and not deal with the thousands of years of cruft.

    They even split games into three tiers, and they said that their pricing would be set according to the performance of the slowest tier, the legacy games. In Linus' video the mid-range card had some impressive performance in Cyberpunk.


    I don't understand the hate boner everyone here seems to have for Digital Foundry. They were the first to talk about AMD's driver tanking GCN, and they're basically a tech enthusiast channel.

    Alex is trolling about Crysis, but it was the first modern renderer, and everything it did is still used today by basically everything.

    The next step after that is ray tracing and computer vision, and in all of that AMD has been just a participant. Arc is a more interesting tech to talk about and discover things about than RDNA, and it will probably stay that way until the Nvidia 4000 series.

    Why would a super nerdy rendering expert of a basically niche tech channel even bother with any AMD product at this moment?
     
  6. Slammy

    Slammy Member Guru

    Messages:
    116
    Likes Received:
    31
    GPU:
    ASUS 7800GTX
    that is EXACTLY what i thought when i heard Intel had hired that fraud. Guy is a total BSer, ran ATi into the ground all the while, for YEARS promising that his next card would beat Nvidia.....HAHAH
    I was hoping Intel had stolen some serious Nvidia people or even new blood from elsewhere but Raja will never deliver
     
    Horus-Anhur likes this.
  7. Horus-Anhur

    Horus-Anhur Ancient Guru

    Messages:
    8,720
    Likes Received:
    10,806
    GPU:
    RX 6800 XT
    AMD had the problem of releasing an arch designed for low-level APIs at a time when DX11 still ruled.
    Makes one wonder whether, if DX12 and Vulkan had been adopted sooner, AMD would now have a bigger market share.

    Indeed. If it wasn't for Rebar on Arc GPUs, this launch would have been an absolute slaughter.

    This is one of those cases where a GPU designed for one purpose can become a liability.
    One of the things that GCN did was increase the size of work waves to 64. Having waves this size has the advantage of reducing transistor count.
    But it also means there is a greater chance of a CU being underutilized. So having async compute was essential.
    On consoles, this was not much of a problem, because the dev kits were tailored to account for stuff like this. But on PC the adoption of async was very slow.
    In the end, AMD decided to go back to work waves of 32 with RDNA.
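
    Just to put numbers on that wave-size point - a back-of-the-envelope sketch in Python (the 96-thread dispatch and the lane_utilization helper are made up for illustration, not anything from a real driver) showing why narrower waves waste fewer SIMD lanes when a workload doesn't divide evenly:

        import math

        def lane_utilization(work_items: int, wave_width: int) -> float:
            # Fraction of allocated SIMD lanes doing useful work when
            # work_items threads are packed into waves of wave_width lanes.
            waves = math.ceil(work_items / wave_width)
            return work_items / (waves * wave_width)

        for width in (64, 32):  # GCN wave64 vs. RDNA wave32
            print(f"wave{width}: {lane_utilization(96, width):.0%} of lanes busy")
        # wave64: 75% of lanes busy (two waves, the second one half empty)
        # wave32: 100% of lanes busy (three full waves)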

    But in that case, can we give credit to nvidia for ray tracing? It's a technique that has been used and developed in offline rendering for several decades.
    What about deep learning? We have been using matrices for deep learning since the 70's. One could argue that nvidia just put into hardware all the research that had been done before.
    Or what about PhysX? It was from another company that nvidia bought. They just ported it to their own GPUs.
    Or what about T&L? nvidia brags a lot about being the first GPU that had this feature, they even paid DF to make a video about it.
    But the reality is that there were graphics chips that had a T&L unit way before the Geforce.
     
  8. ruukjis

    ruukjis Member

    Messages:
    28
    Likes Received:
    8
    GPU:
    RTX 3080 Ti/12GB
    Something does not add up... Like the first-gen i7s were LGA 1156 or LGA 1366, meaning that even the low-end chipsets like H55 had PCI-E, not AGP. And Voodoo? That was looong gone. At that time the hot stuff was the Geforce 7xxx, and that worked fine with an i7 920 (that was my setup at the time).
     
  9. Astyanax

    Astyanax Ancient Guru

    Messages:
    17,035
    Likes Received:
    7,378
    GPU:
    GTX 1080ti
    They aren't talking about iGPUs on CPUs, they are talking about the i740 GPU that Intel contracted out to Lockheed Martin's Real3D division.
     
  10. PrMinisterGR

    PrMinisterGR Ancient Guru

    Messages:
    8,128
    Likes Received:
    971
    GPU:
    Inno3D RTX 3090
    AMD had a good console design, but a bad PC design, especially with how they released their driver. The design paid off because it made sense and everything was dragged to how the consoles were designed. It was great hardware, but we underestimate the input that Microsoft, and especially Sony had into it.
    We are in this situation now, and AMD's market share isn't getting any larger. Most of their cards since Turing came out make no sense. Why would anyone get hardware with fewer features? A lot of people praise Navi, but it was really late, and it was a bad choice compared to Turing, and I would say the same for RDNA 2.0 vs Ampere.

    I cannot see it not being a slaughter in non-popular older API games, ever. I respect that they seem to go with a clean sheet of paper for new APIs and platforms. Depending on their dedication to the project, this will pay dividends the next decade.

    GCN was made for compute from the get go. That wasn't a bug, it was a feature, and they went for exactly what the console market requested. They didn't expect that Nvidia would lean that much into their driver, and optimize it that much.

    Again, RDNA 2.0 is a console chip, but this time the consoles have proper CPUs, so the GPU isn't needed as a crutch as much. This is not random, and most of the time has nothing to do with the PC market, where, if it wasn't for Ryzen and laptop sales, AMD wouldn't even be at 10% right now.
    They obviously design for the consoles first. I'm not even mad.

    The first consumer-level cards with usable ray tracing, alongside the necessary driver and API implementation work, came from Nvidia, so yeah, they do deserve credit for this.

    Theoretical ideas have existed since forever; Nvidia is one of the core reasons we even have AI products today, even at the academic level. They're also the first ones to do computer vision in realtime graphics, same again for APIs etc.


    Physx was from Ageia and not an Nvidia thing. Nvidia harmed it.

    Don't cite the old magic, I was there when it was made. Of course, there were lighting etc accelerators before, but they created the first consumer-level, actually working product and supported it.

    Your arguments sound like that. Because Galileo had a telescope, that doesn't mean that the James Webb is not the culmination of a ton of other inventions.
     

  11. ruukjis

    ruukjis Member

    Messages:
    28
    Likes Received:
    8
    GPU:
    RTX 3080 Ti/12GB
    Oops, my bad - misread. I was reading "Compatible" instead of "Competitive". Sorry.
     

  12. Horus-Anhur

    Horus-Anhur Ancient Guru

    Messages:
    8,720
    Likes Received:
    10,806
    GPU:
    RX 6800 XT
    My argument is that just because certain things existed before, in some similar form, doesn't detract from the credit that AMD and nvidia deserve.
    Things like SAM/Rebar and FreeSync already existed in some form before. But AMD brought them to consumers, implementing them in their drivers and BIOS, and working with monitor makers to also implement them on their end.

    Deep learning was not theoretical, it was real, with built hardware and working AIs. In the 1970s we had a trained AI, with deep learning, that could do basic driving. And trained AIs that could do basic image recognition.
    Of course this was limited by the hardware of the time being many times slower than today's.

    There were graphics cards in the arcades that already had T&L, years before nvidia released the Geforce. Even the N64 had a form of T&L.
    nvidia was not the first to release a consumer product with T&L. At best they were the first to release a graphics card for PC with the feature.
    I too, was there back in the day.

    Yes, nvidia deserves the credit for bringing RT to consumer GPUs. But their work stands on top of previous work.
    Once again, both AMD and nvidia are implementing features that already existed in some form, but adapting them to consumer GPUs. Both deserve the credit for doing that.
     
  13. PrMinisterGR

    PrMinisterGR Ancient Guru

    Messages:
    8,128
    Likes Received:
    971
    GPU:
    Inno3D RTX 3090
    Let's put it differently. With the exception of SAM, please name a feature that AMD has brought to the mainstream, that wasn't a "response" to a similar feature that NVIDIA brought to the mainstream, in more or less the last decade.
     
  14. Robbo9999

    Robbo9999 Ancient Guru

    Messages:
    1,858
    Likes Received:
    442
    GPU:
    RTX 3080
    Wow, seems like Intel really have a lot to learn in terms of creating a viable gaming GPU, or at least a lot to learn about how to optimise it through drivers.
     
  15. KissSh0t

    KissSh0t Ancient Guru

    Messages:
    13,941
    Likes Received:
    7,760
    GPU:
    ASUS 3060 OC 12GB
    Wouldn't that be the low-overhead graphics API via Mantle / later Vulkan? Which has resulted in Microsoft also focusing on creating a similar graphics API with DirectX 12, which benefits Nvidia.
     
    Exodite likes this.
  16. Horus-Anhur

    Horus-Anhur Ancient Guru

    Messages:
    8,720
    Likes Received:
    10,806
    GPU:
    RX 6800 XT
    Low level APIs. This includes Mantle, which became Vulkan.
    A-sync compute
    HBM memory
    Infinity cache
    Chiplets for CPUs and GPUs
    Smartshift
    Partially resident textures (it's like the predecessor of sampler feedback streaming)
    GPU open
    Linux driver support
     
    HandR likes this.

  17. KissSh0t

    KissSh0t Ancient Guru

    Messages:
    13,941
    Likes Received:
    7,760
    GPU:
    ASUS 3060 OC 12GB
    [IMG]
     
  18. Undying

    Undying Ancient Guru

    Messages:
    25,473
    Likes Received:
    12,881
    GPU:
    XFX RX6800XT 16GB
    Let's be honest here, the tier approach to game performance and pricing is really interesting.
     
    PrMinisterGR and fantaskarsef like this.
  19. fantaskarsef

    fantaskarsef Ancient Guru

    Messages:
    15,750
    Likes Received:
    9,641
    GPU:
    4090@H2O
    [IMG]
     
    PrMinisterGR likes this.
  20. PrMinisterGR

    PrMinisterGR Ancient Guru

    Messages:
    8,128
    Likes Received:
    971
    GPU:
    Inno3D RTX 3090
    Response to their horrid DX / OpenGL driver. They could never make GCN perform correctly without a new API.

    Sony, and a response to their hardware being idle due to driver utilization.

    3D-stacked random-access memory (RAM) using through-silicon via (TSV) technology was commercialized by Elpida Memory, which developed the first 8 GB DRAM chip (stacked with four DDR3 SDRAM dies) in September 2009, and released it in June 2011. In 2011, SK Hynix introduced 16 GB DDR3 memory (40 nm class) using TSV technology, Samsung Electronics introduced 3D-stacked 32 GB DDR3 (30 nm class) based on TSV in September, and then Samsung and Micron Technology announced TSV-based Hybrid Memory Cube (HMC) technology in October.

    JEDEC first released the JESD229 standard for Wide IO memory, the predecessor of HBM featuring four 128 bit channels with single data rate clocking, in December 2011 after several years of work. The first HBM standard, JESD235, followed in October 2013.
    AMD used that memory first with Fiji in 2015, and then Nvidia was the first to use HBM 2 just one year after with Tesla. They were both essentially "clients". The interesting part here is that AMD used to have a terrible memory controller on anything pre-Navi, as a lot of people in here know, and HBM was a huge cope for that, and for the terrible power consumption of the chips themselves, in that time at least.

    This is not a consumer-end feature, but it did enable RDNA 2.0 to have great "traditional" performance.

    Again, not a feature for a consumer, just how a product is made. It's not their invention, but they've utilized and developed them greatly and deserve kudos for that.

    This is literally something taken from the PS5, so Sony again.

    Sony again, a core feature of the PS4.

    A huge cope because they will be forever behind in Academia and they never gave enough resources to their GPU department to spread engineers around, as it is a project that would take decades. This is not innovation in any sense, it's just common commercial sense.

    You mean that they finally have a Linux driver that works OK with their products. It's obvious you haven't been around for FGLRX. I love that they were forced to do this, but (again), it's not innovation; hundreds of thousands of companies contribute code for their products to the Linux kernel. AMD had terrible performance and compatibility to the point where their cards couldn't be used with Linux. This was their only way out, and not innovation in any sense.

    I would give you half a point for HBM and (maybe) Smartshift, but even these are not really user-facing. One is a cope for a bad memory controller and power consumption, and the other is probably a copy-paste of their work with Sony.

    Again, feature-wise, I loved Chill, which you don't even mention and I think will be more and more useful as GPU power consumption hits the fan.
     
  21. Horus-Anhur

    Horus-Anhur Ancient Guru

    Messages:
    8,720
    Likes Received:
    10,806
    GPU:
    RX 6800 XT
    Once again, their drivers were good enough in the 5000 and 6000 era. Those cards were very competitive with anything nvidia had. In terms of performance per watt, they were miles ahead of nvidia's Fermi.
    And once again, take a look at the talks many devs were giving at the time about the overhead of high-level APIs like DX11, and the performance loss due to the limited number of draw calls.
    AMD chose to listen to devs, partnered with Sony and MS, and pushed for low-level APIs.
    GCN was made for this new push for low-level APIs.

    This happens on both AMD and nvidia cards. Work waves, or warps, frequently don't fill the full width of the shader units.
    Async compute allows these unused shader units to be much better utilized.
    The fact that Turing and Ampere get good gains from async compute proves that both companies needed this.
    I have a Turing GPU, and from day one I saw games getting a nice boost in performance from this feature, first hand.
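
    To illustrate the scheduling side of that, here is a toy sketch in Python with invented numbers (GPU_CAPACITY, GFX_DEMAND and COMPUTE_WORK are arbitrary figures, not measurements): if a graphics pass only keeps part of the shader array busy, an independent async compute pass can soak up the idle capacity instead of waiting its turn:

        # All numbers are hypothetical "units of shader throughput per frame".
        GPU_CAPACITY = 100   # total shader throughput available
        GFX_DEMAND   = 60    # the graphics pass only keeps 60% of it busy
        COMPUTE_WORK = 80    # independent compute work queued this frame

        # Serial: the compute pass waits until the graphics pass is done.
        serial_frames = 1.0 + COMPUTE_WORK / GPU_CAPACITY             # 1.8 frames

        # Async: compute fills the 40% the graphics pass leaves idle,
        # and only the leftover spills over into extra time.
        absorbed      = min(COMPUTE_WORK, GPU_CAPACITY - GFX_DEMAND)  # 40 units run "for free"
        leftover      = COMPUTE_WORK - absorbed                       # 40 units remain
        async_frames  = 1.0 + leftover / GPU_CAPACITY                 # 1.4 frames

        print(f"serial: {serial_frames:.1f} frames, async: {async_frames:.1f} frames")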

    AMD used that memory first with Fiji in 2015, and then Nvidia was the first to use HBM 2 just one year after with Tesla. They were both essentially "clients". The interesting part here is that AMD used to have a terrible memory controller on anything pre-Navi, as a lot of people in here know, and HBM was a huge cope for that, and for the terrible power consumption of the chips themselves, in that time at least.

    The HBM standard was developed by AMD and JEDEC. And its production was a collaboration between AMD, Samsung, and Hynix.

    Of course it's a consumer feature. It's part of all GPUs in the RDNA2 consumer stack.
    By that logic, delta color compression from nvidia would not be an innovation?

    And why this sudden rule of "consumer features"? That was never a premise in the DF argument, or even in this conversation.
    You just suddenly decided this was one way to try to invalidate a few of AMD's innovations.

    Once again, it's on consumer products, and it benefits consumers.
    You are just making excuses.
    Also consider that Intel and nvidia are already trying to copy AMD's chiplets.

    Smartshift was developed by AMD for its APUs, then implemented in the APU that AMD created for Sony and the PS5.

    Core feature in all GCN cards.
    BTW, did you know that the PS4 and X1 are GCN 1.1?
    But this feature was in GCN 1.0 GPUs for PC.

    By that idea GitHub would be useless. Seriously?
    As if having code shared among devs would be a bad thing.

    Might I remind you what Linus Torvalds said of nvidia?
    He never said that of AMD.

     
    Last edited: Jul 23, 2022
