OpenCL and Vulkan benchmarks were run on Arc A770 and A750 GPUs.

Discussion in 'Frontpage news' started by Hilbert Hagedoorn, Oct 2, 2022.

  1. Hilbert Hagedoorn

    Hilbert Hagedoorn Don Vito Corleone Staff Member

    Messages:
    48,325
    Likes Received:
    18,407
    GPU:
    AMD | NVIDIA
  2. schmidtbag

    schmidtbag Ancient Guru

    Messages:
    7,955
    Likes Received:
    4,336
    GPU:
    HIS R9 290
    These results I find more interesting than most, because there's a lot less optimization to do for compute workloads, which means these results more closely represent the true potential of these GPUs.
     
  3. Kaarme

    Kaarme Ancient Guru

    Messages:
    3,511
    Likes Received:
    2,353
    GPU:
    Nvidia 4070 FE
    Aren't AMD's drivers always considered the reason for AMD underperforming in these kinds of workloads? I haven't yet heard anyone praising the Intel graphics card drivers, so it's anyone's guess how optimised they are across the whole spectrum of things a GPU can be put through.
     
  4. Crazy Joe

    Crazy Joe Master Guru

    Messages:
    282
    Likes Received:
    124
    GPU:
    RTX 3090/24GB
    Since OpenCL is basically dead at this point, as Intel seems to be the only one still actively developing OpenCL drivers (both AMD and NVIDIA have put theirs in maintenance mode), those results aren't really that interesting. The Vulkan results are much more interesting, since that's an API everybody is actively developing and updating their drivers for.
     

  5. Astyanax

    Astyanax Ancient Guru

    Messages:
    16,998
    Likes Received:
    7,340
    GPU:
    GTX 1080ti
    Funny, Nvidia and AMD both have OpenCL 3.0 drivers.
     
  6. Crazy Joe

    Crazy Joe Master Guru

    Messages:
    282
    Likes Received:
    124
    GPU:
    RTX 3090/24GB
    Yes, but what you have to know is that when OpenCL 2.x was released, NVIDIA didn't support any of the new features in that version. Then the Khronos Group (the people who manage the standard, and a lot of others too) released OpenCL 3.x, which is not much different from 2.x, except that all the 2.x features are now optional. So NVIDIA still only supports the 1.x feature set, but can call its drivers 3.x because of this.

    AMD, on the other hand, has dropped support for OpenCL on their CPUs completely, which was sort of the point of OpenCL (write compute code once and run it on CPU, GPU and any other OpenCL-compatible device). AMD GPUs are supported for OpenCL 2.x (and thus also 3.x), but internally AMD has switched to ROCm and HIP, which is sort of their version of NVIDIA's CUDA. HIP is structured almost identically to CUDA, and there is even a cross-compilation tool that allows HIP code to be compiled for NVIDIA cards (by turning it into a form that can be fed to NVCC, NVIDIA's CUDA compiler). So there is no interest at AMD in developing OpenCL further.

    That is why I said these companies have put their OpenCL drivers in maintenance mode: no new features are being developed, but bugs get fixed and any changes required to keep them working on current OSes are still made.
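
    To make the "HIP is structured almost identically to CUDA" point concrete, here is a minimal, hypothetical SAXPY sketch written against the HIP runtime API (not taken from any of the benchmarks above). Apart from the hip* prefixes, it is essentially the same source you would write with CUDA (cudaMalloc, cudaMemcpy, the same <<<blocks, threads>>> launch), which is why CUDA code can be translated to HIP almost mechanically and, as mentioned above, handed to NVCC for NVIDIA cards:

```cpp
// Hypothetical illustration: a trivial SAXPY written with HIP. Swap the hip*
// calls for their cuda* counterparts and this is a plain CUDA program.
#include <hip/hip_runtime.h>
#include <vector>
#include <cstdio>

__global__ void saxpy(int n, float a, const float* x, float* y) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;   // same built-ins as CUDA
    if (i < n) y[i] = a * x[i] + y[i];
}

int main() {
    const int n = 1 << 20;
    std::vector<float> hx(n, 1.0f), hy(n, 2.0f);

    float *dx = nullptr, *dy = nullptr;
    hipMalloc((void**)&dx, n * sizeof(float));        // CUDA: cudaMalloc
    hipMalloc((void**)&dy, n * sizeof(float));
    hipMemcpy(dx, hx.data(), n * sizeof(float), hipMemcpyHostToDevice);
    hipMemcpy(dy, hy.data(), n * sizeof(float), hipMemcpyHostToDevice);

    saxpy<<<(n + 255) / 256, 256>>>(n, 2.0f, dx, dy); // same launch syntax as CUDA
    hipDeviceSynchronize();                           // CUDA: cudaDeviceSynchronize

    hipMemcpy(hy.data(), dy, n * sizeof(float), hipMemcpyDeviceToHost);
    printf("y[0] = %f\n", hy[0]);                     // expect 4.0

    hipFree(dx);
    hipFree(dy);
    return 0;
}
```

    Building this with hipcc targets AMD GPUs; on an NVIDIA system, hipcc passes the same source through to NVCC.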
     
    Caesar likes this.
  7. schmidtbag

    schmidtbag Ancient Guru

    Messages:
    7,955
    Likes Received:
    4,336
    GPU:
    HIS R9 290
    Not necessarily. AMD has actually been very competitive in OpenCL pretty much since TeraScale 2, and they did practically nothing to optimize for it until maybe a year ago. The reason they're rarely chosen is that they lack CUDA, which not only limits what they can do, but CUDA is also just so much better. Nvidia really optimized CUDA for their platform, they wrote most of the libraries themselves, and their documentation is excellent. CUDA is also significantly easier to implement. CUDA is so much better that many open-source developers use it despite the fact it requires closed-source binaries to work.

    In the past 3 years or so, AMD has realized how much money they've been losing out on in the server GPU market. As a result, they've been working on ROCm and HIP. They're basically playing catch up with Nvidia at this point. The good news is, they actually have a chance to catch up - AMD's hardware is fine, they already have their toes in the GPU server market, and compute is a lot easier to optimize for than games. If AMD plays their cards right, I believe they can compete with Nvidia faster than Intel can compete in the gaming market. Key word is "can" though - if they truly want to succeed, they will need to adopt CUDA. That's the only way they can convince people to switch. Even if their performance is worse, being compatible is more important.

    Intel never really cared that much about compute because their GPUs were too weak to be worthwhile. Their OpenCL performance was fine for what the chips were, but still not noteworthy. They're now in the same camp as AMD, and it seems like Intel is making an effort to improve non-CUDA compute.
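
    To illustrate the point above about CUDA being significantly easier to implement: below is a hypothetical sketch of what the host side of a single trivial kernel launch looks like in plain OpenCL (error handling omitted, not code from any of the benchmarks in the article). The CUDA equivalent is a handful of lines, because the kernel is compiled offline by nvcc and the runtime API hides the platform/context/queue/program-build ceremony.

```cpp
// Hypothetical OpenCL host code for the same SAXPY kernel as above.
// Error checking is omitted to keep the boilerplate visible but short.
#define CL_TARGET_OPENCL_VERSION 120
#include <CL/cl.h>
#include <vector>
#include <cstdio>

static const char* kSrc = R"(
__kernel void saxpy(const int n, const float a,
                    __global const float* x, __global float* y) {
    int i = get_global_id(0);
    if (i < n) y[i] = a * x[i] + y[i];
})";

int main() {
    const int n = 1 << 20;
    std::vector<float> hx(n, 1.0f), hy(n, 2.0f);

    // Pick a platform and a GPU device, then build the execution machinery.
    cl_platform_id platform; cl_device_id device;
    clGetPlatformIDs(1, &platform, nullptr);
    clGetDeviceIDs(platform, CL_DEVICE_TYPE_GPU, 1, &device, nullptr);
    cl_context ctx = clCreateContext(nullptr, 1, &device, nullptr, nullptr, nullptr);
    cl_command_queue q = clCreateCommandQueue(ctx, device, 0, nullptr);

    // The kernel source is compiled at runtime, per device.
    cl_program prog = clCreateProgramWithSource(ctx, 1, &kSrc, nullptr, nullptr);
    clBuildProgram(prog, 1, &device, nullptr, nullptr, nullptr);
    cl_kernel k = clCreateKernel(prog, "saxpy", nullptr);

    cl_mem dx = clCreateBuffer(ctx, CL_MEM_READ_ONLY | CL_MEM_COPY_HOST_PTR,
                               n * sizeof(float), hx.data(), nullptr);
    cl_mem dy = clCreateBuffer(ctx, CL_MEM_READ_WRITE | CL_MEM_COPY_HOST_PTR,
                               n * sizeof(float), hy.data(), nullptr);

    // Arguments are bound one by one instead of being passed at the call site.
    float a = 2.0f;
    clSetKernelArg(k, 0, sizeof(int), &n);
    clSetKernelArg(k, 1, sizeof(float), &a);
    clSetKernelArg(k, 2, sizeof(cl_mem), &dx);
    clSetKernelArg(k, 3, sizeof(cl_mem), &dy);

    size_t global = n;
    clEnqueueNDRangeKernel(q, k, 1, nullptr, &global, nullptr, 0, nullptr, nullptr);
    clEnqueueReadBuffer(q, dy, CL_TRUE, 0, n * sizeof(float), hy.data(), 0, nullptr, nullptr);
    printf("y[0] = %f\n", hy[0]);   // expect 4.0

    clReleaseMemObject(dx); clReleaseMemObject(dy);
    clReleaseKernel(k); clReleaseProgram(prog);
    clReleaseCommandQueue(q); clReleaseContext(ctx);
    return 0;
}
```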
     
  8. Kaarme

    Kaarme Ancient Guru

    Messages:
    3,511
    Likes Received:
    2,353
    GPU:
    Nvidia 4070 FE
    @schmidtbag AMD server cards are found in some of the most powerful supercomputers in the world, so obviously they can't suffer from poor optimisation. I was just referring to the consumer cards and what was going on in this particular article.
     
  9. schmidtbag

    schmidtbag Ancient Guru

    Messages:
    7,955
    Likes Received:
    4,336
    GPU:
    HIS R9 290
    Right, but my point was that with compute there isn't much to optimize, regardless of which market you're talking about. In a lot of the consumer space where Nvidia pulls ahead, it's either because they're using CUDA or some other technology (like OptiX), but when it comes to an apples-to-apples comparison (where both are using the same API), there is no clear winner. Sometimes, even for the same program but a different workload, one brand will do better than the other. As far as I understand, the differences come down to the hardware rather than the drivers.
     
  10. Kaarme

    Kaarme Ancient Guru

    Messages:
    3,511
    Likes Received:
    2,353
    GPU:
    Nvidia 4070 FE
    Can a 3060 Ti beat a 6800 in raw power in Vulkan and barely lose in OpenCL? That seems unlikely. That's why I was reminded of indications I've seen in the past that AMD cards don't pull their weight in these applications due to poor driver support.
     

  11. schmidtbag

    schmidtbag Ancient Guru

    Messages:
    7,955
    Likes Received:
    4,336
    GPU:
    HIS R9 290
    AMD has some catching up to do with Vulkan, but like I said, performance varies drastically with OpenCL. A single benchmark, even a real-world use case, is inconclusive. Again, even the same program with different workloads will yield different winners, but performance can also vary substantially between performance tiers or pro-grade models. I'm sure you're thinking "yeah, obviously a lower-end GPU will perform worse", but perhaps not in the way you're thinking. Half-precision (FP16) and double-precision (FP64) throughput can vary across different performance tiers, or in some cases a GPU could lack support for them entirely (even for workstation models). In some cases, a mid-tier workstation GPU could run laps around a flagship desktop GPU in workloads heavy with FP64, while in gaming it would be the complete opposite. That's actually where Titan cards became a problem, because they had much better FP64 than the other GeForce cards but cost a small fraction of what Quadros did.

    Here's where things get a bit interesting today:
    If you go with an RX 6000 GPU, you get a 2:1 ratio for FP16 and 1:16 for FP64. With CDNA2 it peaks at 8:1 for FP16 and 1:1 for FP64, though some of the lower-end models only offer 2:1 for FP16 and 1:2 for FP64.
    If you go with an RTX 3000, you get 1:1 for FP16 and a miserable 1:64 for FP64. If you go with the A100 40GB, you get 4:1 for FP16 and 1:2 for FP64.

    Here's what you can take away from that:
    1. Nvidia really doesn't want you using desktop GPUs for workstation tasks.
    2. Since AMD and Nvidia trade blows in various OpenCL workloads, the GeForce's crap FP16 and FP64 performance implies their FP32 design is better.
    3. If you're on a budget and don't need any of Nvidia's technologies, AMD is the obvious choice.
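
    For anyone who wants to turn those ratios into rough numbers, here is a tiny hypothetical sketch (the FP32 peak figures are made-up placeholders, not spec-sheet or benchmark data); it just multiplies a card's FP32 rate by the ratios quoted above:

```cpp
// Back-of-the-envelope sketch: effective throughput = FP32 peak * ratio vs FP32.
// The FP32 peaks below are placeholders for illustration only.
#include <cstdio>

double effective_tflops(double fp32_tflops, double ratio_vs_fp32) {
    return fp32_tflops * ratio_vs_fp32;
}

int main() {
    const double geforce_fp32 = 30.0;   // hypothetical RTX 3000-class FP32 peak
    const double radeon_fp32  = 20.0;   // hypothetical RX 6000-class FP32 peak

    // Ratios as quoted in the post: GeForce FP64 = 1:64, RX 6000 FP64 = 1:16,
    // GeForce FP16 = 1:1, RX 6000 FP16 = 2:1.
    printf("GeForce FP64 ~ %.2f TFLOPS\n", effective_tflops(geforce_fp32, 1.0 / 64));
    printf("Radeon  FP64 ~ %.2f TFLOPS\n", effective_tflops(radeon_fp32, 1.0 / 16));
    printf("GeForce FP16 ~ %.2f TFLOPS\n", effective_tflops(geforce_fp32, 1.0));
    printf("Radeon  FP16 ~ %.2f TFLOPS\n", effective_tflops(radeon_fp32, 2.0));
    return 0;
}
```

    With those placeholder inputs, the GeForce-style card ends up at about 0.47 TFLOPS of FP64 against roughly 1.25 TFLOPS for the Radeon-style card, despite its higher FP32 peak, which is the point of takeaway 1 above.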
     
    Last edited: Oct 3, 2022
    Kaarme likes this.
  12. Bender82

    Bender82 Active Member

    Messages:
    53
    Likes Received:
    26
    GPU:
    AMD fury
    I'd like to see Intel Arc OC'd... I think the Intel Arc A770 will perform better than a 3070 card with OC.
     
  13. Imglidinhere

    Imglidinhere Guest

    Messages:
    274
    Likes Received:
    22
    GPU:
    12GB RX 6700XT
    I'm not so certain these results can be trusted. If this is to be taken as their "true potential", then the 3060 should trade blows with the 6700 XT at the highest level of optimization, and it doesn't on any level.
     
  14. Caesar

    Caesar Ancient Guru

    Messages:
    1,556
    Likes Received:
    680
    GPU:
    RTX 4070 Gaming X
    Still no DX numbers?
     
  15. Venix

    Venix Ancient Guru

    Messages:
    3,428
    Likes Received:
    1,939
    GPU:
    Rtx 4070 super
    Hehe, those cards are a bit of a wildcard with how new their drivers are. They might end up famous for Intel "fine wine"... oooor Intel abandons them early, or never improves the drivers enough, and they get the "aged milk" rep... We will see eventually!
     

  16. XenthorX

    XenthorX Ancient Guru

    Messages:
    5,017
    Likes Received:
    3,385
    GPU:
    MSI 4090 Suprim X
    Catching up on the Intel news regarding the A770, these seem like really interesting products!
    Already competing with the RTX 3060 at $100 less, that's fantastic.
     
