Article: GPU Compute render performance benchmarked with 20 graphics cards

Discussion in 'Frontpage news' started by Hilbert Hagedoorn, Feb 27, 2020.

  1. Hilbert Hagedoorn

    Hilbert Hagedoorn Don Vito Corleone Staff Member

    Messages:
    48,325
    Likes Received:
    18,408
    GPU:
    AMD | NVIDIA
    primetime^, pharma, Caesar and 2 others like this.
  2. barbacot

    barbacot Master Guru

    Messages:
    996
    Likes Received:
    980
    GPU:
    MSI 4090 SuprimX
    I disagree partially... - one thing is very simple: leaving aside price, ray-tracing support in the industry, etc., the RTX 2080 Ti is still the king of the hill right now - will Big Navi challenge that? We'll see.
     
    Last edited: Feb 27, 2020
  3. Hilbert Hagedoorn

    Hilbert Hagedoorn Don Vito Corleone Staff Member

    Messages:
    48,325
    Likes Received:
    18,408
    GPU:
    AMD | NVIDIA
    I did mean that from a broad perspective. V-Ray only supports CUDA, not OpenCL. So if you planned to run 3ds Max with V-Ray and have a Radeon graphics card, you simply can't use it. Blender, in turn, offers CUDA for GeForce GTX and OptiX for RTX, but not OpenCL for either... however, OpenCL is the path to use in Blender for AMD Radeon cards.

    Regardless of it all, anyone is going to select the fastest API available to them.
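
    For anyone who wants to see what that selection looks like in practice, here is a minimal sketch using Blender's Python API (assuming a 2.8x-era build; the preference property names can differ between Blender versions):

    Code:
    # Minimal sketch (run inside Blender): pick the Cycles compute backend.
    # CUDA/OptiX apply to NVIDIA cards, OpenCL to AMD Radeon in Blender 2.8x.
    import bpy

    prefs = bpy.context.preferences.addons["cycles"].preferences
    prefs.compute_device_type = "OPTIX"   # "CUDA" for GTX cards, "OPENCL" for Radeon
    prefs.get_devices()                   # refresh the detected device list

    for device in prefs.devices:
        device.use = True                 # enable every device of the chosen type

    bpy.context.scene.cycles.device = "GPU"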
     
    pharma likes this.
  4. barbacot

    barbacot Master Guru

    Messages:
    996
    Likes Received:
    980
    GPU:
    MSI 4090 SuprimX
    From my point of view, it is really a shame that AMD does not adopt CUDA. Yes, Nvidia “owns” and controls the future of CUDA, so it's not open in the “open source” sense, but it's certainly free. AMD could develop CUDA-enabled drivers whenever they want, and given the widespread adoption of this technology in high-performance computing, it would be a gain for everybody. At work we only use Nvidia cards because we use CUDA-optimized software in our research, so the choice (if any) is simple. AMD should consider that there is a profit to be made here as well, even if it is not their proprietary technology.
     

  5. Pepehl

    Pepehl Member

    Messages:
    33
    Likes Received:
    13
    GPU:
    RTX 4080
    Great article and a very important topic, thank you! GPU rendering is a game changer. Instead of investing in the best CPU, you can buy a GPU and add more of them later, rather than building a whole new workstation.

    I miss the RTX 2070 (non-Super) in the review... would you add it, so the list will be complete?

    A small comment on the best value for money being the RTX 2060 Super: the RTX 2070 Super has NVLink enabled (unlike the non-Super and the 2060), which allows you to share/double memory. This is very important for rendering more complex scenes...
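
    To illustrate the memory-sharing point, here is a rough Python sketch of checking whether two cards can address each other's memory peer-to-peer (over NVLink or PCIe). It uses PyTorch purely as a convenient CUDA wrapper - the renderers do this internally - so treat it as an illustration only:

    Code:
    # Rough sketch: check CUDA peer-to-peer access between GPU 0 and GPU 1.
    import torch

    count = torch.cuda.device_count()
    for i in range(count):
        print(f"GPU {i}: {torch.cuda.get_device_name(i)}")

    if count >= 2 and torch.cuda.can_device_access_peer(0, 1):
        print("GPU 0 can access GPU 1's memory (P2P available, e.g. over NVLink)")
    else:
        print("No peer access - each GPU is limited to its own VRAM")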
     
  6. Hilbert Hagedoorn

    Hilbert Hagedoorn Don Vito Corleone Staff Member

    Messages:
    48,325
    Likes Received:
    18,408
    GPU:
    AMD | NVIDIA
    A very valid point, yes - thanks for bringing that to my attention. If I can find some time, I'll add a regular RTX 2070 as well.
     
  7. jura11

    jura11 Guest

    Messages:
    2,640
    Likes Received:
    707
    GPU:
    RTX 3090 NvLink
    Hi @Hilbert Hagedoorn

    Great article again mate and thanks for doing that

    I would also like to see other render engines included, like LuxRender, AMD ProRender, Arnold Renderer and Redshift, and the new kid on the block, FStorm.

    Indigo I'm not sure about - I have never used it and probably never will, because from what I remember it's slow.

    Whether GPU rendering is a game changer is hard to say. For many people it is, and it offers faster render times - OptiX can literally halve render times - but this again depends on the GPUs used and the optimization of the scene. In my case I do high-poly scenes, and VRAM usage can be an issue there: no matter which renderer I use, whether it's Blender Cycles or E-Cycles, Octane, Iray, Poser SuperFly (which is based on Blender Cycles) or AMD ProRender, I still run into VRAM issues, which you don't see as much with CPU-based renderers like Corona. Maybe that's why many movies and VFX companies still use Arnold Renderer, which is the industry standard.

    In many cases I would like to have GPUs with at least 32 GB of VRAM, which would help.
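
    As a side note, here is a small Python sketch of how one could check per-GPU VRAM headroom before committing to a render, using the pynvml bindings to NVIDIA's NVML (only the standard NVML calls are assumed):

    Code:
    # Sketch: report used/total VRAM for every NVIDIA GPU in the system.
    from pynvml import (nvmlInit, nvmlShutdown, nvmlDeviceGetCount,
                        nvmlDeviceGetHandleByIndex, nvmlDeviceGetMemoryInfo)

    nvmlInit()
    try:
        for i in range(nvmlDeviceGetCount()):
            mem = nvmlDeviceGetMemoryInfo(nvmlDeviceGetHandleByIndex(i))
            print(f"GPU {i}: {mem.used / 2**30:.1f} GiB used / "
                  f"{mem.total / 2**30:.1f} GiB total")
    finally:
        nvmlShutdown()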

    Choosing the right renderer depends on more factors too. For archviz, Corona and V-Ray are the gold standards, although Blender Cycles is close - but as always, everything depends on textures, modelling skills, etc. I have tried Octane several times for archviz and never liked the renders; they never looked as good as Corona, V-Ray or FStorm.

    This memory sharing is only available with some renderers, like Octane, Redshift and V-Ray - I haven't tried it with Blender and other renderers.

    Hope this helps

    Thanks, Jura
     
    Luc likes this.
  8. Kaarme

    Kaarme Ancient Guru

    Messages:
    3,511
    Likes Received:
    2,353
    GPU:
    Nvidia 4070 FE
    Interesting that AMD cards are really good for mining crypto, but not for boosting rendering speed. Not all GPU compute is the same, clearly.
     
    Luc likes this.
  9. nizzen

    nizzen Ancient Guru

    Messages:
    2,414
    Likes Received:
    1,149
    GPU:
    3x3090/3060ti/2080t
    Finally #RTXOn :D
     
  10. nizzen

    nizzen Ancient Guru

    Messages:
    2,414
    Likes Received:
    1,149
    GPU:
    3x3090/3060ti/2080t
    Are you sure NVLink works like it does on Quadro, with shared CUDA cores and memory, and isn't just normal SLI (with higher bandwidth) on the 2070 Super, 2080 and 2080 Ti?
     

  11. Pepehl

    Pepehl Member

    Messages:
    33
    Likes Received:
    13
    GPU:
    RTX 4080
    As far as I know, NVLink on "gaming" RTX cards has lower bandwidth than on Quadro, but it does share memory between two cards (Quadro can share across up to 3 cards, if I remember correctly).

    Taken from Chaosgroup article:
    "...new RTX cards also support NVLink, which gives V-Ray GPU the ability to share the memory between two GPUs..."

    According to their tests it had some impact on rendering speed (a bit slower with NVLink than without), but it enabled them to render some scenes that needed more memory.

    More info is here: https://www.chaosgroup.com/blog/profiling-the-nvidia-rtx-cards
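
    If you want to verify locally that the NVLink bridge is actually active, something along these lines should work - a rough Python sketch, assuming your pynvml build exposes the NVLink queries (nvmlDeviceGetNvLinkState):

    Code:
    # Rough sketch: count active NVLink links on GPU 0 via NVML.
    from pynvml import (nvmlInit, nvmlShutdown, nvmlDeviceGetHandleByIndex,
                        nvmlDeviceGetNvLinkState, NVML_NVLINK_MAX_LINKS, NVMLError)

    nvmlInit()
    try:
        handle = nvmlDeviceGetHandleByIndex(0)
        active = 0
        for link in range(NVML_NVLINK_MAX_LINKS):
            try:
                if nvmlDeviceGetNvLinkState(handle, link):
                    active += 1
            except NVMLError:
                break  # link index not present/supported on this GPU
        print(f"GPU 0: {active} active NVLink link(s)")
    finally:
        nvmlShutdown()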
     
  12. schmidtbag

    schmidtbag Ancient Guru

    Messages:
    7,955
    Likes Received:
    4,336
    GPU:
    HIS R9 290
    AMD wouldn't really benefit a whole lot from supporting CUDA. For one thing, Nvidia designed CUDA for their architecture. It's fine-tuned to a point AMD doesn't have a chance of competing with (AMD is struggling enough with DX, OpenGL, Vulkan, and OpenCL drivers as it is). Except for the few cases where people at home have an AMD GPU and want to run a CUDA-based application, I'm sure AMD will always be the worse choice when it comes to CUDA, simply because it will never be as refined or purpose-built. AMD would just be making themselves look worse by supporting it.
    Also, any research team or corporation that opted for CUDA for in-house software deserves to be trapped in Nvidia's ecosystem. You aren't forced to use CUDA; OpenCL and Vulkan/SPIR-V are options on Nvidia too. CUDA isn't inherently better; it's just the more appealing choice because it's easier to develop in, thanks to Nvidia's abundant and actually helpful resources.

    Also, there are translation layers to run CUDA code on non-CUDA hardware. There is some additional overhead, but like I said before, you're not going to outperform Nvidia on CUDA code anyway.

    EDIT:
    Believe me, the newest Nvidia GPU I have is Kepler-based, and there is software I've wanted to use that depends on CUDA. But I don't think AMD should be responsible for making CUDA drivers. If developers really want their software to be widely adopted, they shouldn't use CUDA. If developers want flexibility, they shouldn't use CUDA.
     
    sykozis likes this.
  13. nizzen

    nizzen Ancient Guru

    Messages:
    2,414
    Likes Received:
    1,149
    GPU:
    3x3090/3060ti/2080t
    Nice! This is great news! Thanx for the link :)

    Too bad there is no "sharing" for NVLink SLI in games, i.e. two cards working as one unit: double the CUDA cores and double the VRAM.
     
  14. mbk1969

    mbk1969 Ancient Guru

    Messages:
    15,505
    Likes Received:
    13,526
    GPU:
    GF RTX 4070
    @Hilbert Hagedoorn

    A typo (I guess): the abbreviation marked with bold font should be "API".
     
    Caesar likes this.
  15. kakiharaFRS

    kakiharaFRS Master Guru

    Messages:
    986
    Likes Received:
    369
    GPU:
    KFA2 RTX 3090
    I guess that, like in (some) gaming, GPUs need faster clock speeds for I/O between the GPU and CPU, or something...
     

  16. Caesar

    Caesar Ancient Guru

    Messages:
    1,556
    Likes Received:
    680
    GPU:
    RTX 4070 Gaming X
    I also get :mad: when someone writes "internet" without a capital "I", or "hardware"/"software" with an "s" in the plural.

    The problem, I think ;), is with me, as I learnt a lot during my Oracle database module (~20 years ago)... how to use syllables when creating a 'table'... and so on.

    :):):):):):):):):)
     
  17. Gomez Addams

    Gomez Addams Master Guru

    Messages:
    251
    Likes Received:
    161
    GPU:
    RTX 3090
    The reason AMD has not adopted CUDA is that they can't. As you wrote, Nvidia owns it, and no one else is allowed to adopt it because, as of right now, APIs can be copyrighted and CUDA's API is. There is a court case in progress between Oracle and Google that will decide the future of this situation. If Google wins and APIs are not allowed to be copyrighted, then nothing can stop AMD from adopting CUDA other than a willingness to do so. For all I know, AMD could have their driver team working on CUDA support right now in hopes that Google will win. FWIW, the vast majority of industry opinions submitted to the court so far support Google.

    One more thing - I can understand why Nvidia would not want to license CUDA. They have a dominant position in data center GPUs, and those are very high margin - the V100 costs around $9K. Opening CUDA up for use by others could cut into that position.
     
  18. xg-ei8ht

    xg-ei8ht Ancient Guru

    Messages:
    1,820
    Likes Received:
    32
    GPU:
    1gb 6870 Sapphire
    Hi Hilbert.

    Just thought I'd ask.

    Does it make a difference if COMPUTE is selected in the AMD drivers?
     
    Last edited: Feb 27, 2020
  19. Kaarme

    Kaarme Ancient Guru

    Messages:
    3,511
    Likes Received:
    2,353
    GPU:
    Nvidia 4070 FE
    Strangely enough, Nvidia has no problem using HBM in those expensive V100 cards, despite HBM being a project AMD launched years ago. Now Nvidia even allows GeForce gamers to tap into the vast pool of FreeSync screens, which only used to exist because of AMD, though now there are generic adaptive-sync screens as well. By the same token, it would make sense for Nvidia to allow AMD to put the CUDA API to use. It's like Jensen can find one sleeve of his Leather Jacket® but not the other.
     
    Luc likes this.
  20. Hilbert Hagedoorn

    Hilbert Hagedoorn Don Vito Corleone Staff Member

    Messages:
    48,325
    Likes Received:
    18,408
    GPU:
    AMD | NVIDIA
    Unless I am completely overlooking it (and please do correct me if I am wrong), that setting is no longer present in the 2020 drivers.
     
    Luc likes this.
