Article: GPU Compute render performance benchmarked with 20 graphics cards

Discussion in 'Frontpage news' started by Hilbert Hagedoorn, Feb 27, 2020.

  1. Luc

    Luc Active Member

    Messages:
    94
    Likes Received:
    57
    GPU:
    RX 480 | Gt 710
    People talking about this at Indigo Renderer forums, a developer said:

    "Raw flops as a measurement is almost irrelevant. Data has to be fed to the arithmetic units. This has been a major problem for many years and is the limiting factor in most cases."

    https://www.indigorenderer.com/foru...sid=eb3e0a25eab979b5504130a841ed1032&start=45

    It's a vague explanation of the issue, but it could be related to why Radeons are sometimes slower despite having higher raw compute power...
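
    The developer's point is essentially the roofline model: a kernel can only reach a GPU's peak FLOPS if it does enough arithmetic per byte fetched from memory. A minimal sketch, with illustrative (not measured) numbers:

    ```python
    # Back-of-the-envelope roofline check: a kernel only reaches a GPU's peak
    # FLOPS if its arithmetic intensity (FLOPs per byte moved) is high enough.
    # The figures below are illustrative, not measured.

    def attainable_gflops(peak_gflops, bandwidth_gbs, flops_per_byte):
        """Roofline model: the lower of the compute roof and the memory roof."""
        return min(peak_gflops, bandwidth_gbs * flops_per_byte)

    peak = 13400.0   # ~13.4 TFLOPS FP32 (2080 Ti class, illustrative)
    bw = 616.0       # ~616 GB/s memory bandwidth

    # A ray-tracing kernel streaming lots of scene data, say 2 FLOPs per byte:
    print(attainable_gflops(peak, bw, 2.0))   # memory-bound: 1232.0 GFLOPS
    # Arithmetic intensity needed to actually hit the compute roof:
    print(peak / bw)                          # ~21.8 FLOPs per byte
    ```

    With an intensity of 2 FLOPs/byte the card delivers under a tenth of its paper FLOPS, which is why raw TFLOPS comparisons between vendors can mislead.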

    (Edited)
     
    Last edited: Feb 27, 2020
    Kaarme, pharma and jura11 like this.
  2. jura11

    jura11 Ancient Guru

    Messages:
    2,641
    Likes Received:
    705
    GPU:
    RTX 3090 NvLink
    In theory you can pool or share VRAM in OpenCL 2.0 renderers, and some renderers can use out-of-core memory, which is helpful

    The more GPUs you have, the faster render times will be, which is why I'm running 4 GPUs. If my Zotac RTX 2080Ti AMP doesn't sell then I will cannibalise that card into my loop hahaha, hate having something unused

    Have a look: in rendering, adding an extra GPU will speed up the renders for sure
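
    The near-linear scaling Jura describes follows from how path tracers split work: tiles or samples are distributed across devices, so throughput is roughly additive. A rough model with made-up sample rates:

    ```python
    # Rough model of multi-GPU render scaling: path tracers distribute
    # tiles/samples across devices, so throughput is roughly additive.
    # The rates below are made up for illustration.

    def render_time(total_samples, gpu_rates):
        """Seconds to finish if each GPU contributes its samples/sec rate."""
        return total_samples / sum(gpu_rates)

    rates = [100.0, 100.0, 100.0, 100.0]   # four equal GPUs, samples/sec each
    print(render_time(40000, rates[:1]))   # 1 GPU  -> 400.0 s
    print(render_time(40000, rates))       # 4 GPUs -> 100.0 s
    ```

    Real scaling is slightly sub-linear because of scheduling overhead and the final tile "tail", but for long renders it is close to this ideal.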

    Hope this helps

    Thanks, Jura
     
    pharma likes this.
  3. jura11

    jura11 Ancient Guru

    Messages:
    2,641
    Likes Received:
    705
    GPU:
    RTX 3090 NvLink
    This again depends: in some renderers, like LuxRender or AMD ProRender, AMD GPUs are comparable with their similar Nvidia gaming counterparts

    Hope this helps

    Thanks, Jura
     
    Kaarme and Luc like this.
  4. pharma

    pharma Ancient Guru

    Messages:
    2,081
    Likes Received:
    814
    GPU:
    Asus Strix GTX 1080
    Thanks HH! It's a pleasant change seeing a review devoted to professional apps and the GPU benchmark results.

    I would expect some improvement coming with the Ampere node change.
     
    Last edited: Feb 27, 2020
    jura11 likes this.

  5. Kaotik

    Kaotik Member Guru

    Messages:
    161
    Likes Received:
    4
    GPU:
    Radeon RX 6800 XT
    Wow, you actually managed to find an OpenCL application where NVIDIA is competitive, that's surprising.

    Of course not all GPU compute is the same, but not all GPU-compute-accelerated rendering is the same either. Check for example LuxMark (based on LuxRender), where AMD is doing just fine.
     
  6. Spets

    Spets Ancient Guru

    Messages:
    3,340
    Likes Received:
    452
    GPU:
    RTX 3090
    #RTXOn :D
     
  7. Astyanax

    Astyanax Ancient Guru

    Messages:
    14,019
    Likes Received:
    5,649
    GPU:
    GTX 1080ti
    This is incorrect. Nvidia has on multiple occasions offered to work with AMD to run CUDA on AMD hardware, and AMD has an in-house tool for converting CUDA applications.
     
  8. Mufflore

    Mufflore Ancient Guru

    Messages:
    13,693
    Likes Received:
    1,915
    GPU:
    Aorus 3090 Xtreme
    There is no link to this forum thread from the article, fyi.
     
  9. Athlonite

    Athlonite Maha Guru

    Messages:
    1,345
    Likes Received:
    46
    GPU:
    Pulse RX5700 8GB
    Hilbert, if you click the cog on the top right and choose Graphics in the first list, you'll see an Advanced drop-down. Click that, and it's the second choice from the bottom.
     
  10. Nima V

    Nima V Active Member

    Messages:
    58
    Likes Received:
    9
    GPU:
    GTX 760 2GB
    It's not true. Nvidia cards are considerably faster and more efficient in more than 90 percent of mining algorithms. AMD cards are usually used for mining on Ethash, but Ethash performance is not limited by GPU compute performance; it mostly depends on memory bandwidth, which AMD cards are usually good at.
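
    The bandwidth dependence is easy to see from Ethash's structure: each hash performs 64 random 128-byte reads from the DAG, so memory traffic, not arithmetic, caps the hashrate. An idealised ceiling (ignoring latency and overhead):

    ```python
    # Why Ethash is memory-bound: each hash makes 64 random 128-byte DAG
    # reads, so peak hashrate is capped by memory bandwidth, not FLOPS.
    # This is an idealised ceiling; real cards land somewhat below it.

    BYTES_PER_HASH = 64 * 128   # 8192 bytes of DAG traffic per Ethash hash

    def max_hashrate_mhs(bandwidth_gbs):
        """Bandwidth-limited hashrate ceiling in MH/s."""
        return bandwidth_gbs * 1e9 / BYTES_PER_HASH / 1e6

    print(max_hashrate_mhs(256))   # ~31 MH/s ceiling for a 256 GB/s card
    ```

    That ~31 MH/s figure for a 256 GB/s card is close to what Polaris-class Radeons actually achieved, which is why they mined well despite modest FLOPS.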
     
    Last edited: Feb 28, 2020

  11. Hilbert Hagedoorn

    Hilbert Hagedoorn Don Vito Corleone Staff Member

    Messages:
    44,752
    Likes Received:
    11,403
    GPU:
    AMD | NVIDIA
    No Sir, it isn't .... it might have become an architecture-dependent setting though, so I'll look some more with another architecture; this is NAVI.

    [Screenshot attached: 7685.png]
     
    kakiharaFRS and pharma like this.
  12. haste

    haste Ancient Guru

    Messages:
    1,805
    Likes Received:
    744
    GPU:
    GTX 1080 @ 2.1GHz
    Interesting comparison. I miss two things though...

    1. Unity3D - GPU progressive lightmapper comparison. It can use both RadeonRays and Optix for raytracing.
    2. No GTX1080 and no SLI test?
     
  13. Astyanax

    Astyanax Ancient Guru

    Messages:
    14,019
    Likes Received:
    5,649
    GPU:
    GTX 1080ti
    The setting is probably specific to Vega.
     
  14. cpy2

    cpy2 Active Member

    Messages:
    82
    Likes Received:
    30
    GPU:
    ****
    Would've been nice to see some CPUs thrown in so we can compare how much better GPUs are vs CPUs.
     
  15. kakiharaFRS

    kakiharaFRS Master Guru

    Messages:
    890
    Likes Received:
    305
    GPU:
    KFA2 RTX 3090
    I know it's different, but I once transcoded a 3-hour movie from H.264 4K HDR to H.265, first with only the CPU and then with NVENC on a 1080 Ti, and the fps pretty much doubled.
    It would be great to have one "gaming" and one "HEDT" CPU in the charts, just to see the giant trench between GPU and CPU processing.
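
    For anyone wanting to repeat that comparison, a typical NVENC invocation looks like the sketch below. It assumes an ffmpeg build with NVENC support and an NVIDIA GPU; the filename and bitrate are illustrative, and HDR metadata handling may need extra flags depending on the source.

    ```shell
    # Sketch of a GPU H.264 -> H.265 transcode using ffmpeg's NVENC encoder.
    # Requires an ffmpeg build with NVENC enabled and an NVIDIA GPU;
    # the bitrate is illustrative, tune it for your source material.
    ffmpeg -i movie_4k_hdr.mkv \
           -c:v hevc_nvenc -preset slow -b:v 20M \
           -c:a copy \
           movie_4k_hdr_h265.mkv
    ```

    Swapping `-c:v hevc_nvenc` for `-c:v libx265` gives the CPU-only run for the same comparison.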
     

  16. jura11

    jura11 Ancient Guru

    Messages:
    2,641
    Likes Received:
    705
    GPU:
    RTX 3090 NvLink
    Hi there

    Agreed, it would be worthwhile to test CPU vs GPU in rendering, as most of these renderers do offer a CPU-only mode

    I would suspect the 3990X would be very close to the RTX 2080Ti in some renderers, and in Blender I think the 3990X would be slightly faster than the RTX 2080Ti. Then add a GPU to the mix and you have one hell of a render workstation

    Hope this helps

    Thanks, Jura
     
  17. jura11

    jura11 Ancient Guru

    Messages:
    2,641
    Likes Received:
    705
    GPU:
    RTX 3090 NvLink
    Hi there

    Not sure I would compare CPU vs GPU in video editing or video rendering

    I think Linus and a few others did such tests a while back

    Hope this helps

    Thanks, Jura
     
  18. geogan

    geogan Maha Guru

    Messages:
    1,001
    Likes Received:
    287
    GPU:
    3070 AORUS Master
    From those results it appears to me the 2070 Super is the clear winner if you want to render using Blender. It is only 20% slower than the faster 2080 Ti using the OptiX API - and you can get it for about 45% of the price!

    The cheapest 2080 Ti is about £950, the cheapest 2070 Super is £450... the most expensive, fastest 2070 Super is still only £582, FFS!

    Even the closer 2080 Super or 2080 is still only in the £650 area.

    The 2080 Ti should definitely be faster using OptiX - it has 140% of the cores of the 2080 Super but only 112% of the performance. Where did the other 28% go??

    Against the 2070 Super it has 170% of the cores but only 120% of the speed.
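
    The value argument can be made concrete with the prices quoted above and throughput normalised to the 2080 Ti (taking the 2070 Super as roughly 20% slower, i.e. 1/1.2 relative throughput - an assumption based on the numbers in this post):

    ```python
    # Price/performance check using the GBP prices quoted above; throughput
    # is normalised to the 2080 Ti, with the 2070 Super assumed ~20% slower.

    cards = {
        "RTX 2080 Ti":    {"price": 950, "throughput": 1.0},
        "RTX 2070 Super": {"price": 450, "throughput": 1 / 1.2},
    }

    def value_per_k(card):
        """Relative render throughput per £1000 spent."""
        return card["throughput"] / card["price"] * 1000

    for name, card in cards.items():
        print(f"{name}: {value_per_k(card):.2f} throughput per £1000")
    ```

    On these assumptions the 2070 Super delivers roughly 76% more rendering throughput per pound, which is the core of the argument above.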
     
    Last edited: Feb 28, 2020
  19. Mufflore

    Mufflore Ancient Guru

    Messages:
    13,693
    Likes Received:
    1,915
    GPU:
    Aorus 3090 Xtreme
    100% slower would be zero performance.
    Not sure what to make of 120% less performance, does it undo work?
     
  20. geogan

    geogan Maha Guru

    Messages:
    1,001
    Likes Received:
    287
    GPU:
    3070 AORUS Master
    The 2070 Super is at 120% while the 2080 Ti is at 100% relative speed. It's obvious what I was talking about. So maybe it's 20% slower then.
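
    The confusion here is that "X% faster" and "X% slower" are not symmetric. If the numbers being compared are render times (120 vs 100, as in the post above), a quick check shows the two phrasings give different percentages:

    ```python
    # "X% faster" vs "X% slower" with render times of 120 s (2070 Super)
    # and 100 s (2080 Ti), as discussed above. Speed is 1/time, so the
    # two percentages are not mirror images of each other.

    t_2070s, t_2080ti = 120.0, 100.0

    speedup = t_2070s / t_2080ti - 1    # 2080 Ti is 20% faster
    slowdown = 1 - t_2080ti / t_2070s   # 2070 Super is ~16.7% slower
    print(f"2080 Ti: {speedup:.1%} faster; 2070 Super: {slowdown:.1%} slower")
    ```

    So a card taking 20% longer is "20% slower" only in the loose sense; strictly, its throughput is about 16.7% lower.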
     
