Article: BaseMark GPUScore Relic Of Life benchmarks with 22 GPUs

Discussion in 'Frontpage news' started by Hilbert Hagedoorn, Mar 8, 2022.

  1. Alessio1989

    Alessio1989 Ancient Guru

    Messages:
    2,952
    Likes Received:
    1,244
    GPU:
    .
They are not. The web is full of papers and presentations about how you can optimize for a specific architecture (both CPUs and GPUs, but especially the latter) without using proprietary tools, just the common platform SDK (D3D12 without driver extensions, in this case). And with APIs like Vulkan and Direct3D 12 you can optimize even more than with OpenGL and previous versions of Direct3D, even without driver extensions.
     
  2. Horus-Anhur

    Horus-Anhur Ancient Guru

    Messages:
    8,730
    Likes Received:
    10,817
    GPU:
    RX 6800 XT
    Could be.
    Or someone very poorly informed.
     
  3. Odarnok

    Odarnok Active Member

    Messages:
    98
    Likes Received:
    42
    GPU:
    32 GB DDR3 2133 MHz
Please, could you post the differences in the code here?

    The Vulkan and DX12 instructions are the same for Nvidia and AMD; the enormous difference in performance is caused by hardware and drivers.

    You are claiming the benchmark's creators lie in their FAQ:
    https://powerboard4.basemark.com/faq
    The GPUScore benchmark adheres strictly to APIs, and allows consumers to make objective comparisons between devices with different operating systems, including Android, iOS, Windows, Linux and macOS.
     
    Last edited: Mar 12, 2022
  4. Yxskaft

    Yxskaft Maha Guru

    Messages:
    1,495
    Likes Received:
    124
    GPU:
    GTX Titan Sli
I'm not expecting Nvidia to age much better due to their better ray tracing performance. The reason being that the reference hardware for most of the coming years will be the new consoles, and Nvidia's advantages will probably amount to one or two settings in their sponsored titles.

    Similar to how Nvidia hammered AMD in tessellation benchmarks and Nvidia GameWorks titles back in the day, yet eventually tessellation was barely mentioned in platform and Nvidia vs. AMD comparisons.

    By far the main win for Nvidia since the release of the RTX cards, though, has been DLSS. Having DLSS makes the RTX 2060 a lot better in these games than it would be on its main competitor, the 5700 XT. The big disadvantage is that you still cannot count on having DLSS from day one in every game.
     

  5. Alessio1989

    Alessio1989 Ancient Guru

    Messages:
    2,952
    Likes Received:
    1,244
    GPU:
    .
Have you ever programmed anything that isn't a script or in a managed programming language? It's not about instructions (in fact there are no Vulkan or D3D12 "instructions" at all, only API calls and HLSL/SPIR-V bytecode that feed the runtime and then the driver...). It's about what the compiled code tells the driver: there are different architectures, which means different implementations with different efficiency for MANY things: using one surface or buffer format instead of another, placing a sync point (e.g. a barrier) at one point in the code flow instead of another, relying on a specific queue (which maps onto the hardware scheduler) for a workload instead of another, setting up a thread group one way or another... It's all about optimizing closer to the hardware. The closer you get to the hardware, the less the driver/firmware can optimize, and vice versa. And with Vulkan this is even more true, since the API supports official proprietary extensions by design, which means you can use a specific hardware/driver proprietary feature without writing invalid code (from the API specification's point of view).
     
    Last edited: Mar 13, 2022
    Venix likes this.
  6. Odarnok

    Odarnok Active Member

    Messages:
    98
    Likes Received:
    42
    GPU:
    32 GB DDR3 2133 MHz
The devs are direct and clear: https://powerboard4.basemark.com/faq
    Please either post the benchmark code here or stay silent.
    Your post is a shameful cry.
     
    Last edited: Mar 13, 2022
  7. user1

    user1 Ancient Guru

    Messages:
    2,782
    Likes Received:
    1,305
    GPU:
    Mi25/IGP
Benchmarks that do not optimize per vendor while using DX12 or Vulkan defeat the purpose of the newer APIs, which is to expose more of the hardware to the developer to allow greater optimization. By "strictly adhering" they mean "no monkey business", as was the case with some older benchmarks that would intentionally misuse the API to bias performance against certain manufacturers. That doesn't mean they don't do AMD- or Nvidia-specific optimizations.

    As for AMD's performance, there is no hidden GPU magic; AMD's implementation is simpler and uses less die space. They opted to use that increased die-space efficiency to reduce cost rather than adding more CUs. The RX 6900 XT has about 1.5 billion fewer transistors than the 3090, manages to maintain similar raster performance, and keeps enough ray tracing performance to be useful (about 60% of the 3090 in this benchmark). The gap will be smaller the lighter the RT load is. Since AMD implements RT in each CU, the overall design efficiency is better, because less silicon is left unused. I.e., with Nvidia, if a game uses little or no ray tracing, all of the tensor cores and RT units do a big fat nothing, whereas on AMD all the CUs are still being utilized.

    You might think "1.5 billion transistors isn't that much" considering both designs are over 25 billion transistors, but for perspective that's basically equivalent to an entire HD 6870 GPU, and these chips are clocked almost 3x higher. Not only that, but chip yields get exponentially worse the larger the die is, so it's quite significant.
     
  8. Odarnok

    Odarnok Active Member

    Messages:
    98
    Likes Received:
    42
    GPU:
    32 GB DDR3 2133 MHz
Sorry, but you are saying something contrary to the post I cited. If the code is different on every platform, because it is heavily optimized for each one, then the claims against the benchmark results are just baseless complaints.

    If you say it is only optimized for Nvidia, then you must prove it with the benchmark's code.

    The FAQ says the benchmark is strictly platform agnostic and is made to fairly compare all of them.

    And the games' benchmarks show the same results as this benchmark when RT is actually used.
     
  9. Alessio1989

    Alessio1989 Ancient Guru

    Messages:
    2,952
    Likes Received:
    1,244
    GPU:
    .
Why should I have the procreating source code? The matter is simple: using ANY modern 3D graphics API (note this is also true for older APIs), you can favour one architecture or another based on your coding choices, and you do NOT need proprietary 3rd-party IHV libraries to do that. And you can do that in a gazillion ways.

    Two classic cases:

Case 1: shader programming, wave optimizations; just a simple short sample (please note this was valid for 2017 uArchs; recent uArchs from both AMD and NVIDIA may benefit from different sizes)

[attached image: waves.PNG]

Depending on the uArch running that small piece of code, you may get different performance scaling based on the HLSL thread setup. And even from the same vendor, you may get different scaling across uArchs (e.g. RDNA2 vs GCN... I don't remember about RDNA1).

Case 2: the root signature (a basic core concept of the Direct3D 12 API; this is from a 2019 presentation, but it should not have changed much even on the latest uArchs)

[attached image: Cattura.PNG]

Also confirmed by NVIDIA's basic recommendations on D3D12 programming (https://developer.nvidia.com/dx12-dos-and-donts#roots , first point). Please note this is not about the ray tracing root signatures (global and local) but about the "common" root signature of the API, nor about D3D12 ray tracing root signatures being slightly different from the Vulkan version.

Anyway, just by messing around with these two things and preferring only one approach in both cases, you can scale well on one uArch and badly on another, even though both choices are legitimate from the API's point of view. So how do you optimize for all uArchs?
    1) in a "lowest common denominator" way, which may lead to bad performance on multiple uArchs (and even then you have to make choices)
    2) separate paths for multiple uArchs
    3) a mix of the first two

Though path 3 looks like the most legitimate option, the best trade-off against over-optimization (also with future uArchs in mind), and the most obvious choice, what happens most of the time is that 3 becomes a crippled version of 2, where the optimizations depend on the team's past knowledge (e.g. DXR code had almost a couple of years with only one vendor's hardware to test on, or, in earlier years, the less strict resource binding on GCN) and IHV partnerships weigh more than good common sense.

Now let's come to Vulkan: Vulkan has built-in extension support. Extensions are valid and legitimate code, but that doesn't mean they run on any hardware. As was the case with OpenGL, this accentuates how you can optimize for specific uArchs purely through your coding choices.
     
    Last edited: Mar 14, 2022
    Venix likes this.
  10. Odarnok

    Odarnok Active Member

    Messages:
    98
    Likes Received:
    42
    GPU:
    32 GB DDR3 2133 MHz
Because if you don't have the source code (or a reverse-engineering analysis), you don't have any proof for your claims. It is simple.

    Now please stop ranting and change your mind.
     
    Last edited: Mar 14, 2022

  11. Astyanax

    Astyanax Ancient Guru

    Messages:
    17,037
    Likes Received:
    7,378
    GPU:
    GTX 1080ti
This is a fair assessment.

    D3D12 is not a one-size-fits-all situation.

    However, a benchmark that is implemented strictly to Microsoft's guidelines can be assumed to be fair across all vendors - it won't be optimized towards any specific vendor, so only hardware and driver efficiency remain as arguing factors.
     
    user1 and Odarnok like this.
  12. user1

    user1 Ancient Guru

    Messages:
    2,782
    Likes Received:
    1,305
    GPU:
    Mi25/IGP
I don't think the benchmark is biased. I was clarifying that using an entirely generic code path would defeat the purpose of the benchmark. I do think it uses some minimal optimization for both vendors; the observed performance is pretty much what you would expect for a ray-tracing-heavy workload. If there were no optimizations at all, the performance would likely be even worse for AMD.


    edit: Actually, looking at the numbers again, it seems the Vulkan path is unoptimized, as its performance closely parallels the performance gap of RTX Quake (within 5%) with Vulkan RT, whereas the DX12 path is quite a bit better for AMD, relatively speaking. I don't think there is anything special about Vulkan that could explain such a difference in relative performance.
     
    Last edited: Mar 14, 2022
  13. Alessio1989

    Alessio1989 Ancient Guru

    Messages:
    2,952
    Likes Received:
    1,244
    GPU:
    .
    you don't know what you are talking about. it's simple.
     
  14. alanm

    alanm Ancient Guru

    Messages:
    12,270
    Likes Received:
    4,472
    GPU:
    RTX 4080
I think it would be hard for any "monkey business" to occur in any benchmark without it being discovered by one of the vendors (AMD or Nvidia). They would then quickly alert the tech media, YouTube channels, etc., for a nice juicy story.
     
  15. user1

    user1 Ancient Guru

    Messages:
    2,782
    Likes Received:
    1,305
    GPU:
    Mi25/IGP
Well, maybe, but I feel as though AMD or Nvidia wouldn't care that much about a benchmark unless it was as popular as, say, 3DMark. They are much more concerned with game performance; "this game runs like garbage on vendor X" carries a lot more weight than "this benchmark runs like garbage on vendor X".
     
    alanm likes this.

  16. Odarnok

    Odarnok Active Member

    Messages:
    98
    Likes Received:
    42
    GPU:
    32 GB DDR3 2133 MHz
    Stop your rants.
     
  17. Horus-Anhur

    Horus-Anhur Ancient Guru

    Messages:
    8,730
    Likes Received:
    10,817
    GPU:
    RX 6800 XT
This is true, up to a certain point.
    Several times, versions of DirectX were made directly from the specs of one vendor, although these were mostly increments within a version of the API.
    A few quick examples:
    DX8.1 was based on ATI's specs.
    DX9.0c was based on nVidia's.
    DX10.1 was based on ATI's.
    NGGP in DX12_2 was based on nVidia's.
    And there is a good chance that nVidia, being first to market, influenced a lot of the RT instructions in DX12.
    But then again, with the Xbox Series S/X using AMD's hardware, there is a good chance that it influenced RT in DX12 as well.
    In fact, AMD had a big influence on DX12 and Vulkan, with their push for low-level APIs.

    People of a certain age will surely remember that some of these things were quite the objects of contention in those days.
     
