Futuremark developer responds to accusations of cheating in Time Spy benchmark.

Discussion in 'Frontpage news' started by mtrai, Jul 16, 2016.

  1. mtrai

    mtrai Maha Guru

    Messages:
    1,183
    Likes Received:
    374
    GPU:
    PowerColor RD Vega
    Just oh F'ing wow is all I can say now. Time Spy benchmarks cannot be compared between AMD and Nvidia cards, if I am reading all this correctly.

    A Futuremark/3DMark developer over on Steam commented and cleared up some issues. His comment is the 2nd post:

    http://steamcommunity.com/app/223850/discussions/0/366298942110944664/

    Dev response:
    "FM_Jarnis [developer] 4 hours ago
    Yes it does.

    http://www.futuremark.com/downloads/3DMark_Technical_Guide.pdf

    It was not tailored for any specific architecture. It overlaps different rendering passes for asynchronous compute, in parallel when possible. Drivers determine how they process these - multiple parallel queues are filled by the engine.

    The reason Maxwell doesn't take a hit is because NVIDIA has explicitly disabled async compute in Maxwell drivers. So no matter how much we pile things into the queues, they cannot be set to run asynchronously because the driver says "no, I can't do that". Basically the NV driver tells Time Spy to go "async off" for the run on that card. If NVIDIA enables Async Compute in the drivers, Time Spy will start using it. Performance gain or loss depends on the hardware & drivers.

    Edit: Quoting 3DMark Technical guide

    Asynchronous Compute
    With DirectX 11, all rendering work is executed in one queue with the driver deciding the order of the tasks.

    With DirectX 12, GPUs that support asynchronous compute can process work from multiple queues in parallel.

    There are three types of queue: 3D, compute, and copy. A 3D queue executes rendering commands and can also handle other work types. A compute queue can handle compute and copy work. A copy queue only accepts copy operations. The queues all race for the same resources so the overall benefit depends on the workload.

    In Time Spy, asynchronous compute is used heavily to overlap rendering passes to maximize GPU utilization. The asynchronous compute workload per frame varies between 10 - 20%.

    To observe the benefit on your own hardware, you can optionally choose to disable async compute using the Custom run settings."
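
    (Not Futuremark's actual code, just a minimal sketch of the DX12 mechanism the dev is describing: the engine creates separate 3D and compute queues and fills both, and it is then up to the driver whether the work actually runs concurrently. Function and variable names here are illustrative only.)

        // Minimal D3D12 sketch: submit a rendering pass and a compute pass to
        // separate queues. The app cannot force parallel execution; a driver
        // that does not support async compute simply serializes the queues.
        #include <d3d12.h>
        #include <wrl/client.h>
        using Microsoft::WRL::ComPtr;

        void SubmitAsyncWork(ID3D12Device* device,
                             ID3D12CommandList* renderList,   // recorded rendering pass
                             ID3D12CommandList* computeList)  // recorded compute pass
        {
            // One DIRECT (3D) queue for rendering, one COMPUTE queue for async work.
            D3D12_COMMAND_QUEUE_DESC gfxDesc = {};
            gfxDesc.Type = D3D12_COMMAND_LIST_TYPE_DIRECT;
            D3D12_COMMAND_QUEUE_DESC computeDesc = {};
            computeDesc.Type = D3D12_COMMAND_LIST_TYPE_COMPUTE;

            ComPtr<ID3D12CommandQueue> gfxQueue, computeQueue;
            device->CreateCommandQueue(&gfxDesc, IID_PPV_ARGS(&gfxQueue));
            device->CreateCommandQueue(&computeDesc, IID_PPV_ARGS(&computeQueue));

            // The engine fills both queues; the driver decides how they are processed.
            ID3D12CommandList* gfxLists[] = { renderList };
            gfxQueue->ExecuteCommandLists(1, gfxLists);

            ID3D12CommandList* computeLists[] = { computeList };
            computeQueue->ExecuteCommandLists(1, computeLists);
        }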
     
    Last edited: Jul 16, 2016
  2. IceVip

    IceVip Master Guru

    Messages:
    918
    Likes Received:
    225
    GPU:
    MSI 4090 Liquid X
    I call BS. They would never ruin their rep like this. Butthurt nerds wanna get attention.
     
  3. mtrai

    mtrai Maha Guru

    Messages:
    1,183
    Likes Received:
    374
    GPU:
    PowerColor RD Vega
    I am mixed on this since FM_Jarnis is a dev at 3dmark. I copied and pasted his direct response here for discussion.
     
  4. SimBy

    SimBy Guest

    Messages:
    189
    Likes Received:
    0
    GPU:
    R9 290
    Is the problem here strictly how 'async compute' is implemented, or that AMD/Nvidia get less/more benefit than they should? We'll be getting more games with 'async compute' support soon, so we'll know if Time Spy results translate into actual games.

    But benefit-wise, my 290 gets exactly a 15% boost with 'async compute' enabled. That's about where I would expect it to be. It also translates to the benefits seen in Doom running SMAA/TSSAA.

    I also expect some 'async compute' fine tuning by AMD for different GCN versions which will likely bring those benefits closer together.

    Right now RX480 only gains like 8%.
     
    Last edited: Jul 17, 2016

  5. nhlkoho

    nhlkoho Guest

    Messages:
    7,754
    Likes Received:
    366
    GPU:
    RTX 2080ti FE
    I really doubt that the "best overclockers in the world", as stated by one of the posters, know more about the DX12 architecture than the Futuremark devs. And if they do, then they should write their own benchmark to prove otherwise.
     
  6. prazola

    prazola Member Guru

    Messages:
    179
    Likes Received:
    20
    GPU:
    R9390XSOC / R9290DCU2OC
    That's not the point. They are paid to develop something neutral, not to be ***** like this. They can't call heavy use of context switching "async" (only because Nvidia has improved it in its latest architecture). This is not parallel execution of threads; it's just a stupid thing to do.
    And I really don't know why 3DMark is considered a standard while there are a lot of tech demos from real game engines that are graphically superior.
     
  7. nhlkoho

    nhlkoho Guest

    Messages:
    7,754
    Likes Received:
    366
    GPU:
    RTX 2080ti FE
    Graphically superior has nothing to do with how well the benchmark tests your hardware.
     
  8. mtrai

    mtrai Maha Guru

    Messages:
    1,183
    Likes Received:
    374
    GPU:
    PowerColor RD Vega
    My beef is that 3DMark did not state this little fact about how the test actually differs between AMD and Nvidia, not that AMD or Nvidia does something better or worse.

    We have trusted 3DMark for many, many years. Right now, as I see it, you cannot actually use Time Spy as a true DX12/async comparison between the two companies until this is all sorted out.

    I mean, for a comparison benchmark to be valid the test always has to be identical for everyone, and as it stands right now it is not the same exact test.
     
  9. SimBy

    SimBy Guest

    Messages:
    189
    Likes Received:
    0
    GPU:
    R9 290
    Got it. Agree. Another issue worth pointing out is that Nvidia disables 'async compute' at the driver level on anything older than Pascal. This prevents performance from tanking, which should also invalidate the score, just like it does when you limit the tessellation factor at the driver level for AMD. Quite simple really: if you disable 'async compute' in the Time Spy benchmark itself, it also invalidates the score.
     
    Last edited: Jul 17, 2016
  10. mtrai

    mtrai Maha Guru

    Messages:
    1,183
    Likes Received:
    374
    GPU:
    PowerColor RD Vega
    I so agree, and you and others get what I was getting at. For me this is not AMD vs Nvidia (though it might be, we will never know) but points directly at our trusted, go-to benchmark company. This raises the question of how trustworthy any comparison using 3DMark has been. It also makes me question why they were not honest and upfront that it really is a different test between AMD and Nvidia and should not be used to compare them, almost as if they were pushing the Time Spy upgrade for the money rather than for truly reliable benchmark comparisons.
     

  11. PhazeDelta1

    PhazeDelta1 Guest

    Messages:
    15,608
    Likes Received:
    14
    GPU:
    EVGA 1080 FTW
    I'm a firm believer in giving people the benefit of the doubt. But damn. I really hope this isn't true.
     
  12. mtrai

    mtrai Maha Guru

    Messages:
    1,183
    Likes Received:
    374
    GPU:
    PowerColor RD Vega
    Please, please, please do not turn this thread into AMD vs Nvidia cards and whatever they are or are not actually able to do. I posted this because of how widely the benchmark is used for comparisons, not to bash either GPU manufacturer.
     
  13. prazola

    prazola Member Guru

    Messages:
    179
    Likes Received:
    20
    GPU:
    R9390XSOC / R9290DCU2OC
    If I see more polygons, more dynamic lights and objects, better physics, better textures, with almost the same fps, for me that's better utilization of the HW.
    They used Nvidia PhysX in the past, now this scam about async... that's really not what I expect from reliable benchmarking software.
     
  14. PrMinisterGR

    PrMinisterGR Ancient Guru

    Messages:
    8,129
    Likes Received:
    971
    GPU:
    Inno3D RTX 3090
    What he said is that they ask the driver to do it; whether it gets done depends on the driver. It's NVIDIA's problem, not Futuremark's.
     
  15. Agonist

    Agonist Ancient Guru

    Messages:
    4,287
    Likes Received:
    1,316
    GPU:
    XFX 7900xtx Black
    [IMG]

    But this is how I kinda feel about it myself.

    It's a very interesting point.
     

  16. PrMinisterGR

    PrMinisterGR Ancient Guru

    Messages:
    8,129
    Likes Received:
    971
    GPU:
    Inno3D RTX 3090
    It's not the same at all. The tessellation setting changes the visuals of whatever is shown; disabling async is an internal issue of the NVIDIA pipeline that has no effect on visual quality. These are two very different things.
     
  17. Redemption80

    Redemption80 Guest

    Messages:
    18,491
    Likes Received:
    267
    GPU:
    GALAX 970/ASUS 970
    It is nearly 3am and I got woken up by a crying baby, but is this not a non-story?

    Was it not nearly a year ago that Oxide confirmed that Nvidia disabled async compute in the drivers to avoid the performance hit?
    Why is it a big story now that a Futuremark dev has said the same thing?
     
  18. SimBy

    SimBy Guest

    Messages:
    189
    Likes Received:
    0
    GPU:
    R9 290
    It has no effect on visual quality, correct, but it prevents performance from tanking and is not an apples-to-apples comparison.

    Try disabling 'async compute' in the Time Spy benchmark itself: invalid score.
     
  19. Redemption80

    Redemption80 Guest

    Messages:
    18,491
    Likes Received:
    267
    GPU:
    GALAX 970/ASUS 970
    There is no way to get round that.

    What you're essentially saying is that AMD and Nvidia hardware should never be compared now?
     
  20. PrMinisterGR

    PrMinisterGR Ancient Guru

    Messages:
    8,129
    Likes Received:
    971
    GPU:
    Inno3D RTX 3090
    It is a complete non-story.

    It is apples to apples since they both output the same stuff on the screen. NVIDIA DOES PAY the performance penalty for not supporting async on anything before Pascal, so what's the issue exactly?

    Look, both output the SAME things, alright? Async is something that the application asks the driver to perform. If the driver can't do it right, you pay the performance penalty. It's like saying we can't compare architectures because GCN is bad at tessellation and can't show more than X number of triangles. It's an internal bottleneck and it hurts them in the benchmark.
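
    (A follow-on sketch, again illustrative rather than anything from the benchmark itself: a GPU-side fence makes the 3D queue wait for the compute queue's results, so the frame that reaches the screen is identical whether or not the driver actually overlapped the two queues - only the time taken differs. Assumes the two queues from the sketch earlier in the thread; names are hypothetical.)

        // Synchronize the compute queue with the 3D queue via an ID3D12Fence.
        #include <d3d12.h>
        #include <wrl/client.h>
        using Microsoft::WRL::ComPtr;

        void SyncComputeWithGraphics(ID3D12Device* device,
                                     ID3D12CommandQueue* computeQueue,
                                     ID3D12CommandQueue* gfxQueue)
        {
            ComPtr<ID3D12Fence> fence;
            device->CreateFence(0, D3D12_FENCE_FLAG_NONE, IID_PPV_ARGS(&fence));

            const UINT64 computeDone = 1;

            // The compute queue signals the fence when its work is finished...
            computeQueue->Signal(fence.Get(), computeDone);

            // ...and the 3D queue waits (on the GPU) for that signal, so rendering
            // always consumes the same compute results regardless of overlap.
            gfxQueue->Wait(fence.Get(), computeDone);
        }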
     
