Futuremark developer responds to accusations of cheating in Time Spy benchmark.

Discussion in 'Frontpage news' started by mtrai, Jul 16, 2016.

  1. Agonist

    Agonist Ancient Guru

    Messages:
    4,020
    Likes Received:
    1,130
    GPU:
    Dell 6800XT 16GB
    I view it as a driver cheat for Nvidia, just like what ATI used to do with anisotropic filtering in 3DMark years ago at the driver level.

    It's just personal opinion, but I just don't trust Nvidia anymore.

    It's not biased or fanboy-based on my part. Just how I feel about it.


    Personally I feel async should be on, period. I'd bet my R9 280 would destroy a GTX 670 or GTX 760, and even beat a GTX 960.
    Wish I still had my GTX 970 and Fury X to test this with.

    I don't get async support in ROTR, and that's a downer.
     
  2. SimBy

    SimBy Member Guru

    Messages:
    189
    Likes Received:
    0
    GPU:
    R9 290
    You have to realize that 'async compute' is an integral part of the TimeSpy benchmark. Messing with it in any way means it's no longer an apples-to-apples comparison.

    Benchmarks should expose hardware limitations, not try to hide them.

    You can still compare 'async compute' 'capable' hardware from Nvidia and AMD. But of course that limits Nvidia to Pascal.

    But this is only part of the issue this thread was started over, and actually the smaller part.
     
    Last edited: Jul 17, 2016
  3. PrMinisterGR

    PrMinisterGR Ancient Guru

    Messages:
    8,086
    Likes Received:
    929
    GPU:
    Inno3D RTX 3090
    That had visual quality differences, especially from certain angles. It's not the same thing.

    Isn't this done by showing that nothing from NVIDIA except Pascal gains anything from Async?
     
  4. Redemption80

    Redemption80 Ancient Guru

    Messages:
    18,495
    Likes Received:
    267
    GPU:
    GALAX 970/ASUS 970
    It's up to the hardware maker to fix those hardware limitations and that is what they have done by disabling it at a driver level.

    What you're essentially asking is for Nvidia to release a driver that makes performance worse for their own customers, just so AMD customers can look at the gap between their hardware and competing Nvidia hardware and feel better.

    Not being allowed to fix hardware issues is clutching at straws.

    No, they want Maxwell users to have a performance penalty when it's enabled, and this would translate to games as well.
    A tad selfish if you ask me lol.
     
    Last edited: Jul 17, 2016

  5. SimBy

    SimBy Member Guru

    Messages:
    189
    Likes Received:
    0
    GPU:
    R9 290
    Yes, you could say that, but the issue of comparison remains. In a true apples-to-apples comparison, anything older than Pascal wouldn't just fail to gain anything; it would tank massively.
     
    Last edited: Jul 17, 2016
  6. Undying

    Undying Ancient Guru

    Messages:
    20,742
    Likes Received:
    9,069
    GPU:
    RTX 3070 OC
    So, Maxwell is incapable of performing async compute and Nvidia disabled it on a driver level? Wow.

    Can they even do that?
     
  7. Redemption80

    Redemption80 Ancient Guru

    Messages:
    18,495
    Likes Received:
    267
    GPU:
    GALAX 970/ASUS 970
    So yes, you are asking that Nvidia release a driver that intentionally causes a performance hit for their customers on Maxwell/Kepler.

    It will not benefit you or other AMD owners, it will not improve your score or the performance in any of your games, it doesn't change image quality.
    It just makes things worse for certain Nvidia owners.

    Come on, PC gaming is problematic enough as it is, and now AMD/Nvidia aren't allowed to fix hardware flaws?

    If it wasn't capable, then they wouldn't need to disable it at a driver level :wanker:

    My interpretation of the issue is that their implementation causes a performance hit, and the short-term fix from last year was to simply disable it while they worked on a proper fix. There's still no fix, and I don't believe there will be.

    Of course they can do that, it's their own hardware lol.
     
    Last edited: Jul 17, 2016
  8. HeavyHemi

    HeavyHemi Ancient Guru

    Messages:
    6,952
    Likes Received:
    960
    GPU:
    GTX1080Ti
    The "benchmarks" whatever than means, don't show a 'massive tanking'. Extraordinary claims require extraordinary evidence. How about showing us multiple examples of async compute benchmarks demonstrating your claims.
     
  9. Undying

    Undying Ancient Guru

    Messages:
    20,742
    Likes Received:
    9,069
    GPU:
    RTX 3070 OC
    You know what all this Maxwell async fixing reminds me of? That ever-pending Fermi DX12 driver. We all know how that turned out.
     
  10. Denial

    Denial Ancient Guru

    Messages:
    13,993
    Likes Received:
    3,769
    GPU:
    EVGA RTX 3080
    No one knows, as it's been disabled in the driver since launch. PellyNV tweeted about it way back when the Ashes beta launched. Probably because it doesn't work and has issues.
     

  11. Redemption80

    Redemption80 Ancient Guru

    Messages:
    18,495
    Likes Received:
    267
    GPU:
    GALAX 970/ASUS 970
    Possibly; making promises even though they had no idea whether they could keep them.

    As PrMinisterGR mentioned, it's a non story and not even a new story.

    Could you imagine if the rules meant any sort of fix was disallowed?
    GPUs would have to be benched with launch-day drivers only, games and benchmarks couldn't be patched at all, etc.
     
    Last edited: Jul 17, 2016
  12. SimBy

    SimBy Member Guru

    Messages:
    189
    Likes Received:
    0
    GPU:
    R9 290
    I am absolutely not saying that. What I am saying is that the scores should be invalidated. You realize the score is invalidated if I manually disable 'async compute' in the TimeSpy benchmark? How is that any different from disabling it in the driver?

    Those are the claims of the AoTS developers, and pure common sense. If performance wouldn't tank massively, why disable it in the driver in the first place?

    But again, this thread was not started because of this; that's just an issue I raised. What's more puzzling is how 'async compute' 'lite' is implemented in TimeSpy.
     
  13. Redemption80

    Redemption80 Ancient Guru

    Messages:
    18,495
    Likes Received:
    267
    GPU:
    GALAX 970/ASUS 970
    Why? It's not the user tampering with the settings to get a better score.
    It's the GPU manufacturer disabling a feature that does not work properly.

    The AOTS devs were even the first to claim that certain Nvidia drivers were mistakenly reporting fully functioning async compute and that the drivers needed fixing.
    Disabling a fix to artificially make AMD cards look better would be dodgy; even AMD themselves would not go along with that.

    Have you even seen the benchmarks? How is it 'async compute lite' when AMD cards gain more in this benchmark with it enabled than in any other game?
    Even the AMD-sponsored Hitman only got 5-10% according to the devs.
     
    Last edited: Jul 17, 2016
  14. Denial

    Denial Ancient Guru

    Messages:
    13,993
    Likes Received:
    3,769
    GPU:
    EVGA RTX 3080
    What is Async compute lite?

    They utilize parallel work queues, like every other async implementation, because that's literally what it is. When the GPU has idle time, it will pull work from a queue into its pipeline to minimize idleness. And it's not like Time Spy is short on compute code:

    http://images.anandtech.com/doci/10486/TimeSpyStats_575px.png

    Also, the apples-to-apples thing you posted before is nonsense. AMD and Nvidia do a number of things differently from a driver perspective, so it was never apples to apples in software to begin with. It's like comparing a Ford Mustang to a Chevy Camaro but saying the Camaro is invalid because Chevy didn't build something identical to the Mustang. The whole point is that the architectures are different. As long as the image quality is identical, the method they use to get there is irrelevant.
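    The "pull queued work into idle time" idea above can be sketched with a toy scheduling model. This is emphatically not GPU or driver code; it's a hypothetical simulation where a graphics queue has busy and idle slices, and async either lets compute jobs fill the idle gaps or serializes them after the graphics work:

```python
# Hypothetical toy model of async compute scheduling -- not real GPU/API
# code, just an illustration of filling idle pipeline time with work
# pulled from a second queue.

def frame_time(graphics, compute, async_enabled):
    """graphics: list of (busy, idle) slices in ms; compute: list of job costs in ms."""
    if not async_enabled:
        # Serial: every compute job waits until all graphics work is done.
        return sum(busy + idle for busy, idle in graphics) + sum(compute)
    pending = list(compute)
    elapsed = 0
    for busy, idle in graphics:
        elapsed += busy + idle
        gap = idle
        # Async: pull pending compute jobs into the idle gap while they fit.
        while pending and pending[0] <= gap:
            gap -= pending.pop(0)
    # Whatever compute work didn't fit still runs at the end.
    return elapsed + sum(pending)

# Same workload either way; async lets the idle gaps absorb the compute jobs.
print(frame_time([(3, 2), (3, 2)], [2, 2], async_enabled=False))  # 14
print(frame_time([(3, 2), (3, 2)], [2, 2], async_enabled=True))   # 10
```

    The gain depends entirely on how much idle time the workload leaves and how well the jobs fit into it, which is consistent with different benchmarks and games reporting different async uplifts.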
     
  15. Redemption80

    Redemption80 Ancient Guru

    Messages:
    18,495
    Likes Received:
    267
    GPU:
    GALAX 970/ASUS 970
    Exactly. I saw that the Fury X gets a 13% performance increase with async compute enabled, over double what the brand-new Nvidia card gets.

    AMD Sponsored Titles
    AOTS - FuryX gets around 10% from Async compute.
    Hitman - 5-10%

    How is 13% from 3DMark "lite" or in any way biased towards Nvidia?

    I feel silly asking that after finding out those numbers.
     
    Last edited: Jul 17, 2016

  16. Spartan

    Spartan Master Guru

    Messages:
    676
    Likes Received:
    2
    GPU:
    R9 290 PCS+
    That is a good point.
     
  17. Redemption80

    Redemption80 Ancient Guru

    Messages:
    18,495
    Likes Received:
    267
    GPU:
    GALAX 970/ASUS 970
    It's not a good point, and his suggestion would lead to this benchmark having no Maxwell/Kepler scores.

    What would be the point of that?
     
  18. Spartan

    Spartan Master Guru

    Messages:
    676
    Likes Received:
    2
    GPU:
    R9 290 PCS+
    So Nvidia can do whatever they want, but I have to use async to get a valid score? That doesn't make any sense to me.
     
  19. Redemption80

    Redemption80 Ancient Guru

    Messages:
    18,495
    Likes Received:
    267
    GPU:
    GALAX 970/ASUS 970
    Yes, Nvidia and AMD make the hardware, so it's their call when it comes to deciding what needs fixing, not the user's.

    Maybe start a petition to get AMD to disable async compute in their drivers if it means that much to you.
     
  20. GPU

    GPU Guest

    Some posters here drink too much NV Kool-Aid.
    As long as NV wins, anything NV does gets a thumbs up.
     