Review: Hitman 2016: PC graphics performance benchmarks

Discussion in 'Frontpage news' started by Hilbert Hagedoorn, Mar 16, 2016.

  1. Ieldra

    Ieldra Banned

    Messages:
    3,490
    Likes Received:
    0
    GPU:
    GTX 980Ti G1 1500/8000
That's interesting, although it's flawed. Each core can do 2 floating-point operations per clock:

3072 × 2 = 6144 FLOPs per clock (the Titan X's 3072 shaders). Multiply by 1 GHz and you get 6144 GFLOP/s.

Problem is, the average 980 Ti clocks at ~1380 MHz out of the box; doing the math (2816 shaders × 2 × 1.38 GHz), that's 7,772 GFLOP/s.

A Titan X at 1400 MHz is 8.6 TFLOP/s.
My 980 Ti at 1510 MHz is 8.5 TFLOP/s.
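For anyone who wants to sanity-check these numbers, here's the whole calculation as a quick back-of-the-envelope sketch (the shader counts are the cards' published specs; `gflops` is just a throwaway helper name, not anything from a real API):

```python
# Theoretical FP32 throughput: shaders x 2 FLOPs/clock x clock speed.
def gflops(shaders, clock_mhz):
    return shaders * 2 * clock_mhz / 1000  # GFLOP/s

print(gflops(3072, 1000))  # Titan X @ 1000 MHz -> 6144.0
print(gflops(2816, 1380))  # 980 Ti @ ~1380 MHz -> ~7772
print(gflops(3072, 1400))  # Titan X @ 1400 MHz -> ~8602 (~8.6 TFLOP/s)
print(gflops(2816, 1510))  # 980 Ti @ 1510 MHz -> ~8504 (~8.5 TFLOP/s)
```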
     
    Last edited: Mar 16, 2016
  2. Ryu5uzaku

    Ryu5uzaku Ancient Guru

    Messages:
    7,547
    Likes Received:
    608
    GPU:
    6800 XT
And a Fury X with a decent OC is 9.2 TFLOP/s. A 980 goes to about 6.1 and a 390X to about 6.7.

Gains from overclocking are bigger on NVIDIA. All things considered, it doesn't change the playing field that much, but it evens things out for sure.
     
  3. Fox2232

    Fox2232 Guest

    Messages:
    11,808
    Likes Received:
    3,371
    GPU:
    6900XT+AW@240Hz
NVIDIA has fully working async compute; they just didn't put the functionality into the driver package...
     
  4. kegastaMmer

    kegastaMmer Guest

    Messages:
    326
    Likes Received:
    40
    GPU:
    strix 1070 GTX
How come no one ever discusses the gains from tessellation and async in the most popular genres, i.e. strategy, space sims, or FPS, or debates how to optimize those games' mechanics? Instead, every new generation, hundreds flock to "this is good, that is baaadd, u are wrong, i am right," and everything is uncertain. Geez, just calm down and stop defending your purchases. Be happy with them; don't worry, we all age, some faster and some slower.

Instead of being offensive, let's try to simply state what we feel, backed up with facts, and just carry on. There's no need to worry this much. All is not going to be over if we lose a few FPS; we gain sometimes and we lose sometimes, so there's no reason to lose your temper or get irritated, I say. Keep calm and carry on.
     

  5. Lane

    Lane Guest

    Messages:
    6,361
    Likes Received:
    3
    GPU:
    2x HD7970 - EK Waterblock
... Let's hope they fix the bugs in the DX12 version fast enough.
     
    Last edited: Mar 17, 2016
  6. semantics

    semantics Member

    Messages:
    41
    Likes Received:
    4
    GPU:
    N/A
The 2900 XT isn't from an ice age; it melted all the ice.

Also, that FPS lock thing only affects AMD cards, doesn't it? Wasn't that an issue with DX12 in ARK? It would internally render more frames but only output 60 Hz to the monitor.
     
  7. Syranetic

    Syranetic Master Guru

    Messages:
    618
    Likes Received:
    145
    GPU:
    Zotac RTX 4080
What I meant to say was: Vulkan might see wider adoption if you face the same challenges adopting either it or DX12 (i.e., starting from scratch under either API).
     
  8. r2daizzzo

    r2daizzzo Guest

    Messages:
    2
    Likes Received:
    0
    GPU:
    two r9 390
Hey, is anyone else having problems running this game in CrossFire? Because I can't get it to work.
     
  9. PrMinisterGR

    PrMinisterGR Ancient Guru

    Messages:
    8,125
    Likes Received:
    969
    GPU:
    Inno3D RTX 3090
Vulkan is making the same mistakes as OpenGL. Instead of providing a stable API, they allow third-party extensions, and NVIDIA has already started publishing a couple. This is not very good, as it will probably lead to more hardware fragmentation and more work for whoever ports games to Vulkan. Same story as OpenGL.

The whole point of the new APIs was twofold.

First, you leave the DX11 drivers and that whole labyrinthine, crufty, opaque mess behind. Good engine coders had reached the point where they couldn't properly profile the drivers, because there were 20 ways to do the same thing, each of them with different bottlenecks. That, along with support for the lower-level APIs in the free engines, made this transition possible.

Second, you finally get console-like CPU efficiency and control. That will make games that weren't possible before, possible. A lot of people shat on Assassin's Creed Unity with very good justification, but the only real problem of that game was its technical ambition. Games like that, or even more ambitious, are going to be possible now.

Furthermore, with things like preemption/async compute, deep-pipeline GPU designs can finally be fully utilized.

    That's a good catch, but it's not only compute that matters. NVIDIA cards are very good with graphics operations.

It's not a lock; it's a bad implementation of flipping. They said they will fix it.
     
  10. Ieldra

    Ieldra Banned

    Messages:
    3,490
    Likes Received:
    0
    GPU:
    GTX 980Ti G1 1500/8000
How does it not change the playing field? This was supposed to justify the performance in Hitman. Considering overclocks, the 980 Ti and Titan X jump to second place after the Fury X, with a difference of less than 5% for the Titan X and under 10% for the Ti (talking about an overclocked Fury X here, @ 1125 MHz).

980 @ 1500 = 6144 GFLOP/s
390X @ 1175 = ~6600 GFLOP/s
970 @ 1500 = 4992 GFLOP/s
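Same back-of-the-envelope formula as before, for anyone checking (a sketch only; the shader counts are the published specs: 2048 for the 980, 2816 for the 390X, 1664 for the 970):

```python
# Theoretical FP32 throughput: shaders x 2 FLOPs/clock x clock speed.
def gflops(shaders, clock_mhz):
    return shaders * 2 * clock_mhz / 1000  # GFLOP/s

print(gflops(2048, 1500))  # 980 @ 1500 MHz -> 6144.0
print(gflops(2816, 1175))  # 390X @ 1175 MHz -> ~6618
print(gflops(1664, 1500))  # 970 @ 1500 MHz -> 4992.0
```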

They shouldn't be performing this badly on NVIDIA hardware, even if the game is compute limited. Combined with the DRM issues, I'm feeling quite bitter about this game. I really love Hitman, but this is some shameful, unjustifiable ****.
     
    Last edited: Mar 17, 2016

  11. CronoGraal

    CronoGraal Ancient Guru

    Messages:
    4,194
    Likes Received:
    20
    GPU:
    XFX 6900XT Merc 319
The 290 (not even the 290X) ****ting on the GTX 980... interesting turn of events.
     
  12. Arend.C

    Arend.C Guest

    Messages:
    195
    Likes Received:
    3
    GPU:
    MSI GTX 970 Gaming
How come a GTX 950 is faster than a GTX 770? I don't get it.
     
  13. Denial

    Denial Ancient Guru

    Messages:
    14,206
    Likes Received:
    4,118
    GPU:
    EVGA RTX 3080
I agree with this. The problem is you won't see most of it until you leave DX11 behind completely, which they aren't quite doing yet. Oxide did an interview where they mention they have a bunch of really good rendering systems they want to use, but the systems wouldn't work correctly with DX11, so they didn't use them for Ashes.

And by the time games are fully developed under DX12 from the start, with the kinds of things Oxide is talking about, there will be a completely new generation of graphics cards/engines/etc., and there will be no DX11/12 comparisons anymore. So all the performance gains and control just get obscured among graphics enhancements and gains from other technologies.

Like right now we can see a game like ROTR going from DX11 to 12 and we say "there are no performance benefits, DX12 is immature," or whatever people are saying. Or we can see Ashes DX11 vs. 12 and say "big boost to AMD, async better utilizes GCN's pipeline, DX12 so strong." But a year from now, when Ashes 2 comes out and it's DX12 only, utilizing a cool new rendering backend that's only possible on DX12, you will have no DX11 comparison, because it's DX12 only. So all those additional gains from moving to DX12 are essentially lost in terms of being able to see them in a benchmark. How much did that DX12-only renderer help? I have no idea; I have nothing to compare it to.

The same thing happened with DX10/11. Microsoft hyped them up, and the initial A/B releases were essentially the same. People cried rivers and said DX10/11 did nothing for gaming, that they were just excuses to sell new versions of Windows, etc. But I can almost guarantee that both led to significant performance increases in various titles years later. You just have no DX9/10 variants to compare them to.
     
  14. PrMinisterGR

    PrMinisterGR Ancient Guru

    Messages:
    8,125
    Likes Received:
    969
    GPU:
    Inno3D RTX 3090
    And I agree with this :p

I believe that DX12 will be adopted much, much faster, just because of the cruft in DX11. You can already see it, actually. The API really launched in November, and less than six months later we have quite a lot of titles, and almost all the major vendors jumped on it immediately. Tomb Raider didn't see much of an improvement because they probably just used a wrapper for it. Even so, people who have the game report improvements in smoothness, although the average frame rates dropped.
     
  15. Undying

    Undying Ancient Guru

    Messages:
    25,330
    Likes Received:
    12,743
    GPU:
    XFX RX6800XT 16GB
No one gets it. That's why there's so much controversy lately surrounding the Kepler cards.
     

  16. Denial

    Denial Ancient Guru

    Messages:
    14,206
    Likes Received:
    4,118
    GPU:
    EVGA RTX 3080
    I still think it's memory.

I'm trying to confirm whether or not the compression affects total memory usage. I can't really imagine a way that it doesn't, so the 950 should technically be able to fit more into memory. And if you look at games like GTA V: when a 680 hits its memory cap, it goes from 25 fps to 1.5 fps. Shadow of Mordor: 31 fps to 14 fps.

    Really wish someone with a 4GB 770 would benchmark either this or The Division on Ultra settings.

Like I said on the other page: if you look at the 370 (4 GB) vs. the 370 (2 GB), the difference in performance relative to the 770 is pretty huge.

I don't know if Hilbert has one or the other, but seeing a 770 (2 GB vs. 4 GB) and/or a 370 (2 GB vs. 4 GB) benchmark of either Hitman or The Division would be interesting. At least for me it would.
     
  17. PrMinisterGR

    PrMinisterGR Ancient Guru

    Messages:
    8,125
    Likes Received:
    969
    GPU:
    Inno3D RTX 3090
As far as I understand, Maxwell has delta-compressed memory transfers; it doesn't store things in memory in a compressed state. And that bench would be really interesting indeed. Or something like a retrospective article, starting with the original drivers on Windows 7 and ending with today's latest ones, comparing both new and old games.
     
  18. Denial

    Denial Ancient Guru

    Messages:
    14,206
    Likes Received:
    4,118
    GPU:
    EVGA RTX 3080
How can it compress the memory transfer and not the item itself? That doesn't make any sense to me. The only way to reduce bandwidth over the bus is to reduce the size of the object you're moving over it. As a result, that object is going to be smaller when it hits the other side and gets stored. It's not like they're going to decompress it again in memory; they can't, unless they bring it back over the bus again.

You're right that it doesn't work like normal compression: it essentially stores the data from the previous frame and only the delta values for the new frame. But that's still going to use less memory than storing both frames entirely, which it would otherwise have to do, because it uses data from the previous frame to calculate the new frame anyway.
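To make the delta idea concrete, here's a toy sketch (purely illustrative; the real hardware operates on pixel blocks inside the memory controller, not NumPy arrays):

```python
import numpy as np

# Previous frame and a new frame that differs in only a few pixels.
prev = np.zeros((4, 4), dtype=np.int16)
curr = prev.copy()
curr[1, 2] = 37  # one small change between frames

# Only the differences cross the bus; a mostly-zero array is cheap to move.
delta = curr - prev

# The other side reconstructs the full frame from prev + delta.
restored = prev + delta
assert (restored == curr).all()

# The bandwidth saving comes from the sparse delta; whether the *stored*
# copy stays compressed afterwards is exactly what's being debated here.
```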

Idk, nothing else really makes sense to me about it. I get that Undying wants it to be NVIDIA intentionally downgrading Kepler, like really bad, he wants it so bad, but I don't think that's what's happening. It's too simple an explanation for me; nothing is simple with graphics, and for NVIDIA to just keep doing it despite people calling them out makes no sense to me. It's either memory or some other bottleneck like compute/tessellation.

And like I said, you can kind of see the memory issue in other games if you look for it; there's just no comprehensive review. In Hilbert's test, his 4GB 370 outperforms the 770 at Ultra in The Division, but at TechSpot their 2GB 370 loses to the 770 at High by about 22%. So either something in that switch from High to Ultra is really screwing the 770 over, or that 2GB/4GB memory difference accounts for roughly 20% of performance.
     
  19. Ieldra

    Ieldra Banned

    Messages:
    3,490
    Likes Received:
    0
    GPU:
    GTX 980Ti G1 1500/8000
You have to remember though, Nvidia did force Crytek to add a second, highly tessellated underground map in Crysis 2 at gunpoint.

    "If you don't add that second underground map I'm going to put my nanosuit on, tear off your limbs and force them down your throat" - Jen-Hsun Huang to Cevat Yerli
     
  20. PrMinisterGR

    PrMinisterGR Ancient Guru

    Messages:
    8,125
    Likes Received:
    969
    GPU:
    Inno3D RTX 3090
You can use delta transfers: the transfers themselves are much smaller, since only the data that changed for each object in memory is sent, but the data itself is stored uncompressed. If I understand correctly, that's what Maxwell and Tonga/Fiji are doing. Although I believe that Tonga/Fiji do keep data compressed.

But still, in the end, after the data transfer is complete, you have an uncompressed frame's worth of data in memory, which means that the storage requirements don't change.

I don't believe that they downgrade. Just that they have stopped giving two ****s about it. They almost said as much with The Witcher, when they asked the "community" to notify them of any further Kepler performance regressions. That sounds to me like they don't monitor it themselves.

I believe it's both, but if Maxwell is not storing things in a compressed state, then it simply means that there is some setting that ****s things up.
     
