Crytek releases Neon Noir Ray Tracing Benchmark

Discussion in 'Frontpage news' started by Hilbert Hagedoorn, Nov 14, 2019.

  1. Astyanax

    Astyanax Ancient Guru

    Messages:
    6,643
    Likes Received:
    2,080
    GPU:
    GTX 1080ti
    must be gearing up to sell the engine again.
     
    K.S. likes this.
  2. sykozis

    sykozis Ancient Guru

    Messages:
    21,631
    Likes Received:
    937
    GPU:
    MSI RX5700
    I was going to download it and give it a run....until I read the part about "crytek launcher"....
     
  3. Denial

    Denial Ancient Guru

    Messages:
    12,788
    Likes Received:
    2,053
    GPU:
    EVGA 1080Ti

    They couldn't build a bigger GPU because they are limited by TDP. Cutting the RT/Tensor cores would have cut the price, but it would not have led to a faster GPU. It already uses 300W without doing any RT/Tensor work; that isn't going to change if the Tensor/RT hardware were gone.

    Also, Nvidia did push this - it's essentially what Nvidia's Voxel Global Illumination was... except that was used for GI and not reflections. The problem, like I said in my previous post, is that it requires a lot of art setup time and it's extremely ineffective with dynamic lighting - which is why they don't use cone tracing on anything moving in the scene.
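    For anyone curious what that cone tracing actually looks like, here's a minimal sketch of the idea (voxel cone tracing, the technique behind VXGI/SVOGI-style GI). Everything below is illustrative - assumed data layouts, names and numbers, not engine code:

    Code:
    # Minimal, illustrative sketch of voxel cone tracing. Assumes the scene has
    # already been voxelized into a pre-filtered RGBA mip chain (the expensive,
    # setup-heavy part the post is talking about).
    import numpy as np

    def sample_voxel_mip(mips, pos, level):
        """Fetch pre-filtered radiance/occlusion from the voxel mip chain.
        mips[level] is assumed to be a dense (N, N, N, 4) RGBA grid over [0,1]^3."""
        grid = mips[int(level)]
        n = grid.shape[0]
        idx = np.clip((pos * n).astype(int), 0, n - 1)
        return grid[idx[0], idx[1], idx[2]]          # (r, g, b, occlusion)

    def trace_cone(mips, origin, direction, half_angle, max_dist=1.0):
        """March one cone: step size and mip level grow with the cone radius,
        radiance is accumulated front-to-back until fully occluded."""
        color = np.zeros(3)
        occlusion = 0.0
        t = 0.02                                      # start offset to avoid self-hit
        while t < max_dist and occlusion < 1.0:
            radius = t * np.tan(half_angle)           # cone footprint at distance t
            level = np.log2(max(radius * mips[0].shape[0], 1.0))
            level = min(level, len(mips) - 1)
            sample = sample_voxel_mip(mips, origin + direction * t, level)
            a = sample[3] * (1.0 - occlusion)         # front-to-back blending
            color += sample[:3] * a
            occlusion += a
            t += max(radius, 1.0 / mips[0].shape[0])  # larger steps as the cone widens
        return color, occlusion

    The catch is exactly what the post mentions: the voxel mip chain has to be rebuilt whenever lit geometry moves, which is why dynamic objects are usually left out of the cone-traced result.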
     
  4. Caesar

    Caesar Maha Guru

    Messages:
    1,117
    Likes Received:
    419
    GPU:
    GTX 1070Ti Titanium
    ......shadow detail in the demo is very limited... many objects do not cast shadows....

    ---------------------------------
    There's a big difference compared to this one:

     

  5. craycray

    craycray Member Guru

    Messages:
    106
    Likes Received:
    30
    GPU:
    1080ti SC2

    I think we are missing each other's point here. I agree about TDP and I agree about staying within those limits. But we also have to agree that in RTX games, with both the Tensor cores and the shaders active, the cards still stay within TDP.

    Now for the actual economics: the whole idea is to get ROI from R&D. Nvidia put billions into developing tensor cores for the AI and deep-learning industry. However, it is near impossible to recover all those costs and make a profit from a still-growing industry in a short time, so they had to find a way to sell the hardware into an established industry (such as gaming); hence the RTX API was built to leverage tensor cores for gaming.

    What we are discussing is that we don't need tensor cores for real-time ray tracing, as we were led to believe. All we need is RPM (rapid packed math), true async compute (not just pre-emption), and more shaders to execute it. Sad to say, I never bought into their RTX implementation; I bought a 980 Ti on launch day and a 1080 Ti on launch day, but not a 2080 Ti.

    RTX seems to be something that will become the G-Sync of ray tracing. Soon there will be 'RTX compatible' badges.
     
    Backstabak and CPC_RedDawn like this.
  6. g60force

    g60force Master Guru

    Messages:
    622
    Likes Received:
    5
    GPU:
    NV 1660Ti + 5850
    I'm so surprised this runs perfectly @1920x1080 on my Ngreedia GTX 1660 Ti (non-OC): a constant 50+ towards 70-ish FPS on Ultra, score 5845.
     
  7. BLU

    BLU New Member

    Messages:
    1
    Likes Received:
    0
    GPU:
    GTX 1080
    Windows 10 Pro 64-bit, 1909 (November update)
    9900K @ 5.2 GHz all-core
    16GB DDR4-4000 @ CL18
    GTX 1080
    1TB NVMe

    Ultra 1920x1080: 7455
    60-102 FPS

    runs pretty slick
     
  8. fantaskarsef

    fantaskarsef Ancient Guru

    Messages:
    11,553
    Likes Received:
    3,513
    GPU:
    2080Ti @h2o
    I think this thing is problematic not because Crytek built it without DXR (introducing a sort of second standard), but because, as far as I've understood, every implementation of DXR so far, and "ray tracing" such as this, doesn't do the same thing...

    If I understood correctly, some only use it for reflections, sometimes it's used for shadows, other times for global illumination...

    I have the impression that if they all used it for everything at once, every card on the market right now would choke and throw up badly.

    That said, a lot of things called ray tracing aren't even talking about the same stuff... or am I off here?
     
  9. angelgraves13

    angelgraves13 Ancient Guru

    Messages:
    1,902
    Likes Received:
    504
    GPU:
    RTX 2080 Ti FE
    This tool is 100% pointless. No games use CryEngine anyway.
     
  10. nick0323

    nick0323 Master Guru

    Messages:
    979
    Likes Received:
    41
    GPU:
    Palit GTX970 Jetstream OC
    You lost me at "Crytek Launcher". o_O
     
    XenthorX, fantaskarsef and sykozis like this.

  11. Aura89

    Aura89 Ancient Guru

    Messages:
    7,865
    Likes Received:
    1,041
    GPU:
    -
  12. angelgraves13

    angelgraves13 Ancient Guru

    Messages:
    1,902
    Likes Received:
    504
    GPU:
    RTX 2080 Ti FE
  13. Aura89

    Aura89 Ancient Guru

    Messages:
    7,865
    Likes Received:
    1,041
    GPU:
    -
    Sniper Ghost Warrior Contracts is coming out this month, though I believe it's not listed on the Wikipedia page, and not everyone is into that game.

    But there are many games that have not been released yet, though I believe many of them are probably effectively cancelled.

    Either way, my point was that the engine has been more popular than I think people give it credit for. And yes, I agree, Prey was a very good game whose performance on CryEngine wasn't horrible; I wish there were more like it.
     
  14. -Tj-

    -Tj- Ancient Guru

    Messages:
    16,847
    Likes Received:
    1,770
    GPU:
    Zotac GTX980Ti OC
    Last edited: Nov 16, 2019
  15. CPC_RedDawn

    CPC_RedDawn Ancient Guru

    Messages:
    7,937
    Likes Received:
    362
    GPU:
    Zotac GTX1080Ti AMP
    Excuse me for maybe being naive, but this logic makes no sense. If an RTX 2080 Ti uses its 300W TDP for just the CUDA cores and not the RT/Tensor cores, then wouldn't it exceed that 300W limit as soon as the RT/Tensor cores are used? Removing the RT/Tensor cores would have freed up die space, allowing for more CUDA cores... sure, they might have hit TDP limits, but this is where optimisation comes in and clock speeds are reduced in order to meet said TDP limit. A better, more efficient architecture, coupled with much faster GDDR6 memory and more CUDA cores even at a lower clock speed, would have led to a much faster GPU for raw compute power. Heck, if TDP really is the reason, then instead of pouring money into a technology (RTX) that simply isn't ready for the mass market, why not focus their resources on a node shrink and move from 12nm FF to 10 or 7nm?

    It's like someone else mentioned: they poured so much R&D into tensor cores for the AI and automotive industries, which are still relatively new markets that are hard to profit from, that they needed a way to please investors and shareholders by using those cores for a new gimmick technology, selling it to gamers as the next big thing. In reality it IS the next big thing, just not for at least another 3-5 years.

    The tech in this video should have been the first stepping stone; instead Nvidia treated it as a race and not a marathon.
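    Rough back-of-envelope of the "more shaders at lower clocks" argument, as a sketch. All numbers below are made up for illustration (nothing here is measured 2080 Ti data), and whether the freed-up die area would really fit that many extra shaders is exactly what this thread is arguing about:

    Code:
    # Dynamic power scales roughly with cores * frequency * voltage^2, and voltage
    # scales roughly with frequency near the top of the V/f curve. Under those crude
    # assumptions, a wider chip at lower clocks can match power and gain throughput.

    def relative_power(cores, clock_ghz, base_clock_ghz=1.8):
        voltage_scale = clock_ghz / base_clock_ghz    # crude V ~ f assumption
        return cores * clock_ghz * voltage_scale ** 2

    def relative_throughput(cores, clock_ghz):
        return cores * clock_ghz                      # ideal scaling, ignores memory limits

    baseline = {"cores": 4352, "clock": 1.8}          # 2080 Ti-like shader count (assumed clock)
    wider    = {"cores": 5500, "clock": 1.66}         # hypothetical: RT/Tensor area spent on shaders

    for name, cfg in [("baseline", baseline), ("wider, lower-clocked", wider)]:
        p = relative_power(cfg["cores"], cfg["clock"])
        t = relative_throughput(cfg["cores"], cfg["clock"])
        print(f"{name:22s} power={p:7.0f}  throughput={t:7.0f}")

    With these made-up numbers the wider chip lands at roughly the same power but ~17% more shader throughput, which is the shape of the argument being made; the real trade-off depends on how much area RT/Tensor actually occupies.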
     

  16. angelgraves13

    angelgraves13 Ancient Guru

    Messages:
    1,902
    Likes Received:
    504
    GPU:
    RTX 2080 Ti FE
    It was a great engine at one time, but it seems abandoned. Hell, Crytek might go bankrupt any second now... so what's the point in using the engine?
     
  17. Cyberdyne

    Cyberdyne Ancient Guru

    Messages:
    3,462
    Likes Received:
    225
    GPU:
    2080 Ti FTW3 Ultra
    They can't go bankrupt; they are owned by EA, and EA is doing fine. EA could for sure close them down (as they are known to do), but if they are putting out projects like this, that should tell you EA still sees a point in Crytek and their engine.
    EA loves its Frostbite engine, but I don't think EA likes the idea of making it public. CryEngine remains EA's "Unity competitor".
     
    Last edited: Nov 16, 2019
  18. Cyberdyne

    Cyberdyne Ancient Guru

    Messages:
    3,462
    Likes Received:
    225
    GPU:
    2080 Ti FTW3 Ultra
    When the RT cores are in use, the CUDA cores draw a lot less power. This is because the RT cores' performance is weak enough to bottleneck the CUDA cores.

    They are doing a node shrink with their new GPUs, likely due next year, probably on 7nm. The node size doesn't really matter if the performance is there. Turing doesn't struggle.

    I don't see an issue with pushing for new tech before it's ready for prime time. PC has always been the place to see the future before it's ready. AMD did it with Mantle.
     
  19. Astyanax

    Astyanax Ancient Guru

    Messages:
    6,643
    Likes Received:
    2,080
    GPU:
    GTX 1080ti
    the RT cores are not the bottleneck.
     
  20. Cyberdyne

    Cyberdyne Ancient Guru

    Messages:
    3,462
    Likes Received:
    225
    GPU:
    2080 Ti FTW3 Ultra
    If that were true, then enabling RT wouldn't affect performance. I guess I can't say whether the cores themselves are the root of it, but the CUDA cores are being bottlenecked with RT turned on.
    Since we can see what performance looks like without RT, we know the baseline of what the CUDA cores can do. If anything, with RT off, the CUDA cores have a harder time because they also have to render the traditional reflections/shadows/GI. Yet since performance still goes down with RT on, we have what's called a bottleneck.
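    A toy illustration of that argument with made-up per-frame costs; the overlap fraction is purely an assumption, not a measured number:

    Code:
    # If the RT-core work can't fully overlap shading, the non-overlapped part adds
    # to frame time, and the shader units sit partially idle waiting on it (which
    # would also explain lower CUDA power draw with RT on).

    def frame_time_ms(shade_ms, rt_ms, overlap=0.5):
        # 'overlap' is the assumed fraction of RT work hidden behind shading.
        return shade_ms + rt_ms * (1.0 - overlap)

    shade_only = frame_time_ms(10.0, 0.0)   # ~100 fps with RT effects off (assumed)
    with_rt    = frame_time_ms(9.0, 8.0)    # raster gets a bit cheaper, RT adds serial work

    print(f"RT off: {1000 / shade_only:.0f} fps, RT on: {1000 / with_rt:.0f} fps")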
     
