Crytek releases Neon Noir Ray Tracing Benchmark

Discussion in 'Frontpage news' started by Hilbert Hagedoorn, Nov 14, 2019.

  1. Astyanax

    Astyanax Ancient Guru

    Messages:
    17,050
    Likes Received:
    7,382
    GPU:
    GTX 1080ti
    Must be gearing up to sell the engine again.
     
    Deleted member 213629 likes this.
  2. sykozis

    sykozis Ancient Guru

    Messages:
    22,492
    Likes Received:
    1,537
    GPU:
    Asus RX6700XT
    I was going to download it and give it a run... until I read the part about the "Crytek launcher"...
     
  3. Denial

    Denial Ancient Guru

    Messages:
    14,207
    Likes Received:
    4,121
    GPU:
    EVGA RTX 3080

    They couldn't build a bigger GPU because they are limited by TDP. Cutting the RT/Tensor cores would have cut the price but not led to a faster GPU. It already draws 300W without doing any RT/Tensor work; that wouldn't change if the RT/Tensor cores were gone.

    Also, Nvidia did push this - it's essentially what Nvidia's Voxel Global Illumination (VXGI) was, except that was used for GI and not reflections. The problem, like I said in my previous post, is that it requires a lot of art setup time and it's extremely ineffective with dynamic lighting - which is why they don't use cone tracing on anything moving in the scene.
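
    For anyone curious what the cone-tracing idea actually looks like, below is a rough, self-contained CPU-side sketch (not Crytek's or Nvidia's actual code; the voxel sampling is stubbed and every constant is made up). The point to notice is that the radiance being sampled has to be voxelized and pre-filtered ahead of time, which is exactly the step that gets expensive once lights or objects move.

    // Toy sketch of voxel cone tracing (the VXGI/SVOGI idea), purely illustrative.
    // The voxel mip chain is stubbed with a trivial function so the file compiles
    // and runs standalone.
    #include <algorithm>
    #include <cmath>
    #include <cstdio>

    struct Vec3 { float x, y, z; };

    static Vec3 add(Vec3 a, Vec3 b) { return {a.x + b.x, a.y + b.y, a.z + b.z}; }
    static Vec3 mul(Vec3 a, float s) { return {a.x * s, a.y * s, a.z * s}; }

    // Stub: in a real renderer this samples a 3D texture mip chosen from the cone
    // radius; radiance was voxelized ("injected") earlier. Keeping that voxel data
    // current for moving lights/objects is the costly part.
    static Vec3 sampleVoxelRadiance(Vec3 /*pos*/, float mipLevel) {
        float fade = 1.0f / (1.0f + mipLevel);          // fake falloff with mip level
        return {0.5f * fade, 0.4f * fade, 0.3f * fade}; // fake sky-ish colour
    }

    // March one cone from 'origin' along 'dir'; the sample footprint grows with
    // distance, so farther samples read coarser (pre-filtered) voxel mips.
    static Vec3 traceCone(Vec3 origin, Vec3 dir, float coneAngle, float maxDist) {
        Vec3 accum{0, 0, 0};
        float occlusion = 0.0f;
        float dist = 0.1f;                               // start bias off the surface
        while (dist < maxDist && occlusion < 1.0f) {
            float radius = dist * std::tan(coneAngle);   // cone footprint at this distance
            float mip = std::log2(std::max(1.0f, radius / 0.05f)); // 0.05 = assumed voxel size
            Vec3 pos = add(origin, mul(dir, dist));
            Vec3 radiance = sampleVoxelRadiance(pos, mip);
            float weight = 1.0f - occlusion;             // front-to-back accumulation
            accum = add(accum, mul(radiance, weight * 0.25f));
            occlusion += weight * 0.25f;
            dist += std::max(radius, 0.05f);             // step grows with the cone
        }
        return accum;
    }

    int main() {
        Vec3 gi = traceCone({0, 1, 0}, {0, 1, 0}, 0.3f, 10.0f);
        std::printf("approx indirect light: %.3f %.3f %.3f\n", gi.x, gi.y, gi.z);
    }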
     
  4. Caesar

    Caesar Ancient Guru

    Messages:
    1,561
    Likes Received:
    686
    GPU:
    RTX 4070 Gaming X
    ...shadow detail in the demo is very limited... many objects do not cast shadows...

    ---------------------------------
    There's a big difference compared to this one:

     

  5. craycray

    craycray Member Guru

    Messages:
    168
    Likes Received:
    43
    GPU:
    3080 Gaming X Trio

    I think we are missing each other's point here. I agree about TDP and I agree about staying within those limits. But we also have to agree that in RTX games, with both the Tensor cores and the shaders active, the cards still stay within TDP.

    Now, the actual economic theory is that the whole idea is to get ROI from R&D. Nvidia put billions into developing tensor compute cores for the AI and deep-learning industry. However, it is near impossible to recover all the costs and turn a profit from a still-growing industry in a short time, so they had to find a way to sell it into an established industry (such as gaming); hence the RTX API was built to leverage tensor cores for gaming.

    What we are discussing is that we don't need tensor cores for real-time ray tracing, as we were led to believe. All we need is RPM (rapid packed math), true async compute (not just pre-emption), and more shaders to execute it. Sad to say, I never bought into their RTX implementation; I bought a 980 Ti on launch day and a 1080 Ti on launch day, but not a 2080 Ti.

    RTX seems to be something that will become the G-Sync of ray tracing. Soon there will be 'RTX compatible'.
     
    Backstabak and CPC_RedDawn like this.
  6. g60force

    g60force Guest

    Messages:
    624
    Likes Received:
    6
    GPU:
    NV 1660Ti + 5850
    I'm so surprised this runs perfectly @1920x1080 on my Ngreedia GTX 1660 Ti (non-OC): a constant 50+ toward 70ish FPS on Ultra (score: 5845).
     
  7. BLU

    BLU Guest

    Messages:
    1
    Likes Received:
    0
    GPU:
    GTX 1080
    Windows 10 Pro 64-bit, 1909 (November 2019 Update)
    9900K @ 5.2GHz all-core
    16GB DDR4-4000 @ CL18
    GTX 1080
    1TB NVMe

    Ultra 1920x1080: 7455
    60-102 FPS

    Runs pretty slick.
     
  8. fantaskarsef

    fantaskarsef Ancient Guru

    Messages:
    15,773
    Likes Received:
    9,669
    GPU:
    4090@H2O
    I think this thing is problematic not because Crytek built it without DXR (introducing a sort of second standard), but because, as far as I've understood, the DXR implementations so far, and "ray tracing" like this, don't all do the same thing...

    If I understood correctly, some are just using it for reflections, sometimes it's used for shadows, other times for global illumination...

    I have the impression that if they all used the same full feature set, every card on the market right now would choke and throw up badly.

    That said, a lot of things called ray tracing actually don't even talk about the same stuff... or am I off here?
     
  9. nick0323

    nick0323 Maha Guru

    Messages:
    1,032
    Likes Received:
    77
    GPU:
    Asus DUAL RTX2060S
    You lost me at "Crytek Launcher". o_O
     
    XenthorX, fantaskarsef and sykozis like this.
  10. Aura89

    Aura89 Ancient Guru

    Messages:
    8,413
    Likes Received:
    1,483
    GPU:
    -

  11. Aura89

    Aura89 Ancient Guru

    Messages:
    8,413
    Likes Received:
    1,483
    GPU:
    -
    Sniper Ghost Warrior Contracts is coming out this month, though I believe it's not listed on the Wikipedia page, and not everyone is into that game.

    But there are many games that have not been released yet, though I believe many of them are probably effectively canceled.

    Either way, my point was that it's been more popular than I think people give it credit for. And yes, I agree, Prey was a very good game that didn't perform horribly on CryEngine; I wish there were more like it.
     
  12. -Tj-

    -Tj- Ancient Guru

    Messages:
    18,110
    Likes Received:
    2,611
    GPU:
    3080TI iChill Black
    Last edited: Nov 16, 2019
  13. CPC_RedDawn

    CPC_RedDawn Ancient Guru

    Messages:
    10,470
    Likes Received:
    3,156
    GPU:
    PNY RTX4090
    Excuse me for maybe being naive, but this logic makes no sense. If an RTX 2080 Ti uses its 300W TDP for just the CUDA cores (CUs) and not the RT/Tensor cores, then with the RT/Tensor cores in use wouldn't it exceed that 300W TDP limit anyway? This makes no sense; removing the RT/Tensor cores would have freed up die space, allowing for more CUs... sure, they might have hit TDP limits, but this is where optimisation comes in and clock speeds are reduced in order to meet said TDP limit. Having a better, more efficient architecture, coupled with much faster GDDR6 memory and more CUs, even at a lower clock speed, would have led to a much faster GPU in raw compute power (rough arithmetic below). Heck, if TDP is really the reason why, then instead of pouring money into a technology (RTX) that simply isn't ready for the mass market, why not focus their resources on a node shrink and move from 12nm FF to 10 or 7nm?

    It's like someone else mentioned: they poured so much R&D into tensor cores for the AI and automotive industries, which are still relatively new markets, that they are struggling to make a profit, so they needed a way to please investors and shareholders by using them for a new gimmick technology - selling it to gamers as the next big thing, when in reality it IS the next big thing, just not for at least another 3-5 years.

    The tech in the video should have been the first stepping stone; instead Nvidia treated it as a race and not a marathon.
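
    As a rough illustration of the "more shaders at lower clocks" trade-off mentioned above (all figures hypothetical, nothing measured): dynamic power scales roughly with cores x clock x voltage^2, while shader throughput scales roughly with cores x clock, so a wider, slower chip can deliver more compute inside the same power budget.

    // Back-of-the-envelope "wide and slow" arithmetic, purely illustrative.
    #include <cstdio>

    struct Config { double cores, clockGHz, volts; };

    static double relPower(Config c)      { return c.cores * c.clockGHz * c.volts * c.volts; }
    static double relThroughput(Config c) { return c.cores * c.clockGHz; }

    int main() {
        Config baseline {4352, 1.80, 1.00};   // 2080 Ti-like shader count, made-up V/f point
        Config wideSlow {5500, 1.55, 0.93};   // hypothetical: more shaders, lower clock/voltage
        std::printf("baseline:  power %.0f, throughput %.0f\n",
                    relPower(baseline), relThroughput(baseline));
        std::printf("wide/slow: power %.0f, throughput %.0f\n",
                    relPower(wideSlow), relThroughput(wideSlow));
    }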
     
  14. Cyberdyne

    Cyberdyne Guest

    Messages:
    3,580
    Likes Received:
    308
    GPU:
    2080 Ti FTW3 Ultra
    They can't go bankrupt, they are owned by EA, and EA is doing fine. EA could for sure close them down (as they are known to do), but if they are putting out projects like this that should tell you EA still sees a point in Crytek and their engine.
    EA loves their Frostbite engine, but I don't think EA likes the idea of it being public. CryEngine remains EA's "Unity competitor".
     
    Last edited: Nov 16, 2019
  15. Cyberdyne

    Cyberdyne Guest

    Messages:
    3,580
    Likes Received:
    308
    GPU:
    2080 Ti FTW3 Ultra
    When the RT cores are in use, the CUDA cores draw a lot less power. This is because the RT cores' performance is weak enough to bottleneck the CUDA cores.

    They are doing a node shrink with their new GPUs, likely due next year - probably 7nm. What the node size is doesn't really matter if the performance is there. Turing doesn't struggle.

    I don't see an issue with pushing for new tech before it's ready for prime time. PC has always been the place to see the future before it's ready. AMD did it with Mantle.
     

  16. Astyanax

    Astyanax Ancient Guru

    Messages:
    17,050
    Likes Received:
    7,382
    GPU:
    GTX 1080ti
    The RT cores are not the bottleneck.
     
  17. Cyberdyne

    Cyberdyne Guest

    Messages:
    3,580
    Likes Received:
    308
    GPU:
    2080 Ti FTW3 Ultra
    If that were true then enabling RT wouldn't affect performance. I guess I can't say whether the cores themselves are the root of it, but the CUDA cores are being bottlenecked with RT turned on.
    Since we can see what performance looks like without RT, we know the baseline of what the CUDA cores can do. If anything, with RT off, the CUDA cores have a harder time thanks to having to do traditional reflections/shadows/GI. Yet since performance still goes down with RT on, we have what's called a bottleneck.
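
    A back-of-the-envelope way to picture that argument (numbers invented, purely illustrative): if shading has to wait on the ray-tracing results each frame, the RT cost adds to the frame time even though the shader cores themselves didn't get any slower.

    // Tiny frame-time model of the "RT on vs off" bottleneck argument.
    #include <cstdio>

    int main() {
        double shadeMs   = 10.0; // hypothetical raster/shading work per frame
        double rtMs      = 6.0;  // hypothetical BVH traversal/intersection work
        double rasterFps = 1000.0 / shadeMs;
        double hybridFps = 1000.0 / (shadeMs + rtMs); // serialized worst case
        std::printf("raster-only: %.1f fps, RT on: %.1f fps\n", rasterFps, hybridFps);
    }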
     
  18. Astyanax

    Astyanax Ancient Guru

    Messages:
    17,050
    Likes Received:
    7,382
    GPU:
    GTX 1080ti
    Ray tracing is not done how you think it's done. The RT cores are for sorting rays (does this ray hit something on screen or doesn't it), and they pass the results on to the next stage of rendering, which is done in the traditional way.

    The shadow, or reflection, or lighting you end up with gets done by the traditional shading pipe.
    You can sort faster, but it's the traditional shaders that are the point of contention.

    https://www.anandtech.com/show/13282/nvidia-turing-architecture-deep-dive/5
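
    To put that split into code form, here's a toy CPU sketch (not actual DXR or driver code): one function stands in for the BVH traversal/intersection work the RT cores accelerate, and a second stands in for the ordinary shading work that still runs on the regular shader pipe and can still be the limiting factor.

    // Toy sketch of the hybrid intersection/shading split, purely illustrative.
    #include <cmath>
    #include <cstdio>
    #include <optional>

    struct Ray { float ox, oy, oz, dx, dy, dz; };
    struct Hit { float t; float nx, ny, nz; };

    // Stage 1: intersection / "sorting" (the part Turing's RT cores accelerate).
    // Here it's just a hard-coded ground plane at y = 0 so the file runs standalone.
    static std::optional<Hit> findHit(const Ray& r) {
        if (std::fabs(r.dy) < 1e-6f) return std::nullopt;
        float t = -r.oy / r.dy;                // ray/plane intersection
        if (t <= 0.0f) return std::nullopt;
        return Hit{t, 0.0f, 1.0f, 0.0f};       // hit distance + plane normal
    }

    // Stage 2: shading (the part that runs on the traditional programmable shaders).
    static float shade(const Ray& r, const Hit& h) {
        // Simple Lambert term against a fixed light direction (made up for the demo).
        float lx = 0.0f, ly = 1.0f, lz = 0.0f;
        float ndotl = h.nx * lx + h.ny * ly + h.nz * lz;
        (void)r;
        return ndotl > 0.0f ? ndotl : 0.0f;
    }

    int main() {
        Ray r{0.0f, 2.0f, 0.0f, 0.3f, -1.0f, 0.2f};
        if (auto h = findHit(r))               // RT-core-style work
            std::printf("shaded value: %.2f at t=%.2f\n", shade(r, *h), h->t); // shader-style work
        else
            std::printf("miss\n");
    }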
     
  19. CPC_RedDawn

    CPC_RedDawn Ancient Guru

    Messages:
    10,470
    Likes Received:
    3,156
    GPU:
    PNY RTX4090
    This makes more sense to me; if this is the case then Turing was an even bigger mistake. Why on earth put RT/Tensor cores onto your chip if they end up starving your CUDA cores and in turn massively decreasing performance? On the one hand they were touting Turing as a 4K 60fps monster, on the other hand they were pushing RTX, which tanks performance so the cards become 1080p cards... More proof the tech is not ready. It's like Bugatti making a new car that can do 1000mph... but when it does, it explodes...

    Also, Mantle was waaaaay more ready than RTX. I had an HD 7970 GHz Edition when Mantle came out, and in BF4 my performance shot up; I was able to max the game out at Ultra settings at 1080p and gained around 30% better performance with MUCH higher minimum frame rates. Mantle eventually pushed the industry to adopt low-level APIs after years and years of massive overhead - it eventually became Vulkan, and Microsoft followed suit with DX12. Not to mention AMD basically gave their Mantle code away to Khronos (the OpenGL/Vulkan group); can you see Nvidia doing the same with their tech? Unless it becomes unprofitable, they will never give it away to benefit the whole industry, just their wallets.

    All RTX has done is show how far behind we are with the hardware needed to run this properly.
     
  20. Cyberdyne

    Cyberdyne Guest

    Messages:
    3,580
    Likes Received:
    308
    GPU:
    2080 Ti FTW3 Ultra
    Turing is both of those things, just not at the same time. I never felt misled by their marketing; I certainly was not expecting the 2080 Ti to do 4K 60fps with RT.
    But Turing is the best at 4K, and it's the best at RT. Real-time ray tracing has to start somewhere, and NV is willing to invest.

    Mantle worked out of the gate; RT also works out of the gate, "it just works!" lol. Mantle was focused on more FPS; RT never made such claims. RT offers ray tracing in real time, and it does that.
    RT is also not proprietary. When AMD supports RT, these current RTX games will work on AMD GPUs out of the box. That's been the case since RTX was a thing; you can't say the same about Mantle.
     
