Discussion in 'Frontpage news' started by Hilbert Hagedoorn, Nov 14, 2019.
Crytek must be gearing up to sell the engine again.
I was going to download it and give it a run....until I read the part about "crytek launcher"....
They couldn't build a bigger GPU because they are limited by TDP. Cutting the RT/Tensor cores would have cut the price but not led to a faster GPU. It already uses 300 W without touching RT/Tensor; that wouldn't change if they were gone.
Also, Nvidia did push this - it's essentially what Nvidia's Voxel Global Illumination (VXGI) was, except that was used for GI and not reflections. The problem, like I said in my previous post, is that it requires a lot of art setup time and it's very ineffective with dynamic lighting - which is why they don't use cone tracing on anything moving in the scene.
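To make the cone-tracing idea concrete, here's a toy sketch: instead of tracing exact rays, you march a widening cone through pre-filtered (mip-mapped) voxel data, sampling coarser levels as the cone gets fatter. The 1-D "scene", function names, and constants are all made up for illustration; real engines do this over a 3-D voxel grid on the GPU.

```python
# Toy illustration of voxel cone tracing (the idea behind Nvidia's VXGI and
# Crytek's SVOGI-style approach). Everything here is a simplified 1-D sketch.

def build_mips(voxels):
    """Pre-filter the voxel grid: each mip level averages pairs of cells."""
    mips = [voxels]
    while len(mips[-1]) > 1:
        prev = mips[-1]
        mips.append([(prev[i] + prev[i + 1]) / 2 for i in range(0, len(prev), 2)])
    return mips

def cone_trace(mips, start, cone_angle, step=1.0, max_dist=16.0):
    """March a widening cone; sample coarser mips as the cone radius grows."""
    occlusion, dist = 0.0, step
    while occlusion < 1.0 and dist < max_dist:
        radius = dist * cone_angle                  # cone widens with distance
        level = min(int(radius).bit_length(), len(mips) - 1)
        mip = mips[level]
        idx = min(int((start + dist) / (2 ** level)), len(mip) - 1)
        occlusion += (1.0 - occlusion) * mip[idx]   # front-to-back compositing
        dist += step * (1 + radius)                 # larger steps as cone widens
    return min(occlusion, 1.0)
```

This also shows why the technique struggles with dynamic lighting: the mip pyramid has to be rebuilt (or incrementally revoxelized) whenever geometry moves, which is why static-only cone tracing is the common compromise.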
......shadow detail in the demo is very limited....many objects don't cast shadows....
There's a big difference compared to this one:
I think we are missing each other's point here. I agree about TDP and I agree about staying within those limits. But we also have to agree that in RTX games, with both Tensor cores and shaders active, the cards stay within TDP.
The actual economics are this: the whole idea is to get ROI from R&D. Nvidia put billions into developing compute/Tensor cores for the AI and deep-learning industry. However, it is near impossible to recover all those costs and turn a profit from a still-growing industry in a short time, so they had to find a way to sell the silicon into an established industry (such as gaming) - hence the RTX API was built to leverage Tensor cores for gaming.
What we are discussing is that we don't need Tensor cores for real-time ray tracing, as we were led to believe. All we need is RPM, true async compute (not just pre-emption), and more shaders to execute it. Sad to say, I never bought into their RTX implementation - I bought a 980 Ti on launch day and a 1080 Ti on launch day, but not a 2080 Ti.
RTX seems set to become the G-Sync of ray tracing. Soon there will be 'RTX compatible'.
I'm so surprised this runs perfectly @ 1920x1080 on my Ngreedia GTX 1660 Ti (non-OC) - a constant 50+ towards 70ish fps on Ultra, score 5845
Windows 10 pro 64-bit November 1909
9900K @ 5.2 GHz all-core
16GB DDR4 4000 @ c18
ultra 1920x1080: 7455
runs pretty slick
I think this thing is problematic not because Crytek built it without DXR (introducing a sort of second standard), but because, as far as I've understood, every implementation of DXR so far - and "ray tracing" such as this - doesn't do the same thing...
If I understood correctly, some games just use it for reflections, some for shadows, others for global illumination...
I have the impression that if they all did the full set at once, every card on the market right now would choke and throw up badly.
That said, a lot of things called "ray tracing" aren't even talking about the same stuff... or am I off here?
This tool is 100% pointless. No games use CryEngine anyway.
You lost me at "Crytek Launcher".
Lots of games have used, and still are using, CryEngine in its various versions.
Yeah, besides a few games... I don't see anything on the horizon.
Prey is probably my favorite CryEngine game.
Sniper Ghost Warrior Contracts is coming out this month, though I believe it's not listed on the Wikipedia page, and not everyone is into that game.
But there are many games that have not been released yet, though I believe many of them are probably effectively canceled.
Either way, my point was it's been more popular than I think people give it credit for. And yes, I agree, Prey was a very good game that didn't perform horribly on CryEngine - I wish there were more like it.
Runs ok'ish on 980ti @ 1405MHz as well
1080p - ultra: 5781
1080p - v.high: 7328
Ultra looks much nicer though.
max oc 1470: 6024
Excuse me for maybe being naive, but this logic makes no sense. If an RTX 2080 Ti uses its 300 W TDP for just the CUDA cores and not the RT/Tensor cores, then wouldn't using the RT/Tensor cores push it past the 300 W limit anyway? Removing the RT/Tensor cores would have freed up die space for more CUDA cores. Sure, they might have hit TDP limits, but this is where optimisation comes in: clock speeds get reduced in order to meet said TDP limit. A better, more efficient architecture, coupled with much faster GDDR6 memory and more CUDA cores - even at a lower clock speed - would have led to a much faster GPU in raw compute power. Heck, if TDP is really the reason, then instead of pouring money into a technology (RTX) that simply isn't ready for the mass market, why not focus those resources on a node shrink from 12nm FF to 10 or 7 nm?
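The "more cores at lower clocks" argument can be put into back-of-envelope numbers. A common approximation is that dynamic power scales with N·f·V², and since voltage roughly tracks frequency, P ∝ N·f³ while throughput ∝ N·f. Under a fixed TDP, adding cores then still gains throughput, just sub-linearly. All figures below are illustrative assumptions, not real die data:

```python
# Back-of-envelope for trading RT/Tensor die area for more shader cores
# under a fixed TDP. Assumes P = k * N * f^3 (voltage scaling folded in);
# throughput = N * f. Core counts are hypothetical.

def clock_for_tdp(n_cores, tdp, k=1.0):
    """Highest clock (arbitrary units) keeping N cores within the TDP."""
    return (tdp / (k * n_cores)) ** (1 / 3)

def throughput(n_cores, tdp, k=1.0):
    return n_cores * clock_for_tdp(n_cores, tdp, k)

base = throughput(4352, 300)   # 2080 Ti-like shader count (assumed)
wide = throughput(5500, 300)   # hypothetical: RT/Tensor area spent on shaders
```

Algebraically, throughput ∝ TDP^(1/3) · N^(2/3), so the wider chip wins by (5500/4352)^(2/3) ≈ 1.17x in this toy model - consistent with the post's claim that more cores at lower clocks nets out faster, though the model ignores memory bandwidth and static power.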
It's like someone else mentioned: they poured so much R&D into Tensor cores for the AI and automotive industries, which are still relatively new markets, that they're struggling to make a profit from them - so they needed a way to please investors and shareholders by repurposing them as a new gimmick technology. They sold it to gamers as the next big thing, when in reality it IS the next big thing, just not for at least another 3-5 years.
The tech in the video should have been the first stepping stone; instead Nvidia treated it as a sprint and not a marathon.
It was a great engine at one time, but seems abandoned. Hell, Crytek might go bankrupt any second now... so what's the point in using the engine?
They can't go bankrupt, they are owned by EA, and EA is doing fine. EA could for sure close them down (as they are known to do), but if they are putting out projects like this that should tell you EA still sees a point in Crytek and their engine.
EA loves their Frostbite engine, but I don't think EA likes the idea of it being public. Cryengine remains as EA's "Unity competition"
When the RT cores are in use, the CUDA cores draw a lot less power. That's because the RT cores' performance is weak enough to bottleneck the CUDA cores.
They are node-shrinking with their new GPUs, likely due next year - probably 7 nm. The node size doesn't really matter if the performance is there, and Turing doesn't struggle.
I don't see an issue with pushing for new tech before it's prime-time. PC has always been the place to see the future before it's ready. AMD did it with Mantle.
the RT cores are not the bottleneck.
If that were true, enabling RT wouldn't affect performance. I can't say whether the RT cores themselves are the root of it, but the CUDA cores are being bottlenecked with RT turned on.
Since we can see what performance looks like without RT, we know the baseline of what the CUDA cores can do. If anything, with RT off, the CUDA cores have a harder time thanks to the extra traditional reflections/shadows/GI work. Yet performance still goes down with RT on - that's what's called a bottleneck.
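The argument above boils down to simple frame-time arithmetic: if the RT stage can't overlap with shading, the frame time is the sum of both stages, and the CUDA cores sit idle during the RT portion. A minimal sketch with made-up per-frame timings (the 10 ms / 6 ms split is purely illustrative):

```python
# Hypothetical per-frame timings (ms) illustrating the RT-bottleneck argument.

def fps(frame_ms):
    return 1000.0 / frame_ms

shading_ms = 10.0   # assumed pure-raster frame time (100 fps baseline)
rt_ms = 6.0         # assumed time spent in the RT stage

fps_off = fps(shading_ms)
fps_on_serial = fps(shading_ms + rt_ms)        # no overlap: fps drops
fps_on_overlap = fps(max(shading_ms, rt_ms))   # perfect overlap: no drop

# In the serial case the CUDA cores idle for the whole RT stage:
cuda_idle_fraction = rt_ms / (shading_ms + rt_ms)
```

With these numbers, the serial case lands at 62.5 fps with the shaders idle 37.5% of the frame - which is also why average power can drop with RT enabled, matching the earlier post.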