With Hellblade, they scanned the actor under different lighting conditions and use a dynamic shader that blends those scanned textures based on light type and intensity. UE4 uses this technique for water effects on object surfaces (blood in KF2 instead of decal blood). Basically they need a texture for each state, plus some memory object holding the blend level. But that's for static geometry. Even the girl here can be considered a static object, as skin only bends/stretches and will still look realistic. But all those leaves, sand, rocks... with that kind of detail you expect them to interact somehow, and that would be very resource-costly (if possible at all). Biggest downside of all those techniques? They're a huge resource hog. Tons of super-high-resolution textures require not only a lot of VRAM, but bandwidth too. Materials, even complex ones, can create the same result with only a fraction of the VRAM and with much lower bandwidth demands. But the time to create those...
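The blend being described boils down to a per-pixel lerp between captured states, driven by a stored blend level. Here's a minimal numpy sketch of that idea; the function name and the 1x1 "textures" are mine for illustration, not Hellblade's or UE4's actual shader code:

```python
import numpy as np

def blend_scans(dry, wet, intensity):
    """Linearly blend two scanned textures of the same surface.

    dry, wet: float arrays of shape (H, W, 3), the surface captured
    under two different states (e.g. dry vs. wet/lit).
    intensity: scalar or (H, W, 1) mask in [0, 1], the "memory object
    holding the blend level" the post mentions.
    """
    intensity = np.clip(intensity, 0.0, 1.0)
    return dry * (1.0 - intensity) + wet * intensity

# Two tiny 1x1 "textures": a dry rock color vs. a darker wet one.
dry = np.array([[[0.4, 0.3, 0.2]]])
wet = np.array([[[0.1, 0.1, 0.1]]])
half = blend_scans(dry, wet, 0.5)  # halfway between the two scans
```

A real shader does this per fragment on the GPU, often with more than two states and separate masks per material layer, but the arithmetic is the same.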
Still too much input lag on most 4K sets, sadly. I just grabbed a 1080p 60" set that has 18 ms input lag and backlight razzmatazz jiggery-pokery to stop motion blur. I could not be happier, thanks to Nvidia DSR and upscaling. I just don't see the need even at 60 inches. Too many pixels to push for too little benefit. Well, and my eyes are not what they used to be.
Agreed. It does indeed look stunning. Cannot wait for future games to actually look like that. That said, where are the GPUs capable of running this fully maxed at 120 Hz?
Remember the initial graphics quality in Ark: Survival Evolved? Not long after they began early access, those textures were removed. The update coming this week is going to retexture the game all the way up to 4K.
So wait... someone takes raw pictures, imports them into UE4, pastes them onto a 4,096x4,096 plane, runs the HighResShot command, imports that into Windows Movie Maker, adds an intro and outro, and everyone has a nerdgasm? FOR WHAT? There aren't any models here, are there? There's no 'rendering' involved, is there? What is everyone shooting loads for?
You really think this is just a picture? Seriously? Please open your eyes and watch the video again. It's not "static" the way you're thinking.
He pointed out the same things I did. Where is the line between showing you a photo and what we really call computer graphics? If you zoom a photo in an image viewer, is that computer graphics, since it's the computer doing the scaling? If you put the photo on a texture and move the corners of that texture around to change perspective, is that computer graphics? If you add a height map to that photo texture and move the viewport around, is that computer graphics? If it's not just a height map, but a few more objects the photo texture was placed on, is that finally computer graphics? The truth is, all of them are computer graphics, and at the same time, none of them are impressive. Except for the automated technology that converts a series of photos into geometry and height maps. But that's nothing new.

Using such techniques has advantages, just not for you as a customer, sorry... But studios can benefit greatly. Faster creation of game resources like static objects and environments saves time (money), while giving an impression of near photorealism (sales increase). And the downside has to be paid for, and it is paid by the customer: higher HW requirements. As the author of this thing mentioned, it is possible to render it in real time at somewhat acceptable fps at 1080p. And that's a small bit of geometry in a very limited space. Turn around 90° and there is nothing to be seen, and there is no game logic involved in such a project. The likelihood that there are dynamic lights and shadows on top of this is very small. (Because everyone who makes an environment with anything dynamic shows it off, as that is the real wow maker.)
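The "height map on a photo texture" step in that progression is easy to make concrete: each texel of the height map becomes the Z of a grid vertex, turning the flat photo into displaced geometry. A toy sketch, with names and the scale factor chosen by me for illustration:

```python
import numpy as np

def heightfield_to_vertices(height_map, scale=1.0):
    """Turn a 2D height map into a grid of 3D vertices.

    height_map: (H, W) float array, e.g. recovered by photogrammetry.
    Returns an (H, W, 3) array of [x, y, z] positions, where z is the
    scaled height. A renderer would then triangulate this grid and
    drape the photo texture over it.
    """
    h, w = height_map.shape
    ys, xs = np.mgrid[0:h, 0:w]
    return np.stack([xs, ys, height_map * scale], axis=-1).astype(float)

# A 2x2 height map: flat corner rising to a peak.
hm = np.array([[0.0, 0.5],
               [0.5, 1.0]])
verts = heightfield_to_vertices(hm, scale=2.0)
```

Once the photo is geometry like this, moving the viewport gives real parallax rather than a scaled flat image, which is the point where most people start calling it "computer graphics."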
It's pretty obvious that these are 3D models, due to perspective distortion. For example, in that last scene with the rock you can see more ground appearing from behind the sharp edges of the rock as the camera moves back.
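That parallax cue is just perspective projection at work: near geometry shifts more on screen than distant geometry as the camera moves, so ground "appears" from behind a nearby edge. A tiny pinhole-camera sketch (all coordinates hypothetical) shows the effect:

```python
def screen_x(x, z, cam_z, f=1.0):
    """Pinhole projection of a point at depth z, seen from a camera
    at z = cam_z looking down +z with focal length f."""
    return f * x / (z - cam_z)

rock_edge = (1.0, 2.0)   # near geometry (x, z)
ground    = (1.0, 10.0)  # far ground behind it

near0 = screen_x(*rock_edge, cam_z=0.0)
far0  = screen_x(*ground, cam_z=0.0)

# Move the camera back by 2 units, as in the video.
near1 = screen_x(*rock_edge, cam_z=-2.0)
far1  = screen_x(*ground, cam_z=-2.0)

# The near edge moves much more on screen than the far ground,
# so previously occluded ground becomes visible behind it.
```

A flat photograph being panned or zoomed cannot produce this depth-dependent shift; every pixel would move by the same amount.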
Fox2232, I understand your point, but technology is progressing all the time. Life-like graphics will require processing power and memory we can only dream of right now, but give it 10 years? What you're seeing is industry techniques incorporated into a games-oriented engine. While this is on a very small scale right now, others will see it and expand on it. Don't forget all the real-time demos that have been created using UE, each one a small step towards that goal as well. In 2016 we are just about starting to reach towards 100 GB games. Imagine where we'll be when we see 250 GB, 500 GB... 1 TB games! How does it make you feel that there'll be 1 TB games? We can only imagine what devs will achieve in the future.
I don't really see how it has no advantage for the consumer and only for the studio. The Vanishing of Ethan Carter uses photogrammetry, and it's the reason the game looks as good as it does. Are the scenes dynamically lit? No. But they don't need to be for that kind of game. The Kite demo showcases the same technique, but does actually have dynamically lit scenes. You lose some quality in the delighting process, but the overall technique is still graphically superior to the other options. It's like any other technique: a good artist will use it well, a terrible artist will use it poorly. I really don't see the issue.
So this is how they faked the moon landings, then. NASA had this tech back in the '60s and only now are we getting to see and use it.
No, this is a 3D scene rendered in real time; you can only move the camera around. Basically you "scan" the environment with photos, then generate the 3D models + textures based on them. This guy was working for DICE before and already showed something similar a year ago when he was working on SW: Battlefront. It's the use of photogrammetry for creating texture and 3D model assets for environments (similar to what was used in Battlefront). That said, you're not going to see the same quality in an actual 3D game for a while; the best example we have of it so far is SW: Battlefront. http://starwars.ea.com/starwars/battlefront/news/how-we-used-photogrammetry
If someone builds the environment polygon by polygon, builds the textures and then the materials in the engine, and finally lights it using Lightmass and renders out a Matinee sequence, fine. This is not that; it's just photographs inside the engine with forced perspective changes. To quote the artist from their own website: "...it is scanned..."
Uh, read what photogrammetry is. That's what the artist used. Yes, you scan images in, but you apply them to physical assets. They contain multiple maps that you need to properly blend and configure, and there are tools that assist in this process. And then you need to set up the lighting in the scene so that it matches the same profile as the scan, unless, as I said, you delight it to form an albedo. https://www.unrealengine.com/blog/creating-assets-for-open-world-demo
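The delighting step mentioned here is, at its crudest, dividing the captured texture by an estimated shading term so the baked-in lighting is removed and only the surface color (albedo) remains. A toy numpy sketch of that idea; real pipelines estimate shading far more carefully, and the function name and values are mine:

```python
import numpy as np

def delight(scan, shading, eps=1e-6):
    """Approximate an albedo by dividing out estimated shading.

    scan: captured texture, (H, W, 3) floats in [0, 1].
    shading: estimated lighting term per pixel, (H, W, 1) or (H, W, 3).
    eps guards against division by zero in fully shadowed texels.
    """
    return np.clip(scan / np.maximum(shading, eps), 0.0, 1.0)

# A 1x1 "scan" captured under 50% illumination.
scan = np.array([[[0.3, 0.2, 0.1]]])
shading = np.array([[[0.5]]])
albedo = delight(scan, shading)
```

With the lighting divided out, the asset can then be relit dynamically in-engine, which is exactly why delighting costs some quality: errors in the shading estimate end up baked into the albedo.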
I know what that is, as I followed the SW: Battlefront development under the Frostbite engine. It's still scanned and not a scene built from scratch. Probably took an afternoon to do at most.