https://www.youtube.com/watch?v=ZKsbBRVgciU
In this video I showcase an interesting image processing technique for creating AI-assisted, computer-generated 2.5D animations from single high-definition images taken with a modified drone with multispectral imaging capability (in Visible, Near-Infrared and Near-Ultraviolet). The result is an immersive 3D zoom-and-scan experience.
The so-called "Ken Burns" effect allows animating still images with a virtual camera scan and zoom. Adding parallax, which results in the 3D Ken Burns effect, enables one to simulate depth in certain regions of the image where there is a clear distinction between objects in the foreground and background. Creating such effects manually is time-consuming and demands sophisticated editing skills.
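For contrast with the 3D variant described below, here is a minimal sketch of the plain 2D Ken Burns effect: a crop window interpolated between a start and an end rectangle, resized to the output resolution for each frame. The file name, crop endpoints and frame count are assumptions for illustration.

```python
import cv2
import numpy as np

def ken_burns_2d(image, start_rect, end_rect, num_frames, out_size):
    """Interpolate a crop window from start_rect to end_rect (x, y, w, h)
    and resize each crop to out_size, yielding the video frames."""
    frames = []
    for t in np.linspace(0.0, 1.0, num_frames):
        x, y, w, h = [(1 - t) * s + t * e for s, e in zip(start_rect, end_rect)]
        crop = image[int(y):int(y + h), int(x):int(x + w)]
        frames.append(cv2.resize(crop, out_size, interpolation=cv2.INTER_LINEAR))
    return frames

image = cv2.imread("drone_photo.png")                 # assumed input image
frames = ken_burns_2d(image,
                      start_rect=(0, 0, 1920, 1080),  # full frame
                      end_rect=(480, 270, 960, 540),  # zoom into the centre
                      num_frames=90, out_size=(1280, 720))
```

Without a depth map, every pixel moves together, which is exactly why the result looks flat compared with the parallax version.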
Machine learning methods allow one to synthesize the 3D Ken Burns effect from a single image: a semantic-aware neural network predicts depth, a segmentation-based adjustment process corrects it, and a refinement neural network sharpens the depth predictions at object boundaries.
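The method in the video trains its own semantic-aware depth network with boundary refinement; as a hedged stand-in, the sketch below runs the publicly available MiDaS monocular depth model (via PyTorch Hub) to obtain a per-pixel depth estimate from a single image. The input file name is an assumption.

```python
import cv2
import torch

# Load the small MiDaS model and its matching input transforms.
midas = torch.hub.load("intel-isl/MiDaS", "MiDaS_small")
midas.eval()
transforms = torch.hub.load("intel-isl/MiDaS", "transforms")

img = cv2.cvtColor(cv2.imread("drone_photo.png"), cv2.COLOR_BGR2RGB)
batch = transforms.small_transform(img)        # resize + normalise for the model

with torch.no_grad():
    prediction = midas(batch)
    # Upsample the prediction back to the original image resolution.
    depth = torch.nn.functional.interpolate(
        prediction.unsqueeze(1), size=img.shape[:2],
        mode="bicubic", align_corners=False).squeeze().numpy()
# Note: MiDaS outputs relative inverse depth, not metric depth.
```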
Using the depth prediction, the method converts the input image into a point cloud and synthesizes the resulting video frames by rendering the point cloud from the corresponding virtual camera positions.
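A minimal sketch of that unproject/reproject step, assuming a simple pinhole camera model: each pixel is lifted to a 3D point using the depth map, the camera is translated, and the points are projected back into a new frame. The focal length and camera offset are assumptions, and the real pipeline renders the point cloud far more carefully.

```python
import numpy as np

def render_from_offset(image, depth, focal, offset):
    """Render `image` (H, W, 3) with per-pixel `depth` (H, W) from a camera
    translated by `offset` = (tx, ty, tz), using a pinhole projection."""
    h, w = depth.shape
    cx, cy = w / 2.0, h / 2.0
    u, v = np.meshgrid(np.arange(w), np.arange(h))

    # Unproject: pixel (u, v) with depth z -> 3D point (x, y, z).
    z = depth
    x = (u - cx) * z / focal
    y = (v - cy) * z / focal

    # Translate the camera and reproject; clamp to avoid division by zero.
    tx, ty, tz = offset
    zn = np.maximum(z - tz, 1e-6)
    un = (focal * (x - tx) / zn + cx).astype(int)
    vn = (focal * (y - ty) / zn + cy).astype(int)

    # Painter's algorithm: draw far points first so near points overwrite them.
    out = np.zeros_like(image)
    order = np.argsort(-z.ravel())
    uu, vv = un.ravel()[order], vn.ravel()[order]
    valid = (uu >= 0) & (uu < w) & (vv >= 0) & (vv < h)
    out[vv[valid], uu[valid]] = image.reshape(-1, image.shape[-1])[order][valid]
    return out
```

Moving the camera sideways or forward exposes gaps behind foreground objects, which is where the inpainting step below comes in.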
Color and depth inpainting can be used to fill in the information missing from the extreme views along the camera path, thus extending the scene geometry of the point cloud.
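The original work trains a dedicated colour-and-depth inpainting network for this; as a simple stand-in, this sketch fills the disoccluded holes left by the point-cloud render with OpenCV's classical Telea inpainting. The file names are assumptions, and holes are assumed to be the pixels the render never touched.

```python
import cv2
import numpy as np

rendered = cv2.imread("rendered_view.png")               # assumed render output
# Mark untouched (all-zero) pixels as the region to inpaint.
mask = (np.all(rendered == 0, axis=2).astype(np.uint8)) * 255
filled = cv2.inpaint(rendered, mask, inpaintRadius=3, flags=cv2.INPAINT_TELEA)
cv2.imwrite("rendered_view_filled.png", filled)
```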
Experiments with a wide variety of drone image content, in visible, near-infrared and near-ultraviolet wavelengths, show that the technique enables realistic immersion in the scene.
Video made by MuonRay
Music Used
Afterlife City (Royalty Free Music) [CC-BY]
by MachinimaSound
Creative Commons Attribution 4.0 International (CC BY 4.0)
2 Comments
Surreal!
Thank You! That's what I was going for!