This document discusses interactive dynamic video (IDV), a technique that analyzes videos to predict how recorded objects will respond to new, unseen forces. IDV turns videos into interactive animations that users can manipulate with virtual forces. It works by extracting models from video frames in MATLAB, then exporting them to simulation software written in C++ for rendering. Potential applications include interactive animations, special effects, augmented/virtual reality games, and predicting structural responses in engineering.
3. Introduction
One of the most important ways that we experience our environment is by
manipulating it: we push, pull, poke, and prod.
By observing how objects respond to forces that we control, we learn about
their dynamics.
Videos contain enough information to predict how recorded objects will respond
to new, unseen forces.
Video makes it easy to capture the appearance of our surroundings, but offers
no way to physically interact with them.
4. What is an interactive dynamic video?
Unfortunately, video does not afford such manipulation: it limits us to
observing only the dynamics that we happened to record.
To create realistic simulations, the video is analyzed at different vibration
frequencies to extract the object's modes of motion, which are then used to
predict how the object will move in new situations.
IDV turns videos into interactive animations that users can explore with
virtual forces that they control.
6. Implementation details
Mode extraction and the mode-selection interface are written in MATLAB.
Once modes have been selected, they are exported as RGBA TIFF images and
loaded into the simulation software.
The simulation software is written in C++ and uses Qt and OpenGL.
7. Applications
Interactive animations
Special effects: a variety of visual effects can be achieved by specifying
forces in different ways.
It can also improve AR/VR games like Pokémon Go, allowing digital characters
to be dropped into real-world environments that respond to them.
In civil engineering, IDV can help predict how structures such as bridges will
react when unknown external forces act on them.
11. Conclusion
With minimal user input, we can extract a modal basis for image-space
deformations of an object from a video.
We can use the video basis to synthesize animations with physically plausible
dynamics.
The interactive animations we create bring a sense of physical responsiveness
to regular videos.