One of the most important ways that we experience our environment is by manipulating it: we push, pull, poke, and prod to test hypotheses about our surroundings. By observing how objects respond to forces that we control, we learn about their dynamics. Unfortunately, regular video does not afford this type of manipulation; it limits us to observing what was recorded. The goal of our work is to record objects in a way that captures not only their appearance, but their physical behavior as well.
This work is based primarily on the paper "Image-Space Modal Bases for Plausible Manipulation of Objects in Video" by Abe Davis, Justin G. Chen, and Frédo Durand. For more information about this and other related publications, see the Publications page. For videos about this and related work, see the Videos page.
This website and the linked videos were made by Abe Davis. This work is part of his PhD dissertation at MIT. It is academic research; there is no commercial product at this time, though the technology is patented through MIT. You may contact Abe Davis, Justin G. Chen, or Neal Wadhwa (or all three in one email) about licensing through MIT.
UPDATE: Abe will be starting a postdoctoral position at Stanford University in the fall. Contact info will be updated accordingly.
Follow @abedavis on Twitter