Much as optical flow describes the dense motion field of objects within the image plane, scene flow describes the three-dimensional motion of objects in the world. This has applications in a range of areas, including scene understanding, tracking, segmentation, navigation, and video compression. Examples of estimated scene flow fields can be seen below, where the motion of each point proceeds from the cyan vertex to the white vertex.
I have developed a new algorithm for the estimation of scene flow, which is faster, more generally applicable, and does not suffer from the regularisation artefacts of previous approaches.
The majority of scene flow estimation techniques are based on the optimisation of an energy function. Regularisation terms in this energy function allow results to be smoothed across untextured regions, where the motion is ambiguous. Unfortunately, this regularisation also tends to remove fine details from the motion field and damages results around discontinuities.
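To see why regularisation damages discontinuities, consider a minimal toy sketch (not the energy from any particular paper; the names `energy`, `lambda_reg` and the quadratic terms are illustrative assumptions). A 1D "flow field" is scored by a data term plus a neighbour-smoothness term; with strong regularisation, a blurred solution scores lower than the true step edge:

```python
import numpy as np

def energy(flow, observed_flow, lambda_reg=1.0):
    """Toy energy: data term pulls towards per-pixel observations; the
    regulariser penalises differences between neighbouring pixels,
    smoothing untextured regions but also blurring real motion edges."""
    data_term = np.sum((flow - observed_flow) ** 2)
    smoothness = np.sum(np.diff(flow) ** 2)
    return data_term + lambda_reg * smoothness

# A genuine motion discontinuity (a step edge) in the observations.
observed = np.array([0.0, 0.0, 0.0, 1.0, 1.0, 1.0])
sharp = observed.copy()            # the true, discontinuous motion
smoothed = np.full(6, 0.5)         # an over-regularised, blurred solution

# With heavy regularisation the blurred field wins the optimisation,
# even though it matches the data worse.
print(energy(sharp, observed, lambda_reg=10.0))     # 0 data + 10 smoothness
print(energy(smoothed, observed, lambda_reg=10.0))  # 1.5 data + 0 smoothness
```

The optimiser therefore prefers the smoothed field whenever the regularisation weight is large, which is exactly the artefact described above.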
The Scene Particle algorithm instead takes a probabilistic approach. Every point is examined in isolation, and no smoothness is enforced. For more information see the papers (PAMI 2014, ICCV 2011) or chapters 4 and 5 of my thesis.
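The per-point idea can be sketched roughly as follows. This is a hedged illustration, not the published method: the observation model, the Gaussian weighting, and the names (`weight_hypotheses`, `sigma`) are all assumptions made for the example. Each ray maintains its own set of motion hypotheses, weights them by how well they explain the data, and forms an estimate, with no smoothness coupling between neighbouring rays:

```python
import numpy as np

def weight_hypotheses(match_errors, sigma=0.05):
    """Convert per-hypothesis matching errors into normalised Gaussian
    likelihood weights. Each ray is weighted independently."""
    w = np.exp(-0.5 * (match_errors / sigma) ** 2)
    return w / w.sum()

rng = np.random.default_rng(0)

# One ray, viewed in isolation: sample 3D motion hypotheses around the
# (unknown) true motion, score each, and take the weighted mean.
true_motion = np.array([0.02, -0.01, 0.05])              # toy value, m/frame
hypotheses = true_motion + rng.normal(0.0, 0.02, size=(200, 3))
match_errors = np.linalg.norm(hypotheses - true_motion, axis=1)  # stand-in for an image matching cost
weights = weight_hypotheses(match_errors)
estimate = (weights[:, None] * hypotheses).sum(axis=0)
```

Because no term couples one ray's estimate to its neighbours', fine detail and motion discontinuities are preserved rather than smoothed away.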
The input .oni files for the sequences shown above are provided here and here. Higher quality encodings of the results videos are available here and here.
The current implementation is entirely sequential, despite the independence of the particles. Binaries for 64-bit Linux are available here.
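Since the particles never interact, the estimation is embarrassingly parallel. A minimal sketch of the idea (the per-ray worker `estimate_ray` is hypothetical, not code from the released binaries): running the same independent per-ray computation sequentially or across a thread pool gives identical results.

```python
from concurrent.futures import ThreadPoolExecutor
import numpy as np

def estimate_ray(seed):
    """Hypothetical per-ray worker: sample motion hypotheses and return
    the best one. Each ray is self-contained, so order does not matter."""
    rng = np.random.default_rng(seed)          # per-ray RNG keeps results deterministic
    hypotheses = rng.normal(0.0, 0.02, size=(100, 3))
    errors = np.linalg.norm(hypotheses, axis=1)  # stand-in matching cost
    return hypotheses[np.argmin(errors)]

rays = range(8)
sequential = [estimate_ray(r) for r in rays]
with ThreadPoolExecutor(max_workers=4) as pool:
    parallel = list(pool.map(estimate_ray, rays))
# The two result sets are identical, because rays never share state.
```

In practice a GPU or multi-process implementation would be the natural fit for per-pixel independence; the thread pool here just demonstrates that no synchronisation is needed.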
ICCV Video Poster
For ICCV 2011, a short video was created explaining the basis of the algorithm, to be displayed on screens around the conference centre. The video can be viewed below, as well as on the ICCV website (poster 1-35).
Interactive Kinect Demo
An interactive demo system, running at 2 frames per second, was created for display at ICCV 2011. However, no power socket was provided at the poster stand, so the demo could not be shown. A video of the system in use is provided below.
To run at near real time, the old sequential implementation reduced the number of motion hypotheses per ray to 5. This significantly reduces the accuracy of the estimation, but demonstrates the versatility of the algorithm in trading speed against accuracy.
Multi-View and Object Tracking
At ICIAR 2012, a paper was presented using a multi-viewpoint extension of the Scene Particles algorithm, which simultaneously estimates structure and motion to track objects in 3D. Example videos are shown below, estimating the motion of the hands and head in multi-view sign language scenarios. The first video demonstrates performance on a two-view, narrow-baseline system with a cluttered background; the second on a three-view, wide-baseline setup.
It is important to note that trajectories are estimated in 3D, and then projected to each viewpoint for display, rather than estimating independent 2D trajectories for each view.
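The projection step can be sketched with a standard pinhole camera model, x = K[R|t]X. This is a generic illustration, not the calibration used in the paper; the intrinsics `K`, pose `R`, `t`, and the example trajectory are made-up values:

```python
import numpy as np

def project(K, R, t, X):
    """Project 3D world points X (N,3) into pixel coordinates (N,2)
    using a pinhole camera with intrinsics K and pose (R, t)."""
    cam = (R @ X.T).T + t            # world -> camera coordinates
    uvw = (K @ cam.T).T              # camera -> homogeneous image coords
    return uvw[:, :2] / uvw[:, 2:3]  # perspective divide

# Hypothetical camera: 500 px focal length, principal point (320, 240).
K = np.array([[500.0, 0.0, 320.0],
              [0.0, 500.0, 240.0],
              [0.0,   0.0,   1.0]])
R, t = np.eye(3), np.zeros(3)

# One 3D trajectory (two time steps, moving along +x at 2 m depth),
# projected into this view for display.
trajectory = np.array([[0.0, 0.0, 2.0],
                       [0.1, 0.0, 2.0]])
pixels = project(K, R, t, trajectory)  # one 2D track per viewpoint
```

Repeating the same projection with each camera's (K, R, t) yields a consistent 2D track per view from the single underlying 3D trajectory, which is what distinguishes this from estimating independent 2D trajectories.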