Motion capture (MoCap) techniques provide a convenient way to capture human skeletal motion
in the real world and allow the motion to be retargeted to a virtual character.
MoCap has been developed extensively for decades and used successfully in both research and industry.
However, skeletal MoCap captures neither surface movement, e.g. the wrinkling of clothing and the motion of hair,
nor surface appearance, e.g. costume and facial expression.
These surface movements and appearance are critical to realism and are hard to reproduce afterwards.
4D Performance Capture (4DPC) has recently been introduced to capture the shape,
appearance and motion of the human body from multi-view video.
The outcome of 4DPC is a sequence of reconstructed 3D meshes
with detailed surface dynamics and video-quality textures.
4DPC meshes are temporally consistent, i.e. the number of vertices and their connectivity remain unchanged,
making them compatible with conventional animation pipelines.
To achieve temporal consistency across multiple sequences,
global temporal non-rigid surface registration/alignment is introduced.
- A global non-rigid surface alignment framework is introduced to minimise the
deformation required to register all meshes from different input sequences into
a temporally consistent structure. Each input sequence is first temporally
aligned independently; the sequences are then aligned to each other through frames with similar 3D shape.
[Huang et al. 2011]
- Further, global non-sequential alignment constructs
a Minimum Spanning Tree (MST) over all unaligned frames
based on their 3D shape similarity and performs global alignment across the tree.
[Budd et al. 2013]
[Budd et al. 2011]
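The tree construction in the non-sequential approach above can be sketched as follows. This is a minimal illustration, not the authors' implementation: it assumes each frame is summarised by a fixed-length shape descriptor vector (the descriptor choice and the Euclidean dissimilarity are assumptions here), builds an MST over the complete frame graph with Prim's algorithm, and returns parent-child edges so that each frame is aligned to its most shape-similar neighbour in the tree rather than to its temporal predecessor.

```python
import math


def shape_distance(a, b):
    # Euclidean distance between per-frame shape descriptors.
    # The descriptor (e.g. a volumetric shape histogram) is an
    # assumption for illustration; any 3D shape dissimilarity works.
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))


def alignment_tree(descriptors):
    """Build an MST over frames with Prim's algorithm.

    `descriptors` maps frame id -> shape descriptor vector.
    Returns a list of (parent, child) edges: each child frame is
    registered to its parent, so alignment propagates along
    shape-similar pairs instead of strict temporal order.
    """
    frames = list(descriptors)
    in_tree = {frames[0]}  # arbitrary root frame
    edges = []
    while len(in_tree) < len(frames):
        # Cheapest edge connecting the current tree to a new frame.
        parent, child = min(
            ((p, c) for p in in_tree for c in frames if c not in in_tree),
            key=lambda e: shape_distance(descriptors[e[0]], descriptors[e[1]]),
        )
        edges.append((parent, child))
        in_tree.add(child)
    return edges
```

In practice the pairwise non-rigid registration error, not a raw descriptor distance, would weight the edges; the MST then minimises the total deformation accumulated along alignment paths.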