My current research projects are listed below. Click the project logo for more information.
Reflexive robotics using asynchronous perception (Principal investigator)
ROSSINI: Reconstructing 3D structure from single images: a perceptual reconstruction approach (Co-investigator)
PeRsOnalized nutriTion for hEalthy livINg (Co-investigator)
PhD studentships: various industrial partners, with topics ranging from next-generation sensing to autonomous vehicles, robotics and deep learning.
Previous Research Projects
My previous research projects are listed below. Click the project logo for more information.
SMILE: Scalable Multimodal sign language Technology for sIgn language Learning and assessmEnt (Co-investigator)
Older Research Interests
Below is an archive of older research activities from my time as a PhD student and research fellow. For more up-to-date information, see the projects listed above, my CV and my publications list.
Much as optical flow describes the dense motion field of objects within the image plane, scene flow describes the three-dimensional motion of objects in the world. This has interesting applications in a range of areas, including scene understanding, tracking, segmentation, navigation and video compression.
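The relationship between the two can be made concrete: under a pinhole camera model, a 3D scene-flow displacement induces a 2D optical-flow vector via the difference of projections. A minimal sketch, assuming illustrative intrinsics (the focal lengths and principal point below are placeholder values, not from any particular camera):

```python
import numpy as np

# Hypothetical pinhole intrinsics (illustrative values only).
fx, fy, cx, cy = 525.0, 525.0, 320.0, 240.0

def project(P):
    """Project a 3D point (X, Y, Z) in camera coordinates to pixels (u, v)."""
    X, Y, Z = P
    return np.array([fx * X / Z + cx, fy * Y / Z + cy])

# A 3D point and its scene-flow displacement over one frame.
P = np.array([0.5, -0.2, 2.0])            # metres, camera coordinates
scene_flow = np.array([0.02, 0.0, -0.1])  # metres per frame

# The induced optical flow is the difference of the two projections.
optical_flow = project(P + scene_flow) - project(P)
```

Note that a purely depth-wise motion (non-zero only in Z) still produces non-zero optical flow, which is one reason 2D flow alone is ambiguous about 3D motion.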
I have worked to develop a new algorithm for the estimation of scene flow, which is faster, more generally applicable, and less susceptible to smoothing artefacts than previous approaches. For more information, see here.
I have also investigated the use of machine learning techniques to create more robust cost functions for motion estimation. These exhibit much more desirable properties than standard techniques based on brightness constancy. More details and interactive supplementary results here.
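To illustrate why brightness constancy is fragile, compare a standard quadratic residual penalty with a robust alternative; the Charbonnier function below is a common choice in the optical-flow literature and stands in here only as an example, not as the learned cost from the work above:

```python
import numpy as np

# Brightness constancy assumes a pixel keeps its intensity as it moves:
# I1(x) ~ I2(x + u). The residual r = I2(x + u) - I1(x) is penalised.
# A quadratic penalty lets outliers (occlusions, lighting changes)
# dominate the cost; a robust penalty grows sub-quadratically.

def quadratic_cost(r):
    return r ** 2

def charbonnier_cost(r, eps=1e-3):
    # Smooth approximation to |r|; eps keeps it differentiable at zero.
    return np.sqrt(r ** 2 + eps ** 2)

residuals = np.array([0.01, 0.1, 5.0])  # the last value is an outlier
print(quadratic_cost(residuals))        # the outlier contributes 25.0
print(charbonnier_cost(residuals))      # the outlier contributes ~5.0
```

Learned cost functions go further, fitting the penalty's shape to the statistics of real image sequences rather than assuming a fixed functional form.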
I have also worked on the exploitation of 3D information within natural action recognition, including the compilation of a new dataset called Hollywood 3D, and the release of extensive baseline results and code.
Further, I have exploited the fast scene flow described above to perform 3D motion segmentation and tracking of hands during multi-view sign language sequences (see the bottom of this page).
I developed a system to classify hand pose, and from it created a demonstration Paper, Scissors, Stone game, shown in the video below.
The pose estimation system is extremely flexible, and has also been used to estimate the three angles of head pose.
The system made use of a stereo camera (a PointGrey Bumblebee2) providing both appearance and depth data.
Thanks to Brian Holt for the image capture code, and UEA for the animated avatar.