The Data-Driven Facial Modeling and Animation project explores facial models built directly from motion capture data, with the goals of realism and automation. First, a range of typical facial motion data is captured. This data is then modeled with machine learning techniques. Once a model is available, its output and parameters can be explored to extrapolate or synthesize novel motions. While this general data-driven approach is a recent theme across several research groups, IMSC already has some unique results, as shown below.
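As a rough illustration of the capture-model-synthesize pipeline described above, the sketch below fits a linear (PCA) model to a set of facial marker frames and then decodes a novel frame from perturbed model parameters. This is only a minimal sketch under stated assumptions: the project's actual model is not specified here, the data is a synthetic stand-in for captured markers, and PCA is just one common choice for modeling face motion data.

```python
import numpy as np

# Hypothetical sketch: stand-in for captured facial motion data.
rng = np.random.default_rng(0)
n_frames, n_markers = 200, 30                        # capture frames, 3D markers
frames = rng.normal(size=(n_frames, n_markers * 3))  # flattened (x, y, z) per frame

# 1) Model the captured motions with a linear model (PCA via SVD).
mean = frames.mean(axis=0)
U, S, Vt = np.linalg.svd(frames - mean, full_matrices=False)
k = 5                                # keep a few dominant motion modes
basis = Vt[:k]                       # (k, n_markers * 3)
coeffs = (frames - mean) @ basis.T   # per-frame model parameters

# 2) Synthesize a novel frame: perturb the model parameters within the
#    range observed in the data, then decode back to marker positions.
lo, hi = coeffs.min(axis=0), coeffs.max(axis=0)
novel_coeffs = rng.uniform(lo, hi)
novel_frame = mean + novel_coeffs @ basis  # one synthesized marker configuration
```

Exploring the coefficient space this way is what lets the model extrapolate beyond the exact motions that were captured.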
Tech Summary
NSF Report (Year 7)
NSF Report (Year 8)
Poster
More Info
Laboratory