A smiley emoticon may be in order.
Researchers at Carnegie Mellon University are working with Disney Research to build 3D face models designed to give animators more intuitive control of facial expressions.
The data-driven approach to facial animation involves computerized face models that reflect a full range of natural expressions. The method translates the motions of actors into a three-dimensional face model, then subdivides that model into facial regions that enable animators to create the poses they need.
Whereas previous data-driven approaches have produced models that capture motion across the face as a whole, the researchers say this method will allow animators to alter one part of an expression, a cocked eye, for instance, and have that change propagate to the rest of the character's face.
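The region-based idea can be sketched in a few lines. The sketch below is purely illustrative and is not the researchers' actual model: the region names, weights, and correlation matrix are invented stand-ins, where a data-driven system would learn such correlations from captured actor performances. It shows how editing one region's control weight can propagate proportionally to correlated regions, so a local edit still yields a coherent whole-face pose.

```python
import numpy as np

# Hypothetical regional blendshape controls (names are illustrative).
REGIONS = ["brow_left", "brow_right", "eyelids", "mouth"]

# Illustrative correlations between regions (identity on the diagonal).
# In a data-driven model, these would be learned from motion-capture data.
CORRELATION = np.array([
    [1.0, 0.6, 0.3, 0.1],
    [0.6, 1.0, 0.3, 0.1],
    [0.3, 0.3, 1.0, 0.2],
    [0.1, 0.1, 0.2, 1.0],
])

def edit_region(weights, region, new_value):
    """Set one region's weight and propagate the change to correlated
    regions, so a local edit adjusts the rest of the face."""
    i = REGIONS.index(region)
    delta = new_value - weights[i]
    # Each region moves in proportion to its correlation with the edited one.
    return weights + delta * CORRELATION[i]

# Raising the left brow also nudges the correlated regions.
pose = edit_region(np.zeros(4), "brow_left", 1.0)
```

Here the edited region reaches its target value exactly, while the other regions shift by an amount scaled by their (assumed) correlation with it.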
A visual demonstration of the new approach will be presented today at SIGGRAPH 2011, the International Conference on Computer Graphics and Interactive Techniques, currently underway in Vancouver.