Verónica Costa Orvalho speaks on advances in facial animation


10 Aug 2010

Orvalho's techniques speed the animation pipeline and allow reuse of rigs and animations.

Dr. Verónica Costa Orvalho, assistant professor of computer science at the University of Porto and founder of the Porto Interactive Center (PINC) and Face in Motion, presented “Facial Animation, Fast and Easy” at UT Austin’s Department of Electrical and Computer Engineering on August 10. Dr. Orvalho’s software, Fimmie, makes the process of animating facial expressions up to 99 percent faster than traditional methods, according to the Face in Motion site.

Before developing Fimmie, Dr. Orvalho spent 18 months asking film production studios where improved methodologies would help them most. Many of the studio employees she interviewed described traditional rigging as very time-consuming. Artists rig by identifying key points on the face: areas that govern a lot of movement, such as around the lips and at the corners of the eyes. These points are used to create a mesh, which works like a computer-generated cheesecloth, stretching or compressing to give resolution to 3-D characters.
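The relationship between key points and the mesh can be pictured with a short sketch. The Python below is purely illustrative and is not Fimmie's algorithm: it assumes a simple distance-based falloff, and the key-point names, positions, and offsets are invented for the example.

```python
# Illustrative only: key points whose movement deforms nearby mesh vertices,
# the way a stretched cheesecloth follows the pins that hold it.
from dataclasses import dataclass
import math

@dataclass
class KeyPoint:
    name: str        # e.g. "lip_corner_left" (hypothetical name)
    position: tuple  # rest position (x, y, z)

def influence(vertex, key_point, falloff=2.0):
    """Weight of a key point on a vertex, decaying with distance (assumed model)."""
    return 1.0 / (1.0 + math.dist(vertex, key_point.position)) ** falloff

def deform(vertices, key_points, offsets):
    """Move each mesh vertex by the weighted sum of key-point displacements."""
    deformed = []
    for v in vertices:
        dx = dy = dz = 0.0
        for kp in key_points:
            w = influence(v, kp)
            ox, oy, oz = offsets.get(kp.name, (0.0, 0.0, 0.0))
            dx, dy, dz = dx + w * ox, dy + w * oy, dz + w * oz
        deformed.append((v[0] + dx, v[1] + dy, v[2] + dz))
    return deformed

# Two key points around the mouth and a tiny patch of mesh vertices.
rig = [KeyPoint("lip_corner_left", (-1.0, 0.0, 0.0)),
       KeyPoint("lip_corner_right", (1.0, 0.0, 0.0))]
mesh = [(-0.5, 0.0, 0.0), (0.0, 0.0, 0.0), (0.5, 0.0, 0.0)]
smile = {"lip_corner_left": (0.0, 0.2, 0.0), "lip_corner_right": (0.0, 0.2, 0.0)}
print(deform(mesh, rig, smile))  # vertices pulled upward near the lip corners
```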

“Artists currently rig by hand,” Dr. Orvalho said, “and this causes bottlenecks in any computer graphics production. I asked the artists why rigging was so slow, but they could not really define it for me. I had to figure out the reasons on my own.” Dr. Orvalho determined that accurate geometric deformation was essential to automating facial animation. Through a series of mathematical algorithms, she “created a program that bridges the gap between modeling and animation,” she said. Fimmie allows artists to create several characters from one rig, and animation created for one character can be automatically applied to any other character that uses the same rig.
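The reuse described above can be illustrated with another small sketch. This is an assumption-laden illustration, not Fimmie's design: it represents each character with simple blend shapes keyed to shared rig-control names, so a clip defined on the rig's controls can drive any character built on that rig. All names and numbers are invented.

```python
# Illustrative only: an animation clip is stored as per-frame values of named rig
# controls, never as a particular character's vertices, so one clip drives every
# character that exposes the same controls.

# A "character" maps rig-control names to its own per-vertex displacements
# (applied in full when the control value is 1.0).
hero = {
    "lip_corner_left":  [(0.0, 0.10, 0.0), (0.0, 0.05, 0.0)],
    "lip_corner_right": [(0.0, 0.05, 0.0), (0.0, 0.10, 0.0)],
}
sidekick = {
    "lip_corner_left":  [(0.0, 0.20, 0.0), (0.0, 0.00, 0.0)],
    "lip_corner_right": [(0.0, 0.00, 0.0), (0.0, 0.20, 0.0)],
}

# One clip, defined only on the shared rig's controls (0.0 = rest, 1.0 = fully engaged).
smile_clip = [
    {"lip_corner_left": 0.0, "lip_corner_right": 0.0},
    {"lip_corner_left": 0.5, "lip_corner_right": 0.5},
    {"lip_corner_left": 1.0, "lip_corner_right": 1.0},
]

def apply_frame(character, frame):
    """Blend a character's per-control displacements by one frame's control values."""
    n_verts = len(next(iter(character.values())))
    out = [(0.0, 0.0, 0.0)] * n_verts
    for control, value in frame.items():
        out = [(x + value * sx, y + value * sy, z + value * sz)
               for (x, y, z), (sx, sy, sz) in zip(out, character[control])]
    return out

# The same clip animates both characters because they share the rig's control names.
hero_frames = [apply_frame(hero, f) for f in smile_clip]
sidekick_frames = [apply_frame(sidekick, f) for f in smile_clip]
```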

Together with her collaborators at UT Austin, Dr. J.K. Aggarwal of the Department of Electrical and Computer Engineering and Dr. Yan Zhang of the School of Information, Dr. Orvalho is the recipient of a €230,000 R&D award through CoLab and the Portuguese Foundation for Science and Technology. Their project, entitled “LIFEisGAME: Learning of Facial Emotions Using Serious Games,” will embed real-time facial analysis and synthesis into a video game for people with autism spectrum disorders (ASDs). According to Portugal’s Telecommunication Institute site, “The ability of socially- and emotionally-impaired individuals to recognize and respond to emotions conveyed by the face is critical to improving their communication skills. LIFEisGAME [will help people with ASDs] to recognize facial emotions using real-time synthesis and automatic facial-expression analysis.”

For more information: