Abstract
A method for extracting facial motion parameters is proposed. The method consists of three steps. First, feature points of the face, selected automatically in the first frame, are tracked through successive frames. Next, the feature points are connected by Delaunay triangulation so that the motion of each point relative to its surrounding points can be computed. Finally, muscle motions are estimated from the motions of the feature points located near each muscle. Experiments showed that the proposed method extracts facial motion parameters accurately. In addition, the extracted parameters are used to render a facial animation sequence.
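The second step can be illustrated with a minimal sketch. The function below takes feature points tracked across two frames together with a neighbour graph (in practice obtained from the Delaunay triangulation; here a small hand-coded adjacency) and computes each point's displacement relative to the mean displacement of its neighbours, which separates local deformation from global head motion. The point coordinates, neighbour lists, and function name are illustrative assumptions, not data or code from the paper.

```python
# Sketch of the relative-motion step: subtract the mean displacement of a
# point's Delaunay neighbours from its own displacement. All values below
# are illustrative assumptions, not data from the paper.

def relative_motion(prev_pts, curr_pts, neighbours):
    """For each point i, return its displacement minus the mean
    displacement of its neighbours, isolating local deformation."""
    rel = []
    for i, nbrs in enumerate(neighbours):
        dx = curr_pts[i][0] - prev_pts[i][0]
        dy = curr_pts[i][1] - prev_pts[i][1]
        mx = sum(curr_pts[j][0] - prev_pts[j][0] for j in nbrs) / len(nbrs)
        my = sum(curr_pts[j][1] - prev_pts[j][1] for j in nbrs) / len(nbrs)
        rel.append((dx - mx, dy - my))
    return rel

# Four feature points; points 0-2 translate rigidly (as under global head
# motion) while point 3 (e.g. a mouth-corner point) moves additionally.
prev = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0), (1.0, 1.0)]
curr = [(0.5, 0.0), (1.5, 0.0), (0.5, 1.0), (1.5, 1.5)]
nbrs = [[1, 2, 3], [0, 2, 3], [0, 1, 3], [0, 1, 2]]  # triangulation adjacency

rel = relative_motion(prev, curr, nbrs)
print(rel)  # point 3's relative motion stands out from the rigid points
```

In this toy example the rigid points show only a small residual (leaked from point 3 through the shared neighbour graph), while point 3 retains its full non-rigid vertical displacement, which is the signal the muscle-motion estimation in step three would use.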
© 1999 Springer-Verlag Berlin Heidelberg
Cite this paper
Otsuka, T., Ohya, J. (1999). Extracting Facial Motion Parameters by Tracking Feature Points. In: Nishio, S., Kishino, F. (eds) Advanced Multimedia Content Processing. AMCP 1998. Lecture Notes in Computer Science, vol 1554. Springer, Berlin, Heidelberg. https://doi.org/10.1007/3-540-48962-2_30
Print ISBN: 978-3-540-65762-0
Online ISBN: 978-3-540-48962-7