
Automatic design of a control interface for a synthetic face

Published: 08 February 2009

Abstract

Getting synthetic faces to display natural facial expressions is essential to enhance the interaction between human users and virtual characters. Yet traditional facial control techniques provide precise but complex sets of control parameters, which are ill-suited to non-expert users. In this article, we present a system that generates a simple, two-dimensional interface offering efficient control over the facial expressions of any synthetic character. The interface generation process relies on the analysis of the deformations of a real human face. The principal geometric and textural variation patterns of the real face are detected and automatically reorganized onto a low-dimensional space. This control space can then be easily adapted to drive the deformations of synthetic faces. The resulting control interface makes it easy to produce varied emotional facial expressions, both extreme and subtle. In addition, the continuous nature of the interface allows the production of coherent temporal sequences of facial animation.
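The abstract outlines the pipeline at a high level: track a real face, extract its dominant modes of deformation, flatten them into a 2-D control space, and map positions in that space back to facial parameters. As a rough illustration only, below is a minimal Python sketch of that idea using PCA over AAM-style appearance vectors. All names, array shapes, and the plain SVD-based projection are assumptions made for illustration, not the authors' actual method; in particular, the paper's reorganization of the space (the author tags below mention a traveling-salesman-style step) is omitted here.

import numpy as np

def build_control_space(samples):
    """samples: (n_frames, n_params) array of AAM-style appearance
    vectors (concatenated shape and texture coefficients) tracked on
    a real human face. Returns the mean vector, the two dominant
    variation axes, and the 2-D embedding of every frame."""
    mean = samples.mean(axis=0)
    centered = samples - mean
    # PCA via SVD: rows of vt are the principal deformation patterns.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    axes = vt[:2]                  # two dominant geometric/textural modes
    coords = centered @ axes.T     # 2-D interface position of each frame
    return mean, axes, coords

def control_to_face(xy, mean, axes):
    """Map a 2-D interface position back to a full parameter vector,
    which can then be retargeted to a synthetic face rig."""
    return mean + xy @ axes

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    tracked = rng.normal(size=(200, 40))   # stand-in for real tracking data
    mean, axes, coords = build_control_space(tracked)
    params = control_to_face(np.array([0.5, -1.0]), mean, axes)
    print(params.shape)                    # (40,) -> one full expression

Because the mapping from 2-D coordinates to parameters is continuous, dragging a cursor through the space yields smooth expression trajectories, consistent with the abstract's claim about coherent temporal sequences of animation.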




Published In

IUI '09: Proceedings of the 14th international conference on Intelligent user interfaces
February 2009
522 pages
ISBN:9781605581682
DOI:10.1145/1502650
Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from permissions@acm.org.


Publisher

Association for Computing Machinery

New York, NY, United States

Publication History

Published: 08 February 2009


Author Tags

  1. aam
  2. avatar
  3. facial animation
  4. traveling salesman problem
  5. user interface
  6. virtual character

Qualifiers

  • Research-article

Conference

IUI09
IUI09: 14th International Conference on Intelligent User Interfaces
February 8 - 11, 2009
Sanibel Island, Florida, USA

Acceptance Rates

Overall Acceptance Rate 746 of 2,811 submissions, 27%


Article Metrics

  • Downloads (last 12 months): 2
  • Downloads (last 6 weeks): 0

Reflects downloads up to 01 Jan 2025


Cited By

  • (2020) Intuitive facial animation editing based on a generative RNN framework. Proceedings of the ACM SIGGRAPH/Eurographics Symposium on Computer Animation, pp. 1-11. DOI: 10.1111/cgf.14117. Online publication date: 6-Oct-2020.
  • (2019) A video prediction approach for animating single face image. Multimedia Tools and Applications, 78(12), pp. 16389-16410. DOI: 10.1007/s11042-018-6952-y. Online publication date: 1-Jun-2019.
  • (2015) From a Discrete Perspective of Emotions to Continuous, Dynamic, and Multimodal Affect Sensing. In Emotion Recognition, pp. 461-491. DOI: 10.1002/9781118910566.ch18. Online publication date: 2-Jan-2015.
  • (2014) Integrating virtual agents in BCI neurofeedback systems. Proceedings of the 2014 Virtual Reality International Conference, pp. 1-8. DOI: 10.1145/2617841.2620713. Online publication date: 9-Apr-2014.
  • (2013) Bilinear decomposition for blended expressions representation. 2013 Visual Communications and Image Processing (VCIP), pp. 1-6. DOI: 10.1109/VCIP.2013.6706355. Online publication date: Nov-2013.
  • (2013) Invariant representation of facial expressions for blended expression recognition on unknown subjects. Computer Vision and Image Understanding, 117(11), pp. 1598-1609. DOI: 10.1016/j.cviu.2013.07.005. Online publication date: 1-Nov-2013.
  • (2012) A multimodal fuzzy inference system using a continuous facial expression representation for emotion detection. Proceedings of the 14th ACM international conference on Multimodal interaction, pp. 493-500. DOI: 10.1145/2388676.2388782. Online publication date: 22-Oct-2012.
  • (2012) A new invariant representation of facial expressions: Definition and application to blended expression recognition. 2012 19th IEEE International Conference on Image Processing, pp. 2617-2620. DOI: 10.1109/ICIP.2012.6467435. Online publication date: Sep-2012.
  • (2012) References. In Multimedia Information Extraction, pp. 425-460. DOI: 10.1002/9781118219546.refs. Online publication date: 24-Aug-2012.
  • (2011) Scalable multimodal fusion for continuous affect sensing. 2011 IEEE Workshop on Affective Computational Intelligence (WACI), pp. 1-8. DOI: 10.1109/WACI.2011.5953150. Online publication date: Apr-2011.
  • Show More Cited By
