DOI: 10.1145/1964921.1964972

Realtime performance-based facial animation

Published: 25 July 2011

Abstract

This paper presents a system for performance-based character animation that enables any user to control the facial expressions of a digital avatar in realtime. The user is recorded in a natural environment using a non-intrusive, commercially available 3D sensor. The simplicity of this acquisition device comes at the cost of high noise levels in the acquired data. To effectively map low-quality 2D images and 3D depth maps to realistic facial expressions, we introduce a novel face tracking algorithm that combines geometry and texture registration with pre-recorded animation priors in a single optimization. Formulated as a maximum a posteriori estimation in a reduced parameter space, our method implicitly exploits temporal coherence to stabilize the tracking. We demonstrate that compelling 3D facial dynamics can be reconstructed in realtime without the use of face markers, intrusive lighting, or complex scanning hardware. This makes our system easy to deploy and facilitates a range of new applications, e.g. in digital gameplay or social interactions.
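
As a rough illustration of the optimization described above, the sketch below casts per-frame tracking as a maximum a posteriori estimate over blendshape weights: a geometry term fits the reconstructed mesh to the observed depth data, a Gaussian prior (standing in for the pre-recorded animation priors) regularizes the weights, and a temporal term ties the solution to the previous frame. This is a minimal sketch under assumed data layouts, not the authors' implementation; the texture-registration term is omitted, and all names and synthetic data (neutral, deltas, neg_log_posterior, ...) are hypothetical.

    # Minimal MAP-style blendshape tracking sketch (illustrative only).
    import numpy as np
    from scipy.optimize import minimize

    rng = np.random.default_rng(0)
    n_vertices, n_blendshapes = 500, 20

    # Hypothetical linear blendshape model: neutral mesh plus weighted deltas.
    neutral = rng.normal(size=(n_vertices, 3))
    deltas = rng.normal(size=(n_blendshapes, n_vertices, 3))

    # Hypothetical Gaussian animation prior over blendshape weights
    # (stand-in for priors learned from pre-recorded sequences).
    mu_prior = np.full(n_blendshapes, 0.1)
    prec_prior = np.eye(n_blendshapes)

    # Synthetic "sensor" frame: a known expression plus noise, in place of a depth map.
    true_w = np.zeros(n_blendshapes)
    true_w[0] = 0.8
    observed = (neutral + np.tensordot(true_w, deltas, axes=1)
                + 0.01 * rng.normal(size=(n_vertices, 3)))

    def reconstruct(w):
        # Blendshape reconstruction: neutral + sum_i w_i * delta_i.
        return neutral + np.tensordot(w, deltas, axes=1)

    def neg_log_posterior(w, w_prev, lam_prior=0.5, lam_temp=0.1):
        geom = np.sum((reconstruct(w) - observed) ** 2)       # geometry registration
        prior = (w - mu_prior) @ prec_prior @ (w - mu_prior)  # animation prior
        temporal = np.sum((w - w_prev) ** 2)                  # temporal coherence
        return geom + lam_prior * prior + lam_temp * temporal

    w_prev = np.zeros(n_blendshapes)
    result = minimize(neg_log_posterior, w_prev, args=(w_prev,),
                      method="L-BFGS-B", bounds=[(0.0, 1.0)] * n_blendshapes)
    print("estimated blendshape weights:", np.round(result.x, 2))

In a running system such a minimization would be repeated for every incoming frame, warm-started from the previous frame's estimate; working in a reduced parameter space of a few dozen blendshape weights rather than thousands of vertices is what keeps this kind of solve feasible in realtime.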

Supplementary Material

Supplemental material. (a77-weise.zip)
MP4 File (tp077_11.mp4)


Information

Published In

SIGGRAPH '11: ACM SIGGRAPH 2011 papers
August 2011
869 pages
ISBN:9781450309431
DOI:10.1145/1964921
Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected]

Publisher

Association for Computing Machinery

New York, NY, United States

Publication History

Published: 25 July 2011

Author Tags

  1. blendshape animation
  2. face animation
  3. markerless performance capture
  4. real-time tracking

Qualifiers

  • Research-article

Conference

SIGGRAPH '11

Acceptance Rates

SIGGRAPH '11 Paper Acceptance Rate: 82 of 432 submissions, 19%
Overall Acceptance Rate: 1,822 of 8,601 submissions, 21%

Article Metrics

  • Downloads (Last 12 months)19
  • Downloads (Last 6 weeks)4
Reflects downloads up to 25 Dec 2024

Cited By

  • (2023) CodeTalker: Speech-Driven 3D Facial Animation with Discrete Motion Prior. 2023 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 12780-12790. DOI: 10.1109/CVPR52729.2023.01229. Online publication date: Jun-2023.
  • (2022) Fast 3D Face Reconstruction from a Single Image Using Different Deep Learning Approaches for Facial Palsy Patients. Bioengineering 9(11), 619. DOI: 10.3390/bioengineering9110619. Online publication date: 27-Oct-2022.
  • (2022) An Objective Pain Measurement Machine Learning Model through Facial Expressions and Physiological Signals. 2022 28th International Conference on Mechatronics and Machine Vision in Practice (M2VIP), 1-4. DOI: 10.1109/M2VIP55626.2022.10041105. Online publication date: 16-Nov-2022.
  • (2022) A Survey of Facial Capture for Virtual Reality. IEEE Access 10, 6042-6052. DOI: 10.1109/ACCESS.2021.3138200. Online publication date: 2022.
  • (2022) Robust 3D face modeling and tracking from RGB-D images. Multimedia Systems 28(5), 1657-1666. DOI: 10.1007/s00530-022-00925-7. Online publication date: 26-Apr-2022.
  • (2021) Non-isomorphic Interaction Techniques for Controlling Avatar Facial Expressions in VR. Proceedings of the 27th ACM Symposium on Virtual Reality Software and Technology, 1-10. DOI: 10.1145/3489849.3489867. Online publication date: 8-Dec-2021.
  • (2021) Real-time 3D neural facial animation from binocular video. ACM Transactions on Graphics 40(4), 1-17. DOI: 10.1145/3450626.3459806. Online publication date: 19-Jul-2021.
  • (2021) Normalized Avatar Synthesis Using StyleGAN and Perceptual Refinement. 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 11657-11667. DOI: 10.1109/CVPR46437.2021.01149. Online publication date: Jun-2021.
  • (2020) Real-Time Cleaning and Refinement of Facial Animation Signals. Proceedings of the 4th International Conference on Graphics and Signal Processing, 70-75. DOI: 10.1145/3406971.3406985. Online publication date: 26-Jun-2020.
  • (2020) Facial Movement Interface for Mobile Devices Using Depth-sensing Camera. 2020 12th International Conference on Knowledge and Smart Technology (KST), 115-120. DOI: 10.1109/KST48564.2020.9059497. Online publication date: Jan-2020.
  • Show More Cited By
