research-article

An anatomically-constrained local deformation model for monocular face capture

Published: 11 July 2016

Abstract

We present a new anatomically-constrained local face model and fitting approach for tracking 3D faces from 2D motion data at very high quality. In contrast to traditional global face models, often built from a large set of blendshapes, we propose a local deformation model composed of many small subspaces spatially distributed over the face. Our local model offers far more flexibility and expressiveness than global blendshape models, even with a much smaller model size. This flexibility would typically come at the cost of reduced robustness, in particular during the under-constrained task of monocular reconstruction. However, a key contribution of this work is that we consider the face anatomy and introduce subspace skin-thickness constraints into our model, which constrain the face to valid expressions and help counteract depth ambiguities in monocular tracking. Given our new model, we present a novel fitting optimization that allows 3D facial performance reconstruction from a single view at extremely high quality, far beyond previous fitting approaches. Our model is flexible and can also be applied when only sparse motion data is available, for example with marker-based motion capture or even face posing from artistic sketches. Furthermore, by incorporating anatomical constraints we can automatically estimate the rigid motion of the skull, obtaining a rigid stabilization of the performance for free. We demonstrate our model and single-view fitting method on a number of examples, including, for the first time, extreme local skin deformation caused by external forces such as wind, captured from a single high-speed camera.
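
To make the ingredients above concrete, here is a minimal Python sketch, not the authors' implementation: a face represented as a few independent local patches, each with its own small linear deformation subspace, fit to 2D observations together with an anatomical skin-thickness penalty. The array shapes, patch layout, orthographic camera, and weights (w_anat, w_reg) are all illustrative assumptions.

# A minimal sketch (NumPy/SciPy), NOT the authors' implementation:
# local per-patch subspaces + a skin-thickness penalty, fit to 2D data
# with a standard least-squares solver. Everything below is a toy setup.

import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(0)

# Toy "face": 2 patches of 4 vertices each (assumed layout).
n_patches, verts_per_patch, k = 2, 4, 2                # k = subspace size per patch
rest = rng.normal(size=(n_patches, verts_per_patch, 3))            # rest-pose vertices
basis = 0.1 * rng.normal(size=(n_patches, k, verts_per_patch, 3))  # local deformation bases
bone = rest - np.array([0.0, 0.0, 0.3])                # hypothetical bone points under the skin
rest_thickness = np.linalg.norm(rest - bone, axis=-1)  # per-vertex skin thickness at rest

def deform(coeffs):
    """Evaluate each patch's local subspace: v = rest + sum_j c_j * basis_j."""
    c = coeffs.reshape(n_patches, k)
    return rest + np.einsum('pk,pkvd->pvd', c, basis)

def project(v):
    """Orthographic projection to 2D (a stand-in for a real camera model)."""
    return v[..., :2]

# Synthetic "observed" 2D motion data generated from ground-truth coefficients.
true_coeffs = 0.5 * rng.normal(size=n_patches * k)
obs_2d = project(deform(true_coeffs))

def residuals(coeffs, w_anat=1.0, w_reg=0.1):
    v = deform(coeffs)
    r_data = (project(v) - obs_2d).ravel()                  # 2D data term (monocular)
    thickness = np.linalg.norm(v - bone, axis=-1)
    r_anat = w_anat * (thickness - rest_thickness).ravel()  # skin-thickness constraint
    r_reg = w_reg * coeffs                                  # keep subspace coefficients small
    return np.concatenate([r_data, r_anat, r_reg])

fit = least_squares(residuals, x0=np.zeros(n_patches * k))
print("true coefficients:     ", true_coeffs.reshape(n_patches, k))
print("recovered coefficients:", fit.x.reshape(n_patches, k))

The anatomical residual supplies what the 2D data term lacks: projection alone cannot distinguish motion toward the camera from motion away from it, while tying each vertex's distance-to-bone to its rest value constrains the depth direction, which is the intuition behind the abstract's claim that skin-thickness constraints counteract depth ambiguities in monocular tracking. The actual model additionally estimates the rigid motion of the skull and operates on dense, production-quality meshes, none of which this toy attempts.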

Supplementary Material

ZIP File (a115-wu-supp.zip)
Supplemental files.

Published In

ACM Transactions on Graphics, Volume 35, Issue 4
July 2016
1396 pages
ISSN: 0730-0301
EISSN: 1557-7368
DOI: 10.1145/2897824

Publisher

Association for Computing Machinery
New York, NY, United States

Publication History

Published: 11 July 2016
Published in TOG Volume 35, Issue 4

Author Tags

1. anatomical constraints
2. facial performance capture
3. local face model
4. monocular face tracking

Cited By

• (2024) Learning Based Toolpath Planner on Diverse Graphs for 3D Printing. ACM Transactions on Graphics 43(6), 1-16. DOI: 10.1145/3687933. Online publication date: 19-Dec-2024.
• (2024) Learning a Generalized Physical Face Model From Data. ACM Transactions on Graphics 43(4), 1-14. DOI: 10.1145/3658189. Online publication date: 19-Jul-2024.
• (2024) Stylize My Wrinkles: Bridging the Gap from Simulation to Reality. Computer Graphics Forum. DOI: 10.1111/cgf.15048. Online publication date: 15-May-2024.
• (2024) 3D Face Tracking from 2D Video through Iterative Dense UV to Image Flow. 2024 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 1227-1237. DOI: 10.1109/CVPR52733.2024.00123. Online publication date: 16-Jun-2024.
• (2024) Robust facial marker tracking based on a synthetic analysis of optical flows and the YOLO network. The Visual Computer: International Journal of Computer Graphics 40(4), 2471-2489. DOI: 10.1007/s00371-023-02931-w. Online publication date: 1-Apr-2024.
• (2023) Towards Practical Capture of High-Fidelity Relightable Avatars. SIGGRAPH Asia 2023 Conference Papers, 1-11. DOI: 10.1145/3610548.3618138. Online publication date: 10-Dec-2023.
• (2023) A Temporal Coherent Topology Optimization Approach for Assembly Planning of Bespoke Frame Structures. ACM Transactions on Graphics 42(4), 1-13. DOI: 10.1145/3592102. Online publication date: 26-Jul-2023.
• (2023) A Perceptual Shape Loss for Monocular 3D Face Reconstruction. Computer Graphics Forum 42(7). DOI: 10.1111/cgf.14945. Online publication date: 6-Dec-2023.
• (2023) Semantically Disentangled Variational Autoencoder for Modeling 3D Facial Details. IEEE Transactions on Visualization and Computer Graphics 29(8), 3630-3641. DOI: 10.1109/TVCG.2022.3166666. Online publication date: 1-Aug-2023.
• (2023) Deep Detector and Optical Flow-based Tracking Approach of Facial Markers for Animation Capture. 2023 IEEE International Symposium on Mixed and Augmented Reality Adjunct (ISMAR-Adjunct), 625-630. DOI: 10.1109/ISMAR-Adjunct60411.2023.00134. Online publication date: 16-Oct-2023.
