
High-Fidelity Face Reenactment Via Identity-Matched Correspondence Learning

Published: 25 February 2023

Abstract

Face reenactment aims to generate an animation of a source face using the poses and expressions of a target face. Although recent methods have made remarkable progress by exploiting generative adversarial networks, they are limited in generating high-fidelity, identity-preserving results because of inappropriate driving information and ineffective animation strategies. In this work, we propose a novel face reenactment framework that achieves both high-fidelity generation and identity preservation. Instead of sparse face representations (e.g., facial landmarks and keypoints), we utilize the Projected Normalized Coordinate Code (PNCC) to better preserve facial details. We propose to reconstruct the PNCC from the source identity parameters and the target pose and expression parameters estimated by 3D face reconstruction, thereby factoring out the target identity. By adopting this reconstructed representation as the driving information, we address the problem of identity mismatch. To use the driving information effectively, we establish the correspondence between the reconstructed representation and the source representation based on features extracted by an encoder network. This identity-matched correspondence is then used to animate the source face via a novel feature transformation strategy. The generator network is further enhanced by the proposed geometry-aware skip connection. Once trained, our model can be applied to previously unseen faces without further training or fine-tuning. Through extensive experiments, we demonstrate the effectiveness of our method in face reenactment and show that our model outperforms state-of-the-art approaches both qualitatively and quantitatively. Additionally, the proposed PNCC reconstruction module can be easily inserted into other methods and improves their performance in cross-identity face reenactment.
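The parameter recombination at the heart of the abstract (source identity coefficients combined with target pose and expression coefficients in a 3D morphable model, whose mean-shape coordinates then serve as the normalized colors that a rasterizer would project into a PNCC image) can be sketched as follows. This is a minimal toy illustration, not the paper's implementation: the tiny random bases `B_id`/`B_exp`, the vertex count, and the parameter dimensions are all hypothetical stand-ins for a fitted 3DMM, and the actual PNCC requires rasterizing the posed mesh with these per-vertex colors.

```python
import numpy as np

# Hypothetical toy 3DMM: vertices = mean + B_id @ alpha + B_exp @ beta,
# followed by a rigid pose (R, t). Real bases would come from a fitted
# morphable model; these random ones only illustrate the shapes involved.
rng = np.random.default_rng(0)
n_verts = 5
mean_shape = rng.normal(size=(3 * n_verts,))
B_id = rng.normal(size=(3 * n_verts, 4))    # identity basis (toy)
B_exp = rng.normal(size=(3 * n_verts, 3))   # expression basis (toy)

def morphable_vertices(alpha_id, beta_exp, R, t):
    """Posed 3DMM vertices for given identity/expression/pose parameters."""
    shape = mean_shape + B_id @ alpha_id + B_exp @ beta_exp
    return shape.reshape(n_verts, 3) @ R.T + t

# Parameters as a 3D-reconstruction network might estimate them.
alpha_src = rng.normal(size=4)            # identity from the SOURCE face
beta_tgt = rng.normal(size=3)             # expression from the TARGET face
R_tgt, t_tgt = np.eye(3), np.zeros(3)     # pose from the TARGET face

# Identity-matched driving geometry: source identity, target pose/expression.
driving = morphable_vertices(alpha_src, beta_tgt, R_tgt, t_tgt)

# Normalized Coordinate Code: mean-shape coordinates scaled to [0, 1].
# Rasterizing `driving` with `ncc` as per-vertex colors yields the PNCC map.
ncc = mean_shape.reshape(n_verts, 3)
ncc = (ncc - ncc.min(axis=0)) / (ncc.max(axis=0) - ncc.min(axis=0))
```

Because the driving geometry never sees the target's identity coefficients, the rendered PNCC carries the target's motion on the source's face shape, which is exactly the identity-mismatch fix the abstract describes.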




    Published In

    ACM Transactions on Multimedia Computing, Communications, and Applications, Volume 19, Issue 3 (May 2023), 514 pages
    ISSN: 1551-6857; EISSN: 1551-6865
    DOI: 10.1145/3582886
    Editor: Abdulmotaleb El Saddik

    Publisher

    Association for Computing Machinery, New York, NY, United States

    Publication History

    Published: 25 February 2023
    Online AM: 23 November 2022
    Accepted: 06 November 2022
    Revised: 02 October 2022
    Received: 24 June 2022
    Published in TOMM Volume 19, Issue 3


    Author Tags

    1. Face reenactment
    2. 3D face reconstruction
    3. feature transformation
    4. generative adversarial networks

    Qualifiers

    • Research-article

    Article Metrics

    • Downloads (last 12 months): 191
    • Downloads (last 6 weeks): 18

    Reflects downloads up to 11 January 2025.

    Cited By

    • (2025) Talking-head video generation with long short-term contextual semantics. Applied Intelligence 55, 2. DOI: 10.1007/s10489-024-06010-y. Online publication date: 1 January 2025.
    • (2025) Towards a Fine-Grained Threat Model for Video-Based Remote Identity Proofing. Machine Learning and Principles and Practice of Knowledge Discovery in Databases, 531–536. DOI: 10.1007/978-3-031-74627-7_45. Online publication date: 1 January 2025.
    • (2024) Head3D: Complete 3D Head Generation via Tri-plane Feature Distillation. ACM Transactions on Multimedia Computing, Communications, and Applications 20, 6, 1–20. DOI: 10.1145/3635717. Online publication date: 8 March 2024.
    • (2024) CorrTalk: Correlation Between Hierarchical Speech and Facial Activity Variances for 3D Animation. IEEE Transactions on Circuits and Systems for Video Technology 34, 9, 8953–8965. DOI: 10.1109/TCSVT.2024.3386836. Online publication date: 1 September 2024.
    • (2024) DialogueNeRF: towards realistic avatar face-to-face conversation video generation. Visual Intelligence 2, 1. DOI: 10.1007/s44267-024-00057-8. Online publication date: 7 August 2024.
    • (2023) LipFormer: Learning to Lipread Unseen Speakers Based on Visual-Landmark Transformers. IEEE Transactions on Circuits and Systems for Video Technology 33, 9, 4507–4517. DOI: 10.1109/TCSVT.2023.3282224. Online publication date: 1 September 2023.
    • (2023) DeepFake on Face and Expression Swap: A Review. IEEE Access 11, 117865–117906. DOI: 10.1109/ACCESS.2023.3324403. Online publication date: 2023.