
Towards Understanding Perceptual Differences between Genuine and Face-Swapped Videos

Published: 07 May 2021
DOI: 10.1145/3411764.3445627

Abstract

In this paper, we report on perceptual experiments indicating that there are distinct and quantitatively measurable differences in the way we visually perceive genuine versus face-swapped videos.
Recent progress in deep learning has made face-swapping techniques a powerful tool for creative purposes, but also a means for unethical forgeries. Currently, it remains unclear why people are misled and which indicators they use to recognize potential manipulations. Here, we conduct three perceptual experiments focusing on four aspects: the conspicuousness of artifacts, viewing behavior measured with eye tracking, recognition accuracy for different video lengths, and the assessment of emotions.
Our experiments show that responses differ distinctly when participants watch manipulated as opposed to original faces, and from these differences we derive perceptual cues for recognizing face swaps. Our investigation of physiologically measurable signals yields insights that may also prove useful for advanced algorithmic detection.
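
To make the eye-tracking comparison described above concrete, the sketch below shows one way gaze data from such an experiment could be contrasted between conditions. It is a hypothetical illustration, not the paper's actual analysis pipeline: the simulated data, the "fixation proportion on the eye region" measure, and the choice of a paired t-test are all assumptions introduced here for clarity.

```python
# Hypothetical sketch (not the authors' pipeline): contrast fixation
# behavior between genuine and face-swapped videos. The data are
# simulated; in a real experiment each value would be one viewer's
# proportion of fixation time on a facial area of interest.
import numpy as np
from scipy import stats

rng = np.random.default_rng(seed=42)
n_viewers = 30

# Simulated per-viewer fixation proportions on the eye region,
# one value per condition for each viewer (within-subjects design).
genuine = np.clip(rng.normal(0.55, 0.08, n_viewers), 0, 1)
swapped = np.clip(rng.normal(0.48, 0.08, n_viewers), 0, 1)

# Paired t-test: every viewer saw both conditions, so pairing
# removes between-viewer variance from the comparison.
t_stat, p_value = stats.ttest_rel(genuine, swapped)

print(f"mean fixation on eyes, genuine: {genuine.mean():.3f}")
print(f"mean fixation on eyes, swapped: {swapped.mean():.3f}")
print(f"paired t({n_viewers - 1}) = {t_stat:.2f}, p = {p_value:.4f}")
```

A reliable shift in where and how long viewers fixate, established with a test like this, is the kind of "distinct and quantitatively measurable" difference the abstract refers to.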

Supplementary Material

Supplementary Materials (3411764.3445627_supplementalmaterials.zip)
MP4 File (3411764.3445627_videopreview.mp4)
Preview video



        Published In

        CHI '21: Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems
        May 2021
        10862 pages
        ISBN: 9781450380966
        DOI: 10.1145/3411764

        Publisher

        Association for Computing Machinery

        New York, NY, United States


        Author Tags

        1. eye tracking
        2. face swapping
        3. human perception
        4. video manipulation


        Upcoming Conference

        CHI 2025
        ACM CHI Conference on Human Factors in Computing Systems
        April 26 - May 1, 2025
        Yokohama , Japan

        Contributors

        Other Metrics

        Bibliometrics & Citations

        Bibliometrics

        Article Metrics

        • Downloads (Last 12 months)145
        • Downloads (Last 6 weeks)13
        Reflects downloads up to 03 Jan 2025

        Other Metrics

        Citations

        Cited By

        View all
        • (2024)Can deepfakes be used to study emotion perception? A comparison of dynamic face stimuliBehavior Research Methods10.3758/s13428-024-02443-y56:7(7674-7690)Online publication date: 4-Jun-2024
        • (2024)What we know and don’t know about deepfakes: An investigation into the state of the research and regulatory landscapeNew Media & Society10.1177/14614448241253138Online publication date: 22-May-2024
        • (2024)Can You Tell Real from Fake Face Images? Perception of Computer-Generated Faces by HumansACM Transactions on Applied Perception10.1145/369666722:2(1-23)Online publication date: 30-Nov-2024
        • (2024)Investigating the Perception of Facial Anonymization Techniques in 360° VideosACM Transactions on Applied Perception10.1145/369525421:4(1-17)Online publication date: 15-Nov-2024
        • (2024)Generative AI in User-Generated ContentExtended Abstracts of the CHI Conference on Human Factors in Computing Systems10.1145/3613905.3636315(1-7)Online publication date: 11-May-2024
        • (2024)Towards mitigating uncann(eye)ness in face swaps via gaze-centric loss termsComputers and Graphics10.1016/j.cag.2024.103888119:COnline publication date: 1-Apr-2024
        • (2023)Deepfakes: evolution and trendsSoft Computing - A Fusion of Foundations, Methodologies and Applications10.1007/s00500-023-08605-y27:16(11295-11318)Online publication date: 15-Jun-2023
        • (2022)Automatic Generation of Customized Areas of Interest and Evaluation of Observers' Gaze in Portrait VideosProceedings of the ACM on Human-Computer Interaction10.1145/35308856:ETRA(1-14)Online publication date: 13-May-2022
        • (2022)Zoom Obscura: Counterfunctional Design for Video-ConferencingProceedings of the 2022 CHI Conference on Human Factors in Computing Systems10.1145/3491102.3501973(1-17)Online publication date: 29-Apr-2022
        • (2021)Deepfake detection by human crowds, machines, and machine-informed crowdsProceedings of the National Academy of Sciences10.1073/pnas.2110013119119:1Online publication date: 28-Dec-2021

        View Options

        Login options

        View options

        PDF

        View or Download as a PDF file.

        PDF

        eReader

        View online with eReader.

        eReader

        HTML Format

        View this article in HTML Format.

        HTML Format

        Media

        Figures

        Other

        Tables

        Share

        Share

        Share this Publication link

        Share on social media