Embedding Backdoors as the Facial Features: Invisible Backdoor Attacks Against Face Recognition Systems

Published: 26 October 2020

Abstract

Deep neural network (DNN) based face recognition systems have been widely deployed in identity authentication scenarios. However, recent studies show that DNN models are vulnerable to backdoor attacks: an attacker can embed a backdoor into a neural network by modifying its internal structure or by poisoning its training set, and can then log into the system as the victim while legitimate users' normal use of the system remains unaffected. The backdoor triggers used in existing attacks, however, are visually perceptible (e.g., black-framed glasses or purple sunglasses), which arouses human suspicion and thus leads to the failure of the attack. In this paper, we propose a novel backdoor attack method, BHF2 (Backdoor Hidden as Facial Features), in which the attacker embeds the backdoor as inherent facial features. The proposed method greatly enhances the concealment of the injected backdoor, making the attack much harder to discover. Moreover, BHF2 can be launched under black-box conditions, where the attacker is completely unaware of the internals of the target face recognition system, so it applies even to rigorous identity authentication scenarios where users are not allowed to wear any accessories. Experimental results show that BHF2 achieves a high attack success rate (up to 100%) on a state-of-the-art face recognition model, DeepID1, while the normal performance of the system is hardly affected (the recognition accuracy drops by as little as 0.01%).
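The poisoning-based embedding the abstract describes can be sketched in generic form. The following is an illustrative sketch of trigger-blending data poisoning, not the paper's actual BHF2 pipeline: the function name `poison_dataset`, the 5% poisoning rate, the 0.9/0.1 blending weights, and the mask-based facial-feature region are all assumptions made for illustration.

```python
import numpy as np

def poison_dataset(images, labels, trigger, mask, target_label, rate=0.05, rng=None):
    """Generic data-poisoning backdoor sketch (hypothetical, not the BHF2 method).

    A small fraction of training images is blended with a trigger pattern
    confined to a facial-feature region (given by a binary `mask`), and the
    labels of those images are flipped to the attacker's target identity.
    A model trained on the poisoned set learns to associate the trigger
    region with `target_label` while behaving normally on clean inputs.
    """
    rng = rng or np.random.default_rng(0)
    images, labels = images.copy(), labels.copy()
    n_poison = max(1, int(rate * len(images)))
    idx = rng.choice(len(images), size=n_poison, replace=False)
    for i in idx:
        # Blend the trigger only where the mask is set, keeping the change subtle.
        images[i] = np.where(mask > 0, 0.9 * images[i] + 0.1 * trigger, images[i])
        # Relabel the poisoned sample as the attacker's target identity.
        labels[i] = target_label
    return images, labels
```

Pixels outside the mask are left untouched, which is what keeps such a trigger visually inconspicuous compared with the wearable triggers (glasses, sunglasses) used in earlier attacks.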



        Published In

        ACM TURC '20: Proceedings of the ACM Turing Celebration Conference - China
        May 2020
        220 pages
        ISBN:9781450375344
        DOI:10.1145/3393527

        In-Cooperation

• Baidu Research

        Publisher

        Association for Computing Machinery

        New York, NY, United States


        Author Tags

        1. Artificial intelligence security
        2. backdoor attacks
        3. deep learning
        4. face recognition systems

        Qualifiers

        • Research-article
        • Research
        • Refereed limited

        Funding Sources

• National Natural Science Foundation of China

        Conference

        ACM TURC'20


        Cited By

        • (2024) Are Object Recognition Models Effective and Unbiased for Biometric Recognition? 2024 IEEE International Joint Conference on Biometrics (IJCB), pages 1-10. DOI: 10.1109/IJCB62174.2024.10744463. Online publication date: 15-Sep-2024
        • (2024) Backdoor Attacks Leveraging Latent Representation in Competitive Learning. Computer Security. ESORICS 2023 International Workshops, pages 700-718. DOI: 10.1007/978-3-031-54129-2_41. Online publication date: 12-Mar-2024
        • (2023) CASSOCK: Viable Backdoor Attacks against DNN in the Wall of Source-Specific Backdoor Defenses. Proceedings of the 2023 ACM Asia Conference on Computer and Communications Security, pages 938-950. DOI: 10.1145/3579856.3582829. Online publication date: 10-Jul-2023
        • (2023) A Novel Framework for Smart Cyber Defence: A Deep-Dive Into Deep Learning Attacks and Defences. IEEE Access, 11:88527-88548. DOI: 10.1109/ACCESS.2023.3306333. Online publication date: 2023
        • (2022) FaceHack: Attacking Facial Recognition Systems Using Malicious Facial Characteristics. IEEE Transactions on Biometrics, Behavior, and Identity Science, 4(3):361-372. DOI: 10.1109/TBIOM.2021.3132132. Online publication date: Jul-2022
        • (2021) Backdoors hidden in facial features: a novel invisible backdoor attack against face recognition systems. Peer-to-Peer Networking and Applications, 14(3):1458-1474. DOI: 10.1007/s12083-020-01031-z. Online publication date: 8-Jan-2021
