
Accessorize to a Crime: Real and Stealthy Attacks on State-of-the-Art Face Recognition

Published: 24 October 2016

Abstract

Machine learning is enabling a myriad of innovations, including new algorithms for cancer diagnosis and self-driving cars. The broad use of machine learning makes it important to understand the extent to which machine-learning algorithms are subject to attack, particularly when they are used in applications where physical security or safety is at risk.
In this paper, we focus on facial biometric systems, which are widely used in surveillance and access control. We define and investigate a novel class of attacks: attacks that are physically realizable and inconspicuous, and allow an attacker to evade recognition or impersonate another individual. We develop a systematic method to automatically generate such attacks, which are realized through printing a pair of eyeglass frames. When worn by the attacker whose image is supplied to a state-of-the-art face-recognition algorithm, the eyeglasses allow her to evade being recognized or to impersonate another individual. Our investigation focuses on white-box face-recognition systems, but we also demonstrate how similar techniques can be used in black-box scenarios, as well as to avoid face detection.
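As a rough illustration of the white-box attack the abstract describes, the sketch below runs gradient descent over only the pixels covered by an eyeglass-frame mask, steering a differentiable face classifier toward a target identity (impersonation); dodging would instead maximize the loss on the attacker's true identity. This is a minimal sketch under stated assumptions, not the authors' released code: the model, mask construction, and hyperparameters are illustrative, and the paper's additional terms (e.g., for printability and smooth color transitions) are omitted.

import torch
import torch.nn.functional as F

def impersonate(model, face, frame_mask, target_id, steps=300, lr=0.01):
    # face: 1x3xHxW tensor in [0, 1]; frame_mask: 1x1xHxW binary mask that is 1
    # where the printed eyeglass frames would cover the face. Both inputs are
    # hypothetical; in practice the mask comes from aligning a frames template
    # to the attacker's face image.
    delta = torch.zeros_like(face, requires_grad=True)
    opt = torch.optim.Adam([delta], lr=lr)
    target = torch.tensor([target_id])
    for _ in range(steps):
        # Perturb only the eyeglass-frame region, keeping pixel values valid.
        adv = torch.clamp(face + delta * frame_mask, 0.0, 1.0)
        # Impersonation: minimize the classifier's loss on the target identity.
        loss = F.cross_entropy(model(adv), target)
        opt.zero_grad()
        loss.backward()
        opt.step()
    return torch.clamp(face + delta.detach() * frame_mask, 0.0, 1.0)

Per the abstract, the attack is then physically realized by printing the optimized frame region as a pair of eyeglass frames that the attacker wears.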



Published In

CCS '16: Proceedings of the 2016 ACM SIGSAC Conference on Computer and Communications Security
October 2016
1924 pages
ISBN: 9781450341394
DOI: 10.1145/2976749
Permission to make digital or hard copies of part or all of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for third-party components of this work must be honored. For all other uses, contact the Owner/Author.

Publisher

Association for Computing Machinery, New York, NY, United States

Publication History

Published: 24 October 2016

Author Tags

  1. adversarial machine learning
  2. face detection
  3. face recognition
  4. neural networks

Qualifiers

  • Research-article

Conference

CCS '16

Acceptance Rates

CCS '16 Paper Acceptance Rate: 137 of 831 submissions, 16%
Overall Acceptance Rate: 1,261 of 6,999 submissions, 18%


Contributors

Mahmood Sharif, Sruti Bhagavatula, Lujo Bauer, and Michael K. Reiter

Article Metrics

  • Downloads (Last 12 months): 2,663
  • Downloads (Last 6 weeks): 339

Reflects downloads up to 07 Jan 2025

Cited By

  • (2025) Improving Adversarial Robustness Against Universal Patch Attacks Through Feature Norm Suppressing. IEEE Transactions on Neural Networks and Learning Systems, 36(1):1410-1424. DOI: 10.1109/TNNLS.2023.3326871. Online: Jan 2025.
  • (2025) Physical Adversarial Patch Attack for Optical Fine-Grained Aircraft Recognition. IEEE Transactions on Information Forensics and Security, 20:436-448. DOI: 10.1109/TIFS.2024.3516577.
  • (2025) VIWHard: Text adversarial attacks based on important-word discriminator in the hard-label black-box setting. Neurocomputing, 616:128917. DOI: 10.1016/j.neucom.2024.128917. Online: Feb 2025.
  • (2025) Fooling human detectors via robust and visually natural adversarial patches. Neurocomputing, 616:128915. DOI: 10.1016/j.neucom.2024.128915. Online: Feb 2025.
  • (2024) Data Augmentation and Graph Regularization for Adversarial Training. Graph Theory - A Comprehensive Guide [Working Title]. DOI: 10.5772/intechopen.1006511. Online: 3 Oct 2024.
  • (2024) A first physical-world trajectory prediction attack via LiDAR-induced deceptions in autonomous driving. Proceedings of the 33rd USENIX Conference on Security Symposium, 6291-6308. DOI: 10.5555/3698900.3699252. Online: 14 Aug 2024.
  • (2024) RAUCA. Proceedings of the 41st International Conference on Machine Learning, 62076-62087. DOI: 10.5555/3692070.3694638. Online: 21 Jul 2024.
  • (2024) Robust universal adversarial perturbations. Proceedings of the 41st International Conference on Machine Learning, 55241-55266. DOI: 10.5555/3692070.3694347. Online: 21 Jul 2024.
  • (2024) Robust classification via a single diffusion model. Proceedings of the 41st International Conference on Machine Learning, 6643-6665. DOI: 10.5555/3692070.3692327. Online: 21 Jul 2024.
  • (2024) Crypto Travel and Its Future Implications. Exploring the World With Blockchain Through Cryptotravel, 317-358. DOI: 10.4018/979-8-3693-6562-5.ch018. Online: 3 Dec 2024.
