DOI: 10.1145/3474085.3475338
Research article

Adversarial Pixel Masking: A Defense against Physical Attacks for Pre-trained Object Detectors

Published: 17 October 2021

Abstract

Object detection based on pre-trained deep neural networks (DNNs) has achieved impressive performance and enabled many applications. However, DNN-based object detectors have been shown to be vulnerable to physical adversarial attacks. Although recent efforts have been made to defend against these attacks, existing defenses either rely on strong assumptions or become less effective when applied to pre-trained object detectors. In this paper, we propose adversarial pixel masking (APM), a defense against physical attacks designed specifically for pre-trained object detectors. APM requires no assumption beyond the "patch-like" nature of a physical attack and works with pre-trained object detectors of different architectures and weights, making it a practical solution for many applications. We conduct extensive experiments, and the empirical results show that APM significantly improves model robustness without noticeably degrading clean performance.
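The abstract describes the mechanism only at a high level: a learned masking network locates patch-like adversarial regions and suppresses those pixels before the image reaches the frozen, pre-trained detector. The snippet below is a minimal sketch of that idea, not the authors' implementation; the encoder-decoder architecture, the `MaskNet` and `masked_detect` names (only "masknet" appears in the author tags), and all hyperparameters are assumptions.

```python
# Illustrative pixel-masking defense in PyTorch; everything here is an
# assumption about the approach, not code from the paper.
import torch
import torch.nn as nn

class MaskNet(nn.Module):
    """Small encoder-decoder that predicts a per-pixel mask in [0, 1]."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(32, 16, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(16, 1, 4, stride=2, padding=1),
            nn.Sigmoid(),  # values near 0 suppress a pixel, near 1 keep it
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

def masked_detect(detector, masknet, images):
    """Mask suspected adversarial pixels, then run the unchanged detector."""
    mask = masknet(images)    # (N, 1, H, W); broadcasts across RGB channels
    cleaned = images * mask   # zero out regions flagged as patch-like
    with torch.no_grad():     # the pre-trained detector stays frozen
        return detector(cleaned)
```

In such a setup only the masking network would be trained, presumably against patch attacks in an adversarial-training loop (the author tags below list "adversarial training"), which is consistent with the claim that the defense plugs into detectors of different architectures and weights.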

Supplementary Material

MP4 File (MM21-fp0975.mp4)
Presentation video

Published In

MM '21: Proceedings of the 29th ACM International Conference on Multimedia
October 2021
5796 pages
ISBN: 9781450386517
DOI: 10.1145/3474085

Publisher

Association for Computing Machinery
New York, NY, United States

Author Tags

1. adversarial examples
2. adversarial patches
3. adversarial training
4. attack
5. defense
6. distribution shift
7. masknet
8. object detection

Conference

MM '21: ACM Multimedia Conference
October 20-24, 2021
Virtual Event, China

Acceptance Rates

Overall Acceptance Rate: 2,145 of 8,556 submissions, 25%

