
Oriole: Thwarting Privacy Against Trustworthy Deep Learning Models

  • Conference paper

Information Security and Privacy (ACISP 2021)

Part of the book series: Lecture Notes in Computer Science (LNSC, volume 13083)

Abstract

Deep neural networks have achieved unprecedented success in face recognition, to the point that anyone can crawl other people's images from the Internet without their explicit permission and train high-precision face recognition models on them, a serious violation of privacy. Recently, a well-known system named Fawkes [37] (published at USENIX Security 2020) claimed that this privacy threat can be neutralized by uploading cloaked user images instead of the originals. In this paper, we present Oriole, a system that combines the advantages of data poisoning attacks and evasion attacks to thwart the protection offered by Fawkes, by training the attacker's face recognition model on multi-cloaked images generated by Oriole. As a result, the face recognition accuracy of the attack model is maintained and the weaknesses of Fawkes are revealed. Experimental results show that the proposed Oriole system effectively interferes with the performance of the Fawkes system and achieves promising attack results. Our ablation study highlights multiple principal factors that affect the performance of the Oriole system, including the DSSIM perturbation budget, the ratio of leaked clean user images, and the number of multi-cloaks per uncloaked image. We also identify and discuss at length the vulnerabilities of Fawkes. We hope that the methodology presented in this paper will alert the security community to the need for more robust privacy-preserving deep learning models.
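
To make the quantities named above concrete, the minimal sketch below shows one way an attacker could assemble the kind of multi-cloaked training set the abstract describes: every clean face image receives several cloaked variants whose distortion stays within a DSSIM budget (DSSIM is commonly defined from structural similarity as (1 - SSIM)/2 [44]), and a small ratio of leaked clean images is mixed in. The cloak generator, the whole-image SSIM approximation, and all parameter values are illustrative assumptions for exposition only, not the authors' implementation.

    # Illustrative sketch only: builds a multi-cloaked attacker training set
    # under a DSSIM budget. The cloak generator below is a hypothetical
    # stand-in, not the Oriole/Fawkes algorithm.
    import numpy as np

    def global_ssim(a: np.ndarray, b: np.ndarray) -> float:
        # Crude whole-image SSIM approximation to keep the sketch
        # self-contained; a real pipeline would use windowed (MS-)SSIM [44].
        c1, c2 = (0.01 * 255) ** 2, (0.03 * 255) ** 2
        mu_a, mu_b = a.mean(), b.mean()
        cov = ((a - mu_a) * (b - mu_b)).mean()
        return ((2 * mu_a * mu_b + c1) * (2 * cov + c2)) / (
            (mu_a ** 2 + mu_b ** 2 + c1) * (a.var() + b.var() + c2))

    def dssim(a, b):
        # Structural dissimilarity, used as the perturbation budget.
        return (1.0 - global_ssim(a, b)) / 2.0

    def generate_cloak(image, rng):
        # Hypothetical stand-in for a targeted cloaking perturbation.
        return np.clip(image + rng.normal(0.0, 4.0, image.shape), 0, 255)

    def shrink_to_budget(image, cloaked, budget):
        # Halve the perturbation until the DSSIM budget is respected.
        pert = cloaked - image
        for _ in range(20):
            if dssim(image, image + pert) <= budget:
                break
            pert *= 0.5
        return image + pert

    def build_attack_set(clean_images, n_cloaks=4, budget=0.007, leak_ratio=0.1):
        # Several cloaked variants per face, plus a fraction of leaked clean images.
        rng = np.random.default_rng(0)
        training = []
        for img in clean_images:
            img = img.astype(float)
            for _ in range(n_cloaks):
                cloaked = generate_cloak(img, rng)
                training.append(shrink_to_budget(img, cloaked, budget))
        n_leaked = int(leak_ratio * len(clean_images))
        training.extend(img.astype(float) for img in clean_images[:n_leaked])
        return training

Under these assumptions, the DSSIM budget, the leak ratio, and the number of multi-cloaks per image are exactly the knobs the ablation study examines.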


Notes

  1. http://mirror.cs.uchicago.edu/fawkes/files/target_data/.

  2. Our source code is publicly available at https://git.io/JsWq7.

References

  1. Akbari, R., Mozaffari, S.: Performance enhancement of PCA-based face recognition system via gender classification method. In: 2010 6th Iranian Conference on Machine Vision and Image Processing, pp. 1–6. IEEE (2010)

  2. Aware Nexa—Face™. https://aware.com/biometrics/nexa-facial-recognition/

  3. Bashbaghi, S., Granger, E., Sabourin, R., Parchami, M.: Deep Learning architectures for face recognition in video surveillance. In: Jiang, X., Hadid, A., Pang, Y., Granger, E., Feng, X. (eds.) Deep Learning in Object Detection and Recognition, pp. 133–154. Springer, Singapore (2019). https://doi.org/10.1007/978-981-10-5152-4_6

  4. Brendel, W., Rauber, J., Bethge, M.: Decision-based adversarial attacks: reliable attacks against black-box machine learning models. arXiv preprint arXiv:1712.04248 (2017)

  5. Cao, Q., Shen, L., Xie, W., Parkhi, O.M., Zisserman, A.: VGGFace2: a dataset for recognising faces across pose and age. In: 2018 13th IEEE International Conference on Automatic Face & Gesture Recognition (FG 2018), pp. 67–74. IEEE (2018)

  6. Carlini, N., Wagner, D.: Adversarial examples are not easily detected: bypassing ten detection methods. In: Proceedings of the 10th ACM Workshop on Artificial Intelligence and Security, pp. 3–14 (2017)

  7. Carlini, N., Wagner, D.: Towards evaluating the robustness of neural networks. In: 2017 IEEE Symposium on Security and Privacy (SP), pp. 39–57. IEEE (2017)

  8. Chen, P., Sharma, Y., Zhang, H., Yi, J., Hsieh, C.: EAD: elastic-net attacks to deep neural networks via adversarial examples. In: McIlraith, S.A., Weinberger, K.Q. (eds.) Proceedings of the Thirty-Second AAAI Conference on Artificial Intelligence, (AAAI-18), the 30th innovative Applications of Artificial Intelligence (IAAI-18), and the 8th AAAI Symposium on Educational Advances in Artificial Intelligence (EAAI-18), New Orleans, Louisiana, USA, 2–7 February 2018, pp. 10–17. AAAI Press (2018). https://www.aaai.org/ocs/index.php/AAAI/AAAI18/paper/view/16893

  9. Chen, P.Y., Zhang, H., Sharma, Y., Yi, J., Hsieh, C.J.: ZOO: zeroth order optimization based black-box attacks to deep neural networks without training substitute models. In: Proceedings of the 10th ACM Workshop on Artificial Intelligence and Security, pp. 15–26 (2017)

  10. Engstrom, L., Tran, B., Tsipras, D., Schmidt, L., Madry, A.: Exploring the landscape of spatial robustness. In: International Conference on Machine Learning, pp. 1802–1811. PMLR (2019)

  11. Face++ Face Searching API. https://faceplusplus.com/face-searching/

  12. Ford, N., Gilmer, J., Carlini, N., Cubuk, E.D.: Adversarial examples are a natural consequence of test error in noise. CoRR abs/1901.10513 (2019). http://arxiv.org/abs/1901.10513

  13. Gao, Y., Xu, C., Wang, D., Chen, S., Ranasinghe, D.C., Nepal, S.: Strip: a defence against trojan attacks on deep neural networks. In: Proceedings of the 35th Annual Computer Security Applications Conference, pp. 113–125 (2019)

  14. Goodfellow, I.J., Shlens, J., Szegedy, C.: Explaining and harnessing adversarial examples. arXiv preprint arXiv:1412.6572 (2014)

  15. Google Cloud Vision AI. https://cloud.google.com/vision/

  16. Grosse, K., Manoharan, P., Papernot, N., Backes, M., McDaniel, P.: On the (statistical) detection of adversarial examples. arXiv preprint arXiv:1702.06280 (2017)

  17. Hendrycks, D., Dietterich, T.G.: Benchmarking neural network robustness to common corruptions and surface variations. arXiv preprint arXiv:1807.01697 (2018)

  18. Hill, K.: This tool could protect your photos from facial recognition (2020). https://www.forbes.com/sites/nicolemartin1/2019/09/25/the-major-concerns-around-facial-recognition-technology/?sh=3fe203174fe3

  19. Hosseini, H., Chen, Y., Kannan, S., Zhang, B., Poovendran, R.: Blocking transferability of adversarial examples in black-box learning systems. arXiv preprint arXiv:1703.04318 (2017)

  20. Huang, G., Liu, Z., Van Der Maaten, L., Weinberger, K.Q.: Densely connected convolutional networks. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 4700–4708 (2017)

  21. Ilyas, A., Engstrom, L., Athalye, A., Lin, J.: Black-box adversarial attacks with limited queries and information. In: International Conference on Machine Learning, pp. 2137–2146. PMLR (2018)

  22. Jagielski, M., Oprea, A., Biggio, B., Liu, C., Nita-Rotaru, C., Li, B.: Manipulating machine learning: Poisoning attacks and countermeasures for regression learning. In: 2018 IEEE Symposium on Security and Privacy (SP), pp. 19–35. IEEE (2018)

  23. Li, B., Vorobeychik, Y.: Evasion-robust classification on binary domains. ACM Trans. Knowl. Discov. Data (TKDD) 12(4), 1–32 (2018)

  24. Li, S., et al.: Hidden backdoors in human-centric language models. In: ACM Conference on Computer and Communications Security (CCS) (2021)

  25. Li, S., Ma, S., Xue, M., Zhao, B.Z.H.: Deep learning backdoors. arXiv preprint arXiv:2007.08273 (2020)

  26. Li, S., Xue, M., Zhao, B., Zhu, H., Zhang, X.: Invisible backdoor attacks on deep neural networks via steganography and regularization. IEEE Trans. Dependable Secure Comput. (2020)

  27. Liao, F., Liang, M., Dong, Y., Pang, T., Hu, X., Zhu, J.: Defense against adversarial attacks using high-level representation guided denoiser. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 1778–1787 (2018)

  28. Madry, A., Makelov, A., Schmidt, L., Tsipras, D., Vladu, A.: Towards deep learning models resistant to adversarial attacks. arXiv preprint arXiv:1706.06083 (2017)

  29. Mei, S., Zhu, X.: Using machine teaching to identify optimal training-set attacks on machine learners. In: Bonet, B., Koenig, S. (eds.) Proceedings of the Twenty-Ninth AAAI Conference on Artificial Intelligence, 25–30 January 2015, Austin, Texas, USA, pp. 2871–2877. AAAI Press (2015). http://www.aaai.org/ocs/index.php/AAAI/AAAI15/paper/view/9472

  30. Meng, D., Chen, H.: MagNet: a two-pronged defense against adversarial examples. In: Proceedings of the 2017 ACM SIGSAC Conference on Computer and Communications Security, pp. 135–147 (2017)

  31. NEC Face Recognition API. https://nec.com/en/global/solutions/biometrics/face/

  32. Pinto, N., Stone, Z., Zickler, T., Cox, D.: Scaling up biologically-inspired computer vision: a case study in unconstrained face recognition on Facebook. In: CVPR 2011 Workshops, pp. 35–42. IEEE (2011)

  33. Rasti, P., Uiboupin, T., Escalera, S., Anbarjafari, G.: Convolutional neural network super resolution for face recognition in surveillance monitoring. In: Perales, F.J.J., Kittler, J. (eds.) AMDO 2016. LNCS, vol. 9756, pp. 175–184. Springer, Cham (2016). https://doi.org/10.1007/978-3-319-41778-3_18

  34. Research, G.V.: Facial recognition market size, share & trends analysis report by technology (2D, 3D), by application (emotion recognition, attendance tracking & monitoring), by end-use, and segment forecasts, 2020–2027. https://www.grandviewresearch.com/checkout/select-license/facial-recognition-market

  35. Schroff, F., Kalenichenko, D., Philbin, J.: FaceNet: a unified embedding for face recognition and clustering. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 815–823 (2015)

  36. Shah, R., et al.: Evaluating evasion attack methods on binary network traffic classifiers. In: Proceedings of the Conference on Information Systems Applied Research ISSN, vol. 2167, p. 1508 (2019)

  37. Shan, S., Wenger, E., Zhang, J., Li, H., Zheng, H., Zhao, B.Y.: Fawkes: protecting privacy against unauthorized deep learning models. In: 29th USENIX Security Symposium (USENIX Security 20), pp. 1589–1604 (2020)

  38. Suciu, O., Marginean, R., Kaya, Y., Daume III, H., Dumitras, T.: When does machine learning FAIL? Generalized transferability for evasion and poisoning attacks. In: 27th USENIX Security Symposium (USENIX Security 18), pp. 1299–1316 (2018)

  39. Szegedy, C., Ioffe, S., Vanhoucke, V., Alemi, A.: Inception-v4, inception-ResNet and the impact of residual connections on learning. In: Proceedings of the AAAI Conference on Artificial Intelligence, vol. 31 (2017)

  40. Tong, L., Li, B., Hajaj, C., Xiao, C., Zhang, N., Vorobeychik, Y.: Improving robustness of ML classifiers against realizable evasion attacks using conserved features. In: 28th USENIX Security Symposium (USENIX Security 19), pp. 285–302 (2019)

  41. Vazquez-Fernandez, E., Gonzalez-Jimenez, D.: Face recognition for authentication on mobile devices. Image Vis. Comput. 55, 31–33 (2016)

  42. Wang, B., Yao, Y., Viswanath, B., Zheng, H., Zhao, B.Y.: With great training comes great vulnerability: practical attacks against transfer learning. In: 27th USENIX Security Symposium (USENIX Security 18), pp. 1281–1297 (2018)

  43. Wang, H., Pang, G., Shen, C., Ma, C.: Unsupervised representation learning by predicting random distances. arXiv preprint arXiv:1912.12186 (2019)

  44. Wang, Z., Simoncelli, E.P., Bovik, A.C.: Multiscale structural similarity for image quality assessment. In: The Thirty-Seventh Asilomar Conference on Signals, Systems & Computers, 2003, vol. 2, pp. 1398–1402. IEEE (2003)

  45. Wen, J., Zhao, B.Z.H., Xue, M., Oprea, A., Qian, H.: With great dispersion comes greater resilience: efficient poisoning attacks and defenses for linear regression models. IEEE Trans. Inf. Forensics Secur. (2021)

  46. Wen, J., Zhao, B.Z.H., Xue, M., Qian, H.: PALOR: poisoning attacks against logistic regression. In: Liu, J.K., Cui, H. (eds.) ACISP 2020. LNCS, vol. 12248, pp. 447–460. Springer, Cham (2020). https://doi.org/10.1007/978-3-030-55304-3_23

  47. Xiang, J., Zhu, G.: Joint face detection and facial expression recognition with MTCNN. In: 2017 4th International Conference on Information Science and Control Engineering (ICISCE), pp. 424–427. IEEE (2017)

  48. Xu, J., Picek, S., et al.: Explainability-based backdoor attacks against graph neural networks. arXiv preprint arXiv:2104.03674 (2021)

  49. Yang, Z., Wilson, C., Wang, X., Gao, T., Zhao, B.Y., Dai, Y.: Uncovering social network sybils in the wild. ACM Trans. Knowl. Discov. Data (TKDD) 8(1), 1–29 (2014)

  50. Yi, D., Lei, Z., Liao, S., Li, S.Z.: Learning face representation from scratch. arXiv preprint arXiv:1411.7923 (2014)

  51. Zhang, H., Wang, H., Li, Y., Cao, Y., Shen, C.: Robust watermarking using inverse gradient attention. arXiv preprint arXiv:2011.10850 (2020)

  52. Zhang, K., Zhang, Z., Li, Z., Qiao, Y.: Joint face detection and alignment using multitask cascaded convolutional networks. IEEE Signal Process. Lett. 23(10), 1499–1503 (2016)

  53. Zügner, D., Akbarnejad, A., Günnemann, S.: Adversarial attacks on neural networks for graph data. In: Proceedings of the 24th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, pp. 2847–2856 (2018)

Acknowledgments

The authors affiliated with East China Normal University were, in part, supported by NSFC-ISF Joint Scientific Research Program (61961146004) and Innovation Program of Shanghai Municipal Education Commission (2021-01-07-00-08-E00101). Minhui Xue was, in part, supported by the Australian Research Council (ARC) Discovery Project (DP210102670).

Author information

Corresponding author

Correspondence to Haifeng Qian.


Copyright information

© 2021 Springer Nature Switzerland AG

About this paper

Cite this paper

Chen, L., Wang, H., Zhao, B.Z.H., Xue, M., Qian, H. (2021). Oriole: Thwarting Privacy Against Trustworthy Deep Learning Models. In: Baek, J., Ruj, S. (eds) Information Security and Privacy. ACISP 2021. Lecture Notes in Computer Science, vol. 13083. Springer, Cham. https://doi.org/10.1007/978-3-030-90567-5_28

Download citation

  • DOI: https://doi.org/10.1007/978-3-030-90567-5_28

  • Published:

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-90566-8

  • Online ISBN: 978-3-030-90567-5

  • eBook Packages: Computer Science, Computer Science (R0)
