DOI: 10.1145/3427228.3427232

Secure and Verifiable Inference in Deep Neural Networks

Published: 08 December 2020

Abstract

Outsourced inference services have greatly promoted the popularity of deep learning and helped users customize a range of personalized applications. However, they also entail a variety of security and privacy issues introduced by untrusted service providers. In particular, a malicious adversary may violate user privacy during the inference process or, worse, return incorrect results to the client by compromising the integrity of the outsourced model. To address these problems, we propose SecureDL to protect the model's integrity and the user's privacy in the Deep Neural Network (DNN) inference process. In SecureDL, we first transform the complicated non-linear activation functions of DNNs into low-degree polynomials. We then give a novel method to generate sensitive-samples, which can verify the integrity of a model's parameters outsourced to the server with high accuracy. Finally, we exploit Leveled Homomorphic Encryption (LHE) to achieve privacy-preserving inference. We show that our sensitive-samples are indeed highly sensitive to model changes, such that even a small change in parameters is reflected in the model outputs. Based on experiments conducted on real data and different types of attacks, we demonstrate the superior performance of SecureDL in terms of detection accuracy, inference accuracy, and computation and communication overheads.
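As a rough illustration of the first step described above: LHE schemes can evaluate only additions and multiplications, so a non-linear activation must be replaced by a polynomial surrogate before encrypted inference. The sketch below fits a low-degree polynomial to the sigmoid by least squares; the choice of sigmoid, the degree, and the input range are illustrative assumptions, not the paper's exact parameters.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Sample the activation over a bounded input range (a common assumption,
# since polynomial fits diverge far outside the fitted interval).
xs = np.linspace(-6.0, 6.0, 1000)

# Least-squares fit of a degree-3 polynomial to sigmoid on [-6, 6].
coeffs = np.polyfit(xs, sigmoid(xs), 3)
poly = np.poly1d(coeffs)

# The surrogate involves only additions and multiplications, so it can
# be evaluated homomorphically; check how closely it tracks sigmoid.
max_err = np.max(np.abs(poly(xs) - sigmoid(xs)))
print(f"max approximation error on [-6, 6]: {max_err:.4f}")
```

In practice the degree trades off accuracy against the multiplicative depth the LHE parameters must support, which is why low-degree fits are preferred.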




Published In

ACSAC '20: Proceedings of the 36th Annual Computer Security Applications Conference
December 2020
962 pages
ISBN: 9781450388580
DOI: 10.1145/3427228

Publisher

Association for Computing Machinery

New York, NY, United States


Author Tags

  1. Deep Learning
  2. Privacy Protection
  3. Verifiable Inference

Qualifiers

  • Research-article
  • Research
  • Refereed limited

Funding Sources

  • Sichuan Science and Technology Program
  • Peng Cheng Laboratory Project of Guangdong Province
  • National Key R&D Program of China
  • National Natural Science Foundation of China

Conference

ACSAC '20

Acceptance Rates

Overall acceptance rate: 104 of 497 submissions (21%)

Article Metrics

  • Downloads (last 12 months): 180
  • Downloads (last 6 weeks): 23

Reflects downloads up to 18 Dec 2024.

Cited By

  • (2024) VERITAS: Plaintext Encoders for Practical Verifiable Homomorphic Encryption. Proceedings of the 2024 ACM SIGSAC Conference on Computer and Communications Security, 2520–2534. DOI: 10.1145/3658644.3670282
  • (2024) Towards Protecting On-Device Machine Learning with RISC-V based Multi-Enclave TEE. 2024 33rd International Conference on Computer Communications and Networks (ICCCN), 1–6. DOI: 10.1109/ICCCN61486.2024.10637594
  • (2024) Vehicle as a Service (VaaS): Leverage Vehicles to Build Service Networks and Capabilities for Smart Cities. IEEE Communications Surveys & Tutorials 26, 3, 2048–2081. DOI: 10.1109/COMST.2024.3370169
  • (2024) EPIDL: Towards efficient and privacy-preserving inference in deep learning. Concurrency and Computation: Practice and Experience 36, 14. DOI: 10.1002/cpe.8110
  • (2023) Overview of artificial intelligence model watermarking. Journal of Image and Graphics 28, 6, 1792–1810. DOI: 10.11834/jig.230010
  • (2023) Privacy-Preserving and Verifiable Outsourcing Linear Inference Computing Framework. IEEE Transactions on Services Computing 16, 6, 4591–4604. DOI: 10.1109/TSC.2023.3332933
  • (2023) pvCNN: Privacy-Preserving and Verifiable Convolutional Neural Network Testing. IEEE Transactions on Information Forensics and Security 18, 2218–2233. DOI: 10.1109/TIFS.2023.3262932
  • (2023) VerSA: Verifiable Secure Aggregation for Cross-Device Federated Learning. IEEE Transactions on Dependable and Secure Computing 20, 1, 36–52. DOI: 10.1109/TDSC.2021.3126323
  • (2023) Secure Decentralized Image Classification With Multiparty Homomorphic Encryption. IEEE Transactions on Circuits and Systems for Video Technology 33, 7, 3185–3198. DOI: 10.1109/TCSVT.2023.3234278
  • (2023) Privacy-Preserving and Secure Cloud Computing: A Case of Large-Scale Nonlinear Programming. IEEE Transactions on Cloud Computing 11, 1, 484–498. DOI: 10.1109/TCC.2021.3099720
