DOI: 10.1145/3386901.3388946 · MobiSys Conference Proceedings · Research Article

DarkneTZ: towards model privacy at the edge using trusted execution environments

Published: 15 June 2020

Abstract

We present DarkneTZ, a framework that uses an edge device's Trusted Execution Environment (TEE) in conjunction with model partitioning to limit the attack surface against Deep Neural Networks (DNNs). Increasingly, edge devices (smartphones and consumer IoT devices) are equipped with pre-trained DNNs for a variety of applications. This trend comes with privacy risks as models can leak information about their training data through effective membership inference attacks (MIAs).
We evaluate the performance of DarkneTZ, including CPU execution time, memory usage, and power consumption, using two small and six large image classification models. Because the edge device's TEE has limited memory, we partition the model's layers into a set of more sensitive layers (executed inside the device TEE) and the remaining layers (executed in the untrusted part of the operating system). Our results show that hiding even a single layer provides reliable model privacy and defends against state-of-the-art MIAs with only 3% performance overhead; when fully utilizing the TEE, DarkneTZ protects the model with up to 10% overhead.
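The layer-partitioning idea can be sketched as follows. This is an illustrative Python sketch, not DarkneTZ's actual implementation (which builds on the Darknet framework in C); the function names and the toy four-"layer" model are assumptions made for the example:

```python
# Sketch: DarkneTZ partitions a DNN so that the last (most privacy-
# sensitive) layers run inside the TEE while the rest run in the
# untrusted normal world.  All names here are hypothetical.

def partition(layers, n_trusted):
    """Split a layer list: all but the last n_trusted run outside the TEE."""
    cut = len(layers) - n_trusted
    return layers[:cut], layers[cut:]

def forward(x, untrusted, trusted):
    # Normal-world execution up to the partition point...
    for f in untrusted:
        x = f(x)
    # ...then the remaining layers (and the final output) stay inside the TEE.
    for f in trusted:
        x = f(x)
    return x

# Toy 4-"layer" model: each layer is just a function on a number.
model = [lambda x: x + 1, lambda x: x * 2, lambda x: x - 3, lambda x: x * 10]

# Hide only the last layer, mirroring the paper's minimal-protection case.
untrusted, trusted = partition(model, n_trusted=1)
print(forward(1, untrusted, trusted))  # ((1 + 1) * 2 - 3) * 10 = 10
```

Hiding only the final layers is the interesting trade-off: those layers leak the most membership information, yet they are small enough to fit in the TEE's limited secure memory.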





      Published In

      MobiSys '20: Proceedings of the 18th International Conference on Mobile Systems, Applications, and Services
      June 2020
      496 pages
      ISBN:9781450379540
      DOI:10.1145/3386901

Publisher

Association for Computing Machinery, New York, NY, United States


      Qualifiers

      • Research-article

      Funding Sources

      • EPSRC Databox
      • EPSRC DADA

      Conference

      MobiSys '20

      Acceptance Rates

Overall acceptance rate: 274 of 1,679 submissions (16%)


Cited By

• (2025) Mind your indices! Index hijacking attacks on collaborative unpooling autoencoder systems. Internet of Things, 29, 101462. DOI: 10.1016/j.iot.2024.101462. Online publication date: Jan 2025.
• (2024) Slalom at the Carnival: Privacy-preserving Inference with Masks from Public Knowledge. IACR Communications in Cryptology. DOI: 10.62056/akp-49qgxq. Online publication date: 7 Oct 2024.
• (2024) Implementation of an Image Tampering Detection System with a CMOS Image Sensor PUF and OP-TEE. Sensors, 24(22), 7121. DOI: 10.3390/s24227121. Online publication date: 5 Nov 2024.
• (2024) A First Look At Efficient And Secure On-Device LLM Inference Against KV Leakage. Proceedings of the 19th Workshop on Mobility in the Evolving Internet Architecture, 13-18. DOI: 10.1145/3691555.3696827. Online publication date: 18 Nov 2024.
• (2024) Machine Learning with Confidential Computing: A Systematization of Knowledge. ACM Computing Surveys, 56(11), 1-40. DOI: 10.1145/3670007. Online publication date: 3 Jun 2024.
• (2024) TransLinkGuard: Safeguarding Transformer Models Against Model Stealing in Edge Deployment. Proceedings of the 32nd ACM International Conference on Multimedia, 3479-3488. DOI: 10.1145/3664647.3680786. Online publication date: 28 Oct 2024.
• (2024) HyperTheft: Thieving Model Weights from TEE-Shielded Neural Networks via Ciphertext Side Channels. Proceedings of the 2024 ACM SIGSAC Conference on Computer and Communications Security, 4346-4360. DOI: 10.1145/3658644.3690317. Online publication date: 2 Dec 2024.
• (2024) DM-TEE: Trusted Execution Environment for Disaggregated Memory. Proceedings of the Great Lakes Symposium on VLSI 2024, 204-209. DOI: 10.1145/3649476.3658702. Online publication date: 12 Jun 2024.
• (2024) GuaranTEE. Proceedings of the 4th Workshop on Machine Learning and Systems, 1-9. DOI: 10.1145/3642970.3655845. Online publication date: 22 Apr 2024.
• (2024) SpotOn: Adversarially Robust Keyword Spotting on Resource-Constrained IoT Platforms. Proceedings of the 19th ACM Asia Conference on Computer and Communications Security, 684-699. DOI: 10.1145/3634737.3637668. Online publication date: 1 Jul 2024.
