
Not All Features Are Equal: Discovering Essential Features for Preserving Prediction Privacy

Published: 03 June 2021

Abstract

When a client uses machine learning services from the cloud, the provider does not need to receive all of the input features; in fact, only a subset of the features is necessary for the target prediction task. Discerning this subset is the key problem of this work. We address it with a gradient-based perturbation maximization method that discovers the subset in the input feature space with respect to the functionality of the prediction model used by the provider. After identifying the subset, our framework, Cloak, suppresses the remaining features using utility-preserving constant values that are discovered through a separate gradient-based optimization process. We show that Cloak does not necessarily require collaboration from the service provider beyond its normal service, and that it can be applied in scenarios where we have only black-box access to the service provider's model. We theoretically guarantee that Cloak's optimizations reduce the upper bound of the mutual information (MI) between the data and the sifted representations that are sent out. Experimental results show that Cloak reduces the mutual information between the input and the sifted representations by 85.01% while reducing utility by only 1.42%. In addition, we show that Cloak greatly diminishes adversaries' ability to learn and infer non-conducive features.
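To make the two-step procedure above concrete, the following is a minimal PyTorch sketch of the mechanism the abstract describes: a relaxed per-feature mask is optimized by gradient descent to find the essential subset, and constant fill values are learned for the suppressed features while the provider's model stays frozen. This is a hypothetical illustration, not the authors' released implementation; the classifier f, the data loader, and all names here are assumptions, and the paper describes the subset discovery and the constant discovery as separate optimization processes, whereas this sketch relaxes them into one joint objective for brevity.

```python
# Hypothetical sketch of the two-step idea in the abstract (not the authors'
# released code): learn a relaxed per-feature mask that finds the essential
# subset, plus constant fill values for the suppressed features, while the
# provider's model stays frozen.
import torch
import torch.nn.functional as F

def train_cloak_sketch(f, loader, n_features, steps=1000, lam=1e-3, lr=0.1):
    """f: frozen classifier over flat n_features inputs; loader yields (x, y)."""
    for p in f.parameters():
        p.requires_grad_(False)                      # provider's model is untouched
    mask_logits = torch.zeros(n_features, requires_grad=True)  # relaxed mask
    constants = torch.zeros(n_features, requires_grad=True)    # learned fill values
    opt = torch.optim.Adam([mask_logits, constants], lr=lr)
    for _, (x, y) in zip(range(steps), loader):
        m = torch.sigmoid(mask_logits)               # soft gate in (0, 1) per feature
        x_sifted = m * x + (1 - m) * constants       # suppressed features -> constants
        task_loss = F.cross_entropy(f(x_sifted), y)  # preserve the target task
        loss = task_loss + lam * m.sum()             # reward suppressing more features
        opt.zero_grad()
        loss.backward()
        opt.step()
    keep = torch.sigmoid(mask_logits) > 0.5          # hard subset of essential features
    return keep, constants.detach()
```

Sweeping the hypothetical lam parameter trades off how many features are suppressed against task accuracy, mirroring the privacy/utility trade-off the abstract reports; at inference time the client sends only x_sifted, in which the non-essential features carry constants that reveal nothing about the input.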

Published In

WWW '21: Proceedings of the Web Conference 2021 (April 2021, 4054 pages)
ISBN: 9781450383127
DOI: 10.1145/3442381

Publisher

Association for Computing Machinery, New York, NY, United States

Author Tags

1. Deep Learning
2. Fairness
3. Privacy-preserving Machine Learning

Conference

WWW '21: The Web Conference 2021
April 19-23, 2021
Ljubljana, Slovenia

Acceptance Rates

Overall Acceptance Rate: 1,899 of 8,196 submissions, 23%
