Unraveling the Connections between Privacy and Certified Robustness in Federated Learning Against Poisoning Attacks

Published: 21 November 2023

Abstract

Federated learning (FL) provides an efficient paradigm to jointly train a global model leveraging data from distributed users. As local training data comes from different users who may not be trustworthy, several studies have shown that FL is vulnerable to poisoning attacks. Meanwhile, to protect the privacy of local users, FL is usually trained in a differentially private way (DPFL). Thus, in this paper, we ask: What are the underlying connections between differential privacy and certified robustness in FL against poisoning attacks? Can we leverage the innate privacy property of DPFL to provide certified robustness for FL? Can we further improve the privacy of FL to improve such robustness certification? We first investigate both user-level and instance-level privacy of FL and provide formal privacy analysis to achieve improved instance-level privacy. We then provide two robustness certification criteria: certified prediction and certified attack inefficacy for DPFL on both user and instance levels. Theoretically, we provide the certified robustness of DPFL based on both criteria given a bounded number of adversarial users or instances. Empirically, we conduct extensive experiments to verify our theories under a range of poisoning attacks on different datasets. We find that increasing the level of privacy protection in DPFL results in stronger certified attack inefficacy; however, it does not necessarily lead to a stronger certified prediction. Thus, achieving the optimal certified prediction requires a proper balance between privacy and utility loss.
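
To make the DPFL setting concrete, the sketch below shows one aggregation round of user-level differentially private federated averaging (DP-FedAvg style): each client's update is clipped to a fixed L2 norm and Gaussian noise calibrated to that bound is added to the average. This is a minimal illustration only; the function names, clipping bound, and noise multiplier are placeholder assumptions for exposition, not the configuration or implementation used in the paper.

```python
# Minimal sketch of one user-level DP-FedAvg aggregation round.
# Illustrative only: clip_norm, noise_multiplier, and the toy updates are
# placeholder choices, not the paper's experimental configuration.
import numpy as np

def clip_update(update, clip_norm):
    """Rescale a client's model update so its L2 norm is at most clip_norm."""
    norm = np.linalg.norm(update)
    return update * min(1.0, clip_norm / (norm + 1e-12))

def dp_fedavg_round(global_model, client_updates, clip_norm=1.0,
                    noise_multiplier=1.0, rng=None):
    """Clip each user's update, average, and add Gaussian noise whose scale
    is calibrated to the per-user sensitivity clip_norm / num_users."""
    rng = rng or np.random.default_rng(0)
    m = len(client_updates)
    clipped = [clip_update(u, clip_norm) for u in client_updates]
    avg = np.mean(clipped, axis=0)
    noise = rng.normal(0.0, noise_multiplier * clip_norm / m, size=avg.shape)
    return global_model + avg + noise

# Toy usage: 10 clients each send a random "update" for a 5-parameter model.
model = np.zeros(5)
updates = [np.random.default_rng(i).normal(size=5) for i in range(10)]
model = dp_fedavg_round(model, updates, clip_norm=1.0, noise_multiplier=1.0)
print(model)
```

At the instance level, the noise would instead be added during each client's local training (in the style of DP-SGD); this user-versus-instance distinction is the one the abstract's two certification criteria are built on.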

        Published In

        CCS '23: Proceedings of the 2023 ACM SIGSAC Conference on Computer and Communications Security
        November 2023
        3722 pages
        ISBN: 9798400700507
        DOI: 10.1145/3576915
        Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than the author(s) must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected].

        Publisher

        Association for Computing Machinery

        New York, NY, United States

        Publication History

        Published: 21 November 2023

        Author Tags

        1. certified robustness
        2. differential privacy
        3. federated learning
        4. poisoning attacks

        Qualifiers

        • Research-article

        Conference

        CCS '23

        Acceptance Rates

        Overall Acceptance Rate: 1,261 of 6,999 submissions, 18%

        Bibliometrics & Citations

        Article Metrics

        • Downloads (Last 12 months): 1,061
        • Downloads (Last 6 weeks): 109

        Reflects downloads up to 13 Dec 2024

        Cited By

        • (2024) Enhancing Model Poisoning Attacks to Byzantine-Robust Federated Learning via Critical Learning Periods. Proceedings of the 27th International Symposium on Research in Attacks, Intrusions and Defenses, 496-512. https://doi.org/10.1145/3678890.3678915. Online publication date: 30-Sep-2024.
        • (2024) Two-Tier Data Packing in RLWE-based Homomorphic Encryption for Secure Federated Learning. Proceedings of the 2024 ACM SIGSAC Conference on Computer and Communications Security, 2844-2858. https://doi.org/10.1145/3658644.3690191. Online publication date: 2-Dec-2024.
        • (2024) Adversarial Machine Learning for Social Good: Reframing the Adversary as an Ally. IEEE Transactions on Artificial Intelligence, 5(9), 4322-4343. https://doi.org/10.1109/TAI.2024.3383407. Online publication date: Sep-2024.
        • (2024) Improving Privacy-Preserving Vertical Federated Learning by Efficient Communication with ADMM. 2024 IEEE Conference on Secure and Trustworthy Machine Learning (SaTML), 443-471. https://doi.org/10.1109/SaTML59370.2024.00029. Online publication date: 9-Apr-2024.
        • (2024) Attacking Byzantine Robust Aggregation in High Dimensions. 2024 IEEE Symposium on Security and Privacy (SP), 1325-1344. https://doi.org/10.1109/SP54263.2024.00217. Online publication date: 19-May-2024.
        • (2024) An Overview of Trustworthy AI: Advances in IP Protection, Privacy-preserving Federated Learning, Security Verification, and GAI Safety Alignment. IEEE Journal on Emerging and Selected Topics in Circuits and Systems, 1-1. https://doi.org/10.1109/JETCAS.2024.3477348. Online publication date: 2024.
