
Noise-aware Local Model Training Mechanism for Federated Learning

Published: 15 June 2023

Abstract

As a new paradigm for training intelligent models, federated learning is widely used to train a global model without requiring end devices to upload their local data. However, datasets often contain mislabeled (i.e., noisy) samples, which cause model updates to deviate from the correct direction during training and thus reduce the convergence accuracy of the global model. Existing work reduces the impact of noisy samples by correcting their labels; however, such methods require prior knowledge and incur additional communication costs, so they cannot be applied directly to federated learning, where data privacy concerns and limited communication resources are binding constraints. This paper therefore proposes a noise-aware local model training method that corrects noisy labels directly on the end device under the constraints of federated learning. By constructing a label correction model, we formally define a joint optimization problem over both the label correction model and the client-side local training model (e.g., a classification model). To solve this problem, we propose a robustness training algorithm based on label correction, together with a cross-validation data sampling algorithm that updates both models simultaneously. Experiments verify that the mechanism effectively improves model convergence accuracy on noisy datasets in federated learning scenarios.
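The abstract's pipeline (estimate how labels are corrupted, then correct them on-device before local training) can be illustrated in a deliberately simplified form. The paper's actual mechanism jointly optimizes a learned label correction model with the local classifier; the transition-matrix estimator below, the trusted-subset assumption, and all function names are illustrative sketches, not the authors' method:

```python
def estimate_transition(trusted_pairs, num_classes):
    """Estimate a row-normalized noise transition matrix T, where
    T[true][obs] is the fraction of trusted samples of class `true`
    that were observed with label `obs` (a small clean subset is
    assumed to be available, e.g., via cross-validation sampling)."""
    counts = [[0.0] * num_classes for _ in range(num_classes)]
    for true, obs in trusted_pairs:
        counts[true][obs] += 1.0
    for row in counts:
        total = sum(row)
        if total > 0:
            for j in range(num_classes):
                row[j] /= total
    return counts

def correct_labels(observed_labels, T, num_classes, priors=None):
    """Maximum-likelihood relabeling: for each observed label, pick
    the true class t maximizing P(obs | t) * P(t)."""
    priors = priors or [1.0 / num_classes] * num_classes
    corrected = []
    for obs in observed_labels:
        scores = [T[t][obs] * priors[t] for t in range(num_classes)]
        corrected.append(max(range(num_classes), key=lambda t: scores[t]))
    return corrected

# Example: on the trusted subset, class 0 is observed as 1 about 30%
# of the time; labels observed at the device are then relabeled to
# their most likely true class before local training.
trusted = [(0, 0)] * 7 + [(0, 1)] * 3 + [(1, 1)] * 9 + [(1, 0)]
T = estimate_transition(trusted, num_classes=2)
cleaned = correct_labels([0, 1, 1, 0], T, num_classes=2)
```

In the paper's setting the correction step runs entirely on the client, so no labels or raw data leave the device; only the (corrected-label-trained) model update is communicated.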




      Published In

      ACM Transactions on Intelligent Systems and Technology, Volume 14, Issue 4 (August 2023), 481 pages
      ISSN: 2157-6904, EISSN: 2157-6912
      DOI: 10.1145/3596215
      Editor: Huan Liu

      Publisher

      Association for Computing Machinery, New York, NY, United States

      Publication History

      Published: 15 June 2023
      Online AM: 02 May 2023
      Accepted: 31 March 2023
      Revised: 03 March 2023
      Received: 07 October 2022
      Published in TIST Volume 14, Issue 4


      Author Tags

      1. Federated learning
      2. mislabeled samples
      3. noise-aware
      4. meta learning
      5. label correction

      Qualifiers

      • Research-article

      Funding Sources

      • National Natural Science Foundation of China
      • Jiangsu Provincial Key Laboratory of Network and Information Security
      • Key Laboratory of Computer Network and Information Integration of Ministry of Education of China
      • University Synergy Innovation Program of Anhui Province
      • Collaborative Innovation Center of Novel Software Technology and Industrialization
      • Fundamental Research Funds for the Central Universities
      • CCF-Baidu Open Fund
      • Future Network Scientific Research Fund Project
