Abstract
Federated learning is a distributed machine learning framework for edge computing devices that offers several benefits, such as mitigating over-fitting and protecting privacy. However, most federated learning paradigms do not take fairness into account. Because the quality and quantity of the data held by each participant vary, so do their contributions. In other words, the fact that all devices receive the same model as a reward, regardless of their differing contributions, is unfair to those who contribute the most. In this work, we propose s-CFFL, a federated framework for edge computing devices that combines a reputation mechanism with distributed selective stochastic gradient descent (DSSGD) to achieve collaborative fairness. In addition, we investigate the framework's resistance to free-riders and several other common adversaries. We conduct comprehensive experiments comparing our framework with FedAvg, DSSGD, and other related approaches. The results indicate that our approach strikes a balance between predictive accuracy and collaborative fairness while also improving model robustness.
Data Availability
The datasets generated and analyzed during the current study are not publicly available, but are available from the corresponding author upon reasonable request.
References
Alistarh, D., Hoefler, T., Johansson, M., Konstantinov, N., Khirirat, S., Renggli, C.: The convergence of sparsified gradient methods. Advances in Neural Information Processing Systems 31 (2018)
Bernstein, J., Zhao, J., Azizzadenesheli, K., Anandkumar, A.: signSGD with majority vote is communication efficient and Byzantine fault tolerant (2018)
Biggio, B., Nelson, B., Laskov, P.: Support vector machines under adversarial label noise. In: Asian Conference on Machine Learning (2011)
Blanchard, P., El Mhamdi, E.M., Guerraoui, R., Stainer, J.: Machine learning with adversaries: Byzantine tolerant gradient descent. Advances in Neural Information Processing Systems 30 (2017)
Cao, K., Liu, Y., Meng, G., Sun, Q.: An overview on edge computing research. IEEE Access 8, 85714–85728 (2020). https://doi.org/10.1109/ACCESS.2020.2991734
Chen, J., Ran, X.: Deep learning with edge computing: A review. Proceedings of the IEEE 107(8), 1655–1674 (2019). https://doi.org/10.1109/JPROC.2019.2921977
Dean, J., Corrado, G., Monga, R., Chen, K., Devin, M., Mao, M., Ranzato, M., Senior, A., Tucker, P., Yang, K., et al.: Large scale distributed deep networks. Advances in neural information processing systems 25 (2012)
Dong, Y., Chen, Y., Ramchandran, K., Bartlett, P.: Byzantine-robust distributed learning: Towards optimal statistical rates (2018)
Fung, C., Yoon, C.J., Beschastnikh, I.: The limitations of federated learning in sybil settings. In: 23rd International Symposium on Research in Attacks, Intrusions and Defenses (RAID 2020), pp. 301–316 (2020)
Gollapudi, S., Kollias, K., Panigrahi, D., Pliatsika, V.: Profit sharing and efficiency in utility games (2017)
Huang, T., Lin, W., Wu, W., He, L., Li, K., Zomaya, A.Y.: An efficiency-boosting client selection scheme for federated learning with fairness guarantee. IEEE (7) (2021)
Jebreel, N.M., Domingo-Ferrer, J., Sánchez, D., Blanco-Justicia, A.: Defending against the Label-flipping Attack in Federated Learning. arXiv preprint arXiv:2207.01982 (2022)
Kairouz, P., McMahan, H.B., Avent, B., Bellet, A., Bennis, M., Bhagoji, A.N., Bonawitz, K., Charles, Z., Cormode, G., Cummings, R., et al.: Advances and open problems in federated learning. arXiv preprint arXiv:1912.04977 (2019)
Kantarcioglu, M., Clifton, C.: Privacy-preserving distributed mining of association rules on horizontally partitioned data. IEEE transactions on knowledge and data engineering 16(9), 1026–1037 (2004)
Krizhevsky, A., Hinton, G., et al.: Learning multiple layers of features from tiny images (2009)
LeCun, Y., Bottou, L., Bengio, Y., Haffner, P.: Gradient-based learning applied to document recognition. Proceedings of the IEEE 86(11), 2278–2324 (1998)
Li, D., Wang, J.: FedMD: Heterogenous federated learning via model distillation. arXiv:1910.03581 (2019)
Li, L., Fan, Y., Tse, M., Lin, K.Y.: A review of applications in federated learning. Computers & Industrial Engineering 149(5), 106854 (2020)
Li, T., Sahu, A.K., Talwalkar, A., Smith, V.: Federated learning: Challenges, methods, and future directions. IEEE Signal Processing Magazine 37(3), 50–60 (2020)
Li, T., Sanjabi, M., Beirami, A., Smith, V.: Fair resource allocation in federated learning (2019)
Li, X., Huang, K., Yang, W., Wang, S., Zhang, Z.: On the convergence of FedAvg on non-IID data. arXiv:1907.02189 (2019)
Lyu, L., Xu, X., Wang, Q.: Collaborative fairness in federated learning (2020)
Lyu, L., Yu, J., Nandakumar, K., Li, Y., Ng, K.S.: Towards fair and privacy-preserving federated deep models. IEEE Transactions on Parallel and Distributed Systems PP(99), 1–1 (2020)
Mansour, Y., Mohri, M., Ro, J., Suresh, A.T.: Three approaches for personalization with applications to federated learning. Computer Science (2020)
McMahan, H.B., Moore, E., Ramage, D., Hampson, S., Arcas, B.: Communication-efficient learning of deep networks from decentralized data (2016)
Mohri, M., Sivek, G., Suresh, A.T.: Agnostic federated learning. In: International Conference on Machine Learning, pp. 4615–4625. PMLR (2019)
Regatti, J., Gupta, A.: Befriending the byzantines through reputation scores (2020)
Richardson, A., Filos-Ratsikas, A., Faltings, B.: Rewarding high-quality data via influence functions (2019)
Saadat, H., Aboumadi, A., Mohamed, A., Erbad, A., Guizani, M.: Hierarchical federated learning for collaborative ids in iot applications. In: 2021 10th Mediterranean Conference on Embedded Computing (MECO) (2021)
Shi, W., Dustdar, S.: The promise of edge computing. Computer 49(5), 78–81 (2016). https://doi.org/10.1109/MC.2016.145
Shokri, R., Shmatikov, V.: Privacy-preserving deep learning. In: Proceedings of the 22nd ACM SIGSAC conference on computer and communications security, pp. 1310–1321 (2015)
Shukla, P., Nasrin, S., Darabi, N., Gomes, W., Trivedi, A.R.: Mc-cim: Compute-in-memory with monte-carlo dropouts for bayesian edge intelligence (2021)
Sim, R., Zhang, Y., Chan, M.C., Low, B.: Collaborative machine learning with incentive-aware model rewards (2020)
Song, T., Tong, Y., Wei, S.: Profit allocation for federated learning. In: 2019 IEEE International Conference on Big Data (Big Data) (2020)
Wang, T., Rausch, J., Zhang, C., Jia, R., Song, D.: A principled approach to data valuation for federated learning (2020)
Xu, X., Lyu, L.: A reputation mechanism is all you need: Collaborative fairness and adversarial robustness in federated learning. arXiv:2011.10464 (2020)
Yan, Z., Xiao, D., Chen, M., Zhou, J., Wu, W.: Dual-way gradient sparsification for asynchronous distributed deep learning. In: 49th International Conference on Parallel Processing-ICPP, pp. 1–10 (2020)
Yang, Q., Liu, Y., Chen, T., Tong, Y.: Federated machine learning: Concept and applications. ACM Transactions on Intelligent Systems and Technology (TIST) 10(2), 1–19 (2019)
Yang, Q., Liu, Y., Cheng, Y., Kang, Y., Chen, T., Yu, H.: Federated learning. Synthesis Lectures on Artificial Intelligence and Machine Learning 13(3), 1–207 (2019)
Yang, S., Wu, F., Tang, S., Gao, X., Yang, B., Chen, G.: On designing data quality-aware truth estimation and surplus sharing method for mobile crowdsensing. IEEE Journal on Selected Areas in Communications pp. 832–847 (2017)
Yu, H., Liu, Z., Liu, Y., Chen, T., Cong, M., Weng, X., Niyato, D., Yang, Q.: A fairness-aware incentive scheme for federated learning. In: Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society, pp. 393–399 (2020)
Zhang, G., Malekmohammadi, S., Chen, X., Yu, Y.: Equality is not equity: Proportional fairness in federated learning (2022)
Zhang, J., Li, C., Robles-Kelly, A., Kankanhalli, M.: Hierarchically fair federated learning. arXiv:2004.10386 (2020)
Zhao, Y., Li, M., Lai, L., Suda, N., Civin, D., Chandra, V.: Federated learning with non-iid data (2018)
Zhao, Y., Zhao, J., Jiang, L., Tan, R., Niyato, D., Li, Z., Lyu, L., Liu, Y.: Privacy-preserving blockchain-based federated learning for iot devices. IEEE (3) (2021)
Zhao, Z., Joshi, G.: A dynamic reweighting strategy for fair federated learning. In: ICASSP 2022-2022 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pp. 8772–8776. IEEE (2022)
Acknowledgements
First, I would like to thank my best friend, Zijie Wang, who gave me a great deal of moral support, encouragement, and inspiration when I was feeling low. I would also like to thank my tutors and classmates, who offered much valuable and useful advice; it was through discussions and exchanges with them that this manuscript was completed.
Funding
The study was supported by the National Key Research and Development Program (2022YFB3305200).
Author information
Authors and Affiliations
Contributions
Hailin Yang conceived the original idea for the paper, carried out a methodological feasibility study, and completed the design of the proposal with the help of Yanhong Huang and Jianqi Shi, followed by the experimental design, data collection, and plotting. Hailin Yang wrote the main manuscript text, and Yanhong Huang, Jianqi Shi, and Yang Yang provided substantial help with reviewing and editing.
Corresponding author
Ethics declarations
Ethics Approval
Not applicable.
Consent to Participate
Not applicable.
Consent for Publication
All authors approved the final manuscript and the submission to this journal.
Competing Interests
The authors have no competing interests to declare that are relevant to the content of this manuscript.
Additional information
Publisher's Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Appendix: Additional Experimental Results
We examine two other types of untargeted attacks, namely value-inverting and re-scaling: value-inverting randomly inverts the element-wise values of the gradients, and re-scaling arbitrarily rescales the gradients. Table 5 shows the experimental results under the re-scaling and value-inverting attacks when the data is distributed independently and identically across participants, with 5 honest participants and 2 adversaries.
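For concreteness, the following is a minimal sketch (not the paper's code) of how an adversary could apply these two attacks to its local update before uploading it; the function names, the inversion probability, and the scaling range are illustrative assumptions.

```python
import numpy as np

def value_inverting_attack(grad: np.ndarray, flip_prob: float = 0.5,
                           rng: np.random.Generator = None) -> np.ndarray:
    """Randomly invert the sign of individual gradient elements."""
    rng = rng or np.random.default_rng()
    mask = rng.random(grad.shape) < flip_prob   # elements chosen for inversion
    return np.where(mask, -grad, grad)

def rescaling_attack(grad: np.ndarray, low: float = 0.1, high: float = 10.0,
                     rng: np.random.Generator = None) -> np.ndarray:
    """Rescale the whole gradient by an arbitrary random factor."""
    rng = rng or np.random.default_rng()
    return rng.uniform(low, high) * grad

# Example: an adversary corrupts its local update before sharing it.
honest_update = np.random.randn(1000)
malicious_update = rescaling_attack(value_inverting_attack(honest_update))
```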
Compared with other similar methods, our proposed method is more robust under both attacks: it effectively distinguishes honest participants from adversaries and isolates adversarial interference earlier.
The experimental results on the CIFAR-10 dataset with 5 and 20 participants are shown in Table 6 and Figs. 11 and 12, and further confirm the validity of our method. The reputation value reflects the quality of a participant's data and is positively correlated with the final model accuracy. From the experimental data, we can see that our method achieves fairness comparable to or better than CFFL, and far better than FedAvg and DSSGD, which do not consider fairness, and q-FFL, which pursues equality rather than contribution-based fairness.
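As a point of reference, one common way to quantify collaborative fairness (used, for example, in CFFL-style evaluations) is the Pearson correlation between each participant's contribution proxy, such as its standalone accuracy or reputation, and the accuracy of the model it finally receives. The sketch below illustrates this measure; the numeric values are purely illustrative and are not taken from Table 6.

```python
import numpy as np

def collaborative_fairness(contribution: np.ndarray, final_accuracy: np.ndarray) -> float:
    """Higher correlation => rewards track contributions more closely => fairer."""
    return float(np.corrcoef(contribution, final_accuracy)[0, 1])

standalone_acc = np.array([0.62, 0.68, 0.71, 0.75, 0.80])   # contribution proxy
received_acc   = np.array([0.70, 0.74, 0.78, 0.82, 0.86])   # per-participant reward
print(collaborative_fairness(standalone_acc, received_acc))
```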
Rights and permissions
Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.
About this article
Cite this article
Yang, H., Huang, Y., Shi, J. et al. A Federated Framework for Edge Computing Devices with Collaborative Fairness and Adversarial Robustness. J Grid Computing 21, 36 (2023). https://doi.org/10.1007/s10723-023-09658-x