Abstract
Federated learning is a distributed machine learning paradigm that enables model training without sharing clients’ private data. In each round, selected clients receive the global model, train it on their local private data, and report the updated parameters to the server, which then aggregates them into a new global model. Current aggregation schemes are generally heuristic and face great challenges under non-Independent and Identically Distributed (non-IID) data. In this paper, we propose to employ Q-learning to solve the aggregation problem under non-IID data. Specifically, we define the state, action, and reward for the target aggregation scenario and fit the problem into the Q-learning framework. Through the learning procedure, effective actions, i.e., weight assignments for aggregation, can be derived for given system states. Evaluation shows that, compared with the existing schemes FedAvg, FedProx, and FedAdp, the proposed FedQL strategy noticeably improves convergence speed under non-IID data.
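To make the abstract’s idea concrete, the following is a minimal, hypothetical sketch of Q-learning-guided aggregation, not the authors’ exact FedQL algorithm: the state discretization (bucketed global-model accuracy), the small discrete action set of candidate weight vectors, and the reward (accuracy gain after aggregation) are all illustrative assumptions.

```python
# Hypothetical sketch of Q-learning-guided aggregation (FedQL-style).
# Assumptions (not from the paper): state = discretized global accuracy,
# action = choice among fixed candidate weight vectors, reward = accuracy gain.
import random

def aggregate(client_updates, weights):
    """Weighted average of client parameter vectors (lists of floats)."""
    total = sum(weights)
    dim = len(client_updates[0])
    return [sum(w * u[i] for w, u in zip(weights, client_updates)) / total
            for i in range(dim)]

class QAggregator:
    """Tabular Q-learning over accuracy buckets (states) and a small set of
    candidate aggregation-weight assignments (actions)."""
    def __init__(self, actions, n_states=10, alpha=0.5, gamma=0.9, eps=0.1):
        self.actions = actions                  # each action is a weight vector
        self.q = [[0.0] * len(actions) for _ in range(n_states)]
        self.alpha, self.gamma, self.eps = alpha, gamma, eps
        self.n_states = n_states

    def state(self, accuracy):
        # Map accuracy in [0, 1] to a discrete bucket index.
        return min(int(accuracy * self.n_states), self.n_states - 1)

    def choose(self, s):
        # Epsilon-greedy action selection.
        if random.random() < self.eps:
            return random.randrange(len(self.actions))
        row = self.q[s]
        return row.index(max(row))

    def update(self, s, a, reward, s_next):
        # Standard tabular Q-learning update rule.
        best_next = max(self.q[s_next])
        self.q[s][a] += self.alpha * (reward + self.gamma * best_next - self.q[s][a])

# Toy round with two clients: pick weights, aggregate, then reinforce the
# chosen action with the observed accuracy gain (values here are made up).
random.seed(0)
agg = QAggregator(actions=[[0.5, 0.5], [0.8, 0.2], [0.2, 0.8]])
s = agg.state(0.25)
a = agg.choose(s)
new_model = aggregate([[1.0, 2.0], [3.0, 4.0]], agg.actions[a])
agg.update(s, a, reward=0.05, s_next=agg.state(0.30))
```

Over many rounds, actions (weight assignments) that yield higher accuracy gains in a given state accumulate larger Q-values and are selected more often, which is the intuition behind learning aggregation weights instead of fixing them heuristically as FedAvg does.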
Notes
- 1.
Note that a round starts with the server holding a global model, either a freshly initialized one or the one aggregated in the previous round.
References
McMahan, B., Moore, E., Ramage, D., Hampson, S., Arcas, B.A.: Communication-efficient learning of deep networks from decentralized data. In: Artificial Intelligence and Statistics, pp. 1273–1282 (2017)
Xia, Q., Ye, W., Tao, Z., Wu, J., Li, Q.: A survey of federated learning for edge computing: research problems and solutions. High-Confidence Comput. 1(1), 100008 (2021)
Yang, Q., Liu, Y., Cheng, Y., Kang, Y., Chen, T., Yu, H.: Federated learning. Synth. Lect. Artif. Intell. Mach. Learn. 13(3), 1–207 (2019)
Xie, Z., Huang, Y., Yu, D., Parizi, R.M., Zheng, Y., Pang, J.: FedEE: a federated graph learning solution for extended enterprise collaboration. IEEE Trans. Ind. Inf. 19(7), 8061–8071 (2023)
Brisimi, T.S., Chen, R., Mela, T., Olshevsky, A., Paschalidis, I.C., Shi, W.: Federated learning of predictive models from federated electronic health records. Int. J. Med. Inf. 112, 59–67 (2018)
Sheller, M.J., Reina, G.A., Edwards, B., Martin, J., Bakas, S.: Multi-institutional deep learning modeling without sharing patient data: a feasibility study on brain tumor segmentation. In: Crimi, A., Bakas, S., Kuijf, H., Keyvan, F., Reyes, M., van Walsum, T. (eds.) BrainLes 2018. LNCS, vol. 11383, pp. 92–104. Springer, Cham (2019). https://doi.org/10.1007/978-3-030-11723-8_9
Hard, A., et al.: Federated learning for mobile keyboard prediction. arXiv preprint arXiv:1811.03604 (2018)
Yang, T., et al.: Applied federated learning: improving google keyboard query suggestions. arXiv preprint arXiv:1812.02903 (2018)
Leroy, D., Coucke, A., Lavril, T., Gisselbrecht, T., Dureau, J.: Federated learning for keyword spotting. In: IEEE International Conference on Acoustics, Speech and Signal Processing, pp. 6341–6345 (2019)
Konečnỳ, J., McMahan, H.B., Yu, F.X., Richtárik, P., Suresh, A.T., Bacon, D.: Federated learning: strategies for improving communication efficiency. arXiv preprint arXiv:1610.05492 (2016)
Chen, S., Wang, Y., Yu, D., Ren, J., Xu, C., Zheng, Y.: Privacy-enhanced decentralized federated learning at dynamic edge. IEEE Trans. Computers 72(8), 2165–2180 (2023)
Ma, Z., Zhao, M., Cai, X., Jia, Z.: Fast-convergent federated learning with class-weighted aggregation. J. Syst. Architect. 117, 102125 (2021)
Wang, Y., et al.: Theoretical convergence guaranteed resource-adaptive federated learning with mixed heterogeneity. In: KDD, pp. 2444–2455 (2023)
Arachchige, P.C.M., Bertok, P., Khalil, I., Liu, D., Camtepe, S., Atiquzzaman, M.: A trustworthy privacy preserving framework for machine learning in industrial IoT systems. IEEE Trans. Ind. Inf. 16(9), 6092–6102 (2020)
Kim, H., Park, J., Bennis, M., Kim, S.-L.: Blockchained on-device federated learning. IEEE Commun. Lett. 24(6), 1279–1283 (2019)
Li, Y., Chen, C., Liu, N., Huang, H., Zheng, Z., Yan, Q.: A blockchain-based decentralized federated learning framework with committee consensus. IEEE Netw. 35(1), 234–241 (2020)
Lu, Y., Huang, X., Zhang, K., Maharjan, S., Zhang, Y.: Blockchain empowered asynchronous federated learning for secure data sharing in internet of vehicles. IEEE Trans. Veh. Technol. 69(4), 4298–4311 (2020)
Majeed, U., Hong, C.S.: FLchain: federated learning via MEC-enabled blockchain network. In: 20th Asia-Pacific Network Operations and Management Symposium, pp. 1–4 (2019)
Hu, F., Zhou, W., Liao, K., Li, H.: Contribution- and participation-based federated learning on non-IID data. IEEE Intell. Syst. 37(4), 35–43 (2022)
Xu, J., Chen, Z., Quek, T.Q., Chong, K.F.E.: FedCorr: multi-stage federated learning for label noise correction. In: Conference on Computer Vision and Pattern Recognition, pp. 10184–10193 (2022)
Zhang, L., Shen, L., Ding, L., Tao, D., Duan, L.-Y.: Fine-tuning global model via data-free knowledge distillation for non-IID federated learning. In: Conference on Computer Vision and Pattern Recognition, pp. 10174–10183 (2022)
Zheng, Y., Lai, S., Liu, Y., Yuan, X., Yi, X., Wang, C.: Aggregation service for federated learning: an efficient, secure, and more resilient realization. IEEE Trans. Depend. Secure Comput. 20(2), 988–1001 (2022)
Nishio, T., Yonetani, R.: Client selection for federated learning with heterogeneous resources in mobile edge. In: IEEE International Conference on Communications, pp. 1–7 (2019)
Lin, W., Xu, Y., Liu, B., Li, D., Huang, T., Shi, F.: Contribution-based federated learning client selection. Int. J. Intell. Syst. 37(10), 7235–7260 (2022)
Fang, X., Ye, M.: Robust federated learning with noisy and heterogeneous clients. In: Conference on Computer Vision and Pattern Recognition, pp. 10072–10081 (2022)
Wang, H., Kaplan, Z., Niu, D., Li, B.: Optimizing federated learning on non-IID data with reinforcement learning. In: IEEE Conference on Computer Communications, pp. 1698–1707 (2020)
Zhang, S.Q., Lin, J., Zhang, Q.: A multi-agent reinforcement learning approach for efficient client selection in federated learning. In: Proceedings of the AAAI Conference on Artificial Intelligence, vol. 36, no. 8, pp. 9091–9099 (2022)
Li, Z., Zhou, Y., Wu, D., Wang, R.: Local model update for blockchain enabled federated learning: approach and analysis. In: International Conference on Blockchain, pp. 113–121 (2021)
Xu, C., Hong, Z., Huang, M., Jiang, T.: Acceleration of federated learning with alleviated forgetting in local training. In: International Conference on Learning Representations (ICLR) (2022)
Jhunjhunwala, D., Gadhikar, A., Joshi, G., Eldar, Y.C.: Adaptive quantization of model updates for communication-efficient federated learning. In: IEEE International Conference on Acoustics, Speech and Signal Processing, pp. 3110–3114 (2021)
Liu, W., Chen, L., Chen, Y., Zhang, W.: Accelerating federated learning via momentum gradient descent. IEEE Trans. Parallel Distrib. Syst. 31(8), 1754–1766 (2020)
Ullah, S., Kim, D.: Federated learning convergence on IID features via optimized local model parameters. In: International Conference on Big Data and Smart Computing, pp. 92–95 (2022)
Xu, J., Du, W., Jin, Y., He, W., Cheng, R.: Ternary compression for communication-efficient federated learning. IEEE Trans. Neural Netw. Learn. Syst. 33(3), 1162–1176 (2022)
Cui, L., Su, X., Zhou, Y., Liu, J.: Optimal rate adaption in federated learning with compressed communications. In: Conference on Computer Communications, pp. 1459–1468 (2022)
Reisizadeh, A., Mokhtari, A., Hassani, H., Jadbabaie, A., Pedarsani, R.: FedPAQ: a communication-efficient federated learning method with periodic averaging and quantization. In: International Conference on Artificial Intelligence and Statistics, pp. 2021–2031 (2020)
Caldas, S., Konečny, J., McMahan, H.B., Talwalkar, A.: Expanding the reach of federated learning by reducing client resource requirements. arXiv preprint arXiv:1812.07210 (2018)
Paragliola, G.: Evaluation of the trade-off between performance and communication costs in federated learning scenario. Future Gener. Comput. Syst. 136, 282–293 (2022)
Abasi, A.K., Aloqaily, M., Guizani, M.: Grey wolf optimizer for reducing communication cost of federated learning. In: IEEE Global Communications Conference 2022, pp. 1049–1154 (2022)
Li, T., Sahu, A.K., Zaheer, M., Sanjabi, M., Talwalkar, A., Smith, V.: Federated optimization in heterogeneous networks. Proc. Mach. Learn. Syst. 2, 429–450 (2020)
Nguyen, V.-D., Sharma, S.K., Vu, T.X., Chatzinotas, S., Ottersten, B.: Efficient federated learning algorithm for resource allocation in wireless IoT networks. IEEE Internet Things J. 8(5), 3394–3409 (2020)
Song, Q., Lei, S., Sun, W., Zhang, Y.: Adaptive federated learning for digital twin driven industrial internet of things. In: IEEE Wireless Communications and Networking Conference 2021, pp. 1–6 (2021)
Huang, W., Li, T., Wang, D., Du, S., Zhang, J.: Fairness and accuracy in federated learning. arXiv preprint arXiv:2012.10069 (2020)
Tan, L., et al.: AdaFed: optimizing participation-aware federated learning with adaptive aggregation weights. IEEE Trans. Netw. Sci. Eng. 9, 2708–2720 (2022)
Mohri, M., Sivek, G., Suresh, A.T.: Agnostic federated learning. In: International Conference on Machine Learning, pp. 4615–4625 (2019)
Prauzek, M., Mourcet, N.R., Hlavica, J., Musilek, P.: Q-learning algorithm for energy management in solar powered embedded monitoring systems. In: IEEE Congress on Evolutionary Computation 2018, pp. 1–7 (2018)
Wu, H., Wang, P.: Fast-convergent federated learning with adaptive weighting. IEEE Trans. Cogn. Commun. Network. 7(4), 1078–1088 (2021)
LeCun, Y., Bottou, L., Bengio, Y.: Gradient-based learning applied to document recognition. Proc. IEEE 86(11), 2278–2324 (1998)
Krizhevsky, A.: One weird trick for parallelizing convolutional neural networks. arXiv preprint arXiv:1404.5997 (2014)
Xiao, H., Rasul, K., Vollgraf, R.: Fashion-MNIST: a novel image dataset for benchmarking machine learning algorithms. arXiv preprint arXiv:1708.07747 (2017)
Duan, M., et al.: Astraea: self-balancing federated learning for improving classification accuracy of mobile deep learning applications. In: International Conference on Computer Design, pp. 246–254 (2019)
Jiao, Y., Wang, P., Niyato, D., Lin, B., Kim, D.I.: Toward an automated auction framework for wireless federated learning services market. IEEE Trans. Mob. Comput. 20(10), 3034–3048 (2020)
Yonetani, R., Takahashi, T., Hashimoto, A., Ushiku, Y.: Decentralized learning of generative adversarial networks from non-IID data. arXiv preprint arXiv:1905.09684 (2019)
Yoon, T., Shin, S., Hwang, S.J., Yang, E.: FedMix: approximation of mixup under mean augmented federated learning. arXiv preprint arXiv:2107.00233 (2021)
Acknowledgements
This work is supported by the National Natural Science Foundation of China (Grant No. 61973214), the Shandong Provincial Natural Science Foundation (Grant No. ZR2020MF069), and the Shandong Provincial Postdoctoral Innovation Project (Grant No. 202003005).
Copyright information
© 2024 The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd.
Cite this paper
Cao, M., Zhao, M., Zhang, T., Yu, N., Lu, J. (2024). FedQL: Q-Learning Guided Aggregation for Federated Learning. In: Tari, Z., Li, K., Wu, H. (eds) Algorithms and Architectures for Parallel Processing. ICA3PP 2023. Lecture Notes in Computer Science, vol 14487. Springer, Singapore. https://doi.org/10.1007/978-981-97-0834-5_16
Print ISBN: 978-981-97-0833-8
Online ISBN: 978-981-97-0834-5