Research article · DOI: 10.5555/3692070.3694040 · ICML Conference Proceedings

Ranking-based client imitation selection for efficient federated learning

Published: 21 July 2024

Abstract

Federated Learning (FL) enables multiple devices to collaboratively train a shared model while preserving data privacy. The selection of participating devices in each training round critically affects both model performance and training efficiency, especially given the vast heterogeneity in training capability and data distribution across devices. To address these challenges, we introduce FedRank, a novel device selection solution based on an end-to-end, ranking-based model that is pre-trained by imitation learning against state-of-the-art analytical approaches. It not only accounts for data and system heterogeneity at runtime but also adaptively and efficiently chooses the most suitable clients for model training. Specifically, FedRank casts client selection in FL as a ranking problem and employs a pairwise training strategy for the selection process. In addition, an imitation learning-based approach is designed to counteract the cold-start issue common in state-of-the-art learning-based approaches. Experimental results show that FedRank boosts model accuracy by 5.2% to 56.9%, accelerates training convergence by up to 2.01×, and reduces energy consumption by up to 40.1%.
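The core idea the abstract describes (scoring clients with a pairwise, RankNet-style loss, using an analytical selector as the imitation teacher) can be sketched as follows. This is an illustrative toy, not the paper's implementation: the client features, teacher preference labels, scorer, and learning rate below are all hypothetical placeholders.

```python
import math

def score(w, x):
    """Linear utility score for a client's feature vector (a stand-in
    for the paper's learned ranking model)."""
    return sum(wi * xi for wi, xi in zip(w, x))

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def pairwise_step(w, xi, xj, label, lr=0.5):
    """One SGD step on the pairwise ranking loss.

    P(i ranked above j) = sigmoid(s_i - s_j); the binary label
    (1.0 if the teacher ranks client i above client j) plays the
    role of the imitation target.
    """
    p = sigmoid(score(w, xi) - score(w, xj))
    loss = -math.log(p) if label == 1.0 else -math.log(1.0 - p)
    grad = [(p - label) * (a - b) for a, b in zip(xi, xj)]
    return [wi - lr * gi for wi, gi in zip(w, grad)], loss

# Hypothetical per-client features, e.g. (statistical utility, latency).
clients = {"A": [0.9, 0.2], "B": [0.4, 0.5], "C": [0.1, 0.8]}
# Teacher (an analytical selector) prefers A over B over C.
pairs = [("A", "B", 1.0), ("B", "C", 1.0), ("A", "C", 1.0)]

w = [0.0, 0.0]
for _ in range(200):
    for i, j, y in pairs:
        w, loss = pairwise_step(w, clients[i], clients[j], y)

ranked = sorted(clients, key=lambda c: score(w, clients[c]), reverse=True)
print(ranked)  # learned order should match the teacher: ['A', 'B', 'C']
```

The pairwise formulation only needs relative preferences between clients, which is what makes imitating an analytical selector tractable: the teacher never has to emit calibrated utility values, only orderings.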


Published In

ICML'24: Proceedings of the 41st International Conference on Machine Learning
July 2024
63010 pages

Publisher

JMLR.org


Qualifiers

  • Research-article
  • Research
  • Refereed limited

Acceptance Rates

Overall Acceptance Rate 140 of 548 submissions, 26%
