
Hierarchical Adaptive Collaborative Learning: A Distributed Learning Framework for Customized Cloud Services in 6G Mobile Systems

Published: 01 March 2023

Abstract

Fifth Generation (5G) mobile systems support many kinds of intelligent cloud services. The upcoming Sixth Generation (6G) mobile systems aim to provide customized cloud services, since users have different capabilities and needs. However, collaborative AI, an essential scenario in 6G mobile systems, fails to deliver customized cloud services to users because of system heterogeneity. Lacking efficient resource coordination and service orchestration strategies, collaborative learning suffers from high latency and low resource utilization. This article proposes Hierarchical Adaptive Collaborative Learning, a distributed learning framework for customized cloud services in 6G mobile systems, which improves training efficiency and resource utilization by dynamically adjusting the collaborative training process according to the service and the available resources. We first introduce the key scenarios and challenges of distributed learning. We then present our framework for handling heterogeneous cloud services and resources, and illustrate various model training and collaboration techniques that improve training efficiency. Finally, we provide a case study demonstrating the effectiveness of the modules in our framework, and outline future research directions on collaborative learning for customized cloud services in 6G mobile systems.
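To make the idea of "dynamically adjusting the collaborative training process according to the service and the available resources" concrete, the sketch below shows one simple form such adjustment can take: a coordinator that splits each training round's workload across heterogeneous nodes in proportion to their reported compute capacity, so slower edge devices do not stall the round. This is an illustrative sketch only, not the authors' algorithm; the `Node` type, its `capacity` metric, and the node names are hypothetical.

```python
# Illustrative sketch (not the framework's actual algorithm): proportional,
# capacity-aware workload assignment for one collaborative-training round.
from dataclasses import dataclass

@dataclass
class Node:
    name: str
    capacity: float  # hypothetical relative compute capacity

def assign_batches(nodes, total_batches):
    """Split a round's training batches across nodes in proportion to
    capacity; leftover batches from rounding go to the fastest nodes."""
    total_cap = sum(n.capacity for n in nodes)
    shares = [int(total_batches * n.capacity / total_cap) for n in nodes]
    leftover = total_batches - sum(shares)
    # hand out rounding remainders to the highest-capacity nodes first
    order = sorted(range(len(nodes)), key=lambda i: -nodes[i].capacity)
    for i in order[:leftover]:
        shares[i] += 1
    return dict(zip((n.name for n in nodes), shares))

# Example: two edge devices and one cloud server sharing 16 batches.
nodes = [Node("edge-a", 1.0), Node("edge-b", 2.0), Node("cloud", 5.0)]
print(assign_batches(nodes, 16))  # → {'edge-a': 2, 'edge-b': 4, 'cloud': 10}
```

In a full framework the capacities would be measured and updated per round (and latency, energy, and link bandwidth would also enter the decision), but even this static split captures the core straggler-mitigation intuition behind resource-aware orchestration.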


Cited By

  • "Poster: Flexible Scheduling of Network and Computing Resources for Distributed AI Tasks," Proc. ACM SIGCOMM 2024 Conference: Posters and Demos, pp. 60–62, Aug. 2024, doi:10.1145/3672202.3673744.
  • "FedREAS: A Robust Efficient Aggregation and Selection Framework for Federated Learning," ACM Trans. Asian and Low-Resource Language Information Processing, June 2024, doi:10.1145/3670689.


Published In

IEEE Network: The Magazine of Global Internetworking, Volume 37, Issue 2
March/April 2023
299 pages

Publisher

IEEE Press


Qualifiers

  • Research-article
