
Towards energy-aware federated learning on battery-powered clients

Published: 22 November 2022

Abstract

Federated learning (FL) is a recently emerged branch of AI that enables edge devices to collaboratively train a global machine learning model without centralizing data, providing privacy by default. Despite this remarkable advancement, the paradigm comes with various challenges. In large-scale deployments, client heterogeneity is the norm, and it degrades training quality in terms of accuracy, fairness, and training time. Moreover, energy consumption across these battery-constrained devices is largely unexplored and limits the wide adoption of FL. To address this issue, we develop EAFL, an energy-aware FL client selection method that accounts for energy consumption to maximize the participation of heterogeneous target devices. EAFL is a power-aware training algorithm that cherry-picks clients with higher battery levels while maximizing system efficiency. Our design jointly minimizes the time-to-accuracy and maximizes the remaining on-device battery levels. EAFL improves the test accuracy by up to 85% and decreases client drop-out by up to 2.45X.
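The abstract does not give EAFL's exact selection rule, so the following is only an illustrative sketch of the general idea it describes: filtering out clients likely to drop out and ranking the rest by a blend of training utility and remaining battery. All names (`select_clients`, the `battery`/`utility` fields, the `alpha` trade-off weight) are hypothetical, not taken from the paper.

```python
def select_clients(clients, k, min_battery=0.2, alpha=0.5):
    """Pick k participants, favoring clients with higher battery levels.

    clients: list of dicts with hypothetical keys 'id', 'battery' (0..1),
             and 'utility' (e.g., a loss-based statistical utility in 0..1).
    alpha:   trade-off between training utility and remaining battery.
    """
    # Exclude clients so low on charge they would likely drop out mid-round.
    eligible = [c for c in clients if c["battery"] >= min_battery]
    # Score blends task utility with remaining charge; both lie in [0, 1].
    scored = sorted(
        eligible,
        key=lambda c: alpha * c["utility"] + (1 - alpha) * c["battery"],
        reverse=True,
    )
    return [c["id"] for c in scored[:k]]

pool = [
    {"id": "a", "battery": 0.9, "utility": 0.3},
    {"id": "b", "battery": 0.1, "utility": 0.9},  # filtered out: low battery
    {"id": "c", "battery": 0.6, "utility": 0.7},
]
print(select_clients(pool, k=2))  # → ['c', 'a']
```

Note how client "b", despite the highest utility, is excluded outright: a battery-aware selector trades some statistical utility for fewer mid-round drop-outs, which is the tension the paper's joint time-to-accuracy/battery objective targets.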




Published In

FedEdge '22: Proceedings of the 1st ACM Workshop on Data Privacy and Federated Learning Technologies for Mobile Edge Network
October 2022
34 pages
ISBN: 9781450395212
DOI: 10.1145/3556557
Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than the author(s) must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected].


Publisher

Association for Computing Machinery

New York, NY, United States


Author Tags

  1. efficiency
  2. energy
  3. federated learning
  4. heterogeneity

Qualifiers

  • Research-article

Conference

ACM MobiCom '22


Cited By

  • (2024) Evaluating Federated Learning Simulators: A Comparative Analysis of Horizontal and Vertical Approaches. Sensors 24(16), 5149. DOI: 10.3390/s24165149. Online publication date: 9-Aug-2024.
  • (2024) A survey of energy-efficient strategies for federated learning in mobile edge computing. Frontiers of Information Technology & Electronic Engineering 25(5), 645-663. DOI: 10.1631/FITEE.2300181. Online publication date: 7-Jun-2024.
  • (2024) Blockchain-based federal learning program for drone safety. International Conference on Computer Network Security and Software Engineering (CNSSE 2024), 8. DOI: 10.1117/12.3031895. Online publication date: 6-Jun-2024.
  • (2024) Resource-Aware Split Federated Learning for Edge Intelligence. 2024 IEEE 3rd Workshop on Machine Learning on Edge in Sensor Systems (SenSys-ML), 15-20. DOI: 10.1109/SenSys-ML62579.2024.00008. Online publication date: 13-May-2024.
  • (2024) Performance Profiling of Federated Learning Across Heterogeneous Mobile Devices. 2024 IEEE 24th International Conference on Software Quality, Reliability, and Security Companion (QRS-C), 363-372. DOI: 10.1109/QRS-C63300.2024.00053. Online publication date: 1-Jul-2024.
  • (2024) CC-FedAvg: Computationally Customized Federated Averaging. IEEE Internet of Things Journal 11(3), 4826-4841. DOI: 10.1109/JIOT.2023.3300080. Online publication date: 1-Feb-2024.
  • (2024) Federated Learning-Empowered Mobile Network Management for 5G and Beyond Networks: From Access to Core. IEEE Communications Surveys & Tutorials 26(3), 2176-2212. DOI: 10.1109/COMST.2024.3352910. Online publication date: Nov-2025.
  • (2024) SCALE: Self-Regulated Clustered FederAted LEarning in a Homogeneous Environment. 2024 IEEE 48th Annual Computers, Software, and Applications Conference (COMPSAC), 801-806. DOI: 10.1109/COMPSAC61105.2024.00112. Online publication date: 2-Jul-2024.
  • (2024) Federated learning energy saving through client selection. Pervasive and Mobile Computing 103(C). DOI: 10.1016/j.pmcj.2024.101948. Online publication date: 1-Oct-2024.
  • (2024) RTIFed. Computer Networks: The International Journal of Computer and Telecommunications Networking 241(C). DOI: 10.1016/j.comnet.2024.110192. Online publication date: 25-Jun-2024.
