Cited By
Piccialli F., Chiaro D., Qi P., Bellandi V., Damiani E. (2025). Federated and edge learning for large language models. Information Fusion, 117, 102840. https://doi.org/10.1016/j.inffus.2024.102840. Online publication date: May 2025.
On-device Deep Neural Network (DNN) training has been recognized as crucial for privacy-preserving machine learning at the edge. However, the intensive training workload and the limited onboard computing resources pose significant challenges to the ...
Deep Neural Networks (DNNs) have become increasingly computationally intensive, with ever-larger parameter counts, requiring efficient parallelization or distribution across multiple accelerators. Pipeline parallelism has been proposed as an effective way to ...
Pipeline parallelism organizes a parallel program as a linear sequence of s stages. Each stage processes elements of a data stream, passing each processed element to the next stage, and then taking on a new element before the subsequent stages have ...
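The stage structure described in this excerpt can be illustrated with a minimal sketch. This is not the implementation from any of the cited works; it is a hypothetical thread-and-queue pipeline in which each stage consumes an element, processes it, and passes it downstream, so stages can overlap work on different elements:

```python
import queue
import threading

# Hypothetical sketch of pipeline parallelism: each stage is a thread that
# reads from an input queue, applies its function, and writes to an output
# queue, so successive stages can process different elements concurrently.

def make_stage(fn, q_in, q_out):
    def run():
        while True:
            item = q_in.get()
            if item is None:          # sentinel: propagate shutdown downstream
                q_out.put(None)
                break
            q_out.put(fn(item))
    return threading.Thread(target=run)

def run_pipeline(stage_fns, items):
    # One queue between each pair of adjacent stages, plus entry and exit.
    qs = [queue.Queue() for _ in range(len(stage_fns) + 1)]
    threads = [make_stage(fn, qs[i], qs[i + 1])
               for i, fn in enumerate(stage_fns)]
    for t in threads:
        t.start()
    for x in items:                   # feed the stream into the first stage
        qs[0].put(x)
    qs[0].put(None)                   # end-of-stream sentinel
    results = []
    while (out := qs[-1].get()) is not None:
        results.append(out)
    for t in threads:
        t.join()
    return results

# Example: a 3-stage pipeline, each stage transforming the element it receives.
print(run_pipeline([lambda x: x + 1, lambda x: x * 2, str], [1, 2, 3]))
# → ['4', '6', '8']
```

Because each stage is a single thread reading from a FIFO queue, element order is preserved end to end; real pipeline-parallel training systems apply the same structure to model partitions and micro-batches rather than simple functions.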
Association for Computing Machinery
New York, NY, United States