Construction of a Deep Learning Model for Unmanned Aerial Vehicle-Assisted Safe Lightweight Industrial Quality Inspection in Complex Environments
Figure 1. Example of a workplace for UAV-assisted industrial quality inspection.
Figure 2. The architecture of CP-FL.
Figure 3. Algorithm flow diagram.
Figure 4. Schematic of the knowledge distillation and federated learning processes. The teacher model generates soft labels for training the student model; local updates are performed on the clients, and the updated parameters are aggregated globally by the server.
Figure 5. Diagram of the ConvNeXt network model (depthwise separable convolution decouples the fusion of spatial information from the fusion of channel information, expanding the overall width of the model).
Figure 6. Impact of different schemes on the performance of ResNet on MNIST.
Figure 7. Impact of different schemes on the performance of ResNet on CIFAR-10.
Figure 8. Performance test of models with different pruning rates on the MNIST dataset.
Figure 9. Performance test of models with different pruning rates on the CIFAR-10 dataset.
Figure 10. Layer-wise quantization bit statistics for VGG16.
Figure 11. Layer-wise compression ratio statistics for VGG16.
Figure 12. Classification accuracy of the CP-FL framework for participant models on the MNIST dataset.
Figure 13. Classification accuracy of the CP-FL framework for participant models on the DTB70 dataset.
Figure 14. Variation in accuracy on the CIFAR-10 dataset.
Figure 15. Variation in the loss function on the CIFAR-10 dataset.
Figure 16. Loss function vs. rounds for compression methods on MNIST.
Figure 17. Loss function vs. rounds for compression methods on CIFAR-10.
Abstract
1. Introduction
- (1)
- We propose CP-FL, a method that addresses privacy, utility, and communication-efficiency concerns by reducing the volume of data transmitted on both the uplink and downlink during model training and by adding noise to the parameters that edge servers upload to the central server.
- (2)
- A network-sparsification pruning training method based on channel importance is introduced, converting the pruning process into a constrained optimization problem. Additionally, a quantization-aware training method is proposed to enable automated learning of quantization bitwidth, enhancing the adaptability between feature representation and data precision.
- (3)
- To further enhance privacy, after model parameter pruning and quantization, we employ differential privacy techniques to protect users’ uploaded data. Following the aggregation of model parameters at the central server, knowledge distillation is applied to the model, reducing the amount of data transmitted downstream without compromising utility.
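The differential-privacy step in contribution (3) can be sketched with the standard Gaussian mechanism: clip each uploaded update to bound its sensitivity, then add calibrated noise. This is a minimal illustration only; the clipping bound, epsilon, and delta below are arbitrary placeholders, not the paper's settings.

```python
import numpy as np

def dp_protect_update(update, clip_norm=1.0, epsilon=1.0, delta=1e-5, rng=None):
    """Clip a model update to bound its L2 sensitivity, then add Gaussian
    noise calibrated for (epsilon, delta)-differential privacy.
    All parameter names and defaults are illustrative, not the paper's."""
    rng = rng if rng is not None else np.random.default_rng()
    update = np.asarray(update, dtype=float)
    # Scale down so the L2 norm never exceeds clip_norm
    clipped = update / max(1.0, np.linalg.norm(update) / clip_norm)
    # Classic Gaussian-mechanism noise scale for (epsilon, delta)-DP
    sigma = clip_norm * np.sqrt(2.0 * np.log(1.25 / delta)) / epsilon
    return clipped + rng.normal(0.0, sigma, size=update.shape)

noisy_update = dp_protect_update([3.0, 4.0])
```

Smaller epsilon means more noise and stronger privacy; the paper's actual noise schedule may differ from this fixed-budget sketch.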
2. Related Work
2.1. UAV-Assisted Federated Learning Algorithms
2.2. Model Compression and Quantization
2.3. Federated Learning with Differential Privacy
2.4. Federated Learning Combined with Knowledge Distillation
3. Problem Statement and Proposed Scheme
3.1. Threat Modeling and Designing Program Goals
3.1.1. System Architecture
3.1.2. Threat Model
3.1.3. Design Objectives
3.2. Framework Design
- (1)
- Download global model: the follower UAVs download the latest global model from the pilot UAV and use it as the initial local model;
- (2)
- Local model training: each follower UAV receives training data from smart devices in its coverage area and trains the local model by stochastic gradient descent with the local learning rate and loss function; when the prescribed local iterations are completed, training stops and the result is the updated local model;
- (3)
- Upload local model: the UAV node processes the local model and uploads the model to the pilot UAV for model aggregation;
- (4)
- Global model aggregation: the pilot UAV aggregates the received local models with the federated averaging algorithm, weighting each by its sample count to obtain a new global model, which is then compressed by knowledge distillation.
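The four steps above can be sketched end to end. This is a toy illustration with a least-squares task standing in for the paper's quality-inspection loss; the learning rate, epoch count, and four-UAV setup are assumptions, not the paper's configuration.

```python
import numpy as np

def local_sgd(global_params, X, y, lr=0.1, epochs=100):
    """Step (2): a follower UAV initializes from the downloaded global
    model and runs gradient descent on a least-squares stand-in loss."""
    w = global_params.copy()
    for _ in range(epochs):
        grad = X.T @ (X @ w - y) / len(y)  # gradient of 0.5 * mean((Xw - y)^2)
        w -= lr * grad
    return w

def fed_avg(local_models, sample_counts):
    """Step (4): the pilot UAV aggregates local models, weighting each
    by its share of the total training samples (federated averaging)."""
    weights = np.asarray(sample_counts, dtype=float)
    weights /= weights.sum()
    return sum(wk * mk for wk, mk in zip(weights, local_models))

rng = np.random.default_rng(0)
true_w = np.array([1.0, -2.0, 0.5])
global_w = np.zeros(3)                     # step (1): initial global model
local_models, counts = [], []
for _ in range(4):                         # four follower UAVs (illustrative)
    X = rng.normal(size=(50, 3))
    y = X @ true_w + 0.01 * rng.normal(size=50)
    local_models.append(local_sgd(global_w, X, y))  # steps (2)-(3)
    counts.append(len(y))
global_w = fed_avg(local_models, counts)   # step (4): new global model
```

In CP-FL the uploaded models in step (3) are additionally pruned, quantized, and noised before aggregation; this sketch shows only the bare federated-averaging loop.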
3.3. Model Pruning and Quantization
3.4. Protection of Model Parameters
3.5. Model Aggregation and Knowledge Distillation on Pilot UAVs
3.6. Security Analysis
4. Experiments
4.1. Experimental Environment and Setup
- (a)
- Federated Averaging (FedAvg) [33]: The federated averaging algorithm is a classic benchmark in federated learning. Edge servers update the issued model with their local data and upload it back to the central server, which aggregates the collected models using a weighted average based on each party's sample count to obtain the model for the next round.
- (b)
- FedProx [34]: FedProx allows each participant to perform a variable amount of work subject to device-level system constraints; when the training data are heterogeneous, it has demonstrated stronger convergence than FedAvg on a set of real federated datasets.
- (c)
- FedDrop [35]: Building on the classic dropout scheme, FedDrop performs stochastic model pruning for federated learning. In each round, the server uses dropout to independently generate several subnets from the global model; each subnet is adapted to the assigned channel state and downloaded to the associated device for updating.
- (d)
- UDP [36]: This is a localized differentially private federated learning method based on the Gaussian mechanism, with an adaptive clipping-threshold strategy. UDP protects the privacy of each participant's local data in every round of global communication by fixing the privacy budget, while the noise added per round varies adaptively.
- (e)
- DP-SCAFFOLD [37]: This is a representative Gaussian-mechanism-based federated learning method that takes the median of the gradient norms in each local training round as the clipping threshold. The noise added to the model gradients uploaded by all participants is fixed across rounds of global communication.
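Among these baselines, FedProx's defining mechanism (per [34]) is a proximal penalty added to each client's local objective. A minimal sketch of one local step, with illustrative values for the learning rate and the proximal coefficient mu:

```python
import numpy as np

def fedprox_local_step(w, w_global, task_grad, lr=0.1, mu=0.1):
    """One local update under FedProx's objective
    F_k(w) + (mu/2) * ||w - w_global||^2: the proximal term pulls each
    heterogeneous client back toward the current global model.
    lr and mu here are illustrative, not the values used in the paper."""
    return w - lr * (task_grad + mu * (w - w_global))

w_global = np.array([1.0, 1.0])
w_local = np.array([3.0, -1.0])
# Even with a zero task gradient, the proximal term nudges the
# drifted local model back toward the global one.
w_next = fedprox_local_step(w_local, w_global, task_grad=np.zeros(2))
```

Setting mu = 0 recovers a plain FedAvg-style local SGD step, which is why FedProx degrades gracefully when clients are homogeneous.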
4.2. Knowledge Distillation Temperature Experiment
4.3. Model Performance Testing
4.4. Performance Testing Under Different Pruning Rates
4.5. Compression Effect Comparison
4.6. Privacy and Usability Testing
4.7. Ablation Experiment
5. Discussion
6. Conclusions
Author Contributions
Funding
Data Availability Statement
Acknowledgments
Conflicts of Interest
References
- Klaib, A.F.; Alsrehin, N.O.; Melhem, W.Y.; Bashtawi, H.O.; Magableh, A.A. Eye tracking algorithms, techniques, tools, and applications with an emphasis on machine learning and Internet of Things technologies. Expert Syst. Appl. 2021, 166, 114037. [Google Scholar] [CrossRef]
- Zhu, J.; Cao, J.; Saxena, D.; Jiang, S.; Ferradi, H. Blockchain-empowered federated learning: Challenges, solutions, and future directions. ACM Comput. Surv. 2023, 55, 1–31. [Google Scholar] [CrossRef]
- Liu, Y.; Wang, J.; Li, J.; Niu, S.; Song, H. Machine learning for the detection and identification of Internet of Things devices: A survey. IEEE Internet Things J. 2021, 9, 298–320. [Google Scholar] [CrossRef]
- Ma, Z.; Ma, J.; Miao, Y.; Li, Y.; Deng, R.H. ShieldFL: Mitigating Model Poisoning Attacks in Privacy-Preserving Federated Learning. IEEE Trans. Inf. Forensics Secur. 2022, 17, 1639–1654. [Google Scholar] [CrossRef]
- Ghimire, B.; Rawat, D.B. Recent advances on federated learning for cybersecurity and cybersecurity for federated learning for internet of things. IEEE Internet Things J. 2022, 9, 8229–8249. [Google Scholar] [CrossRef]
- Jiang, M.; Wang, Z.; Dou, Q. Harmofl: Harmonizing local and global drifts in federated learning on heterogeneous medical images. Proc. AAAI Conf. Artif. Intell. 2022, 36, 1087–1095. [Google Scholar] [CrossRef]
- Li, H.; Zhang, J.; Li, Z.; Liu, J.; Wang, Y. Improvement of min-entropy evaluation based on pruning and quantized deep neural network. IEEE Trans. Inf. Forensics Secur. 2023, 18, 1410–1420. [Google Scholar] [CrossRef]
- Yi, M.K.; Lee, W.K.; Hwang, S.O. A human activity recognition method based on lightweight feature extraction combined with pruned and quantized CNN for wearable device. IEEE Trans. Consum. Electron. 2023, 69, 657–670. [Google Scholar] [CrossRef]
- Nguyen, D.C.; Ding, M.; Pathirana, P.N.; Seneviratne, A.; Li, J.; Poor, H.V. Federated learning for internet of things: A comprehensive survey. IEEE Commun. Surv. Tutor. 2021, 23, 1622–1658. [Google Scholar] [CrossRef]
- Wibawa, F.; Catak, F.O.; Kuzlu, M.; Sarp, S.; Cali, U. Homomorphic encryption and federated learning based privacy-preserving cnn training: Covid-19 detection use-case. In Proceedings of the 2022 European Interdisciplinary Cybersecurity Conference, Barcelona, Spain, 15–16 June 2022; pp. 85–90. [Google Scholar]
- AbdulRahman, S.; Ould-Slimane, H.; Chowdhury, R.; Mourad, A.; Talhi, C.; Guizani, M. Adaptive upgrade of client resources for improving the quality of federated learning model. IEEE Internet Things J. 2022, 10, 4677–4687. [Google Scholar] [CrossRef]
- Dai, Z.; Zhang, Y.; Zhang, W.; Luo, X.; He, Z. A multi-agent collaborative environment learning method for UAV deployment and resource allocation. IEEE Trans. Signal Inf. Process. Over Netw. 2022, 8, 120–130. [Google Scholar] [CrossRef]
- Akbari, M.; Syed, A.; Kennedy, W.S.; Erol-Kantarci, M. Constrained federated learning for AoI-limited SFC in UAV-Aided MEC for smart agriculture. IEEE Trans. Mach. Learn. Commun. Netw. 2023, 1, 277–295. [Google Scholar] [CrossRef]
- Qian, L.P.; Li, M.; Ye, P.; Wang, Q.; Lin, B.; Wu, Y.; Yang, X. Secrecy-driven energy minimization in federated learning-assisted marine digital twin networks. IEEE Internet Things J. 2023, 11, 5155–5168. [Google Scholar] [CrossRef]
- Tang, J.; Nie, J.; Zhang, Y.; Xiong, Z.; Jiang, W.; Guizani, M. Multi-UAV-assisted federated learning for energy-aware distributed edge training. IEEE Trans. Netw. Serv. Manag. 2023, 21, 280–294. [Google Scholar] [CrossRef]
- Yang, S.; He, S.; Duan, H.; Chen, W.; Zhang, X.; Wu, T.; Yin, Y. APQ: Automated DNN Pruning and Quantization for ReRAM-based Accelerators. IEEE Trans. Parallel Distrib. Syst. 2023, 34, 2498–2511. [Google Scholar] [CrossRef]
- Gonzalez-Carabarin, L.; Huijben, I.A.; Veeling, B.; Schmid, A.; van Sloun, R.J. Dynamic probabilistic pruning: A general framework for hardware-constrained pruning at different granularities. IEEE Trans. Neural Netw. Learn. Syst. 2022, 35, 733–744. [Google Scholar] [CrossRef]
- Wiedemann, S.; Kirchhoffer, H.; Matlage, S.; Haase, P.; Marban, A.; Marinc, T.; Neumann, D.; Nguyen, T.; Schwarz, H.; Wiegand, T.; et al. Deepcabac: A universal compression algorithm for deep neural networks. IEEE J. Sel. Top. Signal Process. 2020, 14, 700–714. [Google Scholar] [CrossRef]
- Marinó, G.C.; Petrini, A.; Malchiodi, D.; Frasca, M. Deep neural networks compression: A comparative survey and choice recommendations. Neurocomputing 2023, 520, 152–170. [Google Scholar] [CrossRef]
- Kirchhoffer, H.; Haase, P.; Samek, W.; Muller, K.; Rezazadegan-Tavakoli, H.; Cricri, F.; Aksu, E.B.; Hannuksela, M.M.; Jiang, W.; Wang, W.; et al. Overview of the neural network compression and representation (NNR) standard. IEEE Trans. Circuits Syst. Video Technol. 2021, 32, 3203–3216. [Google Scholar] [CrossRef]
- Giannopoulos, A.E.; Spantideas, S.T.; Zetas, M.; Nomikos, N.; Trakadas, P. FedShip: Federated Over-the-Air Learning for Communication-Efficient and Privacy-Aware Smart Shipping in 6G Communications. IEEE Trans. Intell. Transp. Syst. 2024, 99, 1–16. [Google Scholar] [CrossRef]
- Khan, L.U.; Saad, W.; Han, Z.; Hossain, E.; Hong, C.S. Federated learning for internet of things: Recent advances, taxonomy, and open challenges. IEEE Commun. Surv. Tutor. 2021, 23, 1759–1799. [Google Scholar] [CrossRef]
- Yin, X.; Zhu, Y.; Hu, J. A comprehensive survey of privacy-preserving federated learning: A taxonomy, review, and future directions. ACM Comput. Surv. 2021, 54, 1–36. [Google Scholar] [CrossRef]
- Yu, R.; Li, P. Toward resource-efficient federated learning in mobile edge computing. IEEE Netw. 2021, 35, 148–155. [Google Scholar] [CrossRef]
- Song, M.; Wang, Z.; Zhang, Z.; Song, Y.; Wang, Q.; Ren, J.; Qi, H. Analyzing user-level privacy attack against federated learning. IEEE J. Sel. Areas Commun. 2020, 38, 2430–2444. [Google Scholar] [CrossRef]
- Zhao, Y.; Zhao, J.; Yang, M.; Wang, T.; Wang, N.; Lyu, L.; Niyato, D.; Lam, K.Y. Local differential privacy based federated learning for Internet of Things. IEEE Internet Things J. 2020, 8, 8836–8853. [Google Scholar] [CrossRef]
- Zhang, Z.; Wu, L.; Ma, C.; Li, J.; Wang, J.; Wang, Q.; Yu, S. LSFL: A lightweight and secure federated learning scheme for edge computing. IEEE Trans. Inf. Forensics Secur. 2022, 18, 365–379. [Google Scholar] [CrossRef]
- Qu, L.; Song, S.; Tsui, C.Y. Feddq: Communication-efficient federated learning with descending quantization. In Proceedings of the GLOBECOM 2022—2022 IEEE Global Communications Conference, Rio de Janeiro, Brazil, 4–8 December 2022; pp. 281–286. [Google Scholar]
- Wu, Z.; Sun, S.; Wang, Y.; Liu, M.; Pan, Q.; Jiang, X.; Gao, B. Fedict: Federated multi-task distillation for multi-access edge computing. IEEE Trans. Parallel Distrib. Syst. 2023, 35, 1107–1121. [Google Scholar] [CrossRef]
- Li, Y.; Zhang, J.; Zhu, J.; Li, W. HBMD-FL: Heterogeneous federated learning algorithm based on blockchain and model distillation. In International Symposium on Emerging Information Security and Applications; Springer Nature: Cham, Switzerland, 2022; pp. 145–159. [Google Scholar]
- Zhou, X.; Zheng, X.; Cui, X.; Shi, J.; Liang, W.; Yan, Z.; Yang, L.T.; Shimizu, S.; Wang, K.I.-K. Digital twin enhanced federated reinforcement learning with lightweight knowledge distillation in mobile networks. IEEE J. Sel. Areas Commun. 2023, 41, 3191–3211. [Google Scholar] [CrossRef]
- Itahara, S.; Nishio, T.; Koda, Y.; Morikura, M.; Yamamoto, K. Distillation-based semi-supervised federated learning for communication-efficient collaborative training with non-iid private data. IEEE Trans. Mob. Comput. 2021, 22, 191–205. [Google Scholar] [CrossRef]
- McMahan, H.B.; Moore, E.; Ramage, D.; Hampson, S.; Arcas, B.A.Y. Communication-efficient learning of deep networks from decentralized data. arXiv 2016, arXiv:1602.05629. [Google Scholar]
- Li, T.; Sahu, A.K.; Zaheer, M.; Sanjabi, M.; Talwalkar, A.; Smith, V. Federated optimization in heterogeneous networks. Proc. Mach. Learn. Syst. 2020, 2, 429–450. [Google Scholar]
- Caldas, S.; Konečny, J.; McMahan, H.B.; Talwalkar, A. Expanding the Reach of Federated Learning by Reducing Client Resource Requirements. arXiv 2018, arXiv:1812.07210. [Google Scholar]
- Wei, K.; Li, J.; Ding, M.; Ma, C.; Su, H.; Zhang, B.; Poor, H.V. User-level privacy-preserving federated learning: Analysis and performance optimization. IEEE Trans. Mob. Comput. 2021, 21, 3388–3401. [Google Scholar] [CrossRef]
- Noble, M.; Bellet, A.; Dieuleveut, A. Differentially private federated learning on heterogeneous data. In Proceedings of the International Conference on Artificial Intelligence and Statistics, PMLR, Virtual Conference, 28–30 March 2022; pp. 10110–10145. [Google Scholar]
- Sun, Q.; Cao, S.; Chen, Z. Filter pruning via automatic pruning rate search. In Proceedings of the Asian Conference on Computer Vision, Macao, China, 4–8 December 2022; pp. 4293–4309. [Google Scholar]
- Lin, M.; Ji, R.; Wang, Y.; Zhang, Y.; Zhang, B.; Tian, Y.; Shao, L. Hrank: Filter pruning using high-rank feature map. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Seattle, WA, USA, 13–19 June 2020; pp. 1529–1538. [Google Scholar]
- Huang, C.; Liu, P.; Fang, L. MXQN: Mixed quantization for reducing bit-width of weights and activations in deep convolutional neural networks. Appl. Intell. 2021, 51, 4561–4574. [Google Scholar] [CrossRef]
- Gong, R.; Liu, X.; Jiang, S.; Li, T.; Hu, P.; Lin, J.; Yu, F.; Yan, J. Differentiable soft quantization: Bridging full-precision and low-bit neural networks. In Proceedings of the IEEE/CVF International Conference on Computer Vision, Seoul, Republic of Korea, 27 October–2 November 2019; pp. 4852–4861. [Google Scholar]
- Razani, R.; Morin, G.; Sari, E.; Nia, V.P. Adaptive binary-ternary quantization. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Virtual Conference, 19–25 June 2021; pp. 4613–4618. [Google Scholar]
- Li, Y.; Dong, X.; Wang, W. Additive powers-of-two quantization: An efficient non-uniform discretization for neural networks. arXiv 2019, arXiv:1909.13144. [Google Scholar]
- Dong, Z.; Yao, Z.; Gholami, A.; Mahoney, M.W.; Keutzer, K. Hawq: Hessian aware quantization of neural networks with mixed-precision. In Proceedings of the IEEE/CVF International Conference on Computer Vision, Seoul, Republic of Korea, 27 October–2 November 2019; pp. 293–302. [Google Scholar]
- Zhu, F.; Gong, R.; Yu, F.; Liu, X.; Wang, Y.; Li, Z.; Yang, X.; Yan, J. Towards unified int8 training for convolutional neural network. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Seattle, WA, USA, 13–19 June 2020; pp. 1969–1979. [Google Scholar]
- Fan, Y.; Pang, W.; Lu, S. HFPQ: Deep neural network compression by hardware-friendly pruning-quantization. Appl. Intell. 2021, 51, 7016–7028. [Google Scholar] [CrossRef]
- Yang, H.; Gui, S.; Zhu, Y.; Liu, J. Automatic neural network compression by sparsity-quantization joint learning: A constrained optimization-based approach. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Seattle, WA, USA, 13–19 June 2020; pp. 2178–2188. [Google Scholar]
- Mei, Z.; Shao, X.; Xia, Y.; Liu, J. Enhanced Fixed-time Collision-free Elliptical Circumnavigation Coordination for UAVs. IEEE Trans. Aerosp. Electron. Syst. 2024, 60, 4257–4270. [Google Scholar] [CrossRef]
| Parameter | Value |
|---|---|
| Coefficient of weight | 0.8 |
| Iteration cycle number | 100 |
| Maximum coverage | 25 |
| Minimum distance between UAVs | 10 |
| Altitude of flight | 15 |
| Bandwidth | 1 |
| Transmit power | 0.1 |
| Gain of channel | −40 |
| Power of noise | −80 |
| Scheme | 70% Pruning Rate | 80% Pruning Rate | 90% Pruning Rate |
|---|---|---|---|
| FedAvg | 80 ± 0.25 | 82 ± 0.64 | 70 ± 0.57 |
| FedProx | 78 ± 0.43 | 80 ± 0.34 | 71 ± 0.25 |
| FedDrop | 77 ± 0.62 | 78 ± 0.33 | 73 ± 0.82 |
| CP-FL | 82 ± 0.01 | 84 ± 0.69 | 76 ± 0.11 |
| Compression Method | Acc. % | Ave. Bits | Comp. Ratio |
|---|---|---|---|
| APRS | 0.9 | 32.0 | 5.1× |
| Hrank | −2.7 | 32.0 | 12.5× |
| MXQN | 0.4 | 9.0 | 3.6× |
| DSQ | −0.1 | 1.0 | 32.0× |
| SQ | 0.2 | 5.7 | 25.1× |
| DPP | 0.4 | 8.0 | 25.6× |
| HFPQ | 1.1 | 5.0 | 45.7× |
| CP-FL | 1.3 | 4.1 | 99.3× |
| Model | Compression Method | Acc. % | Ave. Bits | Comp. Ratio |
|---|---|---|---|---|
| ResNet 18 | DPP | −0.2 | 32.0 | 3.7× |
| ResNet 18 | CP-FL | 1.6 | 3.9 | 102.3× |
| ResNet 20 | DSQ | 0.5 | 1.0 | 3.6× |
| ResNet 20 | APOT | 0.6 | 2.0 | 32.0× |
| ResNet 20 | HAWQ | 0.2 | 2.1 | 25.1× |
| ResNet 20 | CP-FL | 2.1 | 4.3 | 83.5× |
| ResNet 56 | Hrank | 2.5 | 5.0 | 5.7× |
| ResNet 56 | CP-FL | 2.3 | 4.4 | 123.2× |
| ResNet 110 | APRS | −0.4 | 32.0 | 3.2× |
| ResNet 110 | Hrank | 0.9 | 32.0 | 3.2× |
| ResNet 110 | CP-FL | 1.1 | 4.0 | 110.5× |
| ConvNeXt | U.INT8 | −1.2 | 8.0 | 8× |
| ConvNeXt | CP-FL | 0.9 | 4.2 | 28.7× |
© 2024 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Share and Cite
Jing, Z.; Wang, R. Construction of a Deep Learning Model for Unmanned Aerial Vehicle-Assisted Safe Lightweight Industrial Quality Inspection in Complex Environments. Drones 2024, 8, 707. https://doi.org/10.3390/drones8120707