
EMG-YOLO: An efficient fire detection model for embedded devices

Published: 01 January 2025

Abstract

The number of edge embedded devices has been increasing with the development of Internet of Things (IoT) technology. In urban fire detection, improving detection accuracy on embedded devices demands substantial computational resources, which sharpens the conflict between the high precision required for fire detection and the limited computational capabilities of many embedded devices. To address this issue, this paper introduces a fire detection algorithm named EMG-YOLO, which aims to improve both the accuracy and the efficiency of fire detection on embedded devices with limited computational resources. First, a Multi-scale Attention Module (MAM) is proposed that integrates multi-scale information to enhance feature representation. Second, a novel Efficient Multi-scale Convolution Module (EMCM) is incorporated into the C2f structure to strengthen the extraction of flame and smoke features, providing additional feature information without increasing computational complexity. Third, a Global Feature Pyramid Network (GFPN) is integrated into the model neck to further improve computational efficiency and mitigate information loss. Finally, the model is pruned with a slimming algorithm to meet the deployment constraints of mobile embedded devices. Experimental results on customized flame and smoke datasets show that, relative to YOLOv8-n, EMG-YOLO increases mAP@50 by 3.2%, reduces the number of parameters by 53.5%, and lowers GFLOPs to 49.8% of the baseline. These results indicate that EMG-YOLO substantially reduces computational requirements while improving fire detection accuracy, making it well suited to practical deployment on resource-constrained embedded devices.
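The abstract does not reproduce the pruning procedure in detail. The sketch below assumes the standard network-slimming recipe (an L1 penalty on BatchNorm scale factors during sparsity training, followed by removing channels whose scale factors fall below a global threshold); it is written in PyTorch with illustrative names such as bn_l1_penalty and select_prune_channels, and is not the authors' actual implementation.

import torch
import torch.nn as nn

def bn_l1_penalty(model: nn.Module, weight: float = 1e-4) -> torch.Tensor:
    # L1 sparsity term on BatchNorm scale factors (gamma), added to the detection
    # loss during sparsity training so that unimportant channels shrink toward zero.
    penalty = torch.zeros((), device=next(model.parameters()).device)
    for m in model.modules():
        if isinstance(m, nn.BatchNorm2d):
            penalty = penalty + m.weight.abs().sum()
    return weight * penalty

def select_prune_channels(model: nn.Module, prune_ratio: float = 0.5) -> dict:
    # After sparsity training, pool all |gamma| values, set a global threshold at
    # the requested prune ratio, and mark the channels to keep in each BN layer.
    gammas = torch.cat([m.weight.detach().abs().flatten()
                        for m in model.modules()
                        if isinstance(m, nn.BatchNorm2d)])
    threshold = torch.quantile(gammas, prune_ratio)
    return {name: m.weight.detach().abs() > threshold
            for name, m in model.named_modules()
            if isinstance(m, nn.BatchNorm2d)}

Under these assumptions, the training objective would be the detection loss plus bn_l1_penalty(model); the per-layer masks returned by select_prune_channels would then guide rebuilding a narrower network that is fine-tuned before deployment.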

Highlights

The proposed method improves smoke and flame detection accuracy for non-uniform shapes and different scales.
Two innovative modules are proposed to mitigate the problem of poor feature extraction for flame and smoke targets.
The real-time performance of fire detection models on mobile embedded devices is improved.
A new fire detection dataset is constructed, addressing the current shortage of datasets in the field.


Published In

Digital Signal Processing, Volume 156, Issue PB, January 2025, 593 pages

Publisher

Academic Press, Inc., United States

Author Tags

  1. Fire detection
  2. Multi-scale attention module
  3. Efficient multi-scale convolution module
  4. GFPN
  5. Slimming

Qualifiers

  • Research-article
