Vision-Based Detection of Low-Emission Sources in Suburban Areas Using Unmanned Aerial Vehicles
Figure 1. Main sources of harmful pollutants in Poland: PM2.5 and benzo[a]pyrene in 2017.
Figure 2. Example frames captured from a drone at different altitudes.
Figure 3. Block diagram of the smoke detection algorithm in stationary video sequences.
Figure 4. Motion mask generation (a–f) and final smoke region detection (g,h). (a) Input frame F(i); (b) Temporal gradient G(i); (c) Raw thresholding result M(i); (d) Motion masks after morphological processing M_m(i); (e) Contours of objects in motion; (f) Final result after contour filtering; (g) Moving objects C_m and rooftops R; (h) Final smoke areas.
Figure 5. Learning process for the rooftop training set.
Figure 6. Results obtained for the testing set.
Figure 7. Results of the roof detection algorithm under different weather conditions.
Figure 8. Learning process for the validation set using models of different complexity.
Figure 9. Smoke detection efficiency on the testing set with different YOLOv7 models and IoU values. (a) YOLOv7-x model (IoU = 0.5); (b) YOLOv7-tiny model (IoU = 0.5); (c) YOLOv7-x model (IoU = 0.1); (d) YOLOv7-tiny model (IoU = 0.1).
Figure 10. Example detection results for the YOLOv7-x model. (a) Detections from DJI drones; (b) Detections from Xiaomi Mi Drone video; (c) Detections for oblique flight.
Abstract
1. Introduction
1.1. Air Pollution and Its Impact on Our Health
1.2. Vision-Based Smoke Detection Techniques
1.2.1. Smoke Detection from Static Cameras
- Detection of chrominance changes—smoke usually has a lower chrominance value [7];
- Estimation and background subtraction using Gaussian Mixture Model (GMM) algorithms [8];
- Optical flow-based techniques [10].
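The background-subtraction cue above can be illustrated with a minimal sketch. The snippet below is a deliberate single-Gaussian simplification of the GMM approach cited in [8] (a full MoG keeps several Gaussians per pixel): each pixel maintains a running mean and variance of the background, and pixels deviating by more than k standard deviations are flagged as candidate motion/smoke. All names and parameter values are illustrative, not taken from the paper.

```python
import numpy as np

def update_background(mean, var, frame, alpha=0.05, k=2.5):
    """Single-Gaussian background model (simplified stand-in for a full GMM).

    mean, var : per-pixel running mean and variance of the background
    frame     : current grayscale frame as a float array
    alpha     : learning rate for the background update
    k         : foreground threshold in standard deviations
    Returns (foreground_mask, new_mean, new_var).
    """
    diff = np.abs(frame - mean)
    foreground = diff > k * np.sqrt(var)
    # Update the model only where the pixel still looks like background,
    # so a smoke plume does not get absorbed into the background model.
    upd = ~foreground
    new_mean = np.where(upd, (1 - alpha) * mean + alpha * frame, mean)
    new_var = np.where(upd, (1 - alpha) * var + alpha * diff ** 2, var)
    return foreground, new_mean, new_var
```

In practice, OpenCV's `BackgroundSubtractorMOG2` implements the multi-Gaussian version of this idea.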
1.2.2. Smoke Detection Using Unmanned Aerial Vehicles
2. Detecting Smoke from Low-Emission Sources in Aerial Photography
2.1. Choosing a UAV Platform
2.2. Image Acquisition—Flight Plan
- Type of data acquired—video in motion, stationary video, static images (also orthophotos acquired from photogrammetric flights);
- The angle and tilt of the photos: vertical, near-vertical, tilted, and perspective;
- Acquisition band—RGB, multi/hyperspectral, and thermal imaging;
- Altitude and flight range.
3. Smoke Detection Algorithm in Stationary Video Sequences
3.1. Motion Masks Detection
3.1.1. Preprocessing and Digital Video Stabilization
3.1.2. Motion Masks from Gradients
3.1.3. Motion Masks Postprocessing
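The motion-mask stages named above (input frame F(i), temporal gradient G(i), raw thresholded mask M(i), morphologically cleaned mask M_m(i), as labelled in Figure 4) can be sketched as follows. The threshold value and the single-pass opening are illustrative choices, not the paper's exact parameters, and border pixels are handled loosely for brevity.

```python
import numpy as np

def erode(mask):
    """Binary erosion with a 4-neighbourhood cross kernel (loose borders)."""
    out = mask.copy()
    out[1:, :] &= mask[:-1, :]
    out[:-1, :] &= mask[1:, :]
    out[:, 1:] &= mask[:, :-1]
    out[:, :-1] &= mask[:, 1:]
    return out

def dilate(mask):
    """Binary dilation with a 4-neighbourhood cross kernel."""
    out = mask.copy()
    out[1:, :] |= mask[:-1, :]
    out[:-1, :] |= mask[1:, :]
    out[:, 1:] |= mask[:, :-1]
    out[:, :-1] |= mask[:, 1:]
    return out

def motion_mask(prev_frame, curr_frame, thresh=15):
    """One step of the gradient-based motion-mask pipeline."""
    g = np.abs(curr_frame.astype(float) - prev_frame.astype(float))  # temporal gradient G(i)
    m = g > thresh                                                   # raw thresholded mask M(i)
    return dilate(erode(m))  # morphological opening removes isolated noise -> M_m(i)
```

The opening (erosion followed by dilation) suppresses single-pixel flicker while preserving compact moving regions such as smoke plumes.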
Algorithm 1 Motion object contour processing
Input: M_m(i), binary mask of objects in motion; H, W, height and width of the image
Output: C_m, list of final objects in motion
1:  function filterMotionMasks(M_m(i))
2:      a_min ← min. contour area
3:      a_max ← max. contour area
4:      d_min ← min. distance for contour merging
5:      C ← findContours(M_m(i))                  ▹ motion mask contours and object labeling
6:      C ← filterContoursByArea(C, a_min, a_max) ▹ filtering objects of extreme sizes
7:      C ← mergeCloseContours(C, d_min)          ▹ merging close objects using Euclidean distance
8:      C_m ← ∅
9:      for c ∈ C do
10:         c ← convexHull(c)                     ▹ convex hulls of objects
11:         C_m.append(c)
12:     end for
13:     return C_m                                ▹ filtered moving objects
14: end function
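The contour-processing steps of Algorithm 1 can be sketched in plain Python. Since the paper's threshold values are not preserved here, the area fractions and merge distance below are illustrative, axis-aligned bounding boxes stand in for full contours (which also makes the convex-hull step unnecessary, as boxes are already convex), and `filter_motion_masks` is this sketch's name, not the paper's implementation.

```python
import math

def _area(b):
    # b is an axis-aligned box (x1, y1, x2, y2)
    return (b[2] - b[0]) * (b[3] - b[1])

def _center_dist(a, b):
    ax, ay = (a[0] + a[2]) / 2, (a[1] + a[3]) / 2
    bx, by = (b[0] + b[2]) / 2, (b[1] + b[3]) / 2
    return math.hypot(ax - bx, ay - by)

def filter_motion_masks(boxes, h, w, a_min_frac=1e-3, a_max_frac=0.25, d_min=20):
    """Bounding-box sketch of Algorithm 1: drop objects of extreme size,
    then merge objects whose centres are closer than d_min."""
    a_min, a_max = a_min_frac * h * w, a_max_frac * h * w
    kept = [b for b in boxes if a_min <= _area(b) <= a_max]
    # Repeatedly merge the first pair closer than d_min (Euclidean distance
    # between centres), replacing the pair with its union box.
    while True:
        for i in range(len(kept)):
            for j in range(i + 1, len(kept)):
                if _center_dist(kept[i], kept[j]) < d_min:
                    a, b = kept[i], kept[j]
                    union = (min(a[0], b[0]), min(a[1], b[1]),
                             max(a[2], b[2]), max(a[3], b[3]))
                    kept = [kept[k] for k in range(len(kept)) if k not in (i, j)]
                    kept.append(union)
                    break
            else:
                continue
            break
        else:
            break
    return kept
```

With OpenCV available, `cv2.findContours` and `cv2.convexHull` would replace the bounding-box shortcut and operate on the actual contour polygons.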
3.2. Rooftop Area Detection
Training Set for Rooftop Detection
4. Final Smoke Detection and Smoke Training Set
Training the YOLO Smoke Detector
5. Summary and Future Works
Funding
Data Availability Statement
Conflicts of Interest
Abbreviations
ECC | Enhanced Correlation Coefficient Maximization |
EEA | European Environment Agency |
DCT | Discrete Cosine Transform |
DWT | Discrete Wavelet Transform |
DGPS | Differential GPS |
GPS | Global Positioning System |
IoU | Intersection over Union |
KOBiZE | Krajowy Ośrodek Bilansowania i Zarządzania Emisjami (The National Centre for Emissions Management) |
LBP | Local Binary Patterns |
MOG | Mixture of Gaussian |
MP | megapixel |
RTK | Real-Time Kinematics |
SoR | Smoke-over-Roofs ratio |
UAV | Unmanned Aerial Vehicle |
References
- Ortiz, A.G.; Guerreiro, C.; Soares, J. EEA Report No 09/2020 (Air Quality in Europe 2020); Annual Report; The European Environment Agency: Copenhagen, Denmark, 2020.
- Program PAS dla Czystego Powietrza w Polsce. Presentation, Polish Smog Alert (PAS), 2020. Available online: https://polskialarmsmogowy.pl/wp-content/uploads/2021/08/PAS_raport_2020.pdf (accessed on 5 February 2023).
- Bebkiewicz, K.; Chłopek, Z.; Chojnacka, K.; Doberska, A.; Kanafa, M.; Kargulewicz, I.; Olecka, A.; Rutkowski, J.; Walęzak, M.; Waśniewska, S.; et al. Krajowy bilans emisji SO2, NOX, CO, NH3, NMLZO, pyłów, metali ciężkich i TZO za lata 1990–2019; Presentation; The National Centre for Emissions Management (KOBiZE): Warsaw, Poland, 2021.
- Chaturvedi, S.; Khanna, P.; Ojha, A. A survey on vision-based outdoor smoke detection techniques for environmental safety. ISPRS J. Photogramm. Remote Sens. 2022, 185, 158–187.
- Xu, Z.; Xu, J. Automatic Fire Smoke Detection Based on Image Visual Features. In Proceedings of the International Conference on Computational Intelligence and Security Workshops (CISW 2007), Harbin, China, 15–19 December 2007; pp. 316–319.
- Chunyu, Y.; Jun, F.; Jinjun, W.; Yongming, Z. Video Fire Smoke Detection Using Motion and Color Features. Fire Technol. 2010, 46, 651–663.
- Yuan, F. A fast accumulative motion orientation model based on integral image for video smoke detection. Pattern Recognit. Lett. 2008, 29, 925–932.
- Calderara, S.; Piccinini, P.; Cucchiara, R. Smoke Detection in Video Surveillance: A MoG Model in the Wavelet Domain. In Proceedings of the Computer Vision Systems, Santorini, Greece, 12–15 May 2008; Springer: Berlin/Heidelberg, Germany, 2008; pp. 119–128.
- Gubbi, J.; Marusic, S.; Palaniswami, M. Smoke detection in video using wavelets and support vector machines. Fire Saf. J. 2009, 44, 1110–1115.
- Kolesov, I.; Karasev, P.; Tannenbaum, A.; Haber, E. Fire and smoke detection in video with optimal mass transport based optical flow and neural networks. In Proceedings of the 2010 IEEE International Conference on Image Processing, Hong Kong, China, 26–29 September 2010; pp. 761–764.
- Yuan, F. Video-based smoke detection with histogram sequence of LBP and LBPV pyramids. Fire Saf. J. 2011, 46, 132–139.
- Olivares-Mercado, J.; Toscano-Medina, K.; Sánchez-Perez, G.; Hernandez-Suarez, A.; Perez-Meana, H.; Sandoval Orozco, A.L.; García Villalba, L.J. Early Fire Detection on Video Using LBP and Spread Ascending of Smoke. Sustainability 2019, 11, 3261.
- Panchanathan, S.; Zhao, Y.; Zhou, Z.; Xu, M. Forest Fire Smoke Video Detection Using Spatiotemporal and Dynamic Texture Features. J. Electr. Comput. Eng. 2015, 2015, 706187.
- Xu, G.; Zhang, Y.; Zhang, Q.; Lin, G.; Wang, J. Deep domain adaptation based video smoke detection using synthetic smoke images. Fire Saf. J. 2017, 93, 53–59.
- Favorskaya, M.; Pyataeva, A.; Popov, A. Verification of Smoke Detection in Video Sequences Based on Spatio-temporal Local Binary Patterns. Procedia Comput. Sci. 2015, 60, 671–680.
- Tao, C.; Zhang, J.; Wang, P. Smoke Detection Based on Deep Convolutional Neural Networks. In Proceedings of the 2016 International Conference on Industrial Informatics—Computing Technology, Intelligent Technology, Industrial Information Integration (ICIICII), Wuhan, China, 3–4 December 2016; pp. 150–153.
- Li, P.; Zhao, W. Image fire detection algorithms based on convolutional neural networks. Case Stud. Therm. Eng. 2020, 19, 100625.
- Mukhiddinov, M.; Abdusalomov, A.B.; Cho, J. A Wildfire Smoke Detection System Using Unmanned Aerial Vehicle Images Based on the Optimized YOLOv5. Sensors 2022, 22, 9384.
- Bouguettaya, A.; Zarzour, H.; Taberkit, A.M.; Kechida, A. A review on early wildfire detection from unmanned aerial vehicles using deep learning-based computer vision algorithms. Signal Process. 2022, 190, 108309.
- Abdusalomov, A.; Baratov, N.; Kutlimuratov, A.; Whangbo, T.K. An Improvement of the Fire Detection and Classification Method Using YOLOv3 for Surveillance Systems. Sensors 2021, 21, 6519.
- Hu, Y.; Zhan, J.; Zhou, G.; Chen, A.; Cai, W.; Guo, K.; Hu, Y.; Li, L. Fast forest fire smoke detection using MVMNet. Knowl.-Based Syst. 2022, 241, 108219.
- Hossain, F.A.; Zhang, Y.M.; Tonima, M.A. Forest fire flame and smoke detection from UAV-captured images using fire-specific color features and multi-color space local binary pattern. J. Unmanned Veh. Syst. 2020, 8, 285–309.
- Barmpoutis, P.; Papaioannou, P.; Dimitropoulos, K.; Grammalidis, N. A Review on Early Forest Fire Detection Systems Using Optical Remote Sensing. Sensors 2020, 20, 6442.
- Srinivas, K.; Dua, M. Fog Computing and Deep CNN Based Efficient Approach to Early Forest Fire Detection with Unmanned Aerial Vehicles. In Inventive Computation Technologies 4; Smys, S., Bestak, R., Rocha, Á., Eds.; Springer International Publishing: Cham, Switzerland, 2020; pp. 646–652.
- Lee, W.; Kim, S.; Lee, Y.T.; Lee, H.W.; Choi, M. Deep neural networks for wild fire detection with unmanned aerial vehicle. In Proceedings of the 2017 IEEE International Conference on Consumer Electronics (ICCE), Las Vegas, NV, USA, 8–10 January 2017; pp. 252–253.
- Chen, Y.; Zhang, Y.; Xin, J.; Wang, G.; Mu, L.; Yi, Y.; Liu, H.; Liu, D. UAV Image-based Forest Fire Detection Approach Using Convolutional Neural Network. In Proceedings of the 2019 14th IEEE Conference on Industrial Electronics and Applications (ICIEA), Xi’an, China, 19–21 June 2019; pp. 2118–2123.
- Zhang, Q.; Xu, J.; Xu, L.; Guo, H. Deep convolutional neural networks for forest fire detection. In Proceedings of the 2016 International Forum on Management, Education and Information Technology Application, Guangzhou, China, 30–31 January 2016; pp. 568–575.
- Krizhevsky, A.; Sutskever, I.; Hinton, G.E. ImageNet Classification with Deep Convolutional Neural Networks. Commun. ACM 2017, 60, 84–90.
- Szegedy, C.; Liu, W.; Jia, Y.; Sermanet, P.; Reed, S.; Anguelov, D.; Erhan, D.; Vanhoucke, V.; Rabinovich, A. Going deeper with convolutions. In Proceedings of the 2015 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Boston, MA, USA, 7–12 June 2015; pp. 1–9.
- Simonyan, K.; Zisserman, A. Very deep convolutional networks for large-scale image recognition. arXiv 2014, arXiv:1409.1556.
- Girshick, R.; Donahue, J.; Darrell, T.; Malik, J. Rich feature hierarchies for accurate object detection and semantic segmentation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Columbus, OH, USA, 23–28 June 2014; pp. 580–587.
- Girshick, R. Fast R-CNN. In Proceedings of the 2015 IEEE International Conference on Computer Vision (ICCV), Santiago, Chile, 7–13 December 2015; pp. 1440–1448.
- Lin, T.Y.; Goyal, P.; Girshick, R.; He, K.; Dollár, P. Focal loss for dense object detection. In Proceedings of the IEEE International Conference on Computer Vision, Venice, Italy, 22–29 October 2017; pp. 2980–2988.
- Alexandrov, D.; Pertseva, E.; Berman, I.; Pantiukhin, I.; Kapitonov, A. Analysis of machine learning methods for wildfire security monitoring with an unmanned aerial vehicles. In Proceedings of the 2019 24th Conference of Open Innovations Association (FRUCT), Moscow, Russia, 8–12 April 2019; pp. 3–9.
- Jiao, Z.; Zhang, Y.; Mu, L.; Xin, J.; Jiao, S.; Liu, H.; Liu, D. A YOLOv3-based learning strategy for real-time UAV-based forest fire detection. In Proceedings of the 2020 Chinese Control and Decision Conference (CCDC), Hefei, China, 22–24 August 2020; pp. 4963–4967.
- Jiao, Z.; Zhang, Y.; Xin, J.; Mu, L.; Yi, Y.; Liu, H.; Liu, D. A deep learning based forest fire detection approach using UAV and YOLOv3. In Proceedings of the 2019 1st International Conference on Industrial Artificial Intelligence (IAI), Shenyang, China, 22–26 July 2019; pp. 1–5.
- Chen, L.C.; Papandreou, G.; Kokkinos, I.; Murphy, K.; Yuille, A.L. DeepLab: Semantic image segmentation with deep convolutional nets, atrous convolution, and fully connected CRFs. IEEE Trans. Pattern Anal. Mach. Intell. 2017, 40, 834–848.
- Ronneberger, O.; Fischer, P.; Brox, T. U-Net: Convolutional networks for biomedical image segmentation. In Proceedings of the Medical Image Computing and Computer-Assisted Intervention–MICCAI 2015: 18th International Conference, Munich, Germany, 5–9 October 2015; pp. 234–241.
- Ghali, R.; Akhloufi, M.A.; Mseddi, W.S. Deep Learning and Transformer Approaches for UAV-Based Wildfire Detection and Segmentation. Sensors 2022, 22, 1977.
- Qiao, L.; Zhang, Y.; Qu, Y. Pre-processing for UAV Based Wildfire Detection: A Loss U-net Enhanced GAN for Image Restoration. In Proceedings of the 2020 2nd International Conference on Industrial Artificial Intelligence (IAI), Shenyang, China, 23–25 October 2020; pp. 1–6.
- Li, Z.; Sun, Y.; Zhang, L.; Tang, J. CTNet: Context-based tandem network for semantic segmentation. IEEE Trans. Pattern Anal. Mach. Intell. 2021, 44, 9904–9917.
- He, K.; Gkioxari, G.; Dollár, P.; Girshick, R. Mask R-CNN. In Proceedings of the IEEE International Conference on Computer Vision (ICCV), Venice, Italy, 22–29 October 2017.
- Wang, C.Y.; Bochkovskiy, A.; Liao, H.Y.M. YOLOv7: Trainable bag-of-freebies sets new state-of-the-art for real-time object detectors. arXiv 2022, arXiv:2207.02696.
- Evangelidis, G.; Psarakis, E. Parametric Image Alignment Using Enhanced Correlation Coefficient Maximization. IEEE Trans. Pattern Anal. Mach. Intell. 2008, 30, 1858–1865.
- Lin, T.; Maire, M.; Belongie, S.J.; Bourdev, L.D.; Girshick, R.B.; Hays, J.; Perona, P.; Ramanan, D.; Dollár, P.; Zitnick, C.L. Microsoft COCO: Common Objects in Context. In European Conference on Computer Vision; Springer: Cham, Switzerland, 2014.
- Haralick, R.M.; Shanmugam, K.; Dinstein, I. Textural Features for Image Classification. IEEE Trans. Syst. Man Cybern. 1973, SMC-3, 610–621.
- Ke, G.; Meng, Q.; Finley, T.; Wang, T.; Chen, W.; Ma, W.; Ye, Q.; Liu, T.Y. LightGBM: A Highly Efficient Gradient Boosting Decision Tree. In Advances in Neural Information Processing Systems 30 (NIPS 2017); Guyon, I., Luxburg, U.V., Bengio, S., Wallach, H., Fergus, R., Vishwanathan, S., Garnett, R., Eds.; Curran Associates, Inc.: Red Hook, NY, USA, 2017; Volume 30.
Model | P | R | mAP@.5 | mAP@.5:.95
---|---|---|---|---
YOLOv7-x | 0.513 | 0.518 | 0.418 | 0.236
YOLOv7-tiny | 0.472 | 0.521 | 0.402 | 0.213
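The mAP figures above depend on the IoU threshold used to decide whether a predicted box matches a ground-truth box; for amorphous objects like smoke plumes, relaxing the threshold from 0.5 to 0.1 (as in Figure panels comparing the two) credits detections with looser localization. A standard IoU computation on axis-aligned boxes, as a minimal sketch:

```python
def box_area(b):
    # b is (x1, y1, x2, y2)
    return max(0, b[2] - b[0]) * max(0, b[3] - b[1])

def iou(a, b):
    """Intersection over Union of two axis-aligned boxes (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    union = box_area(a) + box_area(b) - inter
    return inter / union if union else 0.0
```

A prediction overlapping half of a same-sized ground-truth box scores IoU = 1/3, so it would count as a miss at IoU = 0.5 but as a hit at IoU = 0.1.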
© 2023 by the author. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Share and Cite
Szczepański, M. Vision-Based Detection of Low-Emission Sources in Suburban Areas Using Unmanned Aerial Vehicles. Sensors 2023, 23, 2235. https://doi.org/10.3390/s23042235