A Foggy Weather Simulation Algorithm for Traffic Image Synthesis Based on Monocular Depth Estimation
Figure 1. Depth estimation network structure used by Monodepth2.
Figure 2. Projection from 2D image space to 3D space and the pairing of the eight neighbouring points. Points of the same colour are paired; each pair forms two mutually perpendicular vectors from the centre point. The four vector pairs yield four surface normals, which are combined into a single surface normal.
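The construction described in the Figure 2 caption translates directly into code. Below is a minimal NumPy sketch, not the authors' implementation: the pinhole intrinsics `fx, fy, cx, cy` are assumed inputs, and the grouping of the eight neighbours into four perpendicular pairs is one plausible scheme consistent with the caption.

```python
import numpy as np

def backproject(depth, fx, fy, cx, cy):
    """Lift every pixel (u, v) with depth d to a 3D point (pinhole model)."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    x = (u - cx) / fx * depth
    y = (v - cy) / fy * depth
    return np.stack([x, y, depth], axis=-1)  # shape (h, w, 3)

def surface_normals(depth, fx, fy, cx, cy):
    """Estimate one surface normal per interior pixel from its 8 neighbours.

    The 8 neighbours are grouped into 4 pairs whose image-plane directions
    are perpendicular; each pair's cross product gives a candidate normal,
    and the 4 candidates are averaged into a single unit normal.
    """
    p = backproject(depth, fx, fy, cx, cy)
    c = p[1:-1, 1:-1]  # centre points
    # Vectors from each centre point to its 8 neighbours.
    e, w_ = p[1:-1, 2:] - c, p[1:-1, :-2] - c
    s, n = p[2:, 1:-1] - c, p[:-2, 1:-1] - c
    se, nw = p[2:, 2:] - c, p[:-2, :-2] - c
    sw, ne = p[2:, :-2] - c, p[:-2, 2:] - c
    normal = np.zeros_like(c)
    for a, b in [(e, n), (w_, s), (ne, nw), (sw, se)]:
        cand = np.cross(a, b)
        cand /= np.linalg.norm(cand, axis=-1, keepdims=True) + 1e-12
        # Orient all candidates toward the camera before averaging.
        cand *= -np.sign(cand[..., 2:3] + 1e-12)
        normal += cand
    return normal / (np.linalg.norm(normal, axis=-1, keepdims=True) + 1e-12)
```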
Figure 3. Physical model of atmospheric scattering. The light received by the camera sensor consists of light scattered by particles suspended in the air and light reflected by the imaged object, which is attenuated as it travels through the air.
Figure 4. Fog simulation based on Equation (10).
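Figures 3 and 4 describe the standard atmospheric scattering model, I(x) = J(x)·e^(−βd(x)) + A·(1 − e^(−βd(x))), with clear scene radiance J, depth d, atmospheric light A, and scattering coefficient β. The following is a minimal sketch of rendering a fog image from an absolute depth map under this model; the Koschmieder relation β = 2.996/V for visibility V is our assumption, and the paper's exact Equation (10) and constants may differ.

```python
import numpy as np

def synthesize_fog(image, depth_m, visibility_m, airlight=0.8):
    """Render fog onto a clear image with the atmospheric scattering model:

        I(x) = J(x) * t(x) + A * (1 - t(x)),  where t(x) = exp(-beta * d(x)).

    image:        clear RGB image as floats in [0, 1], shape (h, w, 3)
    depth_m:      absolute depth map in metres, shape (h, w)
    visibility_m: target meteorological visibility in metres
    airlight:     atmospheric light A (scalar or length-3 array)
    """
    beta = 2.996 / visibility_m             # assumed Koschmieder relation, 1/m
    t = np.exp(-beta * depth_m)[..., None]  # per-pixel transmission map
    return image * t + airlight * (1.0 - t)
```

For example, `visibility_m=100` gives β ≈ 0.03 m⁻¹, so an object 100 m away retains only e^(−2.996) ≈ 5% of its original contrast.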
Figure 5. Process flow of AuthESI.
Figure 6. Mean SROCC and PLCC of the five NR-IQA methods across 1000 trials on the DHQ database.
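For reference, the SROCC and PLCC reported in Figure 6 measure rank consistency and linear agreement, respectively, between objective quality predictions and subjective scores. A minimal SciPy sketch with hypothetical data follows; note that PLCC is often computed after a nonlinear logistic mapping of the predictions, which this sketch omits.

```python
import numpy as np
from scipy.stats import pearsonr, spearmanr

def correlation_scores(predicted, subjective):
    """SROCC (rank consistency) and PLCC (linear agreement)."""
    srocc, _ = spearmanr(predicted, subjective)
    plcc, _ = pearsonr(predicted, subjective)
    return srocc, plcc

# Hypothetical predictions and subjective scores for one trial.
pred = np.array([0.71, 0.42, 0.88, 0.35, 0.60])
mos = np.array([0.68, 0.45, 0.90, 0.30, 0.57])
print(correlation_scores(pred, mos))
```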
Figure 7. Examples of depth estimation and foggy image synthesis: (a) original images from BDD100K; (b) relative depth maps predicted by Monodepth2; (c) absolute depth maps obtained by dense geometric constraints; (d) synthetic fog (the quality score of each fog image is shown in its upper right corner).
Figure 8. Simulated fog images, with quality scores, at three different densities, alongside examples of clear images; fog thickness decreases as visibility increases.
Figure 9. Example of nighttime fog image synthesis: (a) original nighttime images from BDD100K; (b) relative depth maps (the regions circled in blue are noticeably less accurate); (c) absolute depth maps; (d) synthesised fog (quality scores of the corresponding fog images are shown in the upper right corner of each image).
Figure 10. Percentage distribution of the simulated fog images (visibility 100 m) across the different scoring intervals (only intervals containing more than 3.5% of the images are shown).
Figure 11. Percentage of simulated foggy images as a function of quality score (visibility 100 m).
Abstract
1. Introduction
- (1) A new enhancement and simulation method for foggy weather target detection datasets is proposed, which significantly improves the accuracy and generalisation of foggy weather target detection models without introducing additional computational cost or a complex algorithmic framework.
- (2) We have created a new dataset, FoggyBDD100K, which contains 16,702 foggy images. It will help the community and researchers develop and validate their own data-driven target detection or defogging algorithms.
- (3) We quantitatively evaluated the synthesised foggy images; the resulting scores validate the reliability of the proposed fog simulation framework.
2. Methods
2.1. Self-Supervised Relative Depth Estimation Network
2.2. Absolute Depth Scale Recovery Based on Monodepth2
2.3. Atmospheric Scattering Model
2.4. Simulated Fog Map for Absolute Depth Estimation
3. Foggy Simulated Image Feature Evaluation Algorithm
3.1. Introduction to Evaluation Algorithms
3.2. Comparison of Evaluation Algorithms
4. Discussion of Experimental Process
4.1. Experimental Dataset and Experimental Environment
4.2. Fog Image Simulation Process Analysis
4.3. Comparison of Simulated Foggy Images under Different Visibility Conditions
4.4. Night-Time Fog Image Simulation Assessment
4.5. Evaluation of Foggy Simulated Image Features
5. Conclusions
Author Contributions
Funding
Informed Consent Statement
Data Availability Statement
Conflicts of Interest
References
- Codur, M.Y.; Kaplan, N.H. Increasing the visibility of traffic signs in foggy weather. Fresenius Environ. Bull. 2019, 28, 705–709. [Google Scholar]
- Schechner, Y.Y.; Narasimhan, S.G.; Nayar, S.K. Instant dehazing of images using polarization. In Proceedings of the 2001 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, CVPR 2001, Kauai, HI, USA, 8–14 December 2001; p. 1. [Google Scholar]
- Fattal, R. Single image dehazing. ACM Trans. Graph. 2008, 27, 1–9. [Google Scholar] [CrossRef]
- Fattal, R. Dehazing using color-lines. ACM Trans. Graph. 2014, 34, 1–14. [Google Scholar] [CrossRef]
- Berman, D.; Avidan, S. Non-local image dehazing. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 27–30 June 2016; pp. 1674–1682. [Google Scholar]
- He, K.; Sun, J.; Tang, X. Single image haze removal using dark channel prior. IEEE Trans. Pattern Anal. Mach. Intell. 2010, 33, 2341–2353. [Google Scholar]
- Ancuti, C.O.; Ancuti, C. Single image dehazing by multi-scale fusion. IEEE Trans. Image Process. 2013, 22, 3271–3282. [Google Scholar] [CrossRef] [PubMed]
- Zhu, Q.; Mai, J.; Shao, L. A fast single image haze removal algorithm using color attenuation prior. IEEE Trans. Image Process. 2015, 24, 3522–3533. [Google Scholar]
- Liu, X.; Lin, Y. YOLO-GW: Quickly and Accurately Detecting Pedestrians in a Foggy Traffic Environment. Sensors 2023, 23, 5539. [Google Scholar] [CrossRef]
- Geiger, A.; Lenz, P.; Urtasun, R. Are we ready for autonomous driving? The KITTI vision benchmark suite. In Proceedings of the 2012 IEEE Conference on Computer Vision and Pattern Recognition, Providence, RI, USA, 16–21 June 2012; pp. 3354–3361. [Google Scholar]
- Cordts, M.; Omran, M.; Ramos, S.; Rehfeld, T.; Enzweiler, M.; Benenson, R.; Franke, U.; Roth, S.; Schiele, B. The cityscapes dataset for semantic urban scene understanding. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 27–30 June 2016; pp. 3213–3223. [Google Scholar]
- Yu, F.; Chen, H.; Wang, X.; Xian, W.; Chen, Y.; Liu, F.; Madhavan, V.; Darrell, T. Bdd100k: A diverse driving dataset for heterogeneous multitask learning. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Seattle, WA, USA, 13–19 June 2020; pp. 2636–2645. [Google Scholar]
- Hu, X.; Fu, C.-W.; Zhu, L.; Heng, P.-A. Depth-attentional features for single-image rain removal. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Long Beach, CA, USA, 15–20 June 2019; pp. 8022–8031. [Google Scholar]
- Liu, T.; Chen, Z.; Yang, Y.; Wu, Z.; Li, H. Lane detection in low-light conditions using an efficient data enhancement: Light conditions style transfer. In Proceedings of the 2020 IEEE Intelligent Vehicles Symposium (IV), Las Vegas, NV, USA, 19 October–13 November 2020; pp. 1394–1399. [Google Scholar]
- Nie, X.; Xu, Z.; Zhang, W.; Dong, X.; Liu, N.; Chen, Y. Foggy Lane Dataset Synthesized from Monocular Images for Lane Detection Algorithms. Sensors 2022, 22, 5210. [Google Scholar] [CrossRef]
- Tarel, J.-P.; Hautiere, N.; Cord, A.; Gruyer, D.; Halmaoui, H. Improved visibility of road scene images under heterogeneous fog. In Proceedings of the 2010 IEEE Intelligent Vehicles Symposium, La Jolla, CA, USA, 21–24 June 2010; pp. 478–485. [Google Scholar]
- Tarel, J.-P.; Hautiere, N.; Caraffa, L.; Cord, A.; Halmaoui, H.; Gruyer, D. Vision enhancement in homogeneous and heterogeneous fog. IEEE Intell. Transp. Syst. Mag. 2012, 4, 6–20. [Google Scholar] [CrossRef]
- Sakaridis, C.; Dai, D.; Van Gool, L. Semantic foggy scene understanding with synthetic data. Int. J. Comput. Vis. 2018, 126, 973–992. [Google Scholar] [CrossRef]
- Mai, N.A.M.; Duthon, P.; Khoudour, L.; Crouzil, A.; Velastin, S.A. 3D Object Detection with SLS-Fusion Network in Foggy Weather Conditions. Sensors 2021, 21, 6711. [Google Scholar] [CrossRef] [PubMed]
- Bartoccioni, F.; Zablocki, É.; Pérez, P.; Cord, M.; Alahari, K. LiDARTouch: Monocular metric depth estimation with a few-beam LiDAR. Comput. Vis. Image Underst. 2023, 227, 103601. [Google Scholar] [CrossRef]
- Eigen, D.; Puhrsch, C.; Fergus, R. Depth Map Prediction from a Single Image using a Multi-Scale Deep Network. arXiv 2014, arXiv:1406.2283. [Google Scholar]
- Wong, A.; Fei, X.; Tsuei, S.; Soatto, S. Unsupervised Depth Completion from Visual Inertial Odometry. IEEE Robot. Autom. Lett. 2020, 5, 1899–1906. [Google Scholar] [CrossRef]
- Seo, B.-S.; Park, B.; Choi, H. Sensing Range Extension for Short-Baseline Stereo Camera Using Monocular Depth Estimation. Sensors 2022, 22, 4605. [Google Scholar] [CrossRef]
- Mo, H.; Li, B.; Shi, W.; Zhang, X. Cross-based dense depth estimation by fusing stereo vision with measured sparse depth. Vis. Comput. 2022, 39, 4339–4350. [Google Scholar] [CrossRef]
- Fu, S.; Safaei, F.; Li, W. Optimization of Camera Arrangement Using Correspondence Field to Improve Depth Estimation. IEEE Trans. Image Process. 2017, 26, 3038–3050. [Google Scholar] [CrossRef]
- Jang, W.-S.; Ho, Y.-S. Disparity Fusion Using Depth and Stereo Cameras for Accurate Stereo Correspondence. In Proceedings of the Three-Dimensional Image Processing, Measurement (3DIPM), and Applications, SPIE/IS&T Electronic Imaging, San Francisco, CA, USA, 8–12 February 2015; Volume 9393, pp. 227–234. [Google Scholar]
- Ding, X.; Xu, L.; Wang, H.; Wang, X.; Lv, G. Stereo depth estimation under different camera calibration and alignment errors. Appl. Opt. 2011, 50, 1289–1310. [Google Scholar] [CrossRef]
- Watson, J.; Mac Aodha, O.; Prisacariu, V.; Brostow, G.; Firman, M. The Temporal Opportunist: Self-Supervised Multi-Frame Monocular Depth. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Nashville, TN, USA, 20–25 June 2021; pp. 1164–1174. [Google Scholar]
- Xue, F.; Zhuo, G.; Huang, Z.; Fu, W.; Wu, Z.; Ang, M.H. Toward Hierarchical Self-Supervised Monocular Absolute Depth Estimation for Autonomous Driving Applications. In Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems, Las Vegas, NV, USA, 24 October 2020–24 January 2021; pp. 2330–2337. [Google Scholar]
- Casser, V.; Pirk, S.; Mahjourian, R.; Angelova, A. Depth Prediction without the Sensors: Leveraging Structure for Unsupervised Learning from Monocular Videos. In Proceedings of the AAAI Conference on Artificial Intelligence, Honolulu, HI, USA, 27 January–1 February 2019; pp. 8001–8008. [Google Scholar]
- Yang, Z.; Wang, P.; Wang, Y.; Xu, W.; Nevatia, R. LEGO: Learning Edge with Geometry all at Once by Watching Videos. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–23 June 2018; pp. 225–234. [Google Scholar]
- Godard, C.; Mac Aodha, O.; Firman, M.; Brostow, G.J. Digging Into Self-Supervised Monocular Depth Estimation. In Proceedings of the 2019 IEEE/CVF International Conference on Computer Vision (ICCV), Seoul, Republic of Korea, 27 October–2 November 2019; pp. 3828–3838. [Google Scholar]
- Zhou, T.; Brown, M.; Snavely, N.; Lowe, D.G. Unsupervised Learning of Depth and Ego-Motion from Video. In Proceedings of the 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Honolulu, HI, USA, 21–26 July 2017; pp. 1851–1858. [Google Scholar]
- Zhang, N.; Zhang, L.; Cheng, Z. Towards simulating foggy and hazy images and evaluating their authenticity. In Proceedings of the Neural Information Processing, Guangzhou, China, 14–18 November 2017; pp. 405–415. [Google Scholar]
- Mittal, A.; Moorthy, A.K.; Bovik, A.C. No-Reference Image Quality Assessment in the Spatial Domain. IEEE Trans. Image Process. 2012, 21, 4695–4708. [Google Scholar] [CrossRef]
- Saad, M.A.; Bovik, A.C.; Charrier, C. Blind Image Quality Assessment: A Natural Scene Statistics Approach in the DCT Domain. IEEE Trans. Image Process. 2012, 21, 3339–3352. [Google Scholar] [CrossRef]
- Moorthy, A.K.; Bovik, A.C. Blind Image Quality Assessment: From Natural Scene Statistics to Perceptual Quality. IEEE Trans. Image Process. 2011, 20, 3350–3364. [Google Scholar] [CrossRef] [PubMed]
- Ou, F.Z.; Wang, Y.G.; Zhu, G. A Novel Blind Image Quality Assessment Method Based on Refined Natural Scene Statistics. In Proceedings of the 2019 IEEE International Conference on Image Processing (ICIP), Taipei, Taiwan, 22–25 September 2019; pp. 1004–1008. [Google Scholar]
- Jin, Z.; Feng, H.; Xu, Z.; Chen, Y. Nighttime Image Dehazing by Render. J. Imaging 2023, 9, 153. [Google Scholar] [CrossRef] [PubMed]
- Zhang, J.; Cao, Y.; Zha, Z.-J.; Tao, D. Nighttime Dehazing with a Synthetic Benchmark. In Proceedings of the 28th ACM International Conference on Multimedia, Seattle, WA, USA, 12–16 October 2020; pp. 2355–2363. [Google Scholar]
| Category | Dense Fog | Heavy Fog | Moderate Fog | Light Mist |
|---|---|---|---|---|
| Visibility/km | <0.05 | 0.05–0.1 | 0.1–0.2 | >0.2 |
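If these categories are tied to the scattering coefficient through the assumed Koschmieder relation β = 2.996/V, heavy fog at V = 0.1 km corresponds to β ≈ 0.03 m⁻¹, moderate fog at V = 0.2 km to β ≈ 0.015 m⁻¹, and light mist at V = 0.5 km to β ≈ 0.006 m⁻¹.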