Abstract
Foggy weather significantly reduces the visibility required for autonomous vehicle navigation, degrading the performance of on-road object detectors. Mobile computing devices deployed at the edge are often resource-constrained and cannot easily compensate for this degradation, so image dehazing, and the resource efficiency of the dehazing algorithm, are crucial for any on-road navigation task: safe and smooth operation in foggy weather requires improved image clarity. To address these challenges, an improved detection model called the Lightweight Defog Detector (LDD) is proposed. First, the dark channel prior and a positional normalization algorithm are used to dehaze the image and improve its clarity. Second, a lightweight MOELAN feature extraction module is constructed, which significantly improves detection efficiency and makes the model suitable for deployment on edge devices. Finally, an attention mechanism is introduced to further extract features from the feature maps used for detection, strengthening the model's ability to capture feature information. Experiments are conducted on a foggy on-road dataset. The results show that, compared with YOLOv9 and other models such as IA-YOLO, the proposed algorithm achieves slightly higher average accuracy and effectively improves vehicle detection in foggy weather while reducing the number of parameters.
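For orientation, the sketch below shows dark-channel-prior dehazing in its conventional form: compute the dark channel, estimate the atmospheric light, derive a transmission map, and invert the atmospheric scattering model. It is a minimal illustration only; the patch size, omega, and transmission floor t0 are generic defaults rather than values from the paper, and LDD's positional-normalization component is treated separately in the next sketch.

```python
# Minimal sketch of dark-channel-prior dehazing for an RGB (or BGR) image
# scaled to [0, 1]. Parameter values are generic defaults, not taken from LDD.
import cv2
import numpy as np

def dark_channel(img, patch=15):
    # Per-pixel minimum over the color channels, then a min-filter (erosion).
    min_rgb = np.min(img, axis=2)
    kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (patch, patch))
    return cv2.erode(min_rgb, kernel)

def estimate_atmospheric_light(img, dark, top=0.001):
    # Average the image pixels whose dark-channel values are brightest.
    n = max(1, int(dark.size * top))
    idx = np.argsort(dark.ravel())[-n:]
    return img.reshape(-1, 3)[idx].mean(axis=0)

def dehaze(img, omega=0.95, t0=0.1, patch=15):
    dark = dark_channel(img, patch)
    A = estimate_atmospheric_light(img, dark)
    # Transmission estimated from the dark channel of the normalized image.
    t = 1.0 - omega * dark_channel(img / A, patch)
    t = np.clip(t, t0, 1.0)[..., None]
    # Invert the scattering model I = J * t + A * (1 - t) to recover J.
    return np.clip((img - A) / t + A, 0.0, 1.0)

# Usage (hypothetical file name):
# hazy = cv2.imread("foggy_road.png").astype(np.float32) / 255.0
# clear = dehaze(hazy)
```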
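Positional normalization (Li et al., cited below) normalizes each spatial position across its channels and returns the removed mean and standard deviation as "moments" that can be re-injected later. The PyTorch sketch below shows that behavior in isolation; how LDD wires it into the dehazing step is not specified in the abstract, so the composition here is illustrative only.

```python
# Minimal sketch of positional normalization (PONO) and its moment shortcut.
import torch
import torch.nn as nn

class PONO(nn.Module):
    def __init__(self, eps: float = 1e-5):
        super().__init__()
        self.eps = eps

    def forward(self, x: torch.Tensor):
        # x: (N, C, H, W); statistics are taken over the channel axis,
        # so every spatial position is normalized independently.
        mu = x.mean(dim=1, keepdim=True)
        sigma = (x.var(dim=1, keepdim=True, unbiased=False) + self.eps).sqrt()
        return (x - mu) / sigma, mu, sigma

class MomentShortcut(nn.Module):
    # Re-applies the extracted moments to a later feature map.
    def forward(self, x, mu, sigma):
        return x * sigma + mu

feat = torch.randn(1, 64, 80, 80)               # an intermediate feature map
normed, mu, sigma = PONO()(feat)
restored = MomentShortcut()(normed, mu, sigma)  # same shape as feat
```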
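The abstract does not say which attention block is attached to the detection feature maps (both squeeze-and-excitation and CBAM appear in the references), so the following is a generic CBAM-style channel-plus-spatial attention sketch; the channel count and reduction ratio are illustrative assumptions.

```python
# Generic CBAM-style attention applied to a detection feature map
# (a sketch, not LDD's exact module).
import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(channels, channels // reduction), nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels))

    def forward(self, x):
        avg = self.mlp(x.mean(dim=(2, 3)))   # global average pooling branch
        mx = self.mlp(x.amax(dim=(2, 3)))    # global max pooling branch
        return x * torch.sigmoid(avg + mx)[..., None, None]

class SpatialAttention(nn.Module):
    def __init__(self, kernel_size: int = 7):
        super().__init__()
        self.conv = nn.Conv2d(2, 1, kernel_size, padding=kernel_size // 2)

    def forward(self, x):
        # Stack channel-wise mean and max maps, then learn a spatial mask.
        pooled = torch.cat([x.mean(dim=1, keepdim=True),
                            x.amax(dim=1, keepdim=True)], dim=1)
        return x * torch.sigmoid(self.conv(pooled))

class AttentionBlock(nn.Module):
    def __init__(self, channels: int):
        super().__init__()
        self.ca = ChannelAttention(channels)
        self.sa = SpatialAttention()

    def forward(self, x):
        return self.sa(self.ca(x))

refined = AttentionBlock(256)(torch.randn(1, 256, 40, 40))  # refined feature map
```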
References
Qin, Q., Chang, K., Huang, M., Li, G.: DENet: detection-driven enhancement network for object detection under adverse weather conditions. In: Proceedings of the Asian Conference on Computer Vision, pp. 2813–2829 (2022)
Hu, Y., He, H., Xu, C., Wang, B., Lin, S.: Exposure: a white-box photo post-processing framework. ACM Trans. Graph. 37(2), 17 (2018)
Chen, Y., Wang, H., Li, W., et al.: Scale-aware domain adaptive faster RCNN. Int. J. Comput. Vis. 129(7), 2223–2243 (2021)
Liu, W., Ren, G., Yu, R., Guo, S., Zhu, J., Zhang, L.: Image-Adaptive YOLO for object detection in adverse weather conditions. Proc. AAAI Conf. Artif. Intell. 36(2), 1792–1800 (2022)
Han, X.: Modified cascade RCNN based on contextual information for vehicle detection. Sens. Imaging 22(1), 19 (2021)
Zhou, H., Jiang, F., Lu, H.: SSDA-YOLO: semi-supervised domain adaptive YOLO for cross-domain object detection. Comput. Vis. Image Underst. 229, 103649 (2023)
Wang, L., Qin, H., Zhou, X., Lu, X., Zhang, F.: R-YOLO: a robust object detector in adverse weather. IEEE Trans. Instrum. Meas. 72, 1–11 (2022)
Dong, H., et al.: Multi-scale boosted dehazing network with dense feature fusion. In: Proceedings of IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 2157–2167 (2020)
Wang, H., Xu, Y., He, Y., et al.: A multi-objective visual detection algorithm for fog driving scenes based on improved YOLOv5. IEEE Trans. Instrum. Meas. 71, 1–12 (2022)
Wang, C., Yeh, I., Liao, H.: YOLOv9: learning what you want to learn using programmable gradient information (2024). arXiv:2402.13616
Wang, C., Bochkovskiy, A., Liao, H.: YOLOv7: trainable bag-of-freebies sets new state-of-the-art for real-time object detectors. In: Proceedings of IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 7464–7475 (2023)
Cai, J., Zuo, W., Zhang, L.: Dark and bright channel prior embedded network for dynamic scene deblurring. IEEE Trans. Image Process. 29, 6885–6897 (2020)
Li, B., Wu, F., Weinberger, K.Q., Belongie, S.: Positional normalization. In: Advances in Neural Information Processing Systems, pp. 1620–1632 (2019)
Vasu, P., Gabriel, J., Zhu, J., Tuzel, O., Ranjan, A.: MobileOne: an improved one millisecond mobile backbone. In: Proceedings of IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 7907–7917 (2023)
Zhang, Y., Li, K., Li, K., et al.: MR image super-resolution with squeeze and excitation reasoning attention network. In: Proceedings of IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 13420–13429 (2021)
Arkin, E., Yadikar, N., Xu, X., et al.: A survey: object detection methods from CNN to transformer. Multimed. Tools Appl. 82, 21353–21383 (2023)
Woo, S., Park, J., Lee, J.Y., Kweon, I.: CBAM: convolutional block attention module. In: Proceedings of the European Conference on Computer Vision (ECCV), pp. 3–19 (2018)
Li, B., et al.: Benchmarking single-image dehazing and beyond. IEEE Trans. Image Process. 28(1), 492–505 (2019)
Soviany, P., Ionescu, R.T., Rota, P., Sebe, N.: Curriculum self-paced learning for cross-domain object detection. Comput. Vis. Image Underst. 204, 103166 (2021)
Everingham, M., Van Gool, L., Williams, C., et al.: The PASCAL Visual Object Classes (VOC) challenge. Int. J. Comput. Vis. 88, 303–338 (2010)
Abbasi, H., Amini, M., Yu, F.: Fog-aware adaptive YOLO for object detection in adverse weather. In: IEEE Sensors Applications Symposium (SAS), pp. 1–6 (2023)
Zhang, Z., Zheng, H., Hong, R., Xu, M., Yan, S., Wang, M.: Deep color consistent network for low-light image enhancement. In: Proceedings of IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 1899–1908 (2022)
Hnewa, M.: Integrated multiscale domain adaptive YOLO. IEEE Trans. Image Process. 32, 1857–1867 (2023)
Kalwar, S., Patel, D., Aanegola, A., Konda, K., Garg, S., Krishna, K.: GDIP: gated differentiable image processing for object detection in adverse conditions. In: IEEE International Conference on Robotics and Automation (ICRA), pp. 7083–7089 (2023)
Acknowledgment
This work is funded by the Natural Science Foundation of China (No. 62162003) and the Guangxi University Natural Science and Technology Innovation and Development Doubling Plan Project (No. 2023BZXM002).
Copyright information
© 2025 The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd.
Cite this paper
Gan, S., Chen, N., Qin, H. (2025). Lightweight Defog Detection for Autonomous Vehicles: Balancing Clarity, Efficiency, and Accuracy. In: Lin, Z., et al. Pattern Recognition and Computer Vision. PRCV 2024. Lecture Notes in Computer Science, vol 15042. Springer, Singapore. https://doi.org/10.1007/978-981-97-8858-3_21