Robot Localization Method Based on Multi-Sensor Fusion in Low-Light Environment
Figure 1. Algorithm procedure.
Figure 2. Selection of image enhancement algorithm parameters. (a) Low-frequency gain fixed at 0.5, sharpening coefficient at 1, and contrast threshold at 4. (b) High-frequency gain fixed at 1.6, sharpening coefficient at 1, and contrast threshold at 4. (c) High-frequency gain fixed at 1.6, low-frequency gain at 0.3, and contrast threshold at 4. (d) High-frequency gain fixed at 1.6, low-frequency gain at 0.3, and sharpening coefficient at 1.5.
Figure 3. Comparison of feature point extraction. (a) Feature points extracted from the original image. (b) After CLAHE processing. (c) After homomorphic filtering. (d) After combined CLAHE and homomorphic filtering.
Figure 4. Estimation of gyroscope bias coefficients on the MH02 and V203 sequences. (a) Variation in gyroscope bias for L-MSCKF and MSCKF-VIO on the MH02 sequence. (b) Estimated gyroscope bias values for L-MSCKF and MSCKF-VIO on the V203 sequence.
Figure 5. Trajectories on sequences V103 and V203 of the EuRoC dataset. (a) Trajectory on the V103 sequence. (b) X, Y, and Z triaxial values on the V103 sequence. (c) Trajectory on the V203 sequence. (d) X, Y, and Z triaxial values on the V203 sequence.
Figure 6. Comparison of absolute trajectory errors of each algorithm on the low-light sequence V203.
Figure 7. Comparison of computational efficiency. (a) Average CPU usage of each algorithm, as a percentage of total available CPU, running the same experiment. (b) Total running time of each algorithm on the same dataset.
Abstract
1. Introduction
- (1) Application of image enhancement technology. The integration of homomorphic filtering and Contrast Limited Adaptive Histogram Equalization (CLAHE) enhances the brightness and clarity of images captured by the camera in low-light conditions, markedly improving the performance of the vision sensor in such settings (a minimal sketch of this pipeline follows this list).
- (2) Introduction of the complementary Kalman filter. To address the impact of IMU zero-bias on the accuracy of attitude estimation, we integrate a complementary Kalman filter into our approach. This filter leverages accelerometer data to correct the gyroscope bias; by feeding this correction into the MSCKF framework, we improve the precision and stability of the system's initial pose estimation.
- (3) System design for indoor low-light environments. The L-MSCKF algorithm is proposed to address insufficient lighting in indoor settings. By integrating the technologies above, it effectively improves the pose estimation accuracy and robustness of the mobile robot binocular VIO system.
2. Related Work
2.1. Image Enhancement Algorithm
2.2. IMU Bias Correction Algorithm
3. Materials and Methods
3.1. Image Processing
3.2. IMU Bias Correction Model
4. Results
4.1. Analysis of Image Enhancement Effect
4.2. IMU Bias Correction Result
4.3. Comprehensive Evaluation of Algorithm Performance
4.3.1. Verification of Pose Estimation Results
4.3.2. Algorithm Efficiency Analysis
5. Discussion
6. Conclusions
- (1) An image enhancement module is developed to mitigate the negative impact of poor-quality images on the accuracy of stereo visual odometry. Validated on public datasets, the module yields a notable increase in the number of extracted feature points, along with improvements in image quality and feature matching accuracy.
- (2) To address the instability of the IMU bias, a complementary Kalman filter is introduced that uses the IMU's accelerometer data to compensate for the gyroscope bias. Experimental findings show that this method effectively reduces the IMU bias and stabilizes the bias coefficients, thereby improving the accuracy of the localization algorithm (a sketch of such a filter follows this list).
- (3) The L-MSCKF algorithm is tested on the EuRoC dataset against two other methods. It achieves an average RMSE of 0.152 m, outperforming the original MSCKF-VIO algorithm in accuracy.
Author Contributions
Funding
Data Availability Statement
Conflicts of Interest
References
- Li, X.; Song, B.; Shen, Z.; Zhou, Y.; Lyu, H.; Qin, Z. Consistent localization for autonomous robots with inter-vehicle GNSS information fusion. IEEE Commun. Lett. 2022, 27, 120–124.
- Benachenhou, K.; Bencheikh, M.L. Detection of global positioning system spoofing using fusion of signal quality monitoring metrics. Comput. Electr. Eng. 2021, 92, 107159.
- Tian, Y.; Lian, Z.; Núñez-Andrés, M.A.; Yue, Z.; Li, K.; Wang, P.; Wang, M. The application of gated recurrent unit algorithm with fused attention mechanism in UWB indoor localization. Measurement 2024, 234, 114835.
- Gao, X.; Lin, X.; Lin, F.; Huang, H. Segmentation Point Simultaneous Localization and Mapping: A Stereo Vision Simultaneous Localization and Mapping Method for Unmanned Surface Vehicles in Nearshore Environments. Electronics 2024, 13, 3106.
- Sun, T.; Liu, Y.; Wang, Y.; Xiao, Z. An improved monocular visual-inertial navigation system. IEEE Sens. J. 2020, 21, 11728–11739.
- Zhang, J.; Xu, L.; Bao, C. An Adaptive Pose Fusion Method for Indoor Map Construction. ISPRS Int. J. Geo-Inf. 2021, 10, 800.
- Zhai, G.; Zhang, W.; Hu, W.; Ji, Z. Coal mine rescue robots based on binocular vision: A review of the state of the art. IEEE Access 2020, 8, 130561–130575.
- Wang, H.; Li, Z.; Wang, H.; Cao, W.; Zhang, F.; Wang, Y. A Roadheader Positioning Method Based on Multi-Sensor Fusion. Electronics 2023, 12, 4556.
- Cheng, J.; Li, H.; Ma, K.; Liu, B.; Sun, D.; Ma, Y.; Yin, G.; Wang, G.; Li, H. Architecture and Key Technologies of Coalmine Underground Vision Computing. Coal Sci. Technol. 2023, 51, 202–218.
- Dai, X.; Mao, Y.; Huang, T.; Li, B.; Huang, D. Navigation of simultaneous localization and mapping by fusing RGB-D camera and IMU on UAV. In Proceedings of the 2019 CAA Symposium on Fault Detection, Supervision and Safety for Technical Processes (SAFEPROCESS), Xiamen, China, 5–7 July 2019; IEEE: Piscataway, NJ, USA, 2019; pp. 6–11.
- Sun, K.; Mohta, K.; Pfrommer, B.; Watterson, M.; Liu, S.; Mulgaonkar, Y.; Taylor, C.J.; Kumar, V. Robust stereo visual inertial odometry for fast autonomous flight. IEEE Robot. Autom. Lett. 2018, 3, 965–972.
- Syed, Z.F.; Aggarwal, P.; Goodall, C.; Niu, X.; El-Sheimy, N. A new multi-position calibration method for MEMS inertial navigation systems. Meas. Sci. Technol. 2007, 18, 1897.
- Liu, J.; Sun, L.; Pu, J.; Yan, Y. Hybrid cooperative localization based on robot-sensor networks. Signal Process. 2021, 188, 108242.
- Cen, R.; Jiang, T.; Tan, Y.; Su, X.; Xue, F. A low-cost visual inertial odometry for mobile vehicle based on double stage Kalman filter. Signal Process. 2022, 197, 108537.
- Wang, H.; Zhang, Y.; Shen, H.; Zhang, J. Review of image enhancement algorithms. Chin. Opt. 2017, 10, 438–448.
- Huang, S.C.; Cheng, F.C.; Chiu, Y.S. Efficient contrast enhancement using adaptive gamma correction with weighting distribution. IEEE Trans. Image Process. 2012, 22, 1032–1041.
- Wang, F.; Zhang, B.; Zhang, C.; Yan, W.; Zhao, Z.; Wang, M. Low-light image joint enhancement optimization algorithm based on frame accumulation and multi-scale Retinex. Ad Hoc Netw. 2021, 113, 102398.
- Dong, S.; Ma, J.; Su, Z.; Li, C. Robust circular marker localization under non-uniform illuminations based on homomorphic filtering. Measurement 2021, 170, 108700.
- Pizer, S.M.; Amburn, E.P.; Austin, J.D.; Cromartie, R.; Geselowitz, A.; Greer, T.; ter Haar Romeny, B.; Zimmerman, J.B.; Zuiderveld, K. Adaptive histogram equalization and its variations. Comput. Vision Graph. Image Process. 1987, 39, 355–368.
- Baek, J.; Kim, Y.; Chung, B.; Yim, C. Linear Spectral Clustering with Contrast-limited Adaptive Histogram Equalization for Superpixel Segmentation. IEIE Trans. Smart Process. Comput. 2019, 8, 255–264.
- Çiğ, H.; Güllüoğlu, M.T.; Er, M.B.; Kuran, U.; Kuran, E.C. Enhanced Disease Detection Using Contrast Limited Adaptive Histogram Equalization and Multi-Objective Cuckoo Search in Deep Learning. Trait. Signal 2023, 40, 915.
- Aboshosha, S.; Zahran, O.; Dessouky, M.I.; Abd El-Samie, F.E. Resolution and quality enhancement of images using interpolation and contrast limited adaptive histogram equalization. Multimed. Tools Appl. 2019, 78, 18751–18786.
- Yoon, J.; Choi, J.; Choe, Y. Efficient image enhancement using sparse source separation in the Retinex theory. Opt. Eng. 2017, 56, 113103.
- Cheng, J.; Yan, P.; Yu, H.; Shi, M.; Xiao, H. Image stitching method for the complicated scene of coalmine tunnel based on mismatched elimination with directed line segments. Coal Sci. Technol. 2022, 50, 179–191.
- Gong, Y.; Xie, X. Research on coal mine underground image recognition technology based on homomorphic filtering method. Coal Sci. Technol. 2023, 51, 241–250.
- Tu, Z.; Chen, C.; Pan, X.; Liu, R.; Cui, J.; Mao, J. EMA-VIO: Deep visual–inertial odometry with external memory attention. IEEE Sens. J. 2022, 22, 20877–20885.
- Zhou, X.; Wen, X.; Wang, Z.; Gao, Y.; Li, H.; Wang, Q.; Yang, T.; Lu, H.; Cao, Y.; Xu, C.; et al. Swarm of micro flying robots in the wild. Sci. Robot. 2022, 7, eabm5954.
- Forster, C.; Carlone, L.; Dellaert, F.; Scaramuzza, D. On-manifold preintegration for real-time visual–inertial odometry. IEEE Trans. Robot. 2016, 33, 1–21.
- Qin, T.; Li, P.; Shen, S. VINS-Mono: A robust and versatile monocular visual-inertial state estimator. IEEE Trans. Robot. 2018, 34, 1004–1020.
- Mourikis, A.I.; Roumeliotis, S.I. A multi-state constraint Kalman filter for vision-aided inertial navigation. In Proceedings of the 2007 IEEE International Conference on Robotics and Automation, Rome, Italy, 10–14 April 2007; IEEE: Piscataway, NJ, USA, 2007; pp. 3565–3572.
- Dissanayake, G.; Sukkarieh, S.; Nebot, E.; Durrant-Whyte, H. The aiding of a low-cost strapdown inertial measurement unit using vehicle model constraints for land vehicle applications. IEEE Trans. Robot. Autom. 2001, 17, 731–747.
- Tong, X.; Su, Y.; Li, Z.; Si, C.; Han, G.; Ning, J.; Yang, F. A double-step unscented Kalman filter and HMM-based zero-velocity update for pedestrian dead reckoning using MEMS sensors. IEEE Trans. Ind. Electron. 2019, 67, 581–591.
- Chen, S.; Li, X.; Huang, G.; Zhang, Q.; Wang, S. NHC-LIO: A Novel Vehicle Lidar-inertial Odometry (LIO) with Reliable Non-holonomic Constraint (NHC) Factor. IEEE Sens. J. 2023, 23, 26513–26523.
- Sun, R.; Yang, Y.; Chiang, K.W.; Duong, T.T.; Lin, K.Y.; Tsai, G.J. Robust IMU/GPS/VO integration for vehicle navigation in GNSS degraded urban areas. IEEE Sens. J. 2020, 20, 10110–10122.
- He, X. Research About Image Tampering Detection Based on Processing Traces–Blur Traces Detection. Master's Thesis, Beijing Jiaotong University, Beijing, China, 2012.
- He, Z.; Mo, H.; Xiao, Y.; Cui, G.; Wang, P.; Jia, L. Multi-scale fusion for image enhancement in shield tunneling: A combined MSRCR and CLAHE approach. Meas. Sci. Technol. 2024, 35, 056112.
- Trawny, N.; Roumeliotis, S.I. Indirect Kalman Filter for 3D Attitude Estimation; Technical Report; University of Minnesota, Department of Computer Science & Engineering: Minneapolis, MN, USA, 2005; Volume 2.
- Burri, M.; Nikolic, J.; Gohl, P.; Schneider, T.; Rehder, J.; Omari, S.; Achtelik, M.W.; Siegwart, R. The EuRoC micro aerial vehicle datasets. Int. J. Robot. Res. 2016, 35, 1157–1163.
- Xue, C. Research on Image Quality Evaluation Methods Based on Visual Perception and Feature Fusion. Master's Thesis, Xi'an University of Technology, Xi'an, China, 2024.
- Lin, H.; Hosu, V.; Saupe, D. KADID-10k: A large-scale artificially distorted IQA database. In Proceedings of the 2019 Eleventh International Conference on Quality of Multimedia Experience (QoMEX), Berlin, Germany, 5–7 June 2019; IEEE: Piscataway, NJ, USA, 2019; pp. 1–3.
Number of feature points extracted in each test scene under each enhancement method:

| Scene | Original | CLAHE | Homomorphic Filtering | Combined |
|---|---|---|---|---|
| Scene 1 | 1761 | 5829 | 1625 | 8570 |
| Scene 2 | 11 | 200 | 5 | 248 |
| Scene 3 | 436 | 2092 | 453 | 8728 |
| Scene 4 | 36 | 141 | 11 | 199 |
Image quality metrics (SSIM, PSNR, MSE) for test images I09 and I61 under each enhancement method:

| Metric | I09 Original | I09 CLAHE | I09 Homomorphic Filtering | I09 Combined | I61 Original | I61 CLAHE | I61 Homomorphic Filtering | I61 Combined |
|---|---|---|---|---|---|---|---|---|
| SSIM | 0.440 | 0.517 | 0.350 | 0.528 | 0.316 | 0.405 | 0.237 | 0.438 |
| PSNR/dB | 10.615 | 13.142 | 10.769 | 13.888 | 11.563 | 14.067 | 11.857 | 14.116 |
| MSE | 5643.83 | 3153.81 | 5447.59 | 2656.52 | 4536.71 | 2549.33 | 4240.16 | 2520.64 |
Absolute trajectory error (translation) of each algorithm on EuRoC sequences; all values in meters:

| Sequence | VINS-mono RMSE | VINS-mono Mean | VINS-mono Median | VINS-mono STD | MSCKF-VIO RMSE | MSCKF-VIO Mean | MSCKF-VIO Median | MSCKF-VIO STD | L-MSCKF RMSE | L-MSCKF Mean | L-MSCKF Median | L-MSCKF STD |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| V101 | 0.092 | 0.080 | 0.067 | 0.045 | 0.100 | 0.090 | 0.076 | 0.045 | 0.081 | 0.074 | 0.073 | 0.034 |
| V102 | 0.188 | 0.145 | 0.121 | 0.120 | 0.128 | 0.114 | 0.106 | 0.058 | 0.111 | 0.098 | 0.088 | 0.053 |
| V103 | 0.195 | 0.170 | 0.156 | 0.095 | 0.207 | 0.195 | 0.192 | 0.070 | 0.166 | 0.143 | 0.127 | 0.084 |
| V201 | 0.096 | 0.084 | 0.078 | 0.047 | 0.072 | 0.063 | 0.052 | 0.035 | 0.061 | 0.054 | 0.050 | 0.029 |
| V202 | 0.141 | 0.119 | 0.087 | 0.075 | 0.152 | 0.141 | 0.134 | 0.058 | 0.149 | 0.140 | 0.132 | 0.049 |
| V203 | 0.373 | 0.335 | 0.294 | 0.163 | 1.778 | 1.683 | 1.492 | 0.573 | 0.249 | 0.228 | 0.205 | 0.100 |
| MH02 | 0.193 | 0.174 | 0.164 | 0.085 | 0.184 | 0.160 | 0.134 | 0.092 | 0.148 | 0.123 | 0.102 | 0.084 |
| MH03 | 0.218 | 0.190 | 0.166 | 0.107 | 0.217 | 0.200 | 0.173 | 0.084 | 0.163 | 0.151 | 0.150 | 0.063 |
| MH04 | 0.373 | 0.355 | 0.400 | 0.116 | 0.299 | 0.272 | 0.255 | 0.123 | 0.238 | 0.221 | 0.212 | 0.088 |
Absolute rotation error of each algorithm on EuRoC sequences; all values in radians:

| Sequence | VINS-mono RMSE | VINS-mono Mean | VINS-mono Median | VINS-mono STD | MSCKF-VIO RMSE | MSCKF-VIO Mean | MSCKF-VIO Median | MSCKF-VIO STD | L-MSCKF RMSE | L-MSCKF Mean | L-MSCKF Median | L-MSCKF STD |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| V101 | 0.109 | 0.108 | 0.104 | 0.011 | 0.097 | 0.095 | 0.096 | 0.016 | 0.095 | 0.094 | 0.095 | 0.014 |
| V102 | 0.082 | 0.078 | 0.075 | 0.026 | 0.044 | 0.042 | 0.041 | 0.014 | 0.038 | 0.036 | 0.040 | 0.014 |
| V103 | 0.078 | 0.074 | 0.078 | 0.025 | 0.088 | 0.082 | 0.074 | 0.033 | 0.083 | 0.075 | 0.088 | 0.035 |
| V201 | 0.040 | 0.033 | 0.026 | 0.022 | 0.027 | 0.025 | 0.026 | 0.009 | 0.011 | 0.011 | 0.010 | 0.004 |
| V202 | 0.042 | 0.040 | 0.035 | 0.014 | 0.037 | 0.034 | 0.034 | 0.014 | 0.027 | 0.026 | 0.025 | 0.007 |
| V203 | 0.061 | 0.051 | 0.042 | 0.033 | 0.157 | 0.155 | 0.145 | 0.027 | 0.035 | 0.033 | 0.031 | 0.012 |
| MH02 | 0.038 | 0.038 | 0.037 | 0.006 | 0.048 | 0.047 | 0.046 | 0.011 | 0.048 | 0.045 | 0.044 | 0.017 |
| MH03 | 0.030 | 0.027 | 0.026 | 0.011 | 0.026 | 0.024 | 0.022 | 0.010 | 0.025 | 0.021 | 0.015 | 0.013 |
| MH04 | 0.027 | 0.020 | 0.017 | 0.018 | 0.034 | 0.031 | 0.024 | 0.014 | 0.024 | 0.021 | 0.016 | 0.012 |