An Outline of Multi-Sensor Fusion Methods for Mobile Agents Indoor Navigation
Figure 1. The extended Kalman filter (EKF) algorithm flow is divided into two parts: the time update and the measurement update. Here, $k-1$ and $k$ denote the previous and the current state, respectively; $\bar{\hat{x}}_k$ is the a priori estimate of the current state; $\hat{x}_k$ is the a posteriori estimate of the current state; $z_k$ is the measurement vector; $f(\cdot)$ is the nonlinear mapping from the previous state to the current state; $h(\cdot)$ is the nonlinear mapping between state and measurement; $\mathbf{Q}$ is the process noise covariance matrix; $\mathbf{A}$ is the Jacobian of $f(\cdot)$ with respect to $x$; $\mathbf{W}$ is the Jacobian of $f(\cdot)$ with respect to the process noise; $\mathbf{H}$ is the Jacobian of $h(\cdot)$ with respect to $x$; and $\mathbf{V}$ is the Jacobian of $h(\cdot)$ with respect to the measurement noise. (A minimal code sketch of this cycle follows the figure captions below.)
Figure 2. Several types of data provided by the Robot-at-Home dataset: (a) RGB image, (b) depth image, (c) 2D LiDAR map, (d) 3D room reconstruction, and (e) 3D room semantic reconstruction [108].
Figure 3. Three types of data provided by the Microsoft 7 Scenes dataset: (a) RGB image, (b) depth image, and (c) TSDF volume for scene reconstruction [119].
Figure 4. The two view modes designed in the ICL-NUIM RGB-D benchmark dataset [121]: (a) a variety of camera perspectives; (b) a head-up camera perspective.
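To make the two phases in the Figure 1 caption concrete, here is a minimal sketch of one EKF cycle in Python/NumPy. This is not the paper's implementation: the model callables `f`, `h` and the Jacobian callables `jac_A`, `jac_W`, `jac_H`, `jac_V` are hypothetical placeholders for a concrete motion/measurement model, and the measurement noise covariance `R` is assumed in addition to the symbols named in the caption.

```python
import numpy as np

def ekf_step(x_post, P_post, z, f, h, jac_A, jac_W, jac_H, jac_V, Q, R):
    """One EKF cycle: time update (a priori), then measurement update (a posteriori)."""
    # Time update: propagate the previous a posteriori estimate through f(.)
    x_prior = f(x_post)                          # a priori state estimate
    A = jac_A(x_post)                            # Jacobian of f w.r.t. the state x
    W = jac_W(x_post)                            # Jacobian of f w.r.t. the process noise
    P_prior = A @ P_post @ A.T + W @ Q @ W.T     # a priori error covariance

    # Measurement update: correct the prediction with the measurement z_k
    H = jac_H(x_prior)                           # Jacobian of h w.r.t. the state x
    V = jac_V(x_prior)                           # Jacobian of h w.r.t. the measurement noise
    S = H @ P_prior @ H.T + V @ R @ V.T          # innovation covariance
    K = P_prior @ H.T @ np.linalg.inv(S)         # Kalman gain
    x_post = x_prior + K @ (z - h(x_prior))      # a posteriori state estimate
    P_post = (np.eye(len(x_post)) - K @ H) @ P_prior
    return x_post, P_post
```

The loosely and tightly coupled schemes surveyed in Section 3 reuse this predict/correct cycle, typically propagating the state with IMU readings and correcting it with visual, LiDAR, or UWB measurements.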
Abstract
1. Introduction
2. Single Sensor Based Navigation
2.1. Visual Sensors
2.2. LiDAR
2.3. Inertial Measurement Unit
2.4. Ultra-Wide Band Localization System
2.5. Others and a View
3. Multi-Sensor Fusion Navigation Methods
3.1. Visual Sensor-Dominant Navigation
3.1.1. Vision Combined with Depth Information
- (1) Feature Point Matching
- (2) Optical Flow Method
- (3) Direct Method
- (4) Semantic SLAM Based on Deep Architectures
3.1.2. Vision Combined with LiDAR
- (1) Localization
- (2) Map Reconstruction
3.1.3. Vision Combined with IMU
- (1) Loosely Coupled Methods
- (2) Tightly Coupled Methods
- (3) Deep Learning Based Coupled Methods
3.1.4. Vision Combined with UWB
- (1) Feature-Level Fusion
- (2) Decision-Level Fusion
3.2. LiDAR-Dominant Navigation
3.2.1. LiDAR Combined with Visual Information
3.2.2. LiDAR Combined with IMU
- (1) Feature-Level Fusion
- (2) Decision-Level Fusion
3.3. UWB Combined with IMU
3.4. Others
4. Multi-Modal Datasets
4.1. Robot@Home: An Example of Datasets Acquired by Devices Set Up on a Robot
4.2. Microsoft 7 Scenes: An Example of Datasets Acquired by Handheld Devices
4.3. ICL-NUIM RGB-D Benchmark Dataset: An Example of Datasets Generated in Virtual 3D Environments
4.4. Others
5. Discussions and Future Trends
5.1. Discussions
5.2. Future Trends
5.2.1. Uniform Fusion Framework
5.2.2. Evaluation Methods
5.2.3. On-Line Learning and Planning
6. Conclusions
Author Contributions
Funding
Conflicts of Interest
References
- Bresson, G.; Alsayed, Z.; Yu, L.; Glaser, S. Simultaneous localization and mapping: A survey of current trends in autonomous driving. IEEE Trans. Intell. Veh. 2017, 2, 194–220. [Google Scholar] [CrossRef] [Green Version]
- Mur-Artal, R.; Montiel, J.M.M.; Tardos, J.D. ORB-SLAM: A versatile and accurate monocular SLAM system. IEEE Trans. Robot. 2015, 31, 1147–1163. [Google Scholar] [CrossRef] [Green Version]
- Kohlbrecher, S.; Von Stryk, O.; Meyer, J.; Klingauf, U. A flexible and scalable SLAM system with full 3D motion estimation. In Proceedings of the 2011 IEEE International Symposium on Safety, Security, and Rescue Robotics, Kyoto, Japan, 31 October–5 November 2011; pp. 155–160. [Google Scholar]
- Huang, G. Visual-inertial navigation: A concise review. In Proceedings of the 2019 International Conference on Robotics and Automation (ICRA), Montreal, QC, Canada, 20–24 May 2019; pp. 9572–9582. [Google Scholar]
- Liu, L.; Liu, Z.; Barrowes, B.E. Through-wall bio-radiolocation with UWB impulse radar: Observation, simulation and signal extraction. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2011, 4, 791–798. [Google Scholar] [CrossRef]
- He, S.; Chan, S.-H.G. Wi-Fi Fingerprint-based indoor positioning: Recent advances and comparisons. IEEE Commun. Surv. Tutor. 2015, 18, 466–490. [Google Scholar] [CrossRef]
- Faragher, R.; Harle, R. Location fingerprinting with bluetooth low energy beacons. IEEE J. Sel. Areas Commun. 2015, 33, 2418–2428. [Google Scholar] [CrossRef]
- Kaemarungsi, K.; Ranron, R.; Pongsoon, P. Study of received signal strength indication in ZigBee location cluster for indoor localization. In Proceedings of the 2013 10th International Conference on Electrical Engineering/Electronics, Computer, Telecommunications and Information Technology, Krabi, Thailand, 15–17 May 2013; pp. 1–6. [Google Scholar]
- Shin, Y.-S.; Kim, A. Sparse depth enhanced direct thermal-infrared SLAM beyond the visible spectrum. IEEE Robot. Autom. Lett. 2019, 4, 2918–2925. [Google Scholar] [CrossRef] [Green Version]
- Freye, C.; Bendicks, C.; Lilienblum, E.; Al-Hamadi, A. Multiple camera approach for SLAM based ultrasonic tank roof inspection. In Image Analysis and Recognition, Proceedings of the ICIAR 2014, Vilamoura, Portugal, 22–24 October 2014; Springer: Cham, Switzerland, 2014; Volume 8815, pp. 453–460. [Google Scholar]
- Cadena, C.; Carlone, L.; Carrillo, H.; Latif, Y.; Scaramuzza, D.; Neira, J.; Reid, I.D.; Leonard, J.J. Past, present, and future of simultaneous localization and mapping: Toward the robust-perception age. IEEE Trans. Robot. 2016, 32, 1309–1332. [Google Scholar] [CrossRef] [Green Version]
- Davison, A.J. Real-time simultaneous localisation and mapping with a single camera. In Proceedings of the Ninth IEEE International Conference on Computer Vision, Nice, France, 13–16 October 2003; Volume 2, pp. 1403–1410. [Google Scholar]
- Debeunne, C.; Vivet, D. A review of visual-LiDAR fusion based simultaneous localization and mapping. Sensors 2020, 20, 2068. [Google Scholar] [CrossRef] [PubMed] [Green Version]
- Kunhoth, J.; Karkar, A.; Al-Maadeed, S.; Al-Ali, A. Indoor positioning and wayfinding systems: A survey. Hum. Cent. Comput. Inf. Sci. 2020, 10, 18. [Google Scholar] [CrossRef]
- Otero, R.; Lagüela, S.; Garrido, I.; Arias, P. Mobile indoor mapping technologies: A review. Autom. Constr. 2020, 120, 103399. [Google Scholar] [CrossRef]
- Maehara, Y.; Saito, S. The relationship between processing and storage in working memory span: Not two sides of the same coin. J. Mem. Lang. 2007, 56, 212–228. [Google Scholar] [CrossRef]
- Town, C. Multi-sensory and multi-modal fusion for sentient computing. Int. J. Comput. Vis. 2006, 71, 235–253. [Google Scholar] [CrossRef] [Green Version]
- Yang, M.; Tao, J. A review on data fusion methods in multimodal human computer dialog. Virtual Real. Intell. Hardw. 2019, 1, 21–38. [Google Scholar] [CrossRef]
- Graeter, J.; Wilczynski, A.; Lauer, M. Limo: LiDAR-monocular visual odometry. In Proceedings of the 2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Madrid, Spain, 1–5 October 2018; pp. 7872–7879. [Google Scholar]
- Zhang, J.; Singh, S. Visual-LiDAR odometry and mapping: Low-drift, robust, and fast. In Proceedings of the IEEE International Conference on Robotics & Automation, Seattle, WA, USA, 26–30 May 2015. [Google Scholar]
- Mur-Artal, R.; Tardos, J.D. ORB-SLAM2: An open-source SLAM system for monocular, stereo, and RGB-D cameras. IEEE Trans. Robot. 2017, 33, 1255–1262. [Google Scholar] [CrossRef] [Green Version]
- Labbé, M.; Michaud, F. RTAB-Map as an open-source LiDAR and visual simultaneous localization and mapping library for large-scale and long-term online operation. J. Field Robot. 2018, 36, 416–446. [Google Scholar] [CrossRef]
- Klein, G.; Murray, D. Parallel tracking and mapping for small ar workspaces. In Proceedings of the 2007 6th IEEE and ACM International Symposium on Mixed and Augmented Reality, Nara, Japan, 13–16 November 2007; pp. 225–234. [Google Scholar]
- Engel, J.; Schöps, T.; Cremers, D. LSD-SLAM: Large-scale direct monocular SLAM. In Proceedings of the 2014 European Conference on Computer Vision, Zurich, Switzerland, 6–12 September 2014; Springer: Cham, Switzerland, 2014; pp. 834–849. [Google Scholar]
- Forster, C.; Zhang, Z.; Gassner, M.; Werlberger, M.; Scaramuzza, D. SVO: Semidirect visual odometry for monocular and multicamera systems. IEEE Trans. Robot. 2016, 33, 249–265. [Google Scholar] [CrossRef] [Green Version]
- Taketomi, T.; Uchiyama, H.; Ikeda, S. Visual SLAM algorithms: A survey from 2010 to 2016. IPSJ Trans. Comput. Vis. Appl. 2017, 9, 16. [Google Scholar] [CrossRef]
- Endres, F.; Hess, J.; Sturm, J.; Cremers, D.; Burgard, W. 3-D mapping with an RGB-D camera. IEEE Trans. Robot. 2014, 30, 177–187. [Google Scholar] [CrossRef]
- Kerl, C.; Sturm, J.; Cremers, D. Dense visual SLAM for RGB-D cameras. In Proceedings of the 2013 IEEE/RSJ International Conference on Intelligent Robots and Systems, Tokyo, Japan, 3–7 November 2013; pp. 2100–2106. [Google Scholar]
- Grisetti, G.; Stachniss, C.; Burgard, W. Improved techniques for grid mapping with Rao-Blackwellized particle filters. IEEE Trans. Robot. 2007, 23, 34–46. [Google Scholar] [CrossRef] [Green Version]
- Deschaud, J.E. IMLS-SLAM: Scan-to-model matching based on 3d data. In Proceedings of the IEEE International Conference on Robotics and Automation, Brisbane, Australia, 21–25 May 2018. [Google Scholar]
- Zhang, J.; Singh, S. LOAM: LiDAR odometry and mapping in real-time. In Proceedings of the Robotics: Science and Systems, Berkeley, CA, USA, 12–16 July 2014. [Google Scholar]
- Hess, W.; Kohler, D.; Rapp, H.; Andor, D. Real-time loop closure in 2D LiDAR SLAM. In Proceedings of the 2016 IEEE International Conference on Robotics and Automation (ICRA), Stockholm, Sweden, 16–21 May 2016; pp. 1271–1278. [Google Scholar]
- Zhang, R.; Hoflinger, F.; Reindl, L. Inertial sensor based indoor localization and monitoring system for emergency responders. IEEE Sensors J. 2013, 13, 838–848. [Google Scholar] [CrossRef]
- Gui, J.; Gu, D.; Wang, S.; Hu, H. A review of visual inertial odometry from filtering and optimisation perspectives. Adv. Robot. 2015, 29, 1289–1301. [Google Scholar] [CrossRef]
- Ye, H.; Chen, Y.; Liu, M. Tightly coupled 3D LiDAR inertial odometry and mapping. In Proceedings of the 2019 International Conference on Robotics and Automation (ICRA), Montreal, QC, Canada, 20–24 May 2019. [Google Scholar]
- Mourikis, A.I.; Roumeliotis, S.I. A Multi-state constraint kalman filter for vision-aided inertial navigation. In Proceedings of the 2007 IEEE International Conference on Robotics and Automation, Rome, Italy, 10–14 April 2007; pp. 3565–3572. [Google Scholar] [CrossRef]
- Young, D.P.; Keller, C.M.; Bliss, D.W.; Forsythe, K.W. Ultra-wideband (UWB) transmitter location using time difference of arrival (TDOA) techniques. In Proceedings of the Thirty-Seventh Asilomar Conference on Signals, Systems & Computers, Pacific Grove, CA, USA, 9–12 November 2003. [Google Scholar]
- Porcino, D.; Hirt, W. Ultra-wideband radio technology: Potential and challenges ahead. IEEE Commun. Mag. 2003, 41, 66–74. [Google Scholar] [CrossRef] [Green Version]
- Despaux, F.; Bossche, A.V.D.; Jaffrès-Runser, K.; Val, T. N-TWR: An accurate time-of-flight-based N-ary ranging protocol for Ultra-Wide band. Ad Hoc Netw. 2018, 79, 1–19. [Google Scholar] [CrossRef] [Green Version]
- Iwakiri, N.; Kobayashi, T. Joint TOA and AOA estimation of UWB signal using time domain smoothing. In Proceedings of the 2007 2nd International Symposium on Wireless Pervasive Computing, San Juan, PR, USA, 5–7 February 2007. [Google Scholar] [CrossRef]
- Al-Madani, B.; Orujov, F.; Maskeliūnas, R.; Damaševičius, R.; Venčkauskas, A. Fuzzy logic type-2 based wireless indoor localization system for navigation of visually impaired people in buildings. Sensors 2019, 19, 2114. [Google Scholar] [CrossRef] [Green Version]
- Orujov, F.; Maskeliūnas, R.; Damaševičius, R.; Wei, W.; Li, Y. Smartphone based intelligent indoor positioning using fuzzy logic. Future Gener. Comput. Syst. 2018, 89, 335–348. [Google Scholar] [CrossRef]
- Wietrzykowski, J.; Skrzypczynski, P. A fast and practical method of indoor localization for resource-constrained devices with limited sensing. In Proceedings of the 2020 IEEE International Conference on Robotics and Automation (ICRA), Paris, France, 31 May–31 August 2020; pp. 293–299. [Google Scholar]
- Guo, Y.; Wang, H.; Hu, Q.; Liu, H.; Liu, L.; Bennamoun, M. Deep learning for 3D point clouds: A survey. IEEE Trans. Pattern Anal. Mach. Intell. 2020, 1. [Google Scholar] [CrossRef]
- Lowe, D.G. Distinctive image features from scale-invariant keypoints. Int. J. Comput. Vis. 2004, 60, 91–110. [Google Scholar] [CrossRef]
- Li, S.; Xu, C.; Xie, M. A robust O(n) solution to the perspective-n-point problem. IEEE Trans. Pattern Anal. Mach. Intell. 2012, 34, 1444–1450. [Google Scholar] [CrossRef] [PubMed]
- Besl, P.J.; McKay, N.D. A method for registration of 3-D shapes. IEEE Trans. Pattern Anal. Mach. Intell. 1992, 14, 239–256. [Google Scholar] [CrossRef]
- Pomerleau, F.; Colas, F.; Siegwart, R. A review of point cloud registration algorithms for mobile robotics. Found. Trends Robot. 2015, 4, 1–104. [Google Scholar] [CrossRef] [Green Version]
- Barone, F.; Marrazzo, M.; Oton, C.J. Camera calibration with weighted direct linear transformation and anisotropic uncertainties of image control points. Sensors 2020, 20, 1175. [Google Scholar] [CrossRef] [PubMed] [Green Version]
- Li, T.; Pei, L.; Xiang, Y.; Wu, Q.; Xia, S.; Tao, L.; Yu, W. P3-LOAM: PPP/LiDAR loosely coupled SLAM with accurate covariance estimation and robust RAIM in urban canyon environment. IEEE Sens. J. 2021, 21, 6660–6671. [Google Scholar] [CrossRef]
- Zhang, H.; Ye, C. DUI-VIO: Depth uncertainty incorporated visual inertial odometry based on an RGB-D camera. In Proceedings of the 2020 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Las Vegas, NV, USA, 24 October 2020–24 January 2021; pp. 5002–5008. [Google Scholar]
- Sorkine, O. Least-squares rigid motion using SVD. Tech. Notes 2009, 120, 52. [Google Scholar]
- Triggs, B.; McLauchlan, P.F.; Hartley, R.I.; Fitzgibbon, A.W. Bundle adjustment—A modern synthesis. In Vision Algorithms: Theory and Practice, Proceedings of the International Workshop on Vision Algorithms, Corfu, Greece, 21–22 September 1999; Springer: Cham, Switzerland, 1999. [Google Scholar]
- Bouguet, J.Y. Pyramidal implementation of the affine Lucas Kanade feature tracker: Description of the algorithm. Intel Corp. 2001, 5, 1–10. [Google Scholar]
- Ojala, T.; Pietikainen, M.; Maenpaa, T. Multiresolution gray-scale and rotation invariant texture classification with local binary patterns. IEEE Trans. Pattern Anal. Mach. Intell. 2002, 24, 971–987. [Google Scholar] [CrossRef]
- Zhang, T.; Zhang, H.; Nakamura, Y.; Yang, L.; Zhang, L. Flowfusion: Dynamic dense RGB-D SLAM based on optical flow. In Proceedings of the 2020 IEEE International Conference on Robotics and Automation (ICRA), Paris, France, 31 May–31 August 2020. [Google Scholar]
- Xu, J.; Ranftl, R.; Koltun, V. Accurate optical flow via direct cost volume processing. In Proceedings of the 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Honolulu, HI, USA, 21–26 July 2017; pp. 5807–5815. [Google Scholar]
- Ma, L.; Stuckler, J.; Kerl, C.; Cremers, D. Multi-view deep learning for consistent semantic mapping with RGB-D cameras. In Proceedings of the 2017 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Vancouver, BC, Canada, 24–28 September 2017; pp. 598–605. [Google Scholar]
- Qi, X.; Liao, R.; Jia, J.; Fidler, S.; Urtasun, R. 3D graph neural networks for RGBD semantic segmentation. In Proceedings of the 2017 IEEE International Conference on Computer Vision (ICCV), Venice, Italy, 22–29 October 2017. [Google Scholar]
- Liao, Y.; Huang, L.; Wang, Y.; Kodagoda, S.; Yu, Y.; Liu, Y. Parse geometry from a line: Monocular depth estimation with partial laser observation. In Proceedings of the 2017 IEEE International Conference on Robotics and Automation (ICRA), Singapore, 29 May–3 June 2017; pp. 5059–5066. [Google Scholar]
- Shin, Y.S.; Park, Y.S.; Kim, A. Direct visual SLAM using sparse depth for camera-LiDAR system. In Proceedings of the 2018 International Conference on Robotics and Automation, Brisbane, Australia, 21–25 May 2018. [Google Scholar]
- De Silva, V.; Roche, J.; Kondoz, A. Fusion of LiDAR and camera sensor data for environment sensing in driverless vehicles. arXiv 2017, arXiv:1710.06230. [Google Scholar]
- Scherer, S.; Rehder, J.; Achar, S.; Cover, H.; Chambers, A.; Nuske, S.; Singh, S. River mapping from a flying robot: State estimation, river detection, and obstacle mapping. Auton. Robot. 2012, 33, 189–214. [Google Scholar] [CrossRef]
- Huang, K.; Xiao, J.; Stachniss, C. Accurate direct visual-laser odometry with explicit occlusion handling and plane detection. In Proceedings of the 2019 International Conference on Robotics and Automation (ICRA), Montreal, QC, Canada, 20–24 May 2019. [Google Scholar]
- Pascoe, G.; Maddern, W.; Newman, P. Direct visual localisation and calibration for road vehicles in changing city environments. In Proceedings of the 2015 IEEE International Conference on Computer Vision Workshop (ICCVW), Santiago, Chile, 7–13 December 2015; pp. 98–105. [Google Scholar]
- Zhen, W.; Hu, Y.; Yu, H.; Scherer, S. LiDAR-enhanced structure-from-motion. In Proceedings of the 2020 IEEE International Conference on Robotics and Automation (ICRA), Paris, France, 31 May–31 August 2020; pp. 6773–6779. [Google Scholar]
- Park, C.; Moghadam, P.; Kim, S.; Sridharan, S.; Fookes, C. Spatiotemporal camera-LiDAR calibration: A targetless and structureless approach. IEEE Robot. Autom. Lett. 2020, 5, 1556–1563. [Google Scholar] [CrossRef] [Green Version]
- Kummerle, J.; Kuhner, T. Unified intrinsic and extrinsic camera and LiDAR calibration under uncertainties. In Proceedings of the 2020 IEEE International Conference on Robotics and Automation (ICRA), Paris, France, 31 May–31 August 2020; pp. 6028–6034. [Google Scholar]
- Zhu, Y.; Li, C.; Zhang, Y. Online camera-LiDAR calibration with sensor semantic information. In Proceedings of the 2020 IEEE International Conference on Robotics and Automation (ICRA), Paris, France, 31 May–31 August 2020; pp. 4970–4976. [Google Scholar]
- Delmerico, J.; Scaramuzza, D. A benchmark comparison of monocular visual-inertial odometry algorithms for flying robots. In Proceedings of the 2018 IEEE International Conference on Robotics and Automation (ICRA), Brisbane, Australia, 21–25 May 2018; pp. 2502–2509. [Google Scholar] [CrossRef]
- Sun, S.-L.; Deng, Z.-L. Multi-sensor optimal information fusion Kalman filter. Automatica 2004, 40, 1017–1023. [Google Scholar] [CrossRef]
- Weiss, S.; Siegwart, R. Real-time metric state estimation for modular vision-inertial systems. In Proceedings of the 2011 IEEE International Conference on Robotics and Automation, Shanghai, China, 9–13 May 2011; pp. 4531–4537. [Google Scholar] [CrossRef] [Green Version]
- Lynen, S.; Achtelik, M.W.; Weiss, S.; Chli, M.; Siegwart, R. A robust and modular multi-sensor fusion approach applied to MAV navigation. In Proceedings of the 2013 IEEE/RSJ International Conference on Intelligent Robots and Systems, Tokyo, Japan, 3–7 November 2013; pp. 3923–3929. [Google Scholar]
- Bloesch, M.; Omari, S.; Hutter, M.; Siegwart, R. Robust visual inertial odometry using a direct EKF-based approach. In Proceedings of the 2015 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Hamburg, Germany, 28 September–2 October 2015; pp. 298–304. [Google Scholar]
- Leutenegger, S.; Lynen, S.; Bosse, M.; Siegwart, R.; Furgale, P. Keyframe-based visual–inertial odometry using nonlinear optimization. Int. J. Robot. Res. 2015, 34, 314–334. [Google Scholar] [CrossRef] [Green Version]
- Qin, T.; Li, P.; Shen, S. VINS-Mono: A robust and versatile monocular visual-inertial state estimator. IEEE Trans. Robot. 2018, 34, 1004–1020. [Google Scholar] [CrossRef] [Green Version]
- Li, M.; Mourikis, A.I. High-precision, consistent EKF-based visual-inertial odometry. Int. J. Robot. Res. 2013, 32, 690–711. [Google Scholar] [CrossRef]
- Kim, C.; Sakthivel, R.; Chung, W.K. Unscented FastSLAM: A robust and efficient solution to the SLAM problem. IEEE Trans. Robot. 2008, 24, 808–820. [Google Scholar] [CrossRef]
- Thrun, S.; Montemerlo, M. The graph SLAM algorithm with applications to large-scale mapping of urban structures. Int. J. Robot. Res. 2006, 25, 403–429. [Google Scholar] [CrossRef]
- Chen, C.; Wang, B.; Lu, C.X.; Trigoni, N.; Markham, A. A survey on deep learning for localization and mapping: Towards the age of spatial machine intelligence. arXiv 2020, arXiv:2006.12567. [Google Scholar]
- Clark, R.; Wang, S.; Wen, H.; Markham, A.; Trigoni, N. Vinet: Visual-inertial odometry as a sequence-to-sequence learning problem. In Proceedings of the 2017 AAAI Conference on Artificial Intelligence, San Francisco, CA, USA, 4–9 February 2017. [Google Scholar]
- Han, L.; Lin, Y.; Du, G.; Lian, S. DeepVIO: Self-supervised deep learning of monocular visual inertial odometry using 3D geometric constraints. In Proceedings of the 2019 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Macau, China, 3–8 November 2019; pp. 6906–6913. [Google Scholar]
- Benini, A.; Mancini, A.; Longhi, S. An IMU/UWB/vision-based extended Kalman filter for mini-UAV localization in indoor environment using 802.15.4a wireless sensor network. J. Intell. Robot. Syst. 2013, 70, 461–476. [Google Scholar] [CrossRef]
- Masiero, A.; Perakis, H.; Gabela, J.; Toth, C.; Gikas, V.; Retscher, G.; Goel, S.; Kealy, A.; Koppányi, Z.; Błaszczak-Bak, W.; et al. Indoor navigation and mapping: Performance analysis of UWB-based platform positioning. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2020, 549–555. [Google Scholar] [CrossRef]
- Queralta, J.P.; Almansa, C.M.; Schiano, F.; Floreano, D.; Westerlund, T. UWB-based system for UAV localization in GNSS-denied environments: Characterization and dataset. In Proceedings of the 2020 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Las Vegas, NV, USA, 24 October 2020–24 January 2021; pp. 4521–4528. [Google Scholar]
- Zhu, Z.; Yang, S.; Dai, H.; Li, F. Loop detection and correction of 3D laser-based SLAM with visual information. In Proceedings of the 31st International Conference on Computer Animation and Social Agents (CASA 2018), Beijing, China, 21–23 May 2018; Association for Computing Machinery (ACM): New York, NY, USA, 2018; pp. 53–58. [Google Scholar]
- Pandey, G.; Mcbride, J.R.; Savarese, S.; Eustice, R.M. Visually bootstrapped generalized ICP. In Proceedings of the IEEE International Conference on Robotics & Automation, Shanghai, China, 9–13 May 2011. [Google Scholar]
- Ratz, S.; Dymczyk, M.; Siegwart, R.; Dubé, R. Oneshot global localization: Instant LiDAR-visual pose estimation. In Proceedings of the 2020 IEEE International Conference on Robotics and Automation (ICRA), Paris, France, 31 May–31 August 2020. [Google Scholar]
- Zhang, J.; Ramanagopal, M.S.; Vasudevan, R.; Johnson-Roberson, M. LiStereo: Generate dense depth maps from LiDAR and Stereo Imagery. In Proceedings of the 2020 IEEE International Conference on Robotics and Automation (ICRA), Paris, France, 31 May–31 August 2020; pp. 7829–7836. [Google Scholar]
- Liang, J.; Patel, U.; Sathyamoorthy, A.J.; Manocha, D. Realtime collision avoidance for mobile robots in dense crowds using implicit multi-sensor fusion and deep reinforcement learning. arXiv 2020, arXiv:2004.03089v2. [Google Scholar]
- Surmann, H.; Jestel, C.; Marchel, R.; Musberg, F.; Elhadj, H.; Ardani, M. Deep reinforcement learning for real autonomous mobile robot navigation in indoor environments. arXiv 2020, arXiv:2005.13857. [Google Scholar]
- Hol, J.D.; Dijkstra, F.; Luinge, H.; Schon, T.B. Tightly coupled UWB/IMU pose estimation. In Proceedings of the 2009 IEEE International Conference on Ultra-Wideband, Vancouver, BC, Canada, 9–11 September 2009; pp. 688–692. [Google Scholar]
- Qin, C.; Ye, H.; Pranata, C.; Han, J.; Zhang, S.; Liu, M. R-lins: A robocentric LiDAR-inertial state estimator for robust and efficient navigation. arXiv 2019, arXiv:1907.02233. [Google Scholar]
- Moore, J.B. Discrete-time fixed-lag smoothing algorithms. Automatica 1973, 9, 163–173. [Google Scholar] [CrossRef]
- Shan, T.; Englot, B.; Meyers, D.; Wang, W.; Rus, D. LIO-SAM: Tightly-coupled LiDAR inertial odometry via smoothing and mapping. arXiv 2020, arXiv:2007.00258v2. [Google Scholar]
- Velas, M.; Spanel, M.; Hradis, M.; Herout, A. CNN for IMU assisted odometry estimation using velodyne LiDAR. In Proceedings of the 2018 IEEE International Conference on Autonomous Robot Systems and Competitions (ICARSC), Lisbon, Portugal, 25–27 April 2018; pp. 71–77. [Google Scholar]
- Le Gentil, C.; Vidal-Calleja, T.; Huang, S. 3D LiDAR-IMU calibration based on upsampled preintegrated measurements for motion distortion correction. In Proceedings of the 2018 IEEE International Conference on Robotics and Automation (ICRA), Brisbane, Australia, 21–25 May 2018; pp. 2149–2155. [Google Scholar]
- Mueller, M.W.; Hamer, M.; D’Andrea, R. Fusing ultra-wideband range measurements with accelerometers and rate gyroscopes for quadrocopter state estimation. In Proceedings of the 2015 IEEE International Conference on Robotics and Automation (ICRA), Seattle, WA, USA, 26–30 May 2015; pp. 1730–1736. [Google Scholar]
- Corrales, J.A.; Candelas, F.A.; Torres, F. Hybrid tracking of human operators using IMU/UWB data fusion by a Kalman filter. In Proceedings of the 3rd International Conference on Intelligent Information Processing; Association for Computing Machinery (ACM), Amsterdam, The Netherlands, 12–15 March 2008; pp. 193–200. [Google Scholar]
- Zhang, M.; Xu, X.; Chen, Y.; Li, M. A Lightweight and accurate localization algorithm using multiple inertial measurement units. IEEE Robot. Autom. Lett. 2020, 5, 1508–1515. [Google Scholar] [CrossRef] [Green Version]
- Ding, X.; Wang, Y.; Li, D.; Tang, L.; Yin, H.; Xiong, R. Laser map aided visual inertial localization in changing environment. In Proceedings of the 2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Madrid, Spain, 1–5 October 2018; pp. 4794–4801. [Google Scholar]
- Zuo, X.; Yang, Y.; Geneva, P.; Lv, J.; Liu, Y.; Huang, G.; Pollefeys, M. Lic-fusion 2.0: LiDAR-inertial-camera odometry with sliding-window plane-feature tracking. arXiv 2020, arXiv:2008.07196. [Google Scholar]
- Jiang, G.; Yin, L.; Jin, S.; Tian, C.; Ma, X.; Ou, Y. A simultaneous localization and mapping (SLAM) framework for 2.5D map building based on low-cost LiDAR and vision fusion. Appl. Sci. 2019, 9, 2105. [Google Scholar] [CrossRef] [Green Version]
- Tian, M.; Nie, Q.; Shen, H. 3D scene geometry-aware constraint for camera localization with deep learning. In Proceedings of the 2020 IEEE International Conference on Robotics and Automation (ICRA), Paris, France, 31 May–31 August 2020; pp. 4211–4217. [Google Scholar]
- Robot@Home Dataset. Available online: http://mapir.isa.uma.es/mapirwebsite/index.php/mapir-downloads/203-robot-at-home-dataset (accessed on 9 January 2021).
- Rgb-D Dataset 7-Scenes—Microsoft Research. Available online: https://www.microsoft.com/en-us/research/project/rgb-d-dataset-7-scenes/ (accessed on 9 January 2021).
- Imperial College London. ICL-NUIM RGB-D Benchmark Dataset. Available online: http://www.doc.ic.ac.uk/~ahanda/VaFRIC/iclnuim.html (accessed on 9 January 2021).
- Ruiz-Sarmiento, J.R.; Galindo, C.; Gonzalez-Jimenez, J. Robot@Home, a robotic dataset for semantic mapping of home environments. Int. J. Robot. Res. 2017, 36, 131–141. [Google Scholar] [CrossRef] [Green Version]
- Ruiz-Sarmiento, J.R.; Galindo, C.; Gonzalez-Jimenez, J. Building multiversal semantic maps for mobile robot operation. Knowl. Based Syst. 2017, 119, 257–272. [Google Scholar] [CrossRef]
- Jaimez, M.; Monroy, J.; Lopez-Antequera, M.; Gonzalez-Jimenez, J. Robust planar odometry based on symmetric range flow and multiscan alignment. IEEE Trans. Robot. 2018, 34, 1623–1635. [Google Scholar]
- Moreno, F.-A.; Monroy, J.; Ruiz-Sarmiento, J.-R.; Galindo, C.; Gonzalez-Jimenez, J. Automatic waypoint generation to improve robot navigation through narrow spaces. Sensors 2019, 20, 240. [Google Scholar] [CrossRef] [Green Version]
- Fallon, M.; Johannsson, H.; Kaess, M.; Leonard, J.J. The MIT Stata Center dataset. Int. J. Robot. Res. 2013, 32, 1695–1699. [Google Scholar] [CrossRef] [Green Version]
- Huitl, R.; Schroth, G.; Hilsenbeck, S.; Schweiger, F.; Steinbach, E. TUMindoor: An extensive image and point cloud dataset for visual indoor localization and mapping. In Proceedings of the 2012 19th IEEE International Conference on Image Processing, Orlando, FL, USA, 30 September–3 October 2012; pp. 1773–1776. [Google Scholar]
- Blanco-Claraco, J.-L.; Moreno-Dueñas, F.-Á.; González-Jiménez, J. The Málaga urban dataset: High-rate stereo and LiDAR in a realistic urban scenario. Int. J. Robot. Res. 2014, 33, 207–214. [Google Scholar] [CrossRef] [Green Version]
- Geiger, A.; Lenz, P.; Stiller, C.; Urtasun, R. Vision meets robotics: The KITTI dataset. Int. J. Robot. Res. 2013, 32, 1231–1237. [Google Scholar] [CrossRef] [Green Version]
- Rusli, I.; Trilaksono, B.R.; Adiprawita, W. RoomSLAM: Simultaneous localization and mapping with objects and indoor layout structure. IEEE Access 2020, 8, 196992–197004. [Google Scholar] [CrossRef]
- Nikoohemat, S.; Diakité, A.A.; Zlatanova, S.; Vosselman, G. Indoor 3D reconstruction from point clouds for optimal routing in complex buildings to support disaster management. Autom. Constr. 2020, 113, 103109. [Google Scholar] [CrossRef]
- Feng, D.; Haase-Schutz, C.; Rosenbaum, L.; Hertlein, H.; Glaser, C.; Timm, F.; Wiesbeck, W.; Dietmayer, K. Deep multi-modal object detection and semantic segmentation for autonomous driving: Datasets, methods, and challenges. IEEE Trans. Intell. Transp. Syst. 2020, 2020, 2972974. [Google Scholar] [CrossRef] [Green Version]
- Glocker, B.; Izadi, S.; Shotton, J.; Criminisi, A. Real-time RGB-D camera relocalization. In Proceedings of the IEEE International Symposium on Mixed & Augmented Reality, Adelaide, Australia, 1–4 October 2013. [Google Scholar]
- Shotton, J.; Glocker, B.; Zach, C.; Izadi, S.; Criminisi, A.; FitzGibbon, A. Scene coordinate regression forests for camera relocalization in RGB-D images. In Proceedings of the 2013 IEEE Conference on Computer Vision and Pattern Recognition, Portland, OR, USA, 23–28 June 2013; pp. 2930–2937. [Google Scholar]
- Handa, A.; Whelan, T.; McDonald, J.; Davison, A.J. A benchmark for RGB-D visual odometry, 3D reconstruction and SLAM. In Proceedings of the 2014 IEEE International Conference on Robotics and Automation (ICRA), Hong Kong, China, 31 May–7 June 2014; pp. 1524–1531. [Google Scholar]
- Shetty, A.; Gao, G.X. UAV pose estimation using cross-view geolocalization with satellite imagery. In Proceedings of the 2019 International Conference on Robotics and Automation (ICRA), Montreal, QC, Canada, 20–24 May 2019. [Google Scholar]
- Whelan, T.; Leutenegger, S.; Salas-Moreno, R.F.; Glocker, B.; Davison, A.J. Elasticfusion: Dense SLAM without a pose graph. In Proceedings of the Robotics: Science & Systems 2015, Rome, Italy, 13–17 July 2015. [Google Scholar]
- Tateno, K.; Tombari, F.; Laina, I.; Navab, N. CNN-SLAM: Real-time dense monocular SLAM with learned depth prediction. In Proceedings of the 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Honolulu, HI, USA, 21–26 July 2017; pp. 6565–6574. [Google Scholar]
- Delmerico, J.; Cieslewski, T.; Rebecq, H.; Faessler, M.; Scaramuzza, D. Are we ready for autonomous drone racing? The UZH-FPV drone racing dataset. In Proceedings of the 2019 International Conference on Robotics and Automation (ICRA), Montreal, QC, Canada, 20–24 May 2019; pp. 6713–6719. [Google Scholar]
- Sturm, J.; Engelhard, N.; Endres, F.; Burgard, W.; Cremers, D. A benchmark for the evaluation of RGB-D SLAM systems. In Proceedings of the 2012 IEEE/RSJ International Conference on Intelligent Robots and Systems, Vilamoura-Algarve, Portugal, 7–12 October 2012; pp. 573–580. [Google Scholar]
- Dai, A.; Chang, A.X.; Savva, M.; Halber, M.; Funkhouser, T.; Niessner, M. ScanNet: Richly-annotated 3D reconstructions of indoor scenes. In Proceedings of the 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Honolulu, HI, USA, 21–26 July 2017; pp. 2432–2443. [Google Scholar]
- Silberman, N.; Hoiem, D.; Kohli, P.; Fergus, R. Indoor segmentation and support inference from RGBD images. In Proceedings of the 2012 European Conference on Computer Vision (ECCV), Firenze, Italy, 7–13 October 2012; pp. 746–760. [Google Scholar]
- Li, W.; Saeedi, S.; McCormac, J.; Clark, R.; Tzoumanikas, D.; Ye, Q.; Huang, Y.; Tang, R.; Leutenegger, S. Interiornet: Mega-scale multi-sensor photo-realistic indoor scenes dataset. arXiv 2018, arXiv:1809.00716. [Google Scholar]
- McCormac, J.; Handa, A.; Leutenegger, S.; Davison, A.J. Scenenet RGB-D: 5m photorealistic images of synthetic indoor trajectories with ground truth. arXiv 2016, arXiv:1612.05079. [Google Scholar]
- Gehrig, D.; Rebecq, H.; Gallego, G.; Scaramuzza, D. EKLT: Asynchronous photometric feature tracking using events and frames. Int. J. Comput. Vis. 2020, 128, 601–618. [Google Scholar] [CrossRef]
- Rodriguez-Gomez, J.; Eguiluz, A.G.; Dios, J.M.-D.; Ollero, A. Asynchronous event-based clustering and tracking for intrusion monitoring in UAS. In Proceedings of the 2020 IEEE International Conference on Robotics and Automation (ICRA), Paris, France, 31 May–31 August 2020; pp. 8518–8524. [Google Scholar]
- Kähler, O.; Prisacariu, V.A.; Murray, D.W. Real-time large-scale dense 3D reconstruction with loop closure. In Proceedings of the 2016 European Conference on Computer Vision (ECCV), Amsterdam, The Netherlands, 8–16 October 2016; pp. 500–516. [Google Scholar]
- Taira, H.; Okutomi, M.; Sattler, T.; Cimpoi, M.; Pollefeys, M.; Sivic, J.; Pajdla, T.; Torii, A. InLoc: Indoor visual localization with dense matching and view synthesis. IEEE Trans. Pattern Anal. Mach. Intell. 2019, 1. [Google Scholar] [CrossRef] [Green Version]
- Qi, C.R.; Yi, L.; Su, H.; Guibas, L.J. Pointnet++: Deep hierarchical feature learning on point sets in a metric space. arXiv 2017, arXiv:1706.02413. [Google Scholar]
- Li, Y.; Bu, R.; Sun, M.; Wu, W.; Di, X.; Chen, B. Pointcnn: Convolution on x-transformed points. arXiv 2018, arXiv:1801.07791. [Google Scholar]
- Long, J.; Shelhamer, E.; Darrell, T. Fully convolutional networks for semantic segmentation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Boston, MA, USA, 7–12 June 2015; pp. 3431–3440. [Google Scholar]
- Badrinarayanan, V.; Kendall, A.; Cipolla, R. SegNet: A deep convolutional encoder-decoder architecture for image segmentation. IEEE Trans. Pattern Anal. Mach. Intell. 2017, 39, 2481–2495. [Google Scholar] [CrossRef] [PubMed]
- Eigen, D.; Fergus, R. Predicting depth, surface normals and semantic labels with a common multi-scale convolutional architecture. In Proceedings of the 2015 IEEE International Conference on Computer Vision (ICCV), Santiago, Chile, 7–13 December 2015; pp. 2650–2658. [Google Scholar]
- Behley, J.; Garbade, M.; Milioto, A.; Quenzel, J.; Behnke, S.; Stachniss, C.; Gall, J. SemanticKITTI: A dataset for semantic scene understanding of LiDAR Sequences. In Proceedings of the 2019 IEEE/CVF International Conference on Computer Vision (ICCV), Seoul, Korea, 27 October–2 November 2019; pp. 9296–9306. [Google Scholar]
- Xu, B.; Li, W.; Tzoumanikas, D.; Bloesch, M.; Davison, A.; Leutenegger, S. MID-Fusion: Octree-based object-level multi-instance dynamic SLAM. In Proceedings of the 2019 International Conference on Robotics and Automation (ICRA), Montreal, QC, Canada, 20–24 May 2019. [Google Scholar]
- Zhang, S.; He, F.; Ren, W.; Yao, J. Joint learning of image detail and transmission map for single image dehazing. Vis. Comput. 2018, 36, 305–316. [Google Scholar] [CrossRef]
- Armeni, I.; Sax, S.; Zamir, A.R.; Savarese, S. Joint 2D–3D-semantic data for indoor scene understanding. arXiv 2017, arXiv:1702.01105. [Google Scholar]
- Tremblay, J.; To, T.; Sundaralingam, B.; Xiang, Y.; Fox, D.; Birchfield, S. Deep object pose estimation for semantic robotic grasping of household objects. arXiv 2018, arXiv:1809.10790. [Google Scholar]
- Bujanca, M.; Gafton, P.; Saeedi, S.; Nisbet, A.; Bodin, B.; O’Boyle, M.F.P.; Davison, A.J.; Kelly, P.H.J.; Riley, G.; Lennox, B.; et al. SLAMBench 3.0: Systematic automated reproducible evaluation of SLAM systems for robot vision challenges and scene understanding. In Proceedings of the 2019 International Conference on Robotics and Automation, Montreal, QC, Canada, 20–24 May 2019. [Google Scholar]
- Zhang, Z.; Scaramuzza, D. A tutorial on quantitative trajectory evaluation for visual(-inertial) odometry. In Proceedings of the 2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Madrid, Spain, 1–5 October 2018; pp. 7244–7251. [Google Scholar]
Sensor | Methods | Environments | Advantages | Disadvantages |
---|---|---|---|---|
Monocular Camera | ORB-SLAM [2] LSD-SLAM [24] SVO [25] | Small/medium indoors | Robust and versatile | Sensitive to illumination changes |
Stereo Camera | ORB-SLAM 2 [21] RTAB-MAP [22] | Small/medium indoors | Higher accuracy, but computationally heavier | Sensitive to illumination changes
RGB-D Camera | RGBD-SLAM-V2 [27] DVO-SLAM [28] RTAB-MAP [22] | Small/medium indoors | Direct acquisition of depth information | Significant noise
LiDAR | GMapping [29] Cartographer [32] IMLS-SLAM [30] LOAM [31] | Both indoors and outdoors | High measurement accuracy | Insufficient in supporting object detection |
IMU | Combined with LiDAR: LIO [35] Combined with camera: MSCKF [36] | Self-contained (measures only its own motion) | Accurate in the short term | Unavoidable and significant cumulative errors
UWB System | Ranging algorithms: TOF [39], AOA [40], TOA [40], and TDOA [37] (see the ranging sketch after this table) | Indoors | High location accuracy | Insufficient flexibility in applications
Wi-Fi, Bluetooth, ZigBee, infrared, ultrasonic, etc. | Distance measurement methods similar to the UWB system [6,7,8,9,10] | Mainly indoors | Highly applicable, e.g., to navigation aids for visually impaired people | Insufficient location accuracy and vulnerable to occlusion interference
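The UWB row of Table 1 names ranging algorithms (TOF, AOA, TOA, TDOA) as the basis of localization. As an illustration of how ranges to known anchors become a position fix, below is a minimal TOA/TOF-style trilateration sketch using Gauss-Newton least squares; the anchor layout, noise level, and the helper name `trilaterate` are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def trilaterate(anchors, ranges, iters=10):
    """Estimate a 2D tag position from measured distances to known UWB anchors."""
    x = anchors.mean(axis=0)                    # initial guess: anchor centroid
    for _ in range(iters):
        diff = x - anchors                      # vectors from each anchor to the guess
        d = np.linalg.norm(diff, axis=1)        # predicted ranges at the current guess
        residual = d - ranges                   # predicted minus measured ranges
        J = diff / d[:, None]                   # Jacobian of each range w.r.t. x
        x = x - np.linalg.solve(J.T @ J, J.T @ residual)  # Gauss-Newton step
    return x

# Hypothetical square anchor layout and a noisy ranging simulation.
anchors = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0], [10.0, 10.0]])
true_pos = np.array([3.0, 4.0])
ranges = np.linalg.norm(anchors - true_pos, axis=1) + np.random.normal(0.0, 0.05, 4)
print(trilaterate(anchors, ranges))             # ~[3, 4] despite ranging noise
```

TDOA systems solve an analogous least-squares problem over range differences rather than absolute ranges, which removes the need to synchronize the tag's clock with the anchors'.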
Dominant Sensor | Assisted Sensor | Related Works |
---|---|---|
Visual Sensor | Depth | Motion Estimation Methods: Feature points’ matching (PnP [46]/ICP [47]) Optical flow (LK optical flow [54], based on deep learning [56,57]) Direct method (sparse/dense/semi-dense [27,28]) Semantic SLAM based on deep architectures [58,59] |
LiDAR | Fusion location in feature level [34,60,61,62] Assistance of edge detection of visual detection [63,64,65] | |
IMU | Loosely Coupled (ssf [72]/msf [73]) Tightly Coupled (Filtering Methods: MSCKF [36]/ROVIO [74] and Optimization Methods: OKVIS [75]/VINS-mono [76]) Deep learning based coupled methods [81,82] | |
UWB System | Feature-level fusion (EKF fusion [83], based on deep learning [84]) Decision-level fusion (Fusion loop detection [85]) | |
LiDAR | Visual Sensor | Fusion loop detection [86,87] Optimization of 3D LiDAR point cloud [13] Based on deep learning [88,89] Based on DRL [90,91] |
IMU | Feature-level fusion (based on EKF [92,93], nonlinear optimization methods [35,94,95]); Decision-level fusion (fusion motion estimation [31,97]; see the fusion sketch after this table) |
UWB System | IMU | EKF fusion location [98,99] |
Others | No dominant sensor (V-LOAM [20], and others [102,103,104]) |
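Table 2 repeatedly distinguishes feature-level from decision-level fusion. A simple form of the latter, in the spirit of the optimal information fusion Kalman filter [71], combines two independently produced state estimates by weighting each with its inverse covariance. The sketch below assumes uncorrelated estimation errors, and the example values are purely illustrative.

```python
import numpy as np

def fuse_estimates(x1, P1, x2, P2):
    """Covariance-weighted fusion of two independent state estimates."""
    I1, I2 = np.linalg.inv(P1), np.linalg.inv(P2)
    P_fused = np.linalg.inv(I1 + I2)            # fused covariance
    x_fused = P_fused @ (I1 @ x1 + I2 @ x2)     # inverse-covariance-weighted mean
    return x_fused, P_fused

# Illustrative 2D position estimates: a confident visual fix and a drifty IMU one.
x_vis, P_vis = np.array([1.02, 2.01]), np.diag([0.01, 0.01])
x_imu, P_imu = np.array([1.30, 1.80]), np.diag([0.25, 0.25])
x, P = fuse_estimates(x_vis, P_vis, x_imu, P_imu)
print(x)   # dominated by the lower-covariance visual estimate
```

Feature-level (tightly coupled) methods instead stack the raw measurements of all sensors into a single estimator, which generally improves accuracy at the cost of coupling the sensors' failure modes.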
Category | Representatives | Modalities | Related Works | Year |
---|---|---|---|---|
Devices Setup on Robots | Robot@Home [108] | RGB, Depth, LiDAR, IMU | Semantic Structure Analysis [109], LiDAR Odometry [110], Path Planning [111], etc. | 2017
UZH-FPV Drone Racing [125] | RGB, Depth, IMU | Visual Feature Tracking [131], UAV Navigation [132], etc. | 2019 |
TUM RGB-D Dataset [126] | RGB, Depth, Camera Posture | Visual SLAM [2,21,24,25,28] | 2012 |
Datasets Collected by Hand | Microsoft 7 Scenes [119] | RGB, Depth, Camera Posture | Camera Relocation [119,120], 3D Reconstruction [133], RGB-D SLAM [134], etc. | 2013
ScanNet [127] | RGB, Depth, Camera Posture, Semantic Labels | Visual Feature Extraction [135], 3D Point Cloud [136], etc. | 2017 |
NYU V2 [128] | RGB, Depth, Semantic Labels | Semantic Segmentation [137,138,139] | 2012 |
Datasets of 3D Virtual Scenes | ICL-NUIM RGB-D Benchmark [121] | RGB, Depth, Camera Posture | UAV Pose Estimation [122], RGB-D SLAM [123,124], etc. | 2014
InteriorNet [129] | RGB, Depth, Semantic Labels | Semantic Dataset [140], RGB-D SLAM [141], Computer Vision [142], etc. | 2018 |
SceneNet RGB-D [130] | RGB, Depth, Camera Posture, Semantic Labels | Semantic Dataset [143], Fusion Pose Estimation [144], etc. | 2016
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
© 2021 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).