Three-Dimensional Visualization System with Spatial Information for Navigation of Tele-Operated Robots
Figure 1. Demonstration of the wrap-around view monitor (WAVM).
Figure 2. Schematic of the proposed system. RoI: region of interest.
Figure 3. Fisheye camera image with distortion.
Figure 4. The spherical geometric model.
Figure 5. Undistorted image.
Figure 6. Cropped and warped images of RoIs A and B in Figure 5.
Figure 7. Blended and stitched RoI A and B images using the four images in Figure 6.
Figure 8. Coordinate representation according to spatial information.
Figure 9. Splitting image I with three-dimensional points by Equations (7)–(10).
Figure 10. Splitting wall and floor images in I_n. The upper region (blue line) of the divided image is mapped to the wall, while the lower region (red line) is mapped to the blank floor.
Figure 11. An example of mapping the stitched RoI A and B images to the spatial model. The regions of the yellow and blue lines are the RoI B and RoI A images in Figure 7. The region of the red line is the same as the region of the red line in Figure 10.
Figure 12. Results of mapping the stitched image to the spatial model in the corridor.
Figure 13. Configuration of four cameras and a 360° laser scanner on the robot.
Figure 14. Robot's location and direction (A–C) in the corridor drawing.
Figure 15. Comparison of the experimental results between the existing WAVM (left) [10] and the proposed system (right) at locations (A–C) in Figure 14.
Figure 16. Result comparisons of four methods from the same viewpoint: (a) ground truth, (b) SLAM [17,18,19], (c) existing WAVM [10], (d) proposed system.
Figure 17. Resulting images in various and complex environments.
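The captions above outline the stitching pipeline (Figures 5–7): each fisheye image is undistorted, the RoIs are cropped and warped into a common plane, and the overlapping regions of adjacent RoIs are blended before stitching. As a rough illustration of the blending step only, here is a minimal NumPy sketch of linear (alpha) blending across an overlap region; the function name, the fixed horizontal layout, and the assumption that the warped images are already aligned are illustrative choices, not details taken from the paper:

```python
import numpy as np

def blend_overlap(img_a: np.ndarray, img_b: np.ndarray, overlap: int) -> np.ndarray:
    """Linearly blend two horizontally adjacent images over `overlap` columns.

    img_a's rightmost `overlap` columns and img_b's leftmost `overlap`
    columns are assumed to show the same scene content (i.e., the images
    are already warped and aligned).
    """
    h, wa, c = img_a.shape
    # Weight for img_a falls from 1 to 0 across the seam.
    alpha = np.linspace(1.0, 0.0, overlap).reshape(1, overlap, 1)
    seam = alpha * img_a[:, wa - overlap:] + (1.0 - alpha) * img_b[:, :overlap]
    return np.concatenate(
        [img_a[:, : wa - overlap], seam.astype(img_a.dtype), img_b[:, overlap:]],
        axis=1,
    )

# Toy example: two 4x6 single-channel "images" with a 2-column overlap.
a = np.full((4, 6, 1), 100, dtype=np.uint8)
b = np.full((4, 6, 1), 200, dtype=np.uint8)
out = blend_overlap(a, b, overlap=2)
print(out.shape)  # (4, 10, 1)
```

In the paper's setting the seam weights would follow the warped RoI geometry rather than a straight vertical ramp, but the principle, a per-pixel convex combination of the two source images, is the same.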
Abstract
1. Introduction
2. Proposed Method
2.1. Image Stitching
2.1.1. Undistortion
2.1.2. RoI Cropping and Warping
2.1.3. Blending
2.1.4. Acceleration for Image Stitching
2.2. Spatial Modeling
2.2.1. Gathering Spatial Information Data
2.2.2. Converting 2D Points to 3D Points
2.3. Mapping the Stitched Image to the Spatial Model
3. Experimental Results
3.1. Subjective Comparative Experiment
3.2. Objective Comparative Experiment
4. Conclusions
Author Contributions
Funding
Conflicts of Interest
References
1. Lin, W.; Hu, J.; Xu, H.; Ye, C.; Ye, X.; Li, Z. Graph-based SLAM in indoor environment using corner feature from laser sensor. In Proceedings of the IEEE Youth Academic Annual Conference of Chinese Association of Automation, Hefei, China, 19–21 May 2017; pp. 1211–1216.
2. Cheng, Y.; Bai, J.; Xiu, C. Improved RGB-D vision SLAM algorithm for mobile robot. In Proceedings of the Chinese Control and Decision Conference, Chongqing, China, 28–30 May 2017; pp. 5419–5423.
3. Yuan, W.; Li, Z.; Su, C.-Y. RGB-D Sensor-based Visual SLAM for Localization and Navigation of Indoor Mobile Robot. In Proceedings of the IEEE International Conference on Advanced Robotics and Mechatronics, Macau, China, 18–20 August 2016; pp. 82–87.
4. Davison, A.J.; Reid, I.D.; Molton, N.D.; Stasse, O. MonoSLAM: Real-time single camera SLAM. IEEE Trans. Pattern Anal. Mach. Intell. 2007, 29, 1052–1067.
5. Wang, J.; Huang, S.; Zhao, L.; Ge, J.; He, S.; Zhang, C.; Wang, X. High Quality 3D Reconstruction of Indoor Environments using RGB-D Sensors. In Proceedings of the IEEE Conference on Industrial Electronics and Applications, Siem Reap, Cambodia, 18–20 June 2017.
6. Choi, S.; Zhou, Q.-Y.; Koltun, V. Robust Reconstruction of Indoor Scenes. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Boston, MA, USA, 7–12 June 2015.
7. Chen, J.; Bautembach, D.; Izadi, S. Scalable real-time volumetric surface reconstruction. ACM Trans. Graph. 2013, 32, 113.
8. Lin, C.-C.; Wang, M.-S. Topview Transform Model for the Vehicle Parking Assistance System. In Proceedings of the 2010 International Computer Symposium, Tainan, Taiwan, 16–18 December 2010; pp. 306–311.
9. Jia, M.; Sun, Y.; Wang, J. Obstacle Detection in Stereo Bird's Eye View Images. In Proceedings of the 2014 Information Technology and Artificial Intelligence Conference, Chongqing, China, 20–21 December 2014; pp. 254–257.
10. Awashima, Y.; Komatsu, R.; Fujii, H.; Tamura, Y.; Yamashita, A.; Asama, H. Visualization of Obstacles on Bird's-eye View Using Depth Sensor for Remote Controlled Robot. In Proceedings of the 2017 International Workshop on Advanced Image Technology, Penang, Malaysia, 6–8 January 2017.
11. Szeliski, R. Computer Vision: Algorithms and Applications; Springer: Berlin/Heidelberg, Germany, 2010.
12. Shin, J.H.; Nam, D.H.; Kwon, G.J. Non-Metric Fish-Eye Lens Distortion Correction Using Ellipsoid Model; Human Computer Interaction Korea: Seoul, Korea, 2005; pp. 83–89.
13. Sung, K.; Lee, J.; An, J.; Chang, E. Development of Image Synthesis Algorithm with Multi-Camera. In Proceedings of the IEEE Vehicular Technology Conference, Yokohama, Japan, 6–9 May 2012; pp. 1–5.
14. Triggs, B.; McLauchlan, P.F.; Hartley, R.I.; Fitzgibbon, A.W. Bundle adjustment—A modern synthesis. In International Workshop on Vision Algorithms; Springer: Berlin/Heidelberg, Germany, 1999; pp. 298–375.
15. Yebes, J.J.; Alcantarilla, P.F.; Bergasa, L.M.; Gonzalez, A.; Almazan, J. Surrounding View for Enhancing Safety on Vehicles. In Proceedings of the IEEE Intelligent Vehicles Symposium Workshops, Alcalá de Henares, Madrid, Spain, 3–7 June 2012; pp. 92–95.
16. Salomon, D. Transformations and Projections in Computer Graphics; Springer: Berlin/Heidelberg, Germany, 2006.
17. Labbe, M.; Michaud, F. Online Global Loop Closure Detection for Large-Scale Multi-Session Graph-Based SLAM. In Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems, Chicago, IL, USA, 14–18 September 2014; pp. 2661–2666.
18. Labbe, M.; Michaud, F. Appearance-Based Loop Closure Detection for Online Large-Scale and Long-Term Operation. IEEE Trans. Robot. 2013, 29, 734–745.
19. Labbe, M.; Michaud, F. Memory management for real-time appearance-based loop closure detection. In Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems, San Francisco, CA, USA, 25–30 September 2011; pp. 1271–1276.
© 2019 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).
Share and Cite
Kim, S.-H.; Jung, C.; Park, J. Three-Dimensional Visualization System with Spatial Information for Navigation of Tele-Operated Robots. Sensors 2019, 19, 746. https://doi.org/10.3390/s19030746
Kim S-H, Jung C, Park J. Three-Dimensional Visualization System with Spatial Information for Navigation of Tele-Operated Robots. Sensors. 2019; 19(3):746. https://doi.org/10.3390/s19030746
Chicago/Turabian Style: Kim, Seung-Hun, Chansung Jung, and Jaeheung Park. 2019. "Three-Dimensional Visualization System with Spatial Information for Navigation of Tele-Operated Robots" Sensors 19, no. 3: 746. https://doi.org/10.3390/s19030746
APA Style: Kim, S.-H., Jung, C., & Park, J. (2019). Three-Dimensional Visualization System with Spatial Information for Navigation of Tele-Operated Robots. Sensors, 19(3), 746. https://doi.org/10.3390/s19030746