Three-Dimensional Reconstruction of Indoor and Outdoor Environments Using a Stereo Catadioptric System
Figure 1. Panoramic 3D reconstruction system formed by two catadioptric cameras (upper and lower), each composed of a parabolic mirror and a Marlin F-080C camera.
Figure 2. Calibration pattern. (a) Chessboard pattern. (b) Images of the pattern captured with the upper camera. (c) Images of the pattern captured with the lower camera.
Figure 3. Epipolar geometry between two catadioptric cameras with parabolic mirrors. R is the rotation matrix and T is the translation vector between the upper and lower mirrors (PMU→PML), the latter expressed as a skew-symmetric matrix. Π is the epipolar plane.
Figure 4. Panoramic 3D reconstruction pipeline (best seen in color): (1) capture images from the catadioptric camera system; (2) unwrap the omnidirectional images to obtain panoramic images; (3) search for feature matches; (4) filter the matches using epipolar constraints; (5) convert the features back to catadioptric image coordinates and (6) to mirror coordinates; (7) perform the 3D reconstruction.
Figure 5. Feature point and epipolar curve (best seen in color). (a) Feature point (red mark) in the upper catadioptric image. (b) Feature point (red mark) and epipolar curve in the lower catadioptric image. (c) Feature point (red mark) in the upper panoramic image. (d) Feature point (red mark) and epipolar curve in the lower panoramic image.
Figure 6. Feature matching points on the upper and lower cameras and the resulting 3D reconstruction for each compared method.
Figure 7. DeepMatching features and reconstruction. (a,b) Original DeepMatching on the upper and lower cameras, respectively. (c) 3D reconstruction using the original DeepMatching. (d,e) DeepMatching filtered with epipolar constraints (d = 30) on the upper and lower cameras. (f) 3D reconstruction using the filtered DeepMatching.
Figure 8. Number of DeepMatching features obtained at different filtering levels. Increasing the distance threshold yields more matches but also admits more error into the reconstruction.
Figure 9. Matching points on the panoramic mirrors (best seen in color). Pink points denote features on the panoramic upper mirror (PMU); blue points denote features on the panoramic lower mirror (PML). (a) Harris corners on the mirrors. (b) DeepMatching points on the mirrors.
Figure 10. 3D reconstruction using DeepMatching. (a–c) Reconstructions with features filtered at d = 5, d = 10, and d = 20, respectively.
Figure 11. DeepMatching and filtered DeepMatching for the 3D reconstruction of a known pattern.
Figure 12. Qualitative reconstruction results using filtered DeepMatching. (a) Reconstruction of a square box; (b) reconstruction of a hat; (c) reconstruction of a clay pot.
Abstract
1. Introduction
2. Catadioptric Vision System
Experimental Setup
3. Methodology
3.1. Catadioptric Camera Calibration
3.2. Epipolar Geometry for Panoramic Cameras
3.3. Stereo Reconstruction
1. Capture one image of the environment from each camera of the calibrated system.
2. Transform the catadioptric images to panoramic images using Algorithm 1.
3. Extract features and descriptors from the panoramic images using a feature point detector such as SIFT, SURF, or KAZE, a corner detector such as Harris, or a more advanced matcher such as DeepMatching [40], CPM [45], or SuperGlue [39]. Match the points between the upper- and lower-camera features as described in Section 3.3.1.
4. Filter out wrong matches using the epipolar constraints described in Section 3.2 (a minimal filtering sketch is given after this list).
5. Map the matching point coordinates from the panoramic images back to catadioptric image coordinates using Algorithm 2.
6. Transform the catadioptric image points to the corresponding points on the mirrors PMU and PML.
7. Obtain the 3D reconstruction by triangulating the mirror points using Equation (15), described in Section 3.3.2.
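To make step 4 concrete, the sketch below filters matches with the epipolar constraint between two parabolic catadioptric cameras: matched points are lifted onto their parabolic mirrors, and a match is kept only when the lifted points nearly satisfy X_l^T E X_u = 0 with E = [T]ₓR. This is a minimal sketch under stated assumptions, not the paper's implementation: the mirror parameterization z = (x² + y² − h²)/(2h), the function names, and the residual tolerance are choices made here (the paper instead thresholds the pixel distance d to the epipolar curve).

```python
import numpy as np

def lift_to_parabolic_mirror(pts, cx, cy, h):
    """Lift catadioptric image points onto a parabolic mirror. Uses
    z = (x^2 + y^2 - h^2) / (2h), a common parameterization with the
    mirror focus at the origin (an assumption of this sketch)."""
    x = pts[:, 0] - cx
    y = pts[:, 1] - cy
    z = (x**2 + y**2 - h**2) / (2.0 * h)
    return np.stack([x, y, z], axis=1)

def epipolar_filter(X_u, X_l, R, T, tol=1e-2):
    """Keep matches whose lifted mirror points satisfy the epipolar
    constraint X_l^T E X_u ~ 0, where E = [T]_x R is built from the
    calibrated rotation R and translation T (PMU -> PML)."""
    Tx = np.array([[0.0, -T[2], T[1]],
                   [T[2], 0.0, -T[0]],
                   [-T[1], T[0], 0.0]])          # skew-symmetric matrix of T
    E = Tx @ R
    # Normalize so the residual is comparable across points.
    X_u = X_u / np.linalg.norm(X_u, axis=1, keepdims=True)
    X_l = X_l / np.linalg.norm(X_l, axis=1, keepdims=True)
    residual = np.abs(np.einsum('ij,jk,ik->i', X_l, E, X_u))
    return residual < tol                        # boolean inlier mask
```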
Algorithm 1: Transforming catadioptric images to panoramic images.
Input: img_cat (the catadioptric image) and its optical center.
Output: img_pan (the panoramic image), together with lookup tables for the X- and Y-coordinates of the mapping.
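Algorithm 1 amounts to a polar-to-Cartesian resampling about the optical center. The following Python sketch is one way to realize it; the OpenCV remap call, the annulus bounds r_min/r_max, and the output width are assumptions of this sketch, not parameters taken from the paper.

```python
import numpy as np
import cv2  # OpenCV (assumed available)

def catadioptric_to_panoramic(img_cat, cx, cy, r_min, r_max, width=1024):
    """Unwrap a catadioptric image into a panoramic strip by sampling it
    along radial lines centered on the optical center (cx, cy).
    Returns the panorama and the lookup tables used for the mapping."""
    theta = np.linspace(0.0, 2.0 * np.pi, width, endpoint=False)  # one column per angle
    radius = np.arange(r_min, r_max)                              # one row per radius
    # Lookup tables: panoramic pixel (row, col) -> catadioptric (x, y).
    map_x = (cx + np.outer(radius, np.cos(theta))).astype(np.float32)
    map_y = (cy + np.outer(radius, np.sin(theta))).astype(np.float32)
    img_pan = cv2.remap(img_cat, map_x, map_y, interpolation=cv2.INTER_LINEAR)
    return img_pan, map_x, map_y
```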
Algorithm 2: Mapping between catadioptric and panoramic images.
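Because the unwrapping in Algorithm 1 is table-driven, mapping a panoramic feature back to catadioptric coordinates (step 5 of the procedure) reduces to a table lookup. A minimal sketch, assuming the map_x/map_y tables produced by the sketch above:

```python
import numpy as np

def panoramic_to_catadioptric(pts_pan, map_x, map_y):
    """Map (u, v) pixel coordinates in the panoramic image back to the
    catadioptric image using the lookup tables built during unwrapping.
    u indexes the angle (column), v the radius (row)."""
    pts = np.rint(np.asarray(pts_pan)).astype(int)
    u, v = pts[:, 0], pts[:, 1]
    return np.stack([map_x[v, u], map_y[v, u]], axis=1)
```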
3.3.1. Feature Detection and Matching
3.3.2. 3D Reconstruction
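This section triangulates the matched mirror points with Equation (15), which is not reproduced on this page. Purely as an illustrative stand-in, the sketch below applies the classical midpoint method: each matched mirror point defines a ray from its mirror focus, and the reconstructed point is the midpoint of the shortest segment between the two rays, using the calibrated (R, T) between the upper and lower views. The function name and frame conventions are assumptions of this sketch.

```python
import numpy as np

def triangulate_midpoint(x_u, x_l, R, T):
    """Midpoint triangulation of one match. x_u and x_l are ray directions
    leaving the upper and lower mirror foci; (R, T) maps upper-frame points
    to the lower frame (X_l = R @ X_u + T). Returns the 3D point in the
    upper frame."""
    d1 = x_u / np.linalg.norm(x_u)           # upper ray (upper focus at origin)
    d2 = R.T @ (x_l / np.linalg.norm(x_l))   # lower ray, rotated into the upper frame
    c2 = -R.T @ T                            # lower focus expressed in the upper frame
    # Find scalars s, t minimizing |s*d1 - (c2 + t*d2)|: closest approach of the rays.
    A = np.stack([d1, -d2], axis=1)          # 3x2 least-squares system
    s, t = np.linalg.lstsq(A, c2, rcond=None)[0]
    return 0.5 * (s * d1 + c2 + t * d2)      # midpoint between the two closest points
```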
4. Results
4.1. Calibration Results
4.2. Epipolar Geometry Results
4.3. Feature Matching Results
4.4. 3D Reconstruction Results
5. Conclusions
Author Contributions
Funding
Acknowledgments
Conflicts of Interest
References
- Jiang, W.; Okutomi, M.; Sugimoto, S. Panoramic 3D reconstruction using rotational stereo camera with simple epipolar constraints. In Proceedings of the 2006 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR’06), New York, NY, USA, 17–22 June 2006; Volume 1, pp. 371–378.
- Deng, X.; Wu, F.; Wu, Y.; Wan, C. Automatic spherical panorama generation with two fisheye images. In Proceedings of the 2008 7th World Congress on Intelligent Control and Automation, Chongqing, China, 25–27 June 2008; pp. 5955–5959.
- Sagawa, R.; Kurita, N.; Echigo, T.; Yagi, Y. Compound catadioptric stereo sensor for omnidirectional object detection. In Proceedings of the 2004 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Sendai, Japan, 28 September–2 October 2004; Volume 3, pp. 2612–2617.
- Chen, S.; Xiang, Z.; Zou, N.; Chen, Y.; Qiao, C. Multi-stereo 3D reconstruction with a single-camera multi-mirror catadioptric system. Meas. Sci. Technol. 2019, 31, 015102.
- Fiala, M.; Basu, A. Panoramic stereo reconstruction using non-SVP optics. Comput. Vis. Image Underst. 2005, 98, 363–397.
- Jaramillo, C.; Valenti, R.G.; Xiao, J. GUMS: A generalized unified model for stereo omnidirectional vision (demonstrated via a folded catadioptric system). In Proceedings of the 2016 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Daejeon, Korea, 9–14 October 2016; pp. 2528–2533.
- Lin, S.S.; Bajcsy, R. High resolution catadioptric omni-directional stereo sensor for robot vision. In Proceedings of the 2003 IEEE International Conference on Robotics and Automation, Taipei, Taiwan, 14–19 September 2003; Volume 2, pp. 1694–1699.
- Ragot, N.; Ertaud, J.; Savatier, X.; Mazari, B. Calibration of a panoramic stereovision sensor: Analytical vs. interpolation-based methods. In Proceedings of the IECON 2006—32nd Annual Conference on IEEE Industrial Electronics, Paris, France, 6–10 November 2006; pp. 4130–4135.
- Cabral, E.L.; De Souza, J.; Hunold, M.C. Omnidirectional stereo vision with a hyperbolic double lobed mirror. In Proceedings of the 17th International Conference on Pattern Recognition, Cambridge, UK, 26 August 2004; Volume 1, pp. 1–9.
- Jang, G.; Kim, S.; Kweon, I. Single-camera panoramic stereo system with single-viewpoint optics. Opt. Lett. 2006, 31, 41–43.
- Su, L.; Luo, C.; Zhu, F. Obtaining obstacle information by an omnidirectional stereo vision system. In Proceedings of the 2006 IEEE International Conference on Information Acquisition, Weihai, China, 20–23 August 2006; pp. 48–52.
- Caron, G.; Marchand, E.; Mouaddib, E.M. 3D model based pose estimation for omnidirectional stereovision. In Proceedings of the 2009 IEEE/RSJ International Conference on Intelligent Robots and Systems, St. Louis, MO, USA, 10–15 October 2009; pp. 5228–5233.
- Yi, S.; Ahuja, N. An omnidirectional stereo vision system using a single camera. In Proceedings of the 18th International Conference on Pattern Recognition (ICPR’06), Hong Kong, China, 20–24 August 2006; Volume 4, pp. 861–865.
- Li, W.; Li, Y.F. Single-camera panoramic stereo imaging system with a fisheye lens and a convex mirror. Opt. Express 2011, 19, 5855–5867.
- Xu, J.; Wang, P.; Yao, Y.; Liu, S.; Zhang, G. 3D multi-directional sensor with pyramid mirror and structured light. Opt. Lasers Eng. 2017, 93, 156–163.
- Tan, K.H.; Hua, H.; Ahuja, N. Multiview panoramic cameras using mirror pyramids. IEEE Trans. Pattern Anal. Mach. Intell. 2004, 26, 941–946.
- Schönbein, M.; Kitt, B.; Lauer, M. Environmental perception for intelligent vehicles using catadioptric stereo vision systems. In Proceedings of the 5th European Conference on Mobile Robots (ECMR), Berlin, Germany, 1 January 2018; pp. 189–194.
- Ehlgen, T.; Pajdla, T.; Ammon, D. Eliminating blind spots for assisted driving. IEEE Trans. Intell. Transp. Syst. 2008, 9, 657–665.
- Xu, J.; Gao, B.; Liu, C.; Wang, P.; Gao, S. An omnidirectional 3D sensor with line laser scanning. Opt. Lasers Eng. 2016, 84, 96–104.
- Lauer, M.; Schönbein, M.; Lange, S.; Welker, S. 3D-object tracking with a mixed omnidirectional stereo camera system. Mechatronics 2011, 21, 390–398.
- Jaramillo, C.; Valenti, R.; Guo, L.; Xiao, J. Design and analysis of a single-camera omnistereo sensor for quadrotor Micro Aerial Vehicles (MAVs). Sensors 2016, 16, 217.
- Jaramillo, C.; Yang, L.; Muñoz, J.P.; Taguchi, Y.; Xiao, J. Visual odometry with a single-camera stereo omnidirectional system. Mach. Vis. Appl. 2019, 30, 1145–1155.
- Almaraz-Cabral, C.C.; Gonzalez-Barbosa, J.J.; Villa, J.; Hurtado-Ramos, J.B.; Ornelas-Rodriguez, F.J.; Córdova-Esparza, D.M. Fringe projection profilometry for panoramic 3D reconstruction. Opt. Lasers Eng. 2016, 78, 106–112.
- Flores, V.; Casaletto, L.; Genovese, K.; Martinez, A.; Montes, A.; Rayas, J. A panoramic fringe projection system. Opt. Lasers Eng. 2014, 58, 80–84.
- Kerkaou, Z.; Alioua, N.; El Ansari, M.; Masmoudi, L. A new dense omnidirectional stereo matching approach. In Proceedings of the 2018 International Conference on Intelligent Systems and Computer Vision (ISCV), Fez, Morocco, 2–4 April 2018; pp. 1–8.
- Ma, C.; Shi, L.; Huang, H.; Yan, M. 3D reconstruction from full-view fisheye camera. arXiv 2015, arXiv:1506.06273.
- Song, M.; Watanabe, H.; Hara, J. Robust 3D reconstruction with omni-directional camera based on structure from motion. In Proceedings of the 2018 International Workshop on Advanced Image Technology (IWAIT), Chiang Mai, Thailand, 7–9 January 2018; pp. 1–4.
- Corke, P. Robotics, Vision and Control: Fundamental Algorithms in MATLAB®, 2nd completely revised ed.; Springer: New York, NY, USA, 2017; Volume 118.
- Boutteau, R.; Savatier, X.; Ertaud, J.Y.; Mazari, B. An omnidirectional stereoscopic system for mobile robot navigation. In Proceedings of the 2008 International Workshop on Robotic and Sensors Environments, Ottawa, ON, Canada, 17–18 October 2008; pp. 138–143.
- Zhou, F.; Chai, X.; Chen, X.; Song, Y. Omnidirectional stereo vision sensor based on single camera and catoptric system. Appl. Opt. 2016, 55, 6813–6820.
- Jang, G.; Kim, S.; Kweon, I. Single camera catadioptric stereo system. In Proceedings of the 6th Workshop on Omnidirectional Vision, Camera Networks and Non-Classical Cameras, Beijing, China, 1 January 2006.
- Ragot, N.; Rossi, R.; Savatier, X.; Ertaud, J.; Mazari, B. 3D volumetric reconstruction with a catadioptric stereovision sensor. In Proceedings of the 2008 IEEE International Symposium on Industrial Electronics, Cambridge, UK, 30 June–2 July 2008; pp. 1306–1311.
- Ricci, E.; Ouyang, W.; Wang, X.; Sebe, N. Monocular depth estimation using multi-scale continuous CRFs as sequential deep networks. IEEE Trans. Pattern Anal. Mach. Intell. 2018, 41, 1426–1440.
- Xu, D.; Ouyang, W.; Wang, X.; Sebe, N. PAD-Net: Multi-tasks guided prediction-and-distillation network for simultaneous depth estimation and scene parsing. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–22 June 2018; pp. 675–684.
- Fu, H.; Gong, M.; Wang, C.; Batmanghelich, K.; Tao, D. Deep ordinal regression network for monocular depth estimation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–22 June 2018; pp. 2002–2011.
- Won, C.; Ryu, J.; Lim, J. OmniMVS: End-to-end learning for omnidirectional stereo matching. In Proceedings of the IEEE International Conference on Computer Vision, Seoul, Korea, 27 October–3 November 2019; pp. 8987–8996.
- Ma, W.C.; Wang, S.; Hu, R.; Xiong, Y.; Urtasun, R. Deep rigid instance scene flow. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Long Beach, CA, USA, 15–21 June 2019; pp. 3614–3622.
- Dosovitskiy, A.; Fischer, P.; Ilg, E.; Hausser, P.; Hazirbas, C.; Golkov, V.; Van Der Smagt, P.; Cremers, D.; Brox, T. FlowNet: Learning optical flow with convolutional networks. In Proceedings of the IEEE International Conference on Computer Vision, Santiago, Chile, 7–13 December 2015; pp. 2758–2766.
- Sarlin, P.E.; DeTone, D.; Malisiewicz, T.; Rabinovich, A. SuperGlue: Learning feature matching with graph neural networks. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Seattle, WA, USA, 16–18 June 2020.
- Revaud, J.; Weinzaepfel, P.; Harchaoui, Z.; Schmid, C. DeepMatching: Hierarchical deformable dense matching. Int. J. Comput. Vis. 2016, 120, 300–323.
- Córdova-Esparza, D.M.; Gonzalez-Barbosa, J.J.; Hurtado-Ramos, J.B.; Ornelas-Rodriguez, F.J. A panoramic 3D reconstruction system based on the projection of patterns. Int. J. Adv. Robot. Syst. 2014, 11, 55.
- Scaramuzza, D.; Martinelli, A.; Siegwart, R. A toolbox for easily calibrating omnidirectional cameras. In Proceedings of the 2006 IEEE/RSJ International Conference on Intelligent Robots and Systems, Beijing, China, 9–15 October 2006; pp. 5695–5701.
- Gonzalez-Barbosa, J.J.; Lacroix, S. Fast dense panoramic stereovision. In Proceedings of the 2005 IEEE International Conference on Robotics and Automation, Barcelona, Spain, 18–22 April 2005; pp. 1210–1215.
- Svoboda, T.; Pajdla, T. Epipolar geometry for central catadioptric cameras. Int. J. Comput. Vis. 2002, 49, 23–37.
- Hu, Y.; Song, R.; Li, Y. Efficient coarse-to-fine PatchMatch for large displacement optical flow. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 27–30 June 2016; pp. 5704–5712.
- Harris, C.G.; Stephens, M. A combined corner and edge detector. In Proceedings of the Fourth Alvey Vision Conference, Manchester, UK, 31 August–2 September 1988; pp. 147–151.
- Shi, J. Good features to track. In Proceedings of the 1994 IEEE Conference on Computer Vision and Pattern Recognition, Seattle, WA, USA, 21–23 June 1994; pp. 593–600.
- Lowe, D.G. Distinctive image features from scale-invariant keypoints. Int. J. Comput. Vis. 2004, 60, 91–110.
- Bay, H.; Tuytelaars, T.; Van Gool, L. SURF: Speeded up robust features. In European Conference on Computer Vision; Springer: New York, NY, USA, 2006; pp. 404–417.
- Leutenegger, S.; Chli, M.; Siegwart, R. BRISK: Binary robust invariant scalable keypoints. In Proceedings of the 2011 IEEE International Conference on Computer Vision (ICCV), Barcelona, Spain, 6–13 November 2011; pp. 2548–2555.
- Rosten, E.; Drummond, T. Machine learning for high-speed corner detection. In European Conference on Computer Vision; Springer: New York, NY, USA, 2006; pp. 430–443.
- Alcantarilla, P.F.; Bartoli, A.; Davison, A.J. KAZE features. In European Conference on Computer Vision; Springer: New York, NY, USA, 2012; pp. 214–227.
- Schönbein, M. Omnidirectional Stereo Vision for Autonomous Vehicles; KIT Scientific Publishing: Karlsruhe, Germany, 2015; Volume 32.
- Corke, P. The Machine Vision Toolbox: A MATLAB toolbox for vision and vision-based control. IEEE Robot. Autom. Mag. 2005, 12, 16–25.
- MagicLeap. SuperGlue Inference and Evaluation Demo Script. Available online: https://github.com/magicleap/SuperGluePretrainedNetwork (accessed on 9 July 2020).
- Revaud, J. DeepMatching: Deep Convolutional Matching. Available online: https://thoth.inrialpes.fr/src/deepmatching/ (accessed on 12 April 2020).
| Camera | a₀ | a₁ | a₂ | a₃ | a₄ | Optical Center x | Optical Center y |
|---|---|---|---|---|---|---|---|
| Upper camera | −347.7309 | 0 | 0.0056 | 0 | 0 | 696.3600 | 474.1928 |
| Lower camera | −162.3562 | 0 | 0.0023 | 0 | 0 | 698.3097 | 524.4199 |
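The coefficient layout above matches the polynomial camera model of the omnidirectional calibration toolbox cited in the references (Scaramuzza et al.). Assuming that model, a pixel back-projects to a viewing ray as in the sketch below; the coefficient ordering a₀–a₄ and the (x, y) convention for the optical center are assumptions made here.

```python
import numpy as np

# Values from the calibration table above (upper camera).
COEFFS = [-347.7309, 0.0, 0.0056, 0.0, 0.0]   # assumed ordering a0..a4
CENTER = (696.3600, 474.1928)                  # assumed (x, y) optical center

def pixel_to_ray(u, v, coeffs=COEFFS, center=CENTER):
    """Back-project pixel (u, v) to a unit viewing ray under the
    polynomial model: ray = (x, y, f(rho)) with rho = |(x, y)| and
    f(rho) = a0 + a1*rho + a2*rho^2 + a3*rho^3 + a4*rho^4."""
    x, y = u - center[0], v - center[1]
    rho = np.hypot(x, y)
    z = np.polyval(coeffs[::-1], rho)  # polyval expects highest degree first
    ray = np.array([x, y, z])
    return ray / np.linalg.norm(ray)
```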
| Camera | Error (pixels) |
|---|---|
| Upper camera | 0.86 |
| Lower camera | 0.52 |
| Method | Number of Matches | Time (ms) CPU/GPU |
|---|---|---|
| DeepMatching | 1114 | 3400/210 |
| Harris corner detector | 845 | 100 |
| KAZE | 463 | 125 |
| SuperGlue | 200 | 900/70 |
| SURF | 105 | 150 |
| SIFT | 76 | 350 |
| MIN-EIGEN | 62 | 120 |
| BRISK | 52 | 110 |
| FAST | 36 | 80 |
| Pattern Position | Mean Error (mm) | Standard Deviation (mm) |
|---|---|---|
| Pattern right | 20.59 | 2.23 |
| Pattern center | 8.74 | 4.89 |
| Pattern left | 9.57 | 5.56 |
© 2020 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).
Córdova-Esparza, D.-M.; Terven, J.; Romero-González, J.-A.; Ramírez-Pedraza, A. Three-Dimensional Reconstruction of Indoor and Outdoor Environments Using a Stereo Catadioptric System. Appl. Sci. 2020, 10, 8851. https://doi.org/10.3390/app10248851