Landing System Development Based on Inverse Homography Range Camera Fusion (IHRCF)
Figure 1. Overview of the IHRCF algorithm and evaluation process.
Figure 2. Drone downward-facing camera view of the landing platform: u and v pixel coordinates.
Figure 3. Cartesian coordinates of the landing platform and the designated drone.
Figure 4. Camera–range calibration procedures at each stage.
Figure 5. Centralization of the range sensors' touch points with their corresponding color pads.
Figure 6. Color segmentation result: (a) color image; (b) masked color pads' image.
Figure 7. Range sensor positions and angles with respect to the camera.
Figure 8. Touch points of the range sensors (blue), AprilTag center (cyan), AprilTag corners (cyan), and the sensor unit's principal point (yellow).
Figure 9. Transformation from world-corresponding points to sensor touch points in the image plane.
Figure 10. Relation, by homography, between the image plane (red) and the world plane (light green).
Figure 11. Estimation of the real-world 3D points in the camera frame.
Figure 12. Experimental bench for assessment of the technique, along with the coordinate system used.
Figure 13. Software-in-the-Loop (SIL) data transmission and connection protocol.
Figure 14. Euler angles (roll, pitch, and yaw) of the Stewart platform using IHRCF, GT, and ATDA.
Figure 15. Translational results in the X, Y, and Z directions for the Stewart platform using IHRCF, GT, and ATDA.
Figure 16. Plots of the 3D trajectories of the Stewart platform using IHRCF-based and ATDA-based pose estimation and the GT, with the X-Z view, Y-Z view, X-Y view, and 3D trajectory displayed in (a–d), respectively.
Figure 17. IHRCF-based Euler angle estimation error for the Stewart platform about the X (blue), Y (red), and Z (yellow) axes.
Figure 18. IHRCF translational pose estimation errors for the Stewart platform along the X (blue), Y (red), and Z (yellow) axes.
Figure 19. ATDA-based translational estimation error for the Stewart platform along the X (blue), Y (red), and Z (yellow) axes.
Figure 20. ATDA-based angular estimation error for the Stewart platform about the X (blue), Y (red), and Z (yellow) axes.
Abstract
1. Introduction
1.1. Problem Statement
1.2. Literature Review
2. Proposed Inverse Homography Range Camera Fusion (IHRCF) Methodology
2.1. Camera Calibration
- Obtain chessboard images at different rotations and translations in the camera frame.
- Convert the acquired chessboard images to grayscale.
- Apply the corner detection algorithm; and
- Estimate the camera intrinsic parameters and lens distortion coefficients from the detected corners (see the sketch after this list).
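For reference, the following is a minimal sketch of how these calibration steps could be implemented with OpenCV. The chessboard dimensions, square size, and file-name pattern are illustrative assumptions, not values from the experimental setup.

```python
# Hedged sketch of the chessboard calibration steps listed above (OpenCV).
import glob
import cv2
import numpy as np

BOARD_SIZE = (9, 6)      # inner corners per row/column (assumption)
SQUARE_SIZE = 0.025      # chessboard square size in metres (assumption)

# 3D corner positions of the board in its own plane (Z = 0)
obj_grid = np.zeros((BOARD_SIZE[0] * BOARD_SIZE[1], 3), np.float32)
obj_grid[:, :2] = np.mgrid[0:BOARD_SIZE[0], 0:BOARD_SIZE[1]].T.reshape(-1, 2) * SQUARE_SIZE

obj_points, img_points, image_size = [], [], None
for path in glob.glob("chessboard_*.png"):                       # images at different poses
    gray = cv2.cvtColor(cv2.imread(path), cv2.COLOR_BGR2GRAY)    # grayscale step
    image_size = gray.shape[::-1]
    found, corners = cv2.findChessboardCorners(gray, BOARD_SIZE)  # corner detection step
    if found:
        corners = cv2.cornerSubPix(
            gray, corners, (11, 11), (-1, -1),
            (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 1e-3))
        obj_points.append(obj_grid)
        img_points.append(corners)

# Estimate the intrinsic matrix K and lens distortion coefficients
rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(
    obj_points, img_points, image_size, None, None)
print("RMS reprojection error:", rms)
```

The intrinsic matrix and distortion coefficients obtained here are what the subsequent homography and range-fusion steps would rely on.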
2.2. Camera Range Sensor Calibration
2.3. Image Range Acquisition
2.4. Calculate the Homography between the Pixels and the World Coordinates
2.5. Mapping from AprilTag Pixels to World Coordinates by Inverse Homography
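Sections 2.4 and 2.5 cover estimating the planar homography between world coordinates and pixels and then inverting it to map detected AprilTag pixels back onto the world plane. The sketch below illustrates this idea with OpenCV; the four point correspondences and the tag detections are hypothetical values, not data from the experiment.

```python
# Minimal sketch of homography estimation and inverse mapping (OpenCV).
import cv2
import numpy as np

# Planar world coordinates (metres) of four reference points and the pixel
# coordinates where they are observed (hypothetical values).
world_pts = np.array([[0.0, 0.0], [0.3, 0.0], [0.3, 0.3], [0.0, 0.3]], np.float32)
pixel_pts = np.array([[210, 118], [432, 121], [428, 341], [206, 338]], np.float32)

# Homography mapping world-plane points to image pixels (Section 2.4)
H, _ = cv2.findHomography(world_pts, pixel_pts)

# Inverse homography maps detected AprilTag pixels back onto the world plane (Section 2.5)
tag_pixels = np.array([[[318.0, 230.0], [330.0, 245.0]]], np.float32)  # hypothetical detections
tag_world = cv2.perspectiveTransform(tag_pixels, np.linalg.inv(H))
print(tag_world)
```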
2.6. Calculation of Points’ Altitude in the Camera Frame
2.7. Estimate Rigid Body Transformation
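A common way to estimate the rigid-body transformation between two sets of corresponding 3D points is the SVD-based (Kabsch/Procrustes) solution sketched below; this is a generic formulation, and the paper's exact derivation may differ.

```python
# Hedged sketch: SVD-based rigid-body transform between corresponding 3D point sets.
import numpy as np

def estimate_rigid_transform(src: np.ndarray, dst: np.ndarray):
    """Return rotation R and translation t such that dst ≈ R @ src_i + t.
    src, dst: (N, 3) arrays of corresponding 3D points."""
    src_c, dst_c = src.mean(axis=0), dst.mean(axis=0)
    H = (src - src_c).T @ (dst - dst_c)          # cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                     # correct a possible reflection
        Vt[-1, :] *= -1
        R = Vt.T @ U.T
    t = dst_c - R @ src_c
    return R, t
```

In this context, src and dst would be the same physical points expressed in the platform and camera frames, so R and t describe the relative pose of the landing platform with respect to the camera.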
2.8. Transformation of the Coordinates
3. Experimental Design
3.1. Test Platform
3.2. Software Implementation
4. Results and Discussion
5. Conclusions and Future Work
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Acknowledgments
Conflicts of Interest
References
Color | Channel Y | Channel Cb | Channel Cr
---|---|---|---
Blue | 0 ≤ Y ≤ 165 | 139 ≤ Cb ≤ 255 | 0 ≤ Cr ≤ 255
Green | 0 ≤ Y ≤ 255 | 0 ≤ Cb ≤ 155 | 0 ≤ Cr ≤ 90
Yellow | 103 ≤ Y ≤ 255 | 0 ≤ Cb ≤ 95 | 0 ≤ Cr ≤ 255
Red | 0 ≤ Y ≤ 160 | 0 ≤ Cb ≤ 255 | 167 ≤ Cr ≤ 255
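For illustration, the blue-pad thresholds from the table above could be applied as follows with OpenCV; the input image path is hypothetical, and OpenCV's Y, Cr, Cb channel ordering is accounted for.

```python
# Sketch of color-pad segmentation using the YCbCr thresholds above (blue pad only).
import cv2
import numpy as np

bgr = cv2.imread("landing_pad.png")              # hypothetical input frame
ycrcb = cv2.cvtColor(bgr, cv2.COLOR_BGR2YCrCb)   # OpenCV orders channels Y, Cr, Cb
Y, Cr, Cb = cv2.split(ycrcb)

# Blue pad: 0 <= Y <= 165, 139 <= Cb <= 255, 0 <= Cr <= 255 (from the table)
blue_mask = ((Y <= 165) & (Cb >= 139)).astype(np.uint8) * 255

# Keep only the largest connected blob as the pad region
n, labels, stats, _ = cv2.connectedComponentsWithStats(blue_mask)
if n > 1:
    largest = 1 + np.argmax(stats[1:, cv2.CC_STAT_AREA])
    blue_mask = np.where(labels == largest, 255, 0).astype(np.uint8)
```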
Absolute Error | ATDA (X) | ATDA (Y) | ATDA (Z) | IHRCF (X) | IHRCF (Y) | IHRCF (Z)
---|---|---|---|---|---|---
Translations (m) | 0.0162 | 0.0134 | 0.0697 | 0.0035 | 0.0039 | 0.0041
Angles (degree) | 2.9843 | 1.657 | 1.7743 | 0.98 | 1.3731 | 1.180
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
© 2022 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Share and Cite
Sefidgar, M.; Landry, R., Jr. Landing System Development Based on Inverse Homography Range Camera Fusion (IHRCF). Sensors 2022, 22, 1870. https://doi.org/10.3390/s22051870