A Generalized Voronoi Diagram-Based Segment-Point Cyclic Line Segment Matching Method for Stereo Satellite Images
<p><b>Figure 1.</b> Feature point matching and LSM. Matched points are derived from the affine scale-invariant feature transform (ASIFT) algorithm, and matched line segments are produced by the proposed LSM method [<a href="#B7-remotesensing-16-04395" class="html-bibr">7</a>]. The matched feature points are connected by lines; points and line segments are marked in green in the left image and in yellow in the right image.</p>
<p><b>Figure 2.</b> Outline of the proposed LSM method.</p>
<p><b>Figure 3.</b> An illustration of PHOW feature extraction.</p>
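PHOW features are dense SIFT descriptors computed on a regular grid at several patch scales. A minimal sketch of the dense sampling step (the grid step and scale values below are illustrative, not the paper's settings):

```python
def dense_keypoints(width, height, step=8, scales=(4, 6, 8, 10)):
    """Generate (x, y, scale) sampling locations on a regular grid,
    one grid per patch scale, as in dense PHOW-style extraction.
    Margins keep every patch fully inside the image."""
    points = []
    for s in scales:
        for y in range(s, height - s + 1, step):
            for x in range(s, width - s + 1, step):
                points.append((x, y, s))
    return points

# A 64x64 image, a single scale, and a coarse grid.
pts = dense_keypoints(64, 64, step=16, scales=(8,))
```

A local descriptor (e.g., SIFT) would then be computed at each sampled location and scale.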
<p><b>Figure 4.</b> Line segment retrieval based on epipolar constraints. Red line segments indicate the target line segments in the left view. Green line segments represent the candidate line segments, and yellow line segments represent the other line segments in the right view. Purple dashed lines indicate the epipolar lines.</p>
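After stereo rectification, epipolar lines are horizontal, so a candidate line segment in the right view must overlap the target segment's row interval. A minimal sketch of this candidate retrieval, assuming rectified images and a hypothetical pixel tolerance:

```python
def epipolar_candidates(target, segments, tol=3.0):
    """Keep right-view segments whose y-extent overlaps the target's
    y-extent (an epipolar band), assuming rectified stereo images.
    Segments are endpoint pairs: ((x1, y1), (x2, y2))."""
    (_, y1), (_, y2) = target
    lo, hi = min(y1, y2) - tol, max(y1, y2) + tol
    return [seg for seg in segments
            if min(seg[0][1], seg[1][1]) <= hi
            and max(seg[0][1], seg[1][1]) >= lo]

target = ((0.0, 10.0), (5.0, 20.0))
segments = [((0.0, 12.0), (4.0, 18.0)),   # overlaps the band -> candidate
            ((0.0, 40.0), (3.0, 50.0))]   # far outside -> rejected
cands = epipolar_candidates(target, segments)
```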
<p><b>Figure 5.</b> Schematic illustration of line segment matching based on the soft voting classifier. In the line graph of the right subfigure, the red curve shows that CAR takes values in [0, 1] as the line segment intersection angle θ ranges over [0, π/6].</p>
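The soft voting classifier combines the LPMR and CAR scores through classifier weights, and the CAR score maps the intersection angle θ ∈ [0, π/6] into [0, 1]. A minimal sketch, assuming a linear angle-to-score mapping (the paper's exact CAR curve may differ) and the default weights ω₁ = ω₂ = 0.5:

```python
import math

def car_score(theta):
    """Map an intersection angle theta in [0, pi/6] to [0, 1].
    Linear decay is an assumption made for illustration."""
    t = min(max(theta, 0.0), math.pi / 6)
    return 1.0 - t / (math.pi / 6)

def soft_vote(lpmr, car, w1=0.5, w2=0.5):
    """Weighted soft-voting combination of the two classifier scores."""
    return w1 * lpmr + w2 * car

combined = soft_vote(0.9, car_score(0.0))  # perfectly aligned segments
```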
<p><b>Figure 6.</b> Voronoi diagrams. (<b>a</b>) Left view, (<b>b</b>) right view. First row: ASIFT matched points; feature points are represented by red dots, and corresponding locations carry the same numerical labels. Second row: Voronoi diagram plotted on the satellite images, where red dots indicate anchor points from ASIFT and blue edges denote Voronoi edges. Third row: Delaunay triangulation with the Voronoi diagram, where red dots represent anchor points, red lines indicate Voronoi edges, and blue lines denote Delaunay edges. The Voronoi diagram differences between the left and right images are highlighted with green boxes.</p>
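By definition, the Voronoi cell of an ASIFT anchor point contains every image location closer to that anchor than to any other, which is what allows a line segment to be associated with a local region. A minimal nearest-anchor lookup sketch (the full diagram construction, e.g. via Delaunay duality, is not shown here):

```python
def voronoi_cell(point, anchors):
    """Index of the anchor whose Voronoi cell contains `point`,
    i.e. the nearest anchor under squared Euclidean distance."""
    px, py = point
    best, best_d2 = -1, float("inf")
    for i, (ax, ay) in enumerate(anchors):
        d2 = (px - ax) ** 2 + (py - ay) ** 2
        if d2 < best_d2:
            best, best_d2 = i, d2
    return best

anchors = [(0.0, 0.0), (10.0, 0.0), (5.0, 8.0)]
```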
<p><b>Figure 7.</b> Local point-line homography geometry. In the figure, <span class="html-italic">C</span> and <span class="html-italic">C′</span> denote the camera centers, and <span class="html-italic">p</span> and <span class="html-italic">p′</span> denote a pair of ASIFT matched points. The red line segment is the target line segment in the left view; the green and yellow line segments denote its corresponding and candidate line segments in the right view. Blue solid lines denote Voronoi edges, and blue dots denote Voronoi seed points.</p>
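A local homography H transfers a left-view point p to the right view as p′ ≃ Hp in homogeneous coordinates (and a line l as l′ ≃ H⁻ᵀl). A minimal sketch of the point transfer; the translation-only H below is purely illustrative, not an estimated model:

```python
def apply_homography(H, p):
    """Map a 2D point through a 3x3 homography using homogeneous
    coordinates, then dehomogenize."""
    x, y = p
    v = (x, y, 1.0)
    w = [sum(H[i][j] * v[j] for j in range(3)) for i in range(3)]
    return (w[0] / w[2], w[1] / w[2])

# A pure translation by (2, 3), written as a homography.
H = [[1.0, 0.0, 2.0],
     [0.0, 1.0, 3.0],
     [0.0, 0.0, 1.0]]
p_right = apply_homography(H, (1.0, 1.0))
```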
<p><b>Figure 8.</b> The 14 pairs of test images from the benchmark dataset. Numbers 1–14 denote the different image pairs.</p>
<p><b>Figure 9.</b> The 18 pairs of test areas from the WV-2 dataset. Numbers 1–18 denote the different image pairs.</p>
<p><b>Figure 10.</b> The 15 pairs of test areas from the WV-3 dataset. Numbers 1–15 denote the different image pairs.</p>
<p><b>Figure 11.</b> Precision, Recall, F1-score, and total number of matched line segments for the proposed method with CAR weights of [0.1, 0.2, …, 0.9].</p>
<p><b>Figure 12.</b> LSM results of the proposed method under different CAR classifier weights. (<b>a</b>–<b>d</b>) Results for ω₂ values of 0.1, 0.3, 0.5, and 0.7, respectively (only the left images are shown). Correctly matched line segments are highlighted in green; incorrect matches are highlighted in red.</p>
<p><b>Figure 13.</b> Statistics of the LILH, SLEM, and proposed methods on the 14 test images of the benchmark dataset. (<b>a</b>) Precision, (<b>b</b>) Recall, (<b>c</b>) F1-score.</p>
<p><b>Figure 14.</b> Matching results of the proposed method on the benchmark dataset. (<b>a</b>–<b>f</b>) Left images of representative matching results. Mismatches are marked in red; correct matches are marked in green. Precision and Recall values are given for each image.</p>
<p><b>Figure 15.</b> Statistics of the LILH, SLEM, and proposed methods on the 18 test areas of the WV-2 dataset. (<b>a</b>) Precision, (<b>b</b>) Recall, (<b>c</b>) F1-score.</p>
<p><b>Figure 16.</b> Matching results of the proposed method on the WV-2 dataset. (<b>a</b>–<b>h</b>) Left images of representative matching results. Mismatches are labeled in red; correct matches are labeled in green. Precision values are given for each image.</p>
<p><b>Figure 17.</b> Intermediate results of the proposed method. The (<b>left</b>) figure shows the discrete matching of line segment points, with red dots indicating matched line segment points in the left view, green dots indicating matched line segment points in the right view, and yellow boxes indicating close-ups. The (<b>right</b>) figure shows the matching result for the corresponding line segment in the left figure, with the corresponding line segment labeled in red. Blue line segments indicate the LSD line segments to be matched.</p>
<p><b>Figure 18.</b> Statistics of the LILH, SLEM, and proposed methods on the 15 test areas of the WV-3 dataset. (<b>a</b>) Precision, (<b>b</b>) Recall, (<b>c</b>) F1-score.</p>
<p><b>Figure 19.</b> Matching results of the proposed method on the WV-3 dataset. (<b>a</b>–<b>h</b>) Left images of representative matching results for the WV-3 dataset. Mismatches are labeled in red; correct matches are labeled in green. Precision values are given for each image.</p>
<p><b>Figure 20.</b> Actual runtime of the LILH, SLEM, and proposed methods on the WV-2 and WV-3 datasets. (<b>a</b>) Runtime on the WV-2 dataset, (<b>b</b>) runtime on the WV-3 dataset.</p>
Abstract
1. Introduction
1.1. Matching by Local Feature
1.2. Matching by Combining Geometric Features
1.3. Matching Based on Deep Learning
2. Method
2.1. Stereo Rectification of Satellite Image
2.2. Discrete Point Matching of Line Segments
2.2.1. Line Segment Discretization
2.2.2. PHOW Feature Extraction
2.2.3. Matching of Discrete Line Segment Points
2.3. LSM Based on Soft Voting Classifier
2.3.1. LPMR Classifier
2.3.2. CAR Classifier
2.4. Calculation of Localized Point-Line Homography Model
2.4.1. Voronoi Diagram
2.4.2. Localized Point-Line Homography Model
3. Results
3.1. Experimental Configurations
3.1.1. Evaluation Metrics
3.1.2. Parameters
3.2. Experiments on the Benchmark Dataset
3.3. Experiments on the WV-2 Dataset
3.4. Experiments on the WV-3 Dataset
3.5. Complexity Analysis
4. Discussion
5. Conclusions
Author Contributions
Funding
Data Availability Statement
Conflicts of Interest
References
- Arce, S.; Vernon, C.A.; Hammond, J.; Newell, V.; Janson, J.; Franke, K.W.; Hedengren, J.D. Automated 3D reconstruction using optimized view-planning algorithms for iterative development of structure-from-motion models. Remote Sens. 2020, 12, 2169. [Google Scholar] [CrossRef]
- Zhao, L.; Liu, Y.; Men, C.; Men, Y. Double propagation stereo matching for urban 3-d reconstruction from satellite imagery. IEEE Trans. Geosci. Remote Sens. 2021, 60, 1–17. [Google Scholar] [CrossRef]
- Jia, Q.; Gao, X.; Fan, X.; Luo, Z.; Li, H.; Chen, Z. Novel coplanar line-points invariants for robust line matching across views. In Proceedings of the Computer Vision–ECCV 2016: 14th European Conference, Amsterdam, The Netherlands, 11–14 October 2016; Part VIII. Springer: Berlin/Heidelberg, Germany, 2016; pp. 599–611. [Google Scholar]
- Wei, D.; Zhang, Y.; Li, C. Robust line segment matching via reweighted random walks on the homography graph. Pattern Recognit. 2021, 111, 107693. [Google Scholar] [CrossRef]
- Zheng, X.; Yuan, Z.; Dong, Z.; Dong, M.; Gong, J.; Xiong, H. Smoothly varying projective transformation for line segment matching. ISPRS J. Photogramm. Remote Sens. 2022, 183, 129–146. [Google Scholar] [CrossRef]
- Guo, H.; Wei, D.; Zhang, Y.; Wan, Y.; Zheng, Z.; Yao, Y.; Liu, X.; Li, Z. The One-Point-One-Line geometry for robust and efficient line segment correspondence. ISPRS J. Photogramm. Remote Sens. 2024, 210, 80–96. [Google Scholar] [CrossRef]
- Yu, G.; Morel, J.M. ASIFT: An algorithm for fully affine invariant comparison. Image Process. Line 2011, 1, 11–38. [Google Scholar] [CrossRef]
- Zhang, L.; Koch, R. An efficient and robust line segment matching approach based on LBD descriptor and pairwise geometric consistency. J. Vis. Commun. Image Represent. 2013, 24, 794–805. [Google Scholar] [CrossRef]
- López, J.; Santos, R.; Fdez-Vidal, X.R.; Pardo, X.M. Two-view line matching algorithm based on context and appearance in low-textured images. Pattern Recognit. 2015, 48, 2164–2184. [Google Scholar] [CrossRef]
- Li, K.; Yao, J.; Lu, X.; Li, L.; Zhang, Z. Hierarchical line matching based on line–junction–line structure descriptor and local homography estimation. Neurocomputing 2016, 184, 207–220. [Google Scholar] [CrossRef]
- Li, K.; Yao, J. Line segment matching and reconstruction via exploiting coplanar cues. ISPRS J. Photogramm. Remote Sens. 2017, 125, 33–49. [Google Scholar] [CrossRef]
- Zhang, H.; Luo, Y.; Qin, F.; He, Y.; Liu, X. Elsd: Efficient line segment detector and descriptor. In Proceedings of the IEEE/CVF International Conference on Computer Vision, Montreal, BC, Canada, 11–17 October 2021; pp. 2969–2978. [Google Scholar]
- Wang, Z.; Nie, J.; Li, D.; Feng, X.; Wu, Y.; Xu, G. A Two-Stage Line Matching Method for Multi-temporal Remote Sensing Images. In Proceedings of the 2021 IEEE 6th International Conference on Signal and Image Processing (ICSIP), Nanjing, China, 22–24 October 2021; pp. 170–174. [Google Scholar]
- Wang, J.; Zhu, Q.; Liu, S.; Wang, W. Robust line feature matching based on pair-wise geometric constraints and matching redundancy. ISPRS J. Photogramm. Remote Sens. 2021, 172, 41–58. [Google Scholar] [CrossRef]
- Wang, L.; Neumann, U.; You, S. Wide-baseline image matching using line signatures. In Proceedings of the 2009 IEEE 12th International Conference on Computer Vision, Kyoto, Japan, 29 September–2 October 2009; pp. 1311–1318. [Google Scholar]
- Yammine, G.; Wige, E.; Simmet, F.; Niederkorn, D.; Kaup, A. Novel similarity-invariant line descriptor and matching algorithm for global motion estimation. IEEE Trans. Circuits Syst. Video Technol. 2014, 24, 1323–1335. [Google Scholar] [CrossRef]
- Wang, Q.; Zhang, W.; Liu, X.; Zhang, Z.; Baig, M.H.A.; Wang, G.; He, L.; Cui, T. Line matching of wide baseline images in an affine projection space. Int. J. Remote Sens. 2020, 41, 632–654. [Google Scholar] [CrossRef]
- Jia, Q.; Fan, X.; Gao, X.; Yu, M.; Li, H.; Luo, Z. Line matching based on line-points invariant and local homography. Pattern Recognit. 2018, 81, 471–483. [Google Scholar] [CrossRef]
- Chen, M.; Yan, S.; Qin, R.; Zhao, X.; Fang, T.; Zhu, Q.; Ge, X. Hierarchical line segment matching for wide-baseline images via exploiting viewpoint robust local structure and geometric constraints. ISPRS J. Photogramm. Remote Sens. 2021, 181, 48–66. [Google Scholar] [CrossRef]
- Wang, J.; Liu, S.; Zhang, P. A New Line Matching Approach for High-Resolution Line Array Remote Sensing Images. Remote Sens. 2022, 14, 3287. [Google Scholar] [CrossRef]
- Shen, L.; Zhu, J.; Xin, Q.; Huang, X.; Jin, T. Robust line segment mismatch removal using point-pair representation and Gaussian-uniform mixture formulation. ISPRS J. Photogramm. Remote Sens. 2023, 203, 314–327. [Google Scholar] [CrossRef]
- Lange, M.; Raisch, C.; Schilling, A. Wld: A Wavelet and Learning Based Line Descriptor for Line Feature Matching; The Eurographics Association: Tübingen, Germany, 2020. [Google Scholar]
- Abdellali, H.; Frohlich, R.; Vilagos, V.; Kato, Z. L2d2: Learnable line detector and descriptor. In Proceedings of the 2021 International Conference on 3D Vision (3DV), London, UK, 1–3 December 2021; pp. 442–452. [Google Scholar]
- Ma, Q.; Jiang, G.; Wu, J.; Cai, C.; Lai, D.; Bai, Z.; Chen, H. WGLSM: An end-to-end line matching network based on graph convolution. Neurocomputing 2021, 453, 195–208. [Google Scholar] [CrossRef]
- Li, H.; Chen, K.; Zhao, J.; Wang, J.; Kim, P.; Liu, Z.; Liu, Y.H. Learning to identify correct 2D-2D line correspondences on sphere. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Nashville, TN, USA, 20–25 June 2021; pp. 11743–11752. [Google Scholar]
- Pautrat, R.; Suárez, I.; Yu, Y.; Pollefeys, M.; Larsson, V. Gluestick: Robust image matching by sticking points and lines together. In Proceedings of the IEEE/CVF International Conference on Computer Vision, Paris, France, 1–6 October 2023; pp. 9706–9716. [Google Scholar]
- Maruani, N.; Klokov, R.; Ovsjanikov, M.; Alliez, P.; Desbrun, M. Voromesh: Learning watertight surface meshes with voronoi diagrams. In Proceedings of the IEEE/CVF International Conference on Computer Vision, Paris, France, 1–6 October 2023; pp. 14565–14574. [Google Scholar]
- Khan, M.A.; Iqbal, N.; Jamil, H.; Kim, D.H. An optimized ensemble prediction model using AutoML based on soft voting classifier for network intrusion detection. J. Netw. Comput. Appl. 2023, 212, 103560. [Google Scholar] [CrossRef]
- De Franchis, C.; Meinhardt-Llopis, E.; Michel, J.; Morel, J.M.; Facciolo, G. On stereo-rectification of pushbroom images. In Proceedings of the 2014 IEEE International Conference on Image Processing (ICIP), Paris, France, 27–30 October 2014; pp. 5447–5451. [Google Scholar]
- Von Gioi, R.G.; Jakubowicz, J.; Morel, J.M.; Randall, G. LSD: A fast line segment detector with a false detection control. IEEE Trans. Pattern Anal. Mach. Intell. 2008, 32, 722–732. [Google Scholar] [CrossRef]
- Gaol, F.L. Bresenham Algorithm: Implementation and Analysis in Raster Shape. J. Comput. 2013, 8, 69–78. [Google Scholar] [CrossRef]
- Li, K.; Yao, J.; Lu, M.; Heng, Y.; Wu, T.; Li, Y. Line segment matching: A benchmark. In Proceedings of the 2016 IEEE Winter Conference on Applications of Computer Vision (WACV), Lake Placid, NY, USA, 7–10 March 2016; pp. 1–9. [Google Scholar]
- Bosch, M.; Kurtz, Z.; Hagstrom, S.; Brown, M. A multiple view stereo benchmark for satellite imagery. In Proceedings of the 2016 IEEE Applied Imagery Pattern Recognition Workshop (AIPR), Washington, DC, USA, 18–20 October 2016; pp. 1–9. [Google Scholar]
- Lowe, D.G. Distinctive image features from scale-invariant keypoints. Int. J. Comput. Vis. 2004, 60, 91–110. [Google Scholar] [CrossRef]
| Parameter | Description | Value |
| --- | --- | --- |
|  | Ratio of the minimum feature distance to the second-minimum feature distance in PHOW feature matching (Equation (18)) | 0.8 |
|  | LPMR classifier weight (Equation (23)) | 0.5 |
|  | CAR classifier weight (Equation (23)) | 0.5 |
|  | Range of maximum possible matches for pre-matched line segment filtering (Equation (24)) | 0.8 |
Average results on the benchmark dataset:

| Method | Precision (%) | Recall (%) | F1-Score (%) | Total Number of Matches | Number of Correct Matches |
| --- | --- | --- | --- | --- | --- |
| LILH | 76.28 | 57.74 | 64.03 | 249.71 | 190.48 |
| SLEM | 87.45 | 74.85 | 79.54 | 277.00 | 242.24 |
| Proposed | 93.70 | 78.49 | 84.49 | 371.14 | 347.72 |
Average results on the WV-2 dataset:

| Method | Precision (%) | Recall (%) | F1-Score (%) | Total Number of Matches | Number of Correct Matches |
| --- | --- | --- | --- | --- | --- |
| LILH | 65.85 | 35.61 | 43.88 | 131.45 | 86.56 |
| SLEM | 72.68 | 75.48 | 73.87 | 248.43 | 180.56 |
| Proposed | 92.87 | 76.09 | 83.18 | 197.40 | 183.33 |
Average results on the WV-3 dataset:

| Method | Precision (%) | Recall (%) | F1-Score (%) | Total Number of Matches | Number of Correct Matches |
| --- | --- | --- | --- | --- | --- |
| LILH | 67.91 | 53.13 | 59.57 | 615.33 | 417.87 |
| SLEM | 75.92 | 70.03 | 72.82 | 727.34 | 552.20 |
| Proposed | 90.55 | 82.35 | 86.18 | 717.32 | 649.53 |
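In the tables above, Precision agrees (to within rounding of the per-image averages) with the ratio of correct matches to total matches. A quick sketch of the metric, using values from the proposed method's rows:

```python
def precision_pct(correct, total):
    """Precision as a percentage: correct matches / total matches."""
    return 100.0 * correct / total

bench = precision_pct(347.72, 371.14)  # benchmark table, ~93.7%
wv3 = precision_pct(649.53, 717.32)    # WV-3 table, ~90.5%
```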
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
© 2024 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Share and Cite
Zhao, L.; Guo, F.; Zhu, Y.; Wang, H.; Zhou, B. A Generalized Voronoi Diagram-Based Segment-Point Cyclic Line Segment Matching Method for Stereo Satellite Images. Remote Sens. 2024, 16, 4395. https://doi.org/10.3390/rs16234395