
Search Results (14)

Search Parameters:
Keywords = radargrammetry

21 pages, 9368 KiB  
Article
Radargrammetric 3D Imaging through Composite Registration Method Using Multi-Aspect Synthetic Aperture Radar Imagery
by Yangao Luo, Yunkai Deng, Wei Xiang, Heng Zhang, Congrui Yang and Longxiang Wang
Remote Sens. 2024, 16(3), 523; https://doi.org/10.3390/rs16030523 - 29 Jan 2024
Cited by 2 | Viewed by 1800
Abstract
Interferometric synthetic aperture radar (InSAR) and tomographic SAR measurement techniques are commonly used for the three-dimensional (3D) reconstruction of complex areas, but their effectiveness relies on the interferometric coherence among SAR images with minimal angular disparities. Radargrammetry exploits stereo image matching to determine the spatial coordinates of corresponding points in two SAR images and acquire their 3D properties. The performance of the image matching process directly impacts the quality of the resulting digital surface model (DSM). However, the presence of speckle noise, along with dissimilar geometric and radiometric distortions, poses considerable challenges to accurate stereo SAR image matching. To address these challenges, this paper proposes a radargrammetric method based on the composite registration of multi-aspect SAR images. The proposed method combines coarse registration using the scale-invariant feature transform (SIFT) with precise registration using normalized cross-correlation (NCC) to achieve accurate registration between multi-aspect SAR images with large disparities. Furthermore, the multi-aspect 3D point clouds are merged using the proposed radargrammetric 3D imaging method, resulting in 3D imaging of target scenes from multi-aspect SAR images. For validation, this paper presents a comprehensive 3D reconstruction of the Five-hundred-meter Aperture Spherical radio Telescope (FAST) using Ka-band airborne SAR images. The method does not require prior knowledge of the target and is applicable to the detailed 3D imaging of large-scale areas with complex structures. In comparison with other SAR 3D imaging techniques, it reduces the requirements on orbit control and radar system parameters. In summary, the proposed 3D imaging method with composite registration preserves imaging efficiency while enhancing the imaging accuracy of crucial areas with limited data.
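The precise-registration stage described in this abstract can be illustrated with a small NumPy sketch (a hypothetical example, not the paper's implementation): it refines an integer offset between two SAR-like patches by exhaustively maximizing normalized cross-correlation over a small search window, which is the role NCC plays after SIFT has supplied a coarse alignment.

```python
import numpy as np

def ncc(a, b):
    """Normalized cross-correlation between two equal-sized patches."""
    a = a - a.mean()
    b = b - b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return float((a * b).sum() / denom) if denom > 0 else 0.0

def refine_offset(left, right, center, win=8, search=5):
    """Find the (dy, dx) shift of `right` that best matches the `left`
    patch around `center`, by exhaustive NCC over a small search window."""
    cy, cx = center
    ref = left[cy - win:cy + win, cx - win:cx + win]
    best, best_shift = -2.0, (0, 0)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            cand = right[cy + dy - win:cy + dy + win,
                         cx + dx - win:cx + dx + win]
            score = ncc(ref, cand)
            if score > best:
                best, best_shift = score, (dy, dx)
    return best_shift, best

# Synthetic demo: `right` is `left` shifted by (2, -3) pixels plus noise.
rng = np.random.default_rng(0)
left = rng.standard_normal((64, 64))
right = np.roll(left, shift=(2, -3), axis=(0, 1)) \
        + 0.05 * rng.standard_normal((64, 64))
shift, score = refine_offset(left, right, center=(32, 32))
```

In a real pipeline the search window would be centered on the SIFT-predicted offset and evaluated at many tie points; here a synthetic shifted image stands in for a coarsely registered pair.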
Figure 1: (a) Imaging geometry of one specific aspect angle; (b) geometric relation of the multi-aspect airborne observation geometry.
Figure 2: Flowchart of 3D imaging in radargrammetry based on composite registration.
Figure 3: Aperture-dependent motion error and compensation. The SAR platform flies along the y-axis looking towards its right side at an altitude of H; D is a sampled point when the aircraft is at A, whose ideal position should be B.
Figure 4: Illustration of composite registration for a stereo image group.
Figure 5: Fusion of GPS data, IMU data, and the motion-compensated trajectory.
Figure 6: (a) Optical image of FAST; (b) stripmap SAR image of FAST; (c) flight tracks of the SAR platform and the range of corresponding tracks for stereo image groups 1–6, presented in Table 2.
Figure 7: (a) Coarse registration result for a sub-aperture SAR image pair; (b) enlarged view of the coarse registration and segmentation results in the central region of the image pair; (c) fusion result obtained through coarse registration without sub-regional resampling; (d) fusion result obtained through sub-regional coarse registration and resampling; (e) final fusion result of another image pair.
Figure 8: Details of sub-regional coarse registration. The registration results of three sub-regions are listed in three rows. The first and second columns (a,b,e,f,i,j) show the left and right images for registration; the third column (c,g,k) shows the fusion of the resampled right image (red) and the left image (green) without segmentation; the fourth column (d,h,l) shows the fusion following sub-region registration.
Figure 9: Cross-correlation coefficient image after precise registration of an image pair.
Figure 10: (a–c) Sub-region point clouds from different image pairs, i.e., 3D imaging results of the sub-region containing the feed-source support towers; (d) point cloud results for a single image pair; (e) fusion results of partial sub-region point clouds.
Figure 11: FAST multi-aspect 3D imaging results. (a) Overall reconstruction results; (b–e) reconstruction results from different aspect angles, with a viewing-angle difference of approximately 90 degrees; (f) reconstruction results from the aspect angle between those of Figure 11d,e; (g) reconstruction results from the aspect angle between those of Figure 11b,c.
Figure 12: (a) Aerial view of the reconstruction results; (b) optical image of the area corresponding to Figure 12a; (c) zoomed aerial view of the reconstruction results after rotation, where the red line marks the test sites for DEM quality evaluation; (d) profile of the reconstruction results and DEM fitting curve corresponding to the red line in Figure 12c.
2861 KiB  
Proceeding Paper
Simulation of DEM Based on ICESat-2 Data Using Openly Accessible Topographic Datasets
by Shruti Pancholi, A. Abhinav, Sandeep Maithani and Ashutosh Bhardwaj
Environ. Sci. Proc. 2024, 29(1), 66; https://doi.org/10.3390/ECRS2023-16189 - 11 Dec 2023
Viewed by 683
Abstract
The digital elevation model (DEM) is a three-dimensional digital representation of the terrain or the Earth's surface. For determining topography, DEMs are the most widely used and ideal method, either with the objects on the surface (the digital surface model) or without them (the digital terrain model). Various techniques are used to create DEMs, including traditional surveying methods, photogrammetry, InSAR, LiDAR, clinometry, and radargrammetry. DEMs generated by LiDAR tend to be the most accurate, except for VHR datasets acquired from UAVs with spatial resolutions of a few centimeters. In many regions, LiDAR data are not available, which limits researchers' access to high-resolution, accurate DEMs. With a beam footprint of 13 m and a pulse interval of 0.7 m, ICESat-2 promises high orbital precision and high accuracy, and can produce high-accuracy DEMs in complex topographies with an accuracy of a few centimeters. The Earth's surface elevations are provided by discrete photon data from ICESat-2. It is difficult to ensure the continuity of the topographical data using traditional interpolation techniques, since they over-smooth the estimated surface. Geospatial data can instead be analyzed with machine learning algorithms to extract patterns and spatial extents. In this study, machine learning regression algorithms are used to estimate a DEM from ICESat-2 LiDAR point data together with CartoDEM V3 R1. The study was conducted over the hilly terrain of the Dehradun region in the foothills of the Himalayas in India; the applicability and robustness of these algorithms had been tested for a plain region of Ghaziabad, India, in an earlier study. The interpolated DEMs were evaluated against the TanDEM-X DEM of the same region, with RMSEs of 7.13 m, 7.01 m, 7.15 m, and 3.76 m using gradient boosting regressors, random forest regressors, decision tree regressors, and multi-layer perceptron (MLP) regressors, respectively. Of the four algorithms tested, the MLP regressor showed the best performance in the previous study. The accuracy of the simulated ICESat-2 DEM using the MLP regressor was assessed in this study using DGPS points over the Dehradun region; the RMSE was of the order of 6.58 m against the DGPS reference data.
(This article belongs to the Proceedings of ECRS 2023)
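As a point of comparison for the regression-based interpolation discussed above, here is a pure-NumPy inverse-distance-weighting (IDW) baseline of the traditional kind the abstract contrasts with machine-learning regressors, evaluated with the same RMSE metric. The terrain function, sample counts, and weighting exponent are illustrative assumptions, not the study's data:

```python
import numpy as np

def idw(px, py, xs, ys, zs, power=4.0, eps=1e-12):
    """Inverse-distance-weighted interpolation of scattered elevations:
    the kind of traditional interpolator the abstract says tends to
    over-smooth relative to ML regressors."""
    d2 = (px[:, None] - xs[None, :]) ** 2 + (py[:, None] - ys[None, :]) ** 2
    w = 1.0 / (d2 ** (power / 2) + eps)
    return (w * zs[None, :]).sum(axis=1) / w.sum(axis=1)

def rmse(a, b):
    return float(np.sqrt(np.mean((a - b) ** 2)))

def terrain(x, y):
    """Synthetic smooth surface standing in for real elevations (metres)."""
    return 500 + 80 * np.sin(x / 300) + 60 * np.cos(y / 400)

rng = np.random.default_rng(1)
xs = rng.uniform(0, 3000, 1500)   # sparse sample locations, loosely
ys = rng.uniform(0, 3000, 1500)   # analogous to ICESat-2 photon tracks
zs = terrain(xs, ys)

# Interpolate onto a regular grid and score against the true surface.
gx, gy = np.meshgrid(np.linspace(0, 3000, 40), np.linspace(0, 3000, 40))
pred = idw(gx.ravel(), gy.ravel(), xs, ys, zs)
truth = terrain(gx.ravel(), gy.ravel())
err = rmse(pred, truth)
```

Swapping `idw` for a fitted regressor (e.g., an MLP or tree ensemble) and re-computing `err` reproduces the evaluation logic the abstract describes.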
Figure 1: Study area showing the Dehradun region.
Figure 2: Methodology followed for the simulation of the DEM.
Figure 3: Simulated DEM from CartoDEM and ICESat-2: (a) decision tree regressor, (b) gradient boosting regressor, (c) decision tree regressor, (d) multi-layer perceptron.
Figure 4: GCPs collected using the DGPS survey, overlaid on the simulated MLP DEM product.
21 pages, 140129 KiB  
Article
An Epipolar HS-NCC Flow Algorithm for DSM Generation Using GaoFen-3 Stereo SAR Images
by Jian Wang, Xiaolei Lv, Zenghui Huang and Xikai Fu
Remote Sens. 2023, 15(1), 129; https://doi.org/10.3390/rs15010129 - 26 Dec 2022
Cited by 6 | Viewed by 1936
Abstract
Radargrammetry is a widely used methodology for generating large-scale Digital Surface Models (DSMs). Stereo matching is the most challenging step in radargrammetry due to the significant geometric differences and the inherent speckle noise. The speckle noise produces significant grayscale differences between the same feature points, which degrades the traditional Horn–Schunck (HS) flow and multi-window zero-mean normalized cross-correlation (ZNCC) methods. Therefore, this paper proposes an algorithm named Epipolar HS-NCC Flow (EHNF) for dense stereo matching: an improved HS flow method with a normalized cross-correlation constraint based on epipolar stereo images. First, the epipolar geometry is applied to resample the images and realize coarse stereo matching. Subsequently, the EHNF method forms a global energy function to achieve fine stereo matching. The EHNF method constructs a local normalized cross-correlation constraint term to compensate for the grayscale-invariance constraint, especially for SAR stereo images. Additionally, two assessment methods are proposed to calculate the optimal cross-correlation parameter and smoothness parameter according to the refined matched point pairs. Two GaoFen-3 (GF-3) image pairs from ascending and descending orbits and open Light Detection and Ranging (LiDAR) data are utilized to fully evaluate the proposed method. The results demonstrate that the EHNF algorithm improves the DSM elevation accuracy by 9.6% and 27.0% compared with the HS flow and multi-window ZNCC methods, respectively.
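The HS flow baseline that EHNF extends can be sketched in NumPy. This is the classic Horn–Schunck iteration only (the paper's NCC constraint term, global energy weighting, and epipolar resampling are not reproduced); the periodic test signal and parameter values are illustrative assumptions:

```python
import numpy as np

def horn_schunck(I1, I2, alpha=1.0, iters=500):
    """Classic Horn-Schunck optical flow, the baseline the EHNF method
    extends with an NCC data term. Returns per-pixel flow (u, v)."""
    I1 = I1.astype(float)
    I2 = I2.astype(float)
    # Periodic central differences (the demo signal below is periodic).
    Ix = (np.roll(I1, -1, axis=1) - np.roll(I1, 1, axis=1)) / 2.0
    Iy = (np.roll(I1, -1, axis=0) - np.roll(I1, 1, axis=0)) / 2.0
    It = I2 - I1
    u = np.zeros_like(I1)
    v = np.zeros_like(I1)
    for _ in range(iters):
        # 4-neighbour averages implement the smoothness (regularizer) term.
        ub = (np.roll(u, 1, 0) + np.roll(u, -1, 0) +
              np.roll(u, 1, 1) + np.roll(u, -1, 1)) / 4.0
        vb = (np.roll(v, 1, 0) + np.roll(v, -1, 0) +
              np.roll(v, 1, 1) + np.roll(v, -1, 1)) / 4.0
        t = (Ix * ub + Iy * vb + It) / (alpha ** 2 + Ix ** 2 + Iy ** 2)
        u = ub - Ix * t
        v = vb - Iy * t
    return u, v

# Synthetic pair: the second image is the first shifted by one pixel in x,
# so the true flow is u = +1, v = 0 everywhere.
n = 64
X, Y = np.meshgrid(np.arange(n), np.arange(n))
I1 = np.sin(2 * np.pi * X / 16) + np.cos(2 * np.pi * Y / 16)
I2 = np.roll(I1, 1, axis=1)
u, v = horn_schunck(I1, I2)
```

EHNF's modification, in these terms, would add a correlation-driven data term to `t` so the update is no longer hostage to the grayscale-constancy assumption that speckle violates.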
Figure 1: Flow chart for the radargrammetric DSM generation.
Figure 2: Stereo images (a) before and (b) after epipolar rectification.
Figure 3: Illustration of the calculation of the cross-correlation term.
Figure 4: The Omaha datasets of GF-3 stereo images.
Figure 5: Fusion image of (a) the original GF0719 (green) and the original GF0731 (purple); (b) the original GF0719 (green) and the epipolar-resampled GF0731 (purple); (c) the original GF0721 (green) and the original GF0709 (purple); (d) the original GF0721 (green) and the epipolar-resampled GF0709 (purple).
Figure 6: DSMs generated by the (a) ascending-orbit and (b) descending-orbit stereo images.
Figure 7: The mountainous DSM generated by the (a) EHNF, (b) HS flow, and (c) multi-window ZNCC algorithms from ascending-orbit stereo images; the (d) EHNF, (e) HS flow, and (f) multi-window ZNCC algorithms from descending-orbit stereo images; and (g) the reference DSM generated from LiDAR data.
Figure 8: Root square error of the mountainous DSM generated by the (a) EHNF, (b) HS flow, and (c) multi-window ZNCC algorithms from ascending-orbit stereo images, and the (d) EHNF, (e) HS flow, and (f) multi-window ZNCC algorithms from descending-orbit stereo images.
Figure 9: The flat DSM generated by the (a) EHNF, (b) HS flow, and (c) multi-window ZNCC algorithms from ascending-orbit stereo images; the (d) EHNF, (e) HS flow, and (f) multi-window ZNCC algorithms from descending-orbit stereo images; and (g) the reference DSM generated from LiDAR data.
Figure 10: Root square error of the flat DSM generated by the (a) EHNF, (b) HS flow, and (c) multi-window ZNCC algorithms from ascending-orbit stereo images, and the (d) EHNF, (e) HS flow, and (f) multi-window ZNCC algorithms from descending-orbit stereo images.
Figure 11: Histogram of the disparity map, exploring the effect of the (a) smoothing parameter λ and (b) cross-correlation parameter β.
Figure 12: The mountainous area in (a) the ascending-orbit image GF0719 and (b) the descending-orbit image GF0721.
16 pages, 12578 KiB  
Article
A Probabilistic Approach for Stereo 3D Point Cloud Reconstruction from Airborne Single-Channel Multi-Aspect SAR Image Sequences
by Hanqing Zhang, Yun Lin, Fei Teng and Wen Hong
Remote Sens. 2022, 14(22), 5715; https://doi.org/10.3390/rs14225715 - 12 Nov 2022
Cited by 11 | Viewed by 2600
Abstract
We investigate the problem of obtaining dense 3D reconstructions from airborne multi-aspect synthetic aperture radar (SAR) image sequences. Dense 3D reconstructions from multi-view SAR images are vulnerable to anisotropic scatterers. To address this issue, we propose a probabilistic 3D reconstruction method based on jointly estimating each pixel's height and degree of anisotropy. Specifically, we propose a mixture distribution model for the stereo-matching results, where the degree of anisotropy is modeled as an underlying error source. Then, a Bayesian filtering method is proposed for dense 3D point cloud generation. For real-time applications, redundancy in the multi-aspect observations is further exploited in a probabilistic manner to accelerate the stereo-reconstruction process. To verify the effectiveness and reliability of the proposed method, 3D point cloud generation is tested on Ku-band drone SAR data over a domestic airport area.
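The inlier/outlier mixture idea behind this abstract — each height measurement drawn from a Gaussian around the true height with inlier probability γ, or from a uniform outlier distribution otherwise — can be sketched as a grid-based joint posterior over (h, γ). This is a simplified stand-in for the paper's Bayesian filter; the noise levels, grids, and data are illustrative assumptions:

```python
import numpy as np

def height_posterior(measurements, h_grid, gamma_grid, sigma=0.5,
                     h_min=0.0, h_max=20.0):
    """Joint posterior over height h and inlier ratio gamma, assuming each
    measurement z ~ gamma * N(h, sigma^2) + (1 - gamma) * U(h_min, h_max)."""
    H, G = np.meshgrid(h_grid, gamma_grid, indexing="ij")
    log_post = np.zeros_like(H)          # flat prior over (h, gamma)
    u = 1.0 / (h_max - h_min)            # uniform outlier density
    for z in measurements:
        gauss = np.exp(-0.5 * ((z - H) / sigma) ** 2) \
                / (sigma * np.sqrt(2 * np.pi))
        log_post += np.log(G * gauss + (1 - G) * u)
    post = np.exp(log_post - log_post.max())
    return post / post.sum()

# Synthetic measurements: 30 inliers around a 5 m scatterer, 10 outliers
# standing in for anisotropic / mismatched stereo returns.
rng = np.random.default_rng(2)
z = np.concatenate([rng.normal(5.0, 0.5, 30), rng.uniform(0.0, 20.0, 10)])

h_grid = np.linspace(0.0, 20.0, 201)
gamma_grid = np.linspace(0.01, 0.99, 99)
post = height_posterior(z, h_grid, gamma_grid)
i_h, i_g = np.unravel_index(post.argmax(), post.shape)
h_map, g_map = float(h_grid[i_h]), float(gamma_grid[i_g])
```

The estimated γ directly flags anisotropic points: low-γ map points can be down-weighted or discarded before the point clouds are fused.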
Figure 1: Multi-aspect SAR data collection geometry. (a) Circular trajectory; (b) polygon trajectory.
Figure 2: Schematic of a two-view stereo SAR configuration; I is the reference image and I′ is a slave image.
Figure 3: Processes for generating a probabilistic height estimation.
Figure 4: Posterior distribution for height. The top row shows histograms of 40 height measurements corrupted by outliers. The second row shows the posterior distributions for h when the height measurements are modeled with a single Gaussian distribution. The bottom row shows the 2D posterior distribution estimated by the proposed method, where the vertical axis is the inlier probability γ and the horizontal axis is the height h. The true height for this simulation is h = 5 m. (a) 40 height measurements with 1/4 outliers; (b) 40 height measurements with 1/2 outliers; (c) 40 height measurements with 3/4 outliers.
Figure 5: Flowchart for 3D point cloud generation; 3D map points are maintained and updated by Bayesian filters.
Figure 6: Multi-aspect flight campaign over a domestic airport. Yellow dashed circle: spot area with 360° illumination. Red arrow: flight trajectory (partial).
Figure 7: Example SAR images collected from 4 different aspects. The radar line of sight is indicated on each image with white arrows.
Figure 8: Airport 3D reconstruction, containing 1.6 million points and displayed in a Cartesian coordinate system. (a) Top view; (b) perspective view; (c) side view.
Figure 9: Point cloud for the terminal, shown from front and rear perspectives together with optical photos, demonstrating the full 360° 3D reconstruction ability of multi-aspect SAR.
Figure 10: Comparison of 3D reconstruction from Palm's method and our method; the 3D point clouds are presented in a perspective view and a side view, where the roof structure is clearer in our method. (a) Palm's method; (b) our method; (c) photo of the hangars.
Figure 11: Error-curve evaluation. An error curve shows, for a given error distance d, how much of the result falls within d of the ground truth; our method outperforms Palm12 across all precision levels.
17 pages, 3585 KiB  
Article
Three-Dimensional Coordinate Extraction Based on Radargrammetry for Single-Channel Curvilinear SAR System
by Chenghao Jiang, Shiyang Tang, Yi Ren, Yinan Li, Juan Zhang, Geng Li and Linrang Zhang
Remote Sens. 2022, 14(16), 4091; https://doi.org/10.3390/rs14164091 - 21 Aug 2022
Cited by 1 | Viewed by 1847
Abstract
With the rapid development of high-resolution synthetic aperture radar (SAR) systems, the technique of using multiple two-dimensional (2-D) SAR images with different view angles to extract the three-dimensional (3-D) coordinates of targets has attracted wide attention in recent years. Unlike the traditional multi-channel SAR used for 3-D coordinate extraction, single-channel curvilinear SAR (CLSAR) offers a large variation of view angle, requires less acquisition data, and has a lower device cost. However, due to the complex aerodynamic configuration and flight characteristics, important issues must be considered, including establishing the mathematical model, analyzing the imaging geometry, and designing a high-precision extraction model. In this paper, to address these challenges, a 3-D vector model of CLSAR is presented and the imaging geometries under different view angles are analyzed. Then, a novel 3-D coordinate extraction approach based on radargrammetry is proposed, in which a unique property of the SAR system, called cylindrical symmetry, is utilized to establish a novel extraction model. Compared with the conventional approach, the proposed one places fewer constraints on the trajectory of the radar platform, requires fewer model parameters, and can obtain higher extraction accuracy without the assistance of extra ground control points (GCPs). Numerical results using simulated data demonstrate the effectiveness of the proposed approach.
(This article belongs to the Section Remote Sensing Image Processing)
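At its core, radargrammetric coordinate extraction intersects range observations taken from different positions along the trajectory. A toy Gauss–Newton solver for the range equations |p − sᵢ| = rᵢ illustrates that principle; this deliberately ignores Doppler constraints, the cylindrical-symmetry model, and everything CLSAR-specific, and the sensor geometry is invented:

```python
import numpy as np

def locate_target(sensors, ranges, x0, iters=25):
    """Gauss-Newton solution of the range equations |p - s_i| = r_i.
    Each SAR acquisition contributes one slant-range constraint."""
    p = np.asarray(x0, float)
    for _ in range(iters):
        diffs = p - sensors                    # (n, 3) sensor-to-point vectors
        dists = np.linalg.norm(diffs, axis=1)  # predicted ranges
        r = dists - ranges                     # range residuals
        J = diffs / dists[:, None]             # Jacobian of |p - s_i| w.r.t. p
        step, *_ = np.linalg.lstsq(J, r, rcond=None)
        p = p - step
    return p

# Three (hypothetical) sensor positions along a curvilinear track,
# and a ground target whose true position we try to recover.
sensors = np.array([[0.0, 0.0, 3000.0],
                    [800.0, 1200.0, 3200.0],
                    [1600.0, 600.0, 2900.0]])
target = np.array([500.0, 400.0, 50.0])
ranges = np.linalg.norm(sensors - target, axis=1)
# Initial guess on the ground plane, on the target's side of the track.
est = locate_target(sensors, ranges, x0=[0.0, 0.0, 0.0])
```

With only range spheres there is a mirror-image ambiguity about the plane of the sensor positions; in practice the zero-Doppler (or, in the paper, cylindrical-symmetry) constraint and the choice of initial guess resolve it.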
Graphical abstract
Figure 1: 3-D SAR coordinate extraction techniques based on 2-D synthetic apertures. (a) In-SAR; (b) Tomo-SAR; (c) LA-SAR.
Figure 2: TerraSAR-X images of Swissotel Berlin, taken from view angles of 27.1°, 37.8°, and 45.8°, from left to right.
Figure 3: 3-D geometry model of CLSAR.
Figure 4: Directions of the spatial resolution and the traditional resolution.
Figure 5: Geometry model of slave image focusing.
Figure 6: Geometry model of master image focusing.
Figure 7: Flowchart of the proposed approach.
Figure 8: Geometry of the simulated targets.
Figure 9: Imaging results of two different sub-apertures. (a) Result of the slave image; (b) result of the master image.
Figure 10: Result after image-pair registration.
Figure 11: Extraction errors of the nine point targets.
25 pages, 8658 KiB  
Article
Radargrammetric DSM Generation by Semi-Global Matching and Evaluation of Penalty Functions
by Jinghui Wang, Ke Gong, Timo Balz, Norbert Haala, Uwe Soergel, Lu Zhang and Mingsheng Liao
Remote Sens. 2022, 14(8), 1778; https://doi.org/10.3390/rs14081778 - 7 Apr 2022
Cited by 9 | Viewed by 3262
Abstract
Radargrammetry is a useful approach for generating Digital Surface Models (DSMs) and an alternative to InSAR techniques, which are subject to temporal or atmospheric decorrelation. Stereo image matching in radargrammetry refers to the process of determining homologous points in two images. The performance of image matching influences the final quality of the DSM used for spatio-temporal analysis of landscapes and terrain. In SAR image matching, local matching methods are commonly used but usually produce sparse and inaccurate homologous points, adding ambiguity to final products; global or semi-global matching methods are seldom applied even though they can yield more accurate and dense homologous points. To fill this gap, we propose a hierarchical semi-global matching (SGM) pipeline to reconstruct DSMs in forested and mountainous regions using stereo TerraSAR-X images. In addition, three penalty functions were implemented in the pipeline and evaluated for effectiveness. To compare the accuracy and efficiency of our SGM dense matching method against a local matching method, the normalized cross-correlation (NCC) local matching method was also applied to generate DSMs from the same test data. The accuracy of the radargrammetric DSMs was validated against an airborne photogrammetric reference DSM and compared with the accuracy of NASA's 30 m SRTM DEM. The results show the SGM pipeline produces DSMs with height accuracy and computing efficiency that exceed the SRTM DEM and NCC-derived DSMs. The penalty function adopting the Canny edge detector yields a higher vertical precision than the other two evaluated penalty functions. SGM is a powerful and efficient tool for producing high-quality DSMs from stereo spaceborne SAR images.
(This article belongs to the Special Issue New Developments in Remote Sensing for the Environment)
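The core SGM recursion — unary matching cost plus a small penalty P1 for ±1-disparity changes and a larger penalty P2 for bigger jumps — can be sketched for a single left-to-right path over one scanline. This toy uses a constant P2 (the SGM_const variant, in the paper's terms); the Canny-based penalty function would modulate P2 per pixel at detected edges. The cost volume here is synthetic:

```python
import numpy as np

def sgm_scanline(cost, p1=0.5, p2=1.0):
    """Left-to-right SGM path aggregation over one scanline.
    cost: (width, ndisp) matching-cost array; returns aggregated costs."""
    w, nd = cost.shape
    L = np.zeros_like(cost, dtype=float)
    L[0] = cost[0]
    for x in range(1, w):
        prev = L[x - 1]
        m = prev.min()
        # Candidate transitions: same disparity, +/-1 with penalty p1,
        # any larger jump with penalty p2 (constant here; edge-adaptive
        # in the paper's SGM_canny variant).
        same = prev
        up = np.concatenate([[np.inf], prev[:-1]]) + p1
        down = np.concatenate([prev[1:], [np.inf]]) + p1
        jump = np.full(nd, m + p2)
        L[x] = cost[x] + np.minimum.reduce([same, up, down, jump]) - m
    return L

# Toy cost volume: true disparity is 3 on the left half of the scanline
# and 5 on the right half, with noisy unary costs elsewhere.
rng = np.random.default_rng(3)
w, nd = 40, 8
true_d = np.where(np.arange(w) < 20, 3, 5)
cost = rng.uniform(0.4, 0.6, (w, nd))
cost[np.arange(w), true_d] = 0.1
disp = sgm_scanline(cost).argmin(axis=1)
```

Full SGM sums such path costs over several directions (typically 8 or 16) before taking the per-pixel argmin; a single path already shows how the penalties smooth the disparity profile while still allowing the step.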
Graphical abstract
Figure 1: The Mount Song test area with coverage of the photogrammetric reference DSM and the TerraSAR-X/TanDEM-X stereo images.
Figure 2: Stereo anaglyphs of (a) Spotlight and (b) Stripmap epipolar images.
Figure 3: Zoomed region in the (a) left and (b) right Spotlight epipolar images and (c) the stereo anaglyph; zoomed region in the (d) left and (e) right Stripmap epipolar images and (f) the stereo anaglyph.
Figure 4: DSM with hillshading produced by the SGM_canny algorithm from the Stripmap pair.
Figure 5: Hillshade of test area 1: (a,b) hillshade of the reference photogrammetric DSM and the 30 m resolution SRTM DEM; (c–f) hillshade of the 10 m resolution radargrammetric DSMs produced from SM images by NCC, SGM_const, SGM_gray, and SGM_canny.
Figure 6: (a) DSM profiles along the line segment "m" in Figure 5c; (b) zoomed view of the profiles corresponding to the dotted box in Figure 6a.
Figure 7: (a) DSM profiles along the line segment "n" in Figure 5c; (b) zoomed view of the profiles corresponding to the dotted box in Figure 7a.
Figure 8: Hillshade of test area 2: (a–c) hillshade of the SRTM DEM, the SL mode DSM by NCC, and the SM mode DSM by NCC; (d–f) hillshade of SL mode DSMs by SGM_const, SGM_gray, and SGM_canny; (g–i) hillshade of SM mode DSMs by SGM_const, SGM_gray, and SGM_canny; (j) hillshade of the reference DSM.
Figure 9: (a) Amplitude image subset of SL_0925 and SL_1001; (b) epipolar image subset of SL_0925 and SL_1001; (c) subset of the Spotlight mode disparity map derived by SGM_gray (black represents valid disparity values; white represents invalid disparity values); geographic points generated from the disparity map overlaid on the reference DSM.
Figure 10: Hillshade of test area 3: (a–c) hillshade of the SRTM DEM, the SL mode DSM by NCC, and the SM mode DSM by NCC; (d–f) hillshade of SL mode DSMs by SGM_const, SGM_gray, and SGM_canny; (g–i) hillshade of SM mode DSMs by SGM_const, SGM_gray, and SGM_canny; (j) hillshade of the reference DSM.
18 pages, 2761 KiB  
Article
DEM Generation With a Scale Factor Using Multi-Aspect SAR Imagery Applying Radargrammetry
by Shanshan Feng, Yun Lin, Yanping Wang, Yanhui Yang, Wenjie Shen, Fei Teng and Wen Hong
Remote Sens. 2020, 12(3), 556; https://doi.org/10.3390/rs12030556 - 7 Feb 2020
Cited by 19 | Viewed by 4100
Abstract
Digital elevation model (DEM) generation using multi-aspect synthetic aperture radar (SAR) imagery applying radargrammetry has become a research hotspot. The traditional radargrammetric method solves the rigorous radar projection equations to obtain the three-dimensional coordinates of targets. In this paper, we propose a new DEM generation method based on the offset between multi-aspect images formed on the ground plane. A ground object is projected to different positions from different viewing aspect angles if the height of the object is not equal to the height of the imaging plane. The linear relationship between the offset of the imaging positions and the height of the object is derived, and a scale factor is obtained. Height information can then be retrieved directly from the offset of the imaging positions through the DEM extraction model presented in this paper, so that solving nonlinear equations point by point is avoided. Real C-band airborne circular SAR images are used to verify the proposed approach. When the extracted DEM is applied in the multi-aspect imaging process, the superimposition of multi-aspect images is no longer defocused, enabling finer observation of the scanned scene.
(This article belongs to the Special Issue Airborne SAR: Data Processing, Calibration and Applications)
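Under a simplified flat geometry, the linear offset–height relation can be written down directly: a target at height h above the imaging plane is displaced toward the sensor by roughly h·cot(θ) for incidence angle θ, so two opposing aspects see a relative offset of h·(cot θ₁ + cot θ₂) and the scale factor is k = 1/(cot θ₁ + cot θ₂). The sketch below is a consistency check of that simplified model only; the incidence angles are invented, and the paper's derivation is more general:

```python
import numpy as np

def layover_shift(h, incidence_deg):
    """Ground-plane displacement (toward the sensor) of a target at
    height h above the imaging plane, in a simplified flat geometry."""
    return h / np.tan(np.radians(incidence_deg))

def scale_factor(inc1_deg, inc2_deg):
    """Scale factor k mapping the offset between two opposing-aspect
    ground-plane images to target height: h = k * offset."""
    return 1.0 / (1.0 / np.tan(np.radians(inc1_deg)) +
                  1.0 / np.tan(np.radians(inc2_deg)))

h_true = 12.0            # metres above the imaging plane (illustrative)
inc1, inc2 = 55.0, 60.0  # incidence angles of the two aspects (illustrative)
# Opposing aspects displace the target in opposite directions, so the
# measured offset between the two images is the sum of the two shifts.
offset = layover_shift(h_true, inc1) + layover_shift(h_true, inc2)
h_est = scale_factor(inc1, inc2) * offset
```

Once k is known for a scene, each pixel's measured offset converts to a height by a single multiplication, which is exactly how the abstract avoids solving nonlinear projection equations point by point.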
Show Figures

Graphical abstract
Figure 1: Geometric relation of the multi-aspect airborne observation geometry.
Figure 2: Imaging geometry of one aspect angle; a target above the imaging plane.
Figure 3: Flowchart of the digital elevation model (DEM) generation processing.
Figure 4: Schematic diagram of the equivalent incidence angle.
Figure 5: Top view of the ground coordinate system.
Figure 6: Discrimination of ±|Δh|.
Figure 7: Schematic diagram of the solution for the planar coordinate values.
Figure 8: Location of the test site and optical photograph of the test site.
Figure 9: Track of the synthetic aperture radar (SAR) platform.
Figure 10: The first pair of images; on each sub-aperture image, the radar line of sight is indicated by a red arrow.
Figure 11: The second pair of images; on each sub-aperture image, the radar line of sight is indicated by a red arrow.
Figure 12: A superposed graph of two different sub-aperture images; the offset between the images is caused by the height difference.
Figure 13: A set of points in the scene; blue points cover the scene, and the red point is at the centre of the scene.
Figure 14: Deviation of the scale factor k between any point and the target P(0, 0, 1). The Y-axis has no measured unit; values are expressed in units equivalent to the pixel size.
Figure 15: Preliminary elevation map of the scene generated by each image pair.
Figure 16: Final DEM generated by the presented method.
Figure 17: All-aperture image formed via magnitude accumulation of all the sub-aperture images.
Figure 18: Region A: all-aperture image formed via magnitude accumulation of all the sub-aperture images.
Figure 19: Region B: all-aperture image formed via magnitude accumulation of all the sub-aperture images.
17 pages, 7280 KiB  
Article
Development of Operational Applications for TerraSAR-X
by Oliver Lang, Parivash Lumsdon, Diana Walter, Jan Anderssohn, Wolfgang Koppe, Jüergen Janoth, Tamer Koban and Christoph Stahl
Remote Sens. 2018, 10(10), 1535; https://doi.org/10.3390/rs10101535 - 25 Sep 2018
Cited by 4 | Viewed by 5266
Abstract
In the course of the TerraSAR-X mission, various new applications based on X-Band Synthetic Aperture Radar (SAR) data have been developed and made available as operational products or services. In this article, we elaborate on proven characteristics of TerraSAR-X that are responsible for [...] Read more.
In the course of the TerraSAR-X mission, various new applications based on X-Band Synthetic Aperture Radar (SAR) data have been developed and made available as operational products or services. In this article, we elaborate on the proven characteristics of TerraSAR-X that have enabled the development of operational applications. The article is written from the perspective of a commercial data and service provider, and the focus is on the following applications with high commercial relevance and varying levels of operational maturity: Surface Movement Monitoring (SMM), Ground Control Point (GCP) extraction and Automatic Target Recognition (ATR). Based on these applications, the article highlights the successful transition of innovative research into sustainable, operational use within various market segments. TerraSAR-X's high orbit accuracy, precise radar beam tracing, high-resolution modes and high-quality radiometric performance have proven to be the instrument's key characteristics, through which reliable ground control points and surface movement measurements are obtained. Moreover, high-resolution TerraSAR-X data have been widely exploited for the clarity of their target signatures in the fields of target intelligence and identification. TerraSAR-X's multi-temporal interferometry applications are non-invasive and are now fully standardised, autonomous tools for measuring surface deformation. In particular, multi-baseline interferometric techniques such as Persistent Scatterer Interferometry (PSI) and Small Baseline Subsets (SBAS) benefit from TerraSAR-X's highly precise orbit information and phase stability. Similarly, the instrument's precise orbit information enables the sub-metre accuracy of Ground Control Points (GCPs), which are essential inputs for the orthorectification of remote sensing imagery, for locating targets and for precisely georeferencing a variety of datasets.
While geolocation accuracy is an essential ingredient in the intelligence field, high-resolution TerraSAR-X data, particularly in Staring SpotLight mode, have been widely used in surveillance, security and reconnaissance applications in real time and by automatic or assisted target recognition software. Full article
(This article belongs to the Special Issue Ten Years of TerraSAR-X—Scientific Results)
Show Figures

Figure 1: Semi-automated interferometric time series processing and post-processing steps.
Figure 2: Time-position plot of the SBAS connection graph for the scenes used in the case study (left); the TerraSAR-X acquisition dates are listed in the table on the right.
Figure 3: Selection of operational TerraSAR-X surface movement products for an opencast mining region with known tectonic faults (black hatching lines of ISGK100 © Geologischer Dienst North Rhine-Westphalia 2018): (a) vertical movement velocities in the 13 × 13 km² AOI; (b) subset area (white rectangle in (a)) with Surface Movement Monitoring (SMM) measurement pixels in Horrem (Kerpen, Germany); (c) SMM railway allocation product; (d) SMM road allocation product; (e) SMM building allocation product; (f) SMM enhanced product marking buildings and railway sections with maximum tilts > 0.3 mm/m in a detailed area. Background: World Imagery (Source: Esri, Digital Globe, GeoEye, Earthstar Geographics) and OSM data (© OpenStreetMap).
Figure 4: Time series of vertical displacements (plotted points) and linear regression line (solid line) at selected measurement positions in Figure 3b. (a) refers to points P1 and P2; though spatially adjacent, their temporal displacement differs significantly. (b) refers to point P3 and represents the subsiding tendency of a waste disposal site.
Figure 5: Stereo image configuration for Ground Control Point (GCP) calculation.
Figure 6: Overview of the area of interest (Denver, USA) and selected points for ground coordinate measurements; the white box indicates the subset in Figure 7.
Figure 7: Signal response of a point target (centre peak) in the TerraSAR-X ST image, as marked with the white box in Figure 6.
Figure 8: Automatic Target Recognition (ATR) real-time processing steps emulated on a stand-alone computer using a Synthetic Aperture Radar (SAR)-specific feature extractor and a frequency-domain high-speed Support Vector Machine classifier.
Figure 9: Adaptation and processing steps of a Deep Convolutional Neural Network for SAR ATR.
Figure 10: Number of TerraSAR-X images per season for each location.
Figure 11: Ground range resolution of TerraSAR-X images for each location.
Figure 12: Performance of the processor for each target on tested data.
Figure 13: Target detection and classification of TerraSAR-X Staring SpotLight images for Ryazan airbase.
10102 KiB  
Article
LiDARgrammetry: A New Method for Generating Synthetic Stereoscopic Products from Digital Elevation Models
by Ricardo Rodríguez-Cielos, José Luis Galán-García, Yolanda Padilla-Domínguez, Pedro Rodríguez-Cielos, Ana Belén Bello-Patricio and José Antonio López-Medina
Appl. Sci. 2017, 7(9), 906; https://doi.org/10.3390/app7090906 - 12 Sep 2017
Cited by 5 | Viewed by 5844
Abstract
There are currently several new technologies being used to generate digital elevation models that do not use photogrammetric techniques. For example, LiDAR (Laser Imaging Detection and Ranging) and RADAR (RAdio Detection And Ranging) can generate 3D points and reflectivity information of the surface [...] Read more.
There are currently several new technologies for generating digital elevation models that do not rely on photogrammetric techniques. For example, LiDAR (Laser Imaging Detection and Ranging) and RADAR (RAdio Detection And Ranging) can generate 3D points and reflectivity information of the surface without a photogrammetric approach. In the case of LiDAR, the intensity level indicates the amount of energy that an object reflects after a laser pulse is transmitted. This energy depends mainly on the material of the object and the wavelength used by the LiDAR. The intensity level can be used to generate a synthetic image colored by this attribute, which can be viewed like an RGB (red, green and blue) picture. This work outlines an innovative method, designed by the authors, to generate synthetic pictures from point clouds for use in classical photogrammetric software (digital restitution or stereoscopic vision), using available additional information (for example, the LiDAR intensity level). This allows mapping operators to view LiDAR data as if they were stereo imagery, so they can manually digitize points, 3D lines, break lines, polygons and so on. Full article
(This article belongs to the Special Issue Laser Scanning)
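The core idea of coloring a synthetic image by LiDAR intensity can be sketched as a simple rasterization. This is a hedged, minimal example (the function name and the mean-per-cell gridding rule are the editor's illustration); the paper additionally densifies the point cloud and models the synthetic camera frames.

```python
import numpy as np

def intensity_image(points_xy, intensity, cell=0.5):
    """Rasterize a LiDAR point cloud into an 8-bit grayscale intensity image.

    points_xy: (N, 2) x/y coordinates; intensity: (N,) intensity levels.
    Each cell stores the mean intensity of the points falling inside it,
    scaled to 0-255; empty cells stay 0.
    """
    xy = np.asarray(points_xy, dtype=float)
    val = np.asarray(intensity, dtype=float)
    col = ((xy[:, 0] - xy[:, 0].min()) / cell).astype(int)
    row = ((xy[:, 1].max() - xy[:, 1]) / cell).astype(int)  # north up
    acc = np.zeros((row.max() + 1, col.max() + 1))
    cnt = np.zeros_like(acc)
    np.add.at(acc, (row, col), val)   # sum intensities per cell
    np.add.at(cnt, (row, col), 1.0)   # count points per cell
    with np.errstate(divide="ignore", invalid="ignore"):
        img = np.nan_to_num(acc / cnt)
    return (255.0 * img / img.max()).astype(np.uint8)
```

The resulting array can be saved as a grayscale picture and fed to stereo viewing software in place of a photographic image, which is the LiDARgrammetry idea in miniature.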
Show Figures

Figure 1: Block diagram for the generation of synthetic images.
Figure 2: Visual aspect of a LiDAR point cloud using the intensity level to generate a grayscale image; a greater density of points can be seen in the darkest areas.
Figure 3: LiDAR point cloud before and after the densification process.
Figure 4: Spatial reference systems in the two frames.
Figure 5: Transformations from system R_t to the systems R_i and R_j.
Figure 6: Synthetic intensity model generated for a LiDAR point cloud.
Figure 7: Synthetic intensity model generated and densified for a LiDAR point cloud.
Figure 8: Screenshot of the planimetric control with the mean squared error obtained. The software allows the images to be moved independently; the coordinates of the two images are shown at the button in the first box, and the cross is only used to target a point.
Figure 9: Details of the hypsometric analysis carried out.
Figure 10: Screenshot of the hypsometric analysis with the mean squared error, using the same interface controls as Figure 8.
4763 KiB  
Article
Comparison of Laser and Stereo Optical, SAR and InSAR Point Clouds from Air- and Space-Borne Sources in the Retrieval of Forest Inventory Attributes
by Xiaowei Yu, Juha Hyyppä, Mika Karjalainen, Kimmo Nurminen, Kirsi Karila, Mikko Vastaranta, Ville Kankare, Harri Kaartinen, Markus Holopainen, Eija Honkavaara, Antero Kukko, Anttoni Jaakkola, Xinlian Liang, Yunsheng Wang, Hannu Hyyppä and Masato Katoh
Remote Sens. 2015, 7(12), 15933-15954; https://doi.org/10.3390/rs71215809 - 27 Nov 2015
Cited by 113 | Viewed by 13176
Abstract
It is anticipated that many of the future forest mapping applications will be based on three-dimensional (3D) point clouds. A comparison study was conducted to verify the explanatory power and information contents of several 3D remote sensing data sources on the retrieval of [...] Read more.
It is anticipated that many future forest mapping applications will be based on three-dimensional (3D) point clouds. A comparison study was conducted to verify the explanatory power and information content of several 3D remote sensing data sources for the retrieval of above-ground biomass (AGB), stem volume (VOL), basal area (G), basal-area-weighted mean diameter (Dg) and Lorey's mean height (Hg) at the plot level, utilizing the following data: synthetic aperture radar (SAR) interferometry, SAR radargrammetry, satellite imagery with stereo viewing capability, airborne laser scanning (ALS) with various densities (0.8–6 pulses/m2) and aerial stereo imagery. Laser scanning is generally known as the primary source of 3D point clouds. However, photogrammetric, radargrammetric and interferometric techniques can also produce 3D point clouds from space- and air-borne stereo images. Such image-based point clouds can be utilized in a similar manner as ALS, provided that an accurate digital terrain model is available. In this study, the performance of these data sources was evaluated with 91 sample plots established in Evo, southern Finland, within the boreal forest zone and surveyed in 2014 for this comparison. The prediction models were built using the random forests technique, with features derived from each data source as the independent variables and field measurements of forest attributes as the response variable. The relative root mean square errors (RMSEs) varied in the ranges of 4.6% (0.97 m)–13.4% (2.83 m) for Hg, 11.7% (3.0 cm)–20.6% (5.3 cm) for Dg, 14.8% (4.0 m2/ha)–25.8% (6.9 m2/ha) for G, 15.9% (43.0 m3/ha)–31.2% (84.2 m3/ha) for VOL and 14.3% (19.2 Mg/ha)–27.5% (37.0 Mg/ha) for AGB, depending on the data used. The results indicate that ALS data achieved the most accurate estimates for all forest inventory attributes.
For image-based 3D data, high-altitude aerial images and WorldView-2 satellite optical imagery gave similar results for Hg and Dg, only slightly worse than those of ALS data. As expected, spaceborne SAR data produced the worst estimates. Nevertheless, WorldView-2 satellite data performed well, achieving accuracy comparable to ALS data for G, VOL and AGB estimation. SAR interferometry data appear to contain more information for forest inventory than SAR radargrammetry and achieve better accuracy (relative RMSE decreased from 13.4% to 9.5% for Hg, 20.6% to 19.2% for Dg, 25.8% to 20.9% for G, 31.2% to 22.0% for VOL and 27.5% to 20.7% for AGB). However, the availability of interferometry data is limited. The results confirmed the high potential of all 3D remote sensing data sources for forest inventory purposes. However, using data other than ALS assumes that a high-quality digital terrain model exists; in our case it was derived from ALS. Full article
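The relative RMSE figures quoted above follow the usual definition, RMSE of the predictions divided by the mean of the observations; a minimal sketch (the numbers in the usage line are made up):

```python
import numpy as np

def relative_rmse(pred, obs):
    """Relative root mean square error (%): RMSE of the predictions
    divided by the mean of the observed values."""
    pred = np.asarray(pred, dtype=float)
    obs = np.asarray(obs, dtype=float)
    rmse = np.sqrt(np.mean((pred - obs) ** 2))
    return 100.0 * rmse / obs.mean()

# e.g. plot-level Lorey's height predictions vs. field values (made-up numbers)
print(round(relative_rmse([20.0, 22.0], [21.0, 21.0]), 2))  # 4.76
```

Reporting the error relative to the attribute mean is what makes accuracies comparable across attributes with different units, as in the ranges listed in the abstract.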
Show Figures

Graphical abstract
Figure 1: Study area and the location of sample plots; the background image is a canopy height model created from ALS data.
Figure 2: Flowchart of the method and procedure for predicting forest attributes using point clouds.
Figure 3: Profiles of a 60 m long and 4 m wide section from the various data sources.
Figure 4: Relative root mean square errors (RMSEs) of forest attribute estimates from RS point clouds.
Figure 5: Inventory map for Hg (m) from ALS-900 (first image) and the differences between the ALS-900 map and the other RS data.
Figure 6: Inventory map for Dg (cm) from ALS-900 (first image) and the differences between the ALS-900 map and the other RS data.
Figure 7: Inventory map for G (m²/ha) from ALS-900 (first image) and the differences between the ALS-900 map and the other RS data.
Figure 8: Inventory map for Vol (m³/ha) from ALS-900 (first image) and the differences between the ALS-900 map and the other RS data.
Figure 9: Inventory map for AGB (Mg/ha) from ALS-900 (first image) and the differences between the ALS-900 map and the other RS data.
Figure 10: Increase in relative RMSEs for attribute estimates when the plot size was reduced from 32 m × 32 m to 16 m × 16 m.
787 KiB  
Article
Prediction of Forest Stand Attributes Using TerraSAR-X Stereo Imagery
by Mikko Vastaranta, Mikko Niemi, Mika Karjalainen, Jussi Peuhkurinen, Ville Kankare, Juha Hyyppä and Markus Holopainen
Remote Sens. 2014, 6(4), 3227-3246; https://doi.org/10.3390/rs6043227 - 10 Apr 2014
Cited by 20 | Viewed by 9623
Abstract
Consistent, detailed and up-to-date forest resource information is required for allocation of forestry activities and national and international reporting obligations. We evaluated the forest stand attribute prediction accuracy when radargrammetry was used to derive height information from TerraSAR-X stereo imagery. Radargrammetric elevations were [...] Read more.
Consistent, detailed and up-to-date forest resource information is required for the allocation of forestry activities and for national and international reporting obligations. We evaluated the forest stand attribute prediction accuracy obtained when radargrammetry was used to derive height information from TerraSAR-X stereo imagery. Radargrammetric elevations were normalized to heights above ground using an airborne laser scanning (ALS)-derived digital terrain model (DTM). The derived height metrics were used as predictors in the most similar neighbor (MSN) estimation approach. In total, 207 field-measured plots were used in the MSN estimation, and the obtained results were validated using 94 stands with an average area of 4.1 ha. The relative root mean square errors for Lorey's height, basal area, stem volume and above-ground biomass were 6.7% (1.1 m), 12.0% (2.9 m2/ha), 16.3% (31.1 m3/ha) and 16.1% (15.6 t/ha), respectively. Although the prediction accuracies were promising, it should be noted that the predictions were biased. The respective biases were −4.6% (−0.7 m), −6.4% (−1.6 m2/ha), −9.3% (−17.8 m3/ha) and −9.5% (−9.1 t/ha). With a detailed DTM, TerraSAR-X stereo radargrammetry-derived forest information appears suitable for providing consistent forest resource information over large areas. Full article
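The MSN step can be illustrated with a nearest-neighbour imputation sketch. Note that the real MSN method weights the feature-space distance using canonical correlation analysis; the plain standardised Euclidean distance below, and all names, are simplifications for illustration.

```python
import numpy as np

def msn_predict(X_ref, y_ref, X_target):
    """Most-similar-neighbour style imputation, minimal sketch.

    Each target plot receives the field-measured attribute vector of the
    reference plot that is closest in standardised feature space. The
    actual MSN method weights the distance via canonical correlation
    analysis; plain Euclidean distance is used here.
    """
    X_ref = np.asarray(X_ref, dtype=float)
    y_ref = np.asarray(y_ref, dtype=float)
    mu, sd = X_ref.mean(axis=0), X_ref.std(axis=0)
    Xr = (X_ref - mu) / sd
    Xt = (np.atleast_2d(np.asarray(X_target, dtype=float)) - mu) / sd
    # index of the most similar reference plot for every target plot
    idx = np.argmin(((Xt[:, None, :] - Xr[None, :, :]) ** 2).sum(axis=-1), axis=1)
    return y_ref[idx]
```

With radargrammetric height metrics as features, a target plot would thus be assigned the stem volume, basal area and Lorey's height measured on its most similar field plot.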
Show Figures

The area covered by the TerraSAR-X stereo images; the aerial orthophoto was acquired from the National Land Survey of Finland in March 2013.
Plot-level maximum point heights and proportions of the points below 25%, 50% and 75% of the maximum ALS height.
The 85th percentile (h85) of the ALS and SAR points related to the observed Lorey's height at the plot level.
Vegetation density based on ALS and SAR related to the observed stem volume at the plot level.
Stand-level stem volume estimates by stereo-SAR and multi-source NFI related to area-based ALS interpretation.
1812 KiB  
Article
Forest Variable Estimation Using Radargrammetric Processing of TerraSAR-X Images in Boreal Forests
by Henrik Persson and Johan E.S. Fransson
Remote Sens. 2014, 6(3), 2084-2107; https://doi.org/10.3390/rs6032084 - 7 Mar 2014
Cited by 34 | Viewed by 8075
Abstract
The last decade has seen launches of radar satellite missions operating in X-band with the sensors acquiring images with spatial resolutions on the order of 1 m. This study uses digital surface models (DSMs) extracted from stereo synthetic aperture radar images and a [...] Read more.
The last decade has seen the launch of radar satellite missions operating in X-band, with sensors acquiring images at spatial resolutions on the order of 1 m. This study uses digital surface models (DSMs) extracted from stereo synthetic aperture radar images and a reference airborne laser scanning digital terrain model to calculate above-ground biomass and tree height, and the resulting values are compared to in situ data. Analyses were undertaken at the Swedish test sites Krycklan (64°N) and Remningstorp (58°N), which have different site conditions. The results showed that, for 459 forest stands in Remningstorp, stand-level biomass could be estimated with a relative root mean square error of 22.9%, and height with 9.4%. Many factors influenced the results; in particular, topography has a significant effect on the generated DSMs and should be taken into consideration when the stand-level mean slope exceeds four degrees. Tree species did not have a major effect on the models during leaf-on conditions, whereas accurate estimation within young forest stands proved problematic. The intersection angles yielding the best results were in the range 8–16°. Based on these results, radargrammetry appears to be a promising remote sensing technique for future forest applications. Full article
Show Figures

The two test sites, Krycklan and Remningstorp, located in northern (64°N) and southern (58°N) Sweden, respectively.
The forest stand delineation and field plot distribution of the two test sites: (a) Krycklan; (b) Remningstorp.
Illustration of the information content of a subset of an ortho-rectified GammaMAP-filtered TerraSAR-X image acquired 17 October 2008 over Krycklan (top) and 2 September 2010 over Remningstorp (bottom).
The Krycklan test site with an ALS DTM overlaid by a DSM (delineated in green) derived from TerraSAR-X SL data.
Scatter plots of AGB (left) and H (right) estimations for stands at the Krycklan (top) and Remningstorp (bottom) test sites.
Scatter plot of tree species-specific H estimations at Remningstorp.
Topography-affected stands at Krycklan. In the foreslope (left), high heights are shifted into the central low-height stand by the slope located in the stand to the right (61°); in the backslope (right) this effect is negligible. Top: ALS 95th-percentile height raster; middle: TerraSAR-X derived height raster; bottom: slope raster derived from ALS data. The TerraSAR-X images are acquired from the left side in these images.
Scatter plots for the 20 flatter stands at Krycklan using field reference data from the BioSAR 2008 campaign: AGB (left) and H (right).
9094 KiB  
Article
Forest Assessment Using High Resolution SAR Data in X-Band
by Roland Perko, Hannes Raggam, Janik Deutscher, Karlheinz Gutjahr and Mathias Schardt
Remote Sens. 2011, 3(4), 792-815; https://doi.org/10.3390/rs3040792 - 13 Apr 2011
Cited by 78 | Viewed by 11040
Abstract
Novel radar satellite missions also include sensors operating in X-band at very high resolution. The presented study reports methodologies, algorithms and results on forest assessment utilizing such X-band satellite images, namely from TerraSAR-X and COSMO-SkyMed sensors. The proposed procedures cover advanced stereo-radargrammetric and [...] Read more.
Novel radar satellite missions also include sensors operating in X-band at very high resolution. The presented study reports methodologies, algorithms and results on forest assessment utilizing such X-band satellite images, namely from the TerraSAR-X and COSMO-SkyMed sensors. The proposed procedures cover advanced stereo-radargrammetric and interferometric data processing, as well as image segmentation and classification. A core methodology is the multi-image matching concept for digital surface modeling based on geometrically constrained matching. Validation of the generated surface models against LiDAR data yields a standard deviation of the height error of less than 2 m over forest. Image classification of forest regions is then based on X-band backscatter information, a canopy height model and interferometric coherence, yielding a classification accuracy above 90%. This information is then used directly to extract forest border lines. High-resolution X-band sensors thus deliver imagery that can be used for automatic, large-scale forest assessment. Full article
(This article belongs to the Special Issue 100 Years ISPRS - Advancing Remote Sensing Science)
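The canopy height model used in the classification stage is the difference between the surface and terrain models (CHM = DSM - DTM), as Figure 2 of the article illustrates. A minimal sketch, with clipping thresholds that are illustrative choices rather than values from the paper:

```python
import numpy as np

def canopy_height_model(dsm, dtm, max_height=40.0):
    """Canopy height model: CHM = DSM - DTM.

    Negative differences (matching noise) are clipped to zero, and
    heights above max_height are flagged as outliers. Both thresholds
    are illustrative.
    """
    chm = np.asarray(dsm, dtype=float) - np.asarray(dtm, dtype=float)
    chm = np.clip(chm, 0.0, None)
    chm[chm > max_height] = np.nan
    return chm

# simple forest/non-forest cue: CHM above a height threshold (here 5 m)
forest_mask = canopy_height_model([[305.0, 312.0, 298.0]],
                                  [[300.0, 300.0, 300.0]]) > 5.0
```

In the article the CHM is one of several inputs (alongside backscatter and coherence) to the segmentation and classification, rather than a single-threshold rule as in this sketch.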
Show Figures

Graphical abstract
Figure 1: Proposed workflow for deriving forest parameters using X-band SAR data.
Figure 2: Relation between digital surface model (DSM), digital terrain model (DTM) and canopy height model (CHM).
Figure 3: Overview of the test sites "Burgau", 8.2 × 10.3 km² (left) and "Seiersberg", 12.4 × 11.9 km² (right). These topographic maps show forest regions in green; the red box on the left indicates the subarea shown in Figure 4.
Figure 4: LiDAR and orthophoto reference data for a subarea of the "Burgau" test site: (a) LiDAR DSM, (b) LiDAR DTM, (c) LiDAR CHM and (d) orthophoto mosaic.
Figure 5: Overview of the selected areas of interest (AOI) for DSM evaluation, superimposed on the LiDAR DSMs. Red regions mark bare ground and green regions mark forest; the number, average area μ(A) and total area Σ(A) are given in square metres. The corresponding topographic maps are shown in Figure 3.
Figure 6: Exemplary results of DSM and CHM extraction: TerraSAR-X DSM (a), COSMO-SkyMed DSM (b), LiDAR reference DTM (c), TerraSAR-X CHM (d), COSMO-SkyMed CHM (e), LiDAR CHM (f), colour-coded TerraSAR-X height error (g), COSMO-SkyMed height error (h) and a topographic map for visual comparison (i). A 7.1 × 7.6 km² subset is shown.
Figure 7: Burgau TerraSAR-X Spotlight asc123-c (a) and dsc123-c (b): canopy height underestimation.
Figure 8: Burgau COSMO-SkyMed Spotlight asc123-c: canopy height underestimation.
Figure 9: Seiersberg TerraSAR-X Spotlight asc123-c (a) and dsc123-c (b) and TerraSAR-X Stripmap asc123-c (c) and dsc123-c (d): canopy height underestimation.
Figure 10: Exemplary input data used for image segmentation: (a) backscatter, (b) coherence, (c) texture and (d) canopy height model; (e) reference segmentation based on the laser-scanner vegetation height model and (f) TerraSAR-X based segmentation.
Figure 11: Detailed views of forest border line extraction for two subsets; reference border lines are given on the left, and the automatically extracted borders using TerraSAR-X alone on the right.
383 KiB  
Article
Effects of Orbit and Pointing Geometry of a Spaceborne Formation for Monostatic-Bistatic Radargrammetry on Terrain Elevation Measurement Accuracy
by Alfredo Renga and Antonio Moccia
Sensors 2009, 9(1), 175-195; https://doi.org/10.3390/s90100175 - 8 Jan 2009
Cited by 6 | Viewed by 9480
Abstract
During the last decade a methodology for the reconstruction of surface relief by Synthetic Aperture Radar (SAR) measurements – SAR interferometry – has become a standard. Different techniques developed before, such as stereo-radargrammetry, have been experienced from space only in very limiting geometries [...] Read more.
During the last decade, a methodology for the reconstruction of surface relief by Synthetic Aperture Radar (SAR) measurements, SAR interferometry, has become a standard. Techniques developed before, such as stereo-radargrammetry, have been exercised from space only in very limiting geometries and time series and have hence been branded as less accurate. However, the novel formation-flying configurations achievable by modern spacecraft allow the fulfilment of SAR missions able to produce pairs of monostatic-bistatic images gathered simultaneously with programmed looking angles. It is thus possible to achieve large antenna separations, adequate for fully exploiting the stereoscopic effect, and to make time decorrelation, a strong limiting factor for repeat-pass stereo-radargrammetric techniques, negligible. This paper reports on the design of a monostatic-bistatic mission in terms of orbit and pointing geometry, taking into account present-generation SAR instruments and technology for accurate relative navigation. The performance of different methods for monostatic-bistatic stereo-radargrammetry is then evaluated, showing the possibility of determining the local surface relief with metric accuracy over a wide range of Earth latitudes. Full article
(This article belongs to the Special Issue State-of-the-Art Sensors Technology in Italy)
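The stereoscopic effect exploited here can be illustrated with the classical same-side, flat-Earth relation: a target of height h produces a ground-range parallax p = h (cot θ1 - cot θ2) between images acquired at incidence angles θ1 and θ2, so larger angular separations yield smaller height errors for a given co-registration uncertainty. The sketch below inverts this relation; it is a simplification of the full monostatic-bistatic geometry analysed in the paper, and the function names are illustrative.

```python
import numpy as np

def stereo_height(parallax_m, theta1_deg, theta2_deg):
    """Height from same-side stereo parallax: h = p / (cot(t1) - cot(t2)).

    Flat-Earth illustration only; the paper evaluates full
    monostatic-bistatic viewing geometries and error budgets.
    """
    cot = lambda d: 1.0 / np.tan(np.radians(d))
    return parallax_m / (cot(theta1_deg) - cot(theta2_deg))

def height_sigma(parallax_sigma_m, theta1_deg, theta2_deg):
    """Height uncertainty from parallax (co-registration) uncertainty:
    a larger angular separation gives a smaller height error."""
    cot = lambda d: 1.0 / np.tan(np.radians(d))
    return abs(parallax_sigma_m / (cot(theta1_deg) - cot(theta2_deg)))
```

This is why the large, simultaneously acquired antenna separations of a monostatic-bistatic formation are attractive: they widen the effective stereo angle without introducing repeat-pass time decorrelation.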
Show Figures

Observation strategies for monostatic-bistatic acquisition. The transmitting-receiving (Tx/Rx) and receiving-only (Rx/o) antennae are assumed to operate at the same altitude on parallel trajectories (not to scale for clarity).
(a) Bistatic-to-monostatic ground range resolution ratio, (b) bistatic-to-monostatic azimuth resolution ratio and (c) bistatic angle, as a function of baseline (platforms operating at 620 km altitude on parallel trajectories).
Bistatic time interval required to receive the same monostatic swath width (40 km) for different observation strategies as a function of baseline (platforms operating at 620 km altitude on parallel trajectories).
Ratio between bistatic and monostatic SNR for different observation strategies as a function of baseline (platforms operating at 620 km altitude on parallel trajectories).
Three-dimensional viewing geometry of the monostatic-bistatic stereo-radargrammetric survey (not to scale for clarity).
(a) Along-track, (b) cross-track and (c) radial baseline components for four bistatic antenna off-nadir angles within the range of covered latitudes.
Bistatic angle for four bistatic antenna off-nadir angles within the range of covered latitudes.
Bistatic antenna azimuth angle (a) and bistatic spacecraft yaw steering angle (b) for four bistatic antenna off-nadir angles within the range of covered latitudes.
Contributions to height uncertainty as a function of target latitude for the method of projection of bistatic parameters, for four bistatic antenna off-nadir angles (1/10 of an image pixel assumed as co-registration uncertainty).