
Search Results (660)

Search Parameters:
Keywords = SAR backscatters

26 pages, 43142 KiB  
Article
Can Measurement and Input Uncertainty Explain Discrepancies Between the Wheat Canopy Scattering Model and SMAPVEX12 Observations?
by Lilangi Wijesinghe, Andrew W. Western, Jagannath Aryal and Dongryeol Ryu
Remote Sens. 2025, 17(1), 164; https://doi.org/10.3390/rs17010164 - 6 Jan 2025
Abstract
Realistic representation of microwave backscattering from vegetated surfaces is important for developing accurate soil moisture retrieval algorithms that use synthetic aperture radar (SAR) imagery. Many studies have reported considerable discrepancies between simulated and observed backscatter, but there has been limited effort to quantitatively identify the sources and contributions of these errors using process-based backscatter simulation in comparison with extensive ground observations. This study examined the influence of input uncertainties on simulated backscatter from a first-order radiative transfer model, the Wheat Canopy Scattering Model (WCSM), using ground-based and airborne data collected during the SMAPVEX12 campaign. Input uncertainties to WCSM were simulated using error statistics for two crop growth stages, and the Sobol' method was adopted to analyze the uncertainty in WCSM-simulated backscatter originating from different inputs before and after wheat ear emergence. The results show that, despite the presence of wheat ears, an uncertainty of 0.2 cm in root mean square (RMS) height significantly influences simulated co-polarized backscatter uncertainty. After ear emergence, uncertainty in ears dominates simulated cross-polarized backscatter uncertainty; before ear emergence, uncertainty in RMS height dominates the accuracy of simulated cross-polarized backscatter. These findings suggest that representing wheat ears in the model structure and characterizing surface roughness precisely are essential for accurately simulating backscatter from a wheat field. Since the discrepancy between simulated and observed backscatter coefficients reflects both model and observation uncertainty, the uncertainty of the UAVSAR data was estimated by analyzing the scatter among backscatter coefficients obtained from the same targets near-simultaneously, assuming this scatter represents the observation uncertainty. The resulting observation uncertainties of UAVSAR backscatter for HH, VV, and HV polarizations are 0.8 dB, 0.87 dB, and 0.86 dB, respectively. Discrepancies between WCSM-simulated backscatter and UAVSAR observations are discussed in terms of simulation and observation uncertainty.
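The Sobol' analysis the abstract describes can be sketched generically. The snippet below is not from the paper: it estimates first-order Sobol' indices with the Saltelli (2010) pick-freeze estimator for any vectorized model, and the toy linear model in the test stands in for WCSM, whose inputs and code are not reproduced here.

```python
import numpy as np

def sobol_first_order(model, n_inputs, n_samples=100_000, seed=0):
    """Estimate first-order Sobol' indices with the Saltelli (2010)
    pick-freeze estimator. `model` maps an (n, n_inputs) array of
    inputs in [0, 1) to a length-n output vector."""
    rng = np.random.default_rng(seed)
    A = rng.random((n_samples, n_inputs))
    B = rng.random((n_samples, n_inputs))
    yA, yB = model(A), model(B)
    var_y = np.concatenate([yA, yB]).var()
    s1 = np.empty(n_inputs)
    for i in range(n_inputs):
        ABi = A.copy()
        ABi[:, i] = B[:, i]  # vary only the i-th input between A and AB_i
        # V_i = Var(E[Y | X_i]) estimated as mean(yB * (y_ABi - yA))
        s1[i] = np.mean(yB * (model(ABi) - yA)) / var_y
    return s1
```

For a linear model Y = 2·X1 + X2 with X uniform on [0, 1), the exact indices are 0.8 and 0.2, which the estimator recovers to within sampling noise.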
Figure 1
<p>Schematic diagram of the scattering mechanisms considered in the Wheat Canopy Scattering Model (WCSM) (adapted from Yan et al. [<a href="#B32-remotesensing-17-00164" class="html-bibr">32</a>]).</p>
Figure 2
<p>Map of the study area and wheat sampling fields used in the present study overlaid on the UAVSAR-HH backscatter intensity image acquired on 17 July 2012 for line ID 31606. The inset illustrates the layout of the 16 sampling locations within each wheat field. Wheat crop measurements were made at sampling locations #2, #11, and #14 in each wheat field.</p>
Figure 3
<p>Schematic diagram of the methods followed in the study; steps followed in UAVSAR data processing and backscatter extraction, simulating L-band backscatter using the Wheat Canopy Scattering Model (WCSM) using SMAPVEX12 ground measurements and allometric relationships from Yan et al. [<a href="#B32-remotesensing-17-00164" class="html-bibr">32</a>], simulation uncertainty, and observation uncertainty analyses.</p>
Figure 4
<p>Wheat Canopy Scattering Model (WCSM) simulated total backscatter for HH, VV, and HV polarizations as a function of the incidence angle.</p>
Figure 5
<p>Total sensitivity (<math display="inline"><semantics> <msub> <mi>S</mi> <mi>T</mi> </msub> </semantics></math>) values from the Sobol’ method for WCSM-simulated HH-, VV-, and HV-polarized backscatters for two different crop growth stages; (<b>a</b>) crop height 20 cm (before heading − without wheat ears) and (<b>b</b>) crop height 80 cm (after heading − with wheat ears).</p>
Figure 6
<p>Correlation plots between observed backscatter for different line ID combinations: 31604 vs. 31603 (<b>a</b>) HH, (<b>b</b>) VV, (<b>c</b>) HV; 31605 vs. 31603 (<b>d</b>) HH, (<b>e</b>) VV, (<b>f</b>) HV; 31606 vs. 31603 (<b>g</b>) HH, (<b>h</b>) VV, (<b>i</b>) HV; 31605 vs. 31604 (<b>j</b>) HH, (<b>k</b>) VV, (<b>l</b>) HV; 31606 vs. 31604 (<b>m</b>) HH, (<b>n</b>) VV, (<b>o</b>) HV; 31606 vs. 31605 (<b>p</b>) HH, (<b>q</b>) VV, (<b>r</b>) HV.</p>
Figure 7
<p>Comparison of observed backscatter from UAVSAR and simulated backscatter from the Wheat Canopy Scattering Model (WCSM) where rows represent the flight line IDs 31603, 31604, 31605, and 31606, and columns represent HH-, VV-, and HV-polarized backscatter, respectively. (<b>a</b>) 31603-HH, (<b>b</b>) 31603-VV, (<b>c</b>) 31603-HV, (<b>d</b>) 31604-HH, (<b>e</b>) 31604-VV, (<b>f</b>) 31604-HV, (<b>g</b>) 31605-HH, (<b>h</b>) 31605-VV, (<b>i</b>) 31605-HV, (<b>j</b>) 31606-HH, (<b>k</b>) 31606-VV, and (<b>l</b>) 31606-HV. Uncertainty in observations and simulations are shown via x and y error bars (±standard deviation), respectively, at the mid-right in each panel.</p>
Figure 8
<p>Total backscatter and relative contributions of soil (attenuated soil scattering) and vegetation (volume scattering of ear, leaf, and stem) simulated from the Wheat Canopy Scattering Model (WCSM) for all three polarizations at sampling locations #2, #11, and #14 of wheat fields #44, #45, #65, #73, #74, #81, #85, and #91 on 17 July 2012 for line ID 31606.</p>
Figure 9
<p>Time series of UAVSAR backscatter for HH, VV, and HV polarizations and soil moisture measurements in wheat field #42 (sampling location #2) during SMAPVEX12 campaign.</p>
Figure 10
<p>Sensitivity of WCSM-simulated total backscatter for HH, VV, and HV polarizations at L-band with changes in gravimetric water content of wheat (<b>a</b>) ears, (<b>b</b>) leaves, and (<b>c</b>) stems.</p>
Figure A1
<p>Evolution of sensitivity indices with ensemble size; first-order sensitivity (<math display="inline"><semantics> <msub> <mi>S</mi> <mn>1</mn> </msub> </semantics></math>) (<b>a</b>) HH, (<b>b</b>) VV, (<b>c</b>) HV polarizations and total sensitivity (<math display="inline"><semantics> <msub> <mi>S</mi> <mi>T</mi> </msub> </semantics></math>) (<b>d</b>) HH, (<b>e</b>) VV, (<b>f</b>) HV polarizations.</p>
Figure A2
<p>Rank of (<b>a</b>) <math display="inline"><semantics> <msub> <mi>S</mi> <mn>1</mn> </msub> </semantics></math> and (<b>b</b>) <math display="inline"><semantics> <msub> <mi>S</mi> <mi>T</mi> </msub> </semantics></math> of the 20 input factors to WCSM from the Sobol’ method for HH, VV, and HV polarizations at two different crop heights; 20 cm and 80 cm.</p>
22 pages, 6345 KiB  
Article
Fast Dynamic Time Warping and Hierarchical Clustering with Multispectral and Synthetic Aperture Radar Temporal Analysis for Unsupervised Winter Food Crop Mapping
by Hsuan-Yi Li, James A. Lawarence, Philippa J. Mason and Richard C. Ghail
Agriculture 2025, 15(1), 82; https://doi.org/10.3390/agriculture15010082 - 2 Jan 2025
Abstract
Food sustainability has become a major global concern in recent years. Multiple complementary strategies have been developed to deal with this issue; one of these approaches is regenerative farming. The identification and analysis of crop type phenology are required to achieve sustainable regenerative farming. Earth Observation (EO) data have been widely applied to crop type identification using supervised Machine Learning (ML) and Deep Learning (DL) classifications, but these methods commonly rely on large amounts of ground truth data, which usually prevents historical analysis and may be impractical in very remote, very extensive or politically unstable regions. Thus, the development of a robust but intelligent unsupervised classification model is attractive for the long-term and sustainable prediction of agricultural yields. Here, we propose FastDTW-HC, a combination of Fast Dynamic Time Warping (DTW) and Hierarchical Clustering (HC), as a significantly improved method that requires no ground truth input for the classification of the winter food crop varieties of barley, wheat and rapeseed in Norfolk, UK. A series of variables is first derived from the EO products, including spectral indices from Sentinel-2 multispectral data and backscattered amplitude values at dual polarisations from Sentinel-1 Synthetic Aperture Radar (SAR) data. Then, the phenological patterns of winter barley, winter wheat and winter rapeseed are analysed using FastDTW-HC, applied to the time-series created for each variable between November 2019 and June 2020. Future research will extend this winter food crop mapping analysis using FastDTW-HC modelling to a regional scale.
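As background to FastDTW-HC, the core DTW recurrence (on which FastDTW builds) fits in a few lines. The sketch below is illustrative, not the authors' implementation: the optional Sakoe-Chiba band is a simpler stand-in for FastDTW's multiresolution speed-up, and the resulting pairwise distances would then feed a hierarchical clustering step.

```python
import numpy as np

def dtw_distance(x, y, radius=None):
    """Dynamic Time Warping distance between two 1-D series.

    An optional Sakoe-Chiba band (|i - j| <= radius) limits the warp
    path, approximating the speed-up idea behind FastDTW without the
    multiresolution recursion."""
    n, m = len(x), len(y)
    D = np.full((n + 1, m + 1), np.inf)  # accumulated-cost matrix
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        lo, hi = 1, m
        if radius is not None:
            lo, hi = max(1, i - radius), min(m, i + radius)
        for j in range(lo, hi + 1):
            cost = abs(x[i - 1] - y[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]
```

Unlike the Euclidean distance, DTW lets one time step in x align with several in y, so a phenological curve shifted by a few acquisition dates still scores as similar.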
Figure 1
<p>The growth stages of winter barley, winter wheat and winter rapeseed from late November to June [<a href="#B33-agriculture-15-00082" class="html-bibr">33</a>,<a href="#B34-agriculture-15-00082" class="html-bibr">34</a>,<a href="#B35-agriculture-15-00082" class="html-bibr">35</a>].</p>
Figure 2
<p>(<b>a</b>) Location of Norfolk in the UK, using a Google Earth image (inset), and a Sentinel-2 image map of Norfolk, UK, with the yellow square showing the study area; (<b>b</b>) detailed image of the study area and ground truth point locations for winter barley (orange), wheat (blue) and rapeseed (lilac) from RPA, UK.</p>
Figure 3
<p>The flowchart and workflow of this research.</p>
Figure 4
<p>The general concepts of the Euclidean and DTW similarity (distance) calculations between pixels X and Y in two time-series.</p>
Figure 5
<p>Illustration of a “warp path” between the index values of two pixels in two time-series datasets, X and Y, in an n-by-m matrix of time points, where the “warp path” represents the similarity between the index values of two pixels in time-series n and m.</p>
Figure 6
<p>An example of the Fast DTW process on an optimal warping alignment with local neighbourhood adjustments from a 1/8 resolution to the original resolution.</p>
Figure 7
<p>A graphical illustration of the hierarchical clustering concept. Five individual (conceptual) clusters (A, B, C, D and E) are clustered according to their similarity (i.e., distance) values. Clusters A and B and clusters D and E then form new clusters of AB and DE, whilst C remains alone. Similarities among AB, DE and the individual cluster C, are then used to form the second layer. Since C is more similar to AB, a new ABC cluster is formed whilst DE remains. The final layer gathers all remaining clusters into one large cluster, ABCDE, and the dendrogram of A, B, C, D and E is formed [<a href="#B48-agriculture-15-00082" class="html-bibr">48</a>].</p>
Figure 8
<p>(<b>a</b>) Supervised classification results on winter crops produced by the RPA (RPA, 2021); (<b>b</b>) initial result with the NDVI and the final integration results with R1 to R5 (<b>c</b>–<b>g</b>). Orange represents barley, blue represents wheat and lilac represents rapeseed.</p>
Figure 9
<p>Spectral index and amplitude values throughout the growing season for winter varieties of barley (orange), wheat (blue) and rapeseed (lilac).</p>
26 pages, 12759 KiB  
Article
Rice Identification and Spatio-Temporal Changes Based on Sentinel-1 Time Series in Leizhou City, Guangdong Province, China
by Kaiwen Zhong, Jian Zuo and Jianhui Xu
Remote Sens. 2025, 17(1), 39; https://doi.org/10.3390/rs17010039 - 26 Dec 2024
Abstract
Due to the limited availability of high-quality optical images during the rice growth period in the Lingnan region of China, effectively monitoring the rice planting situation has been a challenge. In this study, we utilized multi-temporal Sentinel-1 data to develop a method for rapidly extracting the extent of rice fields using a threshold segmentation approach, and employed a U-Net deep learning model to delineate the distribution of rice fields. Spatio-temporal changes in rice distribution in Leizhou City, Guangdong Province, China, from 2017 to 2021 were analyzed. By analyzing dense SAR time series data, we determined the backscattering coefficients of typical crops in Leizhou and used the threshold segmentation method to identify rice labels in the time series images. Furthermore, we extracted the distribution of early and late rice in Leizhou City from 2017 to 2021 using a U-Net model with a minimum relative error of 3.56%. Our analysis indicated an increasing trend in both the overall and early rice planting areas, with early rice accounting for 44.74% and late rice for over 50% of the planting area in 2021. Double-cropping rice cultivation was predominantly concentrated in the Nandu River basin, while single-cropping areas were primarily distributed along rivers and irrigation facilities. Examination of the traditional double-cropping areas in Fucheng Town from 2017 to 2021 demonstrated that over 86.94% had at least one instance of double cropping and more than 74% had at least four, suggesting high continuity and stability in the pattern of rice cultivation practices throughout Leizhou City.
(This article belongs to the Section Remote Sensing for Geospatial Science)
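The threshold-segmentation idea can be illustrated with a toy rule: flooded paddies produce a characteristic dip in backscatter at transplanting, followed by a rise as the canopy grows. The thresholds below are purely illustrative assumptions, not the values calibrated in the study.

```python
import numpy as np

def rice_mask(sigma0_db, flood_db=-20.0, rise_db=8.0):
    """Toy threshold segmentation of a backscatter time series (dB).

    A pixel is flagged as paddy rice when it shows the flooded-field
    dip (temporal minimum below `flood_db`) followed by a canopy-growth
    rise of at least `rise_db`. Both thresholds are hypothetical.
    `sigma0_db` has shape (n_dates, height, width)."""
    t_min = sigma0_db.min(axis=0)  # per-pixel temporal minimum
    t_max = sigma0_db.max(axis=0)  # per-pixel temporal maximum
    return (t_min < flood_db) & (t_max - t_min > rise_db)
```

The resulting boolean mask would serve as training labels for a segmentation model such as U-Net; in practice the thresholds are derived from the sample backscatter curves shown in Figure 4.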
Figure 1
<p>The location of the study area.</p>
Figure 2
<p>Photos of different types of crop samples.</p>
Figure 3
<p>Workflow chart.</p>
Figure 4
<p>The curves of backscatter coefficient change in samples in Leizhou. RICE represents the samples selected from Sentinel-2 images, the other samples, such as rice-1 and rice-2, represent those selected from the ground observation experiments. (<b>a</b>) Curves of backscattering coefficient change in crop samples from March to July 2021. (<b>b</b>) Curves of backscattering coefficient change in crop samples from August to November 2021.</p>
Figure 5
<p>The distribution maps of rice in Leizhou City from 2017 to 2021.</p>
Figure 6
<p>The structure of rice cultivation in Leizhou from 2017 to 2021. Single rice denotes areas planted with only one rice crop (early or late) in a year; double rice denotes areas planted with both early and late rice in one year; early rice and late rice denote the areas planted with only early rice or only late rice, respectively.</p>
Figure 7
<p>The chart of crop planting structure in Fucheng Town.</p>
Figure 8
<p>The distribution maps of rice in Fucheng Town from 2017 to 2021.</p>
Figure 9
<p>Regional map of rice planting 9–10 seasons from 2017 to 2021 in Fucheng. The vector outlined by the red line represents the extent of the area where double-cropping rice was planted in 2017-2021.</p>
Figure 10
<p>Regional map of rice planting 7–8 seasons from 2017 to 2021 in Fucheng. The range in the yellow lines indicates the paddy fields where 7–8 rice crops were planted, and the range in the red box indicates the paddy fields where no rice was grown during the season.</p>
Figure 11
<p>Regional map of rice planting 5–6 seasons from 2017 to 2021 in Fucheng. The range in the blue lines indicates the paddy fields where 5–6 rice crops were planted, and the range in the red box indicates the paddy fields where no rice was grown during the season.</p>
Figure 12
<p>Regional map of rice planting 3–4 seasons from 2017 to 2021 in Fucheng. The range in the green lines indicates the paddy fields where 3–4 rice crops were planted, and the range in the red box indicates the paddy fields where no rice was grown during the season.</p>
Figure 13
<p>Regional map of rice planting 1–2 seasons from 2017 to 2021 in Fucheng. The range in the purple lines indicates the paddy fields where 1–2 rice crops were planted, and the range in the red box indicates the paddy fields where no rice was grown during the season.</p>
23 pages, 9152 KiB  
Article
Multi-Band Scattering Characteristics of Miniature Masson Pine Canopy Based on Microwave Anechoic Chamber Measurement
by Kai Du, Yuan Li, Huaguo Huang, Xufeng Mao, Xiulai Xiao and Zhiqu Liu
Sensors 2025, 25(1), 46; https://doi.org/10.3390/s25010046 - 25 Dec 2024
Abstract
Using microwave remote sensing to invert forest parameters requires clear canopy scattering characteristics, which can be investigated intuitively through scattering measurements. However, there are very few ground-based measurements of forest branches, needles, and canopies. In this study, a quantitative analysis of the contributions of canopy branches, needles, and the ground in Masson pine scenes in the C-, X-, and Ku-bands was conducted on a microwave anechoic chamber measurement platform. Four canopy scenes with different densities were constructed by defoliation in the vertical direction, and backscattering data for each scene were collected in the C-, X-, and Ku-bands across eight incidence angles and eight azimuth angles. The results show that in the vertical observation direction, the backscattering energy of the C- and X-bands was predominantly contributed by the ground, whereas the Ku-band signal exhibited higher sensitivity to the canopy structure. The backscattering energy of the scene was influenced by the incident angle, particularly in cross-polarization, where backscattering energy increased with larger incident angles. The scene's backscattering energy was influenced by the scattering and extinction of canopy branches and needles, as well as by ground scattering, resulting in a complex relationship with canopy density. In addition, applying orientation correction to the polarization scattering matrix can mitigate the impact of the incident angle and reduce the decomposition energy errors of the Freeman–Durden model. To ensure the reliability of forest parameter inversion based on SAR data, greater emphasis should be placed on physical models that account for signal scattering and the extinction process, rather than on empirical models.
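The orientation correction mentioned above amounts to rotating the measured scattering matrix by an estimated polarization orientation angle. A minimal sketch under one common convention (an assumption on my part, not a convention taken from the paper):

```python
import numpy as np

def rotate_scattering_matrix(S, theta):
    """Rotate a 2x2 scattering matrix [[S_hh, S_hv], [S_vh, S_vv]] by a
    polarization orientation angle theta (radians): S' = R(theta) S R(theta)^T.
    Rotating by the negative of the estimated orientation angle
    'de-orients' the target before model-based decomposition."""
    c, s = np.cos(theta), np.sin(theta)
    R = np.array([[c, -s], [s, c]])  # rotation in the (H, V) basis
    return R @ S @ R.T
```

A sanity check on the geometry: an ideal dihedral, S = diag(1, -1), rotated by 45° moves all of its energy into the cross-polarized channels, which is exactly the kind of apparent volume scattering that de-orientation removes before a Freeman–Durden-style decomposition.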
Figure 1
<p>(<b>a</b>) Interior view of microwave characteristic measurement and simulation imaging science experiment platform (LAMP, Deqing, China); (<b>b</b>) Geometric diagram of the platform.</p>
Figure 2
<p>(<b>a</b>) The scene with all needles (S1), (<b>b</b>) the first defoliation scene (S2), (<b>c</b>) the second defoliation scene (S3), (<b>d</b>) the scene without needles (S4).</p>
Figure 3
<p>Workflow of this study.</p>
Figure 4
<p>Illustration of backscatter energy profile and signal locations of canopy and ground.</p>
Figure 5
<p>Statistics of the ground and canopy energy contribution ratios for different canopy structure scenes: (<b>a</b>) scene S1; (<b>b</b>) scene S2; (<b>c</b>) scene S3; (<b>d</b>) scene S4.</p>
Figure 6
<p>Cumulative backscattering energy distribution curves for various scenes under different polarization modes in the C-Band: (<b>a</b>) HH polarization mode; (<b>b</b>) VV polarization mode; (<b>c</b>) HV polarization mode; (<b>d</b>) VH polarization mode.</p>
Figure 7
<p>Cumulative backscattering energy distribution curves for various scenes under different polarization modes in the X-Band: (<b>a</b>) HH polarization mode; (<b>b</b>) VV polarization mode; (<b>c</b>) HV polarization mode; (<b>d</b>) VH polarization mode.</p>
Figure 8
<p>Cumulative backscattering energy distribution curves for various scenes under different polarization modes in the Ku-Band: (<b>a</b>) HH polarization mode; (<b>b</b>) VV polarization mode; (<b>c</b>) HV polarization mode; (<b>d</b>) VH polarization mode.</p>
Figure 9
<p>Variation of backscattering energy with observation incidence angle for scene S1: (<b>a</b>) in the C-band; (<b>b</b>) in the X-band; (<b>c</b>) in the Ku-band.</p>
Figure 10
<p>Variation of backscattering energy with observation incidence angle for scene S1 after de-orientation: (<b>a</b>) in the C-band; (<b>b</b>) in the X-band; (<b>c</b>) in the Ku-band.</p>
Figure 11
<p>Side-looking backscattering energy for different canopy structure scenes of Masson pine: (<b>a</b>–<b>c</b>) represents the backscattering energy of the C-band at incidence angles of 35°, 45°, and 55°, respectively; (<b>d</b>–<b>f</b>) represents that of the X-band at incidence angles of 35°, 45°, and 55°, respectively; (<b>g</b>–<b>i</b>) represents that of the Ku-band at incidence angles of 35°, 45°, and 55°, respectively.</p>
Figure 12
<p>Side-looking backscattering energy for different canopy structure scenes of Masson pine after orientation correction: (<b>a</b>–<b>c</b>) represents the backscattering energy of the C-band at incidence angles of 35°, 45°, and 55°, respectively; (<b>d</b>–<b>f</b>) represents that of the X-band at incidence angles of 35°, 45°, and 55°, respectively; (<b>g</b>–<b>i</b>) represents that of the Ku-band at incidence angles of 35°, 45°, and 55°, respectively.</p>
Figure 13
<p>Decomposition energy error statistics based on different polarization decomposition algorithms: (<b>a</b>–<b>d</b>) represents the energy error distribution under different incident angles for scenes S1, S2, S3, and S4, respectively, using the Freeman–Durden model decomposition; (<b>e</b>–<b>h</b>) represents that for scenes S1, S2, S3, and S4, respectively, using the decomposition based on the Freeman–Durden model combined with orientation correction; (<b>i</b>–<b>l</b>) represents that for scenes S1, S2, S3, and S4, respectively, using the decomposition based on the modified Freeman–Durden model combined with orientation correction.</p>
Figure 14
<p>The scattering characteristics energy proportion of each scene obtained by the Freeman–Durden model: (<b>a</b>–<b>c</b>) represents the energy proportion of the C-band at incidence angles of 35°, 45°, and 55°, respectively; (<b>d</b>–<b>f</b>) represents that of the X-band at incidence angles of 35°, 45°, and 55°, respectively; (<b>g</b>–<b>i</b>) represents that of the Ku-band at incidence angles of 35°, 45°, and 55°, respectively.</p>
Figure 15
<p>The scattering characteristics energy proportion of each scene obtained by the modified Freeman–Durden model combined with orientation correction: (<b>a</b>–<b>c</b>) represents the energy proportion of the C-band at incidence angles of 35°, 45°, and 55°, respectively; (<b>d</b>–<b>f</b>) represents that of the X-band at incidence angles of 35°, 45°, and 55°, respectively; (<b>g</b>–<b>i</b>) represents that of the Ku-band at incidence angles of 35°, 45°, and 55°, respectively.</p>
20 pages, 6779 KiB  
Article
Studying Forest Species Classification Methods by Combining PolSAR and Vegetation Spectral Indices
by Hongbo Zhu, Weidong Song, Bing Zhang, Ergaojie Lu, Jiguang Dai, Wei Zhao and Zhongchao Hu
Forests 2025, 16(1), 15; https://doi.org/10.3390/f16010015 - 25 Dec 2024
Abstract
Tree species are important factors affecting the carbon sequestration capacity of forests and the stability of ecosystems, but trees are widely distributed spatially and located in complex environments, and large-scale regional tree species classification models for remote sensing imagery are lacking. Many studies therefore aim to solve this problem by combining multivariate remote sensing data in a machine learning model for forest tree species classification. However, satellite-based laser systems struggle to meet the needs of regional forest species classification due to their unique footprint sampling method, and SAR data limit the accuracy of species classification due to the blending of surface information in backscatter coefficients. In this work, we combined Sentinel-1 and Sentinel-2 data to construct a machine learning tree classification model based on optical features, vegetation spectral features, and PolSAR polarization observation features, and propose a forest tree classification feature selection method based on the Hilbert–Huang transform to address the problem of mixed surface information in SAR data. The PSO-RF method was used to classify forest species, including four temperate broadleaf forests, namely, aspen (Populus L.), maple (Acer), peach tree (Prunus persica), and apricot tree (Prunus armeniaca L.), and two coniferous forests, namely, Chinese pine (Pinus tabuliformis Carrière) and Mongolian pine (Pinus sylvestris var. mongolica Litv.). Experiments were conducted using two Sentinel-1 images, four Sentinel-2 images, and 550 measured forest survey sample data points from the forested area of Fuxin District, Liaoning Province, China. The results show that the fusion model constructed in this study has high accuracy, with a Kappa coefficient of 0.94 and an overall classification accuracy of 95.1%. Moreover, this study shows that PolSAR data can play an important role in forest tree species classification: by applying the Hilbert–Huang transform to PolSAR data, feature information that interferes with the perceived vertical structure of forests can be suppressed to a certain extent, and its role in forest species classification, combined with PolSAR, should not be ignored.
(This article belongs to the Section Forest Inventory, Modeling and Remote Sensing)
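The PSO half of PSO-RF is generic: particle swarm optimization searches the hyperparameter space that the random forest is trained under. The sketch below is illustrative and self-contained; it minimizes a toy function rather than tuning a real classifier, and all parameter values (inertia, acceleration coefficients, swarm size) are conventional defaults, not those used in the paper.

```python
import numpy as np

def pso_minimize(f, bounds, n_particles=30, n_iters=200,
                 w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimal particle swarm optimizer. `bounds` is a list of
    (low, high) pairs, one per dimension. Returns (best_x, best_f)."""
    rng = np.random.default_rng(seed)
    lo, hi = np.asarray(bounds, float).T
    dim = lo.size
    x = rng.uniform(lo, hi, (n_particles, dim))  # positions
    v = np.zeros_like(x)                          # velocities
    pbest, pbest_f = x.copy(), np.array([f(p) for p in x])
    g = pbest[pbest_f.argmin()].copy()            # global best
    for _ in range(n_iters):
        r1, r2 = rng.random((2, n_particles, dim))
        # inertia + cognitive pull (personal best) + social pull (global best)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)
        fx = np.array([f(p) for p in x])
        better = fx < pbest_f
        pbest[better], pbest_f[better] = x[better], fx[better]
        g = pbest[pbest_f.argmin()].copy()
    return g, pbest_f.min()
```

In a PSO-RF setting, `f` would train a random forest with the hyperparameters encoded in the particle and return a cross-validation error, so the swarm converges on a well-tuned classifier.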
Figure 1
<p>Geographic location map of Fuxin area.</p>
Figure 2
<p>The structure of the multi-source remote sensing forest species classification methods.</p>
Figure 3
<p>Random forest importance ranking chart.</p>
Figure 4
<p>Distribution of forest species in Fuxin region in 2021, determined based on multi-source remote sensing forest species classification methods.</p>
Figure 5
<p>Map of localized forest species distribution in the study area. (<b>a</b>) Map of forest species distribution in the southwestern part of the study area. (<b>b</b>) Map of forest species distribution in the northeastern part of the study area.</p>
Figure 6
<p>Results of feature ablation experiments. ((<b>a</b>) is the producer accuracy of the PolSAR feature ablation experiment; (<b>b</b>) is the user accuracy of the PolSAR feature ablation experiment; (<b>c</b>) is the producer accuracy of the optical feature ablation experiment; (<b>d</b>) is the user accuracy of the optical feature ablation experiment; (<b>e</b>) is the producer accuracy of the vegetation spectral feature ablation experiment; (<b>f</b>) is the user accuracy of the vegetation spectral feature ablation experiment; and (<b>g</b>) is the overall accuracy of the three overall accuracies of the feature ablation experiments).</p>
Figure 7
<p>Plot of Hilbert–Huang transform results. ((<b>a</b>) Hilbert–Huang transform result for C11; (<b>b</b>) Hilbert–Huang transform result for C22; (<b>c</b>) Hilbert–Huang transform result for alpha; (<b>d</b>) Hilbert–Huang transform result for anisotropy; and (<b>e</b>) Hilbert–Huang transform result for entropy).</p>
20 pages, 7291 KiB  
Article
Downscaling of Remote Sensing Soil Moisture Products That Integrate Microwave and Optical Data
by Jie Wang, Huazhu Xue, Guotao Dong, Qian Yuan, Ruirui Zhang and Runsheng Jing
Appl. Sci. 2024, 14(24), 11875; https://doi.org/10.3390/app142411875 - 19 Dec 2024
Abstract
Soil moisture is a key variable that affects ecosystem carbon and water cycles and that can directly affect climate change. Remote sensing is the best way to obtain global soil moisture data. Currently, soil moisture remote sensing products have coarse spatial resolution, which limits their application in agriculture, the ecological environment, and urban planning. Soil moisture downscaling methods rely mainly on optical data. Affected by weather, the spatial discontinuity of optical data has a greater impact on the downscaling results. The synthetic aperture radar (SAR) backscatter coefficient is strongly correlated with soil moisture. This study was based on the Google Earth Engine (GEE) platform, which integrated Moderate-Resolution Imaging Spectroradiometer (MODIS) optical data and SAR backscattering coefficients and used machine learning methods to downscale the original 10 km soil moisture product to 1 km and 100 m. The downscaling results were verified using in situ observation data from the Shandian River and Wudaoliang. The results show that in the two study areas, the downscaling results after adding SAR backscattering coefficients are better than before. In the Shandian River, the R value increases from 0.28 to 0.42. In Wudaoliang, the R value increases from 0.54 to 0.70. The RMSE value is 0.03 (cm3/cm3). The downscaled soil moisture products play an important role in water resource management, natural disaster monitoring, ecological and environmental protection, and other fields. In the monitoring and management of natural disasters, such as droughts and floods, they can provide key information support for decision-makers and help formulate more effective emergency response plans. During droughts, affected areas can be identified in a timely manner, and the allocation and scheduling of water resources can be optimized, thereby reducing agricultural losses. Full article
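The regression-based downscaling the abstract describes can be sketched as below: a model is fitted between the coarse-scale soil moisture and auxiliary predictors (NDVI, LST, SLOPE, VV/VH backscatter, as named in the abstract), then applied to the same predictors at fine resolution. This is a minimal illustration with synthetic arrays, not the study's actual GEE workflow.

```python
# Coarse-to-fine soil moisture downscaling sketch with random forest.
# All data are synthetic placeholders for the MODIS/SAR predictors.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)

# 10 km training grid: columns stand for NDVI, LST, SLOPE, VV, VH
n_coarse = 500
X_coarse = rng.normal(size=(n_coarse, 5))
# Synthetic "SMAP-like" soil moisture as a function of the predictors + noise
sm_coarse = (0.25 + 0.05 * X_coarse[:, 0] - 0.03 * X_coarse[:, 1]
             + 0.04 * X_coarse[:, 3]
             + rng.normal(scale=0.01, size=n_coarse))

# Fit the predictor->soil-moisture relation at coarse scale
model = RandomForestRegressor(n_estimators=200, random_state=0)
model.fit(X_coarse, sm_coarse)

# Apply it to the same predictors sampled at 1 km resolution
X_fine = rng.normal(size=(2000, 5))
sm_fine = model.predict(X_fine)   # downscaled soil moisture field
print(sm_fine.shape)
```

The regressor could equally be swapped for XGBoost, which the paper compares against random forest.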
Figure 1
<p>Study area. (<b>a</b>) is the surface coverage type and site distribution of the Wudaoliang area; (<b>b</b>) is the surface coverage type and site distribution of the Shandian River.</p>
Full article ">Figure 2
<p>Flowchart for data processing and soil moisture downscaling. MODIS, SRTM, and SMAP are the abbreviations of the dataset that provides the data required for the experiment. NDVI, LST, SLOPE, and VV/VH are auxiliary data used to train the downscaling model. RF and XGB are the names of the models used for training.</p>
Full article ">Figure 3
<p>Heat map of R values between model data used for downscaling. (<b>a</b>) The Shandian River; (<b>b</b>) Wudaoliang. SMAP_10km is original soil moisture; NDVI, ALB, LST, LAI, SLOPE, VV, and VH are auxiliary data resampled to 10 km resolution.</p>
Full article ">Figure 4
<p>Various feature weights of RF. ALB, LAI, LST, NDVI, SLOPE, VV, and VH are auxiliary data used in building downscaling models using the random forest algorithm.</p>
Full article ">Figure 5
<p>Soil moisture distributions in the Shandian River before and after downscaling. SMAP_10km is the original soil moisture; SMAP_NOVV_1km is the downscaled soil moisture without SAR backscattering coefficient data; SMAP_1km and SMAP_100m are the downscaled soil moisture with added SAR backscattering coefficient data.</p>
Full article ">Figure 6
<p>Soil moisture distributions in the Shandian River before and after downscaling. SMAP_10km is the original soil moisture; SMAP_NOVV_1km is the downscaled soil moisture without SAR backscattering coefficient data; SMAP_1km and SMAP_100m are the downscaled soil moisture with added SAR backscattering coefficient data.</p>
Full article ">Figure 7
<p>Soil moisture distributions before and after downscaling in the Wudaoliang area. SMAP_10km is the original soil moisture; SMAP_NOVV_1km is the downscaled soil moisture without SAR backscattering coefficient data; SMAP_1km is the downscaled soil moisture with added SAR backscattering coefficient data.</p>
Full article ">Figure 8
<p>Scatter plot of before and after downscaling in the Shandian River. The red dotted line in the figure indicates the 1:1 line.</p>
Full article ">Figure 9
<p>Comparison of Taylor diagrams before and after downscaling in the Wudaoliang area. P1 represents 20/08/12, and P2 represents 20/08/20.</p>
Full article ">Figure 10
<p>Scatter plots of soil moisture and in situ SM before and after downscaling. (<b>a</b>) is the verification result before downscaling; (<b>b</b>) is the verification result after downscaling. The red dotted line in the figure indicates the 1:1 line.</p>
Full article ">
28 pages, 16088 KiB  
Article
A Hierarchical Machine Learning-Based Strategy for Mapping Grassland in Manitoba’s Diverse Ecoregions
by Mirmajid Mousavi, James Kobina Mensah Biney, Barbara Kishchuk, Ali Youssef, Marcos R. C. Cordeiro, Glenn Friesen, Douglas Cattani, Mustapha Namous and Nasem Badreldin
Remote Sens. 2024, 16(24), 4730; https://doi.org/10.3390/rs16244730 - 18 Dec 2024
Viewed by 603
Abstract
Accurate and reliable knowledge about grassland distribution is essential for farmers, stakeholders, and government to effectively manage grassland resources from agro-economical and ecological perspectives. This study developed a novel pixel-based grassland classification approach using three supervised machine learning (ML) algorithms, which were assessed [...] Read more.
Accurate and reliable knowledge about grassland distribution is essential for farmers, stakeholders, and government to effectively manage grassland resources from agro-economical and ecological perspectives. This study developed a novel pixel-based grassland classification approach using three supervised machine learning (ML) algorithms, which were assessed in the province of Manitoba, Canada. The grassland classification process involved three stages: (1) to distinguish between vegetation and non-vegetation covers, (2) to differentiate grassland from non-grassland landscapes, and (3) to identify three specific grassland classes (tame, native, and mixed grasses). Initially, this study investigated different satellite data, such as Sentinel-1 (S1), Sentinel-2 (S2), and Landsat 8 and 9, individually and combined, using the random forest (RF) method, with the best performance at the first two steps achieved using a combination of S1 and S2. The combination was then utilized to conduct the first two steps of classification using support vector machine (SVM) and gradient tree boosting (GTB). In step 3, after filtering out non-grassland pixels, the performance of RF, SVM, and GTB classifiers was evaluated with combined S1 and S2 data to distinguish different grassland types. Eighty-nine multitemporal raster-based variables, including spectral bands, SAR backscatters, and digital elevation models (DEM), were input for ML models. RF had the highest classification accuracy at 69.96% overall accuracy (OA) and a Kappa value of 0.55. After feature selection, the variables were reduced to 61, increasing OA to 72.62% with a Kappa value of 0.58. GTB ranked second, with its OA and Kappa values improving from 67.69% and 0.50 to 72.18% and 0.58 after feature selection. The impact of raster data quality on grassland classification accuracy was assessed through multisensor image fusion. 
Grassland classification using the Hue, Saturation, and Value (HSV) fused images showed higher OA (59.18%) and Kappa values (0.36) than the Brovey Transform (BT) and non-fused images. Finally, a web map was created to show grassland results within the Soil Landscapes of Canada (SLC) polygons, relating soil landscapes to grassland distribution and providing valuable information for decision-makers and researchers. Future work may include extending the current methodology by considering other influential variables, like meteorological parameters or soil properties, to create a comprehensive grassland inventory across the whole Prairie ecozone of Canada. Full article
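The three-stage hierarchy described above can be sketched as follows, with random forest at every stage; the features and labels here are synthetic stand-ins for the 89 S1/S2/DEM variables used in the study.

```python
# Hierarchical classification sketch: stage 1 separates vegetation from
# non-vegetation, stage 2 separates grassland from other vegetation,
# stage 3 assigns a grassland type (tame/native/mixed). Data are synthetic.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(1)
X = rng.normal(size=(600, 8))                      # stand-in pixel features
is_veg = (X[:, 0] > -0.5).astype(int)
is_grass = ((X[:, 1] > 0) & (is_veg == 1)).astype(int)
grass_type = rng.integers(0, 3, size=600)          # 0=tame, 1=native, 2=mixed

# Stage 1: vegetation vs non-vegetation
stage1 = RandomForestClassifier(random_state=0).fit(X, is_veg)
veg_mask = stage1.predict(X) == 1

# Stage 2: grassland vs non-grassland, applied only to vegetation pixels
stage2 = RandomForestClassifier(random_state=0).fit(X[veg_mask], is_grass[veg_mask])
grass_mask = np.zeros(len(X), bool)
grass_mask[np.where(veg_mask)[0][stage2.predict(X[veg_mask]) == 1]] = True

# Stage 3: grassland type, applied only to grassland pixels
stage3 = RandomForestClassifier(random_state=0).fit(X[grass_mask], grass_type[grass_mask])
labels = stage3.predict(X[grass_mask])
print(len(labels), "grassland pixels classified")
```

Filtering non-grassland pixels before stage 3, as the paper does, keeps each classifier's decision problem small and class-balanced.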
Graphical abstract
Full article ">Figure 1
<p>Geographical location of study area. Manitoba’s PE, with different ecoregions, is located in the province of Manitoba, Canada.</p>
Full article ">Figure 2
<p>The spatial distribution of the ground-truthing sampling for all LULC classes included in the classification of Manitoba’s PE grasslands.</p>
Full article ">Figure 3
<p>General overview of the major steps and workflow of the novel strategy for grassland classification, which is developed to improve ecological monitoring using multisource RS data and advanced ML techniques. This workflow integrates data from the S1, S2, L8, and L9 satellites. It involves major stages of image preprocessing, multitemporal composition, image fusion using HSV and BT, and advanced ML classifiers, including RF, SVM, and GTB. The classification is performed in three steps to achieve fine-scale identification of native, tame, and mixed grasses, starting from basic vegetation classification (Step 1) to detailed grassland class differentiation (Step 3); # represents the generated grassland map from each ML process. Ancillary field data, topographic features, and LULC information were incorporated as inputs to generate the final grassland maps for web-based visualization.</p>
Full article ">Figure 4
<p>Atmospheric effects on RS imagery demonstrate the influence of atmospheric conditions on the quality of RS data, specifically how clouds and shadows can occlude pixels and impact the accuracy of the reflected signal received by MSS sensors. (<b>a</b>) The pixel occluded by a shadow shows a scenario where a shadow, cast by an obstacle like a cloud, causes the pixel to be occluded, leading to distorted signals received by the sensor. (<b>b</b>) The pixel occluded by a cloud shows a situation where a cloud directly occludes the pixel, resulting in inaccurate data due to the cloud’s interference with the reflected sunlight that reaches the sensor.</p>
Full article ">Figure 5
<p>The steps of HSV method modified from Al-Wassai et al. [<a href="#B85-remotesensing-16-04730" class="html-bibr">85</a>]. After transforming the RGB image to HSV format, its V channel was replaced with the HR channel, which was then converted back to RGB mode.</p>
Full article ">Figure 6
<p>Comparison of step 2 grassland classification results using different ML models: (<b>a</b>) RF, (<b>b</b>) SVM, and (<b>c</b>) GTB.</p>
Full article ">Figure 7
<p>Classification accuracy varies with different input features ranked based on ANOVA for the RF, SVM, and GTB.</p>
Full article ">Figure 8
<p>Sampled areas before and after image fusion for image quality improvement: (<b>a</b>) Landsat 30 m, (<b>b</b>) HSV fused image, and (<b>c</b>) BT fused image.</p>
Full article ">Figure 9
<p>Scatter plots of fused and non-fused bands using BT and HSV approaches. Except for band 2 (B2), HSV had a higher r-squared. Around 1900 points were selected to build the scatter plots, and the color bar represents the point density, ranging from low density (blue) to high density (red).</p>
Full article ">Figure 10
<p>The detailed grassland classification of Manitoba’s PE using RF supervised ML classification model and S1 + S2 data combination.</p>
Full article ">Figure 11
<p>Distribution of mixed, tame, and native grasslands across three ecoregions, highlighting the areas covered by each grassland class. The percentage listed for each ecoregion shows its proportion of the total grassland area, with Southwest Manitoba Uplands at 2.42%, Lake Manitoba Plain at 41.83%, and Aspen Parkland at 55.75%. The relative dominance of each grassland type across the Aspen Parkland, Lake Manitoba Plain, and Southwest Manitoba Uplands illustrates regional differences in land use and ecological composition.</p>
Full article ">Figure A1
<p>The list of all features with their scores. The numbers following spectral bands, VIs and backscatter variables indicate multiple composite images created during the growing season. Red points show the features that were excluded from classification models to achieve their highest OA and Kappa coefficient; (<b>a</b>) ANOVA F-Value of RF; (<b>b</b>) ANOVA F-Value of SVM; and (<b>c</b>) ANOVA F-Value of GTB.</p>
Full article ">Figure A2
<p>The classification maps of pixel-level fusion with RF approach using (<b>a</b>) Multispectral image, (<b>b</b>) HSV fused image, and (<b>c</b>) BT fused image.</p>
Full article ">Figure A3
<p>Out-of-bag (OOB) error for different numbers of trees and number of variables per split was calculated. Different numbers of variables per split tested are the square root of the total number of variables (SQRT), the total number of variables (ALL), and the natural logarithm of the total number of variables (Ln).</p>
Full article ">Figure A4
<p>(<b>a</b>) OA of classification for different kernel types in the SVM Model. (<b>b</b>) Grid search to find the best value for the Cost/Regularization parameter for Linear kernel in SVM.</p>
Full article ">Figure A5
<p>The effect of the number of trees on OA in GTB classification.</p>
Full article ">
21 pages, 13076 KiB  
Article
A Framework for High-Spatiotemporal-Resolution Soil Moisture Retrieval in China Using Multi-Source Remote Sensing Data
by Zhuangzhuang Feng, Xingming Zheng, Xiaofeng Li, Chunmei Wang, Jinfeng Song, Lei Li, Tianhao Guo and Jia Zheng
Land 2024, 13(12), 2189; https://doi.org/10.3390/land13122189 - 15 Dec 2024
Viewed by 782
Abstract
High-spatiotemporal-resolution and accurate soil moisture (SM) data are crucial for investigating climate, hydrology, and agriculture. Existing SM products do not yet meet the demands for high spatiotemporal resolution. The objective is to develop and evaluate a retrieval framework to derive SM estimates with [...] Read more.
High-spatiotemporal-resolution and accurate soil moisture (SM) data are crucial for investigating climate, hydrology, and agriculture. Existing SM products do not yet meet the demands for high spatiotemporal resolution. The objective is to develop and evaluate a retrieval framework to derive SM estimates with high spatial (100 m) and temporal (<3 days) resolution that can be used on a national scale in China. Therefore, this study integrates multi-source data, including optical remote sensing (RS) data from Sentinel-2 and Landsat-7/8/9, synthetic aperture radar (SAR) data from Sentinel-1, and auxiliary data. Four machine learning and deep learning algorithms are applied, including Random Forest Regression (RFR), Extreme Gradient Boosting (XGBoost), Long Short-Term Memory (LSTM) networks, and Ensemble Learning (EL). The integrated framework (IF) considers three feature scenarios (SC1: optical RS + auxiliary data, SC2: SAR + auxiliary data, SC3: optical RS + SAR + auxiliary data), encompassing a total of 33 features. The results are as follows: (1) The correlation coefficients (r) between auxiliary data (such as sand fraction, r = −0.48; silt fraction, r = 0.47; and evapotranspiration, r = −0.42), SAR features (such as the backscatter coefficients for VV-pol (σvv0), r = 0.47), and optical RS features (such as Shortwave Infrared Band 2 (SWIR2) reflectance data from Sentinel-2 and Landsat-7/8/9, r = −0.39) with observed SM are significant. This indicates that multi-source data can provide complementary information for SM monitoring. (2) Compared to XGBoost and LSTM, RFR and EL demonstrate superior overall performance and are the preferred models for SM prediction. Their R2 for the training and test sets exceed 0.969 and 0.743, respectively, and their ubRMSE are below 0.022 and 0.063 m3/m3, respectively. (3) The SM prediction accuracy is highest for the scenario of optical + SAR + auxiliary data, followed by SAR + auxiliary data, and finally optical + auxiliary data. 
(4) With an increasing Normalized Difference Vegetation Index (NDVI) and SM values, the trained models exhibit a general decrease in prediction performance and accuracy. (5) In 2021 and 2022, without considering cloud cover, the IF theoretically achieved an SM revisit time of 1–3 days across 95.01% and 96.53% of China’s area, respectively. However, SC1 was able to achieve a revisit time of 1–3 days over 60.73% of China’s area in 2021 and 69.36% in 2022, while the area covered by SC2 and SC3 at this revisit time accounted for less than 1% of China’s total area. This study validates the effectiveness of combining multi-source RS data with auxiliary data in large-scale SM monitoring and provides new methods for improving SM retrieval accuracy and spatiotemporal coverage. Full article
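The unbiased RMSE (ubRMSE) quoted throughout the abstract removes the mean bias from the RMSE, so it equals the standard deviation of the prediction errors: ubRMSE = sqrt(RMSE^2 - bias^2). A minimal implementation, with illustrative sample values:

```python
# ubRMSE: RMSE with the systematic (mean) bias removed.
import numpy as np

def ubrmse(obs, pred):
    bias = np.mean(pred - obs)
    rmse = np.sqrt(np.mean((pred - obs) ** 2))
    return np.sqrt(rmse ** 2 - bias ** 2)

obs = np.array([0.20, 0.25, 0.30, 0.22, 0.28])   # m3/m3, illustrative
pred = np.array([0.22, 0.24, 0.33, 0.21, 0.30])
print(round(ubrmse(obs, pred), 4))  # → 0.0167
```

Because the bias term is subtracted, ubRMSE is always at most the plain RMSE, which is why SM papers report it alongside R2.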
(This article belongs to the Section Land – Observation and Monitoring)
Figure 1
<p>The spatial distribution of the SONTE-China 17 sites within the study area.</p>
Full article ">Figure 2
<p>A framework for estimating SM based on multi-source RS data. *** represents the first priority, ** represents the second priority, and * represents the third priority.</p>
Full article ">Figure 3
<p>The training (<b>top</b>) and test (<b>bottom</b>) results of four models from IF at SONTE-China (17 sites). The red dotted line is the trend line. The gray dotted line represents the error line at 0.06 m<sup>3</sup>/m<sup>3</sup>.</p>
Full article ">Figure 4
<p>The training results of four models at SONTE-China (17 sites). The red dotted line is the trend line. The gray dotted line represents the error line at 0.06 m<sup>3</sup>/m<sup>3</sup>.</p>
Full article ">Figure 5
<p>The test results of four models at SONTE-China (17 sites). The red dotted line is the trend line. The gray dotted line represents the error line at 0.06 m<sup>3</sup>/m<sup>3</sup>.</p>
Full article ">Figure 6
<p>The time series of estimated and observed SM from three scenarios at NQ, JYT, and MQ sites. The blue solid line represents the observed SM at 0–5 cm. The green solid line represents the daily NDVI. The red, green, and purple squares represent the estimated SM for SC1, SC2, and SC3, respectively. The blue bars indicate daily precipitation. The red dashed vertical lines distinguish between the training and test sets.</p>
Full article ">Figure 7
<p>Revisit time between SC1, SC2, SC3, and IF for monitoring SM in China (2021). (<b>a</b>) SC1: Optical RS + auxiliary data only; (<b>b</b>) SC2: SAR + auxiliary data only; (<b>c</b>) SC3: optical RS + SAR + auxiliary data; (<b>d</b>) IF: combined SC3, SC2, and SC1 scenarios.</p>
Full article ">Figure 8
<p>Revisit time between SC1, SC2, SC3, and IF for monitoring SM in China (2022). (<b>a</b>) SC1: Optical RS + auxiliary data only; (<b>b</b>) SC2: SAR + auxiliary data only; (<b>c</b>) SC3: optical RS + SAR + auxiliary data; (<b>d</b>) IF: combined SC3, SC2, and SC1 scenarios.</p>
Full article ">Figure 9
<p>Training (<b>top</b>) and test (<b>bottom</b>) results of three categories using the RFR based on the SC3 dataset at SONTE-China (17 sites). The red dotted line is the trend line. The gray dotted line represents the error line at 0.06 m<sup>3</sup>/m<sup>3</sup>.</p>
Full article ">Figure 10
<p>Performance of different models under various NDVI categories in the training set (<b>left</b>) and test set (<b>right</b>). The colored dot lines represent R<sup>2</sup>, and the bar charts represent ubRMSE.</p>
Full article ">Figure 11
<p>Performance of different models under various SM categories in the training set (<b>left</b>) and test set (<b>right</b>). The bar charts represent ubRMSE, and the red dot line represents the average ubRMSE.</p>
Full article ">Figure 12
<p>Revisit time distribution for multi-source RS monitoring of SM under different scenarios (2021–2022).</p>
Full article ">
18 pages, 5133 KiB  
Article
Field Scale Soil Moisture Estimation with Ground Penetrating Radar and Sentinel 1 Data
by Rutkay Atun, Önder Gürsoy and Sinan Koşaroğlu
Sustainability 2024, 16(24), 10995; https://doi.org/10.3390/su162410995 - 15 Dec 2024
Viewed by 639
Abstract
This study examines the combined use of ground penetrating radar (GPR) and Sentinel-1 synthetic aperture radar (SAR) data for estimating soil moisture in a 25-decare field in Sivas, Türkiye. Soil moisture, vital for sustainable agriculture and ecosystem management, was assessed using in situ [...] Read more.
This study examines the combined use of ground penetrating radar (GPR) and Sentinel-1 synthetic aperture radar (SAR) data for estimating soil moisture in a 25-decare field in Sivas, Türkiye. Soil moisture, vital for sustainable agriculture and ecosystem management, was assessed using in situ measurements, SAR backscatter analysis, and GPR-derived dielectric constants. A novel empirical model adapted from the classical soil moisture index (SSM) was developed for Sentinel-1, while GPR data were processed using the reflected wave method for estimating moisture at 0–10 cm depth. GPR demonstrated a stronger correlation with in situ measurements (R2 = 74%) than Sentinel-1 (R2 = 32%), reflecting its ability to detect localized moisture variations. Sentinel-1 provided broader trends, revealing its utility for large-scale analysis. Combining these techniques overcame individual limitations, offering detailed spatial insights and actionable data for precision agriculture and water management. This integrated approach highlights the complementary strengths of GPR and SAR, enabling accurate soil moisture mapping in heterogeneous conditions. The findings emphasize the value of multi-technique methods for addressing challenges in sustainable resource management, improving irrigation strategies, and mitigating climate impacts. Full article
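The abstract does not state the paper's own empirical GPR model, so as an illustration the sketch below derives a dielectric constant from a two-way travel time (the reflected wave method) and converts it to volumetric water content with the widely cited Topp et al. (1980) relation; the travel time and depth values are illustrative.

```python
# GPR reflected-wave moisture sketch: travel time -> permittivity -> moisture.
def dielectric_from_traveltime(t_ns, depth_m):
    """Relative permittivity from two-way travel time to a reflector."""
    c = 0.3                            # speed of light, m/ns
    v = 2.0 * depth_m / t_ns           # wave velocity in the soil, m/ns
    return (c / v) ** 2

def topp_moisture(eps):
    """Volumetric water content (m3/m3) via Topp et al. (1980)."""
    return -5.3e-2 + 2.92e-2 * eps - 5.5e-4 * eps**2 + 4.3e-6 * eps**3

eps = dielectric_from_traveltime(t_ns=3.0, depth_m=0.10)   # 10 cm reflector
print(round(eps, 2), round(topp_moisture(eps), 3))  # → 20.25 0.348
```

Slower wave propagation (longer travel time for the same depth) means a higher permittivity and therefore wetter soil, which is the physical basis of the GPR estimates discussed above.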
Figure 1
<p>Study area: (<b>a</b>) location of the study area in the Earth; (<b>b</b>) location of the study area in the country; (<b>c</b>) regional location of the study area; (<b>d</b>) boundary of the study area.</p>
Full article ">Figure 2
<p>Points measured with soil moisture meter sensor.</p>
Full article ">Figure 3
<p>Flowchart of the study.</p>
Full article ">Figure 4
<p>Soil moisture-backscatter relationship in VV polarization.</p>
Full article ">Figure 5
<p>Soil moisture-backscatter relationship in VH polarization.</p>
Full article ">Figure 6
<p>Soil moisture estimated with Sentinel 1—GPR profiles.</p>
Full article ">Figure 7
<p>Relationship between soil moisture values estimated with Sentinel 1 and measured with soil moisture meter sensor.</p>
Full article ">Figure 8
<p>Relationship between soil moisture values estimated by GPR and measured by soil moisture meter sensor.</p>
Full article ">Figure 9
<p>Soil moisture estimated from GPR Profile 1 and soil moisture estimated from Sentinel 1.</p>
Full article ">Figure 10
<p>Soil moisture estimated from GPR Profile 2 and soil moisture estimated from Sentinel 1.</p>
Full article ">Figure 11
<p>Soil moisture estimated from GPR Profile 3 and soil moisture estimated from Sentinel 1.</p>
Full article ">Figure 12
<p>Soil moisture was estimated from GPR Profile 3 and soil moisture from Sentinel 1.</p>
Full article ">Figure 13
<p>GPR profile 1.</p>
Full article ">Figure 14
<p>GPR profile 2.</p>
Full article ">
18 pages, 9421 KiB  
Article
SAR Data and Harvesting Residues: An Initial Assessment of Estimation Potential
by Alberto Udali, Henrik J. Persson, Bruce Talbot and Stefano Grigolato
Earth 2024, 5(4), 945-962; https://doi.org/10.3390/earth5040049 - 1 Dec 2024
Viewed by 828
Abstract
The increasing demand for large-scale, high-frequency environmental monitoring has driven the adoption of satellite-based technologies for effective forest management, especially in the context of climate change. This study explores the potential of SAR for estimating the mass of harvesting residues, a significant component [...] Read more.
The increasing demand for large-scale, high-frequency environmental monitoring has driven the adoption of satellite-based technologies for effective forest management, especially in the context of climate change. This study explores the potential of SAR for estimating the mass of harvesting residues, a significant component of forest ecosystems that impacts nutrient cycling, fire risk, and bioenergy production. The research hypothesizes that while the spatial distribution of residues remains stable, changes in moisture content—reflected in variations in the dielectric properties of the woody material—can be detected by SAR techniques. Two models, the generalized linear model (GLM) and random forest (RF) model, were used to predict the mass of residues using interferometric variables (phase, amplitude, and coherence) as well as the backscatter signal from several acquisition pairs. The models provided encouraging results (R2 of 0.48 for GLM and 0.13 for RF), with an acceptable bias and RMSE. It was concluded that it is possible to derive useful indications about the mass of harvesting residues from SAR data and the findings could lead to the improved monitoring and management of forest residues, contributing to sustainable forestry practices and the enhanced utilization of bioenergy resources. Full article
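The interferometric coherence used above as a predictor is the normalized cross-correlation of two co-registered complex acquisitions over a local estimation window. A minimal estimator, with synthetic patches standing in for Sentinel-1 SLC data:

```python
# Interferometric coherence |gamma| of two complex SLC patches.
import numpy as np

def coherence(s1, s2):
    """Magnitude of complex coherence over a single estimation window."""
    num = np.abs(np.mean(s1 * np.conj(s2)))
    den = np.sqrt(np.mean(np.abs(s1) ** 2) * np.mean(np.abs(s2) ** 2))
    return num / den

rng = np.random.default_rng(2)
s1 = rng.normal(size=(5, 5)) + 1j * rng.normal(size=(5, 5))
noise = rng.normal(size=(5, 5)) + 1j * rng.normal(size=(5, 5))
s2 = s1 + 0.5 * noise              # partially decorrelated second acquisition

print(round(coherence(s1, s1), 2))   # identical images -> 1.0
print(coherence(s1, s2) < 1.0)       # decorrelation lowers coherence
```

Moisture changes in the residues alter the dielectric properties of the scatterers between acquisitions, which shows up as exactly this kind of coherence loss; that is the signal the GLM and RF models exploit.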
Figure 1
<p>Overview of the study; location of the sample plots—divided between field plots and interpreted plots—and an example of the woody material distribution in (<b>A</b>) a field plot and (<b>B</b>) an interpreted plot.</p>
Full article ">Figure 2
<p>(<b>A</b>) Example of a transect organization with the boxes indicating the vertices of the triangle and the transects between them. The localization of the plot is depicted with respect to a hypothetical machine trail, but it could be applied generally. (<b>B</b>) Example of a residue’s distribution over a plantation after a clear-cut; it is possible to notice the presence of material of different sizes.</p>
Full article ">Figure 3
<p>Spatial and temporal distance (i.e., the baseline) between the acquired images, also considering the subdivision into master and slave images.</p>
Full article ">Figure 4
<p>Methodology flowchart to estimate residue mass using SAR and field data. Red boxes relate to InSAR and DInSAR processing, blue boxes to the SAR backscatter processing, and orange boxes to the photogrammetric process. Dashed boxes indicate secondary analysis.</p>
Full article ">Figure 5
<p>Phase images from the computed interferograms. The values in the images range from −π to π, from red to blue. The study site is represented with a purple outline and the plots with a black outline.</p>
Full article ">Figure 6
<p>The coherence obtained from the image pairs and the differences between them: (<b>A</b>) April–May, (<b>B</b>) April–July, (<b>C</b>) coherence difference between (<b>A</b>) and (<b>B</b>), (<b>D</b>) April–October, (<b>E</b>) coherence difference between (<b>A</b>) and (<b>D</b>). The coherence scale in (<b>A</b>,<b>B</b>,<b>D</b>) is 0–1, whereas the coherence difference scale ranges from −1 to 1.</p>
Full article ">Figure 7
<p>Linear interpolation of the predicted and field mass using the field plots (circles) and interpreted plots (triangles) together for the (<b>A</b>) GLM and (<b>B</b>) RF models. The dashed line represents a 1:1 diagonal, whereas the solid line depicts the regression line for the distribution.</p>
Full article ">Figure 8
<p>Performance assessment through the linear interpolation of the predicted and field mass for the (<b>A</b>) GLM and (<b>B</b>) RF models. The dashed line represents a 1:1 diagonal, whereas the solid line depicts the regression line for the distribution.</p>
Full article ">Figure 9
<p>A comparison of the predictions of the residue mass (i.e., residue “heat map”) performed by (<b>A</b>) the GLM model and (<b>B</b>) the RF model.</p>
Full article ">Figure A1
<p>Importance scores for the variables used in the final RF model. The mean of squared residual computed by the model was 0.069 Mg, with 9.42% of variance explained.</p>
Full article ">Figure A2
<p>Temperature and precipitation data for the study area. The triangles are positioned at the date of acquisition of the Sentinel-1 images (to compare with <a href="#earth-05-00049-f003" class="html-fig">Figure 3</a>). Temperature and precipitation data are from the South African Weather Service.</p>
Full article ">
22 pages, 6555 KiB  
Article
Mangrove Extraction from Compact Polarimetric Synthetic Aperture Radar Images Based on Optimal Feature Combinations
by Sijing Shu, Ji Yang, Wenlong Jing, Chuanxun Yang and Jianping Wu
Forests 2024, 15(11), 2047; https://doi.org/10.3390/f15112047 - 20 Nov 2024
Viewed by 560
Abstract
As a polarimetric synthetic aperture radar (SAR) mode capable of simultaneously acquiring abundant surface information and conducting large-width observations, compact polarimetric synthetic aperture radar (CP SAR) holds great promise for mangrove dynamics monitoring. Nevertheless, there have been no studies on mangrove identification using [...] Read more.
As a polarimetric synthetic aperture radar (SAR) mode capable of simultaneously acquiring abundant surface information and conducting large-width observations, compact polarimetric synthetic aperture radar (CP SAR) holds great promise for mangrove dynamics monitoring. Nevertheless, there have been no studies on mangrove identification using CP SAR. This study aims to explore the potential of C-band CP SAR for mangrove monitoring applications, with the objective of identifying the most effective CP SAR descriptors for mangrove discrimination. A systematic comparison of 52 well-known CP features is provided, utilizing CP SAR data derived from the reconstruction of C-band Gaofen-3 quad-polarimetric data. Among all the features, Shannon entropy (SE), a random polarimetric constituent (VB), the intensity component of Shannon entropy (SEI), and the Bragg backscattering constituent (VG) exhibited the best performance. By combining these four features, we designed comparative analysis experiments with three supervised classifiers—support vector machine (SVM), maximum likelihood (ML), and artificial neural network (ANN). The results demonstrated that the optimal polarimetric feature combination not only reduced the redundancy of polarimetric feature data but also enhanced overall accuracy. The highest accuracy of mangrove extraction reached 98.04%. Among the three classifiers, SVM outperformed the other classifiers in mangrove extraction, while ML achieved the highest overall classification accuracy. Full article
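The feature screening such studies rely on can be illustrated with a simple class-separability measure. The sketch below uses the Euclidean distance between class mean vectors on synthetic samples, a simplified version of the per-feature class distances this paper reports (see its Figure 3); class names and values are illustrative, not the Gaofen-3 data.

```python
# Class separability via Euclidean distance between mean feature vectors.
import numpy as np

def class_distance(a, b):
    """Euclidean distance between the mean feature vectors of two classes."""
    return float(np.linalg.norm(a.mean(axis=0) - b.mean(axis=0)))

rng = np.random.default_rng(3)
# Rows: samples; columns: CP SAR features (e.g. SE, VB, SEI, VG)
mangrove = rng.normal(loc=2.0, size=(100, 4))
water = rng.normal(loc=0.0, size=(100, 4))
land = rng.normal(loc=1.8, size=(100, 4))

# Classes that are farther apart in feature space are easier to separate,
# so mangrove vs water should be an easier pair than mangrove vs land.
print(class_distance(mangrove, water) > class_distance(mangrove, land))
```

Ranking feature subsets by such distances is one way to arrive at a compact "optimal feature combination" before training SVM/ML/ANN classifiers, as the paper does.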
(This article belongs to the Special Issue Forest and Urban Green Space Ecosystem Services and Management)
Figure 1
<p>Study area and data images. (<b>a</b>) Geographical location of the Leizhou Peninsula; (<b>b</b>) optical satellite image; (<b>c</b>) SAR data image in HH polarimetric mode; (<b>d</b>) SAR data image in VH polarimetric mode; (<b>e</b>) SAR data image in HV polarimetric mode; (<b>f</b>) SAR data image in VV polarimetric mode.</p>
Full article ">Figure 2
<p>Optimal polarimetric feature selection flow.</p>
Full article ">Figure 3
<p>Euclidean distances between different classes in CP feature images. (<b>a</b>) denotes the Euclidean distance between mangrove and water; (<b>b</b>) denotes the Euclidean distance between mangrove and land; (<b>c</b>) denotes the Euclidean distance between mangrove and seawater; (<b>d</b>) denotes the Euclidean distance between water and land; (<b>e</b>) denotes the Euclidean distance between water and seawater; (<b>f</b>) denotes the Euclidean distance between land and seawater.</p>
Full article ">Figure 4
<p>CP feature image.</p>
Full article ">Figure 5
<p>Differences in eigenvalue responses between mangroves and other cover classes in feature images with enhanced combined performance.</p>
Full article ">Figure 6
<p>SVM classification results are based on a single polarimetric feature input. Mangroves are shown in red, water in blue, land in yellow, and seawater in blue.</p>
Full article ">Figure 7
<p>Mangrove extraction results are based on a single polarimetric feature input.</p>
Full article ">Figure 8
<p>Classification results based on optimal polarimetric feature combination input. Mangroves are in red, water in blue, land in yellow, and seawater in blue.</p>
Full article ">Figure 9
<p>Mangrove extraction results based on optimal polarimetric feature combination input.</p>
Full article ">Figure 10
<p>Comparison of mangrove extraction accuracy, OA, and Kappa coefficient values of the different classifiers and features.</p>
Full article ">Figure 11
<p>Euclidean distance and classification accuracy. (<b>a</b>) Euclidean distance and classification accuracy between mangrove and land, where O(M-L) denotes the Euclidean distance between mangrove and land in the feature image, and AM and AL denote the classification accuracy of mangrove and land, respectively. (<b>b</b>) Euclidean distance and classification accuracy between water and seawater, where O(W-S) denotes the Euclidean distance between water and seawater in the feature image, and AW and AS indicate the classification accuracy of water and seawater, respectively.</p>
Full article ">
21 pages, 23870 KiB  
Article
Utilizing LuTan-1 SAR Images to Monitor the Mining-Induced Subsidence and Comparative Analysis with Sentinel-1
by Fengqi Yang, Xianlin Shi, Keren Dai, Wenlong Zhang, Shuai Yang, Jing Han, Ningling Wen, Jin Deng, Tao Li, Yuan Yao and Rui Zhang
Remote Sens. 2024, 16(22), 4281; https://doi.org/10.3390/rs16224281 - 17 Nov 2024
Viewed by 729
Abstract
The LuTan-1 (LT-1) satellite, launched in 2022, is China’s first L-band fully polarimetric Synthetic Aperture Radar (SAR) constellation with interferometric capability. Given its limited use in subsidence monitoring to date, however, a comprehensive evaluation of LT-1’s interferometric quality and capabilities is necessary. In this study, we applied the Differential Interferometric Synthetic Aperture Radar (DInSAR) technique to LT-1 data to analyze mining-induced subsidence near Shenmu City (China), revealing nine subsidence areas with a maximum subsidence of −19.6 mm within 32 days. A comparative analysis between LT-1 and Sentinel-1 was then conducted in terms of subsidence results, interferometric phase, scattering intensity, and interferometric coherence. Notably, LT-1 detected some subsidence areas larger than those identified by Sentinel-1, owing to LT-1’s high resolution, which significantly enhances the detectability of deformation gradients. Additionally, the coherence of the LT-1 data exceeded that of Sentinel-1 because of LT-1’s longer L-band wavelength compared with Sentinel-1’s C-band. This higher coherence allowed the differential interferometric phase to be captured more accurately, particularly in areas with large-gradient subsidence. Moreover, LT-1’s monitoring results surpassed Sentinel-1’s in root mean square error (RMSE), standard deviation (SD), and signal-to-noise ratio (SNR). These findings provide valuable insights for future subsidence-monitoring tasks using LT-1 data and confirm that LT-1 is well suited for detailed, accurate subsidence monitoring in complex environments. Full article
(This article belongs to the Special Issue Advances in Remote Sensing for Land Subsidence Monitoring)
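The wavelength argument in this abstract (L-band coping better with large-gradient subsidence than C-band) follows from the DInSAR phase-to-displacement relation d_LOS = −(λ/4π)·Δφ: one interferometric fringe (2π of differential phase) corresponds to λ/2 of line-of-sight motion. A minimal sketch, with approximate wavelengths assumed for LT-1 (L-band, ~23.6 cm) and Sentinel-1 (C-band, ~5.55 cm):

```python
import math

# Approximate radar wavelengths in metres (assumed, for illustration)
WAVELENGTH_L = 0.236    # LT-1, L-band
WAVELENGTH_C = 0.0555   # Sentinel-1, C-band

def los_displacement(delta_phi_rad, wavelength_m):
    """DInSAR phase-to-displacement: d_LOS = -(lambda / (4*pi)) * delta_phi.

    Sign convention here: increasing phase means motion away from the sensor.
    """
    return -wavelength_m / (4.0 * math.pi) * delta_phi_rad

# Line-of-sight motion spanned by one full fringe (2*pi of phase) = lambda/2:
fringe_L = abs(los_displacement(2.0 * math.pi, WAVELENGTH_L))  # ~0.118 m
fringe_C = abs(los_displacement(2.0 * math.pi, WAVELENGTH_C))  # ~0.028 m
```

Because each L-band fringe spans roughly four times the motion of a C-band fringe, steep deformation gradients are less likely to alias between acquisitions, consistent with the paper's coherence findings.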
Show Figures

Figure 1: (a) Geographical location of the study area; (b) desert grass beach area; (c) open-pit mining area.
Figure 2: Technical workflow chart.
Figure 3: (a) LT-1 subsidence monitoring results in the study area; (b–d) enlarged views of typical subsidence areas; (c1,d1) typical subsidence areas identified by both LT-1 and Sentinel-1 data.
Figure 4: (a) Sentinel-1 subsidence monitoring results in the study area; (b–d) enlarged views of typical subsidence areas.
Figure 5: (a,d,g,j) Results from LT-1 data; (b,e,h,k) results from Sentinel-1 data; (c,f,i,l) Google Earth optical images. Dashed circles mark subsidence areas; A–A′ is the profile line.
Figure 6: Subsidence along the A–A′ cross-section for LT-1 and Sentinel-1.
Figure 7: (d,e) Interferometric phase maps from LT-1 and Sentinel-1, respectively; (a–c) enlarged views of LT-1 data, Sentinel-1 data, and Google optical imagery for the first typical subsidence area; (f–h) the same for the second. Dashed circles mark subsidence areas.
Figure 8: (d,e) Backscatter intensity maps from LT-1 and Sentinel-1, respectively; (a–c,f–h) enlarged views of LT-1 data, Sentinel-1 data, and Google optical imagery.
Figure 9: (d,e) Coherence maps from LT-1 and Sentinel-1, respectively; (a–c,f–h) enlarged views of LT-1, Sentinel-1, and Google optical imagery in typical subsidence areas A and B.
Figure 10: (a–c) Coherence statistics for the study area, area A, and area B.
Figure 11: LT-1 track over the study area (left) and Sentinel-1 track over the study area (right).
Figure 12: MDDG distribution for different SAR satellites as wavelength and resolution vary.
21 pages, 6345 KiB  
Article
Integration of Optical and Synthetic Aperture Radar Data with Different Synthetic Aperture Radar Image Processing Techniques and Development Stages to Improve Soybean Yield Prediction
by Isabella A. Cunha, Gustavo M. M. Baptista, Victor Hugo R. Prudente, Derlei D. Melo and Lucas R. Amaral
Agriculture 2024, 14(11), 2032; https://doi.org/10.3390/agriculture14112032 - 12 Nov 2024
Viewed by 912
Abstract
Predicting crop yield throughout its development cycle is crucial for planning storage, processing, and distribution. Optical remote sensing has been used for yield prediction but has limitations, such as cloud interference and capturing only canopy-level data. Synthetic Aperture Radar (SAR) complements optical data by acquiring information even under cloudy conditions and providing additional insights into the plants. This study explored the correlation of SAR variables with soybean yield at different crop stages, testing whether SAR data enhance predictions compared with optical data alone. Data from three growing seasons were collected over a 106-hectare area, using eight SAR variables (Alpha, Entropy, DPSVI, RFDI, Pol, RVI, VH, and VV) and four speckle noise filters. The Random Forest algorithm was applied, combining the SAR variables with the optical index EVI. Although none of the SAR variables correlated strongly with yield (r < |0.35|), predictions improved when SAR data were included. The best performance was achieved using DPSVI with the Boxcar filter, combined with EVI during the maturation stage (RMSE with EVI alone = 0.43, 0.49, and 0.60 for the three seasons; with EVI + DPSVI = 0.39, 0.49, and 0.42). Despite improving predictions, the computational demands of SAR processing must be considered, especially when optical data are limited by cloud cover. Full article
(This article belongs to the Special Issue Applications of Remote Sensing in Agricultural Soil and Crop Mapping)
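The study's core test, whether adding a SAR variable to EVI improves a Random Forest yield prediction, can be sketched on synthetic data. Everything below is illustrative: the yield relationship, value ranges, and variable names (`evi`, `dpsvi`, `yield_t_ha`) are invented, not the authors' field data.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_squared_error

# Synthetic per-point observations: yield depends on both the optical
# index (evi) and a SAR index (dpsvi), plus noise.
rng = np.random.default_rng(42)
n = 300
evi = rng.uniform(0.2, 0.9, n)
dpsvi = rng.uniform(0.0, 1.0, n)
yield_t_ha = 2.0 + 2.5 * evi + 0.8 * dpsvi + rng.normal(0.0, 0.2, n)

train, test = np.arange(200), np.arange(200, n)

def holdout_rmse(X):
    """Fit on the first 200 points, report RMSE on the held-out 100."""
    rf = RandomForestRegressor(n_estimators=200, random_state=0)
    rf.fit(X[train], yield_t_ha[train])
    return mean_squared_error(yield_t_ha[test], rf.predict(X[test])) ** 0.5

rmse_optical = holdout_rmse(evi.reshape(-1, 1))               # EVI alone
rmse_combined = holdout_rmse(np.column_stack([evi, dpsvi]))   # EVI + SAR
# Because dpsvi carries real signal here, the combined model scores
# a lower holdout RMSE than the optical-only model.
```

In the paper's setting the gain is smaller and stage-dependent, since real SAR indices correlate only weakly with yield; the sketch just shows the mechanics of the comparison.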
Show Figures

Figure 1: Experimental area with field boundaries marked in red and soybean yield data points for each harvest.
Figure 2: Temporal profiles of (a) SAR VV and VH backscatter coefficients and (b) optical EVI. Red circles mark the image dates selected on the basis of EVI.
Figure 3: SAR data workflow for obtaining (a) backscatter coefficients and (b) polarimetric decompositions.
Figure 4: Prediction scenarios, with the input data for each tested scenario in red: (a) all stages and SAR variables together; (b) stages separately with all SAR variables together; (c) the previously best-performing stage with the variables separated.
Figure 5: Spearman correlation coefficients between SAR data and soybean yield, by harvest, growth stage, speckle noise reduction filter, and SAR variable. Correlations significant at 5%.
Figure 6: R² and RMSE of predictions for each harvest individually, using all image-collection stages, with optical data only (EVI) versus optical data plus all SAR variables.
Figure 7: DPSVI index maps for distinct growth stages and soybean harvests. The area highlighted in black shows the cultivar difference in harvest 3.
Figure 8: Percentage difference in R² between models using each stage individually and all stages combined, with EVI plus the SAR variables.
Figure 9: Percentage difference in RMSE between models using each stage individually and all stages combined, with EVI plus the SAR variables.
Figure 10: R² of predictions using all growth stages, with optical data only versus optical data plus all SAR variables.
Figure 11: RMSE of predictions using all growth stages, with optical data only versus optical data plus all SAR variables.
Figure 12: R² for Stage 3 using individual SAR variables with EVI, compared with all SAR variables plus EVI and with optical data (EVI) alone.
Figure 13: Visual comparison of actual and predicted yield maps using DPSVI with EVI for Stage 3 and the Boxcar filter. Actual yield was interpolated by ordinary kriging; the error maps show the positive and negative differences between actual and predicted maps.
31 pages, 7836 KiB  
Article
Estimation of Forest Growing Stock Volume with Synthetic Aperture Radar: A Comparison of Model-Fitting Methods
by Maurizio Santoro, Oliver Cartus, Oleg Antropov and Jukka Miettinen
Remote Sens. 2024, 16(21), 4079; https://doi.org/10.3390/rs16214079 - 31 Oct 2024
Viewed by 616
Abstract
Satellite-based estimation of forest variables such as biomass relies on model-based approaches, since forest biomass cannot be measured directly from space. Such models require ground reference data to adapt to the local forest structure and the acquired satellite data. For wide-area mapping, reference data are too sparse to train the biomass retrieval model, so calibration approaches that are independent of training data are sought. In this study, we compare the performance of one such calibration approach with traditional regression modelling based on reference measurements. Performance was evaluated at four sites representative of the major forest biomes in Europe, focusing on growing stock volume (GSV) prediction from time series of C-band Sentinel-1 and Advanced Land Observing Satellite Phased Array L-band Synthetic Aperture Radar (ALOS-2 PALSAR-2) backscatter measurements. The retrieval model was based on a Water Cloud Model (WCM) and integrated two forest structural functions. The WCM, whether trained with plot inventory GSV values or calibrated with the aid of auxiliary data products, correctly reproduced the trend between SAR backscatter and GSV measurements across all sites. The WCM-predicted backscatter was within the range of measurements for a given GSV level, with average model residuals smaller than the spread of the observations. The accuracy of the GSV estimated with the calibrated WCM was close to that obtained with the trained WCM; the difference in root mean square error (RMSE) was less than 5 percentage points. This study demonstrates that biomass can be predicted without reference measurements for model training, provided that the modelling scheme is physically based and the calibration is well designed and understood. Full article
(This article belongs to the Special Issue SAR for Forest Mapping III)
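In its simplest form, the Water Cloud Model underlying this retrieval expresses forest backscatter as a ground term attenuated by the canopy plus a vegetation term, σ⁰ = σ⁰_gr·e^(−βV) + σ⁰_veg·(1 − e^(−βV)), which inverts in closed form for GSV. A minimal sketch of that round trip; note the paper's WCM additionally integrates two forest structural functions, and the parameter values below are purely illustrative, not fitted to any site:

```python
import math

def wcm_backscatter(gsv, sigma0_gr, sigma0_veg, beta):
    """Simplest-form Water Cloud Model (linear power units): ground
    contribution attenuated by the canopy, plus a vegetation term."""
    transmissivity = math.exp(-beta * gsv)   # two-way forest transmissivity
    return sigma0_gr * transmissivity + sigma0_veg * (1.0 - transmissivity)

def invert_gsv(sigma0, sigma0_gr, sigma0_veg, beta):
    """Closed-form inversion of the model above for GSV (m^3/ha)."""
    transmissivity = (sigma0 - sigma0_veg) / (sigma0_gr - sigma0_veg)
    return -math.log(transmissivity) / beta

# Round trip: forward-model a stand of 150 m^3/ha, then invert it back.
sigma0 = wcm_backscatter(150.0, sigma0_gr=0.030, sigma0_veg=0.060, beta=0.008)
gsv = invert_gsv(sigma0, sigma0_gr=0.030, sigma0_veg=0.060, beta=0.008)
# gsv recovers the input of 150 m^3/ha.
```

Training versus calibration then amounts to two different ways of fixing σ⁰_gr, σ⁰_veg, and β: regression against plot inventory GSV, or estimation from auxiliary products such as canopy density, as the abstract describes.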
Show Figures

Figure 1: Location of the study sites. Each site is illustrated with a colour composite of Sentinel-1 imagery (red: VV-polarized backscatter; green: VH-polarized backscatter; blue: difference between the VV- and VH-polarized backscatter).
Figure 2: Canopy height from ICESat-2 data averaged over sub-national units and corresponding average GSV values, together with the fit of Equation (5) after stratifying by forest biome. Estimates of the coefficients a and b in Equation (5) and the standard error of the regression appear in the upper-left corner of each panel.
Figure 3: Measured and modelled Sentinel-1 VV- and VH-polarized backscatter over the Catalonian site, stratified by local incidence angle and shown as a function of canopy density (circles: average value; vertical bars: two-sided one standard deviation). The Sentinel-1 image was acquired on 17 July 2016. The asterisks at canopy densities of 0% and 100% represent the estimates of σ⁰_gr and σ̂⁰_veg obtained by regressing Equation (1) on the observations. The diamond and cross symbols at 100% canopy density represent the estimate of σ⁰_veg obtained by compensating σ̂⁰_veg for the standard deviation of the observations and for the backscatter of dense forests, respectively (see also Figure 5).
Figure 4: Measured and modelled Sentinel-1 VV- and VH-polarized backscatter over the Finland N site as a function of canopy density. The Sentinel-1 image was acquired on 12 July 2018. Plot notations as in Figure 3.
Figure 5: Standard deviation of the VV- and VH-polarized backscatter observations per canopy density level (circles) and linear regression (solid line) for the Sentinel-1 image acquired over the Catalonian site on 17 July 2016.
Figure 6: Standard deviation of the VV- and VH-polarized backscatter observations for the Sentinel-1 image acquired over the Finland N site on 12 July 2018. Plot notations follow Figure 5.
Figure 7: Measured and modelled ALOS-2 PALSAR-2 HH- and HV-polarized backscatter over the Finland N site, stratified by local incidence angle and shown as a function of canopy density (circles: average value; vertical bars: two-sided one standard deviation). The asterisks at canopy densities of 0% and 100% represent the estimates of σ⁰_gr and σ̂⁰_veg obtained by fitting Equation (1) to the observations. The diamond and cross symbols at 100% canopy density represent the estimate of σ⁰_veg obtained by compensating σ̂⁰_veg for the standard deviation of the observations and for the backscatter of dense forests, respectively.
Figure 8: Standard deviation of the backscatter observations per canopy density level (circles) and linear regression (solid line) for the ALOS-2 PALSAR-2 HH- and HV-polarized backscatter acquired over the Finland N site.
Figure 9: Measured and modelled VV- and VH-polarized backscatter as a function of GSV for the Catalonia (left panels) and Finland N (right panels) sites, for the Sentinel-1 dataset used in Figures 3 and 4.
Figure 10: Scatter plots of the estimates of σ⁰_gr and σ⁰_veg from training (x axis) and calibration (y axis) for VV- and VH-polarized Sentinel-1 images over the Catalonia and Finland N sites. The dashed line is the identity line.
Figure 11: Measured and modelled ALOS-2 PALSAR-2 HH- and HV-polarized backscatter as a function of GSV for the Catalonia, Finland N, and Finland S sites.
Figure 12: Scatter plots of the estimates of σ⁰_gr and σ⁰_veg from training (x axis) and calibration (y axis) for HH- and HV-polarized ALOS-2 PALSAR-2 mosaics acquired between 2015 and 2020 over the Catalonia, Finland N, and Finland S sites. The dashed line is the identity line.
Figure 13: Comparison of GSV values estimated from the Sentinel-1 dataset with field inventory values for the Catalonia and Finland N sites. Crosses: individual field plots; circles: median estimated GSV in 10 m³/ha bins of reference GSV; dashed line: identity line.
Figure 14: GSV estimates from the ALOS-2 PALSAR-2 mosaic of 2018 and the mosaics of 2015–2021 compared with field inventory values for the Finland S site. Plot arrangement and notations follow Figure 13.
Figure 15: Comparison of GSV values estimated from three years of ALOS-2 PALSAR-2 mosaics with field inventory values for the Catalonia, Finland N, and Finland S sites. Plot arrangement and notations follow Figure 13.
Figure 16: Scatter plots comparing SAR-based and field-measured GSV for all study sites. Plot arrangement and notations follow Figure 13.
22 pages, 14974 KiB  
Article
Adapting CuSUM Algorithm for Site-Specific Forest Conditions to Detect Tropical Deforestation
by Anam Sabir, Unmesh Khati, Marco Lavalle and Hari Shanker Srivastava
Remote Sens. 2024, 16(20), 3871; https://doi.org/10.3390/rs16203871 - 18 Oct 2024
Viewed by 1091
Abstract
Forest degradation is a major issue in ecosystem monitoring, and taking remedial measures requires detecting, mapping, and quantifying forest losses. Synthetic Aperture Radar (SAR) time-series data have the potential to detect forest loss, but their sensitivity is influenced by the ecoregion, forest type, and site conditions. In this work, we assessed the accuracy of open-source C-band time-series data from Sentinel-1 SAR for detecting deforestation across forests in Africa, South Asia, and Southeast Asia. The statistical Cumulative Sums of Change (CuSUM) algorithm was applied to determine the point of change in the time-series data. The algorithm’s robustness was assessed for different forest site conditions, SAR polarizations, and resolutions, and under varying moisture conditions. We observed that change detection was affected by site and forest-management activities as well as by precipitation. The forest type and ecoregion affected detection performance, which varied between the co- and cross-pol backscattering components. The cross-pol channel delineated deforested regions better, with fewer spurious detections. The results for Kalimantan showed better accuracy at a 100 m spatial resolution, with a 25.1% increase in the average Kappa coefficient for the VH polarization channel compared with the 25 m spatial resolution. To avoid false detections caused by the strong influence of soil moisture in Haldwani, a seasonal analysis based on dry and wet seasons was carried out; there, the cross-pol channel again performed well, with an average Kappa coefficient of 0.85 at the 25 m spatial resolution. This work was carried out in support of the upcoming NISAR mission: the datasets were repackaged into the NISAR-like HDF5 format, and processing followed methods similar to the NISAR ATBDs. Full article
(This article belongs to the Special Issue NISAR Global Observations for Ecosystem Science and Applications)
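The CuSUM change-point idea is to accumulate deviations of the backscatter series from its overall mean; the cumulative sum peaks in magnitude at the change. A minimal single-change sketch on a synthetic VH time series (the dB levels and the clear-felling date are invented, and the paper's version adds adaptive thresholding and site-specific tuning on top of this core):

```python
import numpy as np

def cusum_change_point(series):
    """Index at which the cumulative sum of deviations from the overall
    mean peaks in magnitude: the most likely single change point."""
    s = np.cumsum(series - series.mean())
    return int(np.argmax(np.abs(s)))

# Synthetic VH backscatter (dB): a ~4 dB drop after clear-felling at step 30.
rng = np.random.default_rng(1)
series = np.concatenate([rng.normal(-14.0, 0.5, 30),
                         rng.normal(-18.0, 0.5, 20)])
cp = cusum_change_point(series)   # lands at or near index 29, the last
                                  # pre-change acquisition
```

Run per pixel over a Sentinel-1 stack, this yields a date-of-change map like those in the change-map figures; the thresholding step then decides whether the detected peak is strong enough to flag deforestation.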
Show Figures

Figure 1: Study area map showing the three forest sites covered in this study: (A) Kalimantan forests, Indonesia; (B) Haldwani forests, India; (C) forests near Libreville, Mondah. The polygons represent the ground truth (changed area) for 2017–2022 for Kalimantan and Haldwani and for 2017–2019 for Mondah.
Figure 2: Proposed workflow for change detection and validation.
Figure 3: Backscatter time-series plots for Kalimantan for urban (red), deforested (orange), and unchanged forest (green) areas at the 25 m spatial resolution.
Figure 4: Backscatter time-series plots at the 25 m spatial resolution: (A) Kalimantan, unchanged area; (B) Kalimantan, deforested area; (C) Haldwani, unchanged area; (D) Haldwani, deforested area. The red and green curves show the VV and VH backscatter, respectively; the blue vertical dotted line marks the felling date.
Figure 5: Logging of 375 ha and 830 ha forest regions in Kalimantan in 2018 and 2022. Images show the detected deforested area with the VV and VH polarizations at the 25 m and 100 m spatial resolutions for 2018 (A–D) and 2022 (E–H).
Figure 6: Logging of a 63 ha forest region in Haldwani. (a,b) PlanetScope true-colour images before and after logging; (c,d) SAR VH backscatter of the same area pre- and post-felling; (e) change maps generated from VH backscatter at the 25 m and 100 m spatial resolutions, with colours giving the date of change (YYYYMMDD) marked by the algorithm.
Figure 7: Kalimantan VH backscatter and SWI: (A) deforested and (B) unchanged areas at the 25 m resolution; (C) deforested and (D) unchanged areas at the 100 m resolution; the grey region marks the period of understory flooding. Haldwani VH backscatter and SWI: (E) deforested and (F) unchanged areas at the 25 m resolution; (G) deforested and (H) unchanged areas at the 100 m resolution; the blue dotted line marks the felling date.
Figure 8: Change maps of deforested areas in Brazil in 2020: CuSUM with adaptive thresholding (left) and Hansen's change map for 2020 (right).
Figure 9: Change maps of deforested areas in Congo in 2018: CuSUM with adaptive thresholding (left) and Hansen's change map for 2018 (right).
Figure 10: Change map for the Kalimantan forests for 2017–2022 from S-1 VH polarization, over the Environmental Systems Research Institute (ESRI) topographic base map.
Figure 11: Change map for the dry seasons for the Haldwani forests for 2017–2022 from S-1 VH polarization, over the ESRI topographic base map.
Figure 12: Change maps for the Mondah forests for 2017–2022 from S-1 VH polarization, over the ESRI topographic base map.