Remote Sens., Volume 16, Issue 23 (December-1 2024) – 273 articles

Cover Story: This research provides insight into the use of RS to study urban biodiversity. Using the Scopus database, we examined peer-reviewed articles published from 2008 to 2023, employing specific keywords to identify relevant research. Most research on urban biodiversity using RS focuses on large cities in the Northern Hemisphere, often neglecting smaller cities. This has led to a concentration of data on Mediterranean and temperate regions, with limited coverage of biomes such as boreal, desert, and tropical areas. From the extracted metadata, we also identified which RS sensors and biodiversity targets were the main focus of research. Our work provides a comprehensive overview of current methodologies, highlights areas for further investigation, and offers guidance for future remote sensing studies of urban biodiversity.
20 pages, 4121 KiB  
Article
Thermal Patterns at the Campi Flegrei Caldera Inferred from Satellite Data and Independent Component Analysis
by Francesco Mercogliano, Andrea Barone, Luca D’Auria, Raffaele Castaldo, Malvina Silvestri, Eliana Bellucci Sessa, Teresa Caputo, Daniela Stroppiana, Stefano Caliro, Carmine Minopoli, Rosario Avino and Pietro Tizzani
Remote Sens. 2024, 16(23), 4615; https://doi.org/10.3390/rs16234615 - 9 Dec 2024
Cited by 1 | Viewed by 831
Abstract
In volcanic regions, the analysis of Thermal InfraRed (TIR) satellite imagery for Land Surface Temperature (LST) retrieval is an effective technique for detecting ground thermal anomalies. This allows rapid characterization of the shallow thermal field, supporting ground surveillance networks in monitoring volcanic activity. However, surface temperature can be influenced by processes of different natures, which interact and mutually interfere, making it challenging to interpret the spatio-temporal variations in the LST parameter. In this paper, we use a workflow to detect the main thermal patterns in active volcanic areas by applying Independent Component Analysis (ICA) to satellite nighttime TIR imagery time series. We employed the proposed approach to study the surface temperature distribution at the Campi Flegrei caldera volcanic site (Naples, Southern Italy) during the 2013–2022 time interval. The results revealed the contribution of four main distinctive thermal patterns, which reflect the endogenous processes occurring at the Solfatara crater, the environmental processes affecting the Agnano plain, the unique microclimate of the Astroni crater, and the morphoclimatic aspects of the entire volcanic area.
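The core of this workflow, decomposing an LST time series into statistically independent spatial patterns, can be illustrated with scikit-learn's FastICA. The sketch below is not the authors' code: the LST stack is synthetic stand-in data, and only the choice of four components mirrors the paper's L-curve result.

```python
import numpy as np
from sklearn.decomposition import FastICA

# Synthetic stand-in for a detrended nighttime LST stack:
# (n_times, n_rows, n_cols) = monthly scenes over a small grid.
rng = np.random.default_rng(0)
n_times, n_rows, n_cols = 120, 50, 50
lst_stack = rng.normal(size=(n_times, n_rows, n_cols))

# Reshape to (samples=time, features=pixels) and decompose.
X = lst_stack.reshape(n_times, -1)
ica = FastICA(n_components=4, random_state=0)   # four ICs, as in the paper
sources = ica.fit_transform(X)                  # temporal mixing signals, (n_times, 4)
patterns = ica.components_.reshape(4, n_rows, n_cols)  # spatial IC maps
```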
Figure 1. Developed workflow. Operative flowchart used in this work to identify the main thermal patterns of an investigated area.
Figure 2. Campi Flegrei caldera. Painted relief map with the main structural features (black dashed lines) of the Campi Flegrei caldera redrawn from [65]. The blue box highlights the Area Of Interest (AOI) of this work. The geographic location map is reported in the upper right corner box.
Figure 3. LST time series analysis for the investigated area from 2013 to 2022. Temporal variations of (a) the processed LST dataset, (b) the retrieved seasonal trend, and (c) the detrended LST dataset. Grey dots indicate the temperature values at each pixel of the considered area, while blue dots are the mean values; blue continuous lines express the interpolated trend using mean values.
Figure 4. Mean LST map. Detrended mean LST map of the investigated area during the entire considered time period, 2013–2022, superimposed on the structural map redrawn from [65].
Figure 5. Results of the L-curve method. Analysis of the residuals against the number of components (black crosses); the black dashed lines indicate different slope trends in the L-curve, while the red arrow shows the point where the L-curve has its maximum curvature. The residuals (y-axis) are computed as the sum of the squares of the differences between the input dataset and the decomposed one with respect to the number of considered components.
Figure 6. Independent Component Analysis (ICA) results. Maps of the normalized spatial patterns of the four retrieved ICs superimposed on the AOI structural map redrawn from [65]. The mapped values indicate the correlation among the different subregions of the analyzed area: (a) first retrieved IC (IC1); (b) second retrieved IC (IC2); (c) third retrieved IC (IC3); and (d) fourth retrieved IC (IC4).
Figure 7. Comparison of the retrieved IC with other datasets. Comparing the retrieved IC1 thermal field (blue dots) and its best-fit fourth-order polynomial trend (blue continuous line) with (a) the ground-based temperature trend (red continuous line), (b) the seismicity probability density function (green continuous line), and (c) the cGPS-derived vertical deformation rate (orange dots) and its best-fit fourth-order polynomial trend (orange continuous line). (d) Comparison of the interpolated trend (blue continuous line) of the mean IC2 thermal field (blue dots) and the median water table level changes recorded at the Agnano plain (cyan continuous line). (e) Correlation plot between the retrieved IC4 spatial pattern and the related altitude; the orange line points out the best-fit linear regression line.
20 pages, 4062 KiB  
Article
A CNN-Based Framework for Automatic Extraction of High-Resolution River Bankfull Width
by Wenqi Li, Chendi Zhang, David Puhl, Xiao Pan, Marwan A. Hassan, Stephen Bird, Kejun Yang and Yang Zhao
Remote Sens. 2024, 16(23), 4614; https://doi.org/10.3390/rs16234614 - 9 Dec 2024
Viewed by 915
Abstract
River width is a crucial parameter that correlates with and reflects the hydrological, geomorphological, and ecological characteristics of a channel. However, width data with high spatial resolution are limited owing to the difficulty of extracting channel width under complex and variable riverine surroundings. To address this issue, we aimed to develop an automatic framework specifically for delineating river channels and measuring bankfull widths at small spatial intervals along the channel. The DeepLabV3+ Convolutional Neural Network (CNN) model was employed to accurately delineate channel boundaries, and a Voronoi diagram approach was adopted as the river width algorithm (RWA) to calculate river bankfull widths. The CNN model was trained on images across four river types and performed well, with all evaluation metrics (mIoU, Accuracy, F1-score, and Recall) higher than 0.97, corresponding to a prediction accuracy of over 97%. The RWA outperformed other existing river width calculation methods by showing lower errors. The application of the framework to the Lillooet River, Canada, demonstrated the capacity of this methodology to obtain detailed distributions of hydraulic and hydrological parameters, including flow resistance, flow energy, and sediment transport capacity, based on high-resolution channel widths. Our work highlights the significant potential of the newly developed framework for acquiring high-resolution channel width information and characterizing fluvial dynamics along river channels, which contributes to cost-effective integrated river management.
(This article belongs to the Special Issue Remote Sensing in Geomatics (Second Edition))
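The Voronoi step of the RWA — deriving a centerline and bankfull widths from bank points — can be sketched with SciPy. This is a toy illustration of the geometric idea, not the published algorithm: the two bank polylines are synthetic, and the simple interior filter stands in for the paper's vertex-filtering rules.

```python
import numpy as np
from scipy.spatial import Voronoi, cKDTree

# Toy bank polylines for a straight channel 10 m wide.
x = np.linspace(0.0, 100.0, 201)
left_bank = np.column_stack([x, np.full_like(x, 10.0)])
right_bank = np.column_stack([x, np.zeros_like(x)])
banks = np.vstack([left_bank, right_bank])

# Voronoi vertices lying inside the channel approximate the centerline
# (the medial axis between the two banks).
vor = Voronoi(banks)
v = vor.vertices
inside = (v[:, 1] > 0.0) & (v[:, 1] < 10.0) & (v[:, 0] >= 0.0) & (v[:, 0] <= 100.0)
centerline = v[inside]

# Local bankfull width is roughly twice the distance to the nearest bank point.
dist, _ = cKDTree(banks).query(centerline)
print(round((2.0 * dist).mean(), 2))  # ~10 m for this toy channel
```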
Figure 1. The architecture of the DeepLabV3+ model, adapted from [66]. The encoder module aggregates features and extracts higher semantic information, while the decoder module tries to decode the features generated by the encoder to recover the spatial information. The input image is an example of the Klinaklini River (British Columbia, Canada) acquired from GIC, UBC.
Figure 2. The workflow of the river width calculation algorithm (RWA): (a) marking the channel bank boundary; (b) resampling and smoothing bank boundary points with linear interpolation; (c) creating Voronoi polygons according to bank points; (d) filtering the Voronoi vertices out of the channel boundary; (e) applying the RWA to filter Voronoi vertices that should be excluded from the centerline; (f) finding the order of centerline points and generating the centerline; (g) smoothing the centerline based on a Gaussian-weighted moving average filter; and (h) calculating the transects based on the centerline. Panel (i) is an enlarged view of panel (h), marking the centerline and transect used for channel width calculation.
Figure 3. Location of the study reach in the Lillooet River. The bottom-left image was downloaded from Google Earth on 1 August 2024. The red triangle indicates the location of the study reach. The larger image was downloaded from USGS Earth Explorer (https://earthexplorer.usgs.gov/) on 10 August 2023. The arrow in the figure indicates the flow direction.
Figure 4. The prediction of sampled images from different subsets. The red rectangles in (d–f) indicate the locations of shadows of the vegetation. The arrow in each panel refers to the flow direction. The Ground Truth in the second column is the manually labeled result. East Creek refers to the Small Forested Stream subset, Klinaklini River to the Braided River subset, Green River to the Desert River subset, and the Jinsha River belongs to the Multi-Types of Rivers subset used to train and validate the model.
Figure 5. Validation of RWA-derived river width against the manually measured width based on air photos of the Coldwater River in 1978, 2015, and 2022.
Figure 6. (a) Location of the target reach in the Lillooet River and longitudinal distributions of (b) the Darcy–Weisbach resistance coefficient and channel width; (c) shear stress; (d) stream power; and (e) bedload transport rate. The arrow in (a) refers to the flow direction.
Figure 7. Sample of poor performance of the CNN model on one of the rivers from the Multi-Types of Rivers subset (Table 1). The red ellipse indicates the missing part of the river boundary identification. The arrow in (a) indicates the flow direction; (b) is the label of (a); (c) is the prediction result for (a).
Figure 8. (a) The influence of image resolution on the performance of river width derivation. The dashed line indicates zero error compared with the width extracted from the ground truth image. (b–e) Sample images with resolutions of 0.005 m/pixel and 0.2 m/pixel and the corresponding prediction results.
Figure 9. Prediction performance for different image tile sizes on the Desert River subset.
35 pages, 19129 KiB  
Article
Mapping Lithology with Hybrid Attention Mechanism–Long Short-Term Memory: A Hybrid Neural Network Approach Using Remote Sensing and Geophysical Data
by Michael Appiah-Twum, Wenbo Xu and Emmanuel Daanoba Sunkari
Remote Sens. 2024, 16(23), 4613; https://doi.org/10.3390/rs16234613 - 9 Dec 2024
Cited by 1 | Viewed by 1019
Abstract
Remote sensing provides an efficient roadmap for geological analysis and interpretation. However, some challenges arise when remote sensing techniques are integrated with machine learning in geological surveys. Factors including irregular spatial distribution, sample imbalance, interclass resemblances, regolith, and geochemical similarities impede geological feature diagnosis, interpretation, and identification across varied remote sensing datasets. To address these limitations, a hybrid-attention-integrated long short-term memory (LSTM) network is employed to diagnose, interpret, and identify lithological feature representations in a remote sensing-based geological analysis using multisource data fusion. The experimental design integrates varied datasets including Sentinel-2A, Landsat-9, ASTER, ALOS PALSAR DEM, and Bouguer anomaly gravity data. The proposed model incorporates a hybrid attention mechanism (HAM) comprising channel and spatial attention submodules. HAM utilizes an adaptive technique that merges global-average-pooled features with max-pooled features, enhancing the model's accuracy in identifying lithological units. Additionally, a channel separation operation is employed to allot refined channel features into clusters based on channel attention maps along the channel dimension. Comprehensive analysis of results from extensive comparative experiments demonstrates HAM-LSTM's state-of-the-art performance, outperforming existing attention modules and attention-based models (ViT, SE-LSTM, and CBAM-LSTM). Compared to the baseline LSTM, the HAM module's integrated configurations equip the proposed model to better diagnose and identify lithological units, increasing accuracy by 3.69%.
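A minimal PyTorch sketch of the hybrid channel-plus-spatial attention idea (in the spirit of HAM/CBAM) is given below. Layer sizes, the reduction ratio, and the 7×7 spatial kernel are illustrative assumptions, not the paper's configuration.

```python
import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    """Weights channels using fused global-average- and max-pooled features."""
    def __init__(self, channels, reduction=8):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(channels, channels // reduction), nn.ReLU(),
            nn.Linear(channels // reduction, channels))
    def forward(self, x):                        # x: (B, C, H, W)
        avg = self.mlp(x.mean(dim=(2, 3)))       # global average pooling
        mx = self.mlp(x.amax(dim=(2, 3)))        # global max pooling
        w = torch.sigmoid(avg + mx)[:, :, None, None]
        return x * w

class SpatialAttention(nn.Module):
    """Weights locations using a conv over stacked channel-mean/max maps."""
    def __init__(self, kernel_size=7):
        super().__init__()
        self.conv = nn.Conv2d(2, 1, kernel_size, padding=kernel_size // 2)
    def forward(self, x):
        s = torch.cat([x.mean(dim=1, keepdim=True),
                       x.amax(dim=1, keepdim=True)], dim=1)
        return x * torch.sigmoid(self.conv(s))

x = torch.randn(2, 16, 32, 32)
y = SpatialAttention()(ChannelAttention(16)(x))  # refined feature maps
```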
Figure 1. An overview of this study's workflow: the multisource data fusion technique is employed to fuse the gravity anomaly data and remote sensing data. Channel and spatial attention mechanisms are modeled to learn the spatial and spectral information of pixels in the fused data, and the resultant attention features are fed into the LSTM network for sequential iterative processing to map lithology.
Figure 2. Location of study area and regional geological setting. (a) Administrative map of Burkina Faso; (b) administrative map of Bougouriba and Ioba Provinces, within which the study area is located; (c) geological overview of Burkina Faso (modified from [44]) indicating the study area; (d) color composite image of Landsat-9 covering the study area.
Figure 3. False color composite imagery of remote sensing data used: (a) Sentinel-2A (bands 4-3-2); (b) Landsat-9 (bands 4-3-2); (c) ASTER (bands 3-2-1); and (d) 12.5 m spatial resolution high-precision ALOS PALSAR DEM.
Figure 4. Vegetation masking workflow.
Figure 5. The HAM structure. It comprises three sequential components: channel attention submodule, feature separation chamber, and spatial attention submodule. One-dimensional and two-dimensional feature maps are produced by the channel and spatial attention submodules, respectively.
Figure 6. Framework of HAM's channel attention submodule. Dimensional feature information is generated by both max-pooling and average-pooling operations. The resultant features are then fed through a one-dimensional convolution with a sigmoid activation to deduce the definitive channel feature.
Figure 7. Framework of HAM's spatial attention. Two feature clusters of partitioned refined channel features from the separation chamber are fed into the submodule. Average-pooling and max-pooling functions subsequently synthesize two pairs of 2D maps into a shared convolution layer to synthesize spatial attention maps.
Figure 8. The structural framework of the proposed HAM-LSTM model.
Figure 9. Gravity anomaly maps of the terrane used: (a) complete Bouguer anomaly; (b) residual gravity.
Figure 10. Band imagery: (a) Landsat-9 band 5; (b) Sentinel-2A band 5; (c) ASTER band 5; (d) fused image; (e–h) partial magnifications (279 × 235 pixels) of (a–d), respectively.
Figure 11. Resultant multisource fusion imagery.
Figure 12. Annotation map of the study area.
Figure 13. An illustration of the sliding window method implementation.
Figure 14. Graphs of training performance of the varied model implementations in this study: (a) accuracy and (b) loss.
Figure 15. Classification maps derived from implementing (a) HAM-LSTM, (b) CBAM-LSTM, (c) SE-LSTM, (d) ViT, and (e) LSTM on the multisource fusion dataset.
Figure 16. Confusion matrices of (a) HAM-LSTM, (b) CBAM-LSTM, (c) SE-LSTM, (d) LSTM, and (e) ViT implementations.
19 pages, 6466 KiB  
Article
Increases in Temperature and Precipitation in the Different Regions of the Tarim River Basin Between 1961 and 2021 Show Spatial and Temporal Heterogeneity
by Siqi Wang, Ailiyaer Aihaiti, Ali Mamtimin, Hajigul Sayit, Jian Peng, Yongqiang Liu, Yu Wang, Jiacheng Gao, Meiqi Song, Cong Wen, Fan Yang, Chenglong Zhou, Wen Huo and Yisilamu Wulayin
Remote Sens. 2024, 16(23), 4612; https://doi.org/10.3390/rs16234612 - 9 Dec 2024
Viewed by 709
Abstract
The Tarim River Basin (TRB) faces significant ecological challenges due to global warming, making it essential to understand the changes in the climates of its sub-basins for effective management. With this aim, data from national meteorological stations, ERA5_Land, and climate indices from 1961 to 2021 were used to analyze the temperature and precipitation variations in the TRB and its sub-basins and to assess their climate sensitivity. Our results showed that (1) the annual mean temperature increased by 0.2 °C/10a and precipitation increased by 7.1 mm/10a between 1961 and 2021. Precipitation trends varied significantly among the sub-basins, with the Aksu River Basin increasing the most (12.9 mm/10a) and the Cherchen River Basin the least (1.9 mm/10a). ERA5_Land data accurately reproduced the spatiotemporal patterns of temperature (correlation 0.92) and precipitation (correlation 0.72) in the TRB. (2) Empirical Orthogonal Function analysis identified the northern sections of the Kaidu, Weigan, and Yerqiang river basins as centers of temperature sensitivity and the western parts of the Kaidu and Cherchen River Basins as the centers of precipitation sensitivity. (3) Global warming is closely correlated with sub-basin temperature (correlation above 0.5) but weakly correlated with precipitation (correlation 0.2~0.5). TRB temperatures were found to have a positive correlation with AMO, especially in the Hotan, Kashgar, and Aksu river basins, and a negative correlation with AO and NAO, particularly in the Keriya and Hotan river basins. Correlations between precipitation and the climate indices were complex and varied across the different basins.
(This article belongs to the Section Atmospheric Remote Sensing)
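Computationally, the Empirical Orthogonal Function analysis used here is an SVD of the space-time anomaly matrix. A minimal sketch with synthetic stand-in data for an annual-mean field:

```python
import numpy as np

# Synthetic stand-in: 61 years of an annual-mean field on a 20 x 30 grid.
rng = np.random.default_rng(1)
field = rng.normal(size=(61, 20, 30))

anom = field - field.mean(axis=0)          # anomalies about the climatology
X = anom.reshape(61, -1)                   # (time, space) matrix
U, s, Vt = np.linalg.svd(X, full_matrices=False)

eofs = Vt.reshape(-1, 20, 30)              # spatial patterns (EOF1, EOF2, ...)
pcs = U * s                                # principal-component time series
var_frac = s**2 / np.sum(s**2)             # explained variance fraction
print(var_frac[:2])                        # share explained by EOF1 and EOF2
```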
Figure 1. Location map of the study area. The shaded color indicates the elevation of the TRB (m); red symbols represent the distribution of stations in the sub-basins of the TRB.
Figure 2. (a,b) Time series of the annual mean temperature and precipitation anomalies; red represents a positive anomaly, blue a negative anomaly; solid lines represent the detrended 10-year running average. (c,d) Linear trends of annual mean temperature (°C) and precipitation (mm) in the TRB from 1961 to 2021. (e,f) Spatial distribution of annual mean temperature and mean PRCPTOT in the TRB from 1961 to 2021.
Figure 3. Spatial distribution of (a,c) the first eigenvector and (b,d) the second eigenvector of the EOF analysis of annual mean temperature and PRCPTOT in the TRB from 1961 to 2021.
Figure 4. Temporal trends of (a,c) EOF1 and (b,d) EOF2 for annual mean temperature and PRCPTOT in the TRB from 1961 to 2021, where the solid line is the PC value and the dashed line is the linearly fitted trend.
Figure 5. Trends in (a) annual mean temperature and (b) precipitation in the sub-basins of the TRB from 1961 to 2021.
Figure 6. The annual mean temperature anomaly after detrending in the TRB sub-basins from 1961 to 2021, with 1981–2010 as the reference period; red represents a positive anomaly, blue a negative anomaly.
Figure 7. The annual precipitation anomaly after detrending in the TRB sub-basins from 1961 to 2021, with 1981–2010 as the reference period; red represents a positive anomaly, blue a negative anomaly.
Figure 8. (a–d) Spatial distribution of seasonal mean temperature (°C) in the sub-basins from 1961 to 2021; (f–i) spatial distribution of seasonal precipitation (mm) in the sub-basins from 1961 to 2021; (e,j) thermal maps of seasonal mean temperature (°C) and seasonal precipitation (mm) in the sub-basins.
Figure 9. Comparison of observed quantiles for (a) temperature and (b) precipitation with ERA5_Land data. The solid red line represents the 1:1 line.
Figure 10. Spatial distribution of (a) annual mean temperature and (b) annual precipitation, and (c,d) their spatial variation trends in the TRB and its sub-basins based on ERA5_Land data.
Figure 11. The correlation between the annual mean temperature (°C) and annual precipitation (mm) in the sub-basins and global warming (* represents significance p < 0.05).
Figure 12. The correlations between the 10-year running average of (a) annual mean temperature and (b) precipitation in the TRB and the 10-year running average of each climate index after detrending (* represents significance p < 0.05).
21 pages, 5316 KiB  
Article
A Weakly Supervised Multimodal Deep Learning Approach for Large-Scale Tree Classification: A Case Study in Cyprus
by Arslan Amin, Andreas Kamilaris and Savvas Karatsiolis
Remote Sens. 2024, 16(23), 4611; https://doi.org/10.3390/rs16234611 - 9 Dec 2024
Cited by 1 | Viewed by 949
Abstract
Forest ecosystems play an essential role in ecological balance, supporting biodiversity and climate change mitigation. These ecosystems are crucial not only for ecological stability but also for the local economy. Performing a tree census at a country scale via traditional methods is resource-demanding and error-prone and requires significant effort from a large number of experts. While emerging technologies such as satellite imagery and AI provide the means for achieving promising results in this task with less effort, considerable effort is still required from experts to annotate hundreds or thousands of images. This study introduces a novel methodology for a tree census classification system that leverages historical and partially labeled data, employs probabilistic data imputation and a weakly supervised training technique, and thus achieves state-of-the-art precision in classifying the dominant tree species of Cyprus. A crucial component of our methodology is a ResNet50 model that takes as input high spatial resolution satellite imagery in the visible and near-infrared bands, as well as topographical features. By applying a multimodal training approach, a classification accuracy of 90% among nine targeted tree species is achieved.
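The multimodal input described here can be sketched by widening ResNet50's first convolution to accept stacked RGB, NIR, and topographical channels. The five-channel count and the layer swap below are illustrative assumptions, not the authors' exact architecture.

```python
import torch
import torch.nn as nn
from torchvision.models import resnet50

# Nine output classes, matching the nine targeted tree species.
model = resnet50(weights=None, num_classes=9)

# Replace the stock 3-channel stem with a 5-channel one
# (e.g., RGB + NIR + an elevation/topography layer).
model.conv1 = nn.Conv2d(5, 64, kernel_size=7, stride=2, padding=3, bias=False)

batch = torch.randn(4, 5, 224, 224)   # (B, C, H, W) early-fused input
logits = model(batch)                 # (4, 9) class scores
```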
Figure 1. The proposed methodology takes advantage of multimodal information at an early processing stage by integrating RGB, NIR, and topographical features. This data fusion feeds into a DL model, resulting in a final classification.
Figure 2. The grid layer corresponding to the satellite images used in the data, overlaid on a map of Cyprus.
Figure 3. Examples of images manually removed from the dataset to facilitate the training of the model by reducing data ambiguity: (a) cloud-covered fields, (b) residential areas, and (c) empty fields.
Figure 4. Ground truth annotations of forest areas provided by the Cyprus Department of Forestry.
Figure 5. Ground truth annotations of agricultural fields provided by the Cyprus Ministry of Agriculture.
Figure 6. Distribution of landscape types in the original dataset (before data imputation).
Figure 7. Overlapping points.
Figure 8. Overlapping points removed.
Figure 9. Distribution of landscape types in the processed dataset (after data imputation).
Figure 10. The confusion matrix compares the predicted and actual classes, highlighting classification accuracy and misclassification patterns.
Figure 11. Distribution of tree classes with elevation.
Figure 12. Occurrence of tree classes with soil type.
Figure 13. Distribution of Pinus nigra and Pinus brutia with elevation.
Figure 14. Occurrence of Pinus nigra and Pinus brutia with soil type.
21 pages, 7656 KiB  
Article
Multitemporal Monitoring for Cliff Failure Potential Using Close-Range Remote Sensing Techniques at Navagio Beach, Greece
by Aliki Konsolaki, Efstratios Karantanellis, Emmanuel Vassilakis, Evelina Kotsi and Efthymios Lekkas
Remote Sens. 2024, 16(23), 4610; https://doi.org/10.3390/rs16234610 - 9 Dec 2024
Viewed by 816
Abstract
This study aims to address the challenges associated with rockfall assessment and monitoring, focusing on the coastal cliffs of "Navagio Shipwreck Beach" in Zakynthos. A complete time-series analysis was conducted using state-of-the-art methodologies, including a 2020 survey with unmanned aerial systems (UASs) and two subsequent surveys in 2023 that combined terrestrial laser scanning (TLS) and UAS techniques. Effective co-registration of the multitemporal models requires high precision and accuracy in georeferencing, achieved here through direct georeferencing, pseudo ground control points (pGCPs), and post-processing kinematics (PPK) using RINEX data from global navigation satellite system (GNSS) permanent stations. For the change detection analysis, the UAS surveys were processed with the multiscale model-to-model cloud comparison (M3C2) algorithm, while the TLS data, owing to their very high-resolution model, were used for validation. The synergy of these advanced technologies and methodologies offers a comprehensive understanding of rockfall dynamics, aiding in effective assessment and monitoring strategies for coastal cliffs prone to rockfall risk.
(This article belongs to the Special Issue Application of Remote Sensing in Coastline Monitoring)
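A heavily simplified, vertical-normal-only sketch of the M3C2 idea — averaging the points inside a projection cylinder in each epoch and differencing along the normal — is shown below with synthetic point clouds. The real algorithm estimates per-point normals and scales, and dedicated implementations exist; this only illustrates the core distance computation.

```python
import numpy as np
from scipy.spatial import cKDTree

# Two synthetic epochs: a flat, noisy surface, then the same surface
# raised by 10 cm to simulate material gain.
rng = np.random.default_rng(2)
xy = rng.uniform(0.0, 50.0, size=(20000, 2))
epoch1 = np.column_stack([xy, 0.02 * rng.normal(size=20000)])
epoch2 = epoch1 + np.array([0.0, 0.0, 0.10])

tree1, tree2 = cKDTree(epoch1[:, :2]), cKDTree(epoch2[:, :2])
core = epoch1[::100]                             # sparse core points

dists = []
for p in core:
    i1 = tree1.query_ball_point(p[:2], r=1.0)    # 1 m projection cylinder
    i2 = tree2.query_ball_point(p[:2], r=1.0)
    dists.append(epoch2[i2, 2].mean() - epoch1[i1, 2].mean())
print(round(float(np.mean(dists)), 3))           # ~0.10 m of simulated gain
```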
Figure 1. (a) Geological map of Zakynthos Island. [I] Post-orogenic sediments: (1) coastal deposits, (2) alluvial deposits, (3) terra rossa, and (4) coastal deposits, marls, and sandstones. [II] Paxoi Unit: (5) Miocene sandstones, mudstones, and marls, (6) Miocene gypsum, (7) Oligocene marly limestones, (8) Eocene marly limestones, and (9) Upper Cretaceous limestones. [III] Ionian Unit: (10) Triassic gypsum and evaporites and (11) Triassic limestones. The map also shows the faults (12) mapped by IGME [44]. (b) Top view of the study area.
Figure 2. Flowchart of the proposed methodology.
Figure 3. The UAS survey campaigns took place from the same take-off point at 180 m ASL (a), using a Phantom 4 RTK drone (b). GNSS-RTK equipment with NRTK connection capabilities was used for validating the elevation of specific points (c).
Figure 4. Methodology applied after UAS data collection. The 2020 reference dataset was processed using a PPK procedure with RINEX from the ZAK permanent station. The 2023 dataset was subsequently registered to the 2020 data using pGCPs integrated into the model.
Figure 5. Four bases were established to cover the surrounding slopes of Navagio Beach (a–d). Self-constructed targets were used for co-registering the point clouds (e).
Figure 6. The distribution of the TLS bases and the targets used for the co-registration and geolocation procedure of the four datasets. The inset shows the result of registering the four point clouds into one. The blue dots represent the four bases.
Figure 7. The resulting mesh of the area under investigation generated from the TLS dataset. The inset table shows the registration results. A list of the targets used for the model georeferencing, along with the RMS error, is also presented.
Figure 8. (a) The predominant morphological planes with average orientation, (A) 086/259 in yellow, (B) 075/336 in pink, and (C) 066/045 in blue, which led to the segmentation of the outcrop; (b) the equivalent divided segments (i, ii, and iii) based on the predominant morphological planes. The dimensions of each segment are i. 212 m × 180 m, ii. 150 m × 130 m, and iii. 150 m × 165 m (width × height).
Figure 9. Surface change results between TLS 2023 and UAS 2023, alongside the surface change histogram distribution. Three random cross-sections crossing the cliffs of the area of interest were selected and presented.
Figure 10. Positive (indicating gain) and negative (indicating loss) values are depicted in reddish and blueish, respectively, for each segment (A, B, and C), alongside the Gaussian spatial distribution of each segment with a 95% limit of detection (red and blue, respectively).
Figure 11. Map view of the Zakynthos coastal cliff showing F1, F2, and F3 fault traces as yellow dashed lines (a). Oblique views of the F1 fault in conjunction with 4 discrete antithetic faults (b), and the F2 normal fault zone with the F3 fault (c), with their planes as measured on the point cloud: 70/219 (F1), 80/205 (F2), and 75/270 (F3). The loss and gain material areas are also displayed (see also Figure 10) (d).
18 pages, 12126 KiB  
Article
Recognition of Ground Clutter in Single-Polarization Radar Based on Gated Recurrent Unit
by Jiaxin Wang, Haibo Zou, Landi Zhong and Zhiqun Hu
Remote Sens. 2024, 16(23), 4609; https://doi.org/10.3390/rs16234609 - 9 Dec 2024
Viewed by 700
Abstract
A new method is proposed for identifying ground clutter in single-polarization radar data based on the gated recurrent unit (GRU) neural network. This method requires five independent input variables related to radar reflectivity structure: the reflectivity at the current tilt, the reflectivity at the upper tilt, the reflectivity at 3.5 km, the echo top height, and the texture of reflectivity at the current tilt. The performance of the new method is compared with that of the traditional method used in the Weather Surveillance Radar-1988 Doppler (WSR-88D) system in four cases with different scenarios. The results show that the GRU method is more effective than the traditional method in capturing ground clutter, particularly in situations where ground clutter exists at two adjacent elevation angles. Furthermore, to assess the new method more comprehensively, 709 radar scans from the Nanchang radar in July 2019 and 708 scans from the Jingdezhen radar in June 2019 were collected and processed by the two methods, and the frequency map of radar reflectivity exceeding 20 dBZ was analyzed. The results indicate that the GRU method has a stronger ability than the traditional method to identify and remove ground clutter. Meanwhile, the GRU method also preserves meteorological echoes well.
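A minimal sketch of a GRU-based clutter classifier over the five reflectivity-structure inputs follows. Treating the gates along a radial as the sequence dimension, and all layer sizes, are assumptions for illustration rather than the paper's configuration.

```python
import torch
import torch.nn as nn

class ClutterGRU(nn.Module):
    """Per-gate ground-clutter probability from 5 reflectivity features."""
    def __init__(self, n_features=5, hidden=32):
        super().__init__()
        self.gru = nn.GRU(n_features, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)
    def forward(self, x):                 # x: (batch, gates, 5)
        out, _ = self.gru(x)
        return torch.sigmoid(self.head(out)).squeeze(-1)

x = torch.randn(8, 100, 5)                # 8 radials x 100 gates x 5 features
probs = ClutterGRU()(x)                   # (8, 100) clutter probabilities
```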
Figure 1. Geographical locations and surrounding terrain of the Nanchang radar station (NC) and the Jingdezhen radar station (JDZ).
Figure 2. The structure of the GRU model.
Figure 3. The structure of the ground clutter recognition model.
Figure 4. The variation in the loss function of GRU training by epoch for different batch sizes.
Figure 5. NC radar observed reflectivity field (dBZ) at (a) 0.5° elevation angle and (b) 1.5° elevation angle at 2102 UTC 1 June 2019. The processed reflectivity field (dBZ) at 0.5° elevation angle using (c) the traditional method and (d) the GRU method to remove ground clutter. Solid lines indicate the altitudes of 300 to 900 m, with intervals of 300 m. The dashed line indicates the distance of 80 km from the radar station.
Figure 6. NC radar observed reflectivity field (dBZ) at (a) 0.5° elevation angle and (b) 1.5° elevation angle at 0732 UTC 1 June 2019. The processed reflectivity field (dBZ) at 0.5° elevation angle using (c) the traditional method and (d) the GRU method to remove ground clutter. Solid lines indicate the altitudes of 300 to 900 m, with intervals of 300 m. The dashed line indicates the distance of 80 km from the radar station.
Figure 7. NC radar observed reflectivity field (dBZ) at (a) 0.5° elevation angle and (b) 1.5° elevation angle at 0205 UTC 5 July 2019. The processed reflectivity field (dBZ) at 0.5° elevation angle using (c) the traditional method and (d) the GRU method to remove ground clutter. Solid lines indicate the altitudes of 300 to 900 m, with intervals of 300 m. The dashed line indicates the distance of 80 km from the radar station.
Figure 8. JDZ radar observed reflectivity field (dBZ) at (a) 0.5° elevation angle and (b) 2.4° elevation angle at 1405 UTC 5 June 2019. The processed reflectivity field (dBZ) at 0.5° elevation angle using (c) the traditional method and (d) the GRU method to remove ground clutter. Solid lines indicate the altitudes of 300 to 1200 m, with intervals of 300 m. The dashed line indicates the distance of 40 km from the radar station.
Figure 9. The frequency maps of reflectivity exceeding 20 dBZ from the raw radar data at (a) 0.5° elevation and (b) 1.5° elevation, and the processed radar data at 0.5° elevation using (c) the traditional method and (d) the GRU method. Solid lines indicate the altitudes of 300 to 900 m, with intervals of 300 m. The radar data were detected by the NC radar from 0104 UTC 1 to 2309 UTC 31 July 2019, with 709 samples.
Figure 10. The frequency maps of reflectivity exceeding 20 dBZ from the raw radar data at (a) 0.5° elevation and (b) 2.4° elevation, and the processed radar data at 0.5° elevation using (c) the traditional method and (d) the GRU method. Solid lines indicate the altitudes of 300 to 1200 m, with intervals of 300 m. The radar data were detected by the JDZ radar from 0004 UTC 1 to 2301 UTC 30 June 2019, with 708 samples.
19 pages, 6086 KiB  
Article
Remote Sensing Estimation of CDOM for Songhua River of China: Distributions and Implications
by Pengju Feng, Kaishan Song, Zhidan Wen, Hui Tao, Xiangfei Yu and Yingxin Shang
Remote Sens. 2024, 16(23), 4608; https://doi.org/10.3390/rs16234608 - 8 Dec 2024
Viewed by 880
Abstract
Rivers are crucial pathways for transporting organic carbon from land to ocean, playing a vital role in the global carbon cycle. Dissolved organic carbon (DOC) and chromophoric dissolved organic matter (CDOM) are major components of dissolved organic matter and have significant impacts on maintaining the stability of river ecosystems and driving the global carbon cycle. In this study, in situ samples of aCDOM(355) and DOC collected along the main stream of the Songhua River were matched with Sentinel-2 imagery. Multiple linear regression and five machine learning models were used to analyze the data. Among these models, XGBoost demonstrated superior, highly stable performance on the validation set (R2 = 0.85, RMSE = 0.71 m−1). The multiple linear regression results revealed a strong correlation between CDOM and DOC (R2 = 0.73), indicating that CDOM can be used to indirectly estimate DOC concentrations. Significant seasonal variations in the CDOM distribution in the Songhua River were observed: aCDOM(355) in spring (6.23 m−1) was higher than that in summer (5.3 m−1) and autumn (4.74 m−1). The aCDOM(355) values in major urban areas along the Songhua River were generally higher than those in non-urban areas. Using the predicted DOC values and annual flow data at the sites, the annual DOC flux at Harbin was calculated to be approximately 0.2275 Tg C/yr. Additionally, the spatial variation in annual CDOM was influenced by both natural changes in the watershed and human activities. These findings are pivotal for a deeper understanding of the role of river systems in the global carbon cycle.
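The XGBoost retrieval can be sketched as a standard regression from Sentinel-2 band reflectances to aCDOM(355). Everything below — the band count, the synthetic reflectance-to-CDOM relationship, and the hyperparameters — is illustrative, not the paper's trained model.

```python
import numpy as np
from xgboost import XGBRegressor
from sklearn.model_selection import train_test_split

# Synthetic "matchup" data: 500 samples x 6 band reflectances.
rng = np.random.default_rng(3)
bands = rng.uniform(0.0, 0.3, size=(500, 6))
# Toy relationship: CDOM absorption driven by a blue/red band ratio.
a_cdom = 5.0 + 10.0 * bands[:, 0] / (bands[:, 2] + 0.01) \
         + rng.normal(0.0, 0.3, 500)

Xtr, Xte, ytr, yte = train_test_split(bands, a_cdom, random_state=0)
model = XGBRegressor(n_estimators=300, max_depth=4, learning_rate=0.05)
model.fit(Xtr, ytr)
print(round(model.score(Xte, yte), 2))   # R^2 on held-out samples
```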
Figure 1. Location of sampling points along the Songhua River. All experimental sampling points are set in the main stream of the Songhua River, excluding small headwaters and tributaries within the watershed. The sampling points are divided according to basin position as follows: "Head" represents the source of the Songhua River, "Upstream" the upper reaches of the main stream, "Midstream" the middle reaches, and "Downstream" the lower reaches.
Figure 2. Distribution of aCDOM(355) (a) and SUVA254 (b) along the Songhua River.
Figure 3. Linear relationship between DOC and aCDOM(355) in the main stream of the Songhua River.
Figure 4. Calibration and validation of different algorithm models for aCDOM(355) in the Songhua River: (a) Random Forest (RF); (b) Support Vector Regression (SVR); (c) Back Propagation Neural Network (BP); (d) Gradient Boosting Decision Tree (GBDT); and (e) eXtreme Gradient Boosting (XGBoost).
Figure 5. Spatial distribution of the annual average dissolved organic carbon (DOC) concentration in the Songhua River Basin in 2022.
Figure 6. Spatio-temporal distribution of aCDOM(355) in the main stream of the Songhua River. (Note: JMS represents the Jiamusi section of the Songhua River; HEB the Harbin section; SY the Songyuan section; and JL the Jilin section.)
Figure 7. Annual mean values of aCDOM(355) in urban and non-urban sections of the Songhua River flowing through each city.
Figure 8. Mean values of aCDOM(355) in the main stream of the Songhua River during non-glacial periods (spring, summer, autumn) and throughout the entire year (annual).
Figure 9. Contributions of various factors to CDOM in the main stream of the Songhua River (driving factors: PRE (precipitation); forest; CL (construction land); population; cropland; barren (land without vegetation); WIN (wind); PRS (pressure); and TEM (temperature)).
Figure 10. DOC fluxes at different stations of the Songhua River in different seasons in 2022.
22 pages, 14975 KiB  
Article
Estimating Water Depth of Different Waterbodies Using Deep Learning Super Resolution from HJ-2 Satellite Hyperspectral Images
by Shuangyin Zhang, Kailong Hu, Xinsheng Wang, Baocheng Zhao, Ming Liu, Changjun Gu, Jian Xu and Xuejun Cheng
Remote Sens. 2024, 16(23), 4607; https://doi.org/10.3390/rs16234607 - 8 Dec 2024
Viewed by 1032
Abstract
Hyperspectral remote sensing images offer a unique opportunity to quickly monitor water depth, but how to utilize the enriched spectral information and improve its spatial resolution remains a challenge. We proposed a water depth estimation framework that improves spatial resolution using deep learning and four inversion methods and verified the effectiveness of different super resolution and inversion methods in three waterbodies based on HJ-2 hyperspectral images. Results indicated that it is feasible to estimate water depth from HJ-2 hyperspectral images whose spatial resolution has been enhanced via super resolution methods. Deep learning improved the spatial resolution of the hyperspectral images from 48 m to 24 m with little information loss, achieving peak signal-to-noise ratio (PSNR), structural similarity (SSIM), and spectral angle mapper (SAM) values of approximately 37, 0.92, and 2.42, respectively. Among the four inversion methods, the multilayer perceptron demonstrated superior performance for the water reservoir, achieving a mean absolute error (MAE) of 1.292 m and a mean absolute percentage error (MAPE) of 22.188%. For the two rivers, the random forest model proved to be the best, with an MAE of 0.750 m and an MAPE of 10.806%. The proposed method can be used for water depth estimation of different waterbodies and can improve the spatial resolution of water depth mapping, providing refined technical support for water environment management and protection.
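One of the four inversion methods, the multilayer perceptron, can be sketched as a regression from (super-resolved) hyperspectral reflectance to depth. The band count and the exponential depth–reflectance toy relationship below are assumptions for illustration only.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.metrics import mean_absolute_error

# Synthetic samples: 400 pixels x 30 hyperspectral bands.
rng = np.random.default_rng(4)
refl = rng.uniform(0.0, 0.2, size=(400, 30))
# Toy physics: deeper water -> darker water-leaving reflectance in one band.
depth = 8.0 * np.exp(-10.0 * refl[:, 5]) + rng.normal(0.0, 0.2, 400)

mlp = MLPRegressor(hidden_layer_sizes=(64, 32), max_iter=2000, random_state=0)
mlp.fit(refl[:300], depth[:300])
mae = mean_absolute_error(depth[300:], mlp.predict(refl[300:]))
print(round(mae, 3))   # MAE, the accuracy metric reported in the paper
```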
Figure 1. Study areas and distribution of sample points. (a) Chalin-Sanjiangkou River; (b) Changtan-Cili River; (c) Shenzhen Reservoir.
Figure 2. Study data. (a) Examples of the HJ-2 A/B HSI SR dataset; (b) HSI of the Chalin-Sanjiangkou River and Changtan-Cili River areas; (c) HSI of the Shenzhen Reservoir area.
Figure 3. Unmanned ship equipped with a multibeam system and its measuring results.
Figure 4. Flow chart of HJ2-A/B hyperspectral image water depth inversion.
Figure 5. SR results of HJ2-A/B HSI images over the study area (shown in false color combination).
Figure 6. Water depth inversion results of the original LR HJ2-A/B HSIs for the three study areas.
Figure 7. Water depth inversion results of super-resolved HJ2-A/B HSIs for the Shenzhen Reservoir area.
Figure 8. Scatter plots of water depth inversion results against ground truth values based on super-resolved HSIs of different areas. (a) Chalin-Sanjiangkou River; (b) Changtan-Cili River; (c) Shenzhen Reservoir.
Figure 9. Water depth inversion accuracy based on super-resolved HSIs of different areas. (a) Chalin-Sanjiangkou River; (b) Changtan-Cili River; (c) Shenzhen Reservoir.
22 pages, 7862 KiB  
Article
Comparison Between Thermal-Image-Based and Model-Based Indices to Detect the Impact of Soil Drought on Tree Canopy Temperature in Urban Environments
by Takashi Asawa, Haruki Oshio and Yumiko Yoshino
Remote Sens. 2024, 16(23), 4606; https://doi.org/10.3390/rs16234606 - 8 Dec 2024
Viewed by 890
Abstract
This study aimed to determine whether the canopy–air temperature difference (ΔT), an existing simple normalizing index, can be used to detect an increase in canopy temperature induced by soil drought in urban parks, regardless of the unique energy balance and three-dimensional (3D) structure of urban trees. Specifically, we used a thermal infrared camera to measure the canopy temperature of Zelkova serrata trees and compared the temporal variation of ΔT to that of environmental factors, including solar radiation, wind speed, vapor pressure deficit, and soil water content. Normalization based on a 3D energy-balance model was also performed and used for comparison with ΔT. To represent the 3D structure, a 3D tree model derived from terrestrial light detection and ranging was used as the input spatial data. The temporal variation in ΔT was similar to that of the index derived using the energy-balance model, which considered the 3D structure of trees and 3D radiative transfer, with a correlation coefficient of 0.85. In conclusion, the thermal-image-based ΔT performed comparably to an index based on the 3D energy-balance model and detected the increase in canopy temperature caused by the reduction in soil water content for Z. serrata trees in an urban environment.
(This article belongs to the Section Remote Sensing in Agriculture and Vegetation)
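The index itself is simple to compute: ΔT is the canopy temperature from the thermal frames minus the concurrent air temperature. A minimal sketch with synthetic midday values:

```python
import numpy as np

# Synthetic stand-ins: 10 midday thermal frames of a 64x64 canopy region
# and the concurrent met-station air temperature.
rng = np.random.default_rng(5)
canopy_t = 30.0 + rng.normal(0.0, 0.5, size=(10, 64, 64))  # °C
air_t = np.full(10, 29.0)                                  # °C

delta_t = canopy_t.mean(axis=(1, 2)) - air_t               # per-frame ΔT
print(round(delta_t.mean(), 2))  # larger ΔT suggests reduced transpirational cooling
```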
Figure 1. Overview of the study site: (a) map and aerial photographs; (b) target trees. Aerial photographs were obtained through the Geospatial Information Authority of Japan in June 2019.
Figure 2. Schematic diagram of the study site, including the measurement points.
Figure 3. Photographs of the measurement points: (a) Point A; (b) Point B; (c) Point C.
Figure 4. Areas used to acquire (a) leaf temperature for calculating ΔT, (b) input values for numerical simulation, and (c) the normalized index α. In (a,b), the areas are indicated by white lines. In (a), a shaded portion used to detect low-quality thermal images is represented by a black square. In (c), the voxels used to calculate the mean value of the normalized index α are highlighted. A visible image of the target tree is shown in (d) for reference.
Figure 5. Schematic of the input parameters for the FLiESvox model. Input parameters are shown in blue and their sources in black. Some parameters were set for each wavelength region, i.e., ultraviolet (UV), visible (VIS), and near-infrared (NIR).
Figure 6. Temporal variation of all measured data (all times): (a) SWC and rainfall; (b) air temperature; (c) global solar radiation; (d) wind speed; (e) VPD; (f) ΔT. The SWC is the mean of the data obtained at depths of 15 and 35 cm at points far from the side ditch (Figure 3b). For ΔT, the box-and-whisker plot of the light blue line shows the data from 10:30 to 12:30, which were used for the analysis.
Figure 7. Relationship between ΔT and environmental factors: (a,e) SWC; (b,f) global solar radiation; (c,g) wind speed; (d,h) VPD. Each circle in the graph represents an individual measurement: (a–d) all data obtained between 10:30 and 12:30 local standard time (LST); (e–h) data obtained between 10:30 and 12:30 LST when the global solar radiation exceeded 800 W m⁻². The SWC is the mean of data obtained at depths of 15 cm and 35 cm at points far from the side ditch (Figure 3b). Each graph shows a correlation coefficient (R) and linear regression line (broken gray line).
Figure 8. Relationship between ΔT and SWC from 10:30 to 12:30 LST for different conditions of solar radiation, VPD, and wind speed. For solar radiation there are three classes: 0–200 W m⁻², 200–800 W m⁻², and higher; for VPD there are three classes: 0–0.2 kPa, 0.2–0.4 kPa, and higher; and for wind speed there are two classes: 0–1.5 m s⁻¹ and higher. The SWC is the mean of data obtained at depths of 15 cm and 35 cm at points far from the side ditch (Figure 3b). The dotted line represents the SWC value corresponding to the permanent wilting point. Filled circles indicate that both SWC values at the two depths are lower than the permanent wilting point. When the number of samples is greater than 5, the regression line and correlation coefficient (R) are shown.
Figure 9. Temporal variation in the mean value of measured data between 10:30 and 12:30 local standard time (LST) under conditions of global solar radiation greater than 800 W m⁻²: (a) SWC; (b) global solar radiation; (c) wind speed; (d) VPD; (e) ΔT; (f) ΔT corrected for the effect of wind speed (ΔTcor); (g) the normalized index α; (h) overlaid plots of SWC, solar radiation, wind speed, and ΔT; (i) overlaid plots of SWC, ΔT, ΔTcor, and α. In (h,i), the values normalized by the minimum and maximum values are plotted for each item.
Figure 10. (a–d) Relationship between ΔT and environmental factors, (e–h) relationship between ΔT corrected for the wind speed effect (ΔTcor) and environmental factors, and (i–l) relationship between the normalized index α and environmental factors: (a,e,i) SWC; (b,f,j) global solar radiation; (c,g,k) wind speed; and (d,h,l) VPD. The SWC is the mean of data obtained at depths of 15 and 35 cm at points distant from the side ditch (Figure 3b). Each graph shows a correlation coefficient (R), regression equation, normalized root mean squared error (E), and regression line (broken gray line).
Figure 11. Relationship between ΔT and wind speed for data obtained under conditions where it was anticipated that SWC would have no effect on the canopy temperature. Each graph shows a correlation coefficient (R), regression equation, and regression line (broken gray line).
Figure 12. Relationship between ΔT and the normalized index α. The graph shows a correlation coefficient (R) and linear regression line (gray broken line).
Figure 13. Temporal variation in the maximum ΔT (ΔTmax) between 10:30 and 12:30.
20 pages, 9410 KiB  
Article
Evolution Characteristics and Risk Assessment on Nonpoint Source Pollution in the Weihe River Basin, China
by Jiqiang Lyu, Haihao Zhang, Yuanjia Huang, Chunyu Bai, Yuhao Yang, Junlin Shi, Zhizhou Yang, Yan Wang, Zhaohui Zhou, Pingping Luo, Meng Jiao and Aidi Huo
Remote Sens. 2024, 16(23), 4605; https://doi.org/10.3390/rs16234605 - 8 Dec 2024
Viewed by 741
Abstract
Temporal and spatial changes in non-point source pollution, driven by significant alterations in land use due to increased human activity, have considerably affected the quality of groundwater, surface water, and soil environments in the region. This study examines the Weihe River Basin, an area heavily impacted by human activity, in greater detail. The study developed the River Section Potential Pollution Index (R-PPI) model from the Potential Non-Point Source Pollution Index (PNPI) model in order to investigate the dynamic changes in River Section Potential Pollution (R-PP) over a 31-year period and the associated risks, especially those related to land use and land cover change (LUCC). The predominant land uses in the Weihe River Basin are cropland, grassland, and forest, together making up around 97% of the basin's total area. The Weihe River Basin underwent a number of soil and water conservation initiatives between 1990 and 2020, which significantly decreased the potential pollution risk of its river segments. The research separated the R-PP risk values in the area into five categories using quantile classification. According to the data, R-PP risk in the area is polarized, with downstream parts in particular showing an increased risk of pollution in river segments impacted by human activity. In contrast, river segments in the middle and upper reaches of the basin showed a discernible decline in potential pollution risk over the study period. The increase in R-PP risk is attributable to the basin's rapid urbanization and land degradation. This study highlights the substantial influence of LUCC on the dynamic variations in R-PP risk in the Weihe River Basin and offers crucial information for upcoming conservation initiatives and urban planning guidelines intended to enhance the area's ecological well-being and environmental standards.
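The quantile classification step — splitting river-segment R-PPI scores into five risk levels — is straightforward to reproduce; the scores below are synthetic stand-ins for the model output.

```python
import numpy as np

# Synthetic per-segment R-PPI scores for 1000 river segments.
rng = np.random.default_rng(6)
rppi = rng.lognormal(mean=0.0, sigma=0.8, size=1000)

edges = np.quantile(rppi, [0.2, 0.4, 0.6, 0.8])   # quintile break points
risk_level = np.digitize(rppi, edges) + 1         # levels 1 (low) .. 5 (high)
for lvl in range(1, 6):
    print(lvl, int(np.sum(risk_level == lvl)))    # ~200 segments per level
```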
Figure 1: Location of the Weihe River Basin.
Figure 2: R-PPI model structure.
Figure 3: Soil-type map of the Weihe River Basin.
Figure 4: Proportional area of different land uses in the Weihe River Basin from 1990 to 2020.
Figure 5: Land uses in the representative years: (A) 1990; (B) 2000; (C) 2010; (D) 2020.
Figure 6: Land use conversion in the Weihe River Basin from 1990 to 2020.
Figure 7: The percentage length of different R-PP risk levels in the Weihe River from 1990 to 2020.
Figure 8: Correlation tests between R-PPI data and the water quality monitoring sites' measured data in the Weihe River from 2006 to 2017. Letters 1 to 4 in the red and blue bars: actual number of occurrences.
Figure 9: The transition of R-PPI risk levels and land use in the Weihe River Basin from 1990 to 2020. Corresponding magnifications of the black boxed areas are shown in the upper and bottom panels (1–8).
16 pages, 3157 KiB  
Article
Differential Study on Estimation Models for Indica Rice Leaf SPAD Value and Nitrogen Concentration Based on Hyperspectral Monitoring
by Yufen Zhang, Kaiming Liang, Feifei Zhu, Xuhua Zhong, Zhanhua Lu, Yibo Chen, Junfeng Pan, Chusheng Lu, Jichuan Huang, Qunhuan Ye, Yuanhong Yin, Yiping Peng, Zhaowen Mo and Youqiang Fu
Remote Sens. 2024, 16(23), 4604; https://doi.org/10.3390/rs16234604 - 8 Dec 2024
Viewed by 706
Abstract
Soil and plant analyzer development (SPAD) value and leaf nitrogen concentration (LNC) based on dry weight are important indicators affecting rice yield and quality. However, there are few reports on the use of machine learning algorithms based on hyperspectral monitoring to synchronously predict the SPAD value and LNC of indica rice. Meixiangzhan No. 2, a high-quality indica rice, was grown at different nitrogen rates. A hyperspectral device integrating a handheld leaf-clip spectrometer and an internal quartz-halogen light source was used to monitor the spectral reflectance of leaves at different growth stages. Linear regression (LR), random forest (RF), support vector regression (SVR), and gradient boosting regression trees (GBRT) were employed to construct models. Results indicated that the sensitive bands for SPAD value and LNC were 350–730 nm and 486–727 nm, respectively. The normalized difference spectral indices NDSI (R497, R654) and NDSI (R729, R730) had the strongest correlations with leaf SPAD value (R = 0.97) and LNC (R = −0.90), respectively. Models constructed via RF and GBRT were markedly superior to those built via LR and SVR. For the prediction of leaf SPAD value and LNC, the model constructed with the RF algorithm based on spectral reflectance over the whole growth period performed best, with R2 values of 0.99 and 0.98 and NRMSE values of 2.99% and 4.61%, respectively. In validation, R2 values of 0.98 and 0.83 and NRMSE values of 4.88% and 12.16% were obtained for leaf SPAD value and LNC, respectively. The results indicate that there are significant spectral differences associated with SPAD value and LNC, and that the model built with RF had the highest accuracy and stability. These findings can provide a scientific basis for non-destructive real-time monitoring of leaf color and precise fertilization management of indica rice. Full article
(This article belongs to the Special Issue Remote Sensing for Crop Nutrients and Related Traits)
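The NDSI used in such studies conventionally takes the normalized-difference form NDSI(Rλ1, Rλ2) = (Rλ1 − Rλ2) / (Rλ1 + Rλ2). A minimal sketch of computing the index and its correlation with SPAD readings, where the reflectance and SPAD values are synthetic stand-ins for the study's measurements:

```python
import numpy as np

def ndsi(r1, r2):
    """Normalized difference spectral index of two reflectance bands."""
    r1, r2 = np.asarray(r1, float), np.asarray(r2, float)
    return (r1 - r2) / (r1 + r2)

rng = np.random.default_rng(1)
# Synthetic leaf reflectance at 497 nm and 654 nm for 30 samples
r497 = rng.uniform(0.04, 0.10, 30)
r654 = rng.uniform(0.03, 0.09, 30)
index = ndsi(r497, r654)

# Synthetic SPAD readings loosely tied to the index, for illustration only
spad = 40.0 + 60.0 * index + rng.normal(0.0, 1.0, 30)
print(f"R = {np.corrcoef(index, spad)[0, 1]:.2f}")
```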
Figure 1: Technology roadmap. PI, HD, MF, and MA represent the panicle initiation stage, heading stage, middle stage of grain filling, and mature stage, respectively. NDSI represents the normalized difference spectral index. LR, RF, SVR, and GBRT represent linear regression, random forest, support vector regression, and gradient boosting regression trees, respectively.
Figure 2: Effects of different nitrogen fertilizer levels on the leaf SPAD value and leaf nitrogen concentration (LNC) of rice at various growth stages. (A) Leaf SPAD value; (B) leaf nitrogen concentration. PI denotes the panicle initiation stage of rice, HD the heading stage, MF the middle stage of grain filling, and MA the mature stage. N0, N1, N2, N3, and N4 indicate treatments of 0, 0.05, 0.1, 0.2, and 0.4 g N/kg dry soil basis, respectively. Lowercase letters for the same growth stage under different nitrogen fertilizer levels indicate significant differences at the p < 0.05 level.
Figure 3: Linear regression of leaf SPAD value with leaf nitrogen concentration (LNC) at the PI (A), HD (B), MF (C), and MA (D) stages, respectively. R2 indicates the correlation coefficient between leaf SPAD and leaf nitrogen concentration. ** Indicates a significant difference at the p < 0.01 level.
Figure 4: Spectral reflectance of rice leaves at the PI (A), HD (B), MF (C), and MA (D) stages. N0, N1, N2, N3, and N4 represent treatments of 0, 0.05, 0.1, 0.2, and 0.4 g N/kg dry soil basis, respectively.
Figure 5: Correlation coefficients between leaf SPAD value and leaf nitrogen concentration (LNC) and spectral reflectance at various growth stages. The horizontal lines in the figure represent a correlation coefficient of −0.6. (A–D) Correlation analysis between leaf SPAD value and spectral reflectance; (E–H) correlation analysis between leaf nitrogen concentration and spectral reflectance, where (A,E) represent the PI stage; (B,F) the HD stage; (C,G) the MF stage; and (D,H) the MA stage.
Figure 6: Correlation coefficients between spectral reflectances and randomly constructed NDSI in four key growth stages and leaf SPAD value and leaf nitrogen concentration (LNC). (A,C) Correlation analyses between the spectral reflectances and the leaf SPAD value and leaf nitrogen concentration over the whole growth period, respectively. The horizontal lines in the figure represent a correlation coefficient of −0.6. (B,D) Correlation analyses between the randomly constructed NDSI values and the leaf SPAD value and leaf nitrogen concentration, respectively.
Figure 7: Prediction of leaf SPAD value on the basis of the reflectance throughout the whole growth period and NDSI (R497, R654). (A–D) Predictions of leaf SPAD value by models based on reflectance throughout the whole growth period; (E–H) predictions of leaf SPAD value by models based on the NDSI. (A,E) LR models; (B,F) RF models; (C,G) SVR models; (D,H) GBRT models.
Figure 8: Prediction of leaf nitrogen concentration based on the reflectance throughout the whole growth period and NDSI (R729, R730) models. (A–D) Predictions of leaf nitrogen concentration by models based on reflectance throughout the whole growth period; (E–H) predictions of leaf nitrogen concentration by models based on the NDSI. (A,E) LR models; (B,F) RF models; (C,G) SVR models; (D,H) GBRT models.
23 pages, 7313 KiB  
Article
Shallow Water Bathymetry Inversion Based on Machine Learning Using ICESat-2 and Sentinel-2 Data
by Mengying Ye, Changbao Yang, Xuqing Zhang, Sixu Li, Xiaoran Peng, Yuyang Li and Tianyi Chen
Remote Sens. 2024, 16(23), 4603; https://doi.org/10.3390/rs16234603 - 7 Dec 2024
Viewed by 1446
Abstract
Shallow water bathymetry is essential for maritime navigation, environmental monitoring, and coastal management. While traditional methods such as sonar and airborne LiDAR provide high accuracy, their high cost and time-consuming nature limit their application in remote and sensitive areas. Satellite remote sensing offers a cost-effective and rapid alternative for large-scale bathymetric inversion, but it still relies on significant in situ data to establish a mapping relationship between spectral data and water depth. The ICESat-2 satellite, with its photon-counting LiDAR, presents a promising solution for acquiring bathymetric data in shallow coastal regions. This study proposes a rapid bathymetric inversion method based on ICESat-2 and Sentinel-2 data, integrating spectral information, the Forel-Ule Index (FUI) for water color, and spatial location data (normalized X and Y coordinates and polar coordinates). An automated script for extracting bathymetric photons in shallow water regions is provided, aiming to facilitate the use of ICESat-2 data by researchers. Multiple machine learning models were applied to invert bathymetry in the Dongsha Islands, and their performance was compared. The results show that the XG-CID and RF-CID models achieved the highest inversion accuracies, 93% and 94%, respectively, with the XG-CID model performing best in the range from −10 m to 0 m and the RF-CID model excelling in the range from −15 m to −10 m. Full article
(This article belongs to the Special Issue Artificial Intelligence for Ocean Remote Sensing)
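Among the compared baselines, Stumpf-BG and Stumpf-BR denote the classical Stumpf log-ratio model, in which depth is a linear function of the log ratio of two band reflectances. A sketch of calibrating such a model against reference depths (all values synthetic; the scaling constant n is a tunable assumption):

```python
import numpy as np

def stumpf_ratio(r_num, r_den, n=1000.0):
    """Log-ratio predictor of the Stumpf model: ln(n*R_i) / ln(n*R_j)."""
    return (np.log(n * np.asarray(r_num, float))
            / np.log(n * np.asarray(r_den, float)))

def calibrate_stumpf(r_num, r_den, depth_ref, n=1000.0):
    """Least-squares fit of depth = m1 * ratio - m0 to reference depths."""
    ratio = stumpf_ratio(r_num, r_den, n)
    A = np.column_stack([ratio, -np.ones_like(ratio)])
    m1, m0 = np.linalg.lstsq(A, np.asarray(depth_ref, float), rcond=None)[0]
    return m1, m0

# Synthetic blue/green reflectances with ICESat-2-like reference depths
rng = np.random.default_rng(2)
blue = rng.uniform(0.02, 0.12, 50)
green = rng.uniform(0.02, 0.12, 50)
depth_ref = 30.0 * stumpf_ratio(blue, green) - 28.0 + rng.normal(0, 0.3, 50)
m1, m0 = calibrate_stumpf(blue, green, depth_ref)
depth_pred = m1 * stumpf_ratio(blue, green) - m0
print(round(m1, 2), round(m0, 2))
```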
Figure 1: Map of the study area. (a) Location of the study area. (b) Sentinel-2 image map of the Dongsha Islands. (c) Sentinel-2 image map of Lingshui-Sanya Bay; the red dots are the actual measurement points of water depth.
Figure 2: The technical flowchart of this study. The blue dashed box illustrates the key steps in the ICESat-2 bathymetric photon extraction process.
Figure 3: Noise and land photons were filtered using the YAPC algorithm. The right panel shows a zoomed-in view of the rectangular area in the image. (a) Photon signal density confidence map based on the YAPC algorithm, with red areas indicating high-confidence regions. (b) Water signal estimation map, where the red dots represent valid photon signals from the water surface and below.
Figure 4: Reference lines for filtering noise and land photons. (a) The Otsu threshold method is used to automatically determine the minimum threshold for valid photon signals, with the red vertical line representing the threshold line for valid photon signals. (b) The estimated water surface height is obtained based on the histogram statistics of valid photon signals along the track, with the vertical black line indicating the estimated water surface height.
Figure 5: Water depth map after refraction correction. The gray points represent uncorrected photon data, while the black points indicate refraction-corrected photon data. The blue points denote estimated water surface photons, and the red line represents the estimated seafloor.
Figure 6: ICESat-2 bathymetric photon extraction results. (a) Lingshui-Sanya Bay. (b) Dongsha Islands.
Figure 7: Difference distribution map showing the distribution of depth differences between the measured points and the nearest ICESat-2 bathymetry points. The X-axis represents the sequence number of the measured points.
Figure 8: Inversion results based on four models for the Dongsha Islands, where '-Bands' denotes bathymetric images inverted using spectral characteristic information and '-CID' denotes bathymetric images inverted using comprehensive information. (a) Random Forest-Bands. (b) Gradient Boosting-Bands. (c) Polynomial Regression-Bands. (d) XGBoost-Bands. (e) Random Forest-CID. (f) Gradient Boosting-CID. (g) Polynomial Regression-CID. (h) XGBoost-CID. (i) Stumpf-BG. (j) Stumpf-BR. (k) Forel-Ule Index.
Figure 9: Scatter plots, residual plots, and deviation distributions of predicted bathymetry versus ICESat-2 bathymetry values. (a) Random Forest-Bands. (b) Gradient Boosting-Bands. (c) Polynomial Regression-Bands. (d) XGBoost-Bands. (e) Random Forest-CID. (f) Gradient Boosting-CID. (g) Polynomial Regression-CID. (h) XGBoost-CID. (i) Stumpf-BG. (j) Stumpf-BR.
Figure 10: Bar charts of performance evaluation metrics for each model across different depth ranges.
Figure 11: SHAP analysis of feature contributions across depth intervals. The leftmost plot in each group represents the overall analysis, covering feature contributions across all depth intervals, while the remaining plots correspond to different depth intervals. (a) Random Forest-CID. (b) Gradient Boosting-CID. (c) XGBoost-CID.
15 pages, 4588 KiB  
Technical Note
Local Pyramid Vision Transformer: Millimeter-Wave Radar Gesture Recognition Based on Transformer with Integrated Local and Global Awareness
by Zhaocheng Wang, Guangxuan Hu, Shuo Zhao, Ruonan Wang, Hailong Kang and Feng Luo
Remote Sens. 2024, 16(23), 4602; https://doi.org/10.3390/rs16234602 - 7 Dec 2024
Viewed by 904
Abstract
Millimeter-wave radar is widely accepted by the public due to its low susceptibility to interference, such as changes in lighting, and its protection of personal privacy. With the development of deep learning theory, deep learning methods have become dominant in the millimeter-wave radar field, usually using convolutional neural networks for feature extraction. In recent years, transformer networks have also been highly valued by researchers for their parallel processing capabilities and long-distance dependency modeling capabilities. However, traditional convolutional neural networks (CNNs) and vision transformers each have limitations: CNNs tend to overlook the global features of images, vision transformers may neglect local image continuity, and both shortcomings can impede gesture recognition performance. In addition, the implementation of both CNNs and transformers is hindered by the scarcity of public radar gesture datasets. To address these limitations, this paper proposes a new recognition method using a local pyramid vision transformer (LPVT) based on millimeter-wave radar. LPVT can capture global and local features in dynamic gesture spectrograms, ultimately improving gesture recognition ability. In this paper, we carried out two main tasks: building the corresponding dataset and performing gesture recognition. First, we constructed a gesture dataset for training. In this stage, we used a 77 GHz radar to collect the echo signals of gestures and preprocessed them to build the dataset. Second, we propose the LPVT network, specifically designed for gesture recognition tasks. By integrating local sensing into the globally focused transformer, we improve its capacity to capture both global and local features in dynamic gesture spectrograms. Experimental results on the constructed dataset show that the proposed LPVT network achieved a gesture recognition accuracy of 92.2%, exceeding the performance of other networks. Full article
(This article belongs to the Section AI Remote Sensing)
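The preprocessing that turns radar echoes into micro-Doppler time spectrograms is not spelled out in the abstract, but a typical short-time Fourier transform (STFT) pipeline looks like the sketch below; the simulated echo and all parameters are illustrative assumptions, not the paper's processing chain:

```python
import numpy as np
from scipy.signal import stft

def micro_doppler_spectrogram(slow_time, prf, nperseg=64, noverlap=48):
    """Micro-Doppler time-frequency map for one range bin.

    slow_time: complex samples across chirps at a fixed range bin.
    prf: chirp repetition frequency (Hz), i.e., the slow-time sample rate.
    """
    f, t, Z = stft(slow_time, fs=prf, nperseg=nperseg,
                   noverlap=noverlap, return_onesided=False)
    spec_db = 20.0 * np.log10(np.abs(np.fft.fftshift(Z, axes=0)) + 1e-12)
    return np.fft.fftshift(f), t, spec_db

# Toy echo: a scatterer whose Doppler shift oscillates at 1.5 Hz
prf = 1000.0
t = np.arange(2048) / prf
inst_doppler = 200.0 * np.sin(2 * np.pi * 1.5 * t)   # Hz
phase = 2 * np.pi * np.cumsum(inst_doppler) / prf    # rad
echo = np.exp(1j * phase)
f, frames, spec = micro_doppler_spectrogram(echo, prf)
print(spec.shape)  # (Doppler bins, time frames)
```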
Figure 1: Gesture recognition structure diagram.
Figure 2: Gesture data collection scene.
Figure 3: AWR1642 millimeter-wave radar.
Figure 4: Signal preprocessing flowchart.
Figure 5: Examples of micro-Doppler time spectrograms. (a) Spectrogram of wave. (b) Spectrogram of pinch. (c) Spectrogram of click. (d) Spectrogram of swipe. (e) Spectrogram of circle. (f) Spectrogram of clap.
Figure 6: LPVT network structural diagram.
Figure 7: LPVT-encoder structural diagram.
Figure 8: Comparison of the local receptive field in CNNs and the global attention mechanism in transformers.
Figure 9: The proposed method that integrates the global attention mechanism and local receptive field.
Figure 10: Local feedforward network structure diagram.
Figure 11: The confusion matrix of the six gesture classes.
Figure 12: The confusion matrix of the six gesture classes without DropKey.
27 pages, 15438 KiB  
Article
Three-Dimensional Pulsed-Laser Imaging via Compressed Sensing Reconstruction Based on Proximal Momentum-Gradient Descent
by Han Gao, Guifeng Zhang, Min Huang, Yanbing Xu, Yucheng Zheng, Shuai Yuan and Huan Li
Remote Sens. 2024, 16(23), 4601; https://doi.org/10.3390/rs16234601 - 7 Dec 2024
Viewed by 737
Abstract
Compressed sensing (CS) is a promising approach to enhancing the spatial resolution of images obtained from few-pixel array sensors in three-dimensional (3D) laser imaging scenarios. However, traditional CS-based methods suffer from insufficient range resolutions and poor reconstruction quality at low CS sampling ratios. To solve the CS reconstruction problem under the time-of-flight (TOF)-based pulsed-laser imaging framework, a CS algorithm based on proximal momentum-gradient descent (PMGD) is proposed in this paper. To improve the accuracy of the range and intensity reconstructed from overlapping samples, the PMGD framework is developed by introducing an extra fidelity term based on a pulse shaping method, in which the reconstructed echo signal obtained from each sensor pixel can be refined during the iterative reconstruction process. Additionally, noise level estimation with the fast Johnson–Lindenstrauss transform is adopted, enabling the integration of a denoising neural network into PMGD to further enhance reconstruction accuracy. The simulation results obtained on real datasets demonstrate that the proposed method can yield more accurate reconstructions and significant improvements over the recently developed CS-based approaches. Full article
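The core of a proximal momentum-gradient iteration can be sketched generically: the version below solves a plain L1-regularized least-squares problem with FISTA-style momentum and omits the paper's extra pulse-shaping fidelity term and plug-in denoiser, so it is an illustrative skeleton rather than the authors' PMGD:

```python
import numpy as np

def soft_threshold(x, tau):
    """Proximal operator of the L1 norm."""
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

def proximal_momentum_gd(A, y, lam=0.05, n_iter=300):
    """Momentum-accelerated proximal gradient descent for
    min_x 0.5*||Ax - y||^2 + lam*||x||_1."""
    step = 1.0 / np.linalg.norm(A, 2) ** 2   # 1/L, L = Lipschitz constant
    x = np.zeros(A.shape[1])
    z, t_k = x.copy(), 1.0
    for _ in range(n_iter):
        grad = A.T @ (A @ z - y)                             # gradient step
        x_new = soft_threshold(z - step * grad, step * lam)  # proximal step
        t_new = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * t_k ** 2))
        z = x_new + ((t_k - 1.0) / t_new) * (x_new - x)      # momentum step
        x, t_k = x_new, t_new
    return x

# Synthetic sparse recovery demo
rng = np.random.default_rng(0)
A = rng.normal(size=(64, 256)) / 8.0
x_true = np.zeros(256)
x_true[rng.choice(256, 8, replace=False)] = 1.0
y = A @ x_true + 0.01 * rng.normal(size=64)
x_hat = proximal_momentum_gd(A, y)
print(round(float(np.linalg.norm(x_hat - x_true)), 3))
```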
Figure 1: Architecture of the proposed pulse shaping method.
Figure 2: Architecture of the proposed 3D CS reconstruction method.
Figure 3: Original scenes contained in the indoor datasets. (a) Bunny. (b) RGB-D Scenes 9. (c) RGB-D Scenes 14. (d) 7-Scenes Chess. (e) 7-Scenes Pumpkin.
Figure 4: Original scenes contained in the remote sensing datasets. (a) ASTER+Landsat-8 scene 1. (b) ASTER+Landsat-8 scene 2. (c) ALOS+WV scene 1. (d) ALOS+WV scene 2.
Figure 5: Three-dimensional views of the reconstructed images produced for the indoor data (CS ratio = 0.125). From top to bottom: original image, TVAL3-TSVD, MH-BCS-SPL, GAP-TV-acc, LDAMP, GAM-RM-F, and PMGD-PSM (proposed). In each subfigure, the x, y, and z axes represent the horizontal pixel number, the vertical pixel number, and the height, respectively.
Figure 6: Three-dimensional views of the reconstructed images produced for the remote sensing data (CS ratio = 0.125). From top to bottom: original image, TVAL3-TSVD, MH-BCS-SPL, GAP-TV-acc, LDAMP, GAM-RM-F, and PMGD-PSM (proposed). In each subfigure, the x, y, and z axes represent the horizontal pixel number, the vertical pixel number, and the height, respectively.
Figure 7: Two-dimensional images of the reconstructed columns produced for ASTER+Landsat-8 Scene 1 (CS ratio = 0.125). From top to bottom: 2D images of the 14th, 42nd, 70th, 98th, and 126th columns. (a) TVAL3-TSVD; (b) MH-BCS-SPL; (c) GAP-TV; (d) LDAMP; (e) GAM-RM-F; (f) PMGD-PSM (proposed).
Figure 8: PSNR results obtained for ASTER+Landsat-8 Scene 1 (CS sampling ratio = 0.3) with different pulse widths. (a) Range profiles; (b) intensity images.
Figure 9: PSNR results obtained for ASTER+Landsat-8 Scene 1 (CS sampling ratio = 0.3) with different A/D sampling rates. (a) Range profiles; (b) intensity images.
Figure 10: PSNR results obtained for ASTER+Landsat-8 Scene 1 (CS ratio = 0.3) with white Gaussian noise. (a) Range profiles; (b) intensity images.
23 pages, 14898 KiB  
Article
Methods for the Construction and Editing of an Efficient Control Network for the Photogrammetric Processing of Massive Planetary Remote Sensing Images
by Xin Ma, Chun Liu, Xun Geng, Sifen Wang, Tao Li, Jin Wang, Pengying Liu, Jiujiang Zhang, Qiudong Wang, Yuying Wang, Yinhui Wang and Zhen Peng
Remote Sens. 2024, 16(23), 4600; https://doi.org/10.3390/rs16234600 - 7 Dec 2024
Viewed by 585
Abstract
Planetary photogrammetry remains an important technical means of producing high-precision planetary maps. High-quality control networks are fundamental to successful bundle adjustment. However, the software tools currently used by the planetary mapping community to construct and edit control networks are highly inefficient, and redundant and invalid control points in the control network further increase the time required for bundle adjustment. Due to a lack of targeted algorithm optimization, existing software tools and methods cannot meet the photogrammetric processing requirements of massive planetary remote sensing images. To address these issues, we first proposed an efficient control network construction framework based on approximate orthoimage matching and fast hash-based search. Next, to reduce the redundant control points in the control network and decrease the computation time required for bundle adjustment, we proposed a control network-thinning algorithm based on a fast K-D tree search. Finally, we developed an automatic detection method based on ray tracing for identifying invalid control points in the control network. To validate the proposed methods, we conducted photogrammetric processing experiments using both Lunar Reconnaissance Orbiter (LRO) narrow-angle camera (NAC) images and Origins Spectral Interpretation Resource Identification Security Regolith Explorer (OSIRIS-REx) PolyCam images, and compared the results with those derived from the well-known open-source planetary photogrammetry software, the United States Geological Survey (USGS) Integrated Software for Imagers and Spectrometers (ISIS), version 8.0.0. The experimental results demonstrate that the proposed methods significantly improve the efficiency and quality of constructing control networks for large-scale planetary images. For thousands of planetary images, we were able to speed up the generation and editing of the control network by more than two orders of magnitude. Full article
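The thinning step lends itself to a K-D tree radius search. A greedy sketch with SciPy, where the minimum-separation criterion is an assumption (the paper's exact thinning rule may differ):

```python
import numpy as np
from scipy.spatial import cKDTree

def thin_control_points(xyz, min_separation):
    """Greedy thinning: keep a control point only if no previously kept
    point lies within min_separation of it."""
    tree = cKDTree(xyz)
    keep = np.ones(len(xyz), dtype=bool)
    for i in range(len(xyz)):
        if not keep[i]:
            continue
        # Suppress all later points within the separation radius
        for j in tree.query_ball_point(xyz[i], r=min_separation):
            if j > i:
                keep[j] = False
    return np.flatnonzero(keep)

# Hypothetical ground coordinates (meters) of a dense control network
pts = np.random.default_rng(3).uniform(0, 1000, size=(5000, 3))
kept = thin_control_points(pts, min_separation=25.0)
print(len(kept), "of", len(pts), "control points retained")
```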
Figure 1: The overall process of constructing the control network.
Figure 2: Schematic diagram of the control network construction process after image matching to obtain control points. Since the three control points shown in the left part of the figure are corresponding points, they need to be merged into a single control point.
Figure 3: Flowchart of control network construction using the exhaustive search method.
Figure 4: Hash key generation process (a) and example of a hash table (b). The hash key represents the ID of the control measure. If the hash values of two control measures are the same, they belong to the same control point.
Figure 5: Flowchart of control network construction based on hash processing.
Figure 6: Workflow of the K-D tree thinning algorithm (having acquired 3D ground coordinates).
Figure 7: Diagram of invalid control points. (a) When the image's EO parameters are inaccurate, control points near the edge of the valid texture region cannot intersect with the celestial body (the line segments represent the calculated light rays). (b) Illustration of two control measures on asteroid Bennu's surface that cannot intersect the celestial body; the red cross marks indicate the same invalid feature point. The invalid control point fails to compute its ground coordinates due to the low accuracy of the initial EO parameters and the inaccurate shape model of the celestial body (green crosses represent control points).
Figure 8: Flowchart for removing invalid control points.
Figure 9: Distribution of images in Dataset 1 over the lunar South Pole (the colored outlines indicate the images, and the base map is the lunar LROC WAC orthoimage provided by NASA [43]).
Figure 10: Distribution of images in Dataset 2 (the colored outlines represent the test images, and the base map is the Bennu OSIRIS-REx OCAMS global image mosaic provided by the USGS Astrogeology Science Center [44]).
Figure 11: Distribution of images in Dataset 3, covering the entire surface of the asteroid Bennu (the colored outlines represent the images).
Figure 12: Time statistics for the construction of a control network using different algorithms. The vertical axis represents the time required to construct the control network. A logarithmic scale is used to enhance the visibility of smaller values due to the significant differences in computation time.
Figure 13: Time statistics for thinning control networks using different algorithms. Similarly, the vertical axis represents computation time (a logarithmic scale is used to enhance the visibility of smaller values due to significant differences in computation time), and the horizontal axis shows the number of control points processed.
Figure 14: Distribution of control points on a single image before and after thinning the control network of Dataset 1 (green crosses represent control points; the red box highlights the differences between the two methods).
Figure 15: Distribution of control points on each image before and after thinning of the control network of Dataset 2 (areas without control points are due to a lack of overlap between images or poor image quality; green crosses represent control points; the red box highlights the differences between the two methods).
Figure 16: Distribution of control points on each image before and after thinning for the control network of Dataset 3 (the colored outlines represent the images).
Figure 17: Illustration of the identified invalid control points on a single image from Dataset 3, where the green cross marks represent valid control points and the red cross marks indicate the invalid control points identified by our method.
Figure 18: Sigma 0 of the bundle adjustment before and after thinning.
Figure 19: Image mosaic results of adjacent orthoimages (light blue arrows indicate seam lines).
19 pages, 9878 KiB  
Article
Arctic Sea Ice Surface Temperature Retrieval from FengYun-3A MERSI-I Data
by Yachao Li, Tingting Liu, Zemin Wang, Mohammed Shokr, Menglin Yuan, Qiangqiang Yuan and Shiyu Wu
Remote Sens. 2024, 16(23), 4599; https://doi.org/10.3390/rs16234599 - 7 Dec 2024
Viewed by 748
Abstract
Arctic sea-ice surface temperature (IST) is an important environmental and climatic parameter. Currently, wide-swath sea-ice surface temperature products have a spatial resolution of approximately 1000 m. The Medium Resolution Spectral Imager (MERSI-I) offers a thermal infrared channel with a wide swath of 2900 km and a high spatial resolution of 250 m. In this study, we developed a single-channel algorithm to retrieve ISTs from MERSI-I data. The algorithm accounts for the following challenges: (1) the wide range of incidence angles; (2) the unstable snow-covered ice surface; (3) the variation in atmospheric water vapor content; and (4) the unique spectral response function of MERSI-I. We reduced the impact of using a constant emissivity on the IST retrieval accuracy by simulating the directional emissivity. Different ice surface types were used in the simulation, and we recommend the sun crust type as the most suitable for IST retrieval. We estimated the real-time water vapor content from the MERSI-I near-infrared data using a band ratio method. The results show that the retrieved IST was lower than the buoy measurements, with a mean bias and root-mean-square error (RMSE) of −1.928 K and 2.616 K, respectively. The retrieved IST was higher than the IceBridge measurements, with a mean bias and RMSE of 1.056 K and 1.760 K, respectively. Compared with the original algorithm, the developed algorithm has higher accuracy and reliability. The sensitivity analysis shows that an error of 20% in the atmospheric water vapor content may lead to an IST retrieval error of less than 1.01 K. Full article
(This article belongs to the Special Issue Geodata Science and Spatial Analysis with Remote Sensing)
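The single-channel retrieval follows the standard thermal infrared radiative transfer inversion: solve for the surface-leaving radiance, then invert the Planck function. A generic sketch, where the band-effective wavelength, emissivity, and atmospheric terms are placeholder values (in the paper the atmospheric terms are tied to the water vapor content estimated from the near-infrared band ratios):

```python
import numpy as np

C1 = 1.19104e8  # 2*h*c^2 in W um^4 m^-2 sr^-1
C2 = 1.43877e4  # h*c/k in um K

def inverse_planck(radiance, wavelength_um):
    """Temperature (K) from band-effective spectral radiance."""
    lam = wavelength_um
    return C2 / (lam * np.log(C1 / (lam ** 5 * radiance) + 1.0))

def single_channel_ist(L_sensor, tau, L_up, L_down, emissivity, lam=11.25):
    """Invert L_sensor = tau*(eps*B(T) + (1-eps)*L_down) + L_up for T."""
    L_surf = (L_sensor - L_up - tau * (1.0 - emissivity) * L_down) \
             / (tau * emissivity)
    return inverse_planck(L_surf, lam)

# Placeholder values for one TIR pixel (radiances in W m^-2 sr^-1 um^-1)
print(single_channel_ist(L_sensor=7.2, tau=0.85, L_up=0.9,
                         L_down=1.4, emissivity=0.997))  # ~284 K
```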
Figure 1: The locations of the AERONET stations (red stars), buoys (yellow stars), and MERSI-I/MODIS images (blue rectangles) in the Arctic. Because of the movement of the buoys, the yellow stars may represent the same buoy at different times. The MERSI-I/MODIS images are not the complete scenes; only the overlapping areas between the two sensors are presented.
Figure 2: Temperature profiles in the snow and sea ice measured by the thermistor string installed in a CRREL buoy device. The colored lines denote the observation times. The dotted line denotes the snow–air interface. The data are from (a) 18 April 2010, and (b) 3 June 2011.
Figure 3: The flowchart of the developed algorithm for the MERSI-I TIR data.
Figure 4: Modeled directional emissivity of five ice and snow types at 0–75° emergence angles in the 7–15 μm range. The ice and snow type and corresponding f_sp values are listed in the subfigures (a–e).
Figure 5: Variation in ice and snow surface emissivity with the emergence angle, and the error in the IST caused by this emissivity variation. The sea-ice surface type is sun crust (f_sp = 0.53).
Figure 6: The IST retrieval error from using the emissivity of different sea-ice surface types: (a–e) IST retrieval errors caused by using the emissivity values for fine dendrite snow, medium granular snow, coarse grained snow, sun crust snow, and bare glaze ice, respectively.
Figure 7: Scatter plots of the simulated AWVC and band ratios: (a–c) scatter plots of AWVC and band ratios R17, R18, and R19, respectively. Ri is the band ratio of band i and band 16 (i = 17, 18, and 19).
Figure 8: Scatter plots of the predictions and ground truths for four parameters (transmissivity, Ta, atmospheric upwelling radiance, and downwelling radiance) based on the different fitting equations (linear, affine, and quadratic). The coefficient of determination (R2) values of the different scatter plots are listed in the subfigures.
Figure 9: Scatter plots of the ISTs from buoy data against the retrieved ISTs: (a) proposed algorithm; (b) ISC algorithm; (c) MODIS IST product.
Figure 10: Comparison with the IST from the IceBridge measurements: (a) locations of the comparison points; (b) comparison between the IST from the proposed algorithm and the IST from the IceBridge measurements; (c) comparison between the IST from the ISC algorithm and the IST from the IceBridge measurements.
Figure 11: Comparison between the spatial maps of the IST from the MERSI-I ISC algorithm and the MODIS IST product over subsections of the Arctic. The two columns on the left are MERSI-I and MODIS IST. The right column presents the scatter plots of the ISTs from the two datasets. The dates of the images are shown in the spatial maps.
Figure 12: Scatter plots of the retrieved AWVC from MERSI-I and AERONET: (a,b) accuracy verification with and without including R19, respectively.
20 pages, 8588 KiB  
Article
Coupling Coordination and Influencing Mechanism of Ecosystem Services Using Remote Sensing: A Case Study of Food Provision and Soil Conservation
by Yu Li, Weina Zhen, Donghui Shi, Yihang Tang and Bing Xia
Remote Sens. 2024, 16(23), 4598; https://doi.org/10.3390/rs16234598 - 7 Dec 2024
Cited by 1 | Viewed by 760
Abstract
Understanding the trade-offs and synergies between ecosystem services is essential for effective ecological management. We selected food provisioning and soil conservation services to explore their intrinsic link and trade-offs. We evaluated these services in Minnesota from 1998 to 2018 using multi-source remote sensing data. The coupling coordination degree model (CCDM) was employed to quantify the relationship between these services. The CCDM evaluates the degree of coordination between systems by measuring their interactions. In addition, we used the geographically weighted regression (GWR) model to identify factors influencing this relationship. Our findings reveal that, while Minnesota’s food provision services have shown a significant overall upward trajectory, distinct declines occurred in 2008 and 2018. In contrast, soil conservation services showed considerable variability from year to year, without a clear trend. Over time, the relationship between food provision and soil conservation services evolved from uncoordinated and transitional to more coordinated development. Our analysis indicates that climate–soil indicators (Z1) exert the most significant influence on the coupling coordination degree (CCD), followed by topography (Z3), vegetation quality (Z4), and socio-economic indicators (Z2). This suggests that natural environmental factors have a greater impact than socio-economic factors. Spatial analysis highlights that topography exhibits significant spatial heterogeneity and serves as the primary spatial driving factor. This study explores the trade-offs between food provision and soil conservation ecosystem services in Minnesota, enhancing the understanding of trade-offs among different ecosystem services and providing insights for global sustainable agricultural development. Full article
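For two subsystems, the CCDM has a compact closed form. A minimal sketch under the common two-subsystem formulation (the equal weights are an assumption; the paper's exact weighting may differ):

```python
import numpy as np

def coupling_coordination(u1, u2, alpha=0.5, beta=0.5):
    """Coupling coordination degree (CCD) for two subsystems.

    u1, u2: normalized evaluation indices in (0, 1] of the two services.
    C = 2*sqrt(u1*u2)/(u1+u2) is the coupling degree,
    T = alpha*u1 + beta*u2 the comprehensive development level,
    D = sqrt(C*T) the coupling coordination degree.
    """
    u1, u2 = np.asarray(u1, float), np.asarray(u2, float)
    c = 2.0 * np.sqrt(u1 * u2) / (u1 + u2)
    t = alpha * u1 + beta * u2
    return np.sqrt(c * t)

# Hypothetical normalized food-provision and soil-conservation indices
print(coupling_coordination(0.8, 0.3))   # ~0.70: less coordinated
print(coupling_coordination(0.7, 0.65))  # ~0.82: more coordinated
```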
Figure 1: Study area.
Figure 2: Changes in Minnesota's food provision services.
Figure 3: Spatial distribution of food provision services in Minnesota.
Figure 4: Changes in Minnesota's soil conservation services.
Figure 5: Spatial distribution of soil conservation services in Minnesota.
Figure 6: Food provision and soil conservation services coupling coordination degree in Minnesota.
Figure 7: Result of the rotated component matrix.
Figure 8: Spatial distribution of impact coefficients of CCD.
22 pages, 9868 KiB  
Article
Re-Estimating GEDI Ground Elevation Using Deep Learning: Impacts on Canopy Height and Aboveground Biomass
by Rei Mitsuhashi, Yoshito Sawada, Ken Tsutsui, Hidetake Hirayama, Tadashi Imai, Taishi Sumita, Koji Kajiwara and Yoshiaki Honda
Remote Sens. 2024, 16(23), 4597; https://doi.org/10.3390/rs16234597 - 7 Dec 2024
Viewed by 989
Abstract
This paper presents a method to improve ground elevation estimates through waveform analysis from the Global Ecosystem Dynamics Investigation (GEDI) and examines its impact on canopy height and aboveground biomass (AGB) estimation. The method uses a deep learning model to estimate ground elevation from the GEDI waveform. Geographic transferability was demonstrated by recalculating canopy height and AGB estimation accuracy using the improved ground elevation without changing the established GEDI formulas for relative height (RH) and AGB. The study covers four regions in Japan and South America, from subarctic to tropical zones, integrating GEDI waveform data with airborne laser scanning (ALS) data. Transfer learning was explored to enhance accuracy in regions not used for training. Ground elevation estimates obtained with deep learning showed an RMSE improvement of more than 3 m over the conventional GEDI L2A product, with good generalization performance. Applying transfer learning and retraining with additional data further improved the estimation accuracy, even with limited datasets. The findings suggest that improving ground elevation estimates enhances canopy height and AGB accuracy, making the most of GEDI's global AGB estimation algorithms. Optimizing models for each region could further enhance accuracy, and broader application of this method may improve global carbon cycle understanding and climate models. Full article
(This article belongs to the Section Forest Remote Sensing)
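Once a better ground elevation is available, the RH metrics follow mechanically from the cumulative waveform energy. A sketch of that recomputation, assuming the standard GEDI convention of accumulating energy from the bottom of the waveform upward (the toy waveform is synthetic):

```python
import numpy as np

def relative_heights(waveform, elev_bins, ground_elev,
                     percentiles=(25, 50, 75, 98, 100)):
    """RH metrics: height above ground at which a given percentage of the
    cumulative waveform energy has been returned.

    ground_elev is where an improved estimate would replace the GEDI L2A
    elev_lowestmode value.
    """
    order = np.argsort(elev_bins)                   # bottom -> top
    elev = np.asarray(elev_bins, float)[order]
    w = np.asarray(waveform, float)[order]
    cum = 100.0 * np.cumsum(w) / np.sum(w)
    return {p: np.interp(p, cum, elev) - ground_elev for p in percentiles}

# Toy waveform: ground return at 100 m, canopy return around 115 m
elev = np.linspace(95.0, 125.0, 301)
wave = (np.exp(-0.5 * ((elev - 100.0) / 0.7) ** 2)
        + 0.6 * np.exp(-0.5 * ((elev - 115.0) / 2.0) ** 2))
print(round(relative_heights(wave, elev, ground_elev=100.0)[98], 1))
```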
Figure 1: (a) GEDI-observed waveform (GEDI L1B) and (b) relative height (RH) calculated from GEDI L1B, an explanatory variable for AGB (GEDI L2A). The GEDI L2A ground elevation estimate (elev_lowestmode) and the actual elevation are shown for both.
Figure 2: Study sites in Japan. (a) Geospatial location and Köppen–Geiger climate classification of Area 1 to Area 3 in Japan. Cfa is humid subtropical climate, Cfb is oceanic climate, Dfa is hot-summer humid continental climate, Dfb is warm-summer humid continental climate, and Dfc is subarctic climate. (b) GEDI footprint at ALS observation sites in Area 1 (Shizuoka: mostly Cfa warm oceanic climate, wide range of elevations and steep terrain). (c) GEDI footprint at ALS observation sites and AGB calculation site in Area 2 (Fukuoka: climate similar to Area 1, planted needleleaf forest). (d) GEDI footprint at ALS observation sites and AGB calculation site in Area 3 (Tsubetsu: mostly Dfc subarctic climate).
Figure 3: Study site in South America (Area 4: tropical climate). GEDI footprint at ALS observation sites and AGB calculation site.
Figure 4: Evaluation flow.
Figure 5: Comparison of ground bins corresponding to the ALS-derived ground within the GEDI footprint and the results of ground estimations from waveforms under various conditions. The dashed line represents the reference line where the data should overlap when the error is zero.
Figure 6: Relationship between ground_bin SNR and ground elevation estimation accuracy (upper), and between the ALS-derived slope and ground elevation estimation accuracy (lower).
Figure 7: Relationship between the amount of data used for transfer learning in each area and ground elevation estimation accuracy.
Figure 8: Comparison between RH98 from simulation waveforms generated from ALS with corrected coordinates and RH98 derived from GEDI waveforms under various conditions. The dashed line represents the reference line where the data should overlap when the error is zero.
Figure 9: Relationship between ground_bin SNR and the estimation accuracy of RH98 from ALS-simulated waveforms (upper), and between the ALS-derived slope and the same estimation accuracy (lower).
Figure 10: Comparison of RH98 calculated from GEDI observation waveforms using the ALS-derived ground elevation with RH98 from simulation waveforms. The dashed line represents the reference line where the data should overlap when the error is zero.
Figure 11: Comparison of aboveground biomass (AGB) derived from ALS within the GEDI footprint and AGB estimated under various conditions. The dashed line represents the reference line where the data should overlap when the error is zero.
Figure 12: Relationship between ground_bin SNR and AGB estimation accuracy (upper), and between the ALS-derived slope and AGB estimation accuracy (lower).
Figure 13: Comparison of AGB estimated from GEDI observation waveforms using the ALS-derived ground elevation with AGB derived from ALS and local measurements. The dashed line represents the reference line where the data should overlap when the error is zero. (a) RH metrics for estimating AGB recalculated using the ALS-derived ground elevation. (b) The estimation formula recalculated after adjusting for local land cover.
18 pages, 10004 KiB  
Article
Evaluation of Soil Moisture Retrievals from a Portable L-Band Microwave Radiometer
by Runze Zhang, Abhi Nayak, Derek Houtz, Adam Watts, Elahe Soltanaghai and Mohamad Alipour
Remote Sens. 2024, 16(23), 4596; https://doi.org/10.3390/rs16234596 - 6 Dec 2024
Viewed by 1036
Abstract
A novel Portable L-band radiometer (PoLRa), compatible with tower-, vehicle-, and drone-based platforms, can provide gridded soil moisture estimates at scales from a few meters to several hundred meters, yet its retrieval accuracy has rarely been examined. This study provides an initial assessment of the performance of PoLRa-derived soil moisture at a spatial resolution of approximately 0.7 m × 0.7 m at a set of sampling pixels in central Illinois, USA. The preliminary evaluation focuses on (1) the consistency of PoLRa-measured brightness temperatures from different viewing directions over the same area, and (2) whether PoLRa-derived soil moisture retrievals are within an acceptable accuracy range. As PoLRa shares many design aspects with the L-band radiometer onboard NASA's Soil Moisture Active Passive (SMAP) mission, two SMAP operational algorithms and the conventional dual-channel algorithm (DCA) were applied to calculate volumetric soil moisture from the measured brightness temperatures. The vertically polarized brightness temperatures from PoLRa are typically more stable than their horizontally polarized counterparts across all four directions; in each test period, the standard deviations of the observed dual-polarization brightness temperatures are generally less than 5 K. Comparing PoLRa-based soil moisture retrievals against simultaneous measurements from a handheld capacitance probe, the unbiased root mean square error (ubRMSE) and the Pearson correlation coefficient (R) are mostly below 0.05 m3/m3 and above 0.7 for the various algorithms adopted here. While the SMAP models and the DCA algorithm can all derive soil moisture from PoLRa observations, no single algorithm consistently outperforms the others. These findings highlight the significant potential of ground- or drone-based PoLRa measurements as a standalone reference for the calibration and validation of spaceborne L-band synthetic aperture radars and radiometers. The accuracy of PoLRa-derived high-resolution soil moisture can be further improved via standardized operational procedures and appropriate tau-omega parameters. Full article
(This article belongs to the Section Remote Sensing in Geology, Geomorphology and Hydrology)
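The reported scores are the standard soil moisture validation metrics; a small sketch of how bias, RMSE, ubRMSE, and R relate (the sample values are made up for illustration):

```python
import numpy as np

def validation_metrics(retrieved, in_situ):
    """Bias, RMSE, unbiased RMSE, and Pearson R.

    ubRMSE removes the mean bias from the error: ubRMSE^2 = RMSE^2 - bias^2.
    """
    retrieved = np.asarray(retrieved, float)
    in_situ = np.asarray(in_situ, float)
    err = retrieved - in_situ
    bias = err.mean()
    rmse = np.sqrt((err ** 2).mean())
    ubrmse = np.sqrt(rmse ** 2 - bias ** 2)
    r = np.corrcoef(retrieved, in_situ)[0, 1]
    return bias, rmse, ubrmse, r

# Made-up PoLRa retrievals vs. capacitance probe readings (m3/m3)
polra = np.array([0.21, 0.25, 0.30, 0.34, 0.28])
probe = np.array([0.19, 0.24, 0.27, 0.33, 0.30])
print(validation_metrics(polra, probe))
```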
Figure 1: Experimental sites, timeline, and instrument setup of ground-based PoLRa measurements: (a) geographical coordinates of experimental locations for phases 1 and 2; (b) microwave radiation measurements over the same area from four different directions (highlighted by the yellow arrows) and five measurement points for the in situ sensor, i.e., the TEROS 12 capacitance probe; (c) the footprint area observed during phase 1; (d) geographical coordinates of experimental locations for phase 3; (e) the setup and geometric view of the ground-based PoLRa instruments.
Figure 2: Process flow chart describing the conversion of PoLRa-derived brightness temperatures to areal soil moisture over the targeted locations, with performance assessment.
Figure 3: (a) Conceptual geometric configuration of the footprint captured by PoLRa (mounted at a height of 1.14 m on the steel stand) and the centered square within the elliptical footprint used for initial validation of PoLRa-derived soil moisture retrievals. (b) Practical setup of PoLRa's measurement position and the validation zone during the experiment. (c) Example of soil moisture benchmarking using the METER TEROS 12 capacitance probe at pre-selected points within the validation square.
Figure 4: Boxplots of polarized brightness temperatures over the testing sites from four different directions on four dates: (a) 3 November 2023; (b) 4 November 2023; (c) 7 November 2023; (d) 8 November 2023.
Figure 5: Time series of representative polarized brightness temperatures extracted from the daily sets of filtered brightness temperatures during phase 3 over (a) bare soil and (b) grassland.
Figure 6: Time series of soil moisture derived from PoLRa brightness temperatures using three different algorithms (SCAH, RDCA, and DCA0) and measured by the in situ probe during phase 3 over (a) bare soil and (b) grassland. The green shaded areas around the in situ soil moisture time series indicate the range of measured values within two standard deviations.
Figure A1: Time series of raw voltages measured at (a) horizontal polarization and (b) vertical polarization by PoLRa's antenna with different settings, where the details are attached to the table below. The paired vertical lines mark the beginning and end of each phase.
Figure A2: Time series of volumetric soil moisture measured by in situ probes at five points within the same areas of (a) bare soil and (b) grassland during phase 3.
33 pages, 53086 KiB  
Article
Study on Soil Freeze–Thaw and Surface Deformation Patterns in the Qilian Mountains Alpine Permafrost Region Using SBAS-InSAR Technique
by Zelong Xue, Shangmin Zhao and Bin Zhang
Remote Sens. 2024, 16(23), 4595; https://doi.org/10.3390/rs16234595 - 6 Dec 2024
Viewed by 1012
Abstract
The Qilian Mountains, located on the northeastern edge of the Qinghai–Tibet Plateau, are characterized by unique high-altitude and cold-climate terrain, where permafrost and seasonally frozen ground are extensively distributed. In recent years, with global warming and increasing precipitation on the Qinghai–Tibet Plateau, permafrost degradation has become severe, further exacerbating the fragility of the ecological environment. Timely research on surface deformation and the freeze–thaw patterns of alpine permafrost in the Qilian Mountains is therefore imperative. This study employs Sentinel-1A SAR data and the SBAS-InSAR technique to monitor surface deformation in the alpine permafrost regions of the Qilian Mountains from 2017 to 2023. A method for spatiotemporal interpolation of ascending and descending orbit results is proposed to further calculate two-dimensional surface deformation fields. Moreover, by constructing a dynamic periodic deformation model, the study more accurately summarizes the regular changes in permafrost freeze–thaw and the trends in seasonal deformation amplitudes. The results indicate that the surface deformation time series in both the vertical and east–west directions obtained using this method show significant improvements in accuracy over the initial data, allowing a more precise description of the dynamic processes of surface deformation in the study area. Subsidence is predominant in permafrost areas, while uplift mainly occurs in seasonally frozen ground areas near lakes and streams. The average vertical deformation rate is 1.56 mm/a, with seasonal amplitudes reaching 35 mm. Topographical factors (elevation, slope gradient, aspect) and climatic factors (temperature, soil moisture, precipitation) play key roles in deformation patterns. The deformation of permafrost follows five distinct phases: summer thawing, warm-season stability, frost heave, winter cooling, and spring thawing. This study enhances our understanding of permafrost deformation characteristics in high-latitude and high-altitude regions, providing a reference for preventing geological disasters in the Qinghai–Tibet Plateau area and offering theoretical guidance for regional ecological environmental protection and infrastructure safety. Full article
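Combining ascending and descending LOS measurements into vertical and east–west components reduces to solving a small linear system per pixel. A sketch that neglects north–south motion, to which near-polar SAR geometries are largely insensitive; sign conventions and angle definitions vary between processors, so treat them here as assumptions:

```python
import numpy as np

def decompose_2d(los_asc, los_desc, inc_asc, inc_desc, head_asc, head_desc):
    """Vertical (d_u) and east-west (d_e) deformation from two LOS values.

    Angles in radians; 'head' is the satellite flight azimuth. Solves
    los = d_u*cos(inc) - d_e*sin(inc)*cos(head) for both geometries.
    """
    def row(inc, head):
        return [np.cos(inc), -np.sin(inc) * np.cos(head)]
    A = np.array([row(inc_asc, head_asc), row(inc_desc, head_desc)])
    d_u, d_e = np.linalg.solve(A, np.array([los_asc, los_desc]))
    return d_u, d_e

# Hypothetical Sentinel-1-like geometry and LOS displacements (mm)
du, de = decompose_2d(los_asc=-4.0, los_desc=-2.5,
                      inc_asc=np.deg2rad(39.0), inc_desc=np.deg2rad(39.0),
                      head_asc=np.deg2rad(-10.0), head_desc=np.deg2rad(190.0))
print(round(du, 2), round(de, 2))
```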
Figures:
Figure 1. Study area: (a) location of the Qilian Mountains within the QTP, with the range's map base showing permafrost classification; (b) DEM of the Qilian Mountains; (c) surface cover classification map of the Qilian Mountains; (d) permafrost classification map of the study area; (e) slope map of the study area; (f) terrain classification map of the study area.
Figure 2. The method flow chart used in this study.
Figure 3. Schematic of radar imaging, using ascending SAR imagery as an example. (a) Spatial relationship between LOS deformation and true surface deformation; (b) geometric relationship of LOS deformation in the horizontal plane (all arrows indicate the positive direction); (c) geometric relationship of LOS deformation in the vertical plane (all arrows indicate the positive direction); (d) geometric relationship of LOS deformation in the horizontal plane.
Figure 4. Original GNSS data. (a) Vertical direction original GNSS data (QHGC); (b) east–west direction original GNSS data (QHGC).
Figure 5. GNSS data after detrending. (a) Vertical direction GNSS data after detrending (QHGC); (b) east–west direction GNSS data after detrending (QHGC).
Figure 6. GNSS data after denoising. (a) Vertical direction GNSS data after denoising (QHGC); (b) east–west direction GNSS data after denoising (QHGC).
Figure 7. Baseline diagrams for ascending and descending SAR data. (a) The perpendicular baseline of ascending SAR data; (b) the perpendicular baseline of descending SAR data.
Figure 8. InSAR coherence map. (a) Ascending InSAR coherence results; (b) descending InSAR coherence results. Red ellipses indicate examples of areas with high vegetation coverage, red triangles denote examples of areas with no vegetation, and red rectangles represent examples of water body areas.
Figure 9. LOS deformation rate maps. (a) Ascending SAR data LOS deformation rate results; (b) descending SAR data LOS deformation rate results.
Figure 10. Two-dimensional SBAS-InSAR results at the QHGC site. (a) Vertical SBAS-InSAR results; (b) east–west SBAS-InSAR results.
Figure 11. Comparison of two-dimensional InSAR results with GNSS data at the QHGC site. (a) Comparison of vertical InSAR and GNSS data; (b) comparison of east–west InSAR and GNSS data.
Figure 12. Accuracy verification of two-dimensional surface deformation using the methods described in this study: (a) mutual verification results of vertical ascending InSAR and GNSS data; (b) mutual verification results of interpolated vertical descending InSAR and GNSS data; (c) mutual verification results of vertical InSAR and GNSS data; (d) mutual verification results of east–west InSAR and GNSS data.
Figure 13. Vertical surface deformation time series in typical areas. (a) Geographic location map of typical feature points; (b–g) enlarged views of each feature point location; (h–m) vertical surface deformation time series for DS1–DS6, with the dynamic periodic deformation model for each feature point marked in the figure.
Figure 14. Two-dimensional deformation field. (a) Map of annual vertical deformation rates, with a histogram of annual vertical deformation rates displayed in the lower right inset; (b) seasonal deformation amplitude, with a histogram of seasonal deformation amplitudes shown in the lower right inset.
Figure 15. Q-statistic of the Geodetector. (a) Q-statistic of the interannual deformation rates; (b) Q-statistic of the seasonal deformation amplitudes.
Figure 16. Selected analysis area locations. (a) Transect line and local geographical map of the analysis area; (b,c) DEM maps of the analysis area.
Figure 17. Relationship between seasonal deformation, annual deformation rate, slope, and elevation (transect line AB).
Figure 18. Histograms of annual vertical deformation rates and seasonal deformation amplitudes by slope aspect. (a) Histograms of annual vertical deformation rates at sunny and shaded slopes within the study area; (b) histograms of seasonal deformation amplitudes of permafrost at sunny and shaded slopes within the study area.
Figure 19. Time series of vertical surface deformation in relation to temperature changes at different locations. (a) Relationship between vertical surface deformation time series and temperature changes at point DS3; (b) relationship between vertical surface deformation time series and temperature changes at point DS4.
Figure 20. Relationship between vertical surface deformation time series and precipitation at different locations. (a) Relationship between vertical surface deformation time series and precipitation at points DS3 and DS4; (b) relationship between vertical surface deformation time series and local soil moisture at points DS3 and DS4.
Figure 21. The five stages of seasonal deformation in permafrost at DS6 (P1: summer melting process; P2: warm season stabilization process; P3: freezing uplift process; P4: winter cooling process; P5: spring warming process).
16 pages, 4723 KiB  
Article
A Wavelet Decomposition Method for Estimating Soybean Seed Composition with Hyperspectral Data
by Aviskar Giri, Vasit Sagan, Haireti Alifu, Abuduwanli Maiwulanjiang, Supria Sarkar, Bishal Roy and Felix B. Fritschi
Remote Sens. 2024, 16(23), 4594; https://doi.org/10.3390/rs16234594 - 6 Dec 2024
Viewed by 779
Abstract
Soybean seed composition, particularly protein and oil content, plays a critical role in agricultural practices, influencing crop value, nutritional quality, and marketability. Accurate and efficient methods for predicting seed composition are essential for optimizing crop management and breeding strategies. This study assesses the effectiveness of combining handheld spectroradiometers with the Mexican Hat wavelet transformation to predict soybean seed composition at both seed and canopy levels. Initial analyses using raw spectral data from these devices showed limited predictive accuracy. However, by using the Mexican Hat wavelet transformation, meaningful features were extracted from the spectral data, significantly enhancing prediction performance. Results showed improvements: for seed-level data, Partial Least Squares Regression (PLSR), a method used to reduce spectral data complexity while retaining critical information, showed R² values increasing from 0.57 to 0.61 for protein content and from 0.58 to 0.74 for oil content post-transformation. Canopy-level data analyzed with Random Forest Regression (RFR), an ensemble method designed to capture non-linear relationships, also demonstrated substantial improvements, with R² increasing from 0.07 to 0.44 for protein and from 0.02 to 0.39 for oil content post-transformation. These findings demonstrate that integrating handheld spectroradiometer data with wavelet transformation bridges the gap between high-end spectral imaging and practical, accessible solutions for field applications. This approach not only improves the accuracy of seed composition prediction at both seed and canopy levels but also supports more informed decision-making in crop management. This work represents a significant step towards making advanced crop assessment tools more accessible, potentially improving crop management strategies and yield optimization across various farming scales. Full article
(This article belongs to the Special Issue Recent Progress in Hyperspectral Remote Sensing Data Processing)
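As an illustration of the transformation step, the continuous wavelet transform with the Mexican Hat ("mexh") mother wavelet is available in PyWavelets. The spectrum, wavelength grid, and scale range below are stand-ins, not the study's settings.

```python
# Sketch of Mexican Hat (Ricker) CWT features from a reflectance spectrum.
# The spectrum here is random stand-in data; the scale range is illustrative.
import numpy as np
import pywt

wavelengths = np.arange(350, 2501)                 # nm, a typical field-spec range
reflectance = np.random.default_rng(0).random(wavelengths.size)

scales = np.arange(1, 65)                          # assumed scale range
coeffs, _ = pywt.cwt(reflectance, scales, "mexh")  # shape: (n_scales, n_bands)

features = coeffs.ravel()                          # flattened input for PLSR/RFR
print(features.shape)
```

In practice the coefficient matrix (or a subset of informative scale/wavelength positions) would replace the raw reflectance as the model's predictor set.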
Figures:
Figure 1. Overall study area location and the equipment used in the research. (a) The star indicates the location of the Bradford Research Farm in Missouri, USA, where the study was conducted. (b) Research fields. (c) H1G plot boundaries. (d) PSR+ and PSR+ 3500 spectroradiometer. (e) Benchtop hyperspectral scanner.
Figure 2. Overview of the workflow, including hyperspectral data collection for both canopy and seed, followed by image preprocessing, corrections, and machine learning modeling, culminating in model evaluation.
Figure 3. Mean spectra and distribution of the soybean canopy (a) and harvested seed (b). The y-axis represents the reflectance percentage, while the x-axis corresponds to the wavelength. Regions impacted by atmospheric absorption and noise have been excluded for clarity.
Figure 4. Density plots of protein (a) and oil (b) content for the same seed samples (ground truth) used as reference data for developing models based on canopy spectra (spectroradiometer) and seed spectra (benchtop hyperspectral scanner). The y-axis represents density, and the x-axis represents the content (%) of each element.
Figure 5. Spectral transformation of seed and canopy-level spectra. The left panel presents the raw spectral signatures, while the right panel shows the transformed spectra for both seed and canopy samples. The raw spectra show the percentage of reflectance, whereas the transformed spectra display intensity in relation to wavelength. Subfigures (c,d) present data from the PSR spectroradiometer sensor, while subfigures (a,b) display data from the HySpex sensor.
Figure 6. Model results. Figures (a,b,e,f) correspond to the HySpex sensor, while figures (c,d,g,h) represent the PSR spectroradiometer sensor.
Figure 7. Important wavelengths for predicting oil and protein content at seed level from HySpex sensor samples. The blue line represents the raw spectra, while the red line depicts the transformed spectra. The electromagnetic wavelength range spans from the visible to SWIR regions.
Figure 8. Significant wavelengths for oil and protein prediction in canopy-level samples from the spectroradiometer sensor. The blue and red lines show the raw and transformed spectra, respectively. The x-axis shows the electromagnetic wavelength range and the y-axis shows the importance score.
Figure 9. Seed-level important wavelengths.
Figure 10. Canopy-level important wavelengths.
17 pages, 5445 KiB  
Article
CaLiJD: Camera and LiDAR Joint Contender for 3D Object Detection
by Jiahang Lyu, Yongze Qi, Suilian You, Jin Meng, Xin Meng, Sarath Kodagoda and Shifeng Wang
Remote Sens. 2024, 16(23), 4593; https://doi.org/10.3390/rs16234593 - 6 Dec 2024
Viewed by 960
Abstract
Three-dimensional object detection has been a key area of research in recent years because of its rich spatial information and superior performance in addressing occlusion issues. However, the performance of 3D object detection still lags significantly behind that of 2D object detection, owing to challenges such as difficulties in feature extraction and a lack of texture information. To address this issue, this study proposes a 3D object detection network, CaLiJD (Camera and LiDAR Joint Contender for 3D Object Detection), guided by two-dimensional detection results. CaLiJD creatively integrates advanced channel attention mechanisms with a novel bounding-box filtering method to improve detection accuracy, especially for small and occluded objects. Bounding boxes are detected by the 2D and 3D networks for the same object in the same scene as an associated pair. The detection results that satisfy the criteria are then fed into the fusion layer for training. In this study, a novel fusion network is proposed. It consists of numerous convolutions arranged in both sequential and parallel forms and includes a Grouped Channel Attention Module for extracting interactions among multi-channel information. Moreover, a novel bounding-box filtering mechanism was introduced, incorporating the normalized distance from the object to the LiDAR sensor as a filtering criterion within the process. Experiments were conducted using the KITTI 3D object detection benchmark. The results showed that a substantial improvement in mean Average Precision (mAP) was achieved by CaLiJD compared with the baseline single-modal 3D detection model, with an enhancement of 7.54%. Moreover, the improvement achieved by our method surpasses that of other classical fusion networks by an additional 0.82%. In particular, CaLiJD achieved mAP values of 73.04% and 59.86% for cyclists and pedestrians, respectively, demonstrating state-of-the-art performance on these challenging small-object detection tasks. Full article
(This article belongs to the Special Issue Point Cloud Processing with Machine Learning)
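The Grouped Channel Attention Module is not specified in full here, so the sketch below only illustrates the general pattern the abstract describes: grouped convolutions that mix multi-channel information, followed by a squeeze-and-excitation-style gate that reweights channels. All layer sizes and the grouping factor are assumptions, not the paper's configuration.

```python
# Hedged PyTorch sketch of a grouped channel-attention block in the spirit of
# CaLiJD's GCAM; hyperparameters are illustrative assumptions.
import torch
import torch.nn as nn

class GroupedChannelAttention(nn.Module):
    def __init__(self, channels: int, groups: int = 4, reduction: int = 4):
        super().__init__()
        # Grouped convolution mixes information within channel groups
        self.extract = nn.Conv1d(channels, channels, kernel_size=3,
                                 padding=1, groups=groups)
        # Squeeze-and-excitation-style gate produces per-channel weights
        self.gate = nn.Sequential(
            nn.AdaptiveAvgPool1d(1),
            nn.Conv1d(channels, channels // reduction, 1),
            nn.ReLU(inplace=True),
            nn.Conv1d(channels // reduction, channels, 1),
            nn.Sigmoid(),
        )

    def forward(self, h: torch.Tensor) -> torch.Tensor:
        h = self.extract(h)
        return h * self.gate(h)   # reweight channels of the fused features

x = torch.randn(2, 64, 128)       # (batch, channels, fused-feature length)
print(GroupedChannelAttention(64)(x).shape)
```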
Figures:
Figure 1. Quantitative analysis of camera, LiDAR, fusion, and proposed methods: the performance comparison between CaLiJD and other state-of-the-art models on the KITTI dataset, where the horizontal axis represents the different kinds of methods and the vertical axis represents the AP values (Moderate) of the different methods on the KITTI dataset.
Figure 2. The overall network architecture of CaLiJD. It is mainly divided into a backbone network, a data selection module, and a fusion layer. SECOND and C_RCNN represent the 3D and 2D backbone networks of CaLiJD, which are applied to obtain the candidate boxes required for the fusion network. The obtained candidates are filtered and input to the fusion layer for training.
Figure 3. Selection mechanism for CaLiJD. (a) The data screening method in the traditional late-fusion network; (b) the data screening method in CaLiJD.
Figure 4. The overall structure of the feature fusion layer.
Figure 5. Grouped Channel Attention Module. H is the feature map obtained after fusion of features in the fusion layer. H′ denotes the feature map that has been assigned weights by GCAM. h1, h2, and h3 represent three different features extracted from different grouped convolutions.
Figure 6. Grouped Channel Attention Module.
Figure 7. Visualization of the KITTI dataset. (a,d,g,j) show 2D images from four different scenes. (b,e,h,k) present the visualizations of 3D detection using SECOND for these four scenes. (c,f,i,l) show the visualization results for CaLiJD. The green 3D bounding boxes represent the detection results, whereas the areas within the yellow circles indicate the erroneous detections.
Figure 8. Comparison of mAP values for detection results on the car split. The bar chart visually compares the AP values of three algorithms, including CaLiJD, on the KITTI dataset's car split under recall levels of 11 and 40. Yellow represents CaLiJD, red represents CLOCs, and blue represents SECOND. (a) Results for 3D object detection; (b) results for BEV detection.
23 pages, 3947 KiB  
Article
Learnable Resized and Laplacian-Filtered U-Net: Better Road Marking Extraction and Classification on Sparse-Point-Cloud-Derived Imagery
by Miguel Luis Rivera Lagahit, Xin Liu, Haoyi Xiu, Taehoon Kim, Kyoung-Sook Kim and Masashi Matsuoka
Remote Sens. 2024, 16(23), 4592; https://doi.org/10.3390/rs16234592 - 6 Dec 2024
Viewed by 729
Abstract
High-definition (HD) maps for autonomous driving rely on data from mobile mapping systems (MMS), but the high cost of MMS sensors has led researchers to explore cheaper alternatives like low-cost LiDAR sensors. While cost-effective, these sensors produce sparser point clouds, leading to poor feature representation and degraded performance in deep learning techniques, such as convolutional neural networks (CNNs), for tasks like road marking extraction and classification, which are essential for HD map generation. Examining common image segmentation workflows and the structure of U-Net, a CNN, reveals a source of performance loss in the succession of resizing operations, which further diminishes the already poorly represented features. Addressing this, we propose improving U-Net's ability to extract and classify road markings from sparse-point-cloud-derived images by introducing a learnable resizer (LR) at the input stage and learnable resizer blocks (LRBs) throughout the network, thereby mitigating feature and localization degradation from resizing operations in the deep learning framework. Additionally, we incorporate Laplacian filters (LFs) to better manage activations along feature boundaries. Our analysis demonstrates significant improvements, with F1-scores increasing from below 20% to above 75%, showing the effectiveness of our approach in improving road marking extraction and classification from sparse-point-cloud-derived imagery. Full article
(This article belongs to the Special Issue Applications of Laser Scanning in Urban Environment)
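A learnable resizer of the kind described pairs a conventional resize with a learnable convolutional path, as sketched below; channel counts and kernel sizes are illustrative assumptions, not the paper's architecture.

```python
# Hedged sketch of a learnable resizer: bilinear resize plus a learned
# convolutional residual. Layer sizes are placeholders.
import torch
import torch.nn as nn
import torch.nn.functional as F

class LearnableResizer(nn.Module):
    def __init__(self, channels: int = 1):
        super().__init__()
        self.refine = nn.Sequential(           # learnable path
            nn.Conv2d(channels, 16, 3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(16, channels, 3, padding=1),
        )

    def forward(self, x: torch.Tensor, size) -> torch.Tensor:
        base = F.interpolate(x, size=size, mode="bilinear",
                             align_corners=False)   # conventional resize
        return base + self.refine(base)             # plus learned residual

img = torch.randn(1, 1, 2048, 512)   # sparse-point-cloud-derived image
small = LearnableResizer()(img, size=(512, 128))
print(small.shape)
```

Because the residual path is trained jointly with the segmentation network, the resizer can learn to preserve thin, sparsely represented features that a plain bilinear downsample would wash out.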
Figures:
Figure 1. A typical CNN image segmentation workflow includes resizing operations both before and after the network to adhere to computing constraints.
Figure 2. A sparse-point-cloud-derived image shown in its (left) original size and (right) downsampled size, which will serve as the input for the network. Noticeable changes are evident as the image decreases in scale, with portions of target features being visibly missing.
Figure 3. A downsampled input sparse-point-cloud-derived image (left) and its counterpart after four max-pooling operations, similar to those of U-Net's encoder (right). The pink-circled area highlights target feature disappearance, while the blue-circled area shows misrepresentation of non-target feature areas. This effect remains even after training the network, as confirmed by the results presented in this paper.
Figure 4. Structure of the proposed learnable resized and Laplacian-filtered U-Net. The pink box denotes the learnable resizer (LR) at the input phase. The yellow arrow represents the feature map sharpening at the skip connection. The blue and brown arrows indicate the learnable resizing blocks (LRBs) with Laplacian filter(s) (LF) placed in lieu of the resizing operations within the network.
Figure 5. Abstract representation of a learnable resizer: a conventional resizing operation combined with a learnable convolution operation.
Figure 6. Comparison of downsampling results from (left) the learnable resizer and (right) a conventional resizer (bilinear interpolation). The pink box showcases zoomed-in features, while the blue box highlights retained edges after passing the downsampled images through a simple edge filter. To enhance clarity, the downsampled outcome from the learnable resizer has been converted to grayscale for better visualization.
Figure 7. The effective receptive field (ERF) of U-Net with a resizer and max pooling (left) versus a learnable resizer (right) at the input phase with respect to the central pixel. The ERFs shown are from an untrained model and were visualized using the method presented in the original paper [27], and then zoomed and scaled for presentation purposes.
Figure 8. Existing resizing operations in U-Net include max pooling in the encoder and transposed convolutions (or up-convolutions) in the decoder.
Figure 9. The effective receptive field (ERF) with respect to the central pixel for a trained U-Net with LR (left) versus LR+LRB (right) at the input phase. The ERFs were visualized using the method presented in the original paper [27] and then zoomed and scaled for presentation purposes.
Figure 10. A downsampled sparse-point-cloud-derived image (left) and its Laplacian-filtered counterpart (right). The pink circle highlights a target feature whose boundary has been emphasized by the Laplacian filter.
Figure 11. Sample sparse-point-cloud-derived image (left) and its corresponding labeled ground truth (right). The images in the dataset have sizes of 2048 × 512 pixels, with a corresponding ground resolution of 1 × 1 cm. Pixel values were obtained from intensity or return signal strength from the low-cost LiDAR scanning. In the ground truth, black, white, green, and red pixels represent no point cloud value, other (non-road-marking), lane line, and ped xing features, respectively.
Figure 12. Samples measuring 1 × 1 m of the sparse-point-cloud-derived image, highlighting the widely spaced distribution of features across the image. This image has also been converted to binary, with all pixels corresponding to point cloud values turned white for better visualization.
Figure 13. A sample of the segmentation results. U-Net completely misses the pedestrian crossing and misclassifies the lane line. U-Net with an LR successfully extracts both but presents misclassification through the overextended lane line. U-Net+LRB+LF with an LR achieves the best extraction and classification, closely resembling the ground truth.
Figure 14. Tracking the localization of the pedestrian crossing class using seg-grad-cam [35], shown as a blue-to-red gradient (with red indicating the highest activation), reveals how the model identifies this class from sampled encoder and decoder layers compared to the base model.
Figure 15. Tracking the localization of the lane line class using seg-grad-cam [35], shown as a blue-to-red gradient (with red indicating the highest activation), reveals how the model identifies this class from sampled encoder and decoder layers compared to the base model.
Figure 16. A sample segmentation result of (left) the base model and (right) our proposal, focusing on feature boundaries. The pink pixels highlight misclassifications along the boundaries, indicating overreaching.
Figure 17. Visualized results of our proposal compared to other U-Net variants. For clarity, we retain only target road marking pixels. In the classification task, green pixels represent a lane line, while red pixels represent a pedestrian crossing. For the extraction task, yellow pixels represent markings classified as either of the road marking types. This distinction is important to highlight the model's ability to identify road markings in general and correctly distinguish between different types.
Figure 18. Visualized results of our proposal compared to other models. For clarity, we retain only target road marking pixels. In the classification task, green pixels represent a lane line, while red pixels represent a pedestrian crossing. For the extraction task, yellow pixels represent markings classified as either of the road marking types. This distinction is important to highlight the model's ability to identify road markings in general and correctly distinguish between different types.
24 pages, 6178 KiB  
Article
HoloGaussian Digital Twin: Reconstructing 3D Scenes with Gaussian Splatting for Tabletop Hologram Visualization of Real Environments
by Tam Le Phuc Do, Jinwon Choi, Viet Quoc Le, Philippe Gentet, Leehwan Hwang and Seunghyun Lee
Remote Sens. 2024, 16(23), 4591; https://doi.org/10.3390/rs16234591 - 6 Dec 2024
Viewed by 1415
Abstract
Several studies have explored the use of hologram technology in architecture and urban design, demonstrating its feasibility. Holograms can represent 3D spatial data and offer an immersive experience, potentially replacing traditional methods such as physical 3D models and offering a promising alternative to mixed-reality display technologies. Holograms can visualize realistic scenes such as buildings, cityscapes, and landscapes using the novel view synthesis technique. This study examines the suitability of spatial data collected through the Gaussian splatting method for tabletop hologram visualization. Recent advancements in Gaussian splatting algorithms allow for real-time spatial data collection of a higher quality compared to photogrammetry and neural radiance fields. Hologram visualization and Gaussian splatting are similar in that both recreate 3D scenes without the need for mesh reconstruction. In this research, unmanned aerial vehicle-acquired primary image data were processed for 3D reconstruction using Gaussian splatting techniques and subsequently visualized through holographic displays. Two experimental environments were used, namely, a building and a university campus. As a result, 3D Gaussian data proved to be an ideal spatial data source for hologram visualization, offering new possibilities for real-time motion holograms of real environments and digital twins. Full article
(This article belongs to the Special Issue Application of Photogrammetry and Remote Sensing in Urban Areas)
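The coverage areas discussed for the flight paths (Figures 5–7 below) follow from the camera's field of view and flying height. As a rough sketch for the nadir case only, with placeholder FOV and altitude values rather than the mission's actual parameters:

```python
# Rough nadir ground-footprint estimate for UAV coverage planning.
# FOV and altitude values are placeholders, not the study's parameters.
import math

def nadir_footprint(height_m: float, hfov_deg: float, vfov_deg: float):
    """Ground width and height (m) imaged by one nadir frame at a given altitude."""
    width = 2 * height_m * math.tan(math.radians(hfov_deg) / 2)
    height = 2 * height_m * math.tan(math.radians(vfov_deg) / 2)
    return width, height

print(nadir_footprint(height_m=50.0, hfov_deg=70.0, vfov_deg=50.0))
```

Oblique recording angles enlarge and skew this footprint, which is why the figures report separate coverage areas for 60°, 30°, and 0° camera tilts.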
Figures:
Graphical abstract.
Figure 1. Common 3D reconstruction methods for real-world environments.
Figure 2. Locations of experiment environments 1 and 2 on the university campus.
Figure 3. Gaussian splatting practice interface displaying the dataset of the research experiment in point cloud. (a) Experiment environment 1. (b) Experiment environment 2.
Figure 4. DJI Mini 4 Pro performing flight missions for data collection at the selected experiment environment. (a) DJI Mini 4 Pro under operation. (b) FOVs of the UAV and the coverage area.
Figure 5. Circular paths for data collection in experiment environment 1. (a) Detailed circular flight paths of the UAV. (b) Calculated coverage area of the data collection based on different recording angles.
Figure 6. Data collection for rectangular paths in experiment environment 2. (a) Detailed rectangular flight paths of the UAV, with calculated coverage areas based on different recording angles: (b) 60°, (c) 30°, and (d) 0°.
Figure 7. Data collection crossover paths for experiment environment 2. (a) Detailed crossover flight paths of the UAV, with calculated coverage areas based on different recording crossover paths: (b) vertical path, (c) horizontal path, and (d) diagonal paths 1 and 2.
Figure 8. Operations with Unreal Engine when displaying data and performing novel view synthesis rendering for hologram printing. (a) PLY file opened in the Unreal Engine interface; (b) OBJ file opened in the Unreal Engine interface.
Figure 9. Setup of the exhibited zero-degree tabletop hologram.
Figure 10. Flow chart of the experiment conducted in this study. The method comprises four main phases: data collection, data input scenarios, data processing, and tabletop hologram production.
Figure 11. Representation of images extracted from the collected data in experiment environment 1.
Figure 12. Representation of images extracted from the collected data in experiment environment 2.
Figure 13. Comparison of 3D reconstruction quality between Gaussian splatting and photogrammetry in both experiment environments. (Note: red zones identify significant differences in 3D reconstruction quality between the two methods.)
Figure 14. Three-dimensional reconstruction of data from six scenarios using Gaussian splatting and photogrammetry.
Figure 15. Hologram visualization for (a) experiment environment 1 and (b) experiment environment 2.
Figure 16. Workflow of the urban and architectural HoloGaussian digital twin.
22 pages, 27970 KiB  
Article
Monthly Prediction of Pine Stress Probability Caused by Pine Shoot Beetle Infestation Using Sentinel-2 Satellite Data
by Wen Jia, Shili Meng, Xianlin Qin, Yong Pang, Honggan Wu, Jia Jin and Yunteng Zhang
Remote Sens. 2024, 16(23), 4590; https://doi.org/10.3390/rs16234590 - 6 Dec 2024
Viewed by 710
Abstract
Due to the significant threat to forest health posed by beetle infestations on pine trees, timely and accurate predictions are crucial for effective forest management. This study developed a pine tree stress probability prediction workflow based on monthly cloud-free Sentinel-2 composite images to address this challenge. First, representative pine tree stress samples were selected by combining long-term forest disturbance data using the Continuous Change Detection and Classification (CCDC) algorithm with high-resolution remote sensing imagery. Monthly cloud-free Sentinel-2 images were then composited using the Multifactor Weighting (MFW) method. Finally, a Random Forest (RF) algorithm was employed to build the pine tree stress probability model and analyze the importance of spectral, topographic, and meteorological features. The model achieved prediction precisions of 0.876, 0.900, and 0.883, and overall accuracies of 89.5%, 91.6%, and 90.2% for January, February, and March 2023, respectively. The results indicate that spectral features, such as band reflectance and vegetation indices, ranked among the top five in importance (i.e., SWIR2, SWIR1, Red band, NDVI, and NBR). They more effectively reflected changes in canopy pigments and leaf moisture content under stress compared with topographic and meteorological features. Additionally, combining long-term stress disturbance data with high-resolution imagery to select training samples improved their spatial and temporal representativeness, enhancing the model’s predictive capability. This approach provides valuable insights for improving forest health monitoring and uncovers opportunities to predict future beetle outbreaks and take preventive measures. Full article
(This article belongs to the Section Forest Remote Sensing)
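A minimal sketch of the final modeling step follows, assuming the five top-ranked predictors named above and synthetic training data; the Random Forest's predict_proba output plays the role of the per-pixel stress probability.

```python
# Hedged sketch of the RF stress-probability step: band reflectances and
# indices as predictors, probability of the "stressed" class as output.
# Feature names follow the abstract; the training data here are synthetic.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
X = rng.random((500, 5))                    # [SWIR2, SWIR1, Red, NDVI, NBR]
y = (X[:, 0] + X[:, 1] > 1.0).astype(int)   # stand-in stress labels

rf = RandomForestClassifier(n_estimators=300, random_state=0).fit(X, y)
stress_prob = rf.predict_proba(X[:5])[:, 1]  # per-pixel stress probability
print(stress_prob)
```

Applied per pixel of a monthly cloud-free composite, the positive-class probability yields maps like those in Figure 7, from which high-risk areas (e.g., probability > 80%) can be thresholded.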
Figures:
Figure 1. Location of the study area in Ning'er County, Puer City, Yunnan Province, China, overlaid on a false-color Sentinel-2 image (R, G, B = SWIR1, NIR, Red bands). The yellow dashed line delineates the study area's boundaries.
Figure 2. Field survey of pine stress.
Figure 3. Overall technical workflow for predicting monthly pine stress probability.
Figure 4. Reference data based on stress disturbance results. (a) The monthly stress disturbance results from 2019 to 2023; (b) an example of reference sample points displayed on the GF-1, GF-2, Sentinel-2, and Landsat-8 imagery; (c) the spatial distribution of non-stress sample points selected through visual interpretation; (d) the spatial distribution of pine stress sample points selected through visual interpretation.
Figure 5. Comparison of monthly cloud-free Sentinel-2 composite images and vegetation indices from January to March 2023. The images are displayed as false-color composites (RGB = SWIR1, NIR, Red). A specific site was selected for detailed close-up analysis, showing the imagery, NDVI, and NDWI of the pine stress area affected by beetle infestation.
Figure 6. Feature importance ranking for the pine stress prediction model.
Figure 7. Predicted pine stress probability for January, February, and March 2023 (left) and spatial distribution of areas with probability greater than 80% (right).
Figure 8. Site 1: monthly increase in pine stress level and area from January to March 2023, with Sentinel-2 imagery and stress probability distribution. The dashed circles mark key focus areas of forest stress.
Figure 9. Site 2: monthly decrease in pine stress level and area from January to March 2023, with Sentinel-2 imagery and stress probability distribution. The dashed circles mark key focus areas of forest stress.
Figure 10. Site 3: monthly changes (increase and decrease) in pine stress levels and areas from January to March 2023, with Sentinel-2 imagery and stress probability distribution. The dashed circles mark key focus areas of forest stress.
18 pages, 8923 KiB  
Article
Survival Risk Analysis for Four Endemic Ungulates on Grasslands of the Tibetan Plateau Based on the Grazing Pressure Index
by Lingyan Yan, Lingqiao Kong, Zhiyun Ouyang, Jinming Hu and Li Zhang
Remote Sens. 2024, 16(23), 4589; https://doi.org/10.3390/rs16234589 - 6 Dec 2024
Cited by 1 | Viewed by 527
Abstract
Ungulates are essential for maintaining the health of grassland ecosystems on the Tibetan plateau. Increased livestock grazing has caused competition for food resources, threatening ungulates' survival. The survival risk of food resources for ungulates can be quantified by the grazing pressure index, which requires an accurate estimate of grassland carrying capacity. Previous research on the grazing pressure index has rarely taken into account the influence of wild ungulates, mainly due to the lack of precise spatial data on their numbers. In this study, we conducted field investigations to construct high-resolution spatial distributions for the four endemic ungulates on the Tibetan plateau. By factoring in the grazing consumption of these ungulates, we recalculated the grassland carrying capacity to obtain the grazing pressure index, which allowed us to assess the survival risks for each species. The results show: (1) Quantity estimates for Tibetan antelope (Pantholops hodgsonii), Tibetan wild donkey (Equus kiang), Tibetan gazelle (Procapra picticaudata), and wild yak (Bos mutus) on the Tibetan plateau are 24.57 × 10⁴, 17.93 × 10⁴, 7.16 × 10⁴, and 1.88 × 10⁴, respectively; they are mainly distributed in the northern and western regions of the Tibetan plateau. (2) The grassland carrying capacity of the Tibetan plateau is 69.98 million sheep units, with ungulate grazing accounting for 5% of forage utilization. Alpine meadow and alpine steppe exhibit the highest grassland carrying capacity. (3) The grazing pressure index on the Tibetan plateau grasslands is 2.23, indicating heightened grazing pressure in the southern and eastern regions. (4) The habitat survival risk analysis indicates that the high survival risk areas (where the grazing pressure index exceeds 1.2) for the four ungulate species account for the following proportions of their total habitat areas: Tibetan wild donkeys (49.76%), Tibetan gazelles (47.00%), Tibetan antelopes (40.76%), and wild yaks (34.83%). These high-risk areas are primarily located within alpine meadow and temperate desert steppe. This study provides a quantitative assessment of survival risks for these four ungulate species on the Tibetan plateau grasslands and serves as a valuable reference for ungulate conservation and grassland ecosystem management. Full article
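As a back-of-envelope illustration of how the grazing pressure index (GPI) in result (3) is formed, the sketch below divides total forage demand (livestock plus wild ungulates, in sheep units) by the grassland carrying capacity. The split between livestock and wild-ungulate demand is inferred from the abstract's figures (wild ungulates ≈ 5% of utilization) and is illustrative only.

```python
# Hedged sketch of the grazing pressure index: total grazing demand in sheep
# units (SU) divided by grassland carrying capacity. Inputs are illustrative
# values inferred from the abstract, not the study's intermediate data.
def grazing_pressure_index(livestock_su: float,
                           wild_ungulate_su: float,
                           carrying_capacity_su: float) -> float:
    """GPI > 1 indicates forage demand beyond the carrying capacity."""
    return (livestock_su + wild_ungulate_su) / carrying_capacity_su

# Million SU; wild ungulates ~5% of total utilization, capacity 69.98 M SU
gpi = grazing_pressure_index(148.3, 7.8, 69.98)
print(f"GPI = {gpi:.2f}")   # ~2.23, the plateau-wide value reported above
```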
Figures:
Figure 1. Spatial location of the TP.
Figure 2. Line transects of the field investigation (a) and occurrences (b) of the four ungulates on the TP from 2018 to 2023.
Figure 3. Quantities of the four endemic ungulates and the rates of the ungulates' habitats in nature reserves on the TP.
Figure 4. Spatial distribution of the four ungulates on the TP. The four ungulates are converted to a standard SU.
Figure 5. The contribution of grassland ecosystems and other ecosystems to the habitat area of the four ungulates on the TP. AD: alpine desert steppe; AS: alpine steppe; AM: alpine meadow; OG: temperate typical steppe, temperate desert steppe, tussock, and temperate meadow steppe; OE: other ecosystems.
Figure 6. GCC and forage consumption by the four ungulates on the TP in 2020. (a) The forage consumption by the four ungulates in a year (kg); (b) GCC, the grassland carrying capacity.
Figure 7. Spatial pattern of the GPI on the TP.
Figure 8. Spatial patterns of grazing pressure for the four ungulates in their habitats on the TP. (a) GPI of Tibetan antelope on the TP; (b) GPI of Tibetan wild donkey on the TP; (c) GPI of Tibetan gazelle on the TP; (d) GPI of wild yak on the TP.
Figure 9. Contributions of the four ungulates to grazing pressure in their habitats (a) and in high-density regions (b) on the TP.
Figure 10. Spatial pattern of grazing pressure for the four ungulates in the high-density regions on the TP. (a) GPI of Tibetan antelope in the high-density regions on the TP; (b) GPI of Tibetan wild donkey in the high-density regions on the TP; (c) GPI of Tibetan gazelle in the high-density regions on the TP; (d) GPI of wild yak in the high-density regions on the TP.
Figure 11. Contributions of grasslands to grazing pressure for the four ungulates. AD: alpine desert steppe; AS: alpine steppe; AM: alpine meadow; TT: temperate typical steppe; TD: temperate desert steppe; TM: temperate meadow steppe. The tussock is inhabited only by the Tibetan antelope, and its numbers there are small; therefore, it was not included in the analysis. (a) Contributions of the different grassland types to the GPI of the Tibetan wild donkey; (b) contributions of the different grassland types to the GPI of the Tibetan gazelle; (c) contributions of the different grassland types to the GPI of the wild yak; (d) contributions of the different grassland types to the GPI of the Tibetan antelope.
22 pages, 3002 KiB  
Review
Overview of Operational Global and Regional Ocean Colour Essential Ocean Variables Within the Copernicus Marine Service
by Vittorio E. Brando, Rosalia Santoleri, Simone Colella, Gianluca Volpe, Annalisa Di Cicco, Michela Sammartino, Luis González Vilas, Chiara Lapucci, Emanuele Böhm, Maria Laura Zoffoli, Claudia Cesarini, Vega Forneris, Flavio La Padula, Antoine Mangin, Quentin Jutard, Marine Bretagnon, Philippe Bryère, Julien Demaria, Ben Calton, Jane Netting, Shubha Sathyendranath, Davide D'Alimonte, Tamito Kajiyama, Dimitry Van der Zande, Quinten Vanhellemont, Kerstin Stelzer, Martin Böttcher and Carole Lebreton
Remote Sens. 2024, 16(23), 4588; https://doi.org/10.3390/rs16234588 - 6 Dec 2024
Viewed by 889
Abstract
The Ocean Colour Thematic Assembly Centre (OCTAC) of the Copernicus Marine Service delivers state-of-the-art Ocean Colour core products for both the global ocean and European seas, derived from multiple satellite missions. Since 2015, the OCTAC has provided global and regional high-level merged products that offer value-added information not directly available from space agencies. This is achieved by integrating observations from various missions, resulting in homogenized, inter-calibrated datasets with broader spatial coverage than single-sensor data streams. The OCTAC has continuously enhanced the basin-level accuracy of essential ocean variables (EOVs) across the global ocean and European regional seas, including the Atlantic, Arctic, Baltic, Mediterranean, and Black seas. From 2019 onwards, new EOVs have been introduced, focusing on phytoplankton functional groups, community structure, and primary production. This paper provides an overview of the evolution of the OCTAC catalogue from 2015 to date, evaluates the accuracy of global and regional products, and outlines plans for future product development. Full article
(This article belongs to the Special Issue Oceans from Space V)
Figures:
Figure 1. Overview of the OCTAC catalogue evolution of the single-sensor and multisensor global and regional OC products from 2015 to 2024. The blue lines mark the timelines of each product type; covered basins are marked in green and listed under each line; satellite sensors are marked in black; the spatial resolution of products/datasets is marked in blue. The red dots mark the dates of the MY reprocessing.
Figure 2. Spatial coverage of the Sentinel-3 OLCI 300 m and Sentinel-2 MSI 100 m datasets. (A) All European regional seas and a 200 km strip from the coastline in the global product for Sentinel-3 OLCI. (B) A 20 km strip from the coastline for the European coastal waters covered in 5 days with Sentinel-2 MSI.
Figure 3. OC sensors and high-resolution imagers adopted upstream in OCTAC processing chains. Timelines of legacy, current, and forthcoming (approved and planned) sensors are displayed (source: CEOS): red identifies science OC missions, blue identifies operational OC missions, and brown identifies high-resolution/land imagers.
Figure 4. Mediterranean Sea satellite CHL trend over the period 1997–2023, based on the CMEMS product OCEANCOLOUR_MED_BGC_L4_MY_009_144. (A) Time series and linear trend of monthly regional average satellite CHL: the monthly regional average (weighted by pixel area) time series is shown in gray, with the de-seasonalized time series in green and the linear trend in blue. (B) Map of satellite CHL trend, expressed in % per year, with positive trends in red and negative trends in blue.
Figure 5. Time series (1998–2023) of the SDG 14.1.1a Level 2 sub-indicator for European countries. The potential eutrophication values for European waters are based on CMEMS OC regional products aggregated over the EEZ for each country. AL: Albania, BE: Belgium, BG: Bulgaria, CY: Cyprus, DE: Germany, DK: Denmark, EE: Estonia, EL: Greece, ES: Spain, FI: Finland, FO: Faroe Islands, FR: France, GE: Georgia, GL: Greenland, HR: Croatia, IE: Ireland, IS: Iceland, IT: Italy, LT: Lithuania, LV: Latvia, MC: Monaco, ME: Montenegro, MT: Malta, NL: Netherlands, NO: Norway, PL: Poland, PT: Portugal, RO: Romania, SE: Sweden, SI: Slovenia, UK: United Kingdom.
29 pages, 11518 KiB  
Article
Evaluating the Two-Source Energy Balance Model Using MODIS Data for Estimating Evapotranspiration Time Series on a Regional Scale
by Mahsa Bozorgi, Jordi Cristóbal and Magí Pàmies-Sans
Remote Sens. 2024, 16(23), 4587; https://doi.org/10.3390/rs16234587 - 6 Dec 2024
Viewed by 871
Abstract
Estimating daily continuous evapotranspiration (ET) can significantly enhance the monitoring of crop stress and drought on regional scales, as well as benefit the design of agricultural drought early warning systems. However, there is a need to verify the models' performance in estimating the spatiotemporal continuity of long-term daily evapotranspiration (ETd) on regional scales due to uncertainties in satellite measurements. In this study, a thermal-based two-source energy balance (TSEB) model was used concurrently with Terra/Aqua MODIS data and the ERA5 atmospheric reanalysis dataset to calculate the surface energy balance of the soil–canopy–atmosphere continuum and estimate ET at a 1 km spatial resolution from 2000 to 2022. The performance of the model was evaluated using 11 eddy covariance flux towers in various land cover types (i.e., savannas, woody savannas, croplands, evergreen broadleaf forests, and open shrublands), correcting for the energy balance closure (EBC). The Bowen ratio (BR) and residual (RES) methods were used for enforcing the EBC in the EC observations. The modeled ET was evaluated against unclosed ET and closed ET (ETBR and ETRES) under clear-sky and all-sky observations as well as gap-filled data. The results showed that the modeled ET presented better agreement with closed ET than with unclosed ET in both the Terra and Aqua datasets. Additionally, although the model overestimated ETd across all land cover types, it successfully captured the spatiotemporal variability in ET. After gap-filling, the total number of days compared with flux measurements increased substantially, from 13,761 to 19,265 for Terra and from 13,329 to 19,265 for Aqua. The overall mean results, including clear-sky and all-sky observations as well as gap-filled data, showed the lowest errors for the Aqua dataset with ETRES, with a mean bias error (MBE) of 0.96 mm day⁻¹, an average root mean square error (RMSE) of 1.47 mm day⁻¹, and a correlation (r) value of 0.51. The equivalent figures for Terra were about 1.06 mm day⁻¹, 1.60 mm day⁻¹, and 0.52. Additionally, the results from the gap-filling model showed only small changes compared with the all-sky observations, which demonstrates that the modeling framework remained robust even with the expanded set of days. Hence, the presented modeling framework can serve as a pathway for estimating daily remote sensing-based ET on regional scales. Furthermore, in terms of temporal trends, the intra-annual and inter-annual variability in ET can be used as indicators for monitoring crop stress and drought. Full article
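The Bowen-ratio closure mentioned above can be stated compactly: the available energy (Rn − G) is redistributed between H and LE while preserving the measured Bowen ratio β = H/LE, whereas the residual method instead assigns the whole imbalance to LE. A minimal sketch with made-up flux values:

```python
# Hedged sketch of the Bowen-ratio (BR) energy-balance-closure correction
# applied to eddy-covariance fluxes. Inputs are illustrative, in W m-2.
def close_energy_balance_br(rn: float, g: float, h: float, le: float):
    """Return (H_closed, LE_closed) preserving beta = H/LE."""
    beta = h / le                       # Bowen ratio from the raw fluxes
    le_closed = (rn - g) / (1.0 + beta)
    h_closed = beta * le_closed
    return h_closed, le_closed
    # RES alternative: le_res = rn - g - h (all imbalance assigned to LE)

# Raw H + LE (400) falls short of available energy Rn - G (450):
print(close_energy_balance_br(rn=500.0, g=50.0, h=180.0, le=220.0))
```

After correction, H_closed + LE_closed equals Rn − G exactly, which is why the closed series (ETBR, ETRES) provide the fairer benchmark for a model that conserves energy by construction.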
Figures:
Figure 1. Location of the study area and the 11 selected flux towers. Projection system: UTM-30N WGS-84.
Figure 2. Flowchart of the modeling framework estimating ETd time series. The orange rectangle denotes the pre-processing of MODIS vegetation indices through TIMESAT.
Figure 3. TSEB modelling scheme (adapted from [61]).
Figure 4. Scatterplots of the EBC calculated using the EC with the statistical metrics for the entire study period (top left) and the days compared with the model (bottom right) at each flux tower.
Figure 5. Temporal variations in the modeled ETd estimated using Terra and Aqua datasets compared to in situ-measured ET at flux towers.
Figure 6. Scatterplots of the modeled ETd estimated using Terra against unclosed and closed ET (ETBR and ETRES).
Figure 7. Scatterplots of the modeled ETd estimated using Aqua against unclosed and closed ET (ETBR and ETRES).
Figure 8. Scatterplots of the LST obtained from the MODIS Terra dataset and the LST calculated using half-hourly flux tower data at satellite overpass time.
Figure 9. Scatterplots of the LST obtained from the MODIS Aqua dataset and the LST calculated using half-hourly flux tower data at satellite overpass time.
Figure 10. Mean monthly variability in estimated ET using the Terra dataset.
Figure 11. Mean monthly variability in estimated ET using the Aqua dataset.
Figure 12. Seasonal variation in estimated ET from 2000 to 2022 in the study area using the Terra dataset.
Figure 13. Seasonal variation in estimated ET from 2002 to 2022 in the study area using the Aqua dataset.
Figure 14. Temporal variation in the annual cumulative estimated ET from 2000 to 2022 in the study area using the Aqua (left) and Terra (right) datasets.
Figure 15. Spatiotemporal variability in the annual cumulative estimated ET from 2000 to 2022 in the study area using the Terra dataset.
Figure 16. Spatiotemporal variability in the annual cumulative estimated ET from 2000 to 2022 in the study area using the Aqua dataset.
36 pages, 41599 KiB  
Article
A Large-Scale Inter-Comparison and Evaluation of Spatial Feature Engineering Strategies for Forest Aboveground Biomass Estimation Using Landsat Satellite Imagery
by John B. Kilbride and Robert E. Kennedy
Remote Sens. 2024, 16(23), 4586; https://doi.org/10.3390/rs16234586 - 6 Dec 2024
Viewed by 785
Abstract
Aboveground biomass (AGB) estimates derived from Landsat's spectral bands are limited by spectral saturation when AGB densities exceed 150–300 Mg ha⁻¹. Statistical features that characterize image texture have been proposed as a means to alleviate spectral saturation. However, apart from Gray Level Co-occurrence Matrix (GLCM) statistics, many spatial feature engineering techniques (e.g., morphological operations or edge detectors) have not been evaluated in the context of forest AGB estimation. Moreover, many prior investigations have been constrained by limited geographic domains and sample sizes. We utilize 176 lidar-derived AGB maps covering ∼9.3 million ha of forests in the Pacific Northwest of the United States to construct an expansive AGB modeling dataset that spans numerous biophysical gradients and contains AGB densities exceeding 1000 Mg ha⁻¹. We conduct a large-scale inter-comparison of multiple spatial feature engineering techniques, including GLCMs, edge detectors, morphological operations, spatial buffers, neighborhood vectorization, and neighborhood similarity features. Our numerical experiments indicate that statistical features derived from GLCMs and spatial buffers yield the greatest improvement in AGB model performance out of the spatial feature engineering strategies considered. Including spatial features in Random Forest AGB models reduces the root mean squared error (RMSE) by 9.97 Mg ha⁻¹. We contextualize this improvement in model performance by comparing to AGB models developed with multi-temporal features derived from the LandTrendr and Continuous Change Detection and Classification algorithms. The inclusion of temporal features reduces the model RMSE by 18.41 Mg ha⁻¹. When spatial and temporal features are both included in the model's feature set, the RMSE decreases by 21.71 Mg ha⁻¹. We conclude that spatial feature engineering strategies can yield nominal gains in model performance. However, this improvement came at the cost of increased model prediction bias. Full article
(This article belongs to the Section Biogeosciences Remote Sensing)
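For readers who want to try the texture features that performed best here, GLCM statistics are available in scikit-image. The sketch below computes a few standard GLCM properties for one image window; the window size, quantization, and offsets are illustrative choices, not the study's configuration.

```python
# Hedged sketch of GLCM texture features of the kind compared in this study.
# The "window" is random stand-in data for a quantized Landsat band chip.
import numpy as np
from skimage.feature import graycomatrix, graycoprops

rng = np.random.default_rng(0)
window = rng.integers(0, 64, size=(33, 33)).astype(np.uint8)  # 64 gray levels

# Co-occurrence matrix for two offsets (horizontal and vertical neighbors)
glcm = graycomatrix(window, distances=[1], angles=[0, np.pi / 2],
                    levels=64, symmetric=True, normed=True)

features = {prop: graycoprops(glcm, prop).mean()   # average over the offsets
            for prop in ("contrast", "homogeneity", "correlation", "ASM")}
print(features)
```

Computed in a moving window over each spectral band, these statistics become additional predictor layers alongside the raw reflectances in the Random Forest models.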
Figures:
Figure 1. The perimeters of the lidar AGB maps that were used as reference data in this analysis. The average forest AGB (Mg ha⁻¹) in each perimeter is depicted.
Figure 2. An illustration of the sampling and data partitioning scheme used to generate the modeling dataset. A 500 m buffer was placed around test set locations to exclude samples from the training and development sets. This mitigates the impact of spatial autocorrelation on our numerical experiments. Plots are superimposed over Landsat imagery (shortwave infrared-2, near-infrared, red reflectance; left panel) and true-color National Agricultural Imagery Program 1 m imagery (right panel).
Figure 3. An overview of the image processing and feature engineering workflow used in this analysis.
Figure 4. RMSE distributions for the RF models developed in experiment 1.
Figure 5. Predicted vs. observed AGB values from the second experiment comparing the AGB predictions generated by Random Forest models over the testing set. Models were produced using (A) the baseline features, (B) the baseline and spatial features, (C) the baseline and temporal features, and (D) the baseline, spatial, and temporal features. The relationships are summarized using an ordinary least squares regression curve (red line). The black dashed line is the one-to-one curve.
Figure 6. The location of the four 15 km² subsets (red squares) that were selected to visualize the outputs from the AGB models developed in experiment 2. The subsets are located in (A) the Coast Range in Oregon, (B) Eastern Oregon, (C) North Central Washington, and (D) Central Idaho.
Figure 7. The reference lidar AGB map and the spatial residuals from each of the four models applied to a 15 km² area in the Oregon Coast Range. Red colors indicate that the model overestimated the lidar AGB density. Blue indicates that the model underestimated the lidar AGB density.
Figure 8. The reference lidar AGB map and the spatial residuals from each of the four models applied to a 15 km² area in Eastern Oregon. Red colors indicate that the model overestimated the lidar AGB density. Blue indicates that the model underestimated the lidar AGB density.
Figure 9. The reference lidar AGB map and the spatial residuals from each of the four models applied to a 15 km² area in Central Idaho. Red colors indicate that the model overestimated the lidar AGB density. Blue indicates that the model underestimated the lidar AGB density.
Figure 10. The reference lidar AGB map and the spatial residuals from each of the four models applied to a 15 km² area in North Central Washington. Red colors indicate that the model overestimated the lidar AGB density. Blue indicates that the model underestimated the lidar AGB density.