Remote Sens., Volume 15, Issue 10 (May-2 2023) – 219 articles

Cover Story (view full-size image): Climate and management practices jointly control the spatial and temporal patterns of land surface phenology. However, most studies focus solely on analyzing the climatic controls on the inter-annual variability and trends in vegetation phenology. This study examined the impacts of climate and management practices and their interactions. Results showed that the interactions of drought and management (baling or grazing) can greatly affect vegetation phenology and suppress production. Burning plus baling might be a good management strategy in a year with good rainfall to increase forage productivity. The impacts of climate–management interactions suggest that ranchers need to adjust management strategies (dynamic instead of static management plans) based on climatic conditions to maintain productive and sustainable grasslands.
  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive the table of contents of newly released issues.
  • Articles are published in both HTML and PDF forms; PDF is the official format. To view a paper in PDF format, click on the "PDF Full-text" link and open it with the free Adobe Reader.
22 pages, 9060 KiB  
Article
Improving the Spatial Accuracy of UAV Platforms Using Direct Georeferencing Methods: An Application for Steep Slopes
by Mustafa Zeybek, Selim Taşkaya, Ismail Elkhrachy and Paolo Tarolli
Remote Sens. 2023, 15(10), 2700; https://doi.org/10.3390/rs15102700 - 22 May 2023
Cited by 8 | Viewed by 3368
Abstract
The spatial accuracy of unmanned aerial vehicles (UAVs) and the images they capture play a crucial role in the mapping process. Researchers are exploring solutions that use image-based techniques such as structure from motion (SfM) to produce topographic maps from UAVs while achieving extremely high positional accuracy with minimal surface measurements. Advancements in technology have enabled real-time kinematic (RTK) to increase positional accuracy to 1–3 times the ground sampling distance (GSD). This paper focuses on post-processing kinematic (PPK) to achieve positional accuracy of one GSD or better. To achieve this, precise satellite orbits, clock information and UAV global navigation satellite system observation files are used to calculate the camera positions with the highest positional accuracy. RTK/PPK analysis is conducted to improve the positional accuracies obtained from different flight patterns and altitudes. Data were collected at altitudes of 80 and 120 m, resulting in GSD values of 1.87 cm/px and 3.12 cm/px, respectively. The evaluation of ground checkpoints using the proposed PPK methodology with one ground control point demonstrated root mean square error values of 2.3 cm (horizontal, nadiral) and 2.4 cm (vertical, nadiral) at an altitude of 80 m, and 1.4 cm (horizontal, oblique) and 3.2 cm (vertical, terrain-following) at an altitude of 120 m. These results suggest that the proposed methodology can achieve high positional accuracy for UAV image georeferencing. The main contribution of this paper is to evaluate the PPK approach for achieving high positional accuracy with unmanned aerial vehicles and to assess the effect of different flight patterns and altitudes on the accuracy of the resulting topographic maps. Full article
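The altitude–GSD figures quoted above follow the standard pinhole relation GSD = altitude × pixel pitch / focal length; a minimal sketch in Python, where the pixel pitch and focal length are hypothetical values, not sensor parameters taken from the paper:

```python
def ground_sampling_distance(altitude_m, pixel_pitch_um, focal_length_mm):
    """GSD in cm/px for a nadir image: altitude * pixel pitch / focal length."""
    return altitude_m * (pixel_pitch_um * 1e-6) / (focal_length_mm * 1e-3) * 100.0

# Hypothetical sensor: 2.4 um pixel pitch, 10.26 mm focal length (assumed values)
gsd_80 = ground_sampling_distance(80, 2.4, 10.26)
gsd_120 = ground_sampling_distance(120, 2.4, 10.26)
```

With a fixed camera, GSD scales linearly with altitude, which is why flying higher trades resolution for coverage.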
Show Figures

Graphical abstract
Figure 1. The study area: (a) Türkiye in the global sphere; (b) the province of Artvin in Turkey; (c) the orthomosaics of the study area; (d) the DSM of the study area.
Figure 2. Flowchart of the proposed methodology.
Figure 3. Data processing and flight types: (a) indirect georeferencing with GCPs; (b) post-processing kinematic (PPK); (c) real-time kinematic (RTK); (d) nadir 2D flight type; (e) oblique 3D flight type; (f) terrain-following 2D flight type.
Figure 4. Location of the single GCP (R10) and the distribution of the other checkpoints.
Figures A1–A5. Results of the 80 m 2D, 80 m terrain-following, 120 m 2D, 120 m 3D and 120 m terrain-following flights, respectively: (a) estimated camera position residual histograms; (b) camera position normality; (c) CP residuals; (d) CP residual normality plots.
23 pages, 2827 KiB  
Article
Boosting Adversarial Transferability with Shallow-Feature Attack on SAR Images
by Gengyou Lin, Zhisong Pan, Xingyu Zhou, Yexin Duan, Wei Bai, Dazhi Zhan, Leqian Zhu, Gaoqiang Zhao and Tao Li
Remote Sens. 2023, 15(10), 2699; https://doi.org/10.3390/rs15102699 - 22 May 2023
Cited by 8 | Viewed by 1835
Abstract
Adversarial example generation on Synthetic Aperture Radar (SAR) images is an important research area that could have significant impacts on security and environmental monitoring. However, most current adversarial attack methods on SAR images are designed for white-box situations by end-to-end means, which are often difficult to achieve in real-world situations. This article proposes a novel black-box targeted attack method, called Shallow-Feature Attack (SFA). Specifically, SFA assumes that the shallow features of a model better reflect spatial and semantic information such as target contours and textures in the image. The proposed SFA generates ghost data packages for input images and generates critical features by extracting gradients and feature maps at shallow layers of the model. A feature-level loss is then constructed using the critical features from both clean images and target images, and is combined with the end-to-end loss to form a hybrid loss function. By fitting the critical features of the input image at specific shallow layers of the neural network to the target critical features, our attack method generates more powerful and transferable adversarial examples. Experimental results show that the adversarial examples generated by the SFA attack method improved the success rate of single-model attacks under a black-box scenario by an average of 3.73%, and by 4.61% after combining them with ensemble-model attacks without victim models. Full article
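The hybrid loss described above (a feature-level term on shallow "critical features" plus the usual end-to-end classification term) can be sketched as follows; the mean-squared feature distance and the weighting factor `alpha` are illustrative assumptions, not the paper's exact formulation:

```python
import math

def feature_loss(feat_adv, feat_target):
    """Mean-squared distance between flattened shallow 'critical features' (illustrative)."""
    n = len(feat_adv)
    return sum((a - t) ** 2 for a, t in zip(feat_adv, feat_target)) / n

def cross_entropy(logits, target_idx):
    """Softmax cross-entropy for the end-to-end classification term."""
    m = max(logits)  # subtract the max for numerical stability
    log_sum = m + math.log(sum(math.exp(z - m) for z in logits))
    return log_sum - logits[target_idx]

def hybrid_loss(feat_adv, feat_target, logits, target_idx, alpha=0.5):
    """Weighted sum of the feature-level and end-to-end terms (alpha is an assumed hyperparameter)."""
    return alpha * feature_loss(feat_adv, feat_target) + (1 - alpha) * cross_entropy(logits, target_idx)
```

In a real attack the gradient of this combined objective with respect to the input image would drive the perturbation update; pulling shallow features toward the target class is what makes the examples transfer across models.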
(This article belongs to the Section Remote Sensing Image Processing)
Show Figures

Figure 1. Flowchart of the SFA algorithm.
Figure 2. From left to right: the original image of BTR-60 and its feature maps extracted from the first pooling layer of AlexNet, DenseNet121, RegNet_x_400mf and ResNet18.
Figure 3. The process of generating ghost data packages through a 0–1 random mask of the input data.
Figure 4. Critical feature loss process.
Figure 5. Attack success rates on ResNet18 against 8 test models using SFA with different 0–1 random mask rates p.
Figure 6. Different labels in the MSTAR dataset.
Figure 7. Pixel differences between the compressed and reconstructed adversarial samples generated by the SFA method and the images generated solely using the SFA method, shown in color with non-zero pixels highlighted for better visualization.
Figure 8. Line graph comparing the attack success rates of SFA and its variant methods (blue lines) with their corresponding -CR methods (yellow lines).
24 pages, 6172 KiB  
Article
Two New Methods Based on Implicit Expressions and Corresponding Predictor-Correctors for Gravity Anomaly Downward Continuation and Their Comparison
by Chong Zhang, Pengbo Qin, Qingtian Lü, Wenna Zhou and Jiayong Yan
Remote Sens. 2023, 15(10), 2698; https://doi.org/10.3390/rs15102698 - 22 May 2023
Cited by 1 | Viewed by 1774
Abstract
Downward continuation is a key technique for processing and interpreting gravity anomalies, as it has a major role in reducing values to horizontal planes and identifying small and shallow sources. However, it can be unstable and inaccurate, particularly as continuation depth increases. While the Milne and Adams–Bashforth methods based on numerical solutions of the mean-value theorem have partly addressed these problems, more accurate and realistic methods are needed to enhance the results. To address these challenges, we present two new methods, Milne–Simpson and Adams–Bashforth–Moulton, based on implicit expressions and their predictor-correctors. We test the validity of the presented methods by applying them to synthetic models and real data, and we obtain stable and accurate downward continuation to large depths (eight times the depth interval). To facilitate wider applications, we use vertical derivatives (of the first order) calculated by the integrated second vertical derivatives (ISVD) method to replace theoretical ones from forward calculations and real ones from observations, obtaining reasonable downward continuations. To further understand the effect of the introduced calculation factors, we also compare the previous and presented methods under different conditions: purely theoretical gravity anomalies and their vertical derivatives at different heights from forward calculations; gravity anomalies and their vertical derivatives at non-measurement heights above the observation surface calculated by upward continuation; vertical derivatives of gravity anomalies calculated by the ISVD method at the measurement height; and noise. While the previous Adams–Bashforth method sometimes outperforms the newly presented methods, the new Milne–Simpson and Adams–Bashforth–Moulton predictor-corrector methods generally produce better downward continuation results than previous methods. Full article
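The Adams–Bashforth–Moulton predictor-corrector named above is a standard multistep scheme; as a generic illustration of the predictor-corrector idea (applied here to a simple ODE, not to the paper's continuation-in-depth formulation):

```python
def abm4(f, t0, y0, h, n):
    """4-step Adams-Bashforth predictor + Adams-Moulton corrector for y' = f(t, y)."""
    # Bootstrap the first three steps with classical RK4
    ts, ys = [t0], [y0]
    for _ in range(3):
        t, y = ts[-1], ys[-1]
        k1 = f(t, y)
        k2 = f(t + h / 2, y + h / 2 * k1)
        k3 = f(t + h / 2, y + h / 2 * k2)
        k4 = f(t + h, y + h * k3)
        ts.append(t + h)
        ys.append(y + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4))
    fs = [f(t, y) for t, y in zip(ts, ys)]
    for _ in range(3, n):
        t, y = ts[-1], ys[-1]
        # Explicit Adams-Bashforth 4-step predictor
        yp = y + h / 24 * (55 * fs[-1] - 59 * fs[-2] + 37 * fs[-3] - 9 * fs[-4])
        # Implicit Adams-Moulton corrector, evaluated at the predicted value
        yc = y + h / 24 * (9 * f(t + h, yp) + 19 * fs[-1] - 5 * fs[-2] + fs[-3])
        ts.append(t + h)
        ys.append(yc)
        fs.append(f(ts[-1], yc))
    return ts, ys

# Demo: integrate y' = y from t = 0 with y(0) = 1 (exact solution e^t)
ts, ys = abm4(lambda t, y: y, 0.0, 1.0, 0.01, 100)
```

The corrector step is what the abstract calls the implicit expression: it uses the derivative at the yet-unknown next point, approximated via the explicit predictor.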
Show Figures

Figure 1. Subsurface distribution of the synthetic cuboid model.
Figure 2. Theoretical gravity anomalies and their vertical derivatives from forward calculations at different heights are used in the downward continuations. All downward continuation depths are 8 m (eight times the depth interval), to the height of −8 m. (a) The gravity anomaly to be downward continued: the theoretical gravity anomaly from forward calculation at the measurement height of 0 m, regarded as the ground observation surface. (b) The reference gravity anomaly: the theoretical gravity anomaly from forward calculation at the height of −8 m. (c–h) The downward continuation of (a) by the classical FFT, integral iteration, Milne, Milne–Simpson predictor-corrector, Adams–Bashforth and Adams–Bashforth–Moulton predictor-corrector methods, respectively.
Figure 3. The theoretical gravity anomaly and its vertical derivative at the measurement height of 0 m from the forward calculation, together with the corresponding gravity anomalies and vertical derivatives at non-measurement heights above 0 m calculated by upward continuation, are used in the downward continuations. Continuation depth and panels (a–h) as in Figure 2.
Figure 4. The theoretical gravity anomaly from forward calculation at the measurement height of 0 m, the corresponding vertical derivatives calculated at 0 m by the ISVD method, and the corresponding gravity anomalies and vertical derivatives at non-measurement heights above 0 m calculated by upward continuation are used in the downward continuations. Continuation depth and panels (a–h) as in Figure 2.
Figure 5. The differences between the downward continuations and the reference gravity anomaly from forward calculation at the height of −8 m. (a–f) The differences between Figure 4b and Figure 4c–h, respectively.
Figure 6. As Figure 4, but with 2% Gaussian white noise added to the theoretical gravity anomaly at the measurement height of 0 m; the reference anomaly in (b) is noise-free.
Figure 7. Variations of RMS errors between the reference gravity anomalies from forward calculations and the downward continuation results by different methods, from the height of −1 m to −10 m. Green lines: integral iteration; blue: Milne; magenta: Milne–Simpson predictor-corrector; cyan: Adams–Bashforth; black: Adams–Bashforth–Moulton predictor-corrector. (a,b) Under the condition of Section 3.1.1 in general and logarithmic coordinates; (c,d) Section 3.1.2; (e,f) Section 3.1.3.
Figure 8. Both the observed airborne gravity anomaly and the observed vertical derivative are used for downward continuation in the Nechako basin, with a continuation depth of 2000 m. (a) The gravity anomaly to be downward continued, obtained from the measured airborne gravity anomaly by upward continuation to 2200 m. (b) The observed gravity anomaly at the height of 200 m, taken as the reference gravity anomaly. (c–f) The downward continuation of (a) by the Milne, Milne–Simpson predictor-corrector, Adams–Bashforth and Adams–Bashforth–Moulton predictor-corrector methods, respectively.
Figure 9. As Figure 8, but using only the observed airborne gravity anomaly.
Figure 10. RMS errors between the observed airborne gravity anomaly (taken as the reference anomaly) and the downward continuations from the upward-continued gravity anomaly at the height of 200 m. Solid lines: both the real gravity anomaly and its vertical derivative (VD), obtained by upward continuation to 2200 m, as input; dashed lines: only the real gravity anomaly as input. Blue lines: Milne; magenta: Milne–Simpson predictor-corrector; cyan: Adams–Bashforth; black: Adams–Bashforth–Moulton predictor-corrector.
25 pages, 5655 KiB  
Article
Tree Species Classification in Subtropical Natural Forests Using High-Resolution UAV RGB and SuperView-1 Multispectral Imageries Based on Deep Learning Network Approaches: A Case Study within the Baima Snow Mountain National Nature Reserve, China
by Xianggang Chen, Xin Shen and Lin Cao
Remote Sens. 2023, 15(10), 2697; https://doi.org/10.3390/rs15102697 - 22 May 2023
Cited by 9 | Viewed by 2833
Abstract
Accurate information on dominant tree species and their spatial distribution in subtropical natural forests is a key ecological monitoring factor for accurately characterizing forest biodiversity, depicting tree competition mechanisms and quantitatively evaluating forest ecosystem stability. In this study, a subtropical natural forest in northwest Yunnan province, China, was selected as the study area. Firstly, an object-oriented multi-resolution segmentation (MRS) algorithm was used to segment individual tree crowns from UAV RGB imagery and satellite multispectral imagery in forests with different densities (low (547 n/ha), middle (753 n/ha) and high (1040 n/ha)), and the parameters of the MRS algorithm were tested and optimized for accurately extracting the crown and position information of each individual tree. Secondly, texture metrics of the UAV RGB imagery and spectral metrics of the satellite multispectral imagery within each individual tree crown were extracted, and the random forest algorithm and three deep learning networks constructed in this study were used to classify the five dominant tree species. Finally, we compared and evaluated the performance of the random forest algorithm and the three deep learning networks for dominant tree species classification using field measurement data, and investigated the influence of the number of training samples on the classification accuracy of the deep learning networks. The results showed that: (1) Stand density had little influence on individual tree segmentation using the object-oriented MRS algorithm. In forests with different stand densities, the F1 score of individual tree segmentation based on satellite multispectral imagery was 71.3–74.7%, and that based on UAV high-resolution RGB imagery was 75.4–79.2%. (2) The overall accuracy of dominant tree species classification using the lightweight network MobileNetV2 (OA = 71.11–82.22%), the residual network ResNet34 (OA = 78.89–91.11%) and the dense network DenseNet121 (OA = 81.11–94.44%) was higher than that of the random forest algorithm (OA = 60.00–64.44%), with DenseNet121 achieving the highest overall accuracy. Texture metrics improved the overall accuracy of dominant tree species classification. (3) For the three deep learning networks, changing the number of training samples altered the overall accuracy of dominant tree species classification by 2.69–4.28%. Full article
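The F1 scores quoted in result (1) combine the precision and recall of crown detection in the usual way; a minimal sketch, where the detection counts are made-up numbers for illustration:

```python
def f1_score(true_pos, false_pos, false_neg):
    """Harmonic mean of precision and recall for detected tree crowns."""
    precision = true_pos / (true_pos + false_pos)
    recall = true_pos / (true_pos + false_neg)
    return 2 * precision * recall / (precision + recall)

# Illustrative counts: 60 crowns correctly matched, 15 spurious detections, 18 missed trees
score = f1_score(60, 15, 18)
```

The harmonic mean penalizes an imbalance between over-segmentation (false positives) and missed crowns (false negatives), which is why it is preferred over raw detection rate for segmentation benchmarks.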
(This article belongs to the Section Forest Remote Sensing)
Show Figures

Figure 1. The workflow of dominant tree species classification in subtropical natural forests of northwest Yunnan province using multi-source fine-scale remote sensing data and deep learning network approaches.
Figure 2. Overview of the Gehuaqing (GHQ) study site. (a) Map of Yunnan province and the location of Gehuaqing; (b) UAV high-resolution RGB imagery and the spatial distribution of individual trees measured in the field work; (c) high-resolution satellite multispectral imagery of the study site.
Figure 3. Steps for constructing the sample dataset of individual trees. (a) Ortho-imagery of the study area (RGB display); (b) individual tree crown boundaries generated via the object-oriented MRS algorithm; (c) individual tree samples obtained from field measurement (the red and green dots are individual trees of Yunnan pine and Oriental white oak); (d) crowns of dominant individual tree species; (e) tree crown images of dominant individual tree species. See Table 1 for short descriptions of the dominant tree species.
Figure 4. Framework of the three deep learning networks. (a) Lightweight network MobileNetV2; (b) residual network ResNet34; (c) dense network DenseNet121.
Figure 5. Sensitivity analysis of individual tree segmentation using the object-oriented MRS algorithm (plot size: 50 × 50 m). (a–c) MRS segmentation using satellite multispectral imagery (segmentation scale factors of 10, 20 and 30, respectively); (d–f) MRS segmentation using UAV RGB imagery (segmentation scale factors of 20, 40 and 60, respectively).
Figure 6. The multispectral imagery and high-resolution RGB imagery with the positions of individual trees (test data), tree tops and tree crowns detected via the object-oriented MRS algorithm in forests with three stand densities (low (N = 41), middle (N = 52) and high (N = 78)). (a,e,i) Multispectral imagery; (b,f,j) individual tree segmentation using the multispectral imagery combined with the MRS algorithm in the three plots; (c,g,k) high-resolution RGB imagery; (d,h,l) individual tree segmentation using the high-resolution RGB imagery combined with the MRS algorithm in the three plots.
Figure 7. Importance values of the ten most important metrics of the random forest algorithm used for dominant tree species classification. (a) Classification using spectral indices; (b) classification using spectral and texture metrics. Note: red correlation represents the correlation degree of the red band; red VA the variance in the red band; red CO the contrast of the red band. See Table 3b for short descriptions of the spectral and texture metrics.
Figure 8. Training processes of the three deep learning networks based on the SV sample dataset. (a,b) Change in loss value and accuracy of the lightweight network MobileNetV2; (c,d) of the residual network ResNet34; (e,f) of the dense network DenseNet121.
Figure 9. As Figure 8, but based on the SVRGB sample dataset.
Figure 10. Mapping of the five dominant tree species classifications in three plots (100 × 100 m). Left: stacked RGB and multispectral imagery (RGB display); (a,c,e) plot 1, plot 2 and plot 3, respectively. Right: classification results using the dense network DenseNet121; (b,d,f) the classification maps of plot 1, plot 2 and plot 3, respectively. See Table 1 for short descriptions of the dominant tree species.
Figure 11. The overall accuracy of the three deep learning networks using 20%, 40%, 60% and 80% of the total samples of each dominant tree species, based on the SVRGB sample dataset.
19 pages, 9122 KiB  
Article
Spectral-Swin Transformer with Spatial Feature Extraction Enhancement for Hyperspectral Image Classification
by Yinbin Peng, Jiansi Ren, Jiamei Wang and Meilin Shi
Remote Sens. 2023, 15(10), 2696; https://doi.org/10.3390/rs15102696 - 22 May 2023
Cited by 13 | Viewed by 3638
Abstract
Hyperspectral image (HSI) classification has rich applications in several fields. In the past few years, convolutional neural network (CNN)-based models have demonstrated great performance in HSI classification. However, CNNs are inadequate at capturing long-range dependencies, while the spectral dimension of an HSI can be regarded as long-sequence information. More and more researchers are therefore turning to the transformer, which excels at processing sequential data. In this paper, a spectral shifted-window self-attention-based transformer (SSWT) backbone network is proposed. It improves the extraction of local features compared to the classical transformer. In addition, a spatial feature extraction module (SFE) and spatial position encoding (SPE) are designed to enhance the spatial feature extraction of the transformer: the SFE module addresses the transformer's deficiency in capturing spatial features, and the proposed SPE compensates for the loss of the spatial structure of the HSI data after input to the transformer. On three public datasets, we ran extensive experiments and compared the proposed model with a number of powerful deep learning models. The results demonstrate that the suggested approach is efficient and that the proposed model performs better than other advanced models. Full article
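The shifted-window self-attention underlying SSWT restricts attention to non-overlapping windows and cyclically shifts the feature map in alternate layers so that information crosses window borders; a NumPy sketch of these two building blocks, where the map size and window size are illustrative assumptions:

```python
import numpy as np

def window_partition(x, w):
    """Split an (H, W, C) feature map into non-overlapping windows of w*w tokens each."""
    H, W, C = x.shape
    x = x.reshape(H // w, w, W // w, w, C)
    return x.transpose(0, 2, 1, 3, 4).reshape(-1, w * w, C)

def cyclic_shift(x, s):
    """Roll the map by s pixels so the next layer's windows straddle the old borders."""
    return np.roll(x, shift=(-s, -s), axis=(0, 1))

# Illustrative 8 x 8 map with 3 channels, window size 4, shift of 2
x = np.arange(8 * 8 * 3, dtype=float).reshape(8, 8, 3)
wins = window_partition(x, 4)        # 4 windows of 16 tokens each
shifted = cyclic_shift(x, 2)
restored = cyclic_shift(shifted, -2)  # the cyclic shift is exactly invertible
```

Self-attention is then computed independently inside each (w*w, C) window, which keeps the cost linear in image size rather than quadratic.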
Show Figures

Graphical abstract
Figure 1: Overall structure of the proposed SSWT model for HSI classification.
Figure 2: The structure of the spatial attention in SFE.
Figure 3: SPE in a sample with a spatial size of 7 × 7.
Figure 4: The structure of (a) the S(W)-MSA of SwinT and (b) the S-(S)W-MSA of SSWT (ours).
Figure 5: Visualization of the PU dataset. (a) False-color map. (b) Ground-truth map.
Figure 6: Visualization of the SA dataset. (a) False-color map. (b) Ground-truth map.
Figure 7: Visualization of the HU dataset. (a) False-color map. (b) Ground-truth map.
Figure 8: Overall accuracy (%) with different patch sizes on the three datasets. The window numbers in the transformer layers are set to [1, 2, 2, 4].
Figure 9: Classification maps of different methods on the PU dataset. (a) Bi-LSTM. (b) 3D-CNN. (c) RSSAN. (d) DFFN. (e) ViT. (f) SwinT. (g) SF. (h) HiT. (i) SSFTT. (j) Proposed SSWT.
Figure 10: Classification maps of different methods on the SA dataset. (a) Bi-LSTM. (b) 3D-CNN. (c) RSSAN. (d) DFFN. (e) ViT. (f) SwinT. (g) SF. (h) HiT. (i) SSFTT. (j) Proposed SSWT.
Figure 11: Classification maps of different methods on the HU dataset. (a) Bi-LSTM. (b) 3D-CNN. (c) RSSAN. (d) DFFN. (e) ViT. (f) SwinT. (g) SF. (h) HiT. (i) SSFTT. (j) Proposed SSWT.
Figure 12: Classification results with different training sample percentages on the three datasets. (a) PU. (b) SA. (c) HU.
18 pages, 8497 KiB  
Article
Eddy Covariance CO2 Flux Gap Filling for Long Data Gaps: A Novel Framework Based on Machine Learning and Time Series Decomposition
by Dexiang Gao, Jingyu Yao, Shuting Yu, Yulong Ma, Lei Li and Zhongming Gao
Remote Sens. 2023, 15(10), 2695; https://doi.org/10.3390/rs15102695 - 22 May 2023
Cited by 6 | Viewed by 3270
Abstract
Continuous long-term eddy covariance (EC) measurements of CO2 fluxes (NEE) in a variety of terrestrial ecosystems are critical for investigating the impacts of climate change on ecosystem carbon cycling. However, due to a number of issues, approximately 30–60% of the annual flux data obtained at EC flux sites around the world are reported as gaps. Given that the annual total NEE is mostly determined by variations in the NEE data at time scales longer than one day, we propose a novel framework for gap filling NEE data based on machine learning (ML) and time series decomposition (TSD). The framework combines the strengths of ML models in predicting NEE from meteorological and environmental inputs with those of TSD methods in extracting the dominant trends in NEE time series. Using NEE data from 25 AmeriFlux sites, the performance of the proposed framework is evaluated under four artificial gap scenarios with gap lengths ranging from one hour to two months. The combined approach incorporating random forest and moving average (MA-RF) outperforms the other approaches at filling NEE gaps across all gap-length scenarios. For the scenario with a gap length of seven days, MA-RF improves R2 by 34% and reduces the root mean square error (RMSE) by 55% compared to a traditional RF-based model. The improved performance of MA-RF is most likely due to the reduced variability and complexity of the extracted low-frequency NEE data. Our results indicate that the proposed MA-RF framework provides improved gap filling for NEE time series; such continuous NEE data can enhance the accuracy of ecosystem carbon budget estimates.
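The moving-average decomposition step can be sketched as follows (a simplified illustration of the TSD idea: linear interpolation stands in for the RF re-filling step, and the 48-sample window is an assumed one-day scale for half-hourly data):

```python
import numpy as np

def ma_decompose(nee, window=48):
    """Split an NEE series into a low-frequency trend (moving average)
    and a high-frequency residual, edge-padded to keep the input length."""
    c = np.cumsum(np.insert(nee, 0, 0.0))
    ma = (c[window:] - c[:-window]) / window
    left = window // 2
    right = len(nee) - len(ma) - left
    trend = np.concatenate([np.full(left, ma[0]), ma, np.full(right, ma[-1])])
    return trend, nee - trend

def fill_gap_linear(trend, gap_mask):
    """Fill gaps in the smooth trend by linear interpolation (a stand-in
    for the ML re-filling step; the trend is far easier to fill than the
    raw, noisy series)."""
    idx = np.arange(len(trend))
    filled = trend.copy()
    filled[gap_mask] = np.interp(idx[gap_mask], idx[~gap_mask], trend[~gap_mask])
    return filled
```

The design point the sketch makes is the one the abstract argues: after removing the high-frequency fluctuations, the remaining low-frequency signal varies slowly, so even long gaps can be bridged with small error.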
Show Figures

Figure 1: (a) Location of the 25 AmeriFlux sites and pie charts of the percentages of high-quality data and data with gaps of different lengths at each site. Gap percentages are counted for five lengths (one hour, one day, one week, two months, and greater than two months). (b) Grouped histograms of all data gaps for different vegetation covers.
Figure 2: Flow chart of the proposed three-layer machine learning framework, consisting of (top panel) pre-filling NEE data using RF, (middle panel) extracting the dominant signal by decomposing the time series using the empirical mode decomposition (EMD) and moving average (MA) methods, and (bottom panel) re-filling the gaps in the extracted signal and reconstructing the NEE time series.
Figure 3: Time series decomposition using the (a) MA and (b) EMD methods, and (c) frequency histograms and PDF of the low- and high-frequency components after MA decomposition.
Figure 4: Procedures for generating the artificial-gap test, training, and validation sets used to evaluate model performance. Artificial gaps are introduced into (i) the original time series to (ii) create the test set; then (iii) the observation gaps are removed, followed by (iv) the generation of ten training/validation sets. Ten test sets are created for each gap scenario, and model performance is assessed by running each model on the test set.
Figure 5: Comparison of model performance between the traditional and proposed gap-filling frameworks at five EC sites with different vegetation types.
Figure 6: Median performance metrics of the proposed gap-filling framework for short, medium, long, and very long gap-length scenarios, using two time series decomposition methods (EMD and MA) in combination with four machine learning approaches (XGBoost, RF, SVR, and BP neural network).
Figure 7: Relative importance of input variables for (a) the trend term and (b) the fluctuation term at each site. White spots mark values with statistical significance below the 95% level.
Figure 8: Comparison of the proposed framework's model performance with the results of a peer study for gap lengths of 24 h and 7 days.
Figure 9: Comparison of the annual total NEE of the best gap-filling framework (MA-RF) with the other gap-filling approaches. (a) MA-RF vs. FLUXNET. (b) MA-RF vs. MA-XGBoost. (c) MA-RF vs. EMD-RF. (d) MA-RF vs. EMD-XGBoost.
Figure A1: Ratio of the contribution of the low- and high-frequency components to total NEE after MA and EMD decomposition.
Figure A2: Comparison of MA-decomposition trend terms and RF pre-filled annual total NEE.
Figure A3: Comparison of the mean statistics of model performance for five vegetation types: agricultural land (AL), shrubland (S), grassland (G), evergreen coniferous forest (ECF), and deciduous broadleaf forest (DBF).
Figure A4: Comparison of the monthly total NEE of the optimal gap-filling framework (MA-RF) with the FLUXNET gap-filling method.
23 pages, 9214 KiB  
Article
Nested Fabric Adaptation to New Urban Heritage Development
by Naai-Jung Shih and Yu-Huan Qiu
Remote Sens. 2023, 15(10), 2694; https://doi.org/10.3390/rs15102694 - 22 May 2023
Cited by 2 | Viewed by 1571
Abstract
Old urban reform usually reactivates the urban fabric in a new era of sustainable development. However, what remains of the former fabric, and how it interacts with the new one, often inspires curiosity, and how old residents adapt their lives to the new layout should be explored both qualitatively and quantitatively. This research assessed the old and new fabrics in the downtown area of Keelung, Taiwan, by considering the interactions between truncated layout, proportion, and infill orientation in mature and immature interfaces. According to the historical reform map made in 1907, the newly constructed area occupied the old constructed area in seven downtown blocks. On average, the area composed of new buildings ranged from 135.60% to 239.20% of the old area, and the average volume of the buildings reached a maximum of 41.72 m when compared to the old buildings in place prior to the reform. The new fabric seems to have purposefully kept the old temples at the centers of the blocks. However, the old alleys, which still remain within these blocks, have become significantly overloaded with services and now serve as auxiliary utility spaces for the in-block residences. Where the fabric was truncated or reoriented by new streets, the modification can also be easily observed on the second skin. A physical model analysis used a UAV 3D point cloud model and QGIS® to verify the axes, hierarchies, entrances, open spaces, and corners in the commission store block and temple blocks. We found that the 3D point model and historical maps present a convincing explanation of how the fabric evolved from the past to the present. Stepwise segmentation visualizes the enclosed block inside a block on the historical maps and in the present sections. We also found that new roles for old alleys have evolved behind the new fabric.
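The block-axis orientations compared in this study (e.g., bearing differences measured in QGIS® on the cadastral map) rest on a simple computation: the planar bearing between two projected map points. A minimal sketch of that helper (an illustrative stand-in, not the study's toolchain):

```python
import math

def grid_bearing(p_from, p_to):
    """Planar bearing from one map point to another, in degrees clockwise
    from grid north, as read off a projected cadastral map."""
    dx = p_to[0] - p_from[0]  # easting difference
    dy = p_to[1] - p_from[1]  # northing difference
    return math.degrees(math.atan2(dx, dy)) % 360.0
```

Subtracting the bearings of two block axes gives the angular difference between old and new fabric orientations.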
Show Figures

Figure 1: Keelung: (a) field of study (A, B, D, E, F, G, H); (b) urban reform map of 1907 with special crossroads and temple fronts indicated by black dotted circles [1]; (c) alley scene; (d) 3D urban blocks.
Figure 2: Referenced maps: (a) [1]; (b–e) [43]; (f) projection created in this study.
Figure 3: Research flowchart, operational hierarchy, and tools.
Figure 4: Field observations.
Figure 5: Assessments: (a) boundaries traced on the old historical map blocks [1] and the new aerial image for area assessment; (b) volume assessment in CloudCompare®; (c) volume assessment in Geomagic Studio®.
Figure 6: Evolving façades by years and months, and roof layout orders by central aisles.
Figure 7: (a) UAV photogrammetric 3D mesh model of a temple at the center of a block in a hidden alley; (b) map [43] with evidence and instances of fabric layout remodeling; (c) a temple at the center of a block exposed to the street along the old trails.
Figure 8: (a) The inner boundary occupies 0% to 37.28% of each corresponding block area; (b) block infill highlighted in black; (c) a covered old-trail entrance to the infill.
Figure 9: (a) Segmentation of the city inside the old and outside the new circulation system; (b) sections made through the central alley on the heritage temple block axis.
Figure 10: Historical map [1]-based segmentation of the city inside the old and outside the new circulation system: (a) 1941; (b) 1971.
Figure 11: Different levels of tolerance and circulation, street partition, and offsetting based on historical maps [1,43].
Figure 12: Field AR simulation of assumed orthogonal and diagonal partitioning of shops.
Figure 13: (a) The difference in bearing gradually varies from 89.67 degrees to 0.00 degrees; (b) cadastral map-based illustration of axes; (c) bearing measurement made in QGIS® for the A-axis (A: Ching-An Temple).
Figure 14: (a) Evolving area and volumetric percentages of seven street blocks; (b) quantitative assessment; (c) elevation diagram created in CloudCompare® from the volume calculation.
20 pages, 10895 KiB  
Article
In-Situ GNSS-R and Radiometer Fusion Soil Moisture Retrieval Model Based on LSTM
by Tianlong Zhang, Lei Yang, Hongtao Nan, Cong Yin, Bo Sun, Dongkai Yang, Xuebao Hong and Ernesto Lopez-Baeza
Remote Sens. 2023, 15(10), 2693; https://doi.org/10.3390/rs15102693 - 22 May 2023
Cited by 1 | Viewed by 2087
Abstract
Global navigation satellite system reflectometry (GNSS-R) is a remote sensing technique that measures soil moisture using signals of opportunity from GNSS, with the advantages of low cost, all-weather detection, and multi-platform application. An in situ GNSS-R and radiometer fusion soil moisture retrieval model based on LSTM (long short-term memory) networks is proposed to improve accuracy and robustness with respect to the impacts of vegetation cover and soil surface roughness. Oceanpal GNSS-R data obtained from the experimental campaign at the Valencia Anchor Station are used as the main input, the TB (brightness temperature) and TR (soil roughness and vegetation integrated attenuation coefficient) outputs of the ELBARA-II radiometer are used as auxiliary inputs, and field measurements from a Delta-T ML2x ThetaProbe soil moisture sensor are used for reference and validation. The results show that the LSTM model can retrieve soil moisture and that it performs best in the data fusion scenario combining GNSS-R and radiometer data. The STD of the multi-satellite fusion model is 0.013. Among the single-satellite models, PRN 13, 20, and 32 gave the best retrieval results, with STD = 0.011, 0.012, and 0.007, respectively.
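The LSTM retrieval model maps GNSS-R reflectivity and radiometer observables to soil moisture. The recurrence at its core can be sketched in a few lines of NumPy (a single generic LSTM step with stacked gate weights; an illustrative sketch, not the paper's trained network):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x, h, c, W, U, b):
    """One LSTM time step.

    x: input features at time t (e.g., reflectivity, TR, TB), shape (D,)
    h, c: previous hidden and cell states, shape (H,)
    W: (4H, D), U: (4H, H), b: (4H,), stacked as [input; forget; cell; output]
    """
    H = h.shape[0]
    z = W @ x + U @ h + b
    i = sigmoid(z[:H])            # input gate
    f = sigmoid(z[H:2 * H])       # forget gate
    g = np.tanh(z[2 * H:3 * H])   # candidate cell state
    o = sigmoid(z[3 * H:])        # output gate
    c_new = f * c + i * g
    h_new = o * np.tanh(c_new)
    return h_new, c_new
```

A final dense layer on the hidden state would then regress the soil moisture value; the gates let the model carry slowly varying surface conditions (roughness, vegetation) across time steps.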
Show Figures

Graphical abstract
Figure 1: GNSS-R scattering geometry.
Figure 2: Schematic diagram of GNSS-R soil moisture retrieval.
Figure 3: The three ELBARA-II prototype systems at the test site of the Swiss Federal Research Institute WSL [26]. Reprinted/adapted with permission from Ref. [26]. 2010, Schwank, M.; Wiesmann, A.; Werner, C.; Matzler, C.; Weber, D.; Murk, A.; Völksch, I.; Wegmüller, U.
Figure 4: LSTM neuron structure diagram.
Figure 5: LSTM structure diagram.
Figure 6: Oceanpal antenna and calibration box.
Figure 7: Schematic diagram of the Valencia experimental site.
Figure 8: Soil moisture ThetaProbe sensors and the Davis Vantage Pro 2 meteorological station.
Figure 9: LHCP reflectivity vs. elevation angle.
Figure 10: Data processing workflow.
Figure 11: Retrieval results of the Wang model and ELBARA-II.
Figure 12: Retrieval results of the Wang model and the ELBARA-II radiometer (hourly averages).
Figure 13: Soil moisture estimation results of the LSTM model using different input data.
Figure 14: TR data from October 2014 to June 2016.
Figure 15: Soil moisture estimation results of the LSTM model under low-roughness conditions.
Figure 16: Soil moisture estimation results of the LSTM model under high-roughness conditions.
Figure 17: Retrieval results from the LSTM model taking Rrl, TR, and TB as inputs (PRN 13, 20, 32).
Figure 18: Correlation coefficients for the four configurations of Rrl, TR, and TB.
Figure 19: RMSE for the four configurations of Rrl, TR, and TB.
Figure 20: Prediction error of the LSTM model under high-roughness conditions.
18 pages, 907 KiB  
Article
FusionPillars: A 3D Object Detection Network with Cross-Fusion and Self-Fusion
by Jing Zhang, Da Xu, Yunsong Li, Liping Zhao and Rui Su
Remote Sens. 2023, 15(10), 2692; https://doi.org/10.3390/rs15102692 - 22 May 2023
Cited by 5 | Viewed by 1882
Abstract
In the field of unmanned systems, cameras and LiDAR are important sensors that provide complementary information, but effectively fusing data from two different modalities remains a great challenge. In this paper, inspired by the idea of deep fusion, we propose a one-stage, end-to-end network named FusionPillars to fuse multisensor data (namely LiDAR point clouds and camera images). It includes three branches: a point-based branch, a voxel-based branch, and an image-based branch. We design two modules to enhance the voxel-wise features in the pseudo-image: the Set Abstraction Self (SAS) fusion module and the Pseudo View Cross (PVC) fusion module. For the data from a single sensor, by exploiting the relationship between point-wise and voxel-wise features, the SAS fusion module self-fuses the point-based and voxel-based branches to enhance the spatial information of the pseudo-image. For the data from two sensors, through a transformation of the image view, the PVC fusion module introduces RGB information as auxiliary information and cross-fuses the pseudo-image and RGB image at different scales to supplement the color information of the pseudo-image. Experimental results show that, compared with existing one-stage fusion networks, FusionPillars yields superior performance, with a considerable improvement in detection precision for small objects.
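The pseudo-image that both fusion modules operate on comes from scattering the point cloud into vertical pillars on a bird's-eye-view grid. A minimal sketch of that pillarization (per-pillar point counts stand in for the learned PointNet-style pillar features; the grid extents are assumptions):

```python
import numpy as np

def pillarize(points, x_range=(0, 40), y_range=(-20, 20), cell=1.0):
    """Scatter LiDAR points (N, 3+) into BEV pillars. Each cell of the
    resulting pseudo-image stores a simple per-pillar feature (here, the
    point count); real pillar encoders learn a feature vector per pillar."""
    nx = int((x_range[1] - x_range[0]) / cell)
    ny = int((y_range[1] - y_range[0]) / cell)
    img = np.zeros((nx, ny))
    ix = np.floor((points[:, 0] - x_range[0]) / cell).astype(int)
    iy = np.floor((points[:, 1] - y_range[0]) / cell).astype(int)
    keep = (ix >= 0) & (ix < nx) & (iy >= 0) & (iy < ny)
    np.add.at(img, (ix[keep], iy[keep]), 1.0)  # accumulate points per pillar
    return img
```

Because the pillar grid is a regular 2-D image, standard 2-D convolutions (and here, image-feature fusion) can be applied directly to it.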
Show Figures

Graphical abstract
Figure 1: Visualization of detection results. The top of the figure shows the RGB image from the camera; the lower left shows the original point cloud; the lower right shows the detection results. The experiment is based on the KITTI dataset [15].
Figure 2: Architecture of FusionPillars, composed of a point-based branch, a voxel-based branch, and an image-based branch. The SAS fusion module and PVC fusion module enhance the feature expression of the pseudo-image.
Figure 3: Architecture of the feature extraction network.
Figure 4: Architecture of the SAS fusion module.
Figure 5: Architecture of the PVC fusion module.
Figure 6: Architecture of the pseudo view transformation.
Figure 7: Architecture of the detection head.
Figure 8: The impact of different numbers of layers.
25 pages, 25202 KiB  
Article
Integration of DInSAR-PS-Stacking and SBAS-PS-InSAR Methods to Monitor Mining-Related Surface Subsidence
by Yuejuan Chen, Xu Dong, Yaolong Qi, Pingping Huang, Wenqing Sun, Wei Xu, Weixian Tan, Xiujuan Li and Xiaolong Liu
Remote Sens. 2023, 15(10), 2691; https://doi.org/10.3390/rs15102691 - 22 May 2023
Cited by 13 | Viewed by 2887
Abstract
Over-exploitation of coal mines leads to surface subsidence, surface cracks, collapses, landslides, and other geological disasters. Taking a mining area in Nalintaohai Town, Ejin Horo Banner, Ordos City, Inner Mongolia Autonomous Region, as an example, Sentinel-1A data from January 2018 to October 2019 were used as the data source in this study. Based on the high interferometric coherence of permanent scatterers (PS) over long periods, the problem of manually selected ground control points (GCPs) affecting the monitoring results during refinement and re-flattening is solved. A DInSAR-PS-Stacking method combining a PS three-threshold scheme (coherence coefficient threshold, amplitude dispersion index threshold, and deformation velocity interval) is proposed to select GCPs for refinement and re-flattening and to obtain time-series deformation by weighted stacking. A SBAS-PS-InSAR method combining the same PS three-threshold scheme to select PS points as GCPs for refinement and re-flattening is also proposed. The surface deformation results monitored by the two methods are analyzed and verified. The results show that the subsidence location, range, distribution, and spatio-temporal subsidence behavior obtained by the DInSAR-PS-Stacking, SBAS-PS-InSAR, and GPS methods are basically the same. The deformation results obtained by the two InSAR methods correlate well with the GPS monitoring results, and the MAE and RMSE are within acceptable ranges; errors were small at the edge of the subsidence basin and large at its center. Both methods can effectively monitor the coal mine, but each has shortcomings: DInSAR-PS-Stacking is strong at monitoring the settlement center, while SBAS-PS-InSAR performs well for slow, small deformations but is insufficient at the settlement center. Considering the advantages of the two methods, we fused their time-series deformation results to obtain more reliable deformation estimates for settlement analysis. The results show that the automatic two-threshold (deformation threshold and average coherence threshold) fusion is effective, and the deformation monitoring results agree well with the actual situation. Deformation information obtained by comparing and fusing multiple methods enables better monitoring and analysis of surface deformation in mining areas and can provide a scientific reference for mining subsidence control and early disaster warning.
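The stacking step can be sketched numerically: given a set of unwrapped interferogram phases and their time spans, the least-squares linear deformation rate per pixel has a closed form (a generic phase-stacking sketch under a constant-rate assumption, not the paper's exact DInSAR-PS-Stacking weighting):

```python
import numpy as np

def stacking_rate(phases, spans):
    """Least-squares deformation rate from a stack of unwrapped phases.

    phases: (n, H, W) unwrapped phases, one map per interferogram
    spans:  (n,) temporal baselines of the interferograms

    Assuming phase_k = rate * span_k + noise, the estimate
    rate = sum_k(span_k * phase_k) / sum_k(span_k**2)
    weights long-span interferograms more heavily, which suppresses
    short-term atmospheric noise.
    """
    phases = np.asarray(phases, dtype=float)
    spans = np.asarray(spans, dtype=float)
    return np.tensordot(spans, phases, axes=1) / np.sum(spans ** 2)
```

Converting the phase rate to a line-of-sight displacement rate then only requires scaling by the radar wavelength factor.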
(This article belongs to the Special Issue Mapping and Monitoring of Geohazards with Remote Sensing Technologies)
Show Figures

Figure 1: Geographical location and scope of the mining area: (a) location of the mining area, (b) regional location of the mining area, and (c) location of the working face and monitoring points.
Figure 2: Technical flow chart.
Figure 3: Differential interferogram and unwrapped phase diagram after refinement and re-flattening.
Figure 4: Average subsidence velocity of the mining area monitored using SBAS-PS-InSAR.
Figure 5: Comparison of the cumulative subsidence monitored using DInSAR-PS-Stacking and SBAS-PS-InSAR over the same period.
Figure 6: Results and comparison of time-series subsidence points monitored using DInSAR-PS-Stacking and SBAS-PS-InSAR: (a) point 1, (b) point 6, (c) point 12, and (d) point 22.
Figure 7: Correlation diagrams of settlement curves measured by DInSAR-PS-Stacking, SBAS-PS-InSAR, and GPS: (a) point 1, (b) point 6, (c) point 11, and (d) point 22.
Figure 8: Cumulative subsidence maps after the fusion of DInSAR-PS-Stacking and SBAS-PS-InSAR.
Figure 9: Comparison between the fusion results of the two InSAR methods and GPS monitoring results: (a) 29 August 2019 and (b) 4 October 2019.
19 pages, 9978 KiB  
Article
A Self-Adaptive Thresholding Approach for Automatic Water Extraction Using Sentinel-1 SAR Imagery Based on OTSU Algorithm and Distance Block
by Jianbo Tan, Yi Tang, Bin Liu, Guang Zhao, Yu Mu, Mingjiang Sun and Bo Wang
Remote Sens. 2023, 15(10), 2690; https://doi.org/10.3390/rs15102690 - 22 May 2023
Cited by 13 | Viewed by 3083
Abstract
Water is an indispensable resource for animals, plants, and human beings, so obtaining accurate water body information rapidly is of great significance for maintaining the balance of ecosystems and ensuring normal production and human life. Because they are independent of the time of day and weather conditions, synthetic aperture radar (SAR) data have been increasingly applied to the extraction of water bodies. However, SAR images contain a great deal of speckle noise, which seriously affects extraction accuracy, and most current processing methods rely on filtering, which causes the loss of detailed information. Based on the side-looking geometry of SAR, this paper proposes a self-adaptive thresholding approach for automatic water extraction based on the OTSU algorithm and distance blocks. In this method, the whole image is first divided into uniform blocks using a distance layer derived from the distance to the orbit. Self-adaptive block merging is then conducted: the OTSU algorithm provides a classification threshold, and the Jeffries–Matusita (JM) distance is calculated for the resulting classes. Merging continues until the separability of the image blocks reaches its maximum; the procedure then restarts from the next block and repeats until all blocks are processed. Ten study areas around the world and the local Dongting Lake area were used to test the feasibility of the proposed method. In comparison with five other global threshold segmentation algorithms (the traditional OTSU, MOMENTS, MEAN, ISODATA, and MINERROR), the proposed method obtains the highest overall accuracy (OA) and kappa coefficient (KC), and it also demonstrates greater robustness in time-series analysis. The findings offer an effective way to improve water detection accuracy while reducing the influence of speckle noise and retaining image detail.
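The two building blocks of the method, the OTSU threshold and the JM separability check, can be sketched as follows (standard textbook forms, with a Gaussian assumption for the JM distance; not the paper's exact block-merging code):

```python
import numpy as np

def otsu_threshold(values, bins=256):
    """OTSU's method: the threshold maximizing between-class variance."""
    hist, edges = np.histogram(values, bins=bins)
    p = hist / hist.sum()
    omega = np.cumsum(p)               # class-0 probability
    mu = np.cumsum(p * edges[:-1])     # class-0 cumulative mean
    mu_total = mu[-1]
    with np.errstate(divide="ignore", invalid="ignore"):
        sigma_b = (mu_total * omega - mu) ** 2 / (omega * (1.0 - omega))
    return edges[np.nanargmax(sigma_b)]

def jm_distance(a, b):
    """Jeffries-Matusita distance (in [0, 2]) between two 1-D samples,
    assuming each class is Gaussian."""
    m1, m2, v1, v2 = a.mean(), b.mean(), a.var(), b.var()
    bhatt = (m1 - m2) ** 2 / (4.0 * (v1 + v2)) \
        + 0.5 * np.log((v1 + v2) / (2.0 * np.sqrt(v1 * v2)))
    return 2.0 * (1.0 - np.exp(-bhatt))
```

In the spirit of the paper, a block configuration is kept when the JM distance between the two classes produced by the OTSU threshold reaches its maximum, i.e., when water and land are most separable.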
Show Figures

Figure 1: The position of the ten study areas around the world.
Figure 2: Workflow of the proposed method.
Figure 3: The Sentinel-1A map (a) and the distance illustration map (b) for one study area (the red line is parallel to the orbit).
Figure 4: The overall JM value under different initial patch sizes for the segmentation of the study areas.
Figure 5: The OA and Kappa values of the different methods across the study areas.
Figure 6: The TP, FP, FN, and TN values of each method for each study area.
Figure 7: The SAR SIGMA images and the distribution of water from the proposed method, OTSU_all, and Sentinel-2 in study areas 2, 3, 4, 7, 8, and 9 (gray is water body and white is land).
Figure 8: Comparison of the classification results of a whole scene under the six methods (gray is water body and white is land).
Figure 9: The Sentinel-1A data in VV polarization mode (a) and the corresponding water map from the proposed method (b) on 25 December 2017.
Figure 10: Classification results of six sub-regions under the six methods (gray is water body and white is land).
Figure 11: Quantitative evaluation of the six methods.
Figure 12: The thresholds calculated by ISODATA, MEAN, MINERROR, MOMENTS, OTSU_all, and OTSU_opt in blocks from near range to far range (a–h).
Figure 13: Dongting Lake water extents, water levels, and JM values in 2017.
Figure 14: Water frequency map of Dongting Lake in 2017 derived from the OTSU_opt water maps.
Figure 15: The water distribution maps on DOY 59, 71, 83, and 95 (a–d) and the corresponding histograms (e–h).
Figure 16: The VV polarization image on DOY 191 with six detail images (a–f), where (b,e) are the classification maps from the proposed method and (c,f) are the results from the global OTSU method.
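Several of the detectors compared above (OTSU_all, OTSU_opt, ISODATA, MEAN, etc.) are histogram-thresholding methods, applied either globally or block by block. As a rough illustration of the core idea only, the following pure-Python sketch implements Otsu's criterion (maximizing between-class variance) on a toy bimodal sample; the function name, bin count, and sample values are illustrative and not taken from the paper.

```python
# Minimal Otsu thresholding sketch (pure Python). Otsu picks the threshold
# that maximizes the between-class variance of a histogram; block-wise
# variants apply the same rule independently to image sub-blocks.

def otsu_threshold(values, bins=32):
    """Return the bin edge that best separates a 1-D sample into two classes."""
    lo, hi = min(values), max(values)
    width = (hi - lo) / bins or 1.0
    hist = [0] * bins
    for v in values:
        i = min(int((v - lo) / width), bins - 1)
        hist[i] += 1
    total = len(values)
    sum_all = sum((lo + (i + 0.5) * width) * h for i, h in enumerate(hist))
    best_t, best_var = lo, -1.0
    w0, sum0 = 0, 0.0
    for i in range(bins - 1):
        center = lo + (i + 0.5) * width
        w0 += hist[i]
        sum0 += center * hist[i]
        w1 = total - w0
        if w0 == 0 or w1 == 0:
            continue
        mu0, mu1 = sum0 / w0, (sum_all - sum0) / w1
        var_between = w0 * w1 * (mu0 - mu1) ** 2
        if var_between > best_var:
            best_var, best_t = var_between, lo + (i + 1) * width
    return best_t

# Toy bimodal sample: dark water pixels near 10, bright land pixels near 50.
sample = [10, 11, 9, 12, 10, 50, 49, 51, 52, 48]
t = otsu_threshold(sample)
```

A block-wise variant, in the spirit of the thresholds shown per block in Figure 12, would simply call `otsu_threshold` on each sub-block of the backscatter image.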
16 pages, 3121 KiB  
Article
Dual-Stream Feature Extraction Network Based on CNN and Transformer for Building Extraction
by Liegang Xia, Shulin Mi, Junxia Zhang, Jiancheng Luo, Zhanfeng Shen and Yubin Cheng
Remote Sens. 2023, 15(10), 2689; https://doi.org/10.3390/rs15102689 - 22 May 2023
Cited by 13 | Viewed by 2961
Abstract
Automatically extracting 2D buildings from high-resolution remote sensing images is among the most popular research directions in remote sensing information extraction. Semantic segmentation based on a CNN or a transformer has greatly improved building extraction accuracy. A CNN is good at local feature extraction, but its ability to acquire global features is poor, which can lead to incorrect and missed detections of buildings. The advantage of transformer models lies in their global receptive field, but they do not perform well in extracting local features, resulting in poor local detail in building extraction. In this paper, we propose a dual-stream feature extraction network (DSFENet) based on a CNN and a transformer for accurate building extraction. In the encoder, convolution extracts the local features of buildings, and the transformer realizes their global representation. The effective combination of local and global features greatly enhances the network’s feature extraction ability. We validated the capability of DSFENet on the Google Image dataset and the ISPRS Vaihingen dataset. DSFENet achieved the best accuracy compared to other state-of-the-art models. Full article
(This article belongs to the Section Urban Remote Sensing)
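The local/global split the abstract describes can be illustrated with a deliberately tiny 1-D sketch (pure Python): a small convolution kernel plays the role of the CNN branch, a softmax-weighted global summary plays the role of the transformer branch, and the two streams are fused per position. This is only a conceptual analogy, not the DSFENet architecture; all names and values below are invented.

```python
# Conceptual dual-stream fusion on a 1-D toy signal: the local branch mixes
# only a small neighborhood, the global branch sees the whole sequence.
import math

signal = [0.0, 0.0, 1.0, 1.0, 1.0, 0.0, 0.0, 0.0]

def local_branch(x, kernel=(0.25, 0.5, 0.25)):
    """Same-size 1-D convolution: each output mixes a small neighborhood."""
    r = len(kernel) // 2
    out = []
    for i in range(len(x)):
        acc = 0.0
        for k, w in enumerate(kernel):
            j = i + k - r
            if 0 <= j < len(x):
                acc += w * x[j]
        out.append(acc)
    return out

def global_branch(x):
    """Attention-like global summary: softmax-weighted mean, broadcast back."""
    weights = [math.exp(v) for v in x]
    s = sum(weights)
    context = sum(w / s * v for w, v in zip(weights, x))
    return [context] * len(x)

local = local_branch(signal)
glob = global_branch(signal)
fused = [0.5 * a + 0.5 * b for a, b in zip(local, glob)]  # per-position fusion
```

The point of the sketch is the complementarity: `local` preserves edges of the "building" segment, while `glob` carries a context value that is identical at every position.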
Show Figures

Figure 1: The architecture of DSFENet. Res Block represents a residual block. Swin TR represents the Swin transformer. FAM is the feature aggregation module. SEEM and SPEM are the semantic embedding module and the spatial embedding module, respectively. Upsample is an upsampling module consisting of a bilinear interpolation and a convolution operation. ASPP is an atrous spatial pyramid pooling module.
Figure 2: Res block.
Figure 3: Swin transformer block.
Figure 4: Feature aggregation module. The STR feature is the output of the Swin transformer branch. In TR2CNN, the serialized tokens are spliced into multidimensional features, the number of feature channels is recovered using a 1 × 1 convolution, and the result is input to the CNN branch. The CNN2TR function maps the multidimensional features, converts them into serialized features, and inputs them into the transformer branch.
Figure 5: Semantic embedding module.
Figure 6: Spatial embedding module.
Figure 7: Samples of the Beijing and Vaihingen datasets. The first two rows are examples from the Beijing dataset; the last two rows are examples from the Vaihingen dataset.
Figure 8: DSFENet comparison experiment results on the Beijing dataset.
Figure 9: Comparative experimental results of DSFENet on the Vaihingen dataset.
Figure 10: Comparison of DSFENet ablation experiments. The red circles illustrate that DSFENet-CNN produced some false detections, which DSFENet-TR effectively alleviated. The blue circles show that DSFENet-TR predicted building details poorly, while DSFENet achieved a clearer building boundary prediction.
18 pages, 8194 KiB  
Article
Detection and Attribution of Greening and Land Degradation of Dryland Areas in China and America
by Zheng Chen, Jieyu Liu, Xintong Hou, Peiyi Fan, Zhonghua Qian, Li Li, Zhisen Zhang, Guolin Feng, Bailian Li and Guiquan Sun
Remote Sens. 2023, 15(10), 2688; https://doi.org/10.3390/rs15102688 - 22 May 2023
Cited by 2 | Viewed by 1827
Abstract
Global dryland areas are vulnerable to climate change and anthropogenic activities, making it essential to understand the primary drivers and quantify their effects on vegetation growth. In this study, we used the Time Series Segmented Residual Trends (TSS-RESTREND) method to attribute changes in vegetation to CO2, land use, climate change, and climate variability in Chinese and American dryland areas. Our analysis showed that both Chinese and American drylands have undergone a greening trend over the past four decades, with Chinese greening likely linked to climatic warming and humidification of Northwest China. Climate change was the dominant factor driving vegetation change in China, accounting for 48.3%, while CO2 fertilization was the dominant factor in American drylands, accounting for 47.9%. However, land use was the primary factor resulting in desertification in both regions. Regional analysis revealed the importance of understanding the drivers of vegetation change and land degradation in Chinese and American drylands to prevent desertification. These findings highlight the need for sustainable management practices that consider the complex interplay of climate change, land use, and vegetation growth in dryland areas. Full article
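The residual-trend logic behind the TSS-RESTREND attribution can be sketched in a few lines: regress greenness on a climate driver, then test the regression residuals for a trend, which is attributed to non-climatic drivers. The toy numbers below are synthetic, and the real method additionally screens for structural breakpoints in the time series.

```python
# RESTREND-style sketch: the trend left in NDVI after removing the
# precipitation signal points to non-climatic drivers (land use, CO2).

def ols(x, y):
    """Closed-form simple linear regression: returns (slope, intercept)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    b = sxy / sxx
    return b, my - b * mx

years = list(range(2000, 2010))
precip = [300, 320, 280, 350, 310, 290, 330, 300, 340, 310]
# Synthetic NDVI: responds to rainfall plus a slow non-climatic greening.
ndvi = [0.001 * p + 0.005 * (yr - 2000) for p, yr in zip(precip, years)]

slope_pn, icpt_pn = ols(precip, ndvi)             # NDVI ~ precipitation
resid = [v - (slope_pn * p + icpt_pn) for v, p in zip(ndvi, precip)]
resid_trend, _ = ols(years, resid)                # trend left after climate
```

With these synthetic inputs the residual trend recovers most of the built-in 0.005/yr non-climatic greening, which is exactly the quantity a RESTREND-style analysis attributes to drivers other than climate variability.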
Show Figures

Figure 1: Evolution of vegetation biomass, surface water, and soil water. R = 0.6, l = 0.15, C_a = 400. Other parameters are shown in Appendix A.
Figure 2: Sensitivity analysis of different factors: (a) CO2 and (b) land use rate. The solid line is the stable state and the dotted line is the unstable state. One bifurcation exists when the system approaches a certain value under different climate conditions.
Figure 3: Flowchart of the methods.
Figure 4: (a) Observed change of vegetation growth in the Chinese dryland area. Yellow means a decrease in NDVI_m and green means an increase. Dotted areas indicate that the change is significant (α_FDR = 0.10). Areas with large uncertainties are masked in white. (b–e) The change of NDVI_m attributed to CO2, LU, CC, and CV, respectively.
Figure 5: Map of the distribution of the three dominant factors in Chinese dryland areas. Areas controlled by CO2 are shown in red, areas controlled by LU in green, and areas controlled by CC in blue.
Figure 6: Regional separation for the analysis of China. Chinese dryland areas are mainly located in the Northwest (NW), North China (NC), and Southwest (SW).
Figure 7: The regional mean and magnitude (mean absolute value) of the different drivers of change in NDVI_m in Chinese dryland areas. The error bars show the SD of the grid cells.
Figure 8: (a) The regional mean and magnitude (mean absolute value) of the different drivers of positive change in NDVI_m in Chinese dryland areas. The error bars show the SD of the grid cells. (b) The same as (a) for negative change.
Figure 9: (a) Observed change of vegetation growth in the American dryland area. Yellow means a decrease in NDVI_m and green means an increase. Dotted areas indicate that the change is significant (α_FDR = 0.10). Areas with large uncertainties are masked in white. (b–e) The change of NDVI_m attributed to CO2, LU, CC, and CV, respectively.
Figure 10: Map of the distribution of the three dominant factors in American dryland areas. Areas controlled by CO2 are shown in red, areas controlled by LU in green, and areas controlled by CC in blue.
Figure 11: Regional separation for the analysis of America. American dryland areas are mainly located in the West, Midwest, and South.
Figure 12: The regional mean and magnitude (mean absolute value) of the different drivers of change in NDVI_m in American dryland areas. The error bars show the SD of the grid cells.
Figure 13: (a) The regional mean and magnitude (mean absolute value) of the different drivers of positive change in NDVI_m in American dryland areas. The error bars show the SD of the grid cells. (b) The same as (a) for negative change.
24 pages, 7856 KiB  
Article
Ionospheric–Thermospheric Responses to Geomagnetic Storms from Multi-Instrument Space Weather Data
by Rasim Shahzad, Munawar Shah, M. Arslan Tariq, Andres Calabia, Angela Melgarejo-Morales, Punyawi Jamjareegulgarn and Libo Liu
Remote Sens. 2023, 15(10), 2687; https://doi.org/10.3390/rs15102687 - 22 May 2023
Cited by 13 | Viewed by 4085
Abstract
We analyze vertical total electron content (vTEC) variations from the Global Navigation Satellite System (GNSS) at different latitudes on different continents during the geomagnetic storms of June 2015, August 2018, and November 2021. The resulting ionospheric perturbations at the low and mid-latitudes are investigated in terms of the prompt penetration electric field (PPEF), the equatorial electrojet (EEJ), and the magnetic H component from INTERMAGNET stations near the equator. East and Southeast Asia, Russia, and Oceania exhibited positive vTEC disturbances, while South American stations showed negative vTEC disturbances during all the storms. We also analyzed the vTEC from the Swarm satellites and found results similar to the retrieved vTEC data during the June 2015 and August 2018 storms. Moreover, we observed that ionospheric plasma tended to increase rapidly during the local afternoon in the main phase of the storms and showed the opposite behavior at nighttime. The equatorial ionization anomaly (EIA) crest expansion to higher latitudes is driven by the PPEF during daytime in the main and recovery phases of the storms. The magnetic H component exhibits longitudinal behavior along with the EEJ enhancement near the magnetic equator. Full article
(This article belongs to the Special Issue Satellite Observations of the Global Ionosphere and Plasma Dynamics)
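Storm-time vTEC disturbances of the kind analyzed here are commonly isolated by subtracting a quiet-day reference curve from the storm-day series; the sign of the deviation then marks positive or negative ionospheric storm phases. A minimal sketch of that step, with invented TECU values rather than the paper's data:

```python
# dTEC sketch: storm-day vTEC minus the mean of geomagnetically quiet days.

quiet_days = [
    [10, 12, 20, 28, 22, 14],   # vTEC (TECU) at six local times, quiet day 1
    [11, 13, 21, 27, 21, 13],   # quiet day 2
    [ 9, 11, 19, 29, 23, 15],   # quiet day 3
]
storm_day = [10, 15, 30, 40, 18, 10]

# Quiet-time reference: hour-by-hour mean over the quiet days.
quiet_mean = [sum(day[i] for day in quiet_days) / len(quiet_days)
              for i in range(len(storm_day))]

# Deviation from the reference; > 0 marks a positive storm phase.
dtec = [s - q for s, q in zip(storm_day, quiet_mean)]
positive_phase = [d > 0 for d in dtec]
```

In this toy series the afternoon hours show a positive disturbance and the last hours a negative one, mirroring the daytime-enhancement / nighttime-depletion pattern the abstract reports.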
Show Figures

Graphical abstract

Figure 1: The geographical locations of the GNSS and INTERMAGNET stations used in this study. The black line represents the magnetic equator. The corresponding coordinates are given in Table 1.
Figure 2: Space weather indices for the storm of 22 June 2015: (a,b) By and Bz components of the magnetic field, (c) solar wind speed, (d) IMF Ey component, (e) F10.7 index, (f) disturbance storm time (Dst) index, (g) AE index, and (h) planetary K index. The SSCs and the different storm phases are marked with vertical dashed lines.
Figure 3: Space weather indices during the storm of 26 August 2018: (a,b) By and Bz components of the magnetic field, (c) solar wind speed, (d) IMF Ey component, (e) F10.7 index, (f) disturbance storm time (Dst) index, (g) AE index, and (h) planetary K index. The SSC is marked with an orange arrow, and the different phases of the storm are marked with dashed lines.
Figure 4: Space weather indices during the storm of November 2021 from NASA OMNIWeb: (a,b) By and Bz components of the magnetic field, (c) solar wind speed, (d) IMF Ey component, (e) F10.7 index, (f) disturbance storm time (Dst) index, (g) AE index, and (h) planetary K index. The SSC and the different storm phases are marked with vertical dashed lines of different colors.
Figure 5: vTEC variation at the low-latitude stations in different longitudinal sectors for the geomagnetic storms of 2015 and 2018: (a–h) vTEC at the COCO, BAKO, HYDE, IISC, KOUC, GLPS, KOUR, and RIOP stations for the June 2015 storm; (a′–h′) vTEC at the same stations for the August 2018 storm. The station locations are shown in Figure 1. The different phases of the storms are marked with vertical dashed lines.
Figure 6: Variation in vTEC at low-latitude sites in different longitudinal sectors for the 4 November 2021 geomagnetic storm: vTEC from the (a) COCO, (b) BAKO, (c) HYDE, (d) IISC, (e) KOUC, (f) GLPS, (g) KOUR, and (h) RIOP IGS stations. The storm's phases are denoted by vertical dashed lines.
Figure 7: vTEC variation at the mid-latitude stations in different longitudinal sectors for the geomagnetic storms of 2015 and 2018: (a) the AUCK station (New Zealand), (b,c) the STK2 and USUD stations (Japan), (d) the YSSK station (Russia), and (e) the SANT station (Chile) for the June 2015 storm; (a′–e′) the AUCK, STK2, USUD, YSSK, and SANT stations for the August 2018 storm. The station locations are shown in Figure 1. The different phases of the storms are marked with vertical dashed lines.
Figure 8: vTEC variation in the mid-latitude sector during the November 2021 geomagnetic storm: (a–e) vTEC of the AUCK, STK2, USUD, YSSK, and SANT stations. Vertical dashed lines denote the various storm phases.
Figure 9: vTEC variation at the high-latitude stations in different longitudinal sectors for the geomagnetic storms of 2015 and 2018: (a,b) vTEC of KIR0 and MAR6 for the June 2015 storm; (a′,b′) vTEC of KIR0 and MAR6 for the August 2018 storm. The station locations are shown in Figure 1. The different phases of the storms are marked with vertical dashed lines.
Figure 10: vTEC fluctuation at the high-latitude GNSS stations during the 2021 storm: (top) KIR0 station vTEC and (bottom) MAR6 station vTEC. Figure 1 shows the positions of the stations. The storm's various phases are marked by vertical dashed lines.
Figure 11: GIM TEC maps at different longitudinal sectors during the June 2015 geomagnetic storm: (a–c) TEC maps of America, Africa, and Asia; (a′–c′) dTEC maps of America, Africa, and Asia.
Figure 12: GIM TEC maps of the geomagnetic storm of August 2018: (a,a′) TEC and dTEC maps of America, (b,b′) TEC and dTEC maps of Africa, and (c,c′) TEC and dTEC maps of Asia.
Figure 13: The ∑O/N2 ratio from GUVI during the storms of June 2015 and August 2018.
Figure 14: vTEC from the Swarm satellites during the geomagnetic storm of June 2015.
Figure 15: vTEC from the Swarm satellites during the geomagnetic storm of August 2018.
Figure 16: PPEF behavior during the geomagnetic storms of (left) June 2015 and (right) August 2018.
Figure 17: PPEF behavior during the geomagnetic storm of November 2021.
Figure 18: Magnetic field variations during the storms of June 2015 and August 2018: (a,a′) variation in the SYM-H and ASY-H indices; (b–d) H component in 2015 at the HUA, GUA, and MBO stations; (b′–d′) H component in 2018 at the HUA, GUA, and DLT stations; (e,e′) EEJ responses. The different phases of the storms are marked with vertical dashed lines.
Figure 19: Magnetic field variations during the storm of November 2021: (a) variation in the SYM-H and ASY-H indices; (b–d) H component in 2021 at the HUA, GUA, and DLT stations; (e) EEJ responses. The SSC is marked with a red arrow. The different phases of the storm are marked with vertical dashed lines.
17 pages, 6972 KiB  
Article
Satellite-Based Estimation of Roughness Length over Vegetated Surfaces and Its Utilization in WRF Simulations
by Yiming Liu, Chong Shen, Xiaoyang Chen, Yingying Hong, Qi Fan, Pakwai Chan, Chunlin Wang and Jing Lan
Remote Sens. 2023, 15(10), 2686; https://doi.org/10.3390/rs15102686 - 22 May 2023
Cited by 3 | Viewed by 2094
Abstract
Based on morphological methods, MODIS satellite remote sensing data were used to establish a dataset of the local roughness length (Z0) of vegetation-covered surfaces in Guangdong Province. The local Z0 was used to update the mesoscale Weather Research and Forecasting (WRF) model in order to quantitatively evaluate its impact on the thermodynamic environment of vegetation-covered surfaces. The specific results are as follows: evergreen broad-leaved forests showed the largest average Z0 values at 1.27 m (spring), 1.15 m (summer), 1.03 m (autumn), and 1.15 m (winter); the average Z0 values of mixed forests ranged from 0.90 to 1.20 m; and those for cropland-covered surfaces ranged from 0.17 to 0.20 m. The Z0 values of individual vegetation coverage types all exhibited relatively high values in spring and low values in autumn, and the default Z0 corresponding to specific vegetation-covered surfaces was significantly underestimated in the WRF model. Modifying the default Z0 of surfaces underlying evergreen broad-leaved forests, mixed forests, and croplands in the model induced only relatively small changes (<1%) in their 2 m temperature, relative humidity, skin surface temperature, and the planetary boundary layer height. However, the average daily wind speed of surfaces covered by evergreen broad-leaved forests, mixed forests, and croplands was reduced by 0.48 m/s, 0.43 m/s, and 0.26 m/s, respectively, accounting for changes of 12.0%, 11.1%, and 6.5%, respectively. Full article
(This article belongs to the Section Atmospheric Remote Sensing)
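The percentage changes quoted in the abstract follow the simple change-rate definition used in the study's figure captions, change rate = (case − base)/base × 100. The sketch below back-computes the baseline wind speed implied by the abstract's numbers (a 0.48 m/s reduction at a 12.0% change implies a base of about 4.0 m/s; that base value is our inference, not stated in the abstract).

```python
# Change-rate metric for comparing a sensitivity run ("case", updated Z0)
# against a control run ("base", default Z0), in percent.

def change_rate(case, base):
    """Relative change of a simulated variable, in percent."""
    return (case - base) / base * 100.0

base_ws = 4.0                  # m/s, inferred baseline daily mean wind speed
case_ws = base_ws - 0.48       # m/s, after updating Z0 (abstract's reduction)
rate = change_rate(case_ws, base_ws)   # negative sign = reduction
```

The same function applies unchanged to the temperature, humidity, and heat-flux differences the study reports; only the inputs differ.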
Show Figures

Graphical abstract

Figure 1: Spatial distribution of IGBP land cover types in Guangdong Province. The red and blue dots are the locations of automatic surface weather stations over evergreen broad-leaved forest and cropland, respectively.
Figure 2: Spatial distribution of the vegetation canopy area index (dimensionless) in different months in Guangdong Province in 2017.
Figure 3: Distribution of the average normalized roughness length (dimensionless) in different months in Guangdong Province in 2017.
Figure 4: Distribution of the average roughness length (of the main underlying surface types) in different months in Guangdong Province in 2017.
Figure 5: Monthly variation of the vegetation canopy area index (a), normalized roughness length (b), and roughness length (c) of the main vegetation types in 2017.
Figure 6: Three nested domains in the WRF simulation.
Figure 7: Effects of the updated Z0 on meteorological elements in the evergreen broad-leaved forest, mixed forest, and cropland land-use types during the daytime and nighttime in November. Daytime: 8:00–16:00; nighttime: 17:00–7:00; change rate = (case − base)/base × 100. LST is the local standard time.
Figure 8: Effects of the updated Z0 on the daily variation of (a) skin surface temperature (TSK), (b) ground heat flux (GRD), and (c) sensible heat flux (HFX) in the evergreen broad-leaved forest, mixed forest, and cropland land-use types in November.
Figure 9: Effects of the updated Z0 on the daily variation of the 10 m wind speed (WS) in the evergreen broad-leaved forest, mixed forest, and cropland land-use types in November.
18 pages, 8090 KiB  
Article
A Novel Edge Detection Method for Multi-Temporal PolSAR Images Based on the SIRV Model and a SDAN-Based 3D Gaussian-like Kernel
by Xiaolong Zheng, Dongdong Guan, Bangjie Li, Zhengsheng Chen and Lefei Pan
Remote Sens. 2023, 15(10), 2685; https://doi.org/10.3390/rs15102685 - 22 May 2023
Viewed by 1482
Abstract
Edge detection for PolSAR images has demonstrated its importance in various applications such as segmentation and classification. Although there are many edge detectors which have demonstrated an impressive ability to achieve accurate edge detection results, these methods only focus on edge detection in a single-date PolSAR image. However, a single-date PolSAR image cannot fully characterize the changes in scattering mechanisms of land cover in different growth cycles, resulting in some omissions of the true edges. In this paper, we propose a novel edge detection method for multi-temporal PolSAR images based on the SIRV model and an SDAN-based 3D Gaussian-like kernel. The spherically invariant random vector (SIRV) and span-driven adaptive neighborhood (SDAN) improve the estimation accuracy of the average covariance matrix (ACM) in terms of data representation and spatial support, respectively. We propose an SDAN-based 2D Gaussian kernel to accurately extract the edge strength of single-date PolSAR images. Then, we design a 1D convolution kernel in the temporal dimension to smooth fluctuations in the edge strength of multi-temporal PolSAR images. The SDAN-based 2D Gaussian kernels in the X- and Y-directions are combined with the 1D convolution kernel in the Z-direction to form an SDAN-based 3D Gaussian-like kernel. In addition, we design an adaptive hysteresis threshold method to optimize the edge map. The performance of our proposed method is presented and analyzed on two real multi-temporal PolSAR datasets, and the results demonstrate that the proposed edge detector achieves a better performance than other edge detectors, particularly for crop regions with time-varying scattering mechanisms. Full article
(This article belongs to the Special Issue Advances of SAR Data Applications)
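The "3D Gaussian-like kernel" in the abstract is formed by combining 2D spatial Gaussian kernels in X and Y with a 1D smoothing kernel along the temporal Z axis. Treating all three axes as separable, the construction can be sketched as an outer product of 1-D kernels; the sizes and sigmas below are arbitrary choices, and the paper additionally shapes the spatial support adaptively with SDAN.

```python
# Separable 3-D "Gaussian-like" kernel: Gaussian in X and Y (spatial),
# a shorter 1-D kernel in Z (temporal), combined by an outer product.
import math

def gauss1d(size, sigma):
    """Normalized 1-D Gaussian weights of the given size."""
    r = size // 2
    w = [math.exp(-(i - r) ** 2 / (2 * sigma ** 2)) for i in range(size)]
    s = sum(w)
    return [v / s for v in w]

gx = gauss1d(5, 1.0)          # X direction (spatial)
gy = gauss1d(5, 1.0)          # Y direction (spatial)
gz = gauss1d(3, 0.8)          # Z direction (temporal smoothing)

# Outer product: kernel3d[z][y][x]; weights sum to 1 because each factor does.
kernel3d = [[[wz * wy * wx for wx in gx] for wy in gy] for wz in gz]
total = sum(sum(sum(row) for row in plane) for plane in kernel3d)
```

Convolving a multi-temporal edge-strength stack with such a kernel smooths each date spatially while also damping date-to-date fluctuations, which is the role the 1-D temporal kernel plays in the proposed detector.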
Show Figures

Figure 1: PolSAR images on two dates obtained by the Radarsat-2 system in Barrax, southeastern Spain. (a1) Pauli RGB image acquired on 17 May. (b1) Pauli RGB image acquired on 4 July. (a2,b2) are the spans of (a1,b1), respectively. PolSAR images acquired on different dates provide different edge information, so multi-temporal PolSAR images can provide more edge information than a single-date image.
Figure 2: Schematic diagram of the proposed method.
Figure 3: Diagram of the two window shapes. (a) Rectangular windows. (b) SDAN-based windows.
Figure 4: Probability density of the ESM.
Figure 5: Indian Head dataset. (a) 15 May. (b) 8 June. (c) 26 July. (d) 12 September. (e) Optical image. (f) Ground truth edges.
Figure 6: Barrax dataset. (a) 23 April. (b) 17 May. (c) 10 June. (d) 4 July. (e) Optical image. (f) Ground truth edges.
Figure 7: ESMs and edge maps of the Indian Head dataset obtained by different 3D kernels. (A) April 23 with the SDAN-based 2D Gaussian kernel. (B) May 17 with the SDAN-based 2D Gaussian kernel. (C) June 10 with the SDAN-based 2D Gaussian kernel. (D) September 12 with the SDAN-based 2D Gaussian kernel. (E) SDAN-based 3D mean kernel. (F) SDAN-based 3D maximum kernel. (G) SDAN-based 3D RMS kernel. (H) The proposed SDAN-based 3D Gaussian-like kernel. (a–h) are the edge maps of (A–H).
Figure 8: Land cover types of the marked region.
Figure 9: Photographs of Field Pea and Canola development.
Figure 10: ESMs and edge maps of the Barrax dataset obtained by different 3D kernels. (A) 23 April with the SDAN-based 2D Gaussian kernel. (B) 17 May with the SDAN-based 2D Gaussian kernel. (C) 10 June with the SDAN-based 2D Gaussian kernel. (D) September 12 with the SDAN-based 2D Gaussian kernel. (E) SDAN-based 3D mean kernel. (F) SDAN-based 3D maximum kernel. (G) SDAN-based 3D RMS kernel. (H) The proposed SDAN-based 3D Gaussian-like kernel. (a–h) are the edge maps of (A–H).
Figure 11: Edge strength maps of each temporal PolSAR image obtained by different 2D convolution kernels on the Barrax dataset.
23 pages, 13290 KiB  
Article
Performance Assessment of Four Data-Driven Machine Learning Models: A Case to Generate Sentinel-2 Albedo at 10 Meters
by Hao Chen, Xingwen Lin, Yibo Sun, Jianguang Wen, Xiaodan Wu, Dongqin You, Juan Cheng, Zhenzhen Zhang, Zhaoyang Zhang, Chaofan Wu, Fei Zhang, Kechen Yin, Huaxue Jian and Xinyu Guan
Remote Sens. 2023, 15(10), 2684; https://doi.org/10.3390/rs15102684 - 22 May 2023
Cited by 4 | Viewed by 3104
Abstract
High-resolution albedo has the advantage of a finer spatial scale, from tens to hundreds of meters, which can fill the gaps in albedo applications from the global scale to the regional scale and help solve problems related to land use change and ecosystems. The Sentinel-2 satellite provides high-resolution observations in the visible-to-NIR bands, making it possible to generate a high-resolution surface albedo at 10 m. This study evaluated the performance of four data-driven machine learning algorithms (random forest (RF), artificial neural network (ANN), k-nearest neighbor (KNN), and XGBoost (XGBT)) for the generation of a Sentinel-2 albedo over flat and rugged terrain. First, we used the RossThick-LiSparseR model and the 3D discrete anisotropic radiative transfer (DART) model to build the narrowband surface reflectance and broadband surface albedo, which acted as the training and testing datasets over flat and rugged terrain. Second, we used these datasets to drive the four machine learning models and evaluated their performance in generating a Sentinel-2 albedo. Finally, we used the four machine learning models to generate a Sentinel-2 albedo and compared it with in situ albedos to show the models' application potential. The results show that these machine learning models perform well in estimating Sentinel-2 albedos at a 10 m spatial scale. The comparison with in situ albedos shows that the random forest model outperformed the others in estimating a high-resolution surface albedo based on Sentinel-2 datasets over flat and rugged terrain, with an RMSE smaller than 0.0308 and an R2 larger than 0.9472. Full article
(This article belongs to the Section Ecological Remote Sensing)
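The evaluation loop behind comparisons like this one (fit a data-driven regressor on reflectance–albedo pairs, score it with RMSE and R²) can be sketched with a toy k-nearest-neighbour regressor in pure Python. The data below are synthetic single-band values; the study's real models map multi-band Sentinel-2 reflectance and are tuned per algorithm.

```python
# Toy k-NN regression plus the RMSE and R^2 metrics the study reports.
import math

def knn_predict(train_x, train_y, x, k=2):
    """Average the targets of the k nearest 1-D training samples."""
    order = sorted(range(len(train_x)), key=lambda i: abs(train_x[i] - x))
    return sum(train_y[i] for i in order[:k]) / k

def rmse(y_true, y_pred):
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(y_true, y_pred)) / len(y_true))

def r2(y_true, y_pred):
    my = sum(y_true) / len(y_true)
    ss_res = sum((a - b) ** 2 for a, b in zip(y_true, y_pred))
    ss_tot = sum((a - my) ** 2 for a in y_true)
    return 1.0 - ss_res / ss_tot

train_x = [0.05, 0.10, 0.20, 0.30, 0.40, 0.50]   # synthetic reflectance
train_y = [0.06, 0.11, 0.19, 0.31, 0.39, 0.52]   # synthetic albedo targets
test_x = [0.12, 0.25, 0.45]
test_y = [0.13, 0.25, 0.46]

pred = [knn_predict(train_x, train_y, x) for x in test_x]
err, fit = rmse(test_y, pred), r2(test_y, pred)
```

Swapping `knn_predict` for a random forest, neural network, or gradient-boosted model while keeping the same RMSE/R² scoring is the essence of the four-way comparison the abstract describes.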
Figure 1
<p>The geographic distribution of the in situ observations used in this study.</p>
Figure 2
<p>The geographic distribution of the MCD43A1 data used in this study. There are three surface types (vegetation, bare ground, and snow/ice).</p>
Figure 3
<p>The workflow of the machine learning models for Sentinel-2 albedo retrieval.</p>
Figure 4
<p>The scenes of forests on different slopes: (<b>a</b>) slope 10° and aspect 45°; (<b>b</b>) slope 20° and aspect 0°; (<b>c</b>) slope 30° and aspect 90°; (<b>d</b>) slope 40° and aspect 135°; (<b>e</b>) slope 50° and aspect 180°; (<b>f</b>) slope 60° and aspect 90°.</p>
Figure 5
<p>Evaluation of BSA estimates for training datasets over flat terrain in (<b>a</b>–<b>d</b>); (<b>a</b>–<b>d</b>) denote the results of ANN, KNN, RF, and XGBT. Evaluation of WSA estimates for training datasets over flat terrain in (<b>e</b>–<b>h</b>); (<b>e</b>–<b>h</b>) denote the results of ANN, KNN, RF, and XGBT.</p>
Figure 6
<p>Evaluation of BSA estimates for testing datasets over flat terrain in (<b>a</b>–<b>d</b>); (<b>a</b>–<b>d</b>) denote the results of ANN, KNN, RF, and XGBT. Evaluation of WSA estimates for testing datasets over flat terrain in (<b>e</b>–<b>h</b>); (<b>e</b>–<b>h</b>) denote the results of ANN, KNN, RF, and XGBT.</p>
Figure 7
<p>Evaluation of BSA estimates for training datasets over rugged terrain in (<b>a</b>–<b>d</b>); (<b>a</b>–<b>d</b>) denote the results of ANN, KNN, RF, and XGBT. Evaluation of WSA estimates for training datasets over rugged terrain in (<b>e</b>–<b>h</b>); (<b>e</b>–<b>h</b>) denote the results of ANN, KNN, RF, and XGBT.</p>
Figure 8
<p>Evaluation of BSA estimates for testing datasets over rugged terrain in (<b>a</b>–<b>d</b>); (<b>a</b>–<b>d</b>) denote the results of ANN, KNN, RF, and XGBT. Evaluation of WSA estimates for testing datasets over rugged terrain in (<b>e</b>–<b>h</b>); (<b>e</b>–<b>h</b>) denote the results of ANN, KNN, RF, and XGBT.</p>
Figure 9
<p>Evaluation of Sentinel-2 image estimates based on measured values at the in situ sites: (<b>a</b>) ANN result; (<b>b</b>) KNN result; (<b>c</b>) RF result; (<b>d</b>) XGBT result.</p>
Figure 10
<p>Evaluation of Sentinel-2 image estimates based on measured values at the in situ sites over snow-covered terrain: (<b>a</b>) ANN result; (<b>b</b>) KNN result; (<b>c</b>) RF result; (<b>d</b>) XGBT result.</p>
Figure 11
<p>Evaluation of Sentinel-2 image estimates based on measured values at the in situ sites over rugged terrain: (<b>a</b>) ANN result; (<b>b</b>) KNN result; (<b>c</b>) RF result; (<b>d</b>) XGBT result.</p>
Figure 12
<p>Distribution of Sentinel-2 and MODIS black-sky albedo on the glacier, including (<b>a</b>) true-color imagery; (<b>b</b>) Sentinel-2 black-sky albedo; (<b>c</b>) MODIS black-sky albedo.</p>
Figure 13
<p>RMSE results for different sample quantities: (<b>a</b>) BSA result; (<b>b</b>) WSA result.</p>
16 pages, 3766 KiB  
Article
Comparison of Forest Restorations with Different Burning Severities Using Various Restoration Methods at Tuqiang Forestry Bureau of Greater Hinggan Mountains
by Guangshuai Zhao, Erqi Xu, Xutong Yi, Ye Guo and Kun Zhang
Remote Sens. 2023, 15(10), 2683; https://doi.org/10.3390/rs15102683 - 22 May 2023
Cited by 3 | Viewed by 1602
Abstract
Forest disturbances and restoration are key processes in carbon transmission between the terrestrial surface and the atmosphere. In boreal forests, fire is the most common and main disturbance. The reconstruction process for post-disaster vegetation plays an essential role in the restoration of a forest's structure and function, and it also maintains the ecosystem's health and stability. Remote sensing monitoring can reflect the dynamic post-fire features of vegetation. However, there are still major differences between remote sensing indices in terms of regional feasibility and sensitivity. In this study, the largest boreal primary coniferous forest area in China, the Greater Hinggan Mountains forest area, was chosen as the sampling area. Based on time series data from Landsat-5 TM surface reflectance (SR) and data obtained from sample plots, the burned area was extracted using the Normalized Burn Ratio (NBR). We used the pre- and post-fire difference values (dNBR) and compared them with survey data to classify the burn severity level. The Normalized Difference Vegetation Index (NDVI) (based on spectrum combination) and the Disturbance Index (DI) (based on the Tasseled-Cap transformation) were chosen to analyze the differences in burn severity and vegetation restoration observed using various methods according to their temporal variation from 1986 to 2011. The results are as follows: (1) The two remote sensing indices are both sensitive to fire and the burn severity level. When a fire occurred, the NDVI value for that year decreased dramatically while the DI value increased sharply. Alongside these findings, we observed that the range of variation and the restoration period of the two indices are significantly positively correlated with the degree of burn severity. (2) According to these two indices, natural vegetation restoration was faster than the restoration achieved using artificial methods. However, compared with the NDVI, the DI showed a clearer picture of restoration, as the restoration period the DI could evaluate was longer: the NDVI illustrated great changes in the burn severity in the 5 years post-fire, while the DI was able to show the changes for more than 20 years. Additionally, from the DI, one could identify felling activities carried out when the artificial restoration methods were initially applied. (3) From the sample-plot data, there were few differences in forest canopy density—the average was between 0.55 and 0.6—between the diverse severity levels and restoration methods after 33 years of recovery. The average diameter at breast height (DBH) and height values of trees in naturally restored areas decreased with the increase in burn severity, but the values were obviously higher than those in artificially restored areas. This indicates that both the burn severity level and the restoration methods have important effects on forest restoration, but the results may also have been affected by other factors. Full article
(This article belongs to the Special Issue Applications of Remote Sensing in Spatial Ecology)
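The NBR/dNBR workflow the abstract describes reduces to a pair of band-ratio formulas. A minimal sketch with made-up Landsat-5 TM reflectances; the severity breakpoints below are the widely used Key and Benson thresholds, an assumption, not necessarily the survey-calibrated classes of this study:

```python
import numpy as np

def nbr(nir, swir2):
    """Normalized Burn Ratio from NIR (TM band 4) and SWIR2 (TM band 7)."""
    return (nir - swir2) / (nir + swir2)

def classify_dnbr(dnbr):
    """Map dNBR to a burn-severity class: 0=unburned, 1=low ... 4=high."""
    breakpoints = [0.1, 0.27, 0.44, 0.66]  # commonly cited dNBR thresholds (assumed)
    return np.digitize(dnbr, breakpoints)

# Two example pixels: one severely burned, one essentially unchanged
pre_nir, pre_swir2 = np.array([0.45, 0.40]), np.array([0.10, 0.12])
post_nir, post_swir2 = np.array([0.20, 0.38]), np.array([0.30, 0.14])
dnbr = nbr(pre_nir, pre_swir2) - nbr(post_nir, post_swir2)  # pre minus post
severity = classify_dnbr(dnbr)
print(dnbr.round(3), severity)  # first pixel classifies as high severity, second as unburned
```

Burned surfaces lose NIR reflectance and gain SWIR reflectance, so NBR drops sharply after a fire and the pre-minus-post difference grows with severity.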
Figure 1
<p>Study area.</p>
Figure 2
<p>Spatial distribution of different forest restoration methods.</p>
Figure 3
<p>Spatial distribution of vegetation indices in 2011: (<b>a</b>) NDVI (Normalized Difference Vegetation Index); (<b>b</b>) DI (Disturbance Index).</p>
Figure 4
<p>Burn severity mapping: (<b>a</b>) <span class="html-italic">dNBR</span> values; (<b>b</b>) different burn severities.</p>
Figure 5
<p>Yearly NDVI (<b>a</b>) and DI (<b>b</b>) trends with different burn severity classes.</p>
Figure 6
<p>Yearly NDVI (<b>a</b>) and DI (<b>b</b>) trends with different forest restoration methods.</p>
20 pages, 5271 KiB  
Article
Landsat Satellite Image-Derived Area Evolution and the Driving Factors Affecting Hulun Lake from 1986 to 2020
by Wei Song, Yinglan A, Yuntao Wang and Baolin Xue
Remote Sens. 2023, 15(10), 2682; https://doi.org/10.3390/rs15102682 - 22 May 2023
Cited by 7 | Viewed by 1733
Abstract
The area fluctuation of lakes directly affects the stability of the surrounding ecological environment. Research on the area evolution of lakes and the driving factors affecting it plays an important role in sustainable water resource management. In this study, Hulun Lake, located in the Hulunbuir grassland, was taken as the research object. Based on remote sensing images of the Hulun Lake area from 1986 to 2020, MNDWI interpretation was used to obtain the pattern of change in lake surface area over a long time frame. Combining natural and anthropogenic factors, Pearson correlation analysis and principal component analysis were used to analyze the driving forces. The results showed that (1) in the past 35 years, the water surface area of Hulun Lake has decreased significantly. The dynamic change in water area could be divided into four stages. The areas with dramatic changes in water area are distributed mainly in the northeast and south of Hulun Lake. (2) In terms of natural factors, the meteorological factors based on evaporation and relative humidity, the runoff of rivers entering the lake, and the vegetation with medium-high coverage and medium-low coverage had significant effects. In terms of anthropogenic factors, the population had the most significant impact. The artificial water diversion project had different degrees of influence on the response of the Hulun Lake area change to natural factors. (3) Anthropogenic factors were the main driving force causing the rapid change in the Hulun Lake area from 2000 to 2016, explaining 48% of the change in the Hulun Lake area. These research results can provide a scientific basis for the development and utilization of water resources and sustainable development in the Hulun Lake area. Full article
(This article belongs to the Section Remote Sensing in Geology, Geomorphology and Hydrology)
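The lake-extent extraction and driving-force analysis above both rest on simple computations: the MNDWI band ratio for water masking and a Pearson correlation between lake area and candidate drivers. A small sketch with invented values (MNDWI > 0 is a common, assumed water threshold; the series below are not the study's data):

```python
import numpy as np

def mndwi(green, swir):
    """Modified Normalized Difference Water Index (green vs. SWIR)."""
    return (green - swir) / (green + swir)

green = np.array([0.18, 0.05, 0.20, 0.04])
swir = np.array([0.05, 0.25, 0.06, 0.30])
water_mask = mndwi(green, swir) > 0  # water reflects more in green than in SWIR
print(water_mask)

# Pearson r between a shrinking yearly lake area (km^2) and rising evaporation (mm)
area = np.array([2250.0, 2210.0, 2100.0, 1980.0, 1900.0])
evap = np.array([950.0, 980.0, 1040.0, 1100.0, 1150.0])
r = np.corrcoef(area, evap)[0, 1]
print(f"r = {r:.3f}")  # strongly negative for an evaporation-driven loss
```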
Figure 1
<p>Study area.</p>
Figure 2
<p>Comparative verification of water area extraction results (unit: km<sup>2</sup>): (<b>a</b>) MNDWI and (<b>b</b>) JRC.</p>
Figure 3
<p>Characteristics of temporal variation in Hulun Lake: (<b>a</b>) lake area and (<b>b</b>) water level and water volume.</p>
Figure 4
<p>Distribution of the water area of Hulun Lake in key years.</p>
Figure 5
<p>Distribution of meteorological factors in the Hulun Lake region.</p>
Figure 6
<p>Correlation coefficients between the water area of Hulun Lake and meteorological factors: (<b>a</b>) 1986–2019; (<b>b</b>) 1986–2008; (<b>c</b>) 2009–2019. (Lake area (LA), precipitation (Pre), temperature (Temp), relative humidity (RH), evaporation (Evap), vapor pressure (Vap), and wet day frequency (Wet)).</p>
Figure 7
<p>Dynamic changes in the water area and surface runoff in Hulun Lake from 2000 to 2020.</p>
Figure 8
<p>Annual variation in FVC in the Hulun Lake region from 1986 to 2020 (<b>a</b>) and M–K mutation test results (<b>b</b>).</p>
Figure 9
<p>Grading map of FVC in the Hulun Lake area.</p>
Figure 10
<p>Correlation coefficients between the area of Hulun Lake and the fractional vegetation cover: (<b>a</b>) 1986–2020; (<b>b</b>) 1986–2008; (<b>c</b>) 2009–2020.</p>
Figure 11
<p>The correlation coefficients between the area of Hulun Lake and anthropogenic factors.</p>
20 pages, 12719 KiB  
Article
Large-Scale Urban Heating and Pollution Domes over the Indian Subcontinent
by Trisha Chakraborty, Debashish Das, Rafiq Hamdi, Ansar Khan and Dev Niyogi
Remote Sens. 2023, 15(10), 2681; https://doi.org/10.3390/rs15102681 - 22 May 2023
Cited by 3 | Viewed by 3899
Abstract
The unique geographical diversity and rapid urbanization across the Indian subcontinent give rise to large-scale spatiotemporal variations in urban heating and air emissions. The complex relationship between geophysical parameters and anthropogenic activity is vital to understanding the urban environment. This study analyses the characteristics of heating events using aerosol optical depth (AOD) level variability across 43 urban agglomerations (UAs) with populations of a million or more, along with 13 industrial districts (IDs) and 14 biosphere reserves (BRs), in the Indian subcontinent. Pre-monsoon average surface heating was highest in the urban areas of the western (42 °C), central (41.9 °C), and southern parts (40 °C) of the Indian subcontinent. A high concentration of AOD in the eastern part of the Indo-Gangetic Plain, including the megacity of Kolkata (decadal average 0.708), was noted relative to other UAs over time. The statistically significant negative correlation (−0.51) between land surface temperature (LST) and AOD in urban areas during pre-monsoon time illustrates how aerosol loading impacts the surface radiation and has a net effect of reducing surface temperatures. Notable interannual variability was observed, with the pre-monsoon LST dropping in 2020 across most of the selected urban regions (approx. 89% of urban clusters) after being high in 2019 (for approx. 92% of urban clusters). The results indicate complex variability and correlations between LST and urban aerosol at large scales across the Indian subcontinent. These large-scale observations suggest a need for more in-depth analysis at city scales to understand the interplay and combined variability between physical and anthropogenic atmospheric parameters in mesoscale and microscale climates. Full article
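The per-region LST-AOD relationship reported above is a Pearson correlation paired with a significance test. A sketch on fabricated pre-monsoon samples (the −0.51 value is the study's; the numbers below only illustrate the test itself):

```python
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(42)
aod = rng.uniform(0.3, 0.8, size=40)               # pre-monsoon AOD samples (made up)
lst = 45.0 - 8.0 * aod + rng.normal(0.0, 0.8, 40)  # heavier aerosol loading -> cooler surface
r, p = pearsonr(aod, lst)                          # two-sided test against r = 0
print(f"r = {r:.2f}, p = {p:.2e}, significant at 5%: {p < 0.05}")
```

With 40 samples and a clear aerosol-dimming signal, the negative correlation comes out strong and significant, consistent with the mechanism the abstract describes.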
Figure 1
<p>Urban agglomerations: (UA1) Kolkata, (UA2) Asansol, (UA3) Dhanbad, (UA4) Ranchi, (UA5) Jamshedpur, (UA6) Patna, (UA7) Lucknow, (UA8) Kanpur, (UA9) Allahabad, (UA10) Varanasi, (UA11) Meerut, (UA12) Ghaziabad, (UA13) Agra, (UA14) Delhi, (UA15) Chandigarh, (UA16) Amritsar, (UA17) Jodhpur, (UA18) Gwalior, (UA19) Bhopal, (UA20) Indore, (UA21) Jabalpur, (UA22) Ahmedabad, (UA23) Surat, (UA24) Vadodara, (UA25) Rajkot, (UA26) Mumbai, (UA27) Pune, (UA28) Nasik, (UA29) Raipur, (UA30) Durg-Bhilainagar, (UA31) Hyderabad, (UA32) Vijayawada, (UA33) Bangalore, (UA34) Chennai, (UA35) Coimbatore, (UA36) Madurai, (UA37) Kannur, (UA38) Kozhikode, (UA39) Kollam, (UA40) Thiruvananthapuram, (UA41) Thrissur, (UA42) Kochi, (UA43) Malappuram; industrial districts: (ID1) Jalpaiguri, (ID2) Purnia, (ID3) Gorakhpur, (ID4) Cuttak, (ID5) Lucknow, (ID6) Kanpur, (ID7) Bareilly, (ID8) Gwalior, (ID9) Jabalpur, (ID10) Bhopal, (ID11) Nagpur, (ID12) Kota, (ID13) Hyderabad; and biosphere reserves: (BR1) Panna, (BR2) Sundarban, (BR3) Seshachalam, (BR4) Nilgiri, (BR5) Agasthyamalai, (BR6) Khanchendzonga, (BR7) Nokrek, (BR8) Nanda Devi, (BR9) Pachmarhi, (BR10) Achanakmar-Amarkantak, (BR11) Simlipal, (BR12) Manas, (BR13) Dibru-Saikhowa, (BR14) Dihang-Dibang in India.</p>
Figure 2
<p>Locations of the selected central points of the industrial regions, industrial districts, and biosphere reserves for aerosol optical depth (AOD) level analysis. Black boxes denote 1. Indo-Gangetic Plain, 2. Utkal Plain, 3. Andhra Plain.</p>
Figure 3
<p>Time-average map of pre-monsoon (March–May) aerosol optical depth (AOD) in India, 2010–2020.</p>
Figure 4
<p>(<b>a</b>) Time-series average map of the pre-monsoon aerosol optical depth (AOD) level (2010–2020); (<b>b</b>) pre-monsoon aerosol optical depth (AOD) difference map (2020–2010).</p>
Figure 5
<p>Spatiotemporal comparison of aerosol optical depth (AOD) levels of industrial regions and biosphere reserves. Respective year-wise AOD levels of BRs (a) BR8, (b) BR2, (c) BR6, (d) BR11, (e) BR13, (f) BR14, (g) BR12, (h) BR7, (i) BR1, (j) BR10, (k) BR9, (l) BR3, (m) BR4, (n) BR5 and industrial regions and industrial districts (1) Gurgaon-Delhi-Meerut, (2) Ambala-Amritsar, (3) Jaipur-Ajmer, (4) ID12, (5) ID7, (6) ID6, (7) ID5, (8) ID3, (9) Chotonagpur, (10) Hooghli, (11) Bhojpur-Munger, (12) ID2, (13) ID1, (14) ID4, (15) Brahmaputra Valley, (16) Bilaspur-Korba, (17) Indore-Dewas-Ujjain, (18) Durg-Raipur, (19) ID10, (20) ID9, (21) ID8, (22) Vishakhapatnam-Guntur, (23) Bangalore-Tamilnadu, (24) Kollam-Thiruvananthapuram, (25) Adilabad-Nizamabad, (26) Kolhapur-South Kannada, (27) Northern Malabar, (28) Middle Malabar, (29) Hyderabad, (30) Gujrat, (31) Mumbai-Pune, (32) ID11.</p>
Figure 6
<p>Time-average map of pre-monsoon (March–May) land surface temperature (LST) in India, 2010–2020.</p>
Figure 7
<p>Land surface temperature (LST) anomaly map showing increasing and decreasing trends during the pre-monsoon (March–May) season in India, 2010–2020.</p>
Figure 8
<p>Year-wise corresponding land surface temperature (LST) of urban agglomerations (UAs), industrial districts (IDs), and biosphere reserves (BRs). Respective year-wise LST of UAs and IDs (ID1, UA2, UA21, UA2, UA3, UA4, UA5, UA6, UA19, UA20, UA10, UA8, UA9, UA11, UA12, UA13, UA14, UA15, UA16, UA17, UA18, UA19, UA20, UA21, UA22, UA23, UA24, UA25, UA26, UA27, UA28, UA29, UA30, UA31, UA32, UA33, UA34, UA35, UA36, UA37, UA38, UA39, UA40, UA41, UA42, UA43, ID2, ID3, ID4, ID5, ID6, ID7, ID8, ID9, ID10, ID11, ID12) and BRs (a) BR14, (b) BR15, (c) BR12, (d) BR7, (e) BR6, (f) BR2, (g) BR1, (h) BR9, (i) BR10, (j) BR3, (k) BR4, (l) BR5, (m) BR11, (n) BR8.</p>
Figure 9
<p>Scatter plot showing the year-wise aerosol optical depth (AOD) and land surface temperature (LST) relationship in urban agglomerations (UAs) and biosphere reserves (BRs).</p>
18 pages, 7151 KiB  
Article
Cotton Seedling Detection and Counting Based on UAV Multispectral Images and Deep Learning Methods
by Yingxiang Feng, Wei Chen, Yiru Ma, Ze Zhang, Pan Gao and Xin Lv
Remote Sens. 2023, 15(10), 2680; https://doi.org/10.3390/rs15102680 - 22 May 2023
Cited by 9 | Viewed by 2469
Abstract
Cotton is one of the most important cash crops in Xinjiang, and timely seedling inspection and replenishment at the seedling stage are essential for cotton's late production management and yield formation. The background conditions of the cotton seedling stage are complex and variable, and deep learning methods are widely used to extract target objects from complex backgrounds. Therefore, this study takes seedling cotton as the research object and uses three deep learning algorithms, YOLOv5, YOLOv7, and CenterNet, for cotton seedling detection and counting, based on UAV multispectral images collected at six different times during the cotton seedling period, to develop a model applicable to the whole seedling period. The results showed that when tested with data collected at different times, YOLOv7 performed better overall in detection and counting, and the model trained on the T4 dataset performed better on each test set. The Precision, Recall, and F1-Score values with the best test results were 96.9%, 96.6%, and 96.7%, respectively, and the R2, RMSE, and RRMSE indexes were 0.94, 3.83, and 2.72%, respectively. In conclusion, the UAV multispectral images acquired about 23 days after cotton sowing (T4) with the YOLOv7 algorithm achieved rapid and accurate seedling detection and counting throughout the cotton seedling stage. Full article
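The scores quoted above combine detection metrics (Precision, Recall, F1 from true/false positives and misses) with count-regression metrics (R2, RMSE, and RRMSE as a percentage of the mean count). A sketch with illustrative numbers, not the study's data:

```python
import numpy as np

def prf1(tp, fp, fn):
    """Detection precision, recall, and F1 from confusion counts."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return precision, recall, f1

def count_metrics(true, pred):
    """R^2, RMSE, and relative RMSE (% of mean true count) for per-plot counts."""
    true, pred = np.asarray(true, float), np.asarray(pred, float)
    rmse = np.sqrt(np.mean((pred - true) ** 2))
    rrmse = 100.0 * rmse / true.mean()
    r2 = 1.0 - np.sum((true - pred) ** 2) / np.sum((true - true.mean()) ** 2)
    return r2, rmse, rrmse

p, r, f1 = prf1(tp=966, fp=31, fn=34)                                # illustrative counts
r2, rmse, rrmse = count_metrics([120, 150, 90, 200], [118, 155, 93, 196])
print(f"P={p:.3f} R={r:.3f} F1={f1:.3f}")
print(f"R2={r2:.3f} RMSE={rmse:.2f} RRMSE={rrmse:.2f}%")
```

With these invented confusion counts the P/R/F1 triple lands near the abstract's 96.9/96.6/96.7%, which shows how the three headline values relate to each other.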
Figure 1
<p>Study area overview: (<b>a</b>) is the map of Xinjiang, (<b>b</b>) is the map of Shihezi, (<b>c</b>) is the test site, and (<b>d</b>) is the multispectral imagery collected by the UAV.</p>
Figure 2
<p>Image processing flow chart: (<b>a</b>) is the original images of the red, green, and NIR bands; (<b>b</b>) is the image after three-band synthesis; (<b>c</b>) is the image after pre-processing and cropping to 640 × 640 pixels.</p>
Figure 3
<p>YOLOv5 network structure.</p>
Figure 4
<p>YOLOv7 network structure.</p>
Figure 5
<p>CenterNet network structure.</p>
Figure 6
<p>Model test results: (<b>a</b>) is YOLOv5, (<b>b</b>) is YOLOv7, and (<b>c</b>) is CenterNet.</p>
Figure 7
<p>The results of the YOLOv5, YOLOv7, and CenterNet counting tests at T1–T6: each row from top to bottom represents the time points T1–T6, and each column from left to right represents YOLOv5, YOLOv7, and CenterNet. (<b>a</b>,<b>d</b>,<b>g</b>,<b>j</b>,<b>m</b>,<b>p</b>) represent the test results of YOLOv5 from T1–T6; (<b>b</b>,<b>e</b>,<b>h</b>,<b>k</b>,<b>n</b>,<b>q</b>) represent the test results of YOLOv7 during T1–T6; and (<b>c</b>,<b>f</b>,<b>i</b>,<b>l</b>,<b>o</b>,<b>r</b>) represent the test results of CenterNet during T1–T6.</p>
Figure 8
<p>An example of the results from the three algorithms for counting cotton seedlings in the T4 dataset: (<b>a</b>) is the manually labeled cotton seedling image; (<b>b</b>–<b>d</b>) represent the counting results using the YOLOv5, YOLOv7, and CenterNet algorithms, respectively; yellow boxes indicate missed detections, and blue boxes indicate false positives.</p>
Figure 9
<p>Count results of the YOLOv7 model trained on the T4 dataset when applied to the other five datasets: (<b>a</b>,<b>c</b>,<b>e</b>,<b>g</b>,<b>i</b>) indicate manually labeled images in the T1, T2, T3, T5, and T6 datasets, respectively; (<b>b</b>,<b>d</b>,<b>f</b>,<b>h</b>,<b>j</b>) correspond to the count results of YOLOv7. The yellow box in the figure indicates a missed detection, and the blue box indicates a false detection. (<b>f</b>,<b>j</b>) indicate T3 and T6, which were cloudy at the time of image acquisition.</p>
14 pages, 2001 KiB  
Article
Establishing the Position and Drivers of the Eastern Andean Treeline with Automated Transect Sampling
by Przemyslaw Zelazowski, Stefan Jozefowicz, Kenneth J. Feeley and Yadvinder Malhi
Remote Sens. 2023, 15(10), 2679; https://doi.org/10.3390/rs15102679 - 22 May 2023
Viewed by 1870
Abstract
The eastern Andean treeline (EATL) is the world’s longest altitudinal ecotone and plays an important role in biodiversity conservation in the context of land use/cover and climate change. The purpose of this study was to assess to what extent the position of the tropical EATL (9°N–18°S) is in near-equilibrium with the climate, which determines its potential to adapt to climate change. On a continental scale, we have used land cover maps (MODIS MCD12) and elevation data (SRTM) to make the first-order assessment of the EATL position and continuity. For the assessment on a local scale and to address the three-dimensional nature of environmental change in mountainous environments, a novel method of automated delineation and assessment of altitudinal transects was devised and applied to Landsat-based forest maps (GLAD) and fine-resolution climatology (CHELSA). The emergence of a consistent longitudinal gradient of the treeline elevation over half of the EATL extent, which increases towards the equator by ~30 m and ~60 m per geographic degree from the south and north, respectively, serves as a first-order validation of the approach, while the local transects reveal a more nuanced aspect-dependent pattern. We conclude that the applied dual-scale approach with automated mass transect sampling allows for an improved understanding of treeline dynamics. Full article
(This article belongs to the Section Biogeosciences Remote Sensing)
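The ~30-60 m-per-degree treeline gradient reported above is, in essence, the slope of a least-squares fit of binned treeline elevation against latitude. A synthetic sketch (values invented around the abstract's southern-approach gradient, not the study's transect data):

```python
import numpy as np

rng = np.random.default_rng(1)
lat = np.arange(-18.0, 0.0, 2.5)  # southern-hemisphere latitude bins, degrees
# Treeline rising toward the equator at ~30 m per degree, plus scatter
elev = 3500.0 + 30.0 * (lat + 18.0) + rng.normal(0.0, 25.0, lat.size)
slope, intercept = np.polyfit(lat, elev, 1)  # meters of elevation per degree latitude
print(f"fitted gradient: {slope:.1f} m/degree")
```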
Figure 1
<p>Study area, encompassing the Northern Andes and the northern part of the Central Andes, where the tropical part of the eastern Andean treeline is located. The MODIS-based mask of evergreen broadleaf forests is shown in green. A–E are focus regions.</p>
Figure 2
<p>Two strategies to investigate the treeline position through automated delineation of transects. The top row shows transects against high-resolution Microsoft Bing Maps with an overlaid colored elevation map (SRTM). The bottom row shows transects against the forest mask derived from the Global Land Cover and Land Use Change dataset [<a href="#B27-remotesensing-15-02679" class="html-bibr">27</a>]. Elements marked in red—local elevation maxima within the forest mask (<b>left</b>) and the MODIS MCD12Q1 product-based treeline (<b>right</b>)—were used as anchors for transects.</p>
Figure 3
<p>Eastern Andean treeline of evergreen broadleaf forests and corresponding information on topography, temperature, and aridity, across ~30 degrees of latitude. All data points represent 2.5-degree-wide bins. Red points marked A–E represent the average treeline elevation in the focus regions, estimated through automated transect sampling.</p>
Figure 4
<p><b>Left</b>: Eastern Andean treeline marked on the elevation model (SRTM). A–E are focus regions. <b>Middle</b>: Focus regions with marked elevational transects. <b>Right</b>: The dependence between the aspect of forested slopes and the treeline elevation within focus regions.</p>
22 pages, 7128 KiB  
Article
Regional Variability of Raindrop Size Distribution from a Network of Disdrometers over Complex Terrain in Southern China
by Asi Zhang, Chao Chen and Lin Wu
Remote Sens. 2023, 15(10), 2678; https://doi.org/10.3390/rs15102678 - 21 May 2023
Cited by 2 | Viewed by 1726
Abstract
Raindrop size distribution (DSD) over the complex terrain of Guangdong Province, southern China, was studied using six disdrometers operated by the Guangdong Meteorology Service during the period 1 March 2018 to 30 August 2022 (~5 years). To analyze the long-term DSD characteristics over complex topography in southern China, three stations on the windward side, Haifeng, Enping and Qingyuan, and three stations on the leeward side, Meixian, Luoding and Xuwen, were utilized. The median mass-weighted diameter (Dm) value was higher on the windward than on the leeward side, and the windward-side stations also showed greater Dm variability. With regard to the median generalized intercept (log10Nw) value, the log10Nw values decreased from coastal to mountainous areas. Although there were some differences in the Dm, log10Nw and liquid water content (LWC) frequencies between the six stations, there were still some similarities, with the Dm, log10Nw and LWC frequencies all showing a single-peak curve. In addition, the diurnal variation of the mean log10Nw had a negative relationship with the Dm diurnal variation, although the inverse relationship was not particularly evident at the Haifeng site. The diurnal mean rainfall rate also peaked in the afternoon, exceeding the nighttime maximum, which indicated that strong land heating in the daytime significantly influenced the local DSD variation. Moreover, the number concentration of drops, N(D), showed an exponential shape which decreased monotonically for all rainfall rate types at the six observation sites, and an increase in diameter caused by increases in the rainfall rate was also noticeable. As the rainfall rate increased, the N(D) values for sites on the windward side (i.e., Haifeng, Enping and Qingyuan) were higher than for the sites on the leeward side (i.e., Meixian, Luoding and Xuwen), and the difference between them also became distinct. The abovementioned DSD characteristic differences also showed appreciable variability in convective precipitation between stations on the leeward side (i.e., Meixian, Luoding and Xuwen) and those on the windward side (Haifeng and Enping, but not Qingyuan). This study enhances the precision of numerical weather forecast models in predicting precipitation and helps verify the accuracy of precipitation measurements from remote sensing instruments, including ground-based weather radars. Full article
(This article belongs to the Special Issue Remote Sensing of Clouds and Precipitation at Multiple Scales II)
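The DSD parameters used throughout the abstract (Dm, log10Nw, LWC) are moments of the measured spectrum: Dm = M4/M3 with Mn = Σ N(D) D^n ΔD, and Nw is the normalized intercept in the widely used Bringi et al. convention. A sketch on a made-up exponential spectrum:

```python
import numpy as np

D = np.arange(0.25, 5.0, 0.25)   # drop-diameter bin centers, mm
dD = np.full_like(D, 0.25)       # bin widths, mm
N = 8000.0 * np.exp(-2.5 * D)    # N(D) in m^-3 mm^-1, exponential shape (invented)

m3 = np.sum(N * D**3 * dD)       # 3rd moment of the spectrum
m4 = np.sum(N * D**4 * dD)       # 4th moment of the spectrum
Dm = m4 / m3                                   # mass-weighted mean diameter, mm
lwc = (np.pi / 6000.0) * m3                    # liquid water content, g m^-3 (rho_w = 1 g cm^-3)
Nw = (4.0**4 / np.pi) * 1000.0 * lwc / Dm**4   # generalized intercept, m^-3 mm^-1
print(f"Dm = {Dm:.2f} mm, LWC = {lwc:.2f} g/m^3, log10Nw = {np.log10(Nw):.2f}")
```

For a pure exponential spectrum N(D) = N0 exp(-ΛD), Dm ≈ 4/Λ and Nw ≈ N0, which gives a quick sanity check on the implementation.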
Show Figures

Figure 1

Figure 1
<p>Guangdong province and its digital elevation model (DEM). The red symbols are the disdrometer locations used in this study. The locations of Nanling, Lianhua, Tianlu, Yunwu and Wuzhi Mountains are also displayed.</p>
Full article ">Figure 2
<p>Scatter plot of hourly accumulated rainfall collected from the HY-P1000 disdrometers and rain gauges from automatic weather stations.</p>
Full article ">Figure 3
<p>Box−and−whisker plot of (<b>a</b>) <span class="html-italic">D<sub>m</sub></span> and (<b>b</b>) log<sub>10</sub><span class="html-italic">N<sub>w</sub></span> distribution over the six different sites. The box represents the data between the 25th and 75th percentiles and the whiskers show the maximum and minimum values. The horizontal red line within the box represents the median value of the distribution.</p>
Full article ">Figure 4
<p>Frequency of (<b>a</b>) <span class="html-italic">D<sub>m</sub></span>, (<b>b</b>) log<sub>10</sub><span class="html-italic">N<sub>w</sub></span> and (<b>c</b>) <span class="html-italic">LWC</span> over the Meixian, Haifeng, Luoding, Enping, Xuwen and Qingyuan areas.</p>
Full article ">Figure 5
<p>Diurnal variation of the mean (<b>a</b>) <span class="html-italic">D<sub>m</sub></span> (<b>b</b>) log<sub>10</sub><span class="html-italic">N<sub>w</sub></span> and (<b>c</b>) rain rate over the Meixian, Haifeng, Luoding, Enping, Xuwen and Qingyuan areas.</p>
Full article ">Figure 6
<p>The number variation for every hour over the Meixian, Haifeng, Luoding, Enping, Xuwen and Qingyuan areas.</p>
Full article ">Figure 7
<p>Variation of the mean raindrop concentration and raindrop diameter (Diameter, mm) in Meixian, Haifeng, Luoding, Enping, Xuwen and Qingyuan over five years.</p>
Full article ">Figure 8
<p>Average raindrop spectra of Meixian, Haifeng, Luoding, Enping, Xuwen and Qingyuan for rainfall in 12 rainfall rate classes (<b>a</b>) C1: 0.1–0.2, (<b>b</b>) C2: 0.2–0.4, (<b>c</b>) C3: 0.4–0.7, (<b>d</b>) C4: 0.7–1.0, (<b>e</b>) C5: 1.0–2.0, (<b>f</b>) C6: 2.0–5.0, (<b>g</b>) C7: 5.0–8.0, (<b>h</b>) C8: 8.0–12.0, (<b>i</b>) C9: 12.0–18.0, (<b>j</b>) C10: 18.0–25.0, (<b>k</b>) C11: 25.0–40.0 and (<b>l</b>) C12: &gt;40 mm h<sup>−1</sup>.</p>
Full article ">Figure 8 Cont.
<p>Average raindrop spectra of Meixian, Haifeng, Luoding, Enping, Xuwen and Qingyuan for rainfall in 12 rainfall rate classes (<b>a</b>) C1: 0.1–0.2, (<b>b</b>) C2: 0.2–0.4, (<b>c</b>) C3: 0.4–0.7, (<b>d</b>) C4: 0.7–1.0, (<b>e</b>) C5: 1.0–2.0, (<b>f</b>) C6: 2.0–5.0, (<b>g</b>) C7: 5.0–8.0, (<b>h</b>) C8: 8.0–12.0, (<b>i</b>) C9: 12.0–18.0, (<b>j</b>) C10: 18.0–25.0, (<b>k</b>) C11: 25.0–40.0 and (<b>l</b>) C12: &gt;40 mm h<sup>−1</sup>.</p>
Full article ">Figure 9
<p>Distribution of (<b>a</b>) <span class="html-italic">D<sub>m</sub></span> and (<b>b</b>) log<sub>10</sub><span class="html-italic">N<sub>w</sub></span> for rainfall at Meixian, Haifeng, Luoding, Enping, Xuwen and Qingyuan in relation to rainfall rates (C1: 0.1–0.2, C2: 0.2–0.4, C3: 0.4–0.7, C4: 0.7–1.0, C5: 1.0–2.0, C6: 2.0–5.0, C7: 5.0–8.0, C8: 8.0–12.0, C9: 12.0–18.0, C10: 18.0–25.0, C11: 25.0–40.0 and C12: &gt;40 mm h<sup>−1</sup>).</p>
Full article ">Figure 10
<p>The scatter plots of D<sub>m</sub> vs. rain rate for (<b>a</b>) stratiform and (<b>b</b>) convective precipitation and log<sub>10</sub>N<sub>w</sub> vs. rain rate for (<b>c</b>) stratiform and (<b>d</b>) convective precipitation.</p>
Full article ">Figure 11
<p>Average raindrop spectra of stratiform and convective precipitation at Meixian, Haifeng, Luoding, Enping, Xuwen and Qingyuan stations. the dashed line represents the stratiform precipitation and the solid line represents the convective precipitation.</p>
Full article ">Figure 12
<p>Radar reflectivity <span class="html-italic">Z</span> and rainfall rate <span class="html-italic">R</span> relationship of stratiform and convective precipitation at Meixian, Haifeng, Luoding, Enping, Xuwen, Qingyuan stations and for all data.</p>
15 pages, 3351 KiB  
Article
Bias-Corrected RADARSAT-2 Soil Moisture Dynamics Reveal Discharge Hysteresis at An Agricultural Watershed
by Ju Hyoung Lee and Karl-Erich Lindenschmidt
Remote Sens. 2023, 15(10), 2677; https://doi.org/10.3390/rs15102677 - 21 May 2023
Cited by 2 | Viewed by 1866 | Correction
Abstract
Satellites are designed to monitor geospatial data over large areas at a catchment scale. However, most satellite validation work is conducted at local point scales that lack spatial representativeness. Even when point measurements are upscaled with a spatial average of several field stations, it is almost impossible to reproduce backscattering responses at pixel scales. Considering the influence of soil storage on watershed streamflow, we therefore suggest watershed-scale hydrological validation. In addition, to overcome the limitations of backscattering models that are widely used for C-band Synthetic Aperture Radar (SAR) soil moisture retrieval but apply to bare soils only, RADARSAT-2 soil moisture was stochastically retrieved in this study to correct vegetation effects arising from agricultural lands. Roughness-corrected soil moisture retrievals were assessed at various spatial scales over the Brightwater Creek basin (land cover: crop lands, gross drainage area: 1540 km2) in Saskatchewan, Canada. At the point scale, local station data showed that the Root Mean Square Errors (RMSEs), Unbiased RMSEs (ubRMSEs) and biases of Radarsat-2 were 0.06~0.09 m3/m3, 0.04~0.08 m3/m3 and 0.01~0.05 m3/m3, respectively, while 1 km Soil Moisture Active Passive (SMAP) retrievals showed underestimation, with RMSEs of 0.1~0.22 m3/m3 and biases of −0.036~−0.208 m3/m3. Although SMAP soil moisture better distinguished the contributing area at the catchment scale, Radarsat-2 soil moisture showed a better discharge hysteresis. A reliable estimation of the soil storage dynamics is more important for discharge forecasting than a static classification of contributing and noncontributing areas. Full article
(This article belongs to the Special Issue Radar Remote Sensing for Monitoring Agricultural Management)
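The point-scale skill scores quoted in the abstract (RMSE, ubRMSE, bias) are linked by ubRMSE² = RMSE² − bias². A sketch of their computation on made-up paired soil moisture series (the values are illustrative, not station data):

```python
import numpy as np

def soil_moisture_scores(retrieved, in_situ):
    """RMSE, unbiased RMSE and bias for paired soil moisture series (m3/m3)."""
    d = np.asarray(retrieved, float) - np.asarray(in_situ, float)
    bias = d.mean()
    rmse = np.sqrt((d ** 2).mean())
    ubrmse = np.sqrt(rmse ** 2 - bias ** 2)   # equivalently, the std of the error
    return rmse, ubrmse, bias

sat = [0.22, 0.30, 0.18, 0.27, 0.35]   # hypothetical retrievals
obs = [0.20, 0.26, 0.17, 0.24, 0.30]   # hypothetical in situ values
rmse, ubrmse, bias = soil_moisture_scores(sat, obs)
print(f"RMSE={rmse:.3f}  ubRMSE={ubrmse:.3f}  bias={bias:.3f}")
```

Because ubRMSE removes the systematic component, a retrieval can have a large RMSE driven mostly by bias yet still track the temporal dynamics well.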
Show Figures

Figure 1
<p>Bright Water Creek watershed: (<b>a</b>) local stations located in the study domain (numbers refer to the last two digits of station ID 27010XX), (<b>b</b>) DEM in meters [<a href="#B4-remotesensing-15-02677" class="html-bibr">4</a>] and (<b>c</b>) land cover.</p>
Full article ">Figure 2
<p>Validation of the soil moisture retrievals from the local stations (figures (<b>a</b>–<b>g</b>) are in the same order with <a href="#remotesensing-15-02677-t001" class="html-table">Table 1</a>).</p>
Figure 2 Cont.">
Full article ">Figure 3
<p>Spatial distribution on DoY 156: (<b>a</b>) Radarsat-2 soil moisture (m<sup>3</sup>/m<sup>3</sup>), (<b>b</b>) SMAP soil moisture (m<sup>3</sup>/m<sup>3</sup>) and (<b>c</b>) 500 m MODIS LAI on DoY 153. Contributing area is confined by the yellow line.</p>
Full article ">Figure 4
<p>Spatial distribution on DoY190: (<b>a</b>) Radarsat-2 soil moisture (m<sup>3</sup>/m<sup>3</sup>), (<b>b</b>) Radarsat-2 backscatters, (<b>c</b>) SMAP soil moisture (m<sup>3</sup>/m<sup>3</sup>) and (<b>d</b>) SMAP brightness temperature. Contributing area is confined by the yellow line.</p>
Full article ">Figure 5
<p>Areal averaged soil moisture over the contributing and noncontributing areas in comparison with the water levels at the riparian outlet: (<b>a</b>) Radarsat-2 and (<b>b</b>) SMAP.</p>
22 pages, 7575 KiB  
Article
A Novel Device-Free Positioning Method Based on Wi-Fi CSI with NLOS Detection and Bayes Classification
by Xingyu Zheng, Ruizhi Chen, Liang Chen, Lei Wang, Yue Yu, Zhenbing Zhang, Wei Li, Yu Pei, Dewen Wu and Yanlin Ruan
Remote Sens. 2023, 15(10), 2676; https://doi.org/10.3390/rs15102676 - 21 May 2023
Cited by 3 | Viewed by 2789
Abstract
Device-free wireless localization based on Wi-Fi channel state information (CSI) is an emerging technique that can estimate users’ indoor locations without invading their privacy or requiring special equipment. It deduces the position of a person by analyzing the person’s influence on the CSI of Wi-Fi signals. When pedestrians block the signals between the transceivers, non-line-of-sight (NLOS) transmission occurs. NLOS has been a significant factor restricting device-free positioning accuracy because of signal reduction and abnormalities during multipath propagation. To address this problem, we analyzed the NLOS effect in an indoor environment and found that the position error under the LOS condition differs from that under the NLOS condition. Two empirical models, namely a CSI passive positioning model and a CSI NLOS/LOS detection model, were then derived through extensive study; they yield more robust identification results under both NLOS and LOS conditions. An algorithm called SVM-NB (Support Vector Machine-Naive Bayes) is proposed to integrate the SVM NLOS detection model with the Naive Bayes fingerprint method to narrow the matching area and improve position accuracy. The NLOS identification precision is better than 97%. The proposed method achieves localization accuracies of 0.82 and 0.73 m in laboratory and corridor scenes, respectively. Compared to the Bayes method, our tests showed that positioning accuracy improved by 28.7% under the NLOS condition and by 26.2% under the LOS condition. Full article
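The two-stage idea behind SVM-NB — first decide NLOS vs. LOS, then run the Bayes fingerprint match only over reference points consistent with that state — can be sketched as follows. This is a toy illustration: the naive Bayes matcher is hand-rolled, a trivial threshold rule stands in for the paper's trained SVM detector, and all fingerprints, reference points and thresholds are made up.

```python
import numpy as np

class GaussianNBFingerprint:
    """Minimal Gaussian naive Bayes matcher over CSI amplitude fingerprints."""
    def fit(self, X, y):
        self.classes = np.unique(y)
        self.mu = np.array([X[y == c].mean(axis=0) for c in self.classes])
        self.var = np.array([X[y == c].var(axis=0) + 1e-6 for c in self.classes])
        return self

    def predict(self, X, candidates=None):
        # Restricting `candidates` is the "narrow the matching area" step.
        idx = (np.arange(len(self.classes)) if candidates is None
               else np.flatnonzero(np.isin(self.classes, candidates)))
        # Log-likelihood of each sample under each candidate reference point.
        ll = np.stack([
            -0.5 * (np.log(2 * np.pi * self.var[i])
                    + (X - self.mu[i]) ** 2 / self.var[i]).sum(axis=1)
            for i in idx])
        return self.classes[idx][ll.argmax(axis=0)]

rng = np.random.default_rng(0)
# Four fingerprint reference points: 0 and 1 sit on the LOS path,
# 2 and 3 are positions where the person blocks the link (NLOS).
centers = np.array([[1.0, 5.0], [2.0, 4.0], [8.0, 1.0], [9.0, 2.0]])
X = np.vstack([c + 0.1 * rng.standard_normal((50, 2)) for c in centers])
y = np.repeat(np.arange(4), 50)
model = GaussianNBFingerprint().fit(X, y)

# Stand-in for the paper's SVM NLOS detector: any binary classifier fits here;
# a trivial rule on the first CSI feature keeps the sketch self-contained.
def nlos_detected(x):
    return x[0] > 5.0

x_test = np.array([8.1, 1.1])
candidates = [2, 3] if nlos_detected(x_test) else [0, 1]
print("matched reference point:", model.predict(x_test[None, :], candidates)[0])
```

The design point is that the detector's output prunes the fingerprint database before matching, so LOS measurements are never matched against NLOS references and vice versa.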
Show Figures

Figure 1
<p>Device-free CSI fingerprinting position scheme.</p>
Full article ">Figure 2
<p>Framework of the fingerprint system with NLOS detection.</p>
Full article ">Figure 3
<p>The layout of Wi-Fi signal propagation.</p>
Full article ">Figure 4
<p>Cases of LOS and NLOS propagation of CSI.</p>
Full article ">Figure 5
<p>CSI amplitude in two cases: (<b>a</b>) NLOS condition amplitude and (<b>b</b>) LOS condition amplitude.</p>
Full article ">Figure 6
<p>Two training settings for the device-free position: (<b>a</b>) corridor experiment and (<b>b</b>) laboratory experiment.</p>
Full article ">Figure 7
<p>NLOS and LOS propagation cases.</p>
Full article ">Figure 8
<p>Experimental laboratory environment.</p>
Full article ">Figure 9
<p>Experimental corridor environment.</p>
Full article ">Figure 10
<p>Two testing settings for device-free position: (<b>a</b>) corridor experiment and (<b>b</b>) laboratory experiment.</p>
Full article ">Figure 11
<p>Comparison of four solutions position errors CDF in the laboratory scene.</p>
Full article ">Figure 12
<p>Comparison of four solutions position errors CDF in the corridor scene.</p>
Full article ">Figure 13
<p>Comparison of two solution test point average positions in corridor environment.</p>
Full article ">Figure 14
<p>Comparison of two solution test point average position in laboratory environment.</p>
Full article ">Figure 15
<p>Comparison of solution position errors under NLOS and LOS.</p>
Full article ">Figure 16
<p>Different router placements: (<b>a</b>) four routers placement information, (<b>b</b>) three routers placement information, (<b>c</b>) two routers placement information, and (<b>d</b>) one router placement information.</p>
Figure 16 Cont.">
Full article ">Figure 17
<p>The relationship between number of NLOS transmission paths and position error.</p>
19 pages, 7687 KiB  
Article
Accelerated Restoration of Vegetation in Wuwei in the Arid Region of Northwestern China since 2000 Driven by the Interaction between Climate and Human Beings
by Xin Li and Liqin Yang
Remote Sens. 2023, 15(10), 2675; https://doi.org/10.3390/rs15102675 - 21 May 2023
Cited by 7 | Viewed by 1784
Abstract
The Wuwei area in the arid region of northwestern China is impacted by the harsh natural environment and human activities, and the problem of ecological degradation there is severe. To ensure the sustainable development of the regional social economy, it is necessary to monitor the changes in vegetation in Wuwei and their nonlinear relationships with climate change and human activities. In this study, the inter-annual and spatial–temporal evolution characteristics of vegetation in Wuwei from 1982 to 2015 were analyzed based on non-parametric statistical methods. The analysis revealed that the areas of vegetation restoration and degradation accounted for 77% and 23% of the study area, respectively. From 1982 to 1999, vegetation degradation became extremely serious (14.4%) and was primarily concentrated in Gulang County and the high-altitude areas in the southwest. Since the ecological restoration project was implemented in 2000, vegetation restoration has achieved prominent results. The geographically and temporally weighted regression model shows that each climate factor has contributed to the vegetation restoration in the Wuwei area during the last 34 years, with their contributions ranked as precipitation (71.2%), PET (43.9%), solar radiation (34.8%), temperature (33.1%), and wind speed (31%). An analysis of 30 m resolution land-use data performed in this study revealed that land-cover conversion from 1985 to 2015 accounts for 14.9% of the total area, of which conversion from non-ecological land to ecological land accounts for 5.7%. The farmland, grassland, and woodland areas have increased by 20.1%, 20.6%, and 8.5%, respectively, indicating that human activities such as agricultural intensification and ecological restoration projects have played a crucial role in vegetation restoration. Full article
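Non-parametric trend analysis of NDVI series of this kind typically uses the Mann–Kendall S statistic, with Sen's slope giving the trend magnitude; a sketch under that assumption (the NDVI series below is made up, not Wuwei data):

```python
import numpy as np

def mann_kendall_s(series):
    """Mann-Kendall S statistic: positive values indicate an increasing trend."""
    x = np.asarray(series, float)
    s = 0
    for i in range(len(x) - 1):
        s += np.sign(x[i + 1:] - x[i]).sum()   # sign of every later-minus-earlier pair
    return int(s)

def sens_slope(series):
    """Sen's slope: median of all pairwise slopes (units per time step)."""
    x = np.asarray(series, float)
    slopes = [(x[j] - x[i]) / (j - i)
              for i in range(len(x) - 1) for j in range(i + 1, len(x))]
    return float(np.median(slopes))

ndvi = [0.30, 0.32, 0.31, 0.35, 0.36, 0.38, 0.37, 0.40]  # illustrative annual NDVI
print(mann_kendall_s(ndvi), sens_slope(ndvi))
```

Both statistics are rank- or median-based, so like the tests used in the paper they are insensitive to outliers and make no normality assumption about the NDVI series.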
Show Figures

Figure 1
<p>Flow chart of this study.</p>
Full article ">Figure 2
<p>Location of Wuwei prefecture, in northwest central Gansu province.</p>
Full article ">Figure 3
<p>The interannual evolution and abrupt change of NDVI throughout the growing season (April to October) in the whole study area during the period of 1982–2015.</p>
Full article ">Figure 4
<p>Change in NDVI trend among different land-use types in Wuwei region during the period of 1982–2015.</p>
Full article ">Figure 5
<p>The interannual change of climate factors throughout the growing season (April to October) in the whole study area during the period of 1982–2015.</p>
Full article ">Figure 6
<p>The spatial–temporal variations of NDVI trends in the three periods. (<b>a</b>) 1982–2015, (<b>b</b>) 1982–1999, and (<b>c</b>) 2000–2015.</p>
Full article ">Figure 7
<p>Spatial–temporal distribution diagram of the response of vegetation in the Wuwei area to climate change.</p>
Full article ">Figure 8
<p>The spatial–temporal distributions of land-use and land-cover type in 1985, 2000, 2005, 2010, and 2015 in Wuwei.</p>
Full article ">Figure 9
<p>Spatial diagram of the internal transition of NEL and EL during the period of 1982–2015 in Wuwei.</p>
Full article ">Figure 10
<p>The contributions of urbanization, ecological restoration, and agricultural expansion on NDVI’s increased trends during the period of 1982–2015.</p>
24 pages, 25319 KiB  
Article
Sensitivity Assessment of Land Desertification in China Based on Multi-Source Remote Sensing
by Yu Ren, Xiangjun Liu, Bo Zhang and Xidong Chen
Remote Sens. 2023, 15(10), 2674; https://doi.org/10.3390/rs15102674 - 21 May 2023
Cited by 6 | Viewed by 2905
Abstract
Desertification, a serious global environmental problem, has degraded ecosystems and the environment. China is extensively affected by desertification, with a total desertified land area of about 1.72 million km2. Estimating land desertification risks is the top priority for the sustainable development of arid and semi-arid lands in China. In this study, the Mediterranean Desertification and Land Use (MEDALUS) model was used to assess the sensitivity of land desertification in China. Based on multi-source remote sensing data, this study integrated natural and human factors, calculated the land desertification sensitivity index by overlaying four indicators (soil quality, vegetation quality, climate quality, and management quality), and explored the driving forces of desertification using principal component and correlation analyses. The spatial distribution of desertification sensitivity in China gradually decreases from northwest to southeast, and the areas with very high and high desertification sensitivities were about 620,629 km2 and 2,384,410 km2, respectively, together accounting for about 31.84% of the total area of the country. The very high and high desertification sensitivity areas were mainly concentrated in the desert region of northwest China. The principal component and correlation analysis of the sub-indicators in the MEDALUS model indicated that erosion protection, drought resistance, and land use were the main drivers of desertification in China, while the aridity index, soil pH, plant coverage, soil texture, precipitation, soil depth, and evapotranspiration were secondary drivers. Moreover, the desertification sensitivity caused by drought resistance, erosion protection, and land use was higher in the North China Plain region and Guanzhong Basin. The results of the quantitative analysis of the driving forces of desertification based on mathematical statistical methods provide a reference for a comprehensive strategy to combat desertification in China and offer new ideas for the assessment of desertification sensitivity at macroscopic scales. Full article
(This article belongs to the Special Issue Integrating Earth Observations into Ecosystem Service Models)
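In MEDALUS-type assessments the sensitivity index is commonly computed per pixel as the geometric mean of the four quality indices; a sketch under that assumption (the 2×2 rasters and their values are purely illustrative, not the paper's layers or weights):

```python
import numpy as np

def desertification_sensitivity(sqi, vqi, cqi, mqi):
    """Per-pixel sensitivity index as the geometric mean of the four
    MEDALUS quality indices (each conventionally scaled 1 = best .. 2 = worst)."""
    stack = np.stack([sqi, vqi, cqi, mqi]).astype(float)
    return np.prod(stack, axis=0) ** 0.25

# Illustrative 2x2 quality rasters in the conventional 1..2 range.
sqi = np.array([[1.0, 1.5], [1.2, 2.0]])   # soil quality
vqi = np.array([[1.0, 1.6], [1.3, 1.9]])   # vegetation quality
cqi = np.array([[1.1, 1.4], [1.2, 2.0]])   # climate quality
mqi = np.array([[1.0, 1.5], [1.1, 1.8]])   # management quality
dsi = desertification_sensitivity(sqi, vqi, cqi, mqi)
print(dsi.round(2))
```

Using a geometric rather than arithmetic mean means a single very poor layer pulls the index up strongly, which is the intended behavior when any one factor can drive degradation.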
Show Figures

Figure 1
<p>Study area overview. The map includes the types of deserts in China and the specific locations of the eight major deserts and four major sandy land areas. The data set was provided by the Environmental and Ecological Science Data Center for West China, National Natural Science Foundation of China (<a href="http://westdcwestgis.ac.cn" target="_blank">http://westdcwestgis.ac.cn</a>, accessed on 2 February 2022). Review Number: GS (2020) 4619.</p>
Full article ">Figure 2
<p>Flowchart of land desertification sensitivity assessment in China, modified from Ferrara et al. [<a href="#B28-remotesensing-15-02674" class="html-bibr">28</a>].</p>
Full article ">Figure 3
<p>Soil quality sub-index classification results. (<b>a</b>) The result after classifying and assigning weights to soil pH; (<b>b</b>) the result after classifying and assigning weights to rock fragments; (<b>c</b>) the result after classifying and assigning weights to terrain slope; (<b>d</b>) the result after classifying and assigning weights to soil texture; (<b>e</b>) the result after classifying and assigning weights to soil depth.</p>
Full article ">Figure 4
<p>Vegetation quality sub-index classification results. (<b>a</b>) The result after classifying and assigning weights to drought resistance; (<b>b</b>) the result after classifying and assigning weights to fire risk; (<b>c</b>) the result after classifying and assigning weights to erosion protection; (<b>d</b>) the result after classifying and assigning weights to plant cover.</p>
Full article ">Figure 5
<p>Climate quality sub-index classification results. (<b>a</b>) The result after classifying and assigning weights to evapotranspiration; (<b>b</b>) the result after classifying and assigning weights to the aridity index; (<b>c</b>) the result after classifying and assigning weights to precipitation.</p>
Full article ">Figure 6
<p>Management quality sub-index classification results. (<b>a</b>) The result after classifying and assigning weights to land use; (<b>b</b>) the result after classifying and assigning weights to population density.</p>
Full article ">Figure 7
<p>Spatial distribution of soil, vegetation, climate, and management quality indexes. (<b>a</b>) Soil quality index map; (<b>b</b>) vegetation quality index map; (<b>c</b>) climate quality index map; (<b>d</b>) management quality index map.</p>
Full article ">Figure 8
<p>Spatial distribution of desertification sensitivity index and grade. (<b>a</b>) Desertification sensitivity index map; (<b>b</b>) desertification sensitivity grade map. Grades 1–8 represent different levels of desertification sensitivity, with a higher grade indicating a greater risk of desertification.</p>
Full article ">Figure 9
<p>Principal component analysis score plot. VL indicates the sample points with very low sensitivity to desertification; L indicates the sample points with low sensitivity to desertification; M indicates the sample points with medium sensitivity to desertification; H indicates the sample points with high sensitivity to desertification; VH indicates the sample points with very high sensitivity to desertification.</p>
Full article ">Figure 10
<p>Plot of principal component analysis loadings. DSI, desertification sensitivity index; SQI, soil quality index; VQI, vegetation quality index; CQI, climate quality index; MQI, management quality index; TS, terrain slope; SP, soil pH; SD, soil depth; RF, rock fragments; ST, soil texture; FR, fire risk; DR, drought resistance; EP, erosion protection; PC, plant cover; AI, aridity index; PRE, precipitation; ETP, evapotranspiration; POP, population density; LU, land use.</p>
Full article ">Figure 11
<p>Heat map of desertification correlation in China. DSI, desertification sensitivity index; SQI, soil quality index; VQI, vegetation quality index; CQI, climate quality index; MQI, management quality index; TS, terrain slope; SP, soil pH; SD, soil depth; RF, rock fragments; ST, soil texture; FR, fire risk; DR, drought resistance; EP, erosion protection; PC, plant cover; AI, aridity index; PRE, precipitation; ETP, evapotranspiration; POP, population density; LU, land use. <span class="html-italic">p</span> &lt; 0.05 is a slightly significant correlation; <span class="html-italic">p</span> &lt; 0.01 is a significant correlation; <span class="html-italic">p</span> &lt; 0.001 is a very significant correlation.</p>
Full article ">Figure 12
<p>Localized areas of high desertification sensitivity.</p>
Full article ">Figure 13
<p>Heat map of desertification correlation in region A in <a href="#remotesensing-15-02674-f012" class="html-fig">Figure 12</a>. DSI, desertification sensitivity index; SQI, soil quality index; VQI, vegetation quality index; CQI, climate quality index; MQI, management quality index; TS, terrain slope; SP, soil pH; SD, soil depth; RF, rock fragments; ST, soil texture; FR, fire risk; DR, drought resistance; EP, erosion protection; PC, plant cover; AI, aridity index; PRE, precipitation; ETP, evapotranspiration; POP, population density; LU, land use. <span class="html-italic">p</span> &lt; 0.05 is a slightly significant correlation; <span class="html-italic">p</span> &lt; 0.01 is a significant correlation; <span class="html-italic">p</span> &lt; 0.001 is a very significant correlation.</p>
Full article ">Figure 14
<p>Heat map of desertification correlation in region B in <a href="#remotesensing-15-02674-f012" class="html-fig">Figure 12</a>. DSI, desertification sensitivity index; SQI, soil quality index; VQI, vegetation quality index; CQI, climate quality index; MQI, management quality index; TS, terrain slope; SP, soil pH; SD, soil depth; RF, rock fragments; ST, soil texture; FR, fire risk; DR, drought resistance; EP, erosion protection; PC, plant cover; AI, aridity index; PRE, precipitation; ETP, evapotranspiration; POP, population density; LU, land use. <span class="html-italic">p</span> &lt; 0.05 is a slightly significant correlation; <span class="html-italic">p</span> &lt; 0.01 is a significant correlation; <span class="html-italic">p</span> &lt; 0.001 is a very significant correlation.</p>
19 pages, 4746 KiB  
Article
An Efficient Cloud Classification Method Based on a Densely Connected Hybrid Convolutional Network for FY-4A
by Bo Wang, Mingwei Zhou, Wei Cheng, Yao Chen, Qinghong Sheng, Jun Li and Li Wang
Remote Sens. 2023, 15(10), 2673; https://doi.org/10.3390/rs15102673 - 21 May 2023
Cited by 5 | Viewed by 2001
Abstract
Understanding atmospheric motions and projecting climate changes depends significantly on cloud types, i.e., different cloud types correspond to different atmospheric conditions, and accurate cloud classification can help direct forecasts and meteorology-related studies more effectively. However, accurate classification of clouds is challenging and often requires manual involvement due to complex cloud forms and dispersion. To address this challenge, this paper proposes an improved cloud classification method based on a densely connected hybrid convolutional network. A dense connection mechanism is applied to hybrid three-dimensional convolutional neural network (3D-CNN) and two-dimensional convolutional neural network (2D-CNN) architectures to fully use the feature information of the spatial and spectral channels of the FY-4A satellite. With the proposed network, cloud categorization solutions with a high temporal resolution, extensive coverage, and high accuracy can be obtained without any human intervention. The proposed network is verified experimentally, and the results show that it can perform real-time classification tasks for seven different types of clouds and clear skies in the Chinese region. Using the CloudSat 2B-CLDCLASS product as the test target, the proposed network achieves an overall accuracy of 95.2% and a recall of more than 82.9% for all types of samples, outperforming the other deep-learning-based techniques. Full article
(This article belongs to the Special Issue Computer Vision and Image Processing in Remote Sensing)
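The overall accuracy and per-class recall used to evaluate such a classifier come straight from the confusion matrix (OA = trace/total; recall = diagonal over row sums). A sketch on a toy three-class labeling (the classes and labels are made up, not the paper's eight categories):

```python
import numpy as np

def confusion_metrics(y_true, y_pred, n_classes):
    """Overall accuracy and per-class recall from integer label arrays."""
    cm = np.zeros((n_classes, n_classes), dtype=int)
    np.add.at(cm, (np.asarray(y_true), np.asarray(y_pred)), 1)  # rows: truth
    oa = np.trace(cm) / cm.sum()
    recall = np.diag(cm) / cm.sum(axis=1)   # diagonal normalised by row totals
    return oa, recall

# Toy 3-class example (e.g. clear sky / thin cloud / deep convection).
y_true = [0, 0, 0, 1, 1, 1, 2, 2, 2, 2]
y_pred = [0, 0, 1, 1, 1, 1, 2, 2, 2, 0]
oa, recall = confusion_metrics(y_true, y_pred, 3)
print(oa, recall)
```

Reporting the minimum per-class recall alongside OA, as the abstract does, guards against a model that scores well overall by ignoring rare classes.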
Show Figures

Figure 1
<p>Examples of the advanced geostationary radiation imager’s 14-channel observation images.</p>
Full article ">Figure 2
<p>Illustration of the data preprocessing process.</p>
Full article ">Figure 3
<p>Illustration of the selected area and the observation area of the FY-4A satellite.</p>
Full article ">Figure 4
<p>Basic 2D-CNN structure.</p>
Full article ">Figure 5
<p>Schematics of the convolution calculation: (<b>a</b>) 2D-CNN; (<b>b</b>) 3D-CNN.</p>
Full article ">Figure 6
<p>The dense connection mechanism of DenseNet.</p>
Full article ">Figure 7
<p>Flowchart of our proposed model.</p>
Full article ">Figure 8
<p>Using the 3D-CNN for extracting the spatial–spectral features.</p>
Full article ">Figure 9
<p>Hybrid convolutional layers with a dense connection.</p>
Full article ">Figure 10
<p>Image data for model training. The two diagrams on the left show the raw FY-4A data and the matching points with the corresponding colored boxes marked with the label positions, selected according to the color scale in the legend. The figure on the right shows a zoomed-in example of the training set data from the left figure.</p>
Full article ">Figure 11
<p>Classification results obtained under different experimental conditions: (<b>a</b>) Classification results obtained at different spatial sizes; (<b>b</b>) classification results obtained for different spectral dimensions of the output. In (<b>b</b>) it can be seen that, starting from four spectral dimensions, the OA reached the maximum for eight spectral dimensions. With a further increase in the spectral dimension based on eight spectral dimensions, the OA started to decrease. This indicated that with the increase in spectral dimension number, the noise information also increased gradually, which affected the acquisition of spectral feature information and led to a decrease in classification accuracy. According to the experimental results, the number of spectral channel dimensions of the 3D-CNN output feature map was set to eight.</p>
Full article ">Figure 12
<p>Original FY-4A satellite cloud image data collected at 5:45 a.m. on 21 May 2018.</p>
Full article ">Figure 13
<p>Classification maps obtained using classification methods on the dataset: (<b>a</b>) 2D-CNN; (<b>b</b>) 3D-CNN; (<b>c</b>) HybridSN; (<b>d</b>) UNet; (<b>e</b>) U2Net; (<b>f</b>) proposed DCHCN.</p>
21 pages, 4002 KiB  
Article
An Enhanced Storm Warning and Nowcasting Model in Pre-Convection Environments
by Zheng Ma, Zhenglong Li, Jun Li, Min Min, Jianhua Sun, Xiaocheng Wei, Timothy J. Schmit and Lidia Cucurull
Remote Sens. 2023, 15(10), 2672; https://doi.org/10.3390/rs15102672 - 20 May 2023
Cited by 1 | Viewed by 2047
Abstract
A storm tracking and nowcasting model was developed for the contiguous US (CONUS) by combining observations from the advanced baseline imager (ABI) and numerical weather prediction (NWP) short-range forecast data, along with the precipitation rate from CMORPH (the Climate Prediction Center morphing technique). A random-forest-based model was adopted, using the maximum precipitation rate as the benchmark for convection intensity, with the location and time of storms optimized by using optical flow (OF) and continuous tracking. Comparative evaluations showed that the optimized models had higher accuracy for severe storms with areas equal to or larger than 5000 km2 despite smaller samples, and lower accuracy for cases smaller than 1000 km2, while models with sample balancing applied showed higher probabilities of detection (PODs). A typical convective event from August 2019 is presented to illustrate the application of the nowcasting model to local severe storm (LSS) identification and warnings in the pre-convection stage; the model successfully provided warnings with a lead time of 1–2 h before heavy rainfall. Importance score analysis showed that the overall impact from ABI observations was much higher than that from NWP, with the brightness temperature difference between 6.2 and 10.3 microns ranking at the top in terms of feature importance. Full article
(This article belongs to the Section Atmospheric Remote Sensing)
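POD, FAR and CSI are standard contingency-table scores: with hits H, misses M and false alarms F, POD = H/(H+M), FAR = F/(H+F) and CSI = H/(H+M+F). A sketch with made-up counts (not the paper's verification numbers):

```python
def forecast_scores(hits, misses, false_alarms):
    """Categorical verification scores from a 2x2 contingency table."""
    pod = hits / (hits + misses)                 # probability of detection
    far = false_alarms / (hits + false_alarms)   # false alarm ratio
    csi = hits / (hits + misses + false_alarms)  # critical success index
    return pod, far, csi

pod, far, csi = forecast_scores(hits=80, misses=20, false_alarms=40)
print(pod, round(far, 3), round(csi, 3))
```

CSI penalizes both misses and false alarms at once, which is why a sample-balanced model can raise POD while leaving CSI roughly unchanged.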
Show Figures

Figure 1
<p>Tracked convective storm candidates at 1940 UTC on 07/01/2018 over CONUS. Subplots are (<b>a</b>–<b>c</b>) the three consecutive brightness temperature (unit: K) imagery of band 10.3 μm from ABI, (<b>d</b>,<b>e</b>) the maximum cooling rates (unit: K/h) calculated from tracked cloud candidates that exceeded −16 K/h, and (<b>f</b>) the identified potential convective candidates (masked by orange) to be monitored and nowcasted through the RF framework.</p>
Full article ">Figure 2
<p>Schematics of the application of optical flow and area overlapping in collocating precipitation data for fast-moving cloud candidates. For a moving cloud that has some overlap with its original area at the time of identification, (<b>a</b>) SWIPEv1 [<a href="#B29-remotesensing-15-02672" class="html-bibr">29</a>] searches for the maximum rain rate within the overlapped area, while (<b>c</b>) the enhanced SWIPEv2 is able to locate the whole area of the precipitation cloud through application of area overlapping with actual ABI imagery at t4 in the collocation process. For a moving cloud that does not have any overlapped area with its original area, (<b>b</b>) SWIPEv1 fails to find the corresponding rain rate, while (<b>d</b>) the enhanced SWIPEv2 is able to collocate the ABI image at t4 with an estimated cloud area provided by OF, and successfully find the whole area of the target precipitation cloud.</p>
Full article ">Figure 3
<p>The flowchart of the tracking and collocation framework of (<b>a</b>) SWIPEv1; and (<b>b</b>) SWIPEv2 to build the training dataset. Here, t1, t2 and t3 are times for (<b>a</b>) AHI; (<b>b</b>) ABI CONUS images with a time interval of (<b>a</b>) 10 min; (<b>b</b>) 5 min. In addition, t4, t5 and t6 are times for CMORPH precipitation data with a time interval of 30 min. Note that the continuous tracking for CMORPH precipitation at t5 and t6 enhances the chance to capture the true storm intensity based on the precipitation rate.</p>
Full article ">Figure 4
<p>(<b>a</b>) Sample sizes of non-severe and severe cases for small-area group (areas smaller than 1000 km<sup>2</sup>), medium-area group (areas in the range of 1000 to 5000 km<sup>2</sup>) and large-area group (areas no less than 5000 km<sup>2</sup>) in the validation dataset. POD, FAR and CSI scores for the severe cases with (<b>b</b>) small-area group, (<b>c</b>) medium-area group and (<b>d</b>) large-area group, respectively. Refer to <a href="#remotesensing-15-02672-t003" class="html-table">Table 3</a> for the definition of the four scenarios.</p>
Full article ">Figure 5
<p>Two convective cloud candidates that merged into a severe convective storm case tracked by the enhanced SWIPEv2 model on 7 August 2019 in Wisconsin. The first row shows the greyscale BT of ABI’s 10.3 μm channel, overlaid with the results predicted by the enhanced SWIPEv2 model with ‘severe’ indicating unanimous classification decisions of severe from both models 3CBA and 2CBA, while ‘potential’ indicates split decisions from the two models with severe predicted by only one of them. The second row shows the colored BT(K) from ABI’s 10.3 μm channel. The third row shows the precipitation rate (mm/h) at the given times from CMORPH, with red circles marking the precipitation associated with the tracked storm system.</p>
17 pages, 4756 KiB  
Article
Detecting Individual Plants Infected with Pine Wilt Disease Using Drones and Satellite Imagery: A Case Study in Xianning, China
by Peihua Cai, Guanzhou Chen, Haobo Yang, Xianwei Li, Kun Zhu, Tong Wang, Puyun Liao, Mengdi Han, Yuanfu Gong, Qing Wang and Xiaodong Zhang
Remote Sens. 2023, 15(10), 2671; https://doi.org/10.3390/rs15102671 - 20 May 2023
Cited by 11 | Viewed by 2926
Abstract
In recent years, remote sensing techniques such as satellite and drone-based imaging have been used to monitor Pine Wilt Disease (PWD), a widespread forest disease that causes the death of pine species. Researchers have explored the use of remote sensing imagery and deep learning algorithms to improve the accuracy of PWD detection at the single-tree level. This study introduces a novel framework for PWD detection that combines high-resolution RGB drone imagery with free-access Sentinel-2 satellite multi-spectral imagery. The proposed approach includes a PWD-infected tree detection model named YOLOv5-PWD and an effective data augmentation method. To evaluate the proposed framework, we collected data and created a dataset in Xianning City, China, consisting of object detection samples of infected trees at the middle and late stages of PWD. Experimental results indicate that the YOLOv5-PWD detection model achieved 1.2% higher mAP than the original YOLOv5 model, and a further improvement of 1.9% mAP was observed after applying our dataset augmentation method, demonstrating the effectiveness and potential of the proposed framework for PWD detection. Full article
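The data augmentation the abstract refers to (the later figure captions describe synthesizing samples by pasting infected-tree image patches into other imagery and "adding weights" near the target frame) can be illustrated with a feathered alpha-blend paste. This is a hypothetical sketch: the function names and the linear feathering are assumptions for illustration, not the paper's actual "Weighted synthesis" method:

```python
import numpy as np

def feather_mask(h, w, border=0.25):
    """Linear alpha ramp: 1.0 in the patch interior, falling to 0.0 at the edges."""
    ys = np.minimum(np.arange(h), np.arange(h)[::-1]) / max(h - 1, 1)
    xs = np.minimum(np.arange(w), np.arange(w)[::-1]) / max(w - 1, 1)
    alpha = np.minimum.outer(ys, xs) / border  # distance-to-edge, scaled
    return np.clip(alpha, 0.0, 1.0)

def weighted_paste(background, patch, top, left):
    """Blend `patch` into `background` at (top, left) using a feathered weight,
    so the synthetic tree sample has no hard seam against the new backdrop."""
    h, w = patch.shape[:2]
    out = background.astype(np.float32).copy()
    a = feather_mask(h, w)[..., None]  # (h, w, 1) broadcasts over RGB channels
    region = out[top:top + h, left:left + w]
    out[top:top + h, left:left + w] = a * patch + (1.0 - a) * region
    return out.astype(background.dtype)
```

A direct paste would simply overwrite the region; the weighted variant keeps the patch interior intact while blending its border into the background, which is the qualitative difference between the two synthesis strategies shown later in Figure 6.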
Figure 1
<p>Overview of the study areas. The left and top-right panels show the locations of the study areas in Xianning City, Hubei Province, China. (<b>A</b>,<b>B</b>) are UAV orthophoto maps of the two study areas.</p>
Figure 2">
Figure 2
<p>The process of dataset labeling and checking.</p>
Figure 3">
Figure 3
<p>Examples of UAV images of middle-stage infected trees and late-stage infected trees. The boxes in the images indicate the PWD-affected trees. (<b>a</b>) Middle-stage infected trees are generally yellow-green, yellow and orange. (<b>b</b>) Late-stage infected trees are generally orange-red and red.</p>
Figure 4">
Figure 4
<p>The network architecture of the YOLOv5-PWD model (X * Y denotes module Y repeated X times).</p>
Figure 5">
Figure 5
<p>The technical process of synthesizing samples by combining satellite and UAV images.</p>
Figure 6">
Figure 6
<p>An example of synthetic images produced with different synthesis strategies. (<b>a</b>) Without infected trees; (<b>b</b>) direct synthesis; (<b>c</b>) weighted synthesis.</p>
Figure 7">
Figure 7
<p>Diagram of adding weights. (<b>a</b>) Schematic diagram of the relevant points in Formula (<a href="#FD2-remotesensing-15-02671" class="html-disp-formula">2</a>). (<b>b</b>) The target frame image after adding weights.</p>
Figure 8">
Figure 8
<p>The P-R curves of the YOLOv5 and YOLOv5-PWD models on the test set. (<b>a</b>) Detection results for middle-stage infected trees; (<b>b</b>) detection results for late-stage infected trees; (<b>c</b>) detection results for both classes of infected trees.</p>
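The mAP figures quoted in the abstract are derived from P-R curves like these: average precision is the area under a precision-recall curve after applying a monotone precision envelope. A minimal sketch of the standard all-point interpolation (a common convention, e.g. COCO-style; not the paper's evaluation code):

```python
import numpy as np

def average_precision(recall, precision):
    """Area under the P-R curve using the monotone precision envelope
    (all-point interpolation)."""
    # pad so the curve spans recall 0..1
    r = np.concatenate(([0.0], np.asarray(recall, dtype=float), [1.0]))
    p = np.concatenate(([0.0], np.asarray(precision, dtype=float), [0.0]))
    # make precision non-increasing when scanned from high recall to low
    p = np.maximum.accumulate(p[::-1])[::-1]
    # sum rectangle areas wherever recall steps forward
    idx = np.where(r[1:] != r[:-1])[0]
    return float(np.sum((r[idx + 1] - r[idx]) * p[idx + 1]))
```

mAP is then simply the mean of these per-class AP values, e.g. over the middle-stage and late-stage infected-tree classes in panels (a) and (b).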
Full article ">Figure 9
<p>Visualization of sample data distribution of our dataset in two-dimensional space by t-SNE.</p>
">