Remote Sens., Volume 13, Issue 9 (May-1 2021) – 260 articles

Cover Story: Meeting the demands of food supply and environmental protection in rocky desertified regions (e.g., Southwest China) is of great significance for enhancing human wellbeing. Herein, we calculated the yield gap for six main crop species in Guizhou Province and simulated crop yield using ensembled artificial neural networks. We also tested the influence of adjusting the quantity of local fertilization and irrigation on crop production in Guizhou Province. Results showed that the total yield of the selected crops had, on average, reached over 72.5% of the theoretical maximum yield. For most crop species, the bonus of fertilization intensification has reached “stagnation”. By contrast, increasing irrigation tended to be more consistently effective at increasing crop yield.
  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive table of contents of newly released issues.
  • PDF is the official format for papers published in both HTML and PDF forms. To view the papers in PDF format, click on the “PDF Full-text” link and use the free Adobe Reader to open them.
18 pages, 28843 KiB  
Article
Lifting Scheme-Based Sparse Density Feature Extraction for Remote Sensing Target Detection
by Ling Tian, Yu Cao, Zishan Shi, Bokun He, Chu He and Deshi Li
Remote Sens. 2021, 13(9), 1862; https://doi.org/10.3390/rs13091862 - 10 May 2021
Cited by 2 | Viewed by 3038
Abstract
The design of backbones is of great significance for enhancing the location and classification precision in remote sensing target detection tasks. Recently, various approaches have been proposed for altering the feature extraction density in backbones to enlarge the receptive field, make features prominent, and reduce computational complexity, such as dilated convolution and deformable convolution. Among them, one of the most widely used methods is strided convolution, but it loses information about adjacent feature points, which leads to the omission of some useful features and a decrease in detection precision. This paper proposes a novel sparse density feature extraction method based on the relationship between the lifting scheme and convolution, which improves detection precision while keeping the computational complexity almost the same as that of strided convolution. Experimental results on remote sensing target detection indicate that our proposed method improves both detection performance and network efficiency.
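The relationship between strided convolution and the lifting scheme can be illustrated in one dimension: a strided convolution slides a window with stride two, while the lifting scheme first splits the signal into even and odd samples and then combines them, which is the entry point of the split/prediction/update factorization described in the paper. Below is a minimal numpy sketch (our own illustration, not the authors' code) showing that one such split-and-combine factorization of a 3-tap kernel reproduces the stride-2 convolution exactly.

```python
# Minimal 1D sketch of the lifting-scheme view of strided convolution.
import numpy as np

x = np.arange(10, dtype=float)      # input signal
h0, h1, h2 = 0.25, 0.5, 0.25        # 3-tap kernel h = [h0, h1, h2]

# Strided convolution (stride 2): y[n] = h0*x[2n] + h1*x[2n+1] + h2*x[2n+2]
y_strided = np.array([h0 * x[2 * n] + h1 * x[2 * n + 1] + h2 * x[2 * n + 2]
                      for n in range((len(x) - 2) // 2)])

# Lifting-style computation: split into even/odd samples first ...
x_even, x_odd = x[0::2], x[1::2]
# ... then combine the two branches (one possible predict/update grouping:
# the odd branch contributes h1, the even branch h0 and h2).
y_lifting = h1 * x_odd[:-1] + h0 * x_even[:-1] + h2 * x_even[1:]

assert np.allclose(y_strided, y_lifting)   # identical outputs
```

Because the split keeps both polyphase branches explicitly, the lifting formulation gives the network a place to retain information from the branch that plain strided convolution would simply discard.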
Figure 1. Typical remote sensing target detection flowchart. There are mainly two types of target detection networks: two-stage networks (containing both Stage 1 in the dashed box and Stage 2) and one-stage networks (containing Stage 2 only). For both, convolutional neural networks are the most common backbones for feature extraction. Vanilla convolution with a stride of one is used repeatedly in the backbone for normal density feature extraction. The sparse density feature extraction module serves receptive field enlargement and dimension reduction; concrete existing approaches include a vanilla convolutional layer followed by a pooling layer (denoted as the downsampling-after-extraction approach in this paper), dilated convolution, deformable convolution, and the most frequently used method, strided convolution.
Figure 2. The downsampling-after-extraction method is generally implemented with a vanilla convolutional layer and a pooling layer, while the lifting scheme is an alternative implementation. The symbols x, h, and y denote the input signal, convolutional kernel, and output signal, respectively. The lifting scheme contains three steps: split, prediction, and update. The two methods yield the same x and y theoretically, and h = [h0, h1, h2] in (a) is transformed into the prediction and update operators as shown in (b).
Figure 3. Two-stage detection network flowchart based on the lifting scheme layer. Unlike existing two-stage detection networks, the lifting scheme layer is used as the sparse density feature extraction module in the backbone CNN, where it is superior in detection precision to the frequently used strided convolution. A series of candidate boxes is generated in the first stage and classified in the second stage. A candidate box that is classified as a ship and exceeds the IoU threshold is retained in the output.
Figure 4. Samples from the datasets. Three remote sensing ship datasets are used in the experiments to evaluate the proposed method: two SAR datasets, SSDD and AIR-SarSHIP-1.0, and one optical remote sensing image dataset, DOTA-ship.
Figure 5. Precision-recall curves on three remote sensing image datasets for two pairs of detection networks: (1) Cascade R-CNN and Cascade R-CNN-LS; (2) Faster R-CNN and Faster R-CNN-LS. For all datasets, Cascade R-CNN-LS consistently performs best, and the lifting scheme-based method is superior to strided convolution with both the Cascade R-CNN and Faster R-CNN baselines.
Figure 6. Detection results on SSDD samples. The 1st column displays the ground truth samples, while the 2nd and 3rd columns are the detection results of Cascade R-CNN and Cascade R-CNN-LS, respectively. The lifting scheme-based network tends to detect small objects better than strided convolution, as the 2nd and 5th rows show.
Figure 7. Detection results on AIR-SarSHIP-1.0 samples. The 1st column displays the ground truth samples, while the 2nd and 3rd columns are the detection results of Cascade R-CNN and Cascade R-CNN-LS, respectively. The lifting scheme-based network tends to detect small objects better than strided convolution, as the 1st and 3rd rows show.
Figure 8. Detection results on DOTA-ship samples. The 1st column displays the ground truth samples, while the 2nd and 3rd columns are the detection results of Cascade R-CNN and Cascade R-CNN-LS, respectively. The lifting scheme-based network detects targets correctly in these samples, while Cascade R-CNN fails to detect some small, middle, and large size targets and produces false alarms.
25 pages, 32847 KiB  
Article
Analysis of Activity in an Open-Pit Mine by Using InSAR Coherence-Based Normalized Difference Activity Index
by Jihyun Moon and Hoonyol Lee
Remote Sens. 2021, 13(9), 1861; https://doi.org/10.3390/rs13091861 - 10 May 2021
Cited by 14 | Viewed by 4011
Abstract
In this study, time-series of Sentinel-1A/B Interferometric Synthetic Aperture Radar (InSAR) coherence images were used to monitor the mining activity of Musan open-pit mine, the largest iron mine in North Korea. First, the subtraction of SRTM DEM (2000) from TanDEM-X DEM (2010–2015) has identified two major accumulation areas, one in the east (+112.33 m) and the other in the west (+84.03 m), and a major excavation area (−42.54 m) at the center of the mine. A total of 89 high-quality coherence images with a 12-day baseline from 2015 to 2020 were converted to the normalized difference activity index (NDAI), a newly developed activity indicator robust to spatial and temporal decorrelation. An RGB composite of annually averaged NDAI maps (red for 2019, green for 2018, and blue for 2017) showed that overall activity has diminished since 2018. Dumping slopes were categorized into shrinking, expanding, or transitional, according to the color pattern. Migration and expansion of excavation sites were also found on the pit floor. Time series of 12-day NDAI graphs revealed the date of activities with monthly accuracy. It is believed that NDAI with continuous acquisition of Sentinel-1A/B data can provide detailed monitoring of various types of activities in open-pit mines especially with limited in situ data. Full article
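For readers unfamiliar with coherence-based activity indices, a rough sketch follows. The paper defines NDAI precisely; here we only assume it follows the usual normalized-difference pattern, contrasting each pixel's coherence with a reference coherence from stable points, so that decorrelating (active) surfaces score high. The function below is an illustrative assumption, not the published formula.

```python
# Hypothetical sketch of an NDAI-like activity index from a coherence raster.
import numpy as np

def ndai(coherence, gamma_ref):
    """Normalized-difference activity index (assumed form).

    coherence : 2D array of 12-day InSAR coherence, values in (0, 1].
    gamma_ref : scalar reference coherence derived from stable points.
    """
    return (gamma_ref - coherence) / (gamma_ref + coherence)

rng = np.random.default_rng(0)
coh = rng.uniform(0.05, 0.95, size=(100, 100))   # toy coherence image
index = ndai(coh, gamma_ref=0.9)
print(index.min(), index.max())   # near +1 where coherence is low (active)
```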
Figure 1. North Korean iron ore production [24].
Figure 2. The study area, Musan open-pit mine, at (a) small scale and (b) large scale (image from Google Earth).
Figure 3. Flow chart of data processing.
Figure 4. InSAR coherence dataset with a 12-day temporal baseline. Points represent SAR data, while lines are 12-day coherence data with a perpendicular baseline.
Figure 5. Selection process of stable points. (a) Time-averaged coherence image. (b) Standard deviation image. (c) Stable points in blue. The red polygon is the mining area, the region of interest.
Figure 6. Spatially averaged coherence values for stable points as a function of (a) master SAR acquisition date and (b) perpendicular baseline. Gray points were excluded from further analysis. The red line is a quadratic regression line (black points only).
Figure 7. NDAI as a function of coherence.
Figure 8. A pseudo-color scheme of surface activity from the RGB composite of annually averaged NDAI images (R: 2019, G: 2018, B: 2017).
Figure 9. The InSAR DEM change results. (a) Change in DEM from SRTM DEM (2000) to TanDEM-X DEM (2010–2015). (b) Index map showing the profile lines created in the dumping areas and the excavation site, overlain on a Google Earth image of 16 October 2013. (c,d) Elevation change in the west dumping area. (e,f) Elevation change in the east dumping area. (g,h) Elevation change in the excavation site.
Figure 10. Overall mining activities on the averaged NDAI maps during 2015–2020. Overall NDAI is low in the bare-rock-covered mining area and high in the surrounding vegetated area.
Figure 11. RGB composite of the annually averaged NDAI map (R: 2019, G: 2018, B: 2017). Square boxes are the regions of the denoted figures, and arrows are the viewing directions.
Figure 12. The west dumping area. (a) RGB composite of the annually averaged NDAI map (R: 2019, G: 2018, B: 2017). (b,c) Google Earth images taken on 7 October 2015 and 26 September 2019, respectively. Four active dumping sites, named WD1, WD2, WD3, and WD4, are drawn in orange boxes, and the image viewing directions are indicated by arrows.
Figure 13. West Dumping Site 1 (WD1). (a,b) Google Earth images taken on 7 October 2015 and 26 September 2019, respectively. (c) RGB composite of the annually averaged NDAI map (R: 2019, G: 2018, B: 2017) with several colored points whose 12-day NDAI graphs are shown in Figure 14. Dashed lines are drawn based on the 2015 Google Earth image, solid lines on the 2019 image.
Figure 14. The 12-day NDAI time-series graphs of WD1 showing the detailed activation or deactivation dates at the (a) blue (B), (b) cyan (C), (c) yellow (Y), and (d) red (R) points in Figure 13c. The red, green, and blue dots in the graphs indicate the data used for generating the RGB composite of the annually averaged NDAI images; black dots were not used.
Figure 15. West Dumping Site 2 (WD2). (a,b) Google Earth images taken on 7 October 2015 and 26 September 2019, respectively. (c) RGB composite of the annually averaged NDAI map (R: 2019, G: 2018, B: 2017). Solid lines are drawn based on the 2019 Google Earth image.
Figure 16. West Dumping Site 3 (WD3). (a,b) Google Earth images taken on 7 October 2015 and 26 September 2019, respectively. (c) RGB composite of the annually averaged NDAI map (R: 2019, G: 2018, B: 2017). Dashed lines are drawn based on the 2015 Google Earth image, solid lines on the 2019 image.
Figure 17. West Dumping Site 4 (WD4). (a,b) Google Earth images taken on 7 October 2015 and 26 September 2019, respectively. (c) RGB composite of the annually averaged NDAI map (R: 2019, G: 2018, B: 2017).
Figure 18. The east dumping area. (a) RGB composite of the annually averaged NDAI map (R: 2019, G: 2018, B: 2017). (b,c) Google Earth images taken on 7 October 2015 and 26 September 2019, respectively. Three active dumping sites, named ED1, ED2, and ED3, are drawn in orange boxes, and the image viewing directions are indicated by arrows.
Figure 19. East Dumping Site 1 (ED1). (a,b) Google Earth images taken on 7 October 2015 and 26 September 2019, respectively. (c) RGB composite of the annually averaged NDAI map (R: 2019, G: 2018, B: 2017) with several colored points whose 12-day NDAI graphs are shown in Figure 20. Solid lines are drawn based on the 2019 Google Earth image.
Figure 20. The 12-day NDAI time-series graphs of ED1 showing the detailed activation or deactivation dates at the (a) blue (B), (b) cyan (C), (c) white (W), (d) magenta (M), (e) yellow (Y), and (f) red (R) points in Figure 19c. The red, green, and blue dots in the graphs indicate the data used for generating the RGB composite of the annually averaged NDAI images; black dots were not used.
Figure 21. East Dumping Site 2 (ED2). (a,b) Google Earth images taken on 7 October 2015 and 26 September 2019, respectively. (c) RGB composite of the annually averaged NDAI map (R: 2019, G: 2018, B: 2017). Dashed lines are drawn based on the 2015 Google Earth image, solid lines on the 2019 image.
Figure 22. East Dumping Site 3 (ED3). (a,b) Google Earth images taken on 7 October 2015 and 26 September 2019, respectively. (c) RGB composite of the annually averaged NDAI map (R: 2019, G: 2018, B: 2017). Dashed lines are drawn based on the 2015 Google Earth image, solid lines on the 2019 image.
Figure 23. An excavation mining site. (a,b) Google Earth images taken on 7 October 2015 and 26 September 2019, respectively. (c) RGB composite of the annually averaged NDAI map (R: 2019, G: 2018, B: 2017) with several colored points whose 12-day NDAI graphs are shown in Figure 24. Dashed lines are drawn based on the 2015 Google Earth image, solid lines on the 2019 image.
Figure 24. The 12-day NDAI time-series graphs of the excavation site at the (a) blue (B), (b) cyan (C), (c) yellow (Y), and (d) red (R) points of Figure 23c. The red, green, and blue dots in the graphs indicate the data used for generating the RGB composite of the annually averaged NDAI images; black dots were not used.
14 pages, 1257 KiB  
Technical Note
NDFTC: A New Detection Framework of Tropical Cyclones from Meteorological Satellite Images with Deep Transfer Learning
by Shanchen Pang, Pengfei Xie, Danya Xu, Fan Meng, Xixi Tao, Bowen Li, Ying Li and Tao Song
Remote Sens. 2021, 13(9), 1860; https://doi.org/10.3390/rs13091860 - 10 May 2021
Cited by 32 | Viewed by 3991
Abstract
Accurate detection of tropical cyclones (TCs) is important to prevent and mitigate natural disasters associated with TCs. Deep transfer learning methods have advantages in detection tasks because they can further improve the stability and accuracy of the detection model. Therefore, on the basis of deep transfer learning, we propose a new detection framework of tropical cyclones (NDFTC) from meteorological satellite images by combining the deep convolutional generative adversarial network (DCGAN) and the You Only Look Once (YOLO) v3 model. The algorithm process of NDFTC consists of three major steps: data augmentation, a pre-training phase, and transfer learning. First, to improve the utilization of finite data, DCGAN is used as the data augmentation method to generate images that simulate TCs. Second, to extract the salient characteristics of TCs, the generated images obtained from DCGAN are input into the detection model YOLOv3 in the pre-training phase. Then, based on the network-based deep transfer learning method, we train the detection model on real TC images, with its initial weights transferred from the YOLOv3 trained on generated images. Training with real images helps to extract the universal characteristics of TCs, and using transferred weights as initial weights improves the stability and accuracy of the model. The experimental results show that the NDFTC has better performance, with an accuracy (ACC) of 97.78% and average precision (AP) of 81.39%, than YOLOv3, with an ACC of 93.96% and AP of 80.64%.
(This article belongs to the Special Issue Deep Learning and Computer Vision in Remote Sensing)
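The training order described in the abstract (pre-train on DCGAN-generated images, then fine-tune on real images starting from the transferred weights) can be sketched in a few lines of PyTorch. This is a schematic illustration, not the authors' code: a tiny stand-in network replaces YOLOv3 and the tensors are synthetic, so only the weight-transfer mechanics are shown.

```python
# Schematic sketch of the NDFTC training order with a stand-in detector.
import torch
import torch.nn as nn

class TinyDetector(nn.Module):          # stand-in for YOLOv3
    def __init__(self):
        super().__init__()
        self.backbone = nn.Sequential(nn.Conv2d(3, 8, 3, padding=1), nn.ReLU())
        self.head = nn.Conv2d(8, 5, 1)  # x, y, w, h, objectness per cell

    def forward(self, x):
        return self.head(self.backbone(x))

def train_step(model, images, targets):
    opt = torch.optim.Adam(model.parameters(), lr=1e-4)
    loss = nn.functional.mse_loss(model(images), targets)  # toy loss
    opt.zero_grad(); loss.backward(); opt.step()
    return loss.item()

generated = torch.randn(4, 3, 64, 64)   # stands in for DCGAN-generated TC images
real = torch.randn(4, 3, 64, 64)        # stands in for real satellite TC images
targets = torch.randn(4, 5, 64, 64)

model = TinyDetector()
train_step(model, generated, targets)               # 1. pre-training phase
torch.save(model.state_dict(), "pretrained.pt")

model = TinyDetector()
model.load_state_dict(torch.load("pretrained.pt"))  # 2. transfer the weights
train_step(model, real, targets)                    # 3. fine-tune on real images
```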
Figure 1. Overview of the proposed new detection framework of tropical cyclones (NDFTC).
Figure 2. (a) The change in loss function values of YOLOv3 when trained on real TC images; (b) the change in loss function values of NDFTC when trained on real TC images.
Figure 3. (a) The change in region average IoU of YOLOv3 when trained on real TC images; (b) the change in region average IoU of NDFTC when trained on real TC images.
Figure 4. Performance of NDFTC and other models in terms of ACC and AP: (a) ACC of NDFTC and other models; (b) AP of NDFTC and other models.
Figure 5. An example of TC detection results for Super Typhoon Marcus in 2018. (a) The detection result of YOLOv3; (b) the detection result of NDFTC.
22 pages, 8592 KiB  
Article
Canopy Parameter Estimation of Citrus grandis var. Longanyou Based on LiDAR 3D Point Clouds
by Xiangyang Liu, Yaxiong Wang, Feng Kang, Yang Yue and Yongjun Zheng
Remote Sens. 2021, 13(9), 1859; https://doi.org/10.3390/rs13091859 - 10 May 2021
Cited by 15 | Viewed by 4149
Abstract
The characteristic parameters of Citrus grandis var. Longanyou canopies are important when measuring yield and spraying pesticides. However, the feasibility of canopy reconstruction methods based on point clouds has not been confirmed for these canopies. Therefore, LiDAR point cloud data for C. grandis var. Longanyou were obtained to facilitate the management of groves of this species. A cloth simulation filter and a Euclidean clustering algorithm were then used to extract individual canopies. After calculating canopy height and width, canopy reconstruction and volume calculation were realized using six approaches: a manual method (MM) and five algorithms based on point clouds (convex hull, CH; convex hull by slices, CHBS; voxel-based, VB; alpha-shape, AS; alpha-shape by slices, ASBS). ASBS is an innovative algorithm that combines AS with slice-based optimization and can best approximate the actual canopy shape. The R2 values of VCH, VVB, VAS, and VASBS were all above 0.87; the volume with the highest accuracy was obtained from the ASBS algorithm, and the CH algorithm had the shortest computation time. In addition, a theoretical but preliminary system suitable for calculating the canopy volume of C. grandis var. Longanyou was developed, which provides a theoretical reference for the efficient and accurate realization of future functional modules such as precise plant protection, orchard obstacle avoidance, and biomass estimation.
(This article belongs to the Special Issue 3D Point Clouds for Agriculture Applications)
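The volume estimates named in the abstract are straightforward to prototype. The sketch below (an illustration on synthetic points, not the authors' implementation) shows the CH estimate via SciPy and a slice-based variant in the spirit of CHBS, which cuts the crown into N horizontal slices and sums per-slice hull volumes so concave crown shapes are followed more closely.

```python
# Convex hull volume of a toy "canopy" point cloud, plus a sliced variant.
import numpy as np
from scipy.spatial import ConvexHull

rng = np.random.default_rng(42)
crown = rng.uniform(-1.0, 1.0, size=(2000, 3)) * [1.5, 1.5, 1.5]  # toy (x, y, z)

hull = ConvexHull(crown)
print(f"canopy volume (CH): {hull.volume:.2f}")    # toy units

z = crown[:, 2]
edges = np.linspace(z.min(), z.max(), 11)          # N = 10 slices
v_sliced = sum(ConvexHull(crown[(z >= lo) & (z <= hi)]).volume
               for lo, hi in zip(edges[:-1], edges[1:]))
print(f"canopy volume (sliced, N=10): {v_sliced:.2f}")
```

For a convex toy cloud the two values nearly agree; on a real, irregular crown the sliced estimate is smaller because each slice hugs the local outline instead of bridging concavities.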
Figure 1. Photos of the test location and scenario: (a) aerial view of the grapefruit orchard; (b) ground-based view of the Citrus grandis var. Longanyou orchard.
Figure 2. A 3D LiDAR-based map of the Citrus grandis var. Longanyou orchard, colored to indicate reflection intensity and elevation; the white dotted line is the motion trajectory of the LiDAR scanning device.
Figure 3. Data processing flow of the 3D point cloud in the Citrus grandis var. Longanyou orchard. Note: CC, CloudCompare software; CSF, cloth simulation filter algorithm; MM, manual method; point cloud-based canopy reconstruction algorithms: CH, convex hull; CHBS, convex hull by slices; VB, voxel-based; AS, alpha-shape; ASBS, alpha-shape by slices.
Figure 4. Three-dimensional point cloud maps: (a) initial test area diagram; the inset shows a view looking down the orchard rows and to the left, where the red wavy dotted line indicates the topography of the Citrus grandis var. Longanyou orchard; (b) test area diagram after ground filtering, where some ground point clouds remain within the red dashed ellipse after filtering.
Figure 5. Clustering results for the experimental area, where each color represents one tree.
Figure 6. Schematic diagram of the manual measurement parameters of Citrus grandis var. Longanyou, where the blue curves show the ellipsoid model, the voids between the canopy and the theoretical model represent external space, and the red parts refer to internal holes and gaps between branches and leaves.
Figure 7. Canopy reconstruction renderings of tree No. 1 with (a) the manual method (MM) and five point cloud-based algorithms: (b) convex hull (CH); (c) convex hull by slices (CHBS); (d) voxel-based (VB); (e) alpha-shape (AS); and (f) alpha-shape by slices (ASBS). Note: X, Y, and Z are the three dimensions of the canopy: length, width, and height, respectively.
Figure 8. Diagrams showing canopy reconstruction of tree No. 1 with different α values: (a) α = 0.05; (b) 0.25; (c) 0.5; (d) 0.75. Note: X, Y, and Z are the three dimensions of the canopy: length, width, and height, respectively.
Figure 9. The volume values of 36 trees calculated by the six methods, indicated by different colors (in ascending order). Note: MM, manual method; CH, convex hull; CHBS, convex hull by slices; VB, voxel-based; AS, alpha-shape; ASBS, alpha-shape by slices.
Figure 10. Run times of the point cloud-based algorithms for crown reconstruction and volume calculation, ordered by the number of crown points. Canopy reconstruction algorithms: CH, convex hull; CHBS, convex hull by slices; VB, voxel-based; AS, alpha-shape; ASBS, alpha-shape by slices.
Figure 11. Scatter plots and linear fitting of crown volumes between the convex hull (CH) algorithm and different numbers of slices (N): (a) N = 10; (b) 20; (c) 30; (d) 40; (e) 50. (f) Processing time for different numbers of slices using the convex hull by slices (CHBS) algorithm.
Figure 12. Scatter plots and linear fitting of crown volumes between the alpha-shape (AS) algorithm and different numbers of slices (N): (a) N = 10; (b) 20; (c) 30; (d) 40; (e) 50. (f) Processing time for different numbers of slices using the alpha-shape by slices (ASBS) algorithm.
23 pages, 14071 KiB  
Article
Optical Remote Sensing Image Denoising and Super-Resolution Reconstructing Using Optimized Generative Network in Wavelet Transform Domain
by Xubin Feng, Wuxia Zhang, Xiuqin Su and Zhengpu Xu
Remote Sens. 2021, 13(9), 1858; https://doi.org/10.3390/rs13091858 - 10 May 2021
Cited by 52 | Viewed by 5159
Abstract
High spatial quality (HQ) optical remote sensing images are very useful for target detection, target recognition, and image classification. Due to the accuracy limits of imaging equipment and the influence of the atmospheric environment, HQ images are difficult to acquire, while low spatial quality (LQ) remote sensing images are very easy to acquire. Hence, denoising and super-resolution (SR) reconstruction are important, cost-effective technologies for improving the quality of remote sensing images. Most existing methods employ only denoising or only SR to obtain HQ images. However, due to the complex structure and large noise of remote sensing images, the quality of images obtained by a denoising or SR method alone cannot meet actual needs. To address these problems, a method of reconstructing HQ remote sensing images based on a Generative Adversarial Network (GAN), named Restoration Generative Adversarial Network with ResNet and DenseNet (RRDGAN), is proposed, which acquires better-quality images by incorporating denoising and SR into a unified framework. The generative network fuses a Residual Neural Network (ResNet) and a Dense Convolutional Network (DenseNet) in order to address the denoising and SR problems at the same time. Total variation (TV) regularization is then used to further enhance the edge details, and the idea of the Relativistic GAN is explored to make the whole network converge better. RRDGAN is implemented in the wavelet transform (WT) domain, since different frequency parts can be handled separately there. The experimental results on three different remote sensing datasets show the feasibility of our proposed method in acquiring HQ remote sensing images.
(This article belongs to the Section Remote Sensing Image Processing)
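The wavelet-domain processing mentioned in the abstract rests on the standard one-level 2D discrete wavelet transform: the image splits into a low-frequency approximation and three high-frequency detail sub-bands, so noise (mostly high-frequency) and structure can be treated separately before inverting the transform. A small sketch using PyWavelets (our illustration, not the authors' code):

```python
# One-level Haar decomposition and exact reconstruction of a stand-in image.
import numpy as np
import pywt

img = np.random.default_rng(0).uniform(0, 1, size=(256, 256))  # stand-in image

cA, (cH, cV, cD) = pywt.dwt2(img, "haar")   # approximation + H/V/D details
print(cA.shape)                             # (128, 128): each sub-band is half-size

# ... the sub-bands would be denoised / super-resolved here ...

rec = pywt.idwt2((cA, (cH, cV, cD)), "haar")
assert np.allclose(rec, img)                # perfect reconstruction if untouched
```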
Figure 1. Comparison of implementing our method in the WT domain versus the spatial domain.
Figure 2. Flowchart of the Haar wavelet transform.
Figure 3. The main steps of the generative part of RRDGAN.
Figure 4. The architecture of RRDGAN.
Figure 5. Low-resolution input with white Gaussian noise.
Figure 6. Low-resolution input with salt-and-pepper noise.
Figure 7. Comparison results among different methods on "Airplane"; the scale factor is 4.
Figure 8. Comparison results among different methods on "CircularFarmland"; the scale factor is 4.
Figure 9. Comparison results among different methods on "BaseballDiamond"; the scale factor is 4.
Figure 10. Comparison results among different methods on "Stadium"; the scale factor is 4.
Figure 11. Comparison results among different methods on "Railway"; the scale factor is 4.
Figure 12. The influence of the number of DBNs on PSNR performance.
Figure 13. The influence of the number of DBNs on training time.
Figure 14. The influence of batch normalization.
Figure 15. Wavelet transform schematic diagram of an image with salt-and-pepper noise.
Figure 16. Comparison results of applying RRDGAN (denoising only) in the WT domain and in the spatial domain.
Figure 17. Comparison results of implementing BM3D (or NLM) plus RRDGAN versus implementing RRDGAN only.
Figure 18. Comparison results with and without the relativistic loss.
Figure 19. Comparison results with and without the TV loss.
Figure 20. Comparison of the super-resolution part of our method with fractional Charlier moments using Set14.
Figure 21. Comparison of the super-resolution part of our method with Hahn moments using AVLetters.
21 pages, 12259 KiB  
Article
Develop of New Tools for 4D Monitoring: Case Study of Cliff in Apulia Region (Italy)
by Domenica Costantino, Francesco Settembrini, Massimiliano Pepe and Vincenzo Saverio Alfio
Remote Sens. 2021, 13(9), 1857; https://doi.org/10.3390/rs13091857 - 10 May 2021
Cited by 5 | Viewed by 2229
Abstract
The monitoring of areas at risk is a topic of great interest to the scientific community for preserving natural areas of particular environmental value. The present work aims to develop a suitable survey and analysis methodology in order to optimise multi-temporal processing. In particular, the work investigated the monitoring of cliffs in southern Apulia (Italy). To achieve this objective, different algorithms were tested and implemented in in-house software called ICV. The implementation involved the use of different calculation procedures, combined and aimed at the analysis of the phenomenon in question. The experimentation was validated by processing a series of datasets of a particular area within the investigated coastline.
(This article belongs to the Section Remote Sensing in Geology, Geomorphology and Hydrology)
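Multi-temporal change detection of the kind performed here reduces to measuring distances between point clouds from different survey epochs. The paper relies on the M3C2 algorithm (as in CloudCompare); the sketch below shows only the simpler nearest-neighbour cloud-to-cloud distance on synthetic data, to convey the idea rather than reproduce the method.

```python
# Simplified cloud-to-cloud distance between two survey epochs.
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(1)
epoch_1 = rng.uniform(0, 10, size=(5000, 3))             # cliff survey, epoch 1
epoch_2 = epoch_1 + rng.normal(0, 0.05, size=(5000, 3))  # epoch 2, small changes

tree = cKDTree(epoch_1)
dist, _ = tree.query(epoch_2, k=1)   # distance of each epoch-2 point to epoch 1

print(f"mean change: {dist.mean():.3f}, max change: {dist.max():.3f}")
# M3C2 refines this idea by measuring signed distances along local surface
# normals and attaching a confidence interval to each distance.
```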
Figure 1. Classification of cliffs (modified from Sunamura, 1983, 1992).
Figure 2. Study area: (a) identification of the area of interest and (b) orthophoto.
Figure 3. Overview of the Sant'Andrea coastline.
Figure 4. Thematic maps of geomorphological hazard zones.
Figure 5. Cliffs with (a) a sub-horizontal platform and (b) cavity formations and a leaf groove.
Figure 6. Workflow of the implemented method.
Figure 7. Identification of the five detailed areas.
Figure 8. Screenshot of the processes performed in the Photoscan software.
Figure 9. Visualization of the coarse registration process.
Figure 10. Result of the coarse registration process.
Figure 11. Detection of bounding boxes on the models.
Figure 12. Workflow of the ICV algorithm for fine registration.
Figure 13. Result of the point cloud comparison in fine registration.
Figure 14. Graphical representation of the two directions of maximum eigenvalues.
Figure 15. Surface calculation workflow.
Figure 16. Surveys performed: (a) terrestrial photogrammetric survey of 16 March 2019; (b) terrestrial photogrammetric surveys of 14 September 2019 and 16 June 2020; (c) TLS survey of 20 July 2019.
Figure 17. Coarse registration phase between the aero-photogrammetric and TLS data: (a) window for identifying duplicate points; (b) result of the process.
Figure 18. Choice of bounding boxes for fine registration.
Figure 19. Results of the M3C2 application.
Figure 20. Result of fine registration between TLS and terrestrial photogrammetry.
Figure 21. Calculation of the volume difference in the ICV software.
Figure 22. Analysis of the variation of distances, using the M3C2 algorithm, in CloudCompare.
Figure 23. Analysis of the variation of distances in the Geomagic software.
Figure 24. Calculation of the 2D volumes in CloudCompare.
18 pages, 9142 KiB  
Article
SNR-Based Water Height Retrieval in Rivers: Application to High Amplitude Asymmetric Tides in the Garonne River
by Pierre Zeiger, Frédéric Frappart, José Darrozes, Nicolas Roussel, Philippe Bonneton, Natalie Bonneton and Guillaume Detandt
Remote Sens. 2021, 13(9), 1856; https://doi.org/10.3390/rs13091856 - 10 May 2021
Cited by 13 | Viewed by 2955
Abstract
Signal-to-noise ratio (SNR) time series acquired by a geodetic antenna were analyzed to retrieve water heights during asymmetric tides on a narrow river using the Interference Pattern Technique (IPT) from Global Navigation Satellite System Reflectometry (GNSS-R). The dynamic SNR method was selected because the elevation rate of the reflecting surface during rising tides is high in the Garonne River under macrotidal conditions. A new process was developed to filter out the noise introduced on the reflected signal by the environmental conditions: the narrowness of the river compared to the size of the Fresnel areas, the vegetation on the river banks, and the presence of boats causing multiple reflections. This process involved the removal of multipeaks in the Lomb-Scargle periodogram (LSP) output and an iterative least square estimation (LSE) of the output heights. The results were evaluated against pressure-derived water heights. The best results were obtained using all GNSS bands (L1, L2, and L5) simultaneously: R = 0.99, ubRMSD = 0.31 m. We showed that the quality of the retrieved heights was consistent whatever the vertical velocity of the reflecting surface and was highly dependent on the number of visible satellites. The sampling period of our solution was 1 min with a 5-min moving window, and no tide models or fits were used in the inversion process. This highlights the potential of the dynamic SNR method to detect and monitor extreme events with GNSS-R, including those affecting inland waters such as flash floods.
(This article belongs to the Special Issue Radar Based Water Level Estimation)
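The heart of the IPT inversion is that detrended SNR, viewed as a function of sin(elevation), oscillates at a frequency proportional to the antenna height h above the reflecting surface (for wavelength λ, the ordinary frequency is f = 2h/λ). Because sin(elevation) is irregularly sampled, a Lomb-Scargle periodogram is used to find the dominant frequency. A hedged synthetic-data sketch (our illustration, not the authors' processing chain):

```python
# Retrieve a reflector height from a synthetic multipath SNR oscillation.
import numpy as np
from scipy.signal import lombscargle

lam = 0.1903                 # GPS L1 wavelength (m)
h_true = 4.0                 # true antenna height above the water (m)

rng = np.random.default_rng(2)
sin_e = np.sort(rng.uniform(0.1, 0.5, 500))        # irregular sampling in sin(e)
snr = np.cos(4 * np.pi * h_true / lam * sin_e)     # detrended SNR model

heights = np.linspace(1.0, 10.0, 2000)             # candidate antenna heights
omega = 4 * np.pi * heights / lam                  # angular frequencies to test
power = lombscargle(sin_e, snr, omega, precenter=True)

h_est = heights[np.argmax(power)]
print(f"retrieved height: {h_est:.2f} m")          # close to 4.00 m
```

The dynamic variant used in the paper additionally estimates the height rate ḣ, which matters when the surface moves quickly during the rising tide.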
Figure 1. Location of the study area. (a) Location of Podensac on the Garonne River; (b) the Gironde/Garonne/Dordogne estuary in southwest France; (c) drone image of the Garonne River taken by V. Marie (EPOC), showing the first waves of the tidal bore, its direction of propagation, and the platform location; (d) photo of the Garonne River from the platform with the GNSS antenna installed. The narrowness of the river and the vegetation on the riverbanks are visible in both images.
Figure 2. 1-Hz resampled pressure water level time series of the Garonne River at Podensac: (a) during the GNSS-R SNR acquisition, with red rectangles indicating tidal bores; (b) during four consecutive tide periods. These figures are representative of the entire pressure water level time series and do not show the tidal oscillations over a longer period, as the acquisition was performed during spring tides only, when the tidal range is maximum and tidal bores can form.
Figure 3. Flowchart of the dynamic SNR method with our improvements, adapted from [29]. (a) Processing chain with the addition of a two-step filtering of the dominant frequencies (in orange); (b,c) respective examples of a single-peak output and a multipeak output of the LSP for the same satellite track (G01). The red line marks the level of filtering, which depends on the parameter k (here k = 0.6), and f̃ ≈ 100 Hz.
Figure 4. Dominant frequencies extracted from GPS (green) and GLONASS (orange) satellites using the LSP, and river heights inverted with the dynamic SNR method (blue) compared to pressure water levels (red). (a,b) f̃ and h from the raw LSP output, respectively; (c) frequencies filtered out after multipeak rejection with parameter k = 0.6 and iterative LSE; (d) concordant time series of water levels estimated with iterative LSE. Grey areas are masks due to tidal bore occurrence (17 h) and data gaps (21–22 h).
Figure 5. Final results using the adapted dynamic SNR inversion. (a) Comparison of h calculated using L1 only (orange), L2 only (green), and L1 + L2 + L5 (blue) frequencies for GPS and GLONASS satellites with pressure water levels (red); (b) output ḣ with the L1 + L2 + L5 bands; (c) number of GPS and GLONASS satellites used for the calculation of h and ḣ. The value of ḣ was derived from the relative antenna height h; thus it was negative during rising tides, as the relative antenna height decreased.
Figure 6. Statistical results depending on the number of satellites and the vertical velocity. (a) R and ubRMSD computed according to the number of satellites used for the inversion of h and ḣ; (b) the same according to vertical velocity class (intervals computed with a range of 1 × 10⁻⁴ m·s⁻¹).
21 pages, 8833 KiB  
Article
Airborne LiDAR-Derived Digital Elevation Model for Archaeology
by Benjamin Štular, Edisa Lozić and Stefan Eichert
Remote Sens. 2021, 13(9), 1855; https://doi.org/10.3390/rs13091855 - 10 May 2021
Cited by 53 | Viewed by 7760
Abstract
The use of topographic airborne LiDAR data has become an essential part of archaeological prospection, and the need for an archaeology-specific data processing workflow is well known. It is therefore surprising that little attention has been paid to the key element of processing: an archaeology-specific DEM. Accordingly, the aim of this paper is to describe an archaeology-specific DEM in detail, provide a tool for its automatic precision assessment, and determine the appropriate grid resolution. We define an archaeology-specific DEM as a subtype of DEM that is interpolated from ground points, buildings, and four morphological types of archaeological features. We introduce a confidence map (a QGIS plug-in) that assigns a confidence level to each grid cell. This is primarily used to attach a confidence level to each archaeological feature, which is useful for detecting data bias in archaeological interpretation. Confidence mapping is also an effective tool for identifying the optimal grid resolution for specific datasets. Beyond archaeological applications, the confidence map provides clear criteria for segmentation, which is one of the unsolved problems of DEM interpolation. All of these are important steps towards the general methodological maturity of airborne LiDAR in archaeology, which is our ultimate goal.
(This article belongs to the Special Issue Perspectives on Digital Elevation Model Applications)
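The core idea of the confidence map can be prototyped simply: bin the ground points into the DEM grid and map each cell's point density to a confidence level. The actual plug-in derives its decision rules from a CART analysis of interpolation errors (cf. Figure 3 below); the density thresholds in this sketch are illustrative assumptions only, not the published ones.

```python
# Toy per-cell confidence map driven by local ground point density.
import numpy as np

def confidence_map(points_xy, extent, cell=0.5):
    """Bin ground points into a grid and map density to levels 1 (low)-6 (high)."""
    xmin, xmax, ymin, ymax = extent
    nx, ny = int((xmax - xmin) / cell), int((ymax - ymin) / cell)
    density, _, _ = np.histogram2d(points_xy[:, 0], points_xy[:, 1],
                                   bins=[nx, ny],
                                   range=[[xmin, xmax], [ymin, ymax]])
    density /= cell ** 2                          # points per square metre
    thresholds = [0.5, 1, 2, 4, 8]                # assumed breakpoints (pnts/m^2)
    return np.digitize(density, thresholds) + 1   # confidence levels 1..6

pts = np.random.default_rng(3).uniform(0, 100, size=(40_000, 2))  # toy points
conf = confidence_map(pts, extent=(0, 100, 0, 100))
print(np.unique(conf, return_counts=True))
```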
Figure 1. An outline of the differences between DEMs, DFMs, DTMs, and DSMs.
Figure 2. Test data: (a) location of test sites: AT 46°53′05″N, 15°30′48″E; SI1 45°40′21″N, 14°11′40″E; SI2 45°40′56″N, 14°12′25″E; ES 42°44′30″N, 8°33′02″E; (b) test site AT; (c) test sites SI1 and SI2; (d) test site ES. (b–d) shown at 100% crop size; left: digital orthophoto; middle: enhanced visualization of the manually processed DFM; right: archaeological features (blue: embedded features; green: partially embedded features; red: standing features). DFMs are visualized using sky view factor (RVT 2.2, default settings; after [1], Figure 1; CC-BY 4.0).
Figure 3. Classification tree resulting from a CART analysis of absolute errors for the 0.5 m DFM. The model is read from top to bottom to the terminal nodes, which predict the confidence level of the selected variables (one: lowest confidence; six: highest confidence).
Figure 4. Processing pipeline diagram for a 0.5 m DFM confidence map.
Figure 5. Morphological types of archaeological features detectable via DFM (bottom-right figure after [22], Figure 8; CC-BY 4.0).
Figure 6. Test site AT, the effects of reducing ground point density (left) and reducing DFM resolution (right). The ground point density and DEM/DFM resolution are indicated in each image. The processing pipeline is the same for all instances (point cloud processing according to [1]; OK interpolation; sky view factor visualization with default settings in RVT v.2.2). However, to demonstrate the effect of reduced point density on ground point filtering, the data in the left column lack manual reclassification.
Figure 7. Ground point density for each test site (in pnts/m²).
Figure 8. DFM confidence maps for each test site (higher is better).
Figure 9. Visualizations of DFMs (left) and DFM confidence maps (right) for selected areas at 200% crop size. See Figure 8 for the locations. The processing pipeline is the same for all instances (point cloud processing according to [1]; TLI interpolation; 0.5 m DFM resolution; sky view factor visualization with default settings in RVT v.2.2).
21 pages, 60660 KiB  
Article
Small Object Detection in Remote Sensing Images with Residual Feature Aggregation-Based Super-Resolution and Object Detector Network
by Syed Muhammad Arsalan Bashir and Yi Wang
Remote Sens. 2021, 13(9), 1854; https://doi.org/10.3390/rs13091854 - 10 May 2021
Cited by 50 | Viewed by 7973
Abstract
This paper deals with detecting small objects in remote sensing images from satellites or other aerial vehicles by utilizing image super-resolution for resolution enhancement in a deep-learning-based detection method. It provides a rationale for image super-resolution for small objects by improving the current super-resolution (SR) framework, incorporating a cyclic generative adversarial network (GAN) and residual feature aggregation (RFA) to improve detection performance. The novelty of the method is threefold: first, the proposed framework is independent of the final object detector used, i.e., YOLOv3 could be replaced with Faster R-CNN or any other object detector; second, a residual feature aggregation network is used in the generator, which significantly improves detection performance as the RFA network captures complex features; and third, the whole network is transformed into a cyclic GAN. The image super-resolution cyclic GAN with RFA and YOLO as the detection network is termed SRCGAN-RFA-YOLO and is compared with other methods in terms of detection accuracy. Rigorous experiments on both satellite images and aerial images (the ISPRS Potsdam, VAID, and Draper Satellite Image Chronology datasets) were performed, and the results showed that detection performance increases when super-resolution methods are used for spatial resolution enhancement; for an IoU of 0.10, an AP of 0.7867 was achieved at a scale factor of 16.
(This article belongs to the Special Issue Convolutional Neural Networks for Object Detection)
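The residual feature aggregation idea (every residual block's output is kept and fused by a 1 × 1 convolution rather than only chained) is easy to express in PyTorch. The sketch below follows the description given for Figure 2; layer counts and channel sizes are illustrative, not the paper's exact configuration.

```python
# Minimal residual feature aggregation (RFA) block.
import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    def __init__(self, ch):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(ch, ch, 3, padding=1))

    def forward(self, x):
        return x + self.body(x)               # standard residual connection

class RFABlock(nn.Module):
    def __init__(self, ch, n_blocks=4):
        super().__init__()
        self.blocks = nn.ModuleList([ResidualBlock(ch) for _ in range(n_blocks)])
        self.fuse = nn.Conv2d(ch * n_blocks, ch, kernel_size=1)  # 1x1 aggregation

    def forward(self, x):
        feats, out = [], x
        for block in self.blocks:
            out = block(out)
            feats.append(out)                 # keep every block's output
        return x + self.fuse(torch.cat(feats, dim=1))

y = RFABlock(ch=64)(torch.randn(1, 64, 32, 32))
print(y.shape)                                # torch.Size([1, 64, 32, 32])
```

Compared with a plain chain of residual blocks, the 1 × 1 fusion gives earlier local features a direct path to the block output, which is the property credited with the detection gains.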
Figure 1. The pipeline of the EDSR super-resolution architecture with four residual blocks: green represents the convolution layers, yellow the normalization layer, blue the ReLU activation layer, and brown the pixel rearrangement layer. The pipeline's input, shown in grey, is labeled X_LR (a low-resolution image), while the output, shown in black, is the high-resolution image Y_HR.
Figure 2. A super-resolution pipeline using residual feature aggregation (RFA) blocks. Color coding is the same as in Figure 1. The outputs of all residual blocks (shown in the grey box) are aggregated at the output using a 1 × 1 convolution layer.
Figure 3. An illustration of the results provided by EDSR and EDSR-RFA: the high-resolution (HR) image at the top left has a ground resolution of 5 cm/pixel, while the LR version was generated using a scale factor of 8, corresponding to a ground resolution of 40 cm/pixel; image super-resolution at 5 cm/pixel using bicubic interpolation, EDSR with four residual blocks, and EDSR with residual feature aggregation (EDSR-RFA). Top: full 512 × 512 pixel image; bottom: zoomed 100 × 100 pixel image showing a black car.
Figure 4. Comparing SR images generated by EDSR and EDSR-RFA: the HR image at the top left has a ground resolution of 5 cm/pixel, while the LR version was generated using a scale factor of 16, corresponding to a ground resolution of 80 cm/pixel; image super-resolution at 5 cm/pixel using bicubic interpolation, EDSR, and EDSR-RFA. Top: full 512 × 512 pixel image; bottom: zoomed 100 × 100 pixel image showing a black car.
Figure 5. An illustration of the results of changing the number of blocks and the block size for scale factor 16.
Figure 6. SR-GAN with the EDSR-RFA generator at the top and the discriminator at the bottom. The layers include convolution (RFA block, green), normalization (RFA block, yellow), ReLU activation (RFA block, blue), a 1 × 1 reduction layer (light blue), and a pixel rearrangement layer (brown). Output Y_HR of the generator is the input X_HRD to the discriminator.
Figure 7. The cyclic approach: GAN-EDSR-RFA. Gen (HR) and its discriminator are shown on top, and Gen (LR) for cyclic feedback is shown in the lower half of the figure.
Figure 8. The network architecture for super-resolution SRCGAN with RFA and the YOLOv3 detector (SRCGAN-RFA-YOLO).
Figure 9. Comparison of the results for SR with a scale factor of 16 using EDSR-RFA, SR-CGAN, and SRGAN-RFA-YOLO. The LR version is 80 cm/pixel, while the HR image is 5 cm/pixel. (a) SR images of the different methods and their image quality metrics in terms of PSNR and SSIM; (b) zoomed sections at two different locations.
Figure 10. Performance evaluation of the various methods used in this study based on AP and precision/recall curves with a YOLOv3 detector and an IoU of 0.10. (a) AP of the various methods during training; (b) precision versus recall curves.
Figure 11. Object reconstruction using SRCGAN-RFA-YOLO with a scale factor of 8 on the Draper Satellite Image Chronology (top) and VAID (bottom) datasets, using the parameters learned from the Potsdam dataset.
Figure 12. Detection examples using YOLOv3 as the detector network.
Figure 13. Detection on an independent dataset for an IoU of 0.10. Green: true positive; red: false positive; blue: false negative.
28 pages, 11976 KiB  
Article
Sequence Image Datasets Construction via Deep Convolution Networks
by Xing Jin, Ping Tang and Zheng Zhang
Remote Sens. 2021, 13(9), 1853; https://doi.org/10.3390/rs13091853 - 10 May 2021
Cited by 2 | Viewed by 2888
Abstract
Remote-sensing time-series datasets are significant for global change research and a better understanding of the Earth. However, remote-sensing acquisitions often provide sparse time series due to sensor resolution limitations and environmental factors such as cloud noise for optical data. Image transformation is the method often used to deal with this issue. This paper considers deep convolution networks that learn the complex mapping between sequence images for the construction of sequence image datasets: the adaptive filter generation network (AdaFG), the convolutional long short-term memory network (CLSTM), and the cycle-consistent generative adversarial network (CyGAN). The AdaFG network uses separable 1D convolution kernels instead of 2D kernels to capture the spatial characteristics of input sequence images and is trained end-to-end on sequence images. The CLSTM network can map between different images using the state information of multiple time-series images. The CyGAN network can map an image from a source domain to a target domain without additional information. Our experiments, performed with unmanned aerial vehicle (UAV) and Landsat-8 datasets, show that these deep convolution networks are effective in producing high-quality time-series image datasets and that data-driven deep convolution networks can better simulate complex and diverse nonlinear data information.
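The saving behind separable kernels is that a 2D kernel formed as the outer product of a vertical and a horizontal 1D kernel can be applied as two cheap 1D convolutions (2k weights instead of k² per output pixel; AdaFG predicts such 1D kernel pairs adaptively). A small numpy/scipy check of this equivalence, as an illustration only, not the paper's network:

```python
# Separable 2D convolution: two 1D passes equal one outer-product 2D kernel.
import numpy as np
from scipy.signal import convolve2d

rng = np.random.default_rng(4)
img = rng.uniform(size=(64, 64))
kv = rng.uniform(size=13)                # vertical 1D kernel (k = 13)
kh = rng.uniform(size=13)                # horizontal 1D kernel

k2d = np.outer(kv, kh)                   # equivalent 13 x 13 2D kernel

full_2d = convolve2d(img, k2d, mode="valid")
separable = convolve2d(convolve2d(img, kv[:, None], mode="valid"),
                       kh[None, :], mode="valid")

assert np.allclose(full_2d, separable)   # identical results, fewer weights
```

The kernel size 13 above matches one of the sizes (11, 13, 15) compared in Figure 14 below.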
Full article ">Figure 1
<p>Location of unmanned aerial vehicle (UAV) dataset: 1, 2, 3 band composite.</p>
Full article ">Figure 2
<p>Location of Landsat-8 dataset: 5, 4, 3 band composite.</p>
Full article ">Figure 3
<p>Overview of AdaFG architecture: (<b>a</b>–<b>d</b>) represent feature extracting part, feature expanding part, separable convolution part, and backpropagation part, respectively.</p>
Full article ">Figure 4
<p>Cell structure of CLSTM.</p>
Full article ">Figure 5
<p>Structure diagram of CyGAN: (<b>a</b>,<b>b</b>) show forward and backward cycle-consistent loss.</p>
Full article ">Figure 6
<p>Visual effect of training (<math display="inline"><semantics> <mrow> <msub> <mi>I</mi> <mi>t</mi> </msub> </mrow> </semantics></math>) and testing (<math display="inline"><semantics> <mrow> <msubsup> <mi>I</mi> <mi>t</mi> <mo>′</mo> </msubsup> </mrow> </semantics></math> ) images using different deep convolution networks in the first aspect: (<b>A</b>) UAV and (<b>B</b>) Landsat-8 images; (<b>a</b>–<b>c</b>) show the visual effect using AdaFG, CLSTM, and CyGAN network, respectively.</p>
Full article ">Figure 7
<p>Strategy for constructing sequence datasets in the second and third aspect: (<b>A</b>) available UAV images from 2017 to 2019 and Landsat-8 images from 2013 to 2015; (<b>B</b>) one construction strategy with single network; (<b>C</b>) one construction strategy with multiple networks.</p>
Full article ">Figure 8
<p>(<b>a</b>) Visual effect and (<b>b</b>) pixel error between generated result and reference image using different deep convolution networks: (<b>A</b>) UAV and (<b>B</b>) Landsat-8 datasets.</p>
Full article ">Figure 9
<p>Spectral curves between generated result and reference image using different deep convolution networks at different coordinates: (<b>A</b>) UAV and (<b>B</b>) Landsat-8 datasets.</p>
Full article ">Figure 10
<p>Generated results of UAV images using AdaFG network from 2017 to 2019 according to construction strategy in <a href="#remotesensing-13-01853-t003" class="html-table">Table 3</a>: existing images and generated results in (<b>A</b>) 2019 sequence, (<b>B</b>) 2018 sequence, and (<b>C</b>) 2017 sequence.</p>
Full article ">Figure 11
<p>Generated results of Landsat-8 images using CyGAN network from 2013 to 2015 according to construction strategy in <a href="#remotesensing-13-01853-t004" class="html-table">Table 4</a>: existing images and generated results in (<b>A</b>) 2015 sequence, (<b>B</b>) 2014 sequence, and (<b>C</b>) 2013 sequence.</p>
Full article ">Figure 12
<p>Generated results of UAV images using AdaFG, CLSTM, and CyGAN network from 2017 to 2019 according to construction strategy in <a href="#remotesensing-13-01853-t005" class="html-table">Table 5</a>: existing images and generated results in (<b>A</b>) 2019 sequence, (<b>B</b>) 2018 sequence, and (<b>C</b>) 2017 sequence.</p>
Full article ">Figure 13
<p>Generated results of Landsat-8 images using AdaFG, CLSTM, and CyGAN network from 2013 to 2015 according to construction strategy in <a href="#remotesensing-13-01853-t006" class="html-table">Table 6</a>: existing images and generated results in (<b>A</b>) 2015 sequence, (<b>B</b>) 2014 sequence, and (<b>C</b>) 2013 sequence.</p>
Full article ">Figure 14
<p>(<b>a</b>) Initial image, (<b>b</b>–<b>d</b>) Visual effect and (<b>e</b>–<b>g</b>) pixel error between the generated results and reference image using separable convolution kernel sizes 11, 13, and 15: (<b>A</b>) UAV and (<b>B</b>) Landsat-8 datasets.</p>
Full article ">Figure 15
<p>(<b>a</b>) Initial image, (<b>b</b>–<b>d</b>) Visual effect and (<b>e</b>–<b>g</b>) pixel error between the generated result and reference image using stacks of 1 × 1, 2 × 2, and 3 × 3 CLSTM network layers: (<b>A</b>) UAV and (<b>B</b>) Landsat-8 datasets.</p>
Full article ">Figure 16
<p>(<b>a</b>) Initial image, (<b>b</b>–<b>d</b>) Visual effect and (<b>e</b>–<b>g</b>) pixel error between the generated result and reference image using different proportional hyper-parameters: (<b>A</b>) UAV and (<b>B</b>) Landsat-8 datasets.</p>
Full article ">
17 pages, 4135 KiB  
Technical Note
Day and Night Clouds Detection Using a Thermal-Infrared All-Sky-View Camera
by Yiren Wang, Dong Liu, Wanyi Xie, Ming Yang, Zhenyu Gao, Xinfeng Ling, Yong Huang, Congcong Li, Yong Liu and Yingwei Xia
Remote Sens. 2021, 13(9), 1852; https://doi.org/10.3390/rs13091852 - 10 May 2021
Cited by 16 | Viewed by 6384
Abstract
The formation and evolution of clouds are associated with their thermodynamic and microphysical processes. Previous studies have collected images with ground-based cloud observation equipment to provide important information on cloud characteristics. However, most of this equipment cannot observe continuously through day and night, and its field of view (FOV) is limited. To address these issues, this work proposes a day-and-night cloud detection approach integrated into a self-made thermal-infrared (TIR) all-sky-view camera. The TIR camera consists of a high-resolution thermal microbolometer array and a fish-eye lens with a FOV larger than 160°. In addition, a detection scheme was designed to directly subtract the contamination of the atmospheric TIR emission from the entire infrared image of such a large FOV; the corrected image was then used for cloud recognition. The performance of this scheme was validated by comparing the cloud fractions retrieved from the infrared channel with those from the visible channel and from manual observation. The results indicate that the instrument can obtain accurate cloud fractions from the observed infrared images, and the TIR all-sky-view camera developed in this work exhibits good feasibility for long-term, continuous cloud observation.
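A toy version of that detection step is sketched below: a modeled clear-sky TIR emission image, brighter towards the horizon where the atmospheric path is longer, is subtracted from the observation, and the residual is thresholded to obtain a cloud mask and cloud fraction. All values, the zenith-angle geometry, and the threshold are invented for the sketch; the paper fits its clear-sky model to measured profiles.

```python
import numpy as np

def zenith_angle_map(size):
    """Per-pixel zenith angle for an idealized fish-eye image (0 at centre, ~80 deg at edge)."""
    y, x = np.indices((size, size))
    r = np.hypot(x - size / 2, y - size / 2) / (size / 2)
    return np.clip(r, 0, 1) * np.deg2rad(80)

size = 240
theta = zenith_angle_map(size)

# Synthetic clear-sky emission: downwelling radiance grows towards the horizon
# (longer atmospheric path), mimicking measured clear-sky profiles.
clear_sky = 40 + 60 * (1 - np.cos(theta))

# Synthetic observation: clear sky plus a warm cloud patch and sensor noise.
rng = np.random.default_rng(1)
obs = clear_sky + rng.normal(0, 1.0, clear_sky.shape)
obs[60:120, 80:170] += 25             # cloud: extra emission above the clear-sky model

residual = obs - clear_sky            # remove the atmospheric TIR background
cloud_mask = residual > 5.0           # threshold picks up the emission excess

print(f"cloud fraction = {cloud_mask.mean():.3f}")
```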
Full article ">Figure 1
<p>(<b>a</b>) All-sky camera 200 (ASC-200) and (<b>b</b>) ASC-200 in the meteorological observation station.</p>
Full article ">Figure 2
<p>(<b>a</b>) Thermal-infrared all-sky-view image and (<b>b</b>) visible image at the same time.</p>
Full article ">Figure 3
<p>Clear-sky downwelling spectrum radiance (red line) and atmospheric transmittance (blue line) versus wavelength, simulated using MODTRAN for the zenith path to space through a 1976 US Standard Atmospheric model.</p>
Full article ">Figure 4
<p>Variation of atmospheric transmittance with sensor zenith angle.</p>
Full article ">Figure 5
<p>(<b>a</b>) Infrared raw clear-sky image, (<b>b</b>) measured profiles of the gray value of infrared clear-sky images along one azimuth position at different times on March 12th, 2019, and (<b>c</b>) measured profiles of the gray value of infrared images under different sky conditions.</p>
Full article ">Figure 6
<p>(<b>a</b>) Measured and modeled profile of the pixel value of clear sky along one azimuth; (<b>b</b>) modeled infrared clear sky image; (<b>c</b>) the framework of the proposed method.</p>
Full article ">Figure 7
<p>(<b>a1</b>–<b>a4</b>) are the original infrared sky images, and (<b>b1</b>–<b>b4</b>) show the segmentation results of clouds detection, indicating the clouds (in white) and clear sky (in blue).</p>
Full article ">Figure 8
<p>Correlation of infrared cloud fraction estimation with (<b>a</b>) manual and (<b>b</b>) visible observation results. The dotted line in the figure represents the 1:1 line, and the solid line represents the fitting line.</p>
Full article ">Figure 9
<p>Distribution of cloud fraction difference between the infrared and visible (IR-VIS), and the infrared and manual observations (IR-Manual).</p>
Full article ">Figure 10
<p>Comparison of the cloud detection results of infrared and visible sky image with cirrus in the sky. (<b>a1</b>,<b>b1</b>) show the visible and the TIR cloud images, and (<b>a2</b>,<b>b2</b>) show the results of the cloud identification of the two cloud images.</p>
Full article ">Figure 11
<p>Comparison of the cloud detection results of infrared and visible images with hazy weather conditions. (<b>a1</b>,<b>b1</b>) show the visible and TIR cloud images, respectively. (<b>a2</b>,<b>b2</b>) are the results of the cloud identification of the two cloud images.</p>
Full article ">
19 pages, 2747 KiB  
Article
Monitoring Terrestrial Water Storage Changes with the Tongji-Grace2018 Model in the Nine Major River Basins of the Chinese Mainland
by Zhiwei Chen, Xingfu Zhang and Jianhua Chen
Remote Sens. 2021, 13(9), 1851; https://doi.org/10.3390/rs13091851 - 10 May 2021
Cited by 14 | Viewed by 3259
Abstract
Data from the Gravity Recovery and Climate Experiment (GRACE) satellite mission can be used to monitor changes in terrestrial water storage (TWS). In this study, we exploit the TWS observations from a new temporal gravity field model, Tongji-Grace2018, which was developed using an optimized short-arc approach at Tongji University. We analyzed the changes in the TWS and groundwater storage (GWS) in each of the nine major river basins of the Chinese mainland from April 2002 to August 2016, using Tongji-Grace2018, the Global Land Data Assimilation System (GLDAS) hydrological model, in situ observations, and additional auxiliary data (such as precipitation and temperature). Our results indicate that the TWS of the Songliao, Yangtze, Pearl, and Southeastern River Basins are all increasing, with the most drastic TWS growth occurring in the Southeastern River Basin. The TWS of the Yellow, Haihe, Huaihe, and Southwestern River Basins are all decreasing, with the most drastic TWS loss occurring in the Haihe River Basin. The TWS of the Continental River Basin has remained largely unchanged over time. With the exception of the Songliao and Pearl River Basins, the GWS results produced by the Tongji-Grace2018 model are consistent with the in situ observations. The correlation coefficients between the Tongji-Grace2018 results and the in situ observations for the Yellow, Huaihe, Yangtze, Southwestern, and Continental River Basins all exceed 0.710. Overall, the GWS results for the Songliao, Yellow, Haihe, Huaihe, Southwestern, and Continental River Basins all exhibit a downward trend, with the most severe groundwater loss occurring in the Haihe and Huaihe River Basins. By contrast, the Yangtze and Southeastern River Basins both have upward-trending modeled and measured GWS values. This study demonstrates the effectiveness of the Tongji-Grace2018 model for the reliable estimation of TWS and GWS changes on the Chinese mainland and may contribute to the management of available water resources.
(This article belongs to the Special Issue Terrestrial Hydrology Using GRACE and GRACE-FO)
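The basin trends above come from fitting monthly TWS anomaly series; a generic version of that step is sketched below, fitting a linear trend plus annual and semiannual harmonics by least squares, as is standard in GRACE time-series analysis. The series, dates, and trend value are synthetic.

```python
import numpy as np

# Synthetic monthly TWS anomaly series, ~2002.3 to 2016.7 (cm equivalent water height).
t = np.arange(2002.25, 2016.67, 1 / 12)
rng = np.random.default_rng(2)
tws = -0.8 * (t - t[0]) + 3 * np.sin(2 * np.pi * t) + rng.normal(0, 1, t.size)

# Design matrix: offset, linear trend, annual and semiannual sine/cosine pairs.
w = 2 * np.pi
A = np.column_stack([
    np.ones_like(t), t - t.mean(),
    np.sin(w * t), np.cos(w * t),
    np.sin(2 * w * t), np.cos(2 * w * t),
])
coef, *_ = np.linalg.lstsq(A, tws, rcond=None)
print(f"fitted trend: {coef[1]:.2f} cm/yr (true value -0.80)")
```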
Figure 1. The nine major river basins of China: (1) Songliao River Basin (SLRB); (2) Yellow River Basin (YERB); (3) Haihe River Basin (HARB); (4) Huaihe River Basin (HURB); (5) Yangtze River Basin (YARB); (6) Pearl River Basin (PERB); (7) Southeastern River Basin (SERB); (8) Southwestern River Basin (SWRB); (9) Continental River Basin (CORB). The black dots represent the locations of the in situ observations.
Figure 2. The spatial variations in (a) the monthly average precipitation and (b) the monthly average temperature on the Chinese mainland.
Figure 3. The workflow of this study.
Figure 4. Spatial variations in the TWS in China for different combinations of the Duan de-striping method and the Gaussian filter radius: (a) the Tongji-Grace2018 model without any filter; (b) with the 150 km Gaussian filter only; (c–f) with the Duan de-striping method and Gaussian filter radii of 150, 200, 250, and 300 km, respectively.
Figure 5. The TWS time series for each river basin. The red curve represents the regional TWS changes (Tongji-Grace2018 model), the purple curve the 13-month moving average of the TWS changes, the black straight line the TWS trend, the green curve the SWS (SMS + SWES; GLDAS hydrological model), and the blue columns the monthly rainfall. The grey columns show the annual rainfall anomaly (annual rainfall minus the average yearly rainfall), and the yellow columns the annual temperature anomaly (annual temperature minus the average yearly temperature).
Figure 6. Comparison of the GWS results from the GRACE models and from the in situ observations in each river basin; ρ is the correlation coefficient between the measured and modeled GWS values from January 2005 to August 2016.
21 pages, 7309 KiB  
Article
Towards the Spectral Mapping of Plastic Debris on Beaches
by Jenna A. Guffogg, Mariela Soto-Berelov, Simon D. Jones, Chris J. Bellman, Jennifer L. Lavers and Andrew K. Skidmore
Remote Sens. 2021, 13(9), 1850; https://doi.org/10.3390/rs13091850 - 10 May 2021
Cited by 16 | Viewed by 4795
Abstract
Floating and washed-ashore marine plastic debris (MPD) is a growing environmental challenge. It has become evident that secluded locations, including the Arctic, the Antarctic, and remote islands, are being impacted by plastic pollution generated thousands of kilometers away. Optical remote sensing of MPD is an emerging field that can aid in monitoring remote environments where in-person observation and data collection are not always feasible. Here we evaluate MPD spectral features in the visible to shortwave infrared regions for detecting varying quantities of MPD that have accumulated on beaches, using a spectroradiometer. Measurements were taken from in situ MPD accumulations ranging from 0.08% to 7.94% surface coverage. Our results suggest that spectral absorption features at 1215 nm and 1732 nm are useful for detecting varying abundance levels of MPD in a complex natural environment; however, other absorption features at 931 nm, 1045 nm, and 2046 nm could not detect in situ MPD. The reflectance of some in situ MPD accumulations was statistically different from that of samples containing only organic debris and sand at between 1.56% and 7.94% surface cover; however, other samples with similar surface cover did not have reflectance statistically different from samples containing no MPD. Although MPD is detectable against a background of sand and organic beach debris, a clear relationship between the surface cover of MPD and the strength of the key absorption features could not be established. Additional research is needed to advance our understanding of the factors, such as the type of MPD assemblage, that contribute to the bulk reflectance of MPD-contaminated landscapes.
(This article belongs to the Section Ocean Remote Sensing)
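The spectral feature analysis described here rests on continuum removal; the sketch below shows the standard computation of continuum-removed reflectance, feature depth, and feature area for a synthetic absorption feature near 1215 nm. The wavelength grid, reflectance values, and continuum endpoints are all invented for illustration.

```python
import numpy as np

# Synthetic reflectance with a Gaussian absorption feature near 1215 nm.
wl = np.arange(1100, 1351, 5.0)                  # wavelength grid (nm)
refl = 0.45 + 1e-4 * (wl - 1100)                 # sloping background
refl -= 0.08 * np.exp(-0.5 * ((wl - 1215) / 20) ** 2)

# Continuum: a straight line between the feature's shoulder endpoints.
lo, hi = 1150.0, 1280.0
i_lo, i_hi = np.searchsorted(wl, [lo, hi])
continuum = np.interp(wl, [wl[i_lo], wl[i_hi]], [refl[i_lo], refl[i_hi]])

cr = refl / continuum                            # continuum-removed reflectance
band_depth = 1 - cr                              # depth at each channel
inside = slice(i_lo, i_hi + 1)
print(f"feature depth = {band_depth[inside].max():.3f}")
print(f"feature area  = {band_depth[inside].sum():.3f}")  # summed band depths
```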
Figure 1. The Cocos (Keeling) Islands (North Keeling atoll not shown). West and Home Islands have permanent settlements, while Direction Island is a popular tourist location. South Island is more remote and is visited infrequently. Field sites are indicated with stars, barrier reefs are shown in dark grey, and areas of shallow water in light grey.
Figure 2. Marine plastic debris (MPD) on the Cocos (Keeling) Islands, February 2020: (A) northern ocean-facing beach on West Island; (B) southern ocean-facing beach on South Island; (C) a quadrat setup with an artificially created MPD surface cover of 25%.
Figure 3. Plastic items within quadrats were digitised to determine the apparent surface coverage of plastic debris (yellow outline).
Figure 4. Spectral feature analysis: (A) continuum endpoints used to determine the continuum line for the spectral absorption feature; (B) continuum-hull-removed spectral absorption; (C) feature depth and feature width (full width at half-maximum, as defined by Kokaly and Skidmore [51]); and (D) feature area, calculated by summing the individual band depths of each channel within the continuum-removed spectral absorption feature.
Figure 5. Average spectra of representative quadrats collected on Cocos (Keeling) Island beaches, including a quadrat with no MPD and quadrats with varying MPD abundance (<1%, 1–2%, 2–3%, 3–4%, and +5%). The presence of organic matter like seaweed in some quadrats reduced the overall reflectance. Wavelengths of interest for plastic detection are shown as vertical lines. Modelled atmospheric transmittance is shaded grey to illustrate the position of the atmospheric windows [adapted with permission from [52] ©The Optical Society].
Figure 6. Average continuum-removed reflectance of representative quadrats collected on Cocos (Keeling) Island beaches, including a quadrat with no MPD, quadrats with varying MPD abundance (<1%, 1–2%, 2–3%, 3–4%, and +5%), and an intentionally created quadrat with 25% surface cover. Vertical lines show the positions of the absorption features identified in the literature [30]. The plastic absorption features at 931 nm, 1045 nm, and 2046 nm were not readily distinguishable in the in situ quadrats compared with the quadrat with no MPD. The 1215 nm and 1732 nm features are distinguishable when compared with the sand-only sample.
Figure 7. Continuum-removed reflectance of quadrats whose spectra differed statistically from a control sand group. The mean and 95% upper and lower confidence intervals are shown for both the sand-only spectra (dark tones) and the MPD-contaminated quadrat (in colour). Statistically significant differences are shaded in grey. Quadrats exhibited surface covers of (a) 1.56% at 1215 nm and (b) 1732 nm, (c) 3.43% at 1215 nm and (d) 1732 nm, (e) 4.88% at 1215 nm and (f) 1732 nm, and (g) 7.94% at 1215 nm and (h) 1732 nm.
Figure 8. Continuum-removed reflectance of samples that were statistically different from the sand spectra around 1215 nm, compared with each other. The mean and 95% upper and lower confidence intervals are shown for the MPD-contaminated quadrats in each graph (in colour). Statistically significant differences are shaded in grey. Sample pairings are (a) 1.56% and 3.43%, (b) 1.56% and 4.88%, (c) 3.43% and 4.88%, (d) 1.56% and 7.94%, (e) 3.43% and 7.94%, and (f) 4.88% and 7.94%.
Figure 9. Continuum-removed reflectance of quadrats that were statistically different from the sand spectra around 1732 nm, compared with each other. The mean and 95% upper and lower confidence intervals are shown for the MPD-contaminated quadrats in each graph (in colour). Sample pairings are (a) 1.56% and 3.43%, (b) 1.56% and 4.88%, (c) 3.43% and 4.88%, (d) 1.56% and 7.94%, (e) 3.43% and 7.94%, and (f) 4.88% and 7.94%.
Figure A1. Average spectral reflectance measurements from each of the MPD-containing quadrats at the two field sites, grouped by surface cover.
16 pages, 3997 KiB  
Article
Evaluation of Light Pollution in Global Protected Areas from 1992 to 2018
by Haowei Mu, Xuecao Li, Xiaoping Du, Jianxi Huang, Wei Su, Tengyun Hu, Yanan Wen, Peiyi Yin, Yuan Han and Fei Xue
Remote Sens. 2021, 13(9), 1849; https://doi.org/10.3390/rs13091849 - 9 May 2021
Cited by 37 | Viewed by 8422
Abstract
Light pollution, a phenomenon in which artificial nighttime light (NTL) changes the pattern of brightness and darkness in natural areas such as protected areas (PAs), has become a global concern due to its threat to biodiversity. With ongoing global urbanization and climate change, the light pollution status of global PAs deserves attention for mitigation and adaptation. In this study, we developed a framework to evaluate the light pollution status of global PAs using global NTL time-series data. First, we classified the 30,624 global PAs into three pollution categories, non-polluted (5974), continuously polluted (8141), and discontinuously polluted (16,509), according to the time of occurrence of lit pixels in and around each PA from 1992 to 2018. Then, we explored the NTL intensity (e.g., digital numbers) and its trend in the polluted PAs and identified hotspot PAs at the global scale with consideration of global urbanization. Our study shows that global light pollution is mainly distributed between 30°N and 60°N, including Europe, North America, and East Asia. Although the overall temporal trend of NTL intensity in global PAs is increasing, Japan and the United States of America (USA) show opposite trends due to the implementation of well-planned ecological conservation policies and declining population growth. For most polluted PAs, the lit pixels are close to their boundaries (i.e., less than 10 km), and the NTL in and around these lit areas has become stronger over the past decades. The identified hotspot PAs (e.g., in Europe, the USA, and East Asia) help support decisions on global biodiversity conservation, particularly under global urbanization and climate change.
(This article belongs to the Special Issue Light Pollution Monitoring Using Remote Sensing Data)
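The three pollution categories depend only on when lit pixels occur in the 1992–2018 NTL series. A schematic classifier over a per-PA boolean "lit pixels present in year y" series is sketched below; the decision rules paraphrase the category definitions, and the example series are synthetic.

```python
import numpy as np

def pollution_category(lit_by_year: np.ndarray) -> str:
    """Classify a protected area from a boolean series of 'lit pixels present' per year.

    non-polluted:             never lit;
    continuously polluted:    lit every year from the first occurrence onwards;
    discontinuously polluted: lit in some years, dark again in others.
    """
    if not lit_by_year.any():
        return "non-polluted"
    first = int(np.argmax(lit_by_year))        # index of the first lit year
    if lit_by_year[first:].all():
        return "continuously polluted"
    return "discontinuously polluted"

years = np.arange(1992, 2019)
print(pollution_category(np.zeros_like(years, dtype=bool)))  # non-polluted
print(pollution_category(years >= 2005))                     # continuously polluted
print(pollution_category(years % 4 == 0))                    # discontinuously polluted
```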
Full article ">Figure 1
<p>The proposed analyses framework of light pollution in global PAs by combining different light pollution categories (<b>a</b>) and the temporal trend of NTL in polluted PAs (<b>b</b>). Note: PAs are simplified as ellipses for illustration.</p>
Full article ">Figure 2
<p>Conceptual illustration of light pollution categories of non-polluted (<b>a</b>), continuously polluted (<b>b</b>), and discontinuously polluted (<b>c</b>).</p>
Full article ">Figure 3
<p>Conceptual illustration of buffer intervals around PA revealed by the first polluted buffer (<b>a</b>) and the high-intensity buffer (<b>b</b>). Note: yellow and purple were used to indicate different pollution intervals (i.e., the first pollution and high-intensity interval).</p>
Full article ">Figure 4
<p>Spatial distribution of different light pollution categories (<b>a</b>) and their composited visualization using continuously polluted (R), non-polluted (G), and discontinuously polluted (B) (<b>b</b>).</p>
Full article ">Figure 5
<p>The temporal trends of NTL worldwide in continuously polluted (<b>a</b>) and discontinuously polluted (<b>b</b>) PAs. The visualization of kernel density maps is composited by channels of increase (R), decrease (G), and no change (B). Note: the search radius of the kernel density is 5 degrees, and the weight is the natural logarithm of the area.</p>
Full article ">Figure 6
<p>The distance of PAs to the averaged first (<b>a</b>) and high-intensity (<b>b</b>) polluted buffers. The visualization of maps is composited by channels of distance to buffers (buffer &lt;= 10, R; 10 &lt; buffer &lt; 25, G; buffer &gt;= 25, B). Those red points with distances beyond 40 km are mainly caused by unstable light sources (e.g., ports, logging, and shipping).</p>
Full article ">Figure 7
<p>Distribution of global polluted PAs with high value of NTL in the high-intensity interval (<b>a</b>) and their temporal trends of NTL over the past decades (<b>b</b>). Detailed cases in (<b>c</b>–<b>g</b>) in (<b>a</b>) can be found in enlarged views. Note: the mean value of NTL and temporal trend of annual NTL sum in the high-intensity interval were visualized by the kernel density approach to identify these five global hotspots.</p>
Full article ">Figure 8
<p>The relationship between light pollution (i.e., the ratio of polluted PA with an increasing trend) and urbanization (i.e., the increasing rate of impervious surface areas).</p>
Full article ">
30 pages, 6983 KiB  
Article
Estimation of Long-Term Surface Downward Longwave Radiation over the Global Land from 2000 to 2018
by Chunjie Feng, Xiaotong Zhang, Yu Wei, Weiyu Zhang, Ning Hou, Jiawen Xu, Shuyue Yang, Xianhong Xie and Bo Jiang
Remote Sens. 2021, 13(9), 1848; https://doi.org/10.3390/rs13091848 - 9 May 2021
Cited by 9 | Viewed by 3651
Abstract
It is of great importance for climate change studies to construct a worldwide, long-term surface downward longwave radiation (Ld, 4–100 μm) dataset. Although a number of global Ld datasets are available, their low accuracies and coarse spatial resolutions limit their applications. This study generated a daily Ld dataset with a 5-km spatial resolution over the global land surface from 2000 to 2018 using atmospheric parameters, including 2-m air temperature (Ta), relative humidity (RH) at 1000 hPa, total column water vapor (TCWV), surface downward shortwave radiation (Sd), and elevation, based on the gradient boosting regression tree (GBRT) method. The generated Ld dataset was evaluated using ground measurements collected from the AmeriFlux, AsiaFlux, baseline surface radiation network (BSRN), surface radiation budget network (SURFRAD), and FLUXNET networks. The validation showed that the root mean square error (RMSE), mean bias error (MBE), and correlation coefficient (R) of the generated daily Ld dataset were 17.78 W m−2, 0.99 W m−2, and 0.96 (p < 0.01), respectively. Comparisons with other global land surface radiation products indicated that the generated Ld dataset performed better than the clouds and the earth's radiant energy system synoptic (CERES-SYN) edition 4.1 dataset and the ERA5 reanalysis product at the selected sites. In addition, the analysis of the spatiotemporal characteristics of the generated Ld dataset showed an increasing trend of 1.8 W m−2 per decade (p < 0.01) from 2003 to 2018, which was closely related to Ta and water vapor pressure. In general, the generated Ld dataset has higher spatial resolution and accuracy and can help improve the existing radiation products.
(This article belongs to the Special Issue Advances on Land–Ocean Heat Fluxes Using Remote Sensing)
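As a stand-in for the GBRT step, the sketch below trains scikit-learn's GradientBoostingRegressor on synthetic versions of the five predictors named above (Ta, RH, TCWV, Sd, and elevation) against a made-up Ld relation; the hyper-parameters, value ranges, and target function are assumptions of the sketch, not the paper's tuned setup.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error

# Synthetic predictors mirroring the paper's inputs: Ta (K), RH (%), TCWV (kg m-2),
# Sd (W m-2), elevation (m); target Ld (W m-2) from an invented relation plus noise.
rng = np.random.default_rng(3)
n = 5000
X = np.column_stack([
    rng.uniform(240, 310, n),   # Ta
    rng.uniform(10, 100, n),    # RH
    rng.uniform(2, 60, n),      # TCWV
    rng.uniform(0, 350, n),     # Sd
    rng.uniform(0, 4000, n),    # elevation
])
ld = 1.2 * X[:, 0] + 0.5 * X[:, 1] + 1.5 * X[:, 2] - 0.01 * X[:, 4] + rng.normal(0, 5, n)

X_tr, X_te, y_tr, y_te = train_test_split(X, ld, test_size=0.2, random_state=0)
model = GradientBoostingRegressor(n_estimators=300, max_depth=3, learning_rate=0.05)
model.fit(X_tr, y_tr)
rmse = mean_squared_error(y_te, model.predict(X_te)) ** 0.5
print(f"test RMSE = {rmse:.2f} W m-2")
```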
Full article ">Figure 1
<p>Geographical distribution of observation sites used to model (314 sites in total, green) and validate (35 sites in total, red) the <span class="html-italic">L<sub>d</sub></span> dataset in this study collected at AmeriFlux (squares) with 159 and 16 sites, AsiaFlux (pentagrams) with 23 and 3 sites, BSRN networks (circles) with 51 and 6 sites, FLUXNET (inverted triangle) with 75 and 9 sites, and SURFRAD (positive triangle) with 6 and 1 sites, respectively.</p>
Full article ">Figure 2
<p>The main flowchart in this study.</p>
Full article ">Figure 3
<p>Evaluation results of daily <span class="html-italic">L<sub>d</sub></span> estimates on the basis of the GBRT model for (<b>a</b>) the training dataset and (<b>b</b>) the test dataset against the ground measurements from March 2000 to December 2018.</p>
Full article ">Figure 4
<p>Evaluation results of <span class="html-italic">L<sub>d</sub></span> estimates with 5-km resolution based on the GBRT model on the (<b>a</b>) daily and (<b>b</b>) monthly time scales against the ground measurements from March 2000 to December 2018.</p>
Full article ">Figure 5
<p>(<b>a</b>) RMSE and (<b>b</b>) MBE histograms of daily <span class="html-italic">L<sub>d</sub></span> estimates with 5-km resolution based on the GBRT model against the ground measurements from March 2000 to December 2018.</p>
Full article ">Figure 6
<p>Evaluation results of the daily and monthly (<b>a</b>,<b>d</b>) <span class="html-italic">L<sub>d</sub></span> estimates based on the GBRT model, (<b>b</b>,<b>e</b>) CERES-SYN <span class="html-italic">L<sub>d</sub></span> product, and (<b>c</b>,<b>f</b>) ERA5 <span class="html-italic">L<sub>d</sub></span> retrieval with a 100-km resolution against the ground measurements from March 2000 to December 2018.</p>
Full article ">Figure 7
<p>(<b>a</b>) RMSE and (<b>b</b>) MBE histograms of the daily <span class="html-italic">L<sub>d</sub></span> estimates based on the GBRT model, CERES-SYN <span class="html-italic">L<sub>d</sub></span> product, and ERA5 <span class="html-italic">L<sub>d</sub></span> retrieval with a 100-km resolution against the ground measurements from March 2000 to December 2018.</p>
Full article ">Figure 8
<p>The spatial distribution of the multiyear seasonal mean value of the generated <span class="html-italic">L<sub>d</sub></span> dataset in Northern hemisphere (<b>a</b>) spring (March, April, and May), (<b>b</b>) summer (June, July, and August), (<b>c</b>) autumn (September, October, and November), and (<b>d</b>) winter (December, January, and February) over the global land surface from 2003 to 2018.</p>
Full article ">Figure 9
<p>The spatial distribution of the multiyear annual mean value of the (<b>a</b>) generated <span class="html-italic">L<sub>d</sub></span> dataset, (<b>b</b>) generated <span class="html-italic">L<sub>d</sub></span> minus CERES-SYN, and (<b>c</b>) generated <span class="html-italic">L<sub>d</sub></span> minus ERA5 over the global land surface from 2003 to 2018.</p>
Full article ">Figure 9 Cont.
<p>The spatial distribution of the multiyear annual mean value of the (<b>a</b>) generated <span class="html-italic">L<sub>d</sub></span> dataset, (<b>b</b>) generated <span class="html-italic">L<sub>d</sub></span> minus CERES-SYN, and (<b>c</b>) generated <span class="html-italic">L<sub>d</sub></span> minus ERA5 over the global land surface from 2003 to 2018.</p>
Full article ">Figure 10
<p>Multiyear (<b>a</b>) monthly mean values, (<b>b</b>) annual mean values, and (<b>c</b>) annual mean anomaly values of the generated, ERA5, and CERES-SYN <span class="html-italic">L<sub>d</sub></span> from 2003 to 2018, respectively.</p>
Full article ">Figure 11
<p>The trend of the annual mean anomalies of the generated <span class="html-italic">L<sub>d</sub></span> estimation, ERA5 2-m air temperature, and water vapor pressure from 2003 to 2018.</p>
Full article ">Figure 12
<p>The spatial distribution of annual mean values for the (<b>a</b>) ERA5 2-m air temperature and (<b>b</b>) water vapor pressure from 2003 to 2018.</p>
Full article ">Figure 13
<p>The spatial distribution of the correlation coefficient between the generated <span class="html-italic">L<sub>d</sub></span> estimation and (<b>a</b>) ERA5 2-m air temperature and (<b>b</b>) water vapor pressure from 2003 to 2018. Only significant pixels where <span class="html-italic">p</span> values are less than 0.05 appeared.</p>
Full article ">
20 pages, 4168 KiB  
Article
Comparing PlanetScope to Landsat-8 and Sentinel-2 for Sensing Water Quality in Reservoirs in Agricultural Watersheds
by Abubakarr S. Mansaray, Andrew R. Dzialowski, Meghan E. Martin, Kevin L. Wagner, Hamed Gholizadeh and Scott H. Stoodley
Remote Sens. 2021, 13(9), 1847; https://doi.org/10.3390/rs13091847 - 9 May 2021
Cited by 46 | Viewed by 7496
Abstract
Agricultural runoff transports sediments and nutrients that deteriorate water quality erratically, posing a challenge to ground-based monitoring. Satellites provide data at spatial and temporal scales that can be used for water quality monitoring. PlanetScope nanosatellites have spatial (3 m) and temporal (daily) resolutions that may help improve water quality monitoring compared with coarser-resolution satellites. This work compared PlanetScope to Landsat-8 and Sentinel-2 in their ability to detect key water quality parameters. The spectral bands of each satellite were regressed against chlorophyll a, turbidity, and Secchi depth data from 13 reservoirs in Oklahoma over three years (2017–2020), and significant regression models were developed for each satellite. Landsat-8 and Sentinel-2 explained more variation in chlorophyll a than PlanetScope, likely because they have more spectral bands. PlanetScope and Sentinel-2 explained relatively similar amounts of variation in the turbidity and Secchi disk data, while Landsat-8 explained less variation in these parameters. Since PlanetScope is a commercial satellite, its application may be limited to cases where coarser-resolution satellites are not feasible. We identified scenarios where PlanetScope may be more beneficial than Landsat-8 and Sentinel-2: measuring water quality parameters that vary daily, in small ponds and narrow coves of reservoirs, and at reservoir edges.
(This article belongs to the Special Issue Remote Sensing for Water Resources Assessment in Agriculture)
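Regressing spectral bands against a water quality parameter, as done here per satellite, reduces to ordinary least squares on (transformed) band values. The sketch below regresses log-transformed chlorophyll-a on two common band ratios; the reflectances, ratios, and coefficients are synthetic and are not the models developed in the paper.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Synthetic PlanetScope-like surface reflectance bands: blue, green, red, NIR.
rng = np.random.default_rng(4)
n = 200
bands = rng.uniform(0.01, 0.15, (n, 4))

# Invented truth: chlorophyll-a rises with the NIR/red and green/blue ratios.
chla = np.exp(1.5 * bands[:, 3] / bands[:, 2] + 0.8 * bands[:, 1] / bands[:, 0]
              + rng.normal(0, 0.1, n))

# Regress log(Chl-a) on the band ratios, the usual linearising transform.
features = np.column_stack([bands[:, 3] / bands[:, 2], bands[:, 1] / bands[:, 0]])
reg = LinearRegression().fit(features, np.log(chla))
print(f"R^2 = {reg.score(features, np.log(chla)):.3f}")
```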
Figure 1. Map of Oklahoma showing the BUMP reservoirs across the state (grey polygons and speckles) and the 13 reservoirs used in this study (in colors). The spatial data for this map were obtained from the Oklahoma Water Resources Board's website [34].
Figure 2. Flow diagram showing the procedure for model development, model selection, and model validation.
Figure 3. Histograms of the Chl-a (µg/L), Turb (NTU), and SD (cm) data from the study reservoirs that correspond to their respective PS, Landsat-8, and Sentinel-2 images.
Figure 4. Scatter plots showing the relationships between the model-derived and measured concentrations of Chl-a (µg/L), Turb (NTU), and SD (cm) with PS, Landsat-8 (L8), and Sentinel-2 (S2). The R² (R-sq) values are displayed at the top of each graph, along with the associated parameters and satellite platforms.
Figure 5. Average R² values in the 10-fold CV of the parameters with PS, Landsat-8, and Sentinel-2.
Figure 6. Average RMSE values in the 10-fold CV of the parameters with PS, Landsat-8, and Sentinel-2.
Figure 7. PS (left) and Landsat-8 (right) maps of Chl-a in Lake McMurtry, north-central Oklahoma. Both satellites acquired their images on 27 November 2019, during an active algal bloom event. The overview maps at the top show the location of Lake McMurtry in Oklahoma (top right, in a red box) and Lake McMurtry itself (top left), with the focus area delineated by a red box. The color bars represent the concentration ranges as estimated by each of the two satellites.
Figure 8. PS (left) and Sentinel-2 (right) maps of Chl-a in Lake McMurtry, north-central Oklahoma. Both satellites acquired their images on 1 December 2019, during an active algal bloom event. The overview maps at the top show the location of Lake McMurtry in Oklahoma (top right, in a red box) and Lake McMurtry itself (top left), with the focus area delineated by a red box. The color bars represent the concentration ranges as estimated by each of the two satellites.
Figure 9. PlanetScope maps of Chl-a in Lake McMurtry, north-central Oklahoma, on four image acquisition dates: upper left, 27 November 2019; upper right, 30 November 2019; lower left, 1 December 2019; lower right, 3 December 2019. The color bars in the legends represent the Chl-a concentration ranges on each date.
20 pages, 7700 KiB  
Article
Identifying Spatial and Temporal Variations in Concrete Bridges with Ground Penetrating Radar Attributes
by Vivek Kumar, Isabel M. Morris, Santiago A. Lopez and Branko Glisic
Remote Sens. 2021, 13(9), 1846; https://doi.org/10.3390/rs13091846 - 9 May 2021
Cited by 10 | Viewed by 3172
Abstract
Estimating variations in material properties over space and time is essential for structural health monitoring (SHM), mandated inspection, and insurance of civil infrastructure. Properties such as compressive strength evolve over time and reflect the overall condition of aging infrastructure. Concrete structures pose an additional challenge due to the inherent spatial variability of material properties over large length scales. In recent years, nondestructive approaches such as the rebound hammer and ultrasonic velocity have been used to determine the in situ material properties of concrete, with a focus on compressive strength. However, these methods require expert personnel, careful data collection, and high investment. This paper presents a novel approach that uses ground penetrating radar (GPR) to estimate the variability of in situ material properties over time and space for the assessment of concrete bridges. The results show that attributes (or features) of the GPR data, such as raw average amplitudes, can be used to identify differences in compressive strength across the deck of a concrete bridge. Attributes such as instantaneous amplitudes and the intensity of reflected waves are useful for predicting material properties such as compressive strength, porosity, and density. For compressive strength, the Maturity Index (MI) was used as an alternative approach to estimate present values for comparison with the GPR-based estimates. The results show that GPR attributes can be used successfully to identify the spatial and temporal variation of concrete properties. Finally, their suitability and limitations for field applications are discussed.
(This article belongs to the Special Issue Trends in GPR and Other NDTs for Transport Infrastructure Assessment)
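One of the attribute families used here, instantaneous amplitude, is the envelope of the analytic signal of a GPR trace. The sketch below computes it with scipy's Hilbert transform on a synthetic 900 MHz A-scan; the sampling rate, wavelet, and reflector time are invented for illustration.

```python
import numpy as np
from scipy.signal import hilbert

# Synthetic GPR A-scan: a 900 MHz wavelet reflection with a Gaussian envelope.
fs = 20e9                                    # 20 GHz sampling of the time window
t = np.arange(0, 20e-9, 1 / fs)              # 20 ns trace
trace = np.exp(-((t - 8e-9) / 1.5e-9) ** 2) * np.cos(2 * np.pi * 900e6 * t)

analytic = hilbert(trace)                    # analytic signal via the Hilbert transform
inst_amplitude = np.abs(analytic)            # envelope = instantaneous amplitude
inst_phase = np.unwrap(np.angle(analytic))   # instantaneous phase, another attribute

peak = t[np.argmax(inst_amplitude)]
print(f"envelope peaks at {peak * 1e9:.1f} ns")  # ~8 ns, the reflector's two-way time
```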
Full article ">Figure 1
<p>Overview of the experimental approach and data collected in the laboratory study.</p>
Full article ">Figure 2
<p>Front view of the Streicker bridge at Princeton University.</p>
Full article ">Figure 3
<p>A single fiber-optics sensor present as part of the SHM system in the Streicker bridge (Adapted from [<a href="#B4-remotesensing-13-01846" class="html-bibr">4</a>]).</p>
Full article ">Figure 4
<p>Elevation view of the Streicker bridge with sensor locations and direction of GPR transects.</p>
Full article ">Figure 5
<p>(<b>a</b>) Compressive strength for the August pours (NE and main span.) (<b>b</b>) Compressive strength for the October pour (SE leg).</p>
Full article ">Figure 6
<p>GPR data collection on Streicker bridge by the authors.</p>
Full article ">Figure 7
<p>Southeast leg section showing location of GPR scans. Transects run longitudinally from main span down the SE and NE legs (adapted from [<a href="#B47-remotesensing-13-01846" class="html-bibr">47</a>]).</p>
Full article ">Figure 8
<p>(<b>a</b>) Raw B-scan along the deck of the bridge collected with 900 MHz antenna and (<b>b</b>) preprocessed B-scan truncated to remove all internal reflections except the upper rebar layer.</p>
Full article ">Figure 9
<p>Flowchart depicting the steps involved in estimating the material properties using a machine learning model.</p>
Full article ">Figure 10
<p>Compressive strength obtained as a function of maturity index based on the initial concrete core sample tests.</p>
Full article ">Figure 11
<p>Attributes capturing qualitative differences in the two construction phases.</p>
Full article ">Figure 12
<p>Attributes unsuccessful in capturing differences between the construction phases.</p>
Full article ">Figure 13
<p>Figure showing increased loss of cover on the southeast leg compared to northeast leg.</p>
Full article ">Figure 14
<p>Spatial differences in material properties between the southeast and northeast legs.</p>
Full article ">Figure 15
<p>Strength calculation at different sensor locations using maturity method.</p>
Full article ">
29 pages, 3572 KiB  
Article
Quantifying the Response of German Forests to Drought Events via Satellite Imagery
by Marius Philipp, Martin Wegmann and Carina Kübert-Flock
Remote Sens. 2021, 13(9), 1845; https://doi.org/10.3390/rs13091845 - 9 May 2021
Cited by 15 | Viewed by 5340
Abstract
Forest systems provide crucial ecosystem functions to our environment, such as balancing carbon stocks and influencing the local, regional, and global climate. A trend towards an increasing frequency of climate change-induced extreme weather events, including drought, is a major challenge for forest management. Within this context, remote sensing data provide a powerful means for fast, operational, and inexpensive investigations over large spatial scales and time. This study explores the potential of satellite data in combination with harmonic analyses for quantifying the vegetation response to drought events in German forests. The harmonic modelling method was compared with a z-score standardization approach and correlated against both meteorological and topographical data. Optical satellite imagery from Landsat and the Moderate Resolution Imaging Spectroradiometer (MODIS) was used in combination with three commonly applied vegetation indices. The highest correlation scores based on the harmonic modelling technique were computed for the 6th harmonic degree. MODIS imagery in combination with the Normalized Difference Vegetation Index (NDVI) generated the best results for measuring the spectral response to drought conditions. The strongest correlations between remote sensing data and meteorological measures were observed for soil moisture and the self-calibrated Palmer Drought Severity Index (scPDSI). Furthermore, forest regions over sandy soils with pine as the dominant tree type were identified as particularly vulnerable to drought, and topographical analyses suggested mitigated drought effects along hill slopes. While the proposed approaches provide valuable information about vegetation dynamics in response to meteorological conditions, standardized in situ measurements over larger spatial scales, related to drought quantification, are required for a further in-depth quality assessment of the methods and data used.
(This article belongs to the Section Forest Remote Sensing)
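The harmonic-residual measure used in this study (ΔHarmonics) is the difference between the observed NDVI and a harmonic fit of a chosen degree. A minimal least-squares version is sketched below with a 6th-degree fit, the paper's best-performing setting, applied to a synthetic monthly NDVI series with an inserted 2018-style drought dip; all values are invented for illustration.

```python
import numpy as np

def harmonic_fit(t, y, degree):
    """Least-squares fit of c + sum_{k<=degree} [a_k sin(2*pi*k*t) + b_k cos(2*pi*k*t)]."""
    cols = [np.ones_like(t)]
    for k in range(1, degree + 1):
        cols += [np.sin(2 * np.pi * k * t), np.cos(2 * np.pi * k * t)]
    A = np.column_stack(cols)
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    return A @ coef

# Synthetic detrended monthly NDVI, 2010-2019, with a drought dip in summer 2018.
t = np.arange(0, 10, 1 / 12)                    # years since 2010
rng = np.random.default_rng(5)
ndvi = 0.55 + 0.25 * np.sin(2 * np.pi * t - 1.8) + rng.normal(0, 0.02, t.size)
ndvi[(t > 8.3) & (t < 8.8)] -= 0.12             # 2018 summer drought anomaly

fitted = harmonic_fit(t, ndvi, degree=6)
delta_harmonics = ndvi - fitted                 # negative values flag drought response
print(f"strongest anomaly at t = {t[np.argmin(delta_harmonics)]:.2f} yr")
```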
Full article ">Figure 1
<p>Flowchart of the study outline. The analysis was conducted on two levels. The first level was designed to identify optimal predictor combinations for drought assessment on a national scale. The second level was dedicated to study local drought response of different tree types depending on present soil types and topographic positioning.</p>
Full article ">Figure 2
<p>Study area covering all German forest regions for Analysis Level 1 (<b>a</b>), as well as 60 forest reference areas across Bavaria with information about dominant tree species and forest age for Analysis Level 2 (<b>b</b>).</p>
Full article ">Figure 3
<p><math display="inline"><semantics> <mo>Δ</mo> </semantics></math>Harmonics calculation based on detrended monthly median MODIS NDVI data of the Steigerwald between 2010 and 2019. (<b>a</b>) Fitted curve using a 1st harmonic degree. (<b>b</b>) Fitted curve using a 3rd harmonic degree. (<b>c</b>) Fitted curve using a 6th harmonic degree. (<b>d</b>) Difference (<math display="inline"><semantics> <mo>Δ</mo> </semantics></math>Harmonics) between NDVI values and the 6th degree harmonic fitted curve for months May–October.</p>
Full article ">Figure 3 Cont.
<p><math display="inline"><semantics> <mo>Δ</mo> </semantics></math>Harmonics calculation based on detrended monthly median MODIS NDVI data of the Steigerwald between 2010 and 2019. (<b>a</b>) Fitted curve using a 1st harmonic degree. (<b>b</b>) Fitted curve using a 3rd harmonic degree. (<b>c</b>) Fitted curve using a 6th harmonic degree. (<b>d</b>) Difference (<math display="inline"><semantics> <mo>Δ</mo> </semantics></math>Harmonics) between NDVI values and the 6th degree harmonic fitted curve for months May–October.</p>
Full article ">Figure 4
<p>Boxplots of correlation scores between monthly MODIS NDVI <math display="inline"><semantics> <mo>Δ</mo> </semantics></math>Harmonic values and MODIS NDVI z-score (<b>a</b>) as well as scPDSI (<b>b</b>) values. A higher harmonic degree leads to overall higher correlation scores. scPDSI vs. <math display="inline"><semantics> <mo>Δ</mo> </semantics></math>Harmonic <span class="html-italic">r</span> values (<b>b</b>) are significantly lower compared to z-score vs. <math display="inline"><semantics> <mo>Δ</mo> </semantics></math>Harmonic <span class="html-italic">r</span> values (<b>a</b>).</p>
Full article ">Figure 5
<p>Boxplots of correlation scores between monthly scPDSI values and MODIS <math display="inline"><semantics> <mo>Δ</mo> </semantics></math>Harmonic (<b>a</b>) as well as z-score (<b>b</b>) values. NDVI outperformed the other two indices for both <math display="inline"><semantics> <mo>Δ</mo> </semantics></math>Harmonics (<b>a</b>) and z-score (<b>b</b>) data. z-score generated overall slightly higher <span class="html-italic">r</span> values.</p>
Full article ">Figure 6
<p>Boxplots of correlation scores between annual minimum scPDSI values and annual minimum <math display="inline"><semantics> <mo>Δ</mo> </semantics></math>Harmonics (<b>a</b>) as well as z-score (<b>b</b>) data. NDVI outperformed the other two indices for both <math display="inline"><semantics> <mo>Δ</mo> </semantics></math>Harmonics (<b>a</b>) and z-score (<b>b</b>) data. z-score generated overall slightly higher <span class="html-italic">r</span> values.</p>
Full article ">Figure 7
<p>Spatial variability of correlation coefficients between scPDSI and (<b>a</b>) MODIS NDVI <math display="inline"><semantics> <mo>Δ</mo> </semantics></math>Harmonic as well as (<b>b</b>) MODIS NDVI z-score values across German forest areas.</p>
Full article ">Figure 8
<p>Drought maps derived from (<b>a</b>) MODIS NDVI <math display="inline"><semantics> <mo>Δ</mo> </semantics></math>Harmonic, (<b>b</b>) MODIS NDVI z-score and (<b>c</b>) scPDSI values for German forests in August 2018.</p>
Full article ">Figure 9
<p>Boxplots of (<b>a</b>) 6th degree <math display="inline"><semantics> <mo>Δ</mo> </semantics></math>Harmonic NDVI and (<b>b</b>) z-score NDVI values in August 2018 for forests with different dominant tree species based on Landsat data. Forests with Larch and Pine as the dominant species feature overall lowest values. Tree types are ordered after the overall mean value.</p>
Full article ">Figure 10
<p>Frequency of combined Landsat data per year from the sensors TM, ETM+ and OLI of the Steigerwald. The remote sensing imagery was masked from clouds, cloud shadows and snow. An overall increase in data frequency can be observed.</p>
Full article ">Figure 11
<p>Monthly median NDVI values for Landsat and MODIS data of the Steigerwald. Remote sensing imagery was masked from clouds, cloud shadows and snow. Adjacent months with available data are connected with a line. MODIS data has a smaller temporal coverage compared to Landsat data, but provides less noisy values. Furthermore, MODIS features a higher data continuity than Landsat.</p>
Full article ">Figure 12
<p>Number of monthly median images after cloud, cloud shadow and snow masking since the year 2000. Landsat imagery (<b>b</b>) exhibits inconsistent data frequency across Germany, while MODIS data (<b>c</b>) features overall more homogeneous and higher data frequency. Areas with lower numbers of available Landsat scenes overlap with tile gaps of the Worldwide Reference System (WRS) as seen in (<b>a</b>).</p>
Full article ">Figure A1
<p>Mean scPDSI of the Steigerwald. More recent years feature significantly lower scPDSI values.</p>
Full article ">Figure A2
<p>Spectral wavelengths covered by different sensors within the visible RGB, NIR and SWIR regions. Bands of the sensors TM [<a href="#B66-remotesensing-13-01845" class="html-bibr">66</a>], ETM+ [<a href="#B67-remotesensing-13-01845" class="html-bibr">67</a>], OLI [<a href="#B67-remotesensing-13-01845" class="html-bibr">67</a>] and the MODIS product MOD09A1 [<a href="#B56-remotesensing-13-01845" class="html-bibr">56</a>] are compared.</p>
Full article ">Figure A3
<p>Mean correlation coefficients between scPDSI and (<b>a</b>) MODIS <math display="inline"><semantics> <mo>Δ</mo> </semantics></math>Harmonic NDVI as well as (<b>b</b>) MODIS z-score NDVI values for each main natural unit in Germany after Schmithüsen and Meynen [<a href="#B99-remotesensing-13-01845" class="html-bibr">99</a>], who divided Germany into 86 nature units based on climatic characteristics, soil types and topographic parameters.</p>
Full article ">
26 pages, 16537 KiB  
Article
Terrain Proxy-Based Site Classification for Seismic Zonation in North Korea within a Geospatial Data-Driven Workflow
by Han-Saem Kim, Chang-Guk Sun, Moon-Gyo Lee and Hyung-Ik Cho
Remote Sens. 2021, 13(9), 1844; https://doi.org/10.3390/rs13091844 - 9 May 2021
Cited by 4 | Viewed by 3419
Abstract
Numerous seismic events occur in North Korea. However, it is difficult to perform seismic hazard assessments and obtain zonal data for the Korean Peninsula, including North Korea, with parametric or nonparametric methods. Remote sensing can be implemented for soil characterization or spatial zonation studies of irregular, surficial, and subsurface systems in inaccessible areas. Herein, a data-driven workflow for extracting the principal features from a digital terrain model (DTM) is proposed. Geospatial grid information containing terrain features and the average shear wave velocity in the top 30 m of the subsurface (VS30) is constructed using geostatistical interpolation methods; machine learning (ML)-based regression models are then optimized, and VS30-based seismic zonation is forecast for test areas in North Korea. The interrelationships between VS30 and the terrain proxies (elevation, slope, and landform class) in the training area in South Korea were verified to define the input layer of the regression models. The landform class represents a new proxy for VS30 and was subgrouped according to its correlation with grid-based VS30. The geospatial grid information was generated via the optimum geostatistical interpolation method (sequential Gaussian simulation, SGS). The best-fitting model among four ML methods was determined by evaluating cost function-based prediction performance, performing uncertainty analysis for the empirical correlations of VS30, and studying spatial correspondence with the borehole-based VS30 map. The best-fitting regression models were then trained on the geospatial grid in South Korea, and DTMs and their terrain features were constructed, along with VS30 maps, for three major cities (Pyongyang, Kaesong, and Nampo) in North Korea. The multilayer perceptron-based VS30 map showed a distribution similar to that of the VS30 grid obtained using SGS.
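The landform classes derive from the topographic position index (TPI), which is simply the elevation of a cell minus the mean elevation of its neighbourhood. A minimal raster sketch is given below using a uniform window on a synthetic DEM; the window size, class breaks, and DEM are invented, whereas the paper subgroups its TPI-based landform classes by their correlation with VS30.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def tpi(dem: np.ndarray, window: int) -> np.ndarray:
    """Topographic position index: cell elevation minus the mean of its neighbourhood."""
    return dem - uniform_filter(dem, size=window, mode="nearest")

# Synthetic DEM: a ridge crossing a gently sloping plane.
y, x = np.mgrid[0:200, 0:200].astype(float)
dem = 0.1 * x + 40 * np.exp(-((y - 100) / 15) ** 2)

index = tpi(dem, window=21)
landform = np.digitize(index, bins=[-5, 5])  # 0: valley/low, 1: flat/slope, 2: ridge/high
print("ridge cells:", int((landform == 2).sum()))
```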
Figure 1. Study areas on the Korean Peninsula: (a) training and test areas; (b) the central-western region involving Seoul and part of Incheon, the training area for the site classification model; (c) test area in Pyongyang; (d) test area in Kaesong; (e) test area in Nampo. The base map was obtained from the OpenStreetMap project.
Figure 2. DTM-based elevation and slope: (a) the training area in Seoul and Incheon; (b) the test area in Pyongyang; (c) the test area in Kaesong; (d) the test area in Nampo.
Figure 3. Topographic position index (TPI)-based classification of landforms: (a) the training area in Seoul and Incheon; (b) the test area in Pyongyang; (c) the test area in Kaesong; (d) the test area in Nampo.
Figure 4. Histogram of the TPI classes in the training and test areas.
Figure 5. VS30 mapping using (a) borehole (SPT-N) datasets in the training area, based on the interpolation algorithms: (b) inverse distance weighting; (c) simple kriging; (d) ordinary kriging; (e) universal kriging; (f) empirical Bayesian kriging; (g) sequential Gaussian simulation, 5th realization; (h) sequential Gaussian simulation, 50th realization; (i) sequential Gaussian simulation, 100th realization; (j) sequential Gaussian simulation, E-type. VS30 was calculated using VS transformed from the SPT-N value with correlation #1 (Table 1). The unit of VS30 is m/s.
Figure 6. Cross-validation results at the borehole locations: measured versus predicted VS30 for (a) inverse distance weighting; (b) simple kriging; (c) ordinary kriging; (d) universal kriging; (e) empirical Bayesian kriging; (f) sequential Gaussian simulation, 5th realization; (g) sequential Gaussian simulation, 50th realization; (h) sequential Gaussian simulation, 100th realization; (i) sequential Gaussian simulation, E-type. The black dotted line represents the 1:1 ratio; the blue dotted line indicates the linear regression line.
Figure 7. Relations between the SGS-E-type-based VS30 and terrain proxy values classified by the TPI-based landform class in the training area: (a) relations between the TPI-based landform class and VS30; (b) relations between the grouped TPI-based landform class and VS30; (c) relations between elevation and VS30 classified by the grouped TPI-based landform class; (d) relations between slope and VS30 classified by the grouped TPI-based landform class. The criteria of the terrain proxy-based site classification are also presented.
Figure 8. Bar charts of the four cost-function-based prediction performances of the VS30 predicted by the best-fitting model within the four regression methods and three N-VS correlations. The metrics are (a) MAE, (b) RMSE, (c) RRSE, and (d) R². The black dotted line in the MAE and RMSE charts indicates the minimum deviation (140 m/s) of the VS30 thresholds in the site classification system (Table 3).
Figure 9. Correlations between the estimated VS30 and the prediction residuals using the four regression models for the grouped TPI-based landform classes, based on N-VS correlation #1. The blue dotted line indicates the linear relation between the measured and residual VS30 values.
Figure 10. VS30 mapping for the training area in Seoul and Incheon using the regression models: (a) logistic regression; (b) K-nearest neighbors; (c) support vector regression; (d) multilayer perceptron.
Figure 11. VS30 mapping for the test areas in Pyongyang, Kaesong, and Nampo using the regression models: logistic regression, K-nearest neighbors, support vector regression, and multilayer perceptron.
Figure 12. Normal distributions of the VS30 maps from the regression models for (a) the training area in Seoul and Incheon; (b) the test area in Pyongyang; (c) the test area in Kaesong; (d) the test area in Nampo.
18 pages, 5859 KiB  
Article
Distribution and Attribution of Terrestrial Snow Cover Phenology Changes over the Northern Hemisphere during 2001–2020
by Xiaona Chen, Yaping Yang, Yingzhao Ma and Huan Li
Remote Sens. 2021, 13(9), 1843; https://doi.org/10.3390/rs13091843 - 9 May 2021
Cited by 22 | Viewed by 3362
Abstract
Snow cover phenology has exhibited dramatic changes in the past decades. However, the distribution and attribution of hemispheric-scale snow cover phenology anomalies remain unclear. Using satellite-retrieved snow cover products, ground observations, and reanalysis climate variables, this study explored the distribution and attribution of the snow onset date, snow end date, and snow duration days over the Northern Hemisphere (NH) from 2001 to 2020. The latitudinal and altitudinal distributions of the 20-year averaged snow onset date, snow end date, and snow duration days are well represented by the satellite-retrieved snow cover phenology matrices. The validation results using 850 ground snow stations demonstrated that the satellite-retrieved snow cover phenology matrices capture the spatial variability of the snow onset date, snow end date, and snow duration days at the 95% significance level during the overlapping period of 2001–2017. Moreover, a delayed snow onset date and an earlier snow end date (1.12 days decade−1, p < 0.05) were detected over the NH during 2001–2020 based on the satellite-retrieved matrices. In addition, the attribution analysis indicated that the snow end date dominates snow cover phenology changes and that increased melting-season temperature is the key driving factor of snow end date anomalies over the NH during 2001–2020. These results are helpful in understanding recent snow cover change and can contribute to climate projection studies. Full article
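A minimal sketch of deriving the three phenology metrics (snow onset date, snow end date, snow duration days) from one pixel's snow-cover-fraction series. The 8-day compositing, the 50% SCF threshold, and the snow-year indexing are illustrative assumptions, not the paper's exact rules.

```python
import numpy as np

def snow_phenology(scf, dates, threshold=0.5):
    """scf: snow cover fractions per composite; dates: day-of-snow-year ints."""
    snow = scf >= threshold
    if not snow.any():
        return None, None, 0
    idx = np.flatnonzero(snow)
    d_o = dates[idx[0]]            # first snow-covered composite (onset, Do)
    d_e = dates[idx[-1]]           # last snow-covered composite (end, De)
    d_d = int(snow.sum()) * 8      # duration in days, assuming 8-day composites
    return d_o, d_e, d_d

dates = np.arange(1, 366, 8)                              # one snow year of composites
scf = np.where((dates > 90) & (dates < 250), 0.8, 0.0)    # synthetic pixel
print(snow_phenology(scf, dates))
```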
Show Figures
Figure 1: Distribution of the study area and the excluded seasonal snow-covered area.
Figure 2: Distribution of the selected GHCN snow depth observations over the NH.
Figure 3: Flowchart of gap-free MOD10C2-based snow cover extent dataset generation.
Figure 4: Definition of the snow onset date (Do), snow end date (De), and snow duration days (Dd) based on the original MOD10C2 SCF dataset.
Figure 5: Monthly snow cover extent anomalies over the NH from 1966 to 2020, derived from NH SCE CDR v01r01 (anomalies relative to the 55-year monthly means).
Figure 6: 20-year averaged Do, De, and Dd over the NH, 2001–2020.
Figure 7: 17-year averaged Do, De, and Dd from GHCN observations (2001–2017) and scatter plots against the satellite-retrieved SCP matrices.
Figure 8: Changes in Do, De, and Dd over the NH from 2001 to 2020 (five-year average for 2016–2020 minus that for 2001–2005).
Figure 9: Latitudinal distribution of the changes in Do, De, and Dd over the NH from 2001 to 2020.
Figure 10: Attribution of snow cover phenology changes: accumulation-season temperature (Ta) and precipitation (Pa), melting-season temperature (Tm), and the contributions of Do and De to Dd anomalies.
24 pages, 3358 KiB  
Article
Analyzing the Performance of GPS Data for Earthquake Prediction
by Valeri Gitis, Alexander Derendyaev and Konstantin Petrov
Remote Sens. 2021, 13(9), 1842; https://doi.org/10.3390/rs13091842 - 9 May 2021
Cited by 18 | Viewed by 4573
Abstract
The results of earthquake prediction largely depend on the quality of data and the methods of their joint processing. At present, for a number of regions, it is possible, in addition to data from earthquake catalogs, to use space geodesy data obtained with the help of GPS. The purpose of our study is to evaluate the efficiency of using time series of displacements of the Earth's surface derived from GPS data for the systematic prediction of earthquakes. The criterion of efficiency is the probability of successful prediction of an earthquake with a limited size of the alarm zone. We use a machine learning method, the method of the minimum area of alarm, to predict earthquakes with a magnitude greater than 6.0 and a hypocenter depth of up to 60 km that occurred from 2016 to 2020 in Japan, and earthquakes with a magnitude greater than 5.5 and a hypocenter depth of up to 60 km that occurred from 2013 to 2020 in California. For each region, we compare the following results: a random earthquake forecast; a forecast based on the field of the spatial density of earthquake epicenters; and forecasts based on spatio-temporal fields derived from GPS data, from seismological data, and from combined GPS and seismological data. The results confirm the effectiveness of using GPS data for the systematic prediction of earthquakes. Full article
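A minimal sketch of the first preprocessing step implied above: turning a GPS station coordinate series x(t) into a daily displacement-rate series, the raw ingredient for the spatio-temporal fields used in the forecast. The sliding-window slope estimator, the 30-day window, and the synthetic series are assumptions for illustration, not the paper's method.

```python
import numpy as np

def daily_rates(x, window=30):
    """Daily rate (mm/day) as the slope of a local linear fit in a sliding window."""
    t = np.arange(len(x))
    rates = np.full(len(x), np.nan)
    for k in range(window, len(x)):
        tt, xx = t[k - window:k], x[k - window:k]
        rates[k] = np.polyfit(tt, xx, 1)[0]   # slope of the local fit
    return rates

# synthetic W-E coordinate track (mm): slow drift plus daily noise
x = np.cumsum(np.random.default_rng(1).normal(0.05, 0.5, 1000))
g = daily_rates(x)
print(f"mean daily rate ~ {np.nanmean(g):.3f} mm/day")
```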
Show Figures
Graphical abstract.
Figure 1: Flowchart of operations in the intervals Δt before the forecast.
Figure 2: Flowchart of the algorithm.
Figure 3: Areas of analysis, GPS ground receiving stations, and epicenters of target earthquakes (Japan, m ≥ 6.0, 1 January 2011–26 July 2020; California, m ≥ 5.5, 23 December 2009–14 November 2020; epicenters used for testing in red, those used for initial training in yellow).
Figures 4 and 5: Time series of W–E and N–S displacements, daily rates, and field values for the Japanese receiving station at 35.45°N, 139.2°E.
Figures 6 and 7: The same for the Californian receiving station at 39.49°N, 123.2°W.
Figures 8 and 9: Time series of the divergence (F1), rotor (F2), and shear (F3) fields (10⁻⁶/day) at the Japanese and Californian points.
Figures 10–13: Time series of fields F4–F9 at the two points.
Figures 14 and 15: Time series of field F10 at the two points.
Figure 16: Dependences U(V) of the probability of successful earthquake prediction U on the alarm volume V for different field combinations in Japan and California.
Figure 17: Alarm area zones of the forecasts for Japan (14 December 2018; target earthquake m = 6.3 on 8 January 2019) and California (17 April 2020; target earthquake m = 6.5 on 15 May 2020); field values with V > 0.2 are not shown.
12 pages, 3810 KiB  
Communication
Impact of Assimilating FY-3D MWTS-2 Upper Air Sounding Data on Forecasting Typhoon Lekima (2019)
by Zeyi Niu, Lei Zhang, Peiming Dong, Fuzhong Weng and Wei Huang
Remote Sens. 2021, 13(9), 1841; https://doi.org/10.3390/rs13091841 - 9 May 2021
Cited by 11 | Viewed by 2569
Abstract
In this study, the Fengyun-3D (FY-3D) clear-sky Microwave Temperature Sounder-2 (MWTS-2) radiances were directly assimilated into the regional mesoscale Weather Research and Forecasting (WRF) model using the Gridpoint Statistical Interpolation (GSI) data assimilation system. Assimilation experiments were conducted to compare the track errors of typhoon Lekima obtained from use of the Advanced Microwave Sounding Unit-A (AMSU-A) radiances (EXP_AD) with those from FY-3D MWTS-2 upper-air sounding data at channels 5–7 (EXP_AMD). The clear-sky mean bias-corrected observation-minus-background (O-B) values of FY-3D MWTS-2 channels 5, 6, and 7 are 0.27, 0.10, and 0.57 K, respectively, which are smaller than those without bias correction. Compared with the control experiment, a WRF forecast without satellite data, assimilating satellite radiances improved forecast performance, reducing the mean track error beyond 36 h by 8.7% (~18.4 km) in EXP_AD and 30% (~58.6 km) in EXP_AMD. The direction of the simulated steering flow changed from southwest in EXP_AD to southeast in EXP_AMD, which can be pivotal to forecasting the landfall of typhoon Lekima (2019) three days in advance. Assimilation of MWTS-2 upper-troposphere channels 5–7 thus has great potential to improve track forecasts for typhoon Lekima. Full article
(This article belongs to the Section Environmental Remote Sensing)
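A minimal sketch of a pressure-weighted steering-flow estimate of the kind used to interpret the track forecasts above. The 850–300 hPa layer choice and the area-mean winds are assumptions for illustration, not values from the paper.

```python
import numpy as np

levels = np.array([850, 700, 500, 300])   # hPa, an assumed deep steering layer
# hypothetical area-mean u, v (m/s) around the storm center at each level
u = np.array([-4.0, -3.0, -1.0, 2.0])
v = np.array([3.0, 4.0, 6.0, 8.0])

dp = np.abs(np.gradient(levels.astype(float)))   # layer-thickness weights
u_steer = np.sum(u * dp) / np.sum(dp)            # pressure-weighted mean wind
v_steer = np.sum(v * dp) / np.sum(dp)
speed = np.hypot(u_steer, v_steer)
direction = (np.degrees(np.arctan2(u_steer, v_steer)) + 360) % 360  # toward
print(f"steering flow: {speed:.1f} m/s toward {direction:.0f} deg")
```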
Show Figures
Figure 1: Model domain configuration and best track of typhoon Lekima (0000 UTC 2 August–1800 UTC 14 August 2019; first landfall at 1800 UTC 9 August in Zhejiang Province), with FY-3D MWTS-2 data points after thinning in the GSI for 1.5 h and 3 h assimilation windows.
Figure 2: O–B for FY-3D MWTS-2 channels 5, 6, and 7 before and after bias correction at 0600 UTC on 7 August 2019.
Figure 3: Domain-averaged RMSEs of the temperature, water vapor mixing ratio, and U/V increments of EXP_AD and EXP_AMD at the initial time (0600 UTC, 7 August 2019).
Figure 4: Best track and track forecasts from CTRL, EXP_AD, and EXP_AMD (0600 UTC 7 August–0600 UTC 10 August 2019) over FY-4A AGRI true-color imagery, with mean wind flow and calculated steering flow from the 48 h forecasts of EXP_AD and EXP_AMD.
Figure 5: 72 h track errors (km) of CTRL, EXP_AD, and EXP_AMD for initialization times at 0600 and 1800 UTC from 5 to 8 August 2019.
Figure 6: Mean track errors (km) averaged over the eight initialization times, and mean track forecast improvement (%) relative to CTRL for EXP_AD and EXP_AMD.
20 pages, 11430 KiB  
Article
The Intra-Tidal Characteristics of Tidal Front and Their Spring–Neap Tidal and Seasonal Variations in Bungo Channel, Japan
by Menghong Dong and Xinyu Guo
Remote Sens. 2021, 13(9), 1840; https://doi.org/10.3390/rs13091840 - 9 May 2021
Viewed by 2914
Abstract
The intra-tidal variations of a tidal front in the Bungo Channel, Japan, and their dependence on the spring–neap tidal cycle and on the month were analyzed using high-resolution (~2 km) hourly sea surface temperature (SST) data from the Himawari-8 geostationary satellite from April 2016 to August 2020. A gradient-based front detection method was used to define the position and intensity of the front. As in previous ship-based studies, the SST data identified tidal fronts between a well-mixed strait and its surrounding stratified area. The hourly SST data confirmed the theoretical intra-tidal movement of the tidal front, which is mainly controlled by tidal current advection. Notably, the intensity of the front increases during the ebb current phase, which carries the front toward the stratified area, and decreases during the flood current phase, which drives the front in the opposite direction. Because of this strong dependence on tidal currents, the intra-tidal variations also appear on a fortnightly cycle, and the fortnightly variations of the front depend on the month through changes in background stratification and residual currents. Additionally, tidal current convergence and divergence are posited to cause tidal front intensification and weakening. Full article
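A minimal sketch of gradient-based front detection on an SST grid: the gradient magnitude is computed, then the maximum-gradient pixel is taken along rays fanning out from the strait (45 sectors of 2°, as in the paper's Figure 1). The grid spacing, ray geometry, and synthetic field are illustrative assumptions.

```python
import numpy as np

def front_positions(sst, center, angles_deg, max_r=40, dx_km=2.0):
    gy, gx = np.gradient(sst, dx_km)     # SST gradient components (deg C / km)
    gmag = np.hypot(gx, gy)
    ci, cj = center
    fronts = []
    for a in np.radians(angles_deg):
        r = np.arange(1, max_r)
        ii = np.clip((ci + r * np.cos(a)).astype(int), 0, sst.shape[0] - 1)
        jj = np.clip((cj + r * np.sin(a)).astype(int), 0, sst.shape[1] - 1)
        k = np.argmax(gmag[ii, jj])      # strongest gradient along this ray
        fronts.append((ii[k], jj[k], r[k] * dx_km))   # pixel + distance Df (km)
    return fronts

sst = np.tile(np.linspace(18, 24, 100), (100, 1))     # synthetic SST field
print(front_positions(sst, (50, 0), angles_deg=np.arange(-44, 46, 2))[:3])
```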
Show Figures
Figure 1: Study area location and ETOPO1 bathymetry of the Bungo Channel and Iyo Nada, with synthesized tidal currents near Hayasui Strait; tidal front location near the strait (monthly average SST in June with gradient magnitudes); and schematic of the sector-based front-detection method (45 sectors of 2°, with the maximum SST gradient marked along each ray).
Figure 2: Monthly mean SST and SST gradient magnitudes from April to September in the Bungo Channel.
Figures 3 and 4: Hourly SST gradient magnitudes during spring tide (29–30 April 2017) and neap tide (29–30 April 2016), with detected front positions and three representative points.
Figure 5: Lag correlations between the particle coordinate calculated from the predicted tidal current and the front position (Df) and front intensity (TGf) at spring and neap tide.
Figure 6: Time series of front position, particle coordinate, and SST gradient magnitude in April of 2016–2020.
Figure 7: Predicted tidal current at Hayasui Strait, particle coordinate, and front positions during the spring-tide and neap-tide periods, with times of sunrise and sunset.
Figures 8 and 9: Monthly averaged SST and gradient magnitudes (April, 2016–2020) at five tidal phases (Indexes 1–5, from the beginning of the ebb to the end of the flood current) during spring and neap tide.
Figure 10: Spatially averaged front position and intensity (with standard deviations) at the five tidal phases for spring and neap tide, April–August.
Figure 11: Schematics of the intra-tidal variation of the front and of its fortnightly variations in April and July.
Figures A1 and A2: Spatial-temporal variations of front position and intensity along the 46 rays during the spring-tide and neap-tide periods.
11 pages, 2051 KiB  
Communication
Seasonal Trends in Movement Patterns of Birds and Insects Aloft Simultaneously Recorded by Radar
by Xu Shi, Baptiste Schmid, Philippe Tschanz, Gernot Segelbacher and Felix Liechti
Remote Sens. 2021, 13(9), 1839; https://doi.org/10.3390/rs13091839 - 9 May 2021
Cited by 12 | Viewed by 4514
Abstract
Airspace is a key but poorly understood habitat for many animal species. Enormous numbers of insects and birds use the airspace to forage, disperse, and migrate. Despite numerous studies on migration, the year-round flight activities of both birds and insects remain poorly documented. We used a two-year dataset from a vertical-looking radar in Central Europe and developed an iterative hypothesis-testing algorithm to investigate the general temporal pattern of migratory and local movements. We estimated at least 3 million bird and 20 million insect passages over a 1 km transect annually. Most surprisingly, peak non-directional bird movement intensities during summer were of the same magnitude as seasonal directional movement peaks. Birds showed clear peaks in seasonally directional movements during day and night, coinciding well with the main migration period documented in this region. Directional insect movements occurred throughout the year, paralleling non-directional movements. In spring and summer, insect movements were non-directional; in autumn, they concentrated toward the southwest, similar to birds. Notably, nocturnal insect movements did not appear until April, and their directional movements mainly occurred in autumn. This simple monitoring reveals how little we still know about the movement of biomass through airspace. Full article
(This article belongs to the Section Ecological Remote Sensing)
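A minimal sketch of the iterative Rayleigh approach described in the paper's Figure 1: echoes are binned by flight direction, the histogram is peeled into layers from the bottom up, and the directional proportion is read off at the first layer that passes a Rayleigh test. The bin width, the first-order p-value approximation, and the layer bookkeeping are assumptions for illustration.

```python
import numpy as np

def rayleigh_p(angles_rad):
    """First-order approximation of the Rayleigh test p-value."""
    n = len(angles_rad)
    r_bar = np.hypot(np.mean(np.cos(angles_rad)), np.mean(np.sin(angles_rad)))
    return np.exp(-n * r_bar ** 2)

def directional_proportion(directions_deg, bin_width=5, alpha=0.05):
    """Share of echoes in the first directional layer and above (0..1)."""
    edges = np.arange(0, 360 + bin_width, bin_width)
    counts, _ = np.histogram(directions_deg % 360, edges)
    centers = np.radians(edges[:-1] + bin_width / 2)
    total = counts.sum()
    for layer in range(1, counts.max() + 1):
        layer_dirs = centers[counts >= layer]    # one echo per occupied bin
        if len(layer_dirs) > 1 and rayleigh_p(layer_dirs) < alpha:
            above = np.clip(counts - (layer - 1), 0, None).sum()
            return above / total                 # layers m..n are directional
    return 0.0                                   # no layer is directional

rng = np.random.default_rng(2)
mix = np.concatenate([rng.uniform(0, 360, 300),          # non-directional echoes
                      rng.normal(225, 30, 300) % 360])   # SW-directed echoes
print(f"directional proportion ~ {directional_proportion(mix):.2f}")
```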
Show Figures
Graphical abstract.
Figure 1: Graphical description of the iterative Rayleigh approach for computing the proportions of directional and non-directional movement for each day and night: echoes are binned per flight direction and incrementally stacked into layers; the mth layer is the first that is directional (Rayleigh test, p < 0.05).
Figure 2: Simulation test results: input versus estimated proportions of directional movement for different sample sizes (5° bins, SD = 40°).
Figure 3: Year-round daily mean flight directions of birds and insects in 2016 and 2017, for diurnal and nocturnal movements.
Figure 4: Year-round predicted trends in directional and non-directional flight activity of birds and insects, during day and night.
23 pages, 11653 KiB  
Article
Assessing the Accuracy of ALOS/PALSAR-2 and Sentinel-1 Radar Images in Estimating the Land Subsidence of Coastal Areas: A Case Study in Alexandria City, Egypt
by Noura Darwish, Mona Kaiser, Magaly Koch and Ahmed Gaber
Remote Sens. 2021, 13(9), 1838; https://doi.org/10.3390/rs13091838 - 9 May 2021
Cited by 19 | Viewed by 5856
Abstract
Recently, the Differential Interferometric Synthetic Aperture Radar (DInSAR) technique has been widely used for quantifying land surface deformation, which is very important for assessing potential impacts on social and economic activities. Radar satellites operate at different wavelengths, and each provides a different level of vertical displacement accuracy. In this study, the accuracies of Sentinel-1 (C-band) and ALOS/PALSAR-2 (L-band) were investigated for estimating the land subsidence rate over the study area of Alexandria City, Egypt. A total of nine Sentinel-1 and 11 ALOS/PALSAR-2 scenes were used for this assessment. The small baseline subset (SBAS) processing scheme, which detects land deformation with high spatial and temporal coverage, was applied. The results show that the coherence values of the interferograms generated from ALOS-2 data are highly concentrated between 0.2 and 0.3, and at a threshold of 0.4 about 80% of Alexandria's urban area shows no coherent pixels. The coherence values of the Sentinel-1 interferograms, by contrast, range between 0.3 and 1, with most of Alexandria's urban area remaining coherent at 0.4. In addition, using the same digital elevation model (DEM) and wavelet number, the two data types produced different residual topography values: almost 0 m with a standard deviation of 13.5 m for Sentinel-1, and −20.5 m with a standard deviation of 33.24 m for ALOS-2. Consequently, the final deformation was estimated using highly coherent pixels with a threshold of 0.4 for Sentinel-1, which is comparable to a threshold of about 0.8 for ALOS-2 data. The cumulative vertical displacement over the study area from 2017 to 2020 reached −60 mm, with an average of −12.5 mm and a mean displacement rate of −1.73 mm/year. Accordingly, the Alexandrian coastal plain and city center were found to be relatively stable, with land subsidence rates ranging from 0 to −5 mm/year. The maximum subsidence rate reached −20 mm/year along the boundary of the Mariout Lakes and the former Abu Qir Lagoon. Finally, the affected buildings recorded during the field survey were plotted on the final land subsidence maps and show high consistency with the DInSAR results. For future urban development plans in Alexandria City, it is recommended to expand towards the western desert fringes rather than to the south, where the present-day ground lies on top of former wetland areas. Full article
(This article belongs to the Special Issue ALOS-2/PALSAR-2 Calibration, Validation, Science and Applications)
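A minimal sketch of the step from an unwrapped interferometric phase to a coherence-masked vertical displacement map. The Sentinel-1 wavelength and the 0.4 coherence threshold follow the abstract; the incidence angle, sign convention, and synthetic arrays are assumptions.

```python
import numpy as np

wavelength_m = 0.0556            # Sentinel-1 C-band (~5.6 cm)
incidence = np.radians(39.0)     # assumed mean incidence angle
gamma_min = 0.4                  # coherence threshold used for Sentinel-1

rng = np.random.default_rng(3)
phase = rng.normal(0.0, 1.0, (100, 100))       # unwrapped phase (rad), synthetic
coherence = rng.uniform(0.0, 1.0, (100, 100))  # synthetic coherence map

los_m = -wavelength_m / (4 * np.pi) * phase    # line-of-sight displacement (m)
vert_mm = 1000 * los_m / np.cos(incidence)     # project LOS onto the vertical
vert_mm[coherence < gamma_min] = np.nan        # keep only coherent pixels
print(f"kept {np.isfinite(vert_mm).mean():.0%} of pixels; "
      f"mean = {np.nanmean(vert_mm):.2f} mm")
```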
Show Figures
Figure 1: Urban study area of Alexandria City (red border) on a Sentinel-2 image of October 2019, with recently recorded subsidence events (yellow dots).
Figure 2: Sub-swaths and bursts of the Sentinel-1 products over the study area.
Figure 3: Flowchart of the SBAS-DInSAR processing steps.
Figure 4: Temporal and spatial baseline distributions of the SAR interferograms from the Sentinel-1A (super master image 27 June 2018) and ALOS/PALSAR-2 (super master image 27 November 2016) datasets, with time-position plots.
Figure 5: Coherence value distributions of the interferograms produced from the ALOS-2 and Sentinel-1 data.
Figure 6: Wrapped low-coherence ALOS-2 interferogram showing flattening errors, systematic residual fringes from orbital inaccuracy, and strong atmospheric artifacts, with its coherence histogram (mean 0.15).
Figure 7: Wrapped low-coherence Sentinel-1 interferogram caused by very large temporal and normal baselines (master 15 December 2018, slave 10 June 2019), with its coherence histogram.
Figure 8: Statistics of estimated residual topography: ALOS-2 interferograms with SRTM data (average −100 m, standard deviation 70 m, reduced with increasing wavelet number) versus Sentinel-1 (average near 0 m, standard deviation 13.5 m).
Figure 9: Histograms of precision height and corresponding precision velocity for the ALOS/PALSAR-2 and Sentinel-1 data.
Figure 10: Final coherence map from the C-band Sentinel-1 data, with values of 0.5–0.8, and its histogram.
Figure 11: Final coherence coverage of the ALOS-2 data (average coherence about 0.6, standard deviation 0.04; about 80% of Alexandria City shows no coherent data) and its histogram.
Figure 12: Vertical displacement of Alexandria City from ALOS/PALSAR-2 images (2015–2019) with a coherence threshold of 0.2.
Figure 13: Vertical displacement of Alexandria City for 2017/2018, 2017/2019, and 2017/2020 from Sentinel-1, with corresponding histograms.
Figure 14: Estimated mean displacement velocity of Alexandria and Alagami Cities from Sentinel-1 data (August 2017–September 2020, coherence threshold 0.4) and its histogram.
Figure 15: Recent urban expansion along the border of former Mariout Lake causing land subsidence.
Figure 16: Former lakes and lagoons in Alexandria overlaid on the Sentinel-1 displacement velocity map.
Figure 17: Land subsidence map with locations of the affected buildings, and field photos of cracks on roads and ground deformation in highly subsided areas.
21 pages, 5666 KiB  
Article
Towards Vine Water Status Monitoring on a Large Scale Using Sentinel-2 Images
by Eve Laroche-Pinel, Sylvie Duthoit, Mohanad Albughdadi, Anne D. Costard, Jacques Rousseau, Véronique Chéret and Harold Clenet
Remote Sens. 2021, 13(9), 1837; https://doi.org/10.3390/rs13091837 - 9 May 2021
Cited by 18 | Viewed by 4188
Abstract
Wine growing needs to adapt to confront climate change, as water scarcity is becoming increasingly severe in many regions. Vineyards have been located in dry areas for decades, so they need resilient varieties and/or a sufficient water supply at key development stages in case of severe drought. With climate change and decreasing water availability, some vineyard regions face difficulties due to unsuitable varieties, inappropriate vine management, or limited access to water. Decision support tools are therefore required to optimize water use or to adapt agronomic practices. This study aimed at monitoring vine water status at a large scale with Sentinel-2 images, with the goal of providing spatialized, season-long information on the water status of the vines. For this purpose, thirty-six plots were monitored over three years (2018, 2019, and 2020). Vine water status was measured in the field via stem water potential, from the pea-size to the ripening stage. Simultaneously, Sentinel-2 images were downloaded and processed to extract band reflectance values and compute vegetation indices. We tested five supervised regression machine learning algorithms to find possible relationships between stem water potential and the data acquired from the Sentinel-2 images (band reflectance values and vegetation indices). A regression model using the Red, NIR, Red-Edge, and SWIR bands gave promising results for predicting stem water potential (R² = 0.40, RMSE = 0.26). Full article
(This article belongs to the Special Issue Remote and Proximal Sensing for Precision Agriculture and Viticulture)
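As a rough illustration of the band-based workflow described in the abstract, the sketch below fits an ordinary linear regression (one plausible choice among the five supervised algorithms tested) to synthetic Red/NIR/Red-Edge/SWIR reflectances. All data, coefficients, and names here are hypothetical and only mimic the shape of the study's inputs, not its actual model.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_squared_error, r2_score
from sklearn.model_selection import train_test_split

# Hypothetical feature table: one row per subplot and date, with Sentinel-2
# band reflectances (Red, NIR, Red-Edge, SWIR) as predictors and the
# field-measured stem water potential (SWP, MPa) as the target.
rng = np.random.default_rng(0)
X = rng.uniform(0.05, 0.45, size=(349, 4))          # synthetic reflectances
y = -1.2 + 1.5 * X[:, 1] - 2.0 * X[:, 3] + rng.normal(0.0, 0.2, 349)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0)

model = LinearRegression().fit(X_train, y_train)
pred = model.predict(X_test)
rmse = float(np.sqrt(mean_squared_error(y_test, pred)))
print(f"R2 = {r2_score(y_test, pred):.2f}, RMSE = {rmse:.2f} MPa")
```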
Show Figures
Graphical abstract
Full article ">Figure 1
<p>Flowchart of the method, from feature extraction to the analysis step.</p>
Full article ">Figure 2
<p>Location of the study vine plots for 2018, 2019 and 2020 and position of the corresponding S2 tiles.</p>
Full article ">Figure 3
<p>Inter-row management examples. (<b>a</b>) Ungrassed plot. (<b>b</b>) Plot with grass in every other row. (<b>c</b>) Fully grassed plot.</p>
Full article ">Figure 4
<p>Example of 2 subplots in a plot and the location of 10 SWP measurements in a subplot. The number and location of the subplots were chosen according to (1) the S2 20 m pixel grid, in order to represent the variability within the entire pixel, (2) plot soil and/or vegetation variability as reported by agronomic experts or vine-growers, and (3) the time needed to perform the SWP measurements in the field. Vines for SWP measurements were selected to represent the entire pixel well and according to the row orientation inside the pixel.</p>
Full article ">Figure 5
<p>Flowchart of the resampling and merging of the S2 bands used in this study, exemplified for an image with only one vine plot containing three subplots. This method has been applied to the 38 images and the 103 subplots.</p>
Full article ">Figure 6
<p>Illustration of the database sub-sample decomposition.</p>
Full article ">Figure 7
<p>Flowchart details of the analysis (n = 349 for the entire dataset; development stages: 1: pea-size, 2: pre-veraison, 3: veraison, 4: ripening).</p>
Full article ">Figure 8
<p>Box plot of SWP values (in MPa) measured in the field for each development stage covered by our study. The size of the colored rectangles indicates the percentage of data per year.</p>
Full article ">Figure 9
<p>Boxplot of R<sup>2</sup> and RMSE over the 10 splits for the linear model with the whole dataset (all years and all inter-row managements).</p>
Full article ">Figure 10
<p>Best band to use for the regression model.</p>
Full article ">Figure 11
<p>Distribution of predicted and observed data according to inter-row management.</p>
Full article ">Figure 12
<p>Data distribution according to grape variety.</p>
Full article ">Figure 13
<p>Data distribution according to development stage.</p>
Full article ">Figure 14
<p>Data distribution according to study year.</p>
Full article ">Figure 15
<p>Distribution of the observed SWP according to (<b>a</b>) NDVI values, (<b>b</b>) REP values, (<b>c</b>) values of the model developed in this study.</p>
Figure 15 Cont.">
Full article ">
18 pages, 6987 KiB  
Article
Determination of Key Phenological Phases of Winter Wheat Based on the Time-Weighted Dynamic Time Warping Algorithm and MODIS Time-Series Data
by Fa Zhao, Guijun Yang, Xiaodong Yang, Haiyan Cen, Yaohui Zhu, Shaoyu Han, Hao Yang, Yong He and Chunjiang Zhao
Remote Sens. 2021, 13(9), 1836; https://doi.org/10.3390/rs13091836 - 8 May 2021
Cited by 16 | Viewed by 3502
Abstract
Accurate determination of phenological information of crops is essential for field management and decision-making. Remote sensing time-series data are widely used for extracting phenological phases. Existing methods mainly extract phenological phases directly from individual remote sensing time-series, which are easily affected by clouds, noise, and mixed pixels. This paper proposes a novel method of phenological phase extraction based on the time-weighted dynamic time warping (TWDTW) algorithm using MODIS Normalized Difference Vegetation Index (NDVI) 5-day time-series data with a spatial resolution of 500 m. Firstly, based on the phenological differences between winter wheat and other land cover types, winter wheat distribution is extracted using the TWDTW classification method, and the results show that the overall classification accuracy and Kappa coefficient reach 94.74% and 0.90, respectively. Then, we extract the pure winter-wheat pixels using a method based on the coefficient of variation, and use these pixels to generate the average phenological curve. Next, the difference between each winter-wheat phenological curve and the average winter-wheat phenological curve is quantitatively calculated using the TWDTW algorithm. Finally, the key phenological phases of winter wheat in the study area, namely, the green-up date (GUD), heading date (HD), and maturity date (MD), are determined. The results show that the phenological phase extraction using the TWDTW algorithm has high accuracy. By verification using phenological station data from the Meteorological Data Sharing Service System of China, the root mean square errors (RMSEs) of the GUD, HD, and MD are found to be 9.76, 5.72, and 6.98 days, respectively. Additionally, the method proposed in this article is shown to have a better extraction performance compared with several other methods. Furthermore, it is shown that, in Hebei Province, the GUD, HD, and MD are mainly affected by latitude and accumulated temperature. As the latitude increases from south to north, the GUD, HD, and MD are delayed, and for each 1° increment in latitude, the GUD, HD, and MD are delayed by 4.84, 5.79, and 6.61 days, respectively. The higher the accumulated temperature, the earlier the phenological phases occur. However, latitude and accumulated temperature have little effect on the length of the phenological phases. Additionally, the lengths of time between GUD and HD, HD and MD, and GUD and MD are stable at 46, 41, and 87 days, respectively. Overall, the proposed TWDTW method can accurately determine the key phenological phases of winter wheat at a regional scale using remote sensing time-series data. Full article
(This article belongs to the Special Issue Remote Sensing and Decision Support for Precision Orchard Production)
Show Figures
Figure 1
<p>(<b>a</b>) The location of the study area. (<b>b</b>) Phenological monitoring stations (red) and field data.</p>
Full article ">Figure 2
<p>The Normalized Difference Vegetation Index (NDVI) time-series curves of various land cover types during the growth period of winter wheat in Hebei Province.</p>
Full article ">Figure 3
<p>An illustration of the TWDTW algorithm. (<b>a</b>) The reference curve (green) and the target curve (blue). (<b>b</b>) The distance matrix D<sub>m×n</sub> between the reference curve and the target curve. Here, to simplify the calculation, w<sub>i,j</sub> was set to 1. (<b>c</b>) The cumulative distance matrix D<sub>m×n</sub> between the reference curve and the target curve and the warping path (red boxes) computed by the TWDTW algorithm. (<b>d</b>) The time alignment relationship (red line segments) between the reference curve and the target curve.</p>
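The caption above determines a minimal implementation: compute a local cost for each pair of points, add a time weight, and run the standard dynamic-programming recursion to obtain the cumulative matrix and the warping-path cost. The sketch below uses the common logistic time weight of TWDTW; the function name and the alpha/beta values are illustrative assumptions, not taken from the paper (in panel (b) the weight is simply set to 1).

```python
import numpy as np

def twdtw_distance(ref, target, ref_doy, target_doy, alpha=0.1, beta=50.0):
    """Time-weighted DTW cost between two NDVI curves (minimal sketch).

    Local cost d[i, j] = |ref[i] - target[j]| + w[i, j], with the logistic
    time weight w[i, j] = 1 / (1 + exp(-alpha * (|t_i - t_j| - beta)))
    penalizing alignments between distant days of year.
    """
    dt = np.abs(np.subtract.outer(np.asarray(ref_doy, float),
                                  np.asarray(target_doy, float)))
    w = 1.0 / (1.0 + np.exp(-alpha * (dt - beta)))
    d = np.abs(np.subtract.outer(np.asarray(ref, float),
                                 np.asarray(target, float))) + w

    m, n = d.shape
    c = np.full((m + 1, n + 1), np.inf)        # cumulative distance matrix
    c[0, 0] = 0.0
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            c[i, j] = d[i - 1, j - 1] + min(c[i - 1, j],
                                            c[i, j - 1],
                                            c[i - 1, j - 1])
    return c[m, n]                              # cost of the optimal warping path
```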
Full article ">Figure 4
<p>Workflow of the time-weighted dynamic time warping (TWDTW) method for determining the phenological phases of winter wheat. (<b>a</b>) Selection of pure winter-wheat pixels. (<b>b</b>) Definition of the average GUD, HD, and MD. (<b>c</b>) Waveform adjustment. (<b>d</b>) Calculation of the difference between each NDVI phenological curve and the average phenological curve. (<b>e</b>) Determination of key winter-wheat phenological phases and accuracy assessment.</p>
Full article ">Figure 5
<p>The definition of the average green-up date (GUD), heading date (HD), and maturity date (MD). DOY: day of year. The average winter-wheat phenological curve is divided into two parts according to the maximum value of the curve, namely, S<sub>A</sub> and S<sub>B</sub>. The GUD was defined as the day when the NDVI first reached 10% of Amp<sub>1</sub>, where Amp<sub>1</sub> was obtained by subtracting NDVI<sub>min1</sub> from NDVI<sub>max</sub>. The HD was defined as the day when the NDVI reached its maximum. The MD was defined as the day after the HD when the NDVI dropped to 10% of Amp<sub>2</sub>, where Amp<sub>2</sub> was obtained by subtracting NDVI<sub>min2</sub> from NDVI<sub>max</sub>. Here, NDVI<sub>max</sub> represents the maximum NDVI of the average winter-wheat phenological curve, and NDVI<sub>min1</sub> and NDVI<sub>min2</sub> represent the minimum NDVI of S<sub>A</sub> and S<sub>B</sub>, respectively.</p>
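Because GUD, HD, and MD are specified purely by the curve maximum and the 10% amplitude thresholds, the rule can be sketched in a few lines. The helper below assumes a smoothed, gap-free NDVI series sampled at known days of year; its name and interface are ours, not the authors'.

```python
import numpy as np

def key_phenophases(ndvi, doy):
    """Extract (GUD, HD, MD) from a smoothed winter-wheat NDVI curve."""
    ndvi, doy = np.asarray(ndvi, float), np.asarray(doy)
    i_max = int(np.argmax(ndvi))                 # HD: day of the curve maximum
    s_a, s_b = ndvi[: i_max + 1], ndvi[i_max:]   # split into S_A and S_B

    amp1 = ndvi[i_max] - s_a.min()               # Amp1 = NDVI_max - NDVI_min1
    amp2 = ndvi[i_max] - s_b.min()               # Amp2 = NDVI_max - NDVI_min2

    # GUD: first day the NDVI rises to NDVI_min1 + 10% of Amp1
    gud = doy[int(np.argmax(s_a >= s_a.min() + 0.1 * amp1))]
    # MD: first day after HD that the NDVI falls to NDVI_min2 + 10% of Amp2
    md = doy[i_max + int(np.argmax(s_b <= s_b.min() + 0.1 * amp2))]
    return gud, doy[i_max], md
```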
Full article ">Figure 6
<p>An illustration of the approach used by the TWDTW algorithm for calculating the difference between the reference curve (green) and target curve (blue).</p>
Full article ">Figure 7
<p>Winter wheat distribution and pure winter-wheat pixels.</p>
Full article ">Figure 8
<p>The spatial distribution of winter-wheat phenological phases for (<b>a</b>) GUD, (<b>b</b>) HD, and (<b>c</b>) MD, and frequency distributions for (<b>d</b>) GUD, (<b>e</b>) HD, and (<b>f</b>) MD.</p>
Full article ">Figure 9
<p>The validation of the satellite-derived winter-wheat phenological phases obtained using the TWDTW algorithm via comparison with ground-observed data. (<b>a</b>) GUD. (<b>b</b>) HD. (<b>c</b>) MD. The black line is the 1:1 line.</p>
Full article ">Figure 10
<p>The relationships between (<b>a</b>) GUD and latitude, (<b>b</b>) HD and latitude, (<b>c</b>) MD and latitude, (<b>d</b>) GUD and longitude, (<b>e</b>) HD and longitude, and (<b>f</b>) MD and longitude.</p>
Full article ">Figure 11
<p>The relationships between (<b>a</b>) the length of time between GUD and HD, and latitude, (<b>b</b>) the length of time between HD and MD, and latitude, (<b>c</b>) the length of time between GUD and MD, and latitude, (<b>d</b>) the length of time between GUD and HD, and longitude, (<b>e</b>) the length of time between HD and MD, and longitude, and (<b>f</b>) the length of time between GUD and MD, and longitude.</p>
Full article ">Figure 12
<p>The relationships between accumulated temperature and winter-wheat phenology. The horizontal axes represent the accumulated temperature of the corresponding days, and the vertical axes represent the main phenological phases, namely (<b>a</b>) GUD, (<b>b</b>) HD, and (<b>c</b>) MD, and the lengths of time between the phenological phases, namely, (<b>d</b>) between GUD and HD, (<b>e</b>) between HD and MD, and (<b>f</b>) between GUD and MD.</p>
Full article ">Figure 13
<p>The values of (<b>a</b>) the coefficient of determination (R<sup>2</sup>), (<b>b</b>) root mean square error (RMSE), and (<b>c</b>) bias obtained for average phenological curves produced using different sample sizes.</p>
Full article ">Figure 14
<p>Comparisons between the TWDTW method and other methods on (<b>a</b>) R<sup>2</sup>, (<b>b</b>) RMSE, and (<b>c</b>) bias.</p>
Full article ">Figure A1
<p>A comparison of the NDVI time-series curve and the average phenological curve before (<b>a</b>) and after (<b>b</b>) waveform adjustment.</p>
Full article ">
20 pages, 3601 KiB  
Article
Multi-Dimensional Drought Assessment in Abbay/Upper Blue Nile Basin: The Importance of Shared Management and Regional Coordination Efforts for Mitigation
by Yared Bayissa, Semu Moges, Assefa Melesse, Tsegaye Tadesse, Anteneh Z. Abiy and Abeyou Worqlul
Remote Sens. 2021, 13(9), 1835; https://doi.org/10.3390/rs13091835 - 8 May 2021
Cited by 8 | Viewed by 3026
Abstract
Drought is one of the least understood and most complex natural hazards, often characterized by a significant decrease in water availability for a prolonged period. It can be manifested in one or more forms, as meteorological, agricultural, hydrological, and/or socio-economic drought. The overarching objective of this study is to demonstrate and characterize the different forms of drought and to assess the multidimensional nature of drought in the Abbay/Upper Blue Nile River (UBN) basin and its national- and regional-scale implications. In this study, multiple drought indices derived from in situ and earth observation-based hydro-climatic variables were used. Meteorological drought was characterized using the Standardized Precipitation Index (SPI) computed from the earth observation-based gridded CHIRPS (Climate Hazards Group InfraRed Precipitation with Station) rainfall data. Agricultural and hydrological droughts were characterized using the Soil Moisture Deficit Index (SMDI) and the Standardized Runoff-discharge Index (SRI), respectively. The monthly time series of SMDI was derived from model-based gridded soil moisture, and SRI from observed streamflow data, from 1982 to 2019. The preliminary results illustrate the good performance of the drought indices in capturing the historic severe drought events (e.g., 1984 and 2002) and their spatial extents across the basin. The results further indicated that all forms of drought (i.e., meteorological, agricultural, and hydrological) occurred concurrently in the Abbay/Upper Blue Nile basin, with Pearson correlation coefficients ranging from 0.5 to 0.85 for both Kiremt and annual aggregate periods. The concurrent nature of drought is leading to a multi-dimensional socio-economic crisis, as indicated by rainfall and soil moisture deficits and the drying of small streams. Multi-dimensional drought mitigation necessitates regional cooperation and watershed management to protect both the common water sources of the Abbay/Upper Blue Nile basin and the socio-economic activities of the society in the basin. This study also underlines the need for multi-scale drought monitoring and management practices in the basin. Full article
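To indicate how a meteorological index of this kind is computed, the sketch below implements a generic SPI for a single pixel: aggregate precipitation over the chosen time scale, fit a gamma distribution to the non-zero totals, and map the cumulative probabilities to standard normal deviates. It is a textbook-style simplification (one fitted distribution for all calendar months), not the authors' CHIRPS processing chain.

```python
import numpy as np
from scipy import stats

def spi(monthly_precip, scale=3):
    """Standardized Precipitation Index for one pixel (minimal sketch)."""
    p = np.asarray(monthly_precip, dtype=float)
    agg = np.convolve(p, np.ones(scale), mode="valid")   # rolling sums

    q = float(np.mean(agg == 0))                 # probability of zero totals
    a, loc, b = stats.gamma.fit(agg[agg > 0], floc=0)
    cdf = q + (1.0 - q) * stats.gamma.cdf(agg, a, loc=loc, scale=b)
    cdf = np.clip(cdf, 1e-6, 1.0 - 1e-6)         # keep the normal quantile finite
    return stats.norm.ppf(cdf)                   # SPI: standard normal deviates
```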
Show Figures
Graphical abstract
Full article ">Figure 1
<p>The location, elevation, and boundary of the Upper Blue Nile Basin (bold dark violet color).</p>
Full article ">Figure 2
<p>Flow chart showing the steps followed in this study.</p>
Full article ">Figure 3
<p>Spatial patterns of meteorological and agricultural droughts for the 3-month <span class="html-italic">Kiremt</span> (first and second panels from left) and 12-month annual (third and fourth panels from left) aggregate periods for selected historic drought years (1984, 1995, 2002, 2009, and 2015).</p>
Full article ">Figure 4
<p>The spatial Pearson correlation coefficient values generated using the time series meteorological (SPI) and agricultural (SMDI) drought indices for <span class="html-italic">Kiremt</span> season (<b>a</b>) and annual (<b>b</b>) time scales.</p>
Full article ">Figure 5
<p>Spatial pattern of the frequency of occurrence of mild (top panel), moderate (second panel), severe (third panel), and extreme (bottom panel) meteorological (first and third columns) and agricultural (second and fourth columns) droughts during <span class="html-italic">Kiremt</span> and annual aggregate periods.</p>
Full article ">Figure 6
<p>The temporal patterns of meteorological (SPI), agricultural (stdSMDI), and hydrological (SRI) droughts for <span class="html-italic">Kiremt</span> (<b>a</b>) and annual (<b>b</b>) aggregate periods from 1982 to 2019. The time span for hydrological drought is from 1982 to 2017 due to the unavailability of the most recent flow data. The broken lines show the different drought severity categories as explained in <a href="#remotesensing-13-01835-t001" class="html-table">Table 1</a>.</p>
Full article ">Figure A1
<p>Rainfall.</p>
Full article ">Figure A2
<p>Soil moisture.</p>
Full article ">Figure A3
<p>Streamflow.</p>
Full article ">
21 pages, 4012 KiB  
Article
Predicting Height to Crown Base of Larix olgensis in Northeast China Using UAV-LiDAR Data and Nonlinear Mixed Effects Models
by Xin Liu, Yuanshuo Hao, Faris Rafi Almay Widagdo, Longfei Xie, Lihu Dong and Fengri Li
Remote Sens. 2021, 13(9), 1834; https://doi.org/10.3390/rs13091834 - 8 May 2021
Cited by 17 | Viewed by 2570
Abstract
As a core component of forest management, the height to crown base (HCB) model can provide a theoretical basis for the study of forest growth and yield. In this study, 8364 trees of Larix olgensis within 118 sample plots from 11 sites were measured to establish a two-level nonlinear mixed effects (NLME) HCB model. All predictors were derived from an unmanned aerial vehicle light detection and ranging (UAV-LiDAR) laser scanning system, which is reliable for extensive forest measurement. The effects of different individual-tree factors, stand factors, and their combinations on the HCB were analyzed, and leave-one-site-out cross-validation was utilized for model validation. The results showed that the NLME model significantly improved the prediction accuracy compared to the base model, with a mean absolute error and relative mean absolute error of 0.89% and 9.71%, respectively. In addition, both site-level and plot-level sampling strategies were simulated for NLME model calibration. According to different prediction-scale and accuracy requirements, randomly selecting 15 trees per site, or selecting the three largest trees and three medium-sized trees per plot, was considered the most favorable option, especially when both the investigation cost and the model's accuracy are primarily considered. The newly established HCB model will provide a valuable tool for effectively utilizing UAV-LiDAR data to facilitate decision making in larch plantation management. Full article
(This article belongs to the Section Forest Remote Sensing)
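To make the modeling idea concrete, the sketch below fits only the fixed-effects (base) part of an HCB model, using a logistic functional form that is common in this literature; the paper's two-level NLME additionally attaches site- and plot-level random effects to such parameters. The functional form, the predictors, and the synthetic tree list are all assumptions for illustration.

```python
import numpy as np
from scipy.optimize import curve_fit

def hcb_base(X, b0, b1, b2):
    """Logistic base model: HCB = H / (1 + exp(b0 + b1*D + b2*H))."""
    D, H = X
    return H / (1.0 + np.exp(b0 + b1 * D + b2 * H))

# Hypothetical tree list: a diameter proxy D (cm) and total height H (m),
# both assumed to be derived from the UAV-LiDAR point cloud.
rng = np.random.default_rng(1)
D = rng.uniform(8.0, 35.0, 500)
H = 1.3 + 20.0 * (1.0 - np.exp(-0.07 * D)) + rng.normal(0.0, 0.8, 500)
HCB = hcb_base((D, H), -0.5, -0.02, 0.03) + rng.normal(0.0, 0.6, 500)

params, _ = curve_fit(hcb_base, (D, H), HCB, p0=[0.0, 0.0, 0.0])
resid = HCB - hcb_base((D, H), *params)
mae = float(np.abs(resid).mean())
print(f"MAE = {mae:.2f} m, MAE% = {100.0 * mae / HCB.mean():.1f}%")
```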
Show Figures
Graphical abstract
Full article ">Figure 1
<p>Location of the study site: Mengjiagang Forest farm in northeast China.</p>
Full article ">Figure 2
<p>Flow chart showing the acquisition and processing of UAV-LiDAR data.</p>
Full article ">Figure 3
<p>The standardized residuals distribution of the base model and calibrated NLME model.</p>
Full article ">Figure 4
<p>The predicted vs. field-measured height to crown base (HCB) calculated by different models.</p>
Full article ">Figure 5
<p>The boxplots of Bias% (<b>A</b>) and MAE% (<b>B</b>) of the three models across 15 diameter classes.</p>
Full article ">Figure 6
<p>The Bias% (<b>A</b>) and MAE% (<b>B</b>) of the site-level calibration.</p>
Full article ">Figure 7
<p>The Bias% (<b>A</b>) and MAE% (<b>B</b>) of the eight sampling strategies using the plot-level calibration.</p>
Full article ">
27 pages, 9633 KiB  
Article
Coupling of Dual Channel Waveform ALS and Sonar for Investigation of Lake Bottoms and Shore Zones
by Jarosław Chormański, Barbara Nowicka, Aleksander Wieckowski, Maurycy Ciupak, Jacek Jóźwiak and Tadeusz Figura
Remote Sens. 2021, 13(9), 1833; https://doi.org/10.3390/rs13091833 - 8 May 2021
Cited by 12 | Viewed by 3025
Abstract
In this work, we proposed to include remote sensing techniques as part of the methodology for natural lake bottom mapping, with a focus on the littoral zone. Because dense vegetation makes this zone inaccessible, measurements of the lake bottom and the coastline are also difficult to perform using traditional methods. The authors of this paper present, discuss, and verify the applicability of active remote sensing sensors as a tool for measurements in the shore zone of a lake. A single-beam Lowrance HDS-7 ComboGPS echosounder with an 83/200 kHz transducer and a two-beam RIEGL VQ-1560i-DW LiDAR scanner were used for bottom measurements of two neighboring lakes, which differ in terms of water transparency. The research found a strong correlation between sonar and LiDAR for mapping the bottom depth in a range up to 1.6 m; LiDAR allowed mapping of approximately 20% of the highly transparent lake, but it was not found to be useful in water with low transparency. In light of the conducted research, both devices, sonar and LiDAR, have potential for complementary use by fusing the two methods: the sonar for mapping the sublittoral and pelagic zones, and the LiDAR for mapping the littoral zone, overcoming limitations related to vegetation in the lake shore zone. Full article
(This article belongs to the Special Issue Geoinformation Technologies in Civil Engineering and the Environment)
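The reported sonar-LiDAR agreement is, at heart, a paired-depth comparison, which the short sketch below reproduces on synthetic data: keep the pairs within the depth range where green-LiDAR returns were usable (about 1.6 m) and compute the correlation and regression line. All values and variable names are hypothetical.

```python
import numpy as np
from scipy import stats

# Hypothetical paired depth samples (m) at common transect points; the
# synthetic relationship loosely mimics strong sonar-LiDAR agreement.
rng = np.random.default_rng(2)
depth_sonar = rng.uniform(0.2, 2.5, 300)
depth_lidar = 0.05 + 0.95 * depth_sonar + rng.normal(0.0, 0.08, 300)

mask = depth_sonar <= 1.6      # agreement reported only down to ~1.6 m
res = stats.linregress(depth_sonar[mask], depth_lidar[mask])
print(f"r = {res.rvalue:.2f}, "
      f"lidar = {res.slope:.2f} * sonar + {res.intercept:.2f} m")
```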
Show Figures
Graphical abstract
Full article ">Figure 1
<p>Geographical location of studied lakes with their watersheds.</p>
Full article ">Figure 2
<p>Bathymetry of studied lakes (own elaboration based on [<a href="#B33-remotesensing-13-01833" class="html-bibr">33</a>]).</p>
Full article ">Figure 3
<p>Equipment used during the echosounder measurements, owned by IMGW-PIB: (<b>a</b>) Texas 360 boat; (<b>b</b>) mounting of the echosounder and side-scan sonar sensors during pilot measurements (20 cm draft); (<b>c</b>) images of the lake bottom analyzed during the measurements on the Full VGA SolarMAX™ PLUS TFT screen.</p>
Full article ">Figure 4
<p>Echosounder coverage of the studied lakes.</p>
Full article ">Figure 5
<p>Vulcanair P68C LiDAR plane and VQ-1560i-DW laser scanner, property of Opegieka Sp. z o.o.</p>
Full article ">Figure 6
<p>The process of determining the course of the shoreline based on LiDAR data.</p>
Full article ">Figure 7
<p>Distribution of geodetic measurement points in the coastal zone of the analyzed lakes against their borders (gray line) derived from MPHP10 [<a href="#B38-remotesensing-13-01833" class="html-bibr">38</a>]. The zoom-in shows an example of using an RGB photo from the UAV as background (at a water table level of 72.53 m above sea level).</p>
Full article ">Figure 8
<p>Equipment used during UAV flights: (<b>a</b>) DJI Phantom 3 PRO; (<b>b</b>) DJI Mavic 2 PRO; (<b>c</b>) DJI Phantom 4 PRO.</p>
Full article ">Figure 9
<p>Using the UAV orthophotomap to verify the extent of the water table. (<b>A</b>): proximity of reed beds; (<b>B</b>): dumping of sand on the beach.</p>
Full article ">Figure 10
<p>Adjusting the measured water table level from LiDAR data to the results of geodetic measurements. Examples of the weakest matching of the water table level and their causes: (<b>A</b>)—pier in close proximity to the shoreline, (<b>B</b>)—anthropogenic changes in shore height, e.g., related to the dumping of sand on the beach, (<b>C</b>)—occurrence of dense tree crowns covering the water–land boundary line (coastline).</p>
Full article ">Figure 11
<p>Deviations of the water table level readings measured by LiDAR and geodetic surveying.</p>
Full article ">Figure 12
<p>Bottom zone available for LiDAR and location of transects compared to points measured by Green LiDAR and sonar on Białe Lake.</p>
Full article ">Figure 13
<p>Comparison of the bathymetric measurement results using the sonar (grey) and Green LiDAR (green) methods in selected profiles on Białe Lake. Profiles 7, 8—transects through the sill in the eastern part of the lake. Profiles 10, 14, 29—transects through the central part of the bowl. Profiles 36, 38, 49—transects through the coastal zone in the western part of the lake (with underwater vegetation). Profiles 50, 51, 52—transects through the slope and shallow water zone in the western part of the lake.</p>
Full article ">Figure 14
<p>Verification of the measurement dependence for LiDAR and sonar data using the runs test, i.e., the runs above and below the median as well as the runs up and down, where Me represents the median of all observations.</p>
Figure 14 Cont.">
Full article ">Figure 15
<p>Autocorrelation function (ACF) of the measurements made by LiDAR and sonar for lags up to 100, with autocorrelation coefficients and the confidence level <span class="html-italic">p</span> (dotted line).</p>
Full article ">Figure 16
<p>Linear regression of the results of depth measurements made by sonar and LiDAR.</p>
Full article ">