Remote Sens., Volume 13, Issue 11 (June-1 2021) – 213 articles

Cover Story: Monitoring biodiversity on a global scale is a major challenge for biodiversity conservation. In tropical forests, exhaustive and detailed field surveys are costly and challenging. In the face of increasing anthropogenic pressures, it is imperative that we find ways to efficiently assess patterns of biodiversity change. Recent developments in optical remote sensing have proven effective at estimating the biophysical parameters of vegetation, but a gap remains before remote-sensing-based biodiversity assessments can be carried out operationally. Quantifying spectral variation in terms of diversity indices offers a novel way to link spectral information with field-based indices. This work presents the complementarity of these two sources of information for studying patterns of biodiversity in secondary tropical forests.
  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive the tables of contents of newly released issues.
  • PDF is the official format for published papers; articles are available in both HTML and PDF forms. To view a paper in PDF format, click the "PDF Full-text" link and open it with the free Adobe Reader.
21 pages, 7529 KiB  
Article
A Novel Query Strategy-Based Rank Batch-Mode Active Learning Method for High-Resolution Remote Sensing Image Classification
by Xin Luo, Huaqiang Du, Guomo Zhou, Xuejian Li, Fangjie Mao, Di’en Zhu, Yanxin Xu, Meng Zhang, Shaobai He and Zihao Huang
Remote Sens. 2021, 13(11), 2234; https://doi.org/10.3390/rs13112234 - 7 Jun 2021
Cited by 9 | Viewed by 3073
Abstract
An informative training set is necessary for ensuring the robust performance of the classification of very-high-resolution remote sensing (VHRRS) images, but labeling work is often difficult, expensive, and time-consuming. This makes active learning (AL) an important part of an image analysis framework. AL aims to efficiently build a representative library of the training samples that are most informative for the underlying classification task, thereby minimizing the cost of obtaining labeled data. Based on ranked batch-mode active learning (RBMAL), this paper proposes a novel combined query strategy of spectral information divergence lowest confidence uncertainty sampling (SIDLC), called RBSIDLC. A random forest (RF) base classifier is initialized on a small initial training set, and each unlabeled sample is analyzed to obtain a classification uncertainty score. A spectral information divergence (SID) function is then used to calculate a similarity score, and according to the final combined score the unlabeled samples are ranked in descending order. The most “valuable” samples are selected according to the ranked list and labeled by the analyst/expert (also called the oracle). Finally, these samples are added to the training set, and the RF is retrained for the next iteration. The whole procedure is iterated until a stopping criterion is met. The results indicate that RBSIDLC achieves high-precision extraction of urban land use information from VHRRS: the extraction accuracy for each land-use type is greater than 90%, and the overall accuracy (OA) is greater than 96%. After the SID replaces the Euclidean distance in the RBMAL algorithm, the RBSIDLC method greatly reduces the misclassification rate among different land types; the similarity function based on SID therefore performs better than that based on the Euclidean distance. In addition, the OA of RF classification is greater than 90%, suggesting that it is feasible to use RF to estimate the uncertainty score. Compared with the three single query strategies of other AL methods, sample labeling with the SIDLC combined query strategy yields a lower cost and higher quality, thus effectively reducing the misclassification rate of different land use types. For example, compared with the Batch_Based_Entropy (BBE) algorithm, RBSIDLC improves the precision of barren land extraction by 37% and that of vegetation by 14%. The 25 characteristics of different land use types screened by RF cross-validation (RFCV) combined with the permutation method exhibit an excellent separation degree, and the results provide the basis for VHRRS information extraction in urban land use settings based on RBSIDLC. Full article
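As a rough illustration of the combined query strategy described in the abstract, the sketch below scores unlabeled samples by combining RF least-confidence uncertainty with an SID-based dissimilarity to the labeled set, then ranks them in descending order. The least-confidence measure, the minimum-SID dissimilarity and the trade-off weight `alpha` are illustrative assumptions; the paper's exact ranking formula is not reproduced here.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def sid(p, q, eps=1e-12):
    """Spectral information divergence between two spectra,
    normalized to probability distributions first."""
    p = p / (p.sum() + eps)
    q = q / (q.sum() + eps)
    return (np.sum(p * np.log((p + eps) / (q + eps)))
            + np.sum(q * np.log((q + eps) / (p + eps))))

def rank_unlabeled(rf, X_unlabeled, X_labeled, alpha=0.5):
    """Rank unlabeled samples so that high uncertainty and high SID
    dissimilarity to the labeled set come first (alpha is an assumed
    trade-off weight)."""
    proba = rf.predict_proba(X_unlabeled)
    uncertainty = 1.0 - proba.max(axis=1)      # least-confidence score
    # Dissimilarity: minimum SID to any already-labeled sample
    dissim = np.array([min(sid(x, y) for y in X_labeled)
                       for x in X_unlabeled])
    dissim /= dissim.max() + 1e-12             # rescale to [0, 1]
    score = alpha * uncertainty + (1 - alpha) * dissim
    return np.argsort(-score)                  # descending rank
```

In a batch-mode loop, the top-ranked samples would be sent to the oracle, added to the training set, and the RF retrained until a stopping criterion is met.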
Figure 1: Location of the study area: (a) Zhejiang Province, where the blue polygon represents Hangzhou; (b) the subregion of the West Lake District of Hangzhou.
Figure 2: Multi-scale segmentation-based sample selection. The VHRRS image is segmented with multi-scale segmentation, and the center pixel of each object with a "pure" classification is selected to enlarge the training set. (a) Multi-scale image segmentation; (b) initial classification diagram of RF0; (c) selection of the center pixel of an object; (d) expanded training set.
Figure 3: Overall workflow of the proposed AL algorithm.
Figure 4: Effect of the number of variables on the OOB error obtained by RFCV for feature selection with all 210 features.
Figure 5: Variable importance scores of the top 15 features. Variable names are given as (feature name)-(base spectral band)-(window size); for example, MEA_NIR_13 indicates an MEA based on the NIR band with a window size of 13 × 13. Bar color indicates the spectral band: green band in green, red band in red, blue band in blue, NIR band in orange, and vegetation index variables in dark blue.
Figure 6: Classification diagrams based on preferred features: (a) RBSIDLC; (b) RBMAL; (c) RF; plus classification diagrams of two subsets. Subset 1: (d) RBSIDLC; (e) RBMAL; (f) RF. Subset 2: (g) RBSIDLC; (h) RBMAL; (i) RF.
Figure 7: Optical classification results of the three AL methods when the "preferred feature" combination was used: (a) BBLC; (b) BBM; (c) BBE; plus classification diagrams of two subsets. Subset 1: (d) BBLC; (e) BBM; (f) BBE. Subset 2: (g) BBLC; (h) BBM; (i) BBE.
Figure 8: OAs of the preferred-feature classification results with different algorithms.
Figure 9: Confusion matrices of the classification results of the six algorithms based on preferred features.
Figure 10: Iterative querying of preferred features: (a) training accuracy; (b) effect of the batch size in the proposed AL algorithm on the preferred-feature results.
Figure 11: Preferred feature index. The centerline in each box of the boxplot is the median, and the edges of the box represent the upper and lower quartiles. Spectral characteristic: NIR; vegetation indices: GLI, GNDVI, TVI, DVI, and VIgreen; texture features: MEA_G_5, MEA_R_13, and SEC_NIR_13.
30 pages, 14269 KiB  
Article
A Novel GIS-Based Approach for Automated Detection of Nearshore Sandbar Morphological Characteristics in Optical Satellite Imagery
by Rasa Janušaitė, Laurynas Jukna, Darius Jarmalavičius, Donatas Pupienis and Gintautas Žilinskas
Remote Sens. 2021, 13(11), 2233; https://doi.org/10.3390/rs13112233 - 7 Jun 2021
Cited by 12 | Viewed by 4204
Abstract
Satellite remote sensing is a valuable tool for coastal management, making it possible to repeatedly observe nearshore sandbars. However, a lack of methodological approaches for sandbar detection prevents the wider use of satellite data in sandbar studies. In this paper, a novel, fully automated approach to extract nearshore sandbars from high- to medium-resolution satellite imagery using a GIS-based algorithm is proposed. The method is composed of a multi-step workflow providing a wide range of data on nearshore morphology, including local nearshore relief, extracted sandbars, their crests, and the shoreline. The proposed processing chain involves a combination of spectral indices, ISODATA unsupervised classification, a multi-scale Relative Bathymetric Position Index (RBPI), criteria-based selection operations, spatial statistics and filtering. The algorithm has been tested on 145 dates of PlanetScope and RapidEye imagery using a case study of the complex multiple sandbar system on the Curonian Spit coast, Baltic Sea. The comparison of results against 4 years of in situ bathymetric surveys shows a strong agreement between measured and derived sandbar crest positions (R2 = 0.999 and 0.997), with an average RMSE of 5.8 and 7 m for the PlanetScope and RapidEye sensors, respectively. The accuracy of the proposed approach implies that it is feasible for studying inter-annual and seasonal sandbar behaviour and short-term changes related to high-impact events. The algorithm's outputs make it possible to evaluate a range of sandbar characteristics, such as distance from the shoreline, length, width, count or shape, at a relevant spatiotemporal scale. The design of the method makes it compatible with most sandbar morphologies and suitable for other sandy nearshores. Tests of the described technique with Sentinel-2 MSI and Landsat-8 OLI data show that it can be applied to publicly available medium-resolution satellite imagery from other sensors. Full article
(This article belongs to the Special Issue Remote Sensing and GIS for Geomorphological Mapping)
Figure 1: Study area: (a) configuration of the study area and locations of the nearshore cross-shore profiling used for algorithm validation; (b) situation of the study area; (c) examples of nearshore cross-shore profiles at three different locations on the Curonian Spit; (d,e) examples of PlanetScope and (f) RapidEye imagery at the same locations as the cross-shore profiles (profile 1 corresponds to (d), 2 to (e), 3 to (f)).
Figure 2: An adaptive median filter by Li and Fan [75]. $S_{x,y}^{\min}$, $S_{x,y}^{\mathrm{med}}$ and $S_{x,y}^{\max}$ denote the minimum, median and maximum pixel values in the filter window $S_{x,y}^{w}$ centered at $(x, y)$ with window size $w$; $w_{\max}$ is the maximum filter window; $I_{x,y}$ is the primary pixel value; $R_{x,y}$ is the filtered pixel value; $S_{x,y}^{\mathrm{med}_{\max}}$ is the median value within $w_{\max}$. The primary filter window $w$ is set to 3 × 3 pixels and $w_{\max}$ to 15 × 15. If the pixel is determined not to be contaminated with noise within the 3 × 3 window, the primary pixel value is preserved; if the pixel is contaminated with noise, the filtering window is increased by 2 pixels. The procedure is repeated until the maximum window is reached.
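A minimal sketch of the adaptive median filter logic described in this caption, assuming the usual noise test (a pixel is treated as impulse noise if it equals the window minimum or maximum); the window sizes follow the caption (3 × 3 primary, 15 × 15 maximum). This is an illustration of the filter class, not the authors' exact implementation.

```python
import numpy as np

def adaptive_median(img, w0=3, w_max=15):
    """Adaptive median filter: grow the window until the median is
    not an extreme value, then replace only noisy pixels."""
    out = img.astype(float).copy()
    pad = w_max // 2
    padded = np.pad(img.astype(float), pad, mode="reflect")
    H, W = img.shape
    for y in range(H):
        for x in range(W):
            w = w0
            while w <= w_max:
                half = w // 2
                win = padded[y + pad - half:y + pad + half + 1,
                             x + pad - half:x + pad + half + 1]
                s_min, s_med, s_max = win.min(), np.median(win), win.max()
                if s_min < s_med < s_max:            # median is not noise
                    if s_min < img[y, x] < s_max:    # pixel itself not noise
                        out[y, x] = img[y, x]        # preserve primary value
                    else:
                        out[y, x] = s_med            # replace noisy pixel
                    break
                w += 2                               # enlarge window by 2 px
            else:
                out[y, x] = s_med                    # fall back at w_max
    return out
```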
Figure 3: A flowchart of the land-sea masking and shoreline derivation procedure used in this study.
Figure 4: Dependence of the Relative Bathymetric Position Index (RBPI) on local neighbourhood size and the multiscale solution: (a) nearshore cross-shore-distance-based sectors used in this study; (b–e) the mean RBPI in local neighbourhoods of different sizes ((b) circle 3, 5, 7, 9; (c) circle 9, 11, 15; (d) circle 15, 19, 23, 31; (e) circle 23, 31, 39); (f) multiscale RBPI combining the (b–e) neighbourhoods in nearshore cross-shore-distance-based sectors.
Figure 5: A flowchart of the generation of input for sandbar extraction: first, the multiscale Relative Bathymetric Position Index (RBPI) is computed and the curvature output is created; then the multiscale RBPI and curvature outputs are combined and filtered with kernels of different sizes based on nearshore cross-shore sectors to create the final RBPI-Curvature raster. Numbers by the neighbourhoods/kernels are in pixels.
Figure 6: Examples of outputs at different stages of the proposed algorithm for PlanetScope imagery: (a) shoreline extracted after the land-sea masking procedure; (b) multiscale RBPI raster; (c) curvature raster; (d) final RBPI-Curvature raster; (e) rescaled RBPI-Curvature raster; (f) bar mask; (g) final bar raster; (h) bar-slope raster; (i) primary bar crest raster; (j) secondary bar crest raster; (k) final bar crest raster; (l) final bar crest polyline.
Figure 7: A flowchart of the sandbar extraction procedure.
Figure 8: A flowchart of the sandbar crest extraction procedure: first, the primary crest is identified as maximum-value pixels within three kernels; then the secondary crest is extracted with cross-shore/longshore transects; and the final bar crest is obtained after a cleaning procedure with a proximity-based filter.
Figure 9: Time series of sandbar boundaries and crestlines delineated using the proposed algorithm in RapidEye (a–c) and PlanetScope (d–j) imagery. Typical characteristics of the interannual and seasonal dynamics of multiple sandbar systems can be observed: seaward migration (d–g) and decay (h) of the outer sandbar; fast development and seaward migration of the middle sandbar after the decay of the outer one (i,j); development of complex morphologies during periods of low wave energy (e,f,h,j) and straightening during periods of high wave energy (d,g,i).
Figure 10: Correlation between measured and satellite-derived sandbar crest positions: (a–c) PlanetScope; (d–f) RapidEye. Column 1 (a,d) shows the correlation for the inner, middle and outer sandbars; column 2 (b,e) for the dates of image acquisition; column 3 (c,f) the total correlation and root-mean-square error for the analysed sensors.
Figure 11: Ability of the algorithm to detect sandbars in PlanetScope and RapidEye images compared to measured data: undetected: sandbars visible in the bathymetric cross-shore profile but not detected by the algorithm; detected: sandbars visible in the bathymetric cross-shore profile and detected by the algorithm; falsely detected: non-existing sandbars extracted by the algorithm; satellite detected: sandbars not visible in the cross-shore profiles but identified by the algorithm.
Figure 12: An example of sandbar crestlines delineated using the proposed method in Sentinel-2 MSI (a) and Landsat-8 OLI (b) images, with the correlation of measured and extracted crest distance from the shoreline (c,d), during low wave energy conditions.
Figure 13: An example of sandbar crestlines delineated using the proposed method in Sentinel-2 MSI (a) and Landsat-8 OLI (b) images, with the correlation of measured and extracted crest distance from the shoreline (c,d), during high wave energy conditions.
28 pages, 2485 KiB  
Article
Making Use of 3D Models for Plant Physiognomic Analysis: A Review
by Abhipray Paturkar, Gourab Sen Gupta and Donald Bailey
Remote Sens. 2021, 13(11), 2232; https://doi.org/10.3390/rs13112232 - 7 Jun 2021
Cited by 24 | Viewed by 5586
Abstract
Use of 3D sensors in plant phenotyping has increased in the last few years. Various image acquisition, 3D representation, and 3D model processing and analysis techniques exist to help researchers. However, a review of the approaches, algorithms, and techniques used for 3D plant physiognomic analysis is lacking. In this paper, we investigate the techniques and algorithms used at the various stages of processing and analysing 3D models of plants, and identify their current limiting factors. This review will serve potential users as well as new researchers in this field. The focus is on studies monitoring the growth of single plants or small-scale canopies, as opposed to large-scale monitoring in the field. Full article
(This article belongs to the Special Issue 3D Point Clouds for Agriculture Applications)
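Among the acquisition methods this review covers (see the figures below), stereo vision recovers depth from disparity via the standard relation Z = f·B/d for a rectified pair, with focal length f in pixels, baseline B, and disparity d. A minimal sketch follows; all numeric values are illustrative assumptions.

```python
import numpy as np

def depth_from_disparity(disparity_px, focal_px, baseline_m):
    """Depth map for a rectified stereo pair: Z = f * B / d.
    Pixels with zero or invalid disparity are mapped to NaN."""
    d = np.asarray(disparity_px, dtype=float)
    with np.errstate(divide="ignore", invalid="ignore"):
        depth = focal_px * baseline_m / d
    depth[~np.isfinite(depth)] = np.nan
    return depth

# Illustrative example: 700 px focal length, 10 cm baseline
disp = np.array([[35.0, 70.0], [0.0, 140.0]])
print(depth_from_disparity(disp, focal_px=700.0, baseline_m=0.10))
# -> 2.0 m, 1.0 m, NaN, 0.5 m
```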
Figure 1: Classification of 3D acquisition methods.
Figure 2: Outline of the processing and analysis stages for 3D plant growth monitoring.
Figure 3: General configuration of laser triangulation.
Figure 4: Structured-light technique diagram.
Figure 5: ToF measurement principle.
Figure 6: Stereo vision technique.
Figure 7: Illustration of the structure-from-motion technique.
36 pages, 93238 KiB  
Article
Evaluating Carbon Monoxide and Aerosol Optical Depth Simulations from CAM-Chem Using Satellite Observations
by Débora Souza Alvim, Júlio Barboza Chiquetto, Monica Tais Siqueira D’Amelio, Bushra Khalid, Dirceu Luis Herdies, Jayant Pendharkar, Sergio Machado Corrêa, Silvio Nilo Figueroa, Ariane Frassoni, Vinicius Buscioli Capistrano, Claudia Boian, Paulo Yoshio Kubota and Paulo Nobre
Remote Sens. 2021, 13(11), 2231; https://doi.org/10.3390/rs13112231 - 7 Jun 2021
Cited by 13 | Viewed by 4375
Abstract
The scope of this work was to evaluate simulated carbon monoxide (CO) and aerosol optical depth (AOD) from the CAM-chem model against observed satellite data, and additionally to explore the empirical relationships between CO, AOD and fire radiative power (FRP). The simulated seasonal global concentrations of CO and AOD were compared, respectively, with the Measurements of Pollution in the Troposphere (MOPITT) and the Moderate-Resolution Imaging Spectroradiometer (MODIS) satellite products for the period 2010–2014. The CAM-chem simulations were performed with two configurations: (A) tropospheric-only chemistry; and (B) tropospheric with stratospheric chemistry. Our results show that the spatial and seasonal distributions of CO and AOD were reasonably reproduced in both model configurations, except over central China, central Africa and the equatorial regions of the Atlantic and Western Pacific, where CO was overestimated by 10–50 ppb. In configuration B, the positive CO bias was significantly reduced due to the inclusion of dry deposition, which was not present in configuration A; there was greater CO loss due to the chemical reactions and a shorter lifetime of the species with stratospheric chemistry. In summary, the model has difficulty capturing the exact location of the maxima of the seasonal AOD distributions in both configurations. The AOD was overestimated by 0.1 to 0.25 over desert regions of Africa, the Middle East and Asia in both configurations, but the positive bias was even higher in the version with added stratospheric chemistry. By contrast, the AOD was underestimated over regions associated with anthropogenic activity, such as eastern China and northern India. Concerning the correlations between CO, AOD and FRP, high CO is found during March–April–May (MAM) in the Northern Hemisphere, mainly in China. In the Southern Hemisphere, high CO, AOD and FRP values were found during August–September–October (ASO) due to fires, mostly in South America and South Africa. In South America, high AOD levels were observed over subtropical Brazil, Paraguay and Bolivia. Sparsely urbanized regions showed higher correlations between CO and FRP (0.7–0.9), particularly in tropical areas such as the western Amazon region. There was a high correlation between CO and aerosols from biomass burning at the transition between forest and savanna environments over eastern and central Africa, and it was also possible to observe the transport of these pollutants from the African continent to the Brazilian coast. High correlations between CO and AOD were found over southeastern Asian countries, and correlations between FRP and AOD (0.5–0.8) were found over higher-latitude regions such as Canada and Siberia as well as in tropical areas. The correlations between CO and FRP are higher in savanna and tropical forests (South America, Central America, Africa, Australia, and Southeast Asia) than those between FRP and AOD. In contrast, boreal forests in Russia, particularly in Siberia, show a higher FRP and AOD correlation than FRP and CO. In tropical forests, CO production is likely favored over aerosol, while in temperate forests aerosol production is favored over CO. On the east coast of the United States, along the eastern border of the USA with Canada, in eastern China, on the border between China, Russia and Mongolia, and on the border between northern India and China, there is a high correlation between CO and AOD and a low correlation of FRP with both CO and AOD. Therefore, the emissions in these regions are generated not by forest fires but by industry and vehicles, since these are densely populated regions. Full article
(This article belongs to the Section Atmospheric Remote Sensing)
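As a hedged sketch of how pixel-wise correlation maps like those discussed above can be built, the snippet below correlates two co-gridded time series (e.g., monthly CO and FRP fields) along the time axis. The array names, shapes and random stand-in data are assumptions for illustration, not the paper's actual data interface.

```python
import numpy as np

def pixelwise_correlation(a, b):
    """Pearson correlation along the time axis for two stacks of
    co-registered grids shaped (time, lat, lon)."""
    a_anom = a - a.mean(axis=0)
    b_anom = b - b.mean(axis=0)
    cov = (a_anom * b_anom).mean(axis=0)
    denom = a.std(axis=0) * b.std(axis=0)
    with np.errstate(invalid="ignore", divide="ignore"):
        r = cov / denom
    return r  # shape (lat, lon); NaN where a series is constant

# Illustrative use with random stand-ins for monthly CO and FRP grids
rng = np.random.default_rng(0)
co = rng.normal(size=(60, 90, 180))          # 5 years of monthly fields
frp = 0.7 * co + rng.normal(size=co.shape)   # correlated stand-in
print(pixelwise_correlation(co, frp).mean()) # around 0.57 on average
```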
Figure 1: (a) CO (ppb) observed by the MOPITT sensor for December–January–February (DJF) in the period 2010–2014; (b) CO simulated by the CAM5-MAM3 model using the tropospheric chemistry mechanism and (d) the tropospheric/stratospheric chemistry mechanism, using the calculations of Equation (4). The right side of the figure shows the model minus observation (MOPITT) difference, (c) for tropospheric chemistry and (e) for tropospheric/stratospheric chemistry.
Figure 2: (a) CO (ppb) observed by the MOPITT sensor for March–April–May (MAM) in the period 2010–2014; (b) CO simulated by the CAM5-MAM3 model using the tropospheric chemistry mechanism and (d) the tropospheric/stratospheric chemistry mechanism, using the calculations of Equation (4). The right side of the figure shows the model minus observation (MOPITT) difference, (c) for tropospheric chemistry and (e) for tropospheric/stratospheric chemistry.
Figure 3: (a) CO (ppb) observed by the MOPITT sensor for June–July–August (JJA) in the period 2010–2014; (b) CO simulated by the CAM5-MAM3 model using the tropospheric chemistry mechanism and (d) the tropospheric/stratospheric chemistry mechanism, using the calculations of Equation (4). The right side of the figure shows the model minus observation (MOPITT) difference, (c) for tropospheric chemistry and (e) for tropospheric/stratospheric chemistry.
Figure 4: (a) CO (ppb) observed by the MOPITT sensor for September–October–November (SON) in the period 2010–2014; (b) CO simulated by the CAM5-MAM3 model using the tropospheric chemistry mechanism and (d) the tropospheric/stratospheric chemistry mechanism, using the calculations of Equation (4). The right side of the figure shows the model minus observation (MOPITT) difference, (c) for tropospheric chemistry and (e) for tropospheric/stratospheric chemistry.
Figure 5: (a) CO (ppb) observed by the MOPITT sensor for August–September–October (ASO) in the period 2010–2014; (b) CO simulated by the CAM5-MAM3 model using the tropospheric chemistry mechanism and (d) the tropospheric/stratospheric chemistry mechanism, using the calculations of Equation (4). The right side of the figure shows the model minus observation (MOPITT) difference, (c) for tropospheric chemistry and (e) for tropospheric/stratospheric chemistry.
Figure 6: (a) AOD observed by the MODIS sensor for December–January–February (DJF) in the period 2010–2014; (b) AOD simulated by the CAM5-MAM3 model using the tropospheric chemistry mechanism and (d) the tropospheric/stratospheric chemistry mechanism. The right side of the figure shows the model minus observation (MODIS) difference, (c) for tropospheric chemistry and (e) for tropospheric/stratospheric chemistry.
Figure 7: (a) AOD observed by the MODIS sensor for March–April–May (MAM) in the period 2010–2014; (b) AOD simulated by the CAM5-MAM3 model using the tropospheric chemistry mechanism and (d) the tropospheric/stratospheric chemistry mechanism. The right side of the figure shows the model minus observation (MODIS) difference, (c) for tropospheric chemistry and (e) for tropospheric/stratospheric chemistry.
Figure 8: (a) AOD observed by the MODIS sensor for June–July–August (JJA) in the period 2010–2014; (b) AOD simulated by the CAM5-MAM3 model using the tropospheric chemistry mechanism and (d) the tropospheric/stratospheric chemistry mechanism. The right side of the figure shows the model minus observation (MODIS) difference, (c) for tropospheric chemistry and (e) for tropospheric/stratospheric chemistry.
Figure 9: (a) AOD observed by the MODIS sensor for September–October–November (SON) in the period 2010–2014; (b) AOD simulated by the CAM5-MAM3 model using the tropospheric chemistry mechanism and (d) the tropospheric/stratospheric chemistry mechanism. The right side of the figure shows the model minus observation (MODIS) difference, (c) for tropospheric chemistry and (e) for tropospheric/stratospheric chemistry.
Figure 10: (a) AOD observed by the MODIS sensor for August–September–October (ASO) in the period 2010–2014; (b) AOD simulated by the CAM5-MAM3 model using the tropospheric chemistry mechanism and (d) the tropospheric/stratospheric chemistry mechanism. The right side of the figure shows the model minus observation (MODIS) difference, (c) for tropospheric chemistry and (e) for tropospheric/stratospheric chemistry.
Figure 11: Correlation maps between the annual averages of observed and simulated AOD (a,b) and CO (ppb) (c,d) in both model configurations.
Figure 12: Root-mean-square error (RMSE) maps of the annual average simulated AOD (a,b) and CO (ppb) (c,d) in both model configurations, relative to the observations.
Figure 13: Seasonal mean fire radiative power (FRP) detected by the MODIS sensor with the Aqua active fire products over the globe during DJF, MAM, JJA, SON and ASO from 2010 to 2014.
Figure 14: (a) Correlation coefficient between FRP (MW) from MODIS and CO (ppb) from MOPITT, showing the highest prevalence of both parameters over agricultural regions such as Southeast Asia and southern China. (b) Correlation coefficient between FRP (MW) and AOD from MODIS, showing a positive correlation over less dense regions such as forested and agricultural lands across the globe. (c) Correlation coefficient of CO from MOPITT (ppb) with AOD from MODIS, showing different trends compared to the correlations of FRP with CO and of FRP with AOD (a,b). The AOD prevalence is higher in the tropical and sub-tropical regions.
18 pages, 2203 KiB  
Article
Caching-Aware Intelligent Handover Strategy for LEO Satellite Networks
by Tao Leng, Yuanyuan Xu, Gaofeng Cui and Weidong Wang
Remote Sens. 2021, 13(11), 2230; https://doi.org/10.3390/rs13112230 - 7 Jun 2021
Cited by 12 | Viewed by 4198
Abstract
Recently, many Low Earth Orbit (LEO) satellite networks have been implemented to provide seamless communication services for global users. Owing to the high mobility of LEO satellites, the handover strategy has become one of the most important topics for LEO satellite systems. However, the limited on-board caching resources of satellites make it difficult to guarantee handover performance. In this paper, we propose a multiple-attribute decision handover strategy jointly considering three factors: the caching capacity, the remaining service time and the remaining idle channels of the satellites. Furthermore, a caching-aware intelligent handover strategy based on deep reinforcement learning (DRL) is given to maximize the long-term benefits of the system. Compared with traditional strategies, the proposed strategy reduces the handover failure rate by up to nearly 81% when the system caching occupancy reaches 90%, and it has a lower call blocking rate in high user-arrival scenarios. Simulation results show that this strategy can effectively mitigate handover failures due to caching resource occupation, as well as flexibly allocate channel resources to reduce call blocking. Full article
(This article belongs to the Special Issue Advanced Satellite-Terrestrial Networks)
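A minimal sketch of the multiple-attribute decision idea described above: each candidate satellite is scored on normalized caching capacity, remaining service time and remaining idle channels, and the highest-scoring one is chosen. The fixed attribute weights here are purely illustrative; the paper learns the handover policy with a DQN rather than fixing weights.

```python
import numpy as np

def choose_satellite(caching, service_time, idle_channels,
                     weights=(0.4, 0.3, 0.3)):
    """Pick the handover target by a weighted sum of normalized
    attributes (one row per candidate satellite)."""
    attrs = np.column_stack([caching, service_time,
                             idle_channels]).astype(float)
    attrs /= attrs.max(axis=0) + 1e-12   # normalize each attribute to [0, 1]
    scores = attrs @ np.asarray(weights)
    return int(np.argmax(scores))

# Three candidate satellites (illustrative numbers)
best = choose_satellite(caching=[0.2, 0.8, 0.5],
                        service_time=[120, 40, 300],
                        idle_channels=[3, 10, 6])
print(best)  # -> 1
```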
Figure 1: Handover scenario in an LEO satellite system.
Figure 2: Satellite coverage at different times.
Figure 3: The geometric relationship of $\Upsilon(t_0)$.
Figure 4: The geometric relationship of $\Upsilon_{\min}$.
Figure 5: Angular velocity of the satellite.
Figure 6: Handover flow chart based on the serving-satellite decision.
Figure 7: The intelligent handover network based on DQN.
Figure 8: Convergence of the DQN loss function.
Figure 9: Comparison of reward values under different learning rates.
Figure 10: Performance comparison of handover failure rate.
Figure 11: Performance comparison of call blocking rate.
17 pages, 8865 KiB  
Article
Deep Learning-Based Radar Composite Reflectivity Factor Estimations from Fengyun-4A Geostationary Satellite Observations
by Fenglin Sun, Bo Li, Min Min and Danyu Qin
Remote Sens. 2021, 13(11), 2229; https://doi.org/10.3390/rs13112229 - 7 Jun 2021
Cited by 16 | Viewed by 3868
Abstract
Ground-based weather radar data plays an essential role in monitoring severe convective weather. The detection of such weather systems in time is critical for saving people’s lives and property. However, the limited spatial coverage of radars over the ocean and mountainous regions greatly limits their effective application. In this study, we propose a novel framework of a deep learning-based model to retrieve the radar composite reflectivity factor (RCRF) maps from the Fengyun-4A new-generation geostationary satellite data. The suggested framework consists of three main processes, i.e., satellite and radar data preprocessing, the deep learning-based regression model for retrieving the RCRF maps, as well as the testing and validation of the model. In addition, three typical cases are also analyzed and studied, including a cluster of rapidly developing convective cells, a Northeast China cold vortex, and the Super Typhoon Haishen. Compared with the high-quality precipitation rate products from the integrated Multi-satellite Retrievals for Global Precipitation Measurement, it is found that the retrieved RCRF maps are in good agreement with the precipitation pattern. The statistical results show that retrieved RCRF maps have an R-square of 0.88-0.96, a mean absolute error of 0.3-0.6 dBZ, and a root-mean-square error of 1.2-2.4 dBZ. Full article
(This article belongs to the Special Issue Optical and Laser Remote Sensing of Atmospheric Composition)
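A small sketch of the validation statistics quoted above (R-square, MAE, RMSE) as they might be computed pixel-wise between retrieved and radar RCRF maps; the function and array names are illustrative assumptions, not the paper's code.

```python
import numpy as np

def validation_stats(retrieved, observed):
    """R-square, mean absolute error and root-mean-square error
    between retrieved and observed reflectivity (dBZ), ignoring NaNs."""
    mask = np.isfinite(retrieved) & np.isfinite(observed)
    r, o = retrieved[mask], observed[mask]
    ss_res = np.sum((o - r) ** 2)
    ss_tot = np.sum((o - o.mean()) ** 2)
    r2 = 1.0 - ss_res / ss_tot
    mae = np.mean(np.abs(o - r))
    rmse = np.sqrt(np.mean((o - r) ** 2))
    return r2, mae, rmse
```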
Figure 1: Flowchart of the data preprocessing and training module (left) and the RCRF map validation module after training (right) of the deep learning model.
Figure 2: A U-net regression architecture (example with 50 × 80 pixels at the lowest resolution). Each colored box corresponds to a multi-channel feature map, and the number of channels of each feature map is denoted at the bottom of the box.
Figure 3: (a) RCRF maps (dBZ) retrieved from infrared channels, (b) visible channels and (c) ground-based radars, and (d) the corresponding precipitation rate from the GPM IMERG data (mm·h⁻¹) at 08:00 UTC on June 5, 2020. The light blue areas are the regions covered by radar echoes.
Figure 4: A typical case of Super Typhoon Haishen at 03:00 UTC on September 6, 2020. (a) RCRF maps (dBZ) retrieved from infrared channels, (b) visible channels and (c) ground-based radars, and (d) the corresponding precipitation rate from the GPM IMERG data (mm·h⁻¹). The light blue areas are the regions covered by radar echoes.
Figure 5: A typical case of a Northeast China cold vortex that occurred on September 16, 2020. (a) RCRF maps (dBZ) retrieved from infrared channels, (b) visible channels and (c) ground-based radars, and (d) the corresponding precipitation rate from the GPM IMERG data (mm·h⁻¹). The light blue areas are the regions covered by radar echoes.
Figure 6: Occurrence and development of a convective system over North China from 04:30 UTC to 08:00 UTC on July 8, 2020. The subplots in each row are RCRF maps (dBZ) retrieved from (a) infrared channels, (b) visible channels, and (c) ground-based radar, and (d) the corresponding precipitation rate from the GPM IMERG data (mm·h⁻¹). The red circles mark target-1, the pink circles target-2, the blue circles target-3 and the black circles target-4. The light blue areas are the regions covered by radar echoes.
Figure 7: Comparisons of the RCRF maps between ground-based radars and the deep learning-based retrieval model. The color bar represents the occurrence frequency (on a logarithmic scale) for the retrieved RCRF maps. (a) The infrared model over land, (b) the visible model over land, (c) the infrared model over the ocean, (d) the visible model over the ocean.
Figure 8: Validation of the RCRF obtained by the U-net regression-based retrieval algorithm for land and ocean regions during May–October: (a) pixel-wise RS, (b) pixel-wise MAE, (c) pixel-wise RMSE.
Figure 9: Precipitation rate (mm/h) maps at 02:00 UTC on July 27, 2020, including (a) PR retrieved from RCRF maps using infrared channels, (b) PR retrieved from RCRF maps using visible/near-infrared channels, (c) PR from radar RCRF maps, and (d) PR from GPM IMERG data.
25 pages, 3763 KiB  
Article
Validation of Sentinel-3 SLSTR Land Surface Temperature Retrieved by the Operational Product and Comparison with Explicitly Emissivity-Dependent Algorithms
by Lluís Pérez-Planells, Raquel Niclòs, Jesús Puchades, César Coll, Frank-M. Göttsche, José A. Valiente, Enric Valor and Joan M. Galve
Remote Sens. 2021, 13(11), 2228; https://doi.org/10.3390/rs13112228 - 7 Jun 2021
Cited by 26 | Viewed by 4581
Abstract
Land surface temperature (LST) is an essential climate variable (ECV) for monitoring the Earth's climate system. To ensure accurate retrieval from satellite data, it is important to validate satellite-derived LSTs and ensure that they are within the required accuracy and precision thresholds. An emissivity-dependent split-window algorithm with viewing-angle dependence and two dual-angle algorithms are proposed for the Sentinel-3 SLSTR sensor. These algorithms are validated, together with the Sentinel-3 SLSTR operational LST product and several emissivity-dependent split-window algorithms, against in-situ data from a rice paddy site. The LST retrieval algorithms were validated over three different land covers: flooded soil, bare soil, and full vegetation cover. Ground measurements were performed with a wide-band thermal infrared radiometer at a permanent station. The coefficients of the proposed split-window algorithm were estimated using the Cloudless Land Atmosphere Radiosounding (CLAR) database: for the three surface types, an overall systematic uncertainty (median) of −0.4 K and a precision (robust standard deviation) of 1.1 K were obtained. For the Sentinel-3A SLSTR operational LST product, a systematic uncertainty of 1.3 K and a precision of 1.3 K were obtained. A first evaluation of the Sentinel-3B SLSTR operational LST product was also performed: the systematic uncertainty was 1.5 K and the precision 1.2 K. The results obtained over the three land covers found at the rice paddy site show that the emissivity-dependent split-window algorithms, i.e., the ones proposed here as well as previously proposed algorithms without angular dependence, provide more accurate and precise LSTs than the current version of the operational SLSTR product. Full article
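For orientation, a minimal sketch of a generic emissivity-dependent split-window form is shown below, following the widely used structure LST = T11 + c1(T11 − T12) + c2(T11 − T12)² + c0 + (c3 + c4·W)(1 − ε) + (c5 + c6·W)Δε, where ε is the mean emissivity of the two channels, Δε their difference, and W the column water vapour. The coefficient values in the example are placeholders, not the coefficients fitted from the CLAR database in this paper.

```python
def split_window_lst(t11, t12, emis11, emis12, wv, c):
    """Generic emissivity-dependent split-window LST (K).
    c = (c0, c1, c2, c3, c4, c5, c6) are algorithm coefficients;
    the values used below are placeholders, not the paper's fit."""
    eps = 0.5 * (emis11 + emis12)   # mean emissivity
    d_eps = emis11 - emis12         # emissivity difference
    dt = t11 - t12                  # brightness temperature difference
    return (t11 + c[1] * dt + c[2] * dt**2 + c[0]
            + (c[3] + c[4] * wv) * (1.0 - eps)
            + (c[5] + c[6] * wv) * d_eps)

# Placeholder coefficients and one synthetic observation
c = (-0.268, 1.378, 0.183, 54.3, -2.238, -129.2, 16.4)
print(split_window_lst(t11=295.0, t12=293.5, emis11=0.98,
                       emis12=0.975, wv=2.0, c=c))
```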
<p>RGB true color compositions (R-G-B 4-3-2; <b>top</b>) and false color compositions (R-G-B 8-4-3; <b>bottom</b>) for three Sentinel-2 Multispectral Instrument (MSI) scenes. The three land covers at the site are: bare soil (April, <b>left</b>), flooded soil, i.e., water (May, <b>center</b>), full vegetation (August, <b>right</b>). The location of the validation site is shown in the composition.</p>
Full article ">Figure 2
<p>Fraction of vegetation cover given by the SLSTR L2 product as a function of day of year. A representative photo for each land cover is also shown.</p>
Full article ">Figure 3
<p>Angular emissivity variation of the bare soil (<b>left</b>) and flooded soil (<b>right</b>) for the CE-312 channels centered on 11 and 12 µm.</p>
Full article ">Figure 4
<p>LST–<span class="html-italic">T</span><sub>11</sub> against <span class="html-italic">T</span><sub>11</sub>–<span class="html-italic">T</span><sub>12</sub> simulated from the CLAR database at the different view angles for the SLSTR SWA atmospheric coefficients retrieval. The regression functions corresponding to each angular dataset are plotted as lines in the same color as their corresponding data.</p>
Full article ">Figure 5
<p>LST obtained with the fixed SI-121 radiometer compared to LST obtained along the transects with mobile CE-312 radiometers.</p>
Full article ">Figure 6
<p>Operational Sentinel-3A SLSTR LST product against ground LST obtained from the SI-121 radiometer over the three seasonal land cover types at the Valencia rice paddy site. The dark grey and light grey shadows show 1-RSD and 3-RSD around the regression (dashed line).</p>
Full article ">Figure 7
<p>LST retrieved from Sentinel-3A with emissivity-dependent algorithms against in-situ LST obtained from the SI-121 radiometer. Top left: Sobrino16. Top right: Zhang19. Bottom left: Zheng19. Bottom right: the proposed algorithm.</p>
Full article ">Figure 8
<p>SLSTR LST retrieved with the dual-angle algorithms for the 11 µm channel (left; DAA11) and 12 µm channel (right; DAA12) against in-situ LST for the three seasonal land covers at the Valencia rice paddy site.</p>
Full article ">
19 pages, 5935 KiB  
Article
Multi-Scale Fused SAR Image Registration Based on Deep Forest
by Shasha Mao, Jinyuan Yang, Shuiping Gou, Licheng Jiao, Tao Xiong and Lin Xiong
Remote Sens. 2021, 13(11), 2227; https://doi.org/10.3390/rs13112227 - 7 Jun 2021
Cited by 16 | Viewed by 3166
Abstract
SAR image registration is a crucial problem in SAR image processing, since high-precision registration results help improve the quality of downstream tasks such as change detection of SAR images. Recently, most DL-based SAR image registration methods have treated registration as a binary classification problem with matching and non-matching categories, where a fixed scale is generally set to capture pairs of image blocks around key points and generate the training set; however, image blocks at different scales contain different information, which affects registration performance. Moreover, the number of key points is not sufficient to generate a mass of class-balanced training samples. Hence, we propose a new SAR image registration method that utilizes information from multiple scales to construct the matching models. Specifically, considering that the number of training samples is small, deep forest is employed to train multiple matching models. Moreover, a multi-scale fusion strategy is proposed to integrate the multiple predictions and obtain the best pairs of matching points between the reference image and the sensed image. Finally, experimental results on four datasets illustrate that the proposed method outperforms the compared state-of-the-art methods, and the analyses for different scales also indicate that fusing multiple scales is more effective and more robust for SAR image registration than any single fixed scale. Full article
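A rough sketch of the multi-scale patch idea described above: around each key point, blocks are cropped at several scales and each scale feeds its own matching model. The scale list and the simple averaging fusion are illustrative assumptions; the actual method trains deep-forest models per scale and uses its own fusion strategy.

```python
import numpy as np

def multiscale_patches(image, y, x, scales=(16, 24, 32)):
    """Crop square blocks of several half-widths around a key point;
    returns one patch per scale (None if it falls off the image)."""
    H, W = image.shape
    patches = []
    for s in scales:
        if y - s < 0 or x - s < 0 or y + s > H or x + s > W:
            patches.append(None)
        else:
            patches.append(image[y - s:y + s, x - s:x + s])
    return patches

def fuse_predictions(probs_per_scale):
    """Average matching probabilities from the per-scale models
    (a simple stand-in for the paper's fusion strategy)."""
    valid = [p for p in probs_per_scale if p is not None]
    return float(np.mean(valid)) if valid else 0.0
```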
<p>Patches with different sizes.</p>
Full article ">Figure 2
<p>The framework of the proposed method.</p>
Full article ">Figure 3
<p>An example of constructing training sets with multiple scales.</p>
Full article ">Figure 4
<p>Diversity maps corresponding to a pair of matching image blocks and a pair of non-matching image blocks.</p>
Full article ">Figure 5
<p>Reference and Sensed Images of Wuhan Data.</p>
Full article ">Figure 6
<p>Reference and Sensed Images of YellowR1 Data.</p>
Full article ">Figure 7
<p>Reference and Sensed Images of YellowR2 Data.</p>
Full article ">Figure 8
<p>Reference and Sensed Images of Australia-Yama Data.</p>
Full article ">Figure 9
<p>The Chessboard Diagram for Wuhan Image.</p>
Full article ">Figure 10
<p>The Chessboard Mosaicked Image for YellowR1 Image.</p>
Full article ">Figure 11
<p>The Chessboard Diagram for Yamba Image.</p>
Full article ">Figure 12
<p>The Chessboard Diagram for YellowR2 Image.</p>
Full article ">Figure 13
<p>Reference and Sensed Images of Bern Flood Data.</p>
Full article ">
25 pages, 62360 KiB  
Article
From Acquisition to Presentation—The Potential of Semantics to Support the Safeguard of Cultural Heritage
by Jean-Jacques Ponciano, Claire Prudhomme and Frank Boochs
Remote Sens. 2021, 13(11), 2226; https://doi.org/10.3390/rs13112226 - 7 Jun 2021
Cited by 6 | Viewed by 3332
Abstract
The signing of the 2019 Declaration of Cooperation on advancing the digitization of cultural heritage in Europe shows the important role that the 3D digitization process plays in the safeguarding and sustainability of cultural heritage. Digitization also aims at sharing and presenting cultural heritage. However, the processing steps from data acquisition to presentation require interdisciplinary collaboration, in which mutual understanding and collaborative work are difficult because of the different fields of expert knowledge involved. This study proposes an end-to-end method from cultural data acquisition to presentation, based on explicit semantics representing the different fields of expert knowledge intervening in this process. The method is composed of three knowledge-based processing steps: (i) a recommendation process for acquisition technology to support cultural data acquisition; (ii) an object recognition process to structure the unstructured acquired data; and (iii) an enrichment process based on Linked Open Data to document cultural objects with further information, such as geospatial, cultural, and historical information. The proposed method was applied in two case studies concerning the watermills of Ephesos terrace house 2 and the first Sacro Monte chapel in Varallo. These application cases show the proposed method's ability to recognize and document digitized cultural objects in different contexts thanks to the semantics. Full article
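As an illustration of the kind of RDF representation the method produces (see Figures 5 and 7 below), here is a minimal rdflib sketch that records a recognized object with a label and a geospatial annotation. The namespace URIs, property choices and coordinates are assumptions for illustration, not the ontology used in the paper.

```python
from rdflib import Graph, Literal, Namespace, RDF, RDFS

# Hypothetical namespaces; the paper's actual ontology is not reproduced here
CH = Namespace("http://example.org/cultural-heritage#")
GEO = Namespace("http://www.opengis.net/ont/geosparql#")

g = Graph()
g.bind("ch", CH)
g.bind("geo", GEO)

mill = CH["watermill_1"]
g.add((mill, RDF.type, CH.Watermill))                      # object class
g.add((mill, RDFS.label, Literal("Watermill, Ephesos terrace house 2")))
g.add((mill, GEO["asWKT"],                                 # illustrative coordinates
       Literal("POINT(27.34 37.94)", datatype=GEO["wktLiteral"])))
print(g.serialize(format="turtle"))
```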
Figure 1: Proposed method overview. Green arrows correspond to user inputs, and red arrows correspond to automatically generated outputs.
Figure 2: Knowledge base content.
Figure 3: Explanation of the interface made to present information and knowledge related to cultural heritage objects.
Figure 4: Original point cloud of Ephesos terrace house 2.
Figure 5: RDF representation of the watermill acquisition in Ephesos terrace house 2.
Figure 6: Original point cloud of the first chapel of Sacro Monte.
Figure 7: RDF representation of the chapel "Adam and Eve" acquisition in the Sacro Monte Varallo.
Figure 8: Knowledge about the object "column".
Figure 9: Object recognition results in the Ephesos terrace house 2 point cloud: (a) room recognition and (b) watermill recognition.
Figure 10: Results of the object recognition phases (segmentation and classification) applied to the chapel of the Sacro Monte.
Figure 11: Results of the object recognition improved by cultural heritage (CH) knowledge.
Figure 12: Enriched cultural heritage information related to the watermills of Ephesos terrace house 2.
Figure 13: Enriched cultural heritage information related to the chapel.
Figure 14: Enriched cultural heritage information related to the first statue in front of the chapel.
Figure 15: Watermill recognition in Ephesos terrace house 2: (left) point cloud with detected watermills in blue and red; (middle) watermills with their water wheel [50]; (right) terrace house 2 plan presenting the watermills and their construction phases [50].
Figure 16: The unrecognized watermill composed of the large room (in brown) and the narrow room (framed in yellow).
Figure 17: Optimal result obtained for the object recognition of the Sacro Monte chapel.
Figure 18: Isolated view of objects recognized in the first chapel of Sacro Monte.
19 pages, 6626 KiB  
Article
Tropospheric Volcanic SO2 Mass and Flux Retrievals from Satellite. The Etna December 2018 Eruption
by Stefano Corradini, Lorenzo Guerrieri, Hugues Brenot, Lieven Clarisse, Luca Merucci, Federica Pardini, Alfred J. Prata, Vincent J. Realmuto, Dario Stelitano and Nicolas Theys
Remote Sens. 2021, 13(11), 2225; https://doi.org/10.3390/rs13112225 - 7 Jun 2021
Cited by 17 | Viewed by 3702
Abstract
The presence of volcanic clouds in the atmosphere affects air quality, the environment, climate, human health and aviation safety. The importance of the detection and retrieval of volcanic SO2 lies with risk mitigation as well as with the possibility of providing insights into the mechanisms that cause eruptions. Due to their intrinsic characteristics, satellite measurements have become an essential tool for volcanic monitoring. In recent years, several sensors with different spectral, spatial and temporal resolutions have been launched into orbit, significantly increasing the effectiveness of the estimation of the various parameters related to the state of volcanic activity. In this work, the SO2 total masses and fluxes obtained from several satellite sounders (the geostationary (GEO) MSG-SEVIRI and the polar (LEO) Aqua/Terra-MODIS, NPP/NOAA20-VIIRS, Sentinel5p-TROPOMI, MetopA/MetopB-IASI and Aqua-AIRS) were compared to one another. As a test case, the Christmas 2018 Etna eruption was considered. The characteristics of the eruption (tropospheric, with low ash content), the large amount of simultaneously available data and the different instrument types and SO2 columnar abundance retrieval strategies make this cross-comparison particularly relevant. Results show the higher sensitivity of TROPOMI and IASI and a generally good agreement between the SO2 total masses and fluxes obtained from all the satellite instruments. The differences found are related either to inherent instrumental sensitivity or to the assumed and/or calculated SO2 cloud height used as input for the satellite retrievals. Results also indicate that, despite their low revisit time, the LEO sensors are able to provide information on SO2 flux over large time intervals. Finally, a complete error assessment of the SO2 flux retrievals using SEVIRI data was realized by considering uncertainties in wind speed and SO2 abundance. Full article
(This article belongs to the Special Issue Multi-Sensor Remote Sensing Data for Volcanic Hazards Monitoring)
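A simplified sketch of the transect-based flux estimate used in studies like this one (compare Figure 11 below, with its fixed transect at 30 km from the vents): the SO2 column amounts along a transect perpendicular to transport are integrated and multiplied by the wind speed. The array names and numbers are illustrative assumptions; the Dobson Unit conversion (1 DU of SO2 ≈ 2.85 × 10⁻⁵ kg/m²) follows from the DU definition and the SO2 molar mass.

```python
import numpy as np

def so2_flux(columns_du, pixel_width_m, wind_speed_ms):
    """SO2 flux (kg/s) across a transect from column amounts in
    Dobson Units. 1 DU of SO2 ~ 2.85e-5 kg/m^2."""
    DU_TO_KG_M2 = 2.85e-5
    # Mass per metre of plume travel, integrated along the transect
    mass_per_m = np.sum(columns_du) * DU_TO_KG_M2 * pixel_width_m
    return mass_per_m * wind_speed_ms

# Illustrative transect: 20 pixels of 3 km each, 5 DU mean, 10 m/s wind
cols = np.full(20, 5.0)
print(so2_flux(cols, pixel_width_m=3000.0, wind_speed_ms=10.0))  # ~85.5 kg/s
```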
Figure 1: (a) Image of southern Italy collected by the MODIS sensor on board the NASA Terra/Aqua satellites on 27 December 2018 at 12:20 UTC. (b) Map of Etna volcano indicating the ground-based visible cameras (yellow points); in the zoom, the main active craters are indicated: Voragine (VOR), Bocca Nuova (BN), North East Crater (NEC), South East Crater (SEC) and New SEC (NSEC). (c) Etna activity of 27 December 2018 as seen from Taormina. Images modified from Corradini et al., 2020.
Figure 2: Nadir-view ground pixel sizes for the different satellite instruments considered in the cross-comparison.
Figure 3: Time distribution of the LEO satellite data processed.
Figure 4: Red rectangle: the area selected for the SO2 total mass and flux cross-comparison. The two satellite images, collected on 27 and 28 December 2018 at 18:00 and 06:00 UTC respectively, are representative of the largest volcanic cloud extension detected by SEVIRI in the whole 26–30 December period.
Figure 5: Near-simultaneous SO2 maps obtained from the different satellite measurements, collected on 27 December 2018 at around 12:30 UTC (except IASI, collected at 09:12 UTC).
Figure 6: SO2 total area computed from the images collected by the different satellite instruments in the 34–38N, 14–18E latitude–longitude grid.
Figure 7: SO2 total masses computed from the images collected by the different satellite instruments in the 34–38N, 14–18E latitude–longitude grid.
Figure 8: VPTH used for the SEVIRI, MODIS, VIIRS and TROPOMI retrievals (gray line) and VPTH used for the IASI SO2 retrievals.
Figure 9: SO2 fluxes obtained from SEVIRI, MODIS, VIIRS and TROPOMI measurements. The colored rectangles indicate the regions where the GEO and LEO retrieval discrepancies are significant. Light red areas: differences due to LEO images not being available and/or significant plume dilution in the distal part of the cloud. Light green area: differences due to the presence of volcanic ash. Point "P" is an example of an SO2 peak not detected by SEVIRI.
Figure 10: Cross-comparison between the SO2 fluxes obtained from SEVIRI, TROPOMI, IASI and AIRS measurements. The colored rectangles indicate the regions where the GEO and LEO retrieval discrepancies are significant. Light red areas: differences due to LEO images not being available and/or significant plume dilution in the distal part of the cloud. Light green area: differences due to the presence of volcanic ash.
Figure 11: SO2 flux obtained from the SEVIRI 15-minute measurements, considering a fixed transect at 30 km from the vents. The gray dashed line indicates the flux obtained with a constant daily wind speed, while the gray solid line indicates the flux obtained with 6 h step wind speeds. The dashed and solid light blue lines indicate, respectively, the daily and 6 h wind speeds used for the SO2 flux computations.
Figure 12: SEVIRI SO2 fluxes computed with the different wind speeds obtained by assuming an uncertainty of ±500 m in VPTH.
Figure 13: SEVIRI SO2 flux. The uncertainty derives from the sum of the SO2 columnar abundance and VPTH uncertainties.
18 pages, 6101 KiB  
Article
Squint Model InISAR Imaging Method Based on Reference Interferometric Phase Construction and Coordinate Transformation
by Yu Li, Yunhua Zhang and Xiao Dong
Remote Sens. 2021, 13(11), 2224; https://doi.org/10.3390/rs13112224 - 7 Jun 2021
Cited by 6 | Viewed by 2566
Abstract
The imaging quality of InISAR under squint geometry can be greatly degraded by serious interferometric phase ambiguity (InPhaA), which results in image distortion. Aiming to solve these problems, a three-dimensional InISAR (3D ISAR) imaging method based on reference InPhas [...] Read more.
The imaging quality of InISAR under squint geometry can be greatly degraded by serious interferometric phase ambiguity (InPhaA), which results in image distortion. Aiming to solve these problems, a three-dimensional InISAR (3D ISAR) imaging method based on reference InPha construction and coordinate transformation is presented in this paper. First, the target's coarse 3D location is obtained by the cross-correlation algorithm, and a relatively strong scatterer is taken as the reference scatterer to construct the reference interferometric phases (InPhas), so as to remove the InPhaA and restore the real InPhas. The selected scatterer need not be exactly at the center of the coarsely located target. Then, the image distortion is corrected by coordinate transformation, and finally the 3D coordinates of the target can be accurately estimated. Both simulation and practical experiment results validate the effectiveness of the method. Full article
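The core of the reference-InPha idea can be sketched as a phase re-wrapping step: subtracting a reference interferometric phase and re-wrapping the residual into (-pi, pi] leaves a small, unambiguous correction. This is a minimal sketch of that single step, not the full method from the paper.

```python
import numpy as np

def restore_inphas(measured_inpha, reference_inpha):
    """Remove interferometric phase ambiguity using a reference phase.

    measured_inpha : array of measured (wrapped) interferometric phases.
    reference_inpha : reference InPhas constructed from a strong
        reference scatterer near the coarsely located target.
    """
    # Re-wrap the difference into (-pi, pi]; if the reference is close to
    # the true phase, the residual is small and free of 2*pi ambiguity.
    residual = np.angle(np.exp(1j * (measured_inpha - reference_inpha)))
    return reference_inpha + residual
```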
Show Figures

Figure 1: InISAR imaging under the squint model.
Figure 2: The influence of the squint effect on InPhas: (a) the real InPha; (b) the measured InPhas.
Figure 3: Rotation of coordinate systems.
Figure 4: Flowchart of the proposed InISAR imaging method.
Figure 5: Target model in (a) 3D; (b) projection on the x-y plane; (c) projection on the x-z plane; (d) projection on the y-z plane.
Figure 6: ISAR images of antenna A obtained by (a) the proposed method; (b) the method in [25], taking Q as the reference scatterer; (c) the method in [25], taking the target center as the reference scatterer.
Figure 7: Interferometric phases: (a) before InPha restoration; (b) after InPha restoration.
Figure 8: 3D imaging results by the traditional method without removing the squint effect, in (a) 3D; (b) projection on the x-y plane; (c) projection on the x-z plane; (d) projection on the y-z plane.
Figure 9: 3D imaging results obtained by the proposed method with the Q scatterer at position (9.980, 9.939, 9.980) km taken as the reference scatterer, in (a) 3D; (b) projection on the x-y plane; (c) projection on the x-z plane; (d) projection on the y-z plane.
Figure 10: 3D imaging results obtained by the method in [25] with the Q scatterer at position (9.980, 9.939, 9.980) km taken as the reference scatterer, in (a) 3D; (b) projection on the x-y plane; (c) projection on the x-z plane; (d) projection on the y-z plane.
Figure 11: 3D imaging results obtained by the method in [25] with the scatterer at position (9.995, 9.995, 9.995) km taken as the reference scatterer, in (a) 3D; (b) projection on the x-y plane; (c) projection on the x-z plane; (d) projection on the y-z plane.
Figure 12: RMSE under different SNRs.
Figure 13: Radar system and target: (a) the InISAR system; (b) the rotating target.
Figure 14: Experiment observation geometry: (a) real observation geometry; (b) equivalent observation geometry.
Figure 15: InPha images of the measured InPhas: (a) from X-direction interferometry, the InPha is discontinuous, as indicated by the abrupt change of colors between yellow and dark blue; (b) from Z-direction interferometry, the InPha is continuous, as indicated by the gradual change of colors between yellow and light blue.
Figure 16: InPha images of the real InPhas: (a) from X-direction interferometry, the InPha is continuous, as indicated by the gradual change of colors between yellow and light blue; (b) from Z-direction interferometry, the InPha is continuous, as indicated by the gradual change of colors between yellow and light blue.
Figure 17: Three-dimensional imaging results without InPhaA removed: (a) 3D reconstruction; (b) projection on the x-y plane; (c) projection on the x-z plane; (d) projection on the y-z plane.
Figure 18: Three-dimensional imaging results with InPhaA removed: (a) 3D reconstruction; (b) projection on the x-y plane; (c) projection on the x-z plane; (d) projection on the y-z plane.
Figure 19: Three-dimensional imaging results with InPhaA removed and image distortion corrected: (a) 3D reconstruction; (b) projection on the x-y plane; (c) projection on the x-z plane; (d) projection on the y-z plane.
32 pages, 13227 KiB  
Article
Drivers of Organic Carbon Stocks in Different LULC History and along Soil Depth for a 30 Years Image Time Series
by Mahboobeh Tayebi, Jorge Tadeu Fim Rosas, Wanderson de Sousa Mendes, Raul Roberto Poppiel, Yaser Ostovari, Luis Fernando Chimelo Ruiz, Natasha Valadares dos Santos, Carlos Eduardo Pellegrino Cerri, Sérgio Henrique Godinho Silva, Nilton Curi, Nélida Elizabet Quiñonez Silvero and José A. M. Demattê
Remote Sens. 2021, 13(11), 2223; https://doi.org/10.3390/rs13112223 - 7 Jun 2021
Cited by 33 | Viewed by 5247
Abstract
Soil organic carbon (SOC) stocks are a key property for soil and environmental monitoring, and the understanding of their dynamics in cropped soils must advance. The objective of this study was to determine the impact of temporal environmental controlling factors obtained by satellite [...] Read more.
Soil organic carbon (SOC) stocks are a key property for soil and environmental monitoring, and the understanding of their dynamics in cropped soils must advance. The objective of this study was to determine the impact of temporal environmental controlling factors, obtained from satellite images, on SOC stocks along the soil depth, using machine learning algorithms. The work was carried out in São Paulo state (Brazil) over an area of 2577 km2. We obtained a dataset of boreholes with soil analyses from topsoil to subsoil (0–100 cm). Additionally, remote sensing covariates (30 years of land use history, vegetation indexes), soil properties (i.e., clay, sand, mineralogy), soil types (classification), geology, climate and relief information were used. All covariates were tested against SOC stock contents to identify their impact. Afterwards, the abilities of the predictive models were tested by splitting the soil samples into two random groups (70% for training and 30% for model testing). We observed that the mean values of SOC stocks decreased with increasing depth in all land use and land cover (LULC) history classes. The results indicated that random forest with recursive feature elimination (RFE) was an accurate technique for predicting SOC stocks and finding controlling factors. We also found that the soil properties (especially clay and CEC), terrain attributes, geology, bioclimatic parameters and land use history were the most critical factors controlling the SOC stocks across all LULC histories and soil depths. We concluded that random forest coupled with RFE could be a functional approach to detect, map and monitor SOC stocks using environmental and remote sensing data. Full article
(This article belongs to the Special Issue Topsoil Characterization by Means of Remote Sensing)
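A minimal sketch of the random forest with recursive feature elimination (RFE) setup described in the abstract, using scikit-learn's RFECV with five-fold cross-validation; the covariate matrix and target below are synthetic placeholders for the paper's remote sensing covariates and SOC stocks.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.feature_selection import RFECV
from sklearn.model_selection import KFold, train_test_split

# Placeholder covariates (LULC history, vegetation indexes, clay, ...) and
# SOC stock target for one depth layer; real data come from the boreholes.
rng = np.random.default_rng(0)
X, y = rng.normal(size=(500, 40)), rng.normal(size=500)

# 70/30 split, as in the paper's model-testing protocol.
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

rf = RandomForestRegressor(n_estimators=500, random_state=0)
selector = RFECV(rf, step=1, cv=KFold(5),
                 scoring="neg_root_mean_squared_error")
selector.fit(X_tr, y_tr)

print("selected covariates:", selector.n_features_)
print("test R^2 of reduced model:", selector.score(X_te, y_te))
```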
Show Figures

Graphical abstract

Figure 1: Flowchart showing the location and the methodology applied in this study.
Figure 2: Frequency of alteration for agricultural use during 1985–2015, where a higher percentage indicates fewer LULC changes (a); frequency of alteration for forest use during 1985–2015 (b); LULC history map for 1985–2015 (c); location of the soil samples in each LULC history class (sample counts represent all samples along the depth profile) for the Piracicaba region (d).
Figure 3: Percentage of SOC stocks in the entire soil profile for each LULC history class.
Figure 4: Mean values of SOC stocks (g m−2) at 0–100 cm soil depth over the different LULC history classes (a); relative frequency of LULC (in percent) during 1985–2015 (b); pie chart of the percentage of area for each LULC history class (c).
Figure 5: RMSE values in five-fold cross-validation when selecting the number of important variables in each LULC history class and depth with the RFE algorithm: (a) Ag = 100, (b) Pas = 100, (c) Ag > 50–Pas < 50, (d) Ag > 50–Fo < 50, (e) total samples, (f) the number of important variables selected with RFE for each depth and each LULC history class.
Figure 6: Variable importance for SOC stock prediction at different depths for the Ag = 100 class by the RF method.
Figure 7: Variable importance for SOC stock prediction at different depths for the Pas = 100 class by RF.
Figure 8: Variable importance for SOC stock prediction at different depths for the Ag > 50–Pas < 50 class by RF.
Figure 9: Variable importance for SOC stock prediction at different depths for the Ag > 50–Fo < 50 class by RF.
Figure 10: Variable importance for SOC stock prediction at different depths for the total samples by RF.
Figure 11: Comparison of the importance (%) of all factors for SOC stock prediction at different depths in (a) Ag = 100, (b) Pas = 100, (c) Ag > 50–Pas < 50, (d) Ag > 50–Fo < 50, (e) total samples.
Figure 12: SOC stock maps for (a) layer 1 (0–10 cm) and (b) layer 10 (90–100 cm) from the bootstrapped (100 runs) lower, mean and upper predicted values and the CV (coefficient of variation, %).
14 pages, 6782 KiB  
Communication
Spatio-Temporal Distribution of Ground Deformation Due to 2018 Lombok Earthquake Series
by Sandy Budi Wibowo, Danang Sri Hadmoko, Yunus Isnaeni, Nur Mohammad Farda, Ade Febri Sandhini Putri, Idea Wening Nurani and Suhono Harso Supangkat
Remote Sens. 2021, 13(11), 2222; https://doi.org/10.3390/rs13112222 - 6 Jun 2021
Cited by 14 | Viewed by 4896
Abstract
Lombok Island in Indonesia was hit by four major earthquakes (6.4 Mw to 7 Mw) and by at least 818 earthquakes between 29 July and 31 August 2018. The aims of this study are to measure ground deformation due to the 2018 Lombok [...] Read more.
Lombok Island in Indonesia was hit by four major earthquakes (6.4 Mw to 7.0 Mw) and by at least 818 earthquakes in total between 29 July and 31 August 2018. The aims of this study are to measure the ground deformation due to the 2018 Lombok earthquake series and to map its spatio-temporal distribution. DInSAR was applied to produce interferograms and deformation maps, using time-series Sentinel-1 imagery as master and slave scenes for each of the four major earthquakes. The spatio-temporal distribution of the ground deformation was analyzed using a zonal statistics algorithm in GIS, focusing on the overlap between the raster layer of the deformation map and the polygon layer of six observation sites (Mataram City, Pamenang, Tampes, Sukadana, Sembalun, and Belanting). The results showed that the deformation includes both uplift and subsidence. The first 6.4 Mw foreshock, on 29 July 2018, produced minimal uplift on the island. The 7.0 Mw mainshock on 5 August 2018 caused extreme uplift at the northern shore. The 6.2 Mw aftershock on 9 August 2018 generated subsidence throughout the study area. The final 6.9 Mw earthquake on 19 August 2018 initiated massive uplift in the study area and extreme uplift at the northeastern shore. The highest uplift reaches 0.713 m at the northern shore, while the deepest subsidence, measured at the northwestern shore, is −0.338 m. The dominance of deformation in the northern area of Lombok Island indicates movement of the Back Arc Thrust north of the island. The output of this study would be valuable to local authorities for evaluating the impacts of past earthquakes and for designing mitigation strategies against earthquake-induced ground displacement. Full article
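The zonal statistics step can be sketched with the rasterstats package, assuming a DInSAR-derived vertical displacement GeoTIFF and a polygon layer of the six observation sites; the file names are placeholders, not the authors' data.

```python
from rasterstats import zonal_stats

# Summarize the deformation raster over each observation-site polygon
# (Mataram City, Pamenang, Tampes, Sukadana, Sembalun, Belanting).
stats = zonal_stats(
    "observation_sites.shp",       # placeholder polygon layer
    "vertical_displacement.tif",   # placeholder DInSAR deformation map (m)
    stats=["min", "max", "mean"],
)
for site_stats in stats:
    print(site_stats["min"], site_stats["max"], site_stats["mean"])
```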
Show Figures

Figure 1: Epicentres of the major earthquakes on Lombok Island.
Figure 2: Geological map of Lombok Island (adapted from [27]). The Back Arc Thrust is located just off the northern shore [1]. The photos of structural damage to buildings were taken in the alluvial area of Mataram City.
Figure 3: Methodology of this study.
Figure 4: Interferograms of the earthquakes on 29 July, 5 August, 9 August, and 19 August 2018.
Figure 5: Spatial distribution of the vertical displacement due to the earthquakes on 29 July, 5 August, 9 August, and 19 August 2018.
Figure 6: Summary of the displacement after each earthquake in the selected areas (Mataram City, Pamenang, Tampes, Sukadana, Sembalun, and Belanting).
26 pages, 6427 KiB  
Article
An Efficient Approach Based on Privacy-Preserving Deep Learning for Satellite Image Classification
by Munirah Alkhelaiwi, Wadii Boulila, Jawad Ahmad, Anis Koubaa and Maha Driss
Remote Sens. 2021, 13(11), 2221; https://doi.org/10.3390/rs13112221 - 6 Jun 2021
Cited by 73 | Viewed by 7126
Abstract
Satellite images have drawn increasing interest from a wide variety of users, including business and government, ever since their increased usage in important fields ranging from weather, forestry and agriculture to surface changes and biodiversity monitoring. Recent updates in the field have also [...] Read more.
Satellite images have drawn increasing interest from a wide variety of users, including business and government, given their growing usage in important fields ranging from weather, forestry and agriculture to surface changes and biodiversity monitoring. Recent updates in the field have also introduced various deep learning (DL) architectures to satellite imagery as a means of extracting useful information. However, this new approach comes with its own issues, including the fact that many users utilize ready-made cloud services (both public and private) in order to take advantage of built-in DL algorithms and thus avoid the complexity of developing their own DL architectures. This presents new challenges in protecting data against unauthorized access and against the mining and usage of sensitive information extracted from that data, raising new privacy concerns regarding sensitive data in satellite images. This research proposes an efficient approach that takes advantage of privacy-preserving deep learning (PPDL)-based techniques to address privacy concerns regarding data from satellite images when applying public DL models. In this paper, we apply a partially homomorphic encryption scheme (the Paillier scheme), which enables processing of confidential information without exposing the underlying data. Our method achieves robust results when applied to a custom convolutional neural network (CNN) as well as to existing transfer learning methods. The proposed encryption scheme also allows for training CNN models on encrypted data directly, which incurs lower computational overhead. Our experiments were performed on a real-world dataset covering several regions across Saudi Arabia. The results demonstrate that our CNN-based models were able to retain data utility while maintaining data privacy. Security parameters such as the correlation coefficient (−0.004), entropy (7.95), energy (0.01), contrast (10.57), number of pixel change rate (4.86), unified average change intensity (33.66), and more favor our proposed encryption scheme. To the best of our knowledge, this research is also one of the first studies to apply PPDL-based techniques to satellite image data in any capacity. Full article
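The additive homomorphism that makes the Paillier scheme suitable for this setting can be demonstrated with the open-source python-paillier (phe) package; the key size and values below are illustrative, not the paper's configuration.

```python
from phe import paillier

public_key, private_key = paillier.generate_paillier_keypair(n_length=2048)

# Encrypt two pixel values; arithmetic is carried out on the ciphertexts,
# so the server never sees the underlying data.
c1 = public_key.encrypt(0.37)
c2 = public_key.encrypt(0.21)
c_sum = c1 + c2        # homomorphic addition of two ciphertexts
c_scaled = c1 * 3      # multiplication by a plaintext scalar

print(private_key.decrypt(c_sum))     # ~0.58
print(private_key.decrypt(c_scaled))  # ~1.11
```

Only linear operations (sums and plaintext-scalar products, the building blocks of convolutions) are supported under a partially homomorphic scheme, which is why it pairs naturally with CNN feature extraction.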
Show Figures

Figure 1: Basic CNN architecture [16].
Figure 2: Homomorphic encryption technique [30].
Figure 3: The proposed workflow.
Figure 4: Sample of the dataset.
Figure 5: (a) Bare soil class; (b) road class; (c) urban class; (d) vegetation class.
Figure 6: Sample image data augmentation: (a) rotation; (b) zoom; (c) shearing; (d) horizontal shift; (e) vertical shift; (f) brightness; (g) horizontal flip; (h) vertical flip.
Figure 7: Model accuracy on plain data: (a) training accuracy; (b) validation accuracy.
Figure 8: Model accuracy on encrypted data: (a) training accuracy; (b) validation accuracy.
Figure 9: Training accuracy of different CNN models.
Figure A1: Secret sharing technique [34].
Figure A2: Secure multi-party computation technique [35].
Figure A3: Differential privacy technique. Available online: https://medium.com/ydata-ai/differential-privacy-a-brief-introduction-fee4756d19e (accessed on 6 June 2021).
20 pages, 14114 KiB  
Article
Enhancement of Detecting Permanent Water and Temporary Water in Flood Disasters by Fusing Sentinel-1 and Sentinel-2 Imagery Using Deep Learning Algorithms: Demonstration of Sen1Floods11 Benchmark Datasets
by Yanbing Bai, Wenqi Wu, Zhengxin Yang, Jinze Yu, Bo Zhao, Xing Liu, Hanfang Yang, Erick Mas and Shunichi Koshimura
Remote Sens. 2021, 13(11), 2220; https://doi.org/10.3390/rs13112220 - 5 Jun 2021
Cited by 59 | Viewed by 8541
Abstract
Identifying permanent water and temporary water in flood disasters efficiently has mainly relied on change detection methods applied to multi-temporal remote sensing imagery, but estimating the water type in flood disaster events from only post-flood remote sensing imagery remains challenging. Research progress in [...] Read more.
Identifying permanent water and temporary water in flood disasters efficiently has mainly relied on change detection methods applied to multi-temporal remote sensing imagery, but estimating the water type in flood disaster events from only post-flood remote sensing imagery remains challenging. Research progress in recent years has demonstrated the excellent potential of multi-source data fusion and deep learning algorithms for improving flood detection, but the field is still in its early stages due to the lack of large-scale labelled remote sensing images of flood events. Here, we present new deep learning algorithms and a multi-source data fusion driven flood inundation mapping approach that leverage the large-scale, publicly available Sen1Floods11 dataset, consisting of roughly 4831 labelled Sentinel-1 SAR and Sentinel-2 optical image pairs gathered from flood events worldwide in recent years. Specifically, we propose an automatic segmentation method for surface water, permanent water, and temporary water identification, with all tasks sharing the same convolutional neural network architecture. We utilize focal loss to deal with the (water/non-water) class imbalance problem. Thorough ablation experiments and analysis confirmed the effectiveness of the various proposed designs. In comparison experiments, the method proposed in this paper is superior to other classical models. Our model achieves a mean Intersection over Union (mIoU) of 52.99%, Intersection over Union (IoU) of 52.30%, and Overall Accuracy (OA) of 92.81% on the Sen1Floods11 test set. On the Sen1Floods11 Bolivia test set, our model also achieves very high mIoU (47.88%), IoU (76.74%), and OA (95.59%), showing good generalization ability. Full article
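A minimal sketch of the binary focal loss used to handle the water/non-water imbalance, written for PyTorch; the alpha and gamma values are common defaults from the focal loss literature, not necessarily the paper's settings.

```python
import torch
import torch.nn.functional as F

def binary_focal_loss(logits, targets, alpha=0.25, gamma=2.0):
    """Focal loss for (N, H, W) logits and {0, 1} targets."""
    bce = F.binary_cross_entropy_with_logits(logits, targets,
                                             reduction="none")
    p_t = torch.exp(-bce)                  # probability of the true class
    alpha_t = alpha * targets + (1 - alpha) * (1 - targets)
    # Down-weight easy examples by (1 - p_t)^gamma so training focuses
    # on the rare, hard (water) pixels.
    return (alpha_t * (1 - p_t) ** gamma * bce).mean()
```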
Show Figures

Graphical abstract

Figure 1: Sample locations of the flood event data.
Figure 2: Example of all the bands.
Figure 3: Illustration of the water labels: (a) example of hand-labelled data for all water; (b) example of hand-labelled data for permanent water; (c) illustration of temporary water, permanent water, and all water.
Figure 4: Flowchart of the water type detection proposed in this work.
Figure 5: Architecture of the BASNet used in this study.
Figure 6: Qualitative comparison of the proposed method with other methods. Each row represents one image and the corresponding flood maps (binary maps for the Otsu method; probability maps for FCN-ResNet50, Deeplab, U2Net, and our model). Each column represents one method. Results are shown for all water (AW), permanent water (PW), and temporary water (TW).
Figure 7: Qualitative comparison of the proposed method with other methods, with rows, columns and water classes as in Figure 6.
Full article ">
23 pages, 10133 KiB  
Article
SAMIRA-SAtellite Based Monitoring Initiative for Regional Air Quality
by Kerstin Stebel, Iwona S. Stachlewska, Anca Nemuc, Jan Horálek, Philipp Schneider, Nicolae Ajtai, Andrei Diamandi, Nina Benešová, Mihai Boldeanu, Camelia Botezan, Jana Marková, Rodica Dumitrache, Amalia Iriza-Burcă, Roman Juras, Doina Nicolae, Victor Nicolae, Petr Novotný, Horațiu Ștefănie, Lumír Vaněk, Ondrej Vlček, Olga Zawadzka-Manko and Claus Zehner
Remote Sens. 2021, 13(11), 2219; https://doi.org/10.3390/rs13112219 - 5 Jun 2021
Cited by 10 | Viewed by 5656
Abstract
The satellite based monitoring initiative for regional air quality (SAMIRA) was set up to demonstrate the exploitation of existing satellite data for monitoring regional and urban scale air quality. The project was carried out between May 2016 and December 2019 and focused [...] Read more.
The satellite based monitoring initiative for regional air quality (SAMIRA) was set up to demonstrate the exploitation of existing satellite data for monitoring regional and urban scale air quality. The project was carried out between May 2016 and December 2019 and focused on aerosol optical depth (AOD), particulate matter (PM), nitrogen dioxide (NO2), and sulfur dioxide (SO2). SAMIRA was built around several research tasks: 1. The spinning enhanced visible and infrared imager (SEVIRI) AOD optimal estimation algorithm was improved and geographically extended from Poland to Romania, the Czech Republic and Southern Norway. A near real-time retrieval was implemented and is currently operational. Correlation coefficients of 0.61 and 0.62 were found between SEVIRI AOD and ground-based sun-photometer AOD for Romania and Poland, respectively. 2. A retrieval of ground-level PM2.5 concentrations was implemented using the SEVIRI AOD in combination with WRF-Chem output. For representative sites, correlations of 0.56 and 0.49 between satellite-based PM2.5 and in situ PM2.5 were found for Poland and the Czech Republic, respectively. 3. An operational data fusion algorithm was extended to make use of various satellite-based air quality products (NO2, SO2, AOD, PM2.5 and PM10). For the Czech Republic, the inclusion of satellite data improved the mapping of NO2 in rural areas and, on an annual basis, in urban background areas. It slightly improved the mapping of rural and urban background SO2. The use of satellite-based AOD or PM2.5 improved the mapping results for PM2.5 and PM10. 4. A geostatistical downscaling algorithm for satellite-based air quality products was developed to bridge the gap towards urban-scale applications. Initial testing on synthetic data was followed by applying the algorithm to OMI NO2 data with a direct comparison against high-resolution TROPOMI NO2 as a reference, thus allowing a quantitative assessment of the algorithm's performance and demonstrating significant accuracy improvements after downscaling. We conclude that SAMIRA demonstrated the added value of using satellite data for regional- and urban-scale air quality monitoring. Full article
(This article belongs to the Special Issue The Future of Air Quality Monitoring by Remote Sensing)
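The downscaling idea in task 4 can be caricatured as proxy-based regression with residual correction: regress the coarse field on a coarse-aggregated high-resolution proxy, apply the fit at fine scale, and add back interpolated residuals so the result stays consistent with the coarse observations. The sketch below is only an assumption-laden illustration of that generic scheme on synthetic grids, not the SAMIRA algorithm itself.

```python
import numpy as np
from scipy.ndimage import zoom

factor = 5                              # e.g., 0.25 deg -> 0.05 deg
coarse_no2 = np.random.rand(20, 20)     # OMI-like field (placeholder)
fine_proxy = np.random.rand(100, 100)   # high-resolution proxy (placeholder)

# Aggregate the proxy to the coarse grid by block averaging.
proxy_coarse = fine_proxy.reshape(20, factor, 20, factor).mean(axis=(1, 3))

# Linear fit between the coarse field and the aggregated proxy.
a, b = np.polyfit(proxy_coarse.ravel(), coarse_no2.ravel(), 1)

# Apply the fit at fine scale, then add bilinearly interpolated residuals,
# which keeps the downscaled field consistent with the coarse observations.
residual = coarse_no2 - (a * proxy_coarse + b)
fine_no2 = a * fine_proxy + b + zoom(residual, factor, order=1)
```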
Show Figures

Graphical abstract

Figure 1: SAMIRA overview workflow diagram. Activities marked in green made use of satellite data. For more details see Section 2.1, Section 2.2, Section 2.3, Section 2.4 and Section 2.5.
Figure 2: WRF-Chem model domains at 5 km (blue) and 1 km (red) horizontal resolution. Bold areas indicate regions for which exemplary results are discussed in the text.
Figure 3: Methodology used in SAMIRA for AOD to PM2.5 conversion: data flow of the algorithm.
Figure 4: Data fusion process used in SAMIRA: regression-interpolation-merging mapping.
Figure 5: General concept of the SAMIRA downscaling methodology. Green boxes indicate input data, white boxes intermediate datasets, blue boxes processing steps, and the orange box the final output.
Figure 6: SEVIRI NRT AOD (A–D) and AOD uncertainty maps (E–H) over Poland for every fourth retrieval in the morning of 5 June 2019, at 05:00 UTC (A,E), 06:00 UTC (B,F), 07:00 UTC (C,G), and 08:00 UTC (D,H). The pixel resolution is 5.5 × 5.5 km2 and each map represents 15 min.
Figure 7: For 5 June 2019, backward trajectories for 12:00 UTC calculated with the HySplit model (A), the Warsaw PollyXT lidar signal (B), and multi-wavelength AOD from the Warsaw sun-photometer (C).
Figure 8: AOD (panel A), conversion factor (panel B) and PM2.5 (panel C) maps calculated for Poland, 17 September 2014 07:00 UTC. For comparison, the lower panels show the PM2.5 output from WRF-Chem (panel D) and the difference between the calculated and the modeled PM2.5 (panel E).
Figure 9: Hourly NRT air quality maps for the Czech Republic. (A): NO2 map for 15 August 2019 15:00 UTC; (B): SO2 map for 15 August 2019 15:00 UTC; (C): PM2.5 for 23 August 2019 07:00 UTC; (D): PM10 for 23 August 2019 07:00 UTC. Grey areas in the Czech Republic show regions with no satellite data due to cloud coverage.
Figure 10: Real-world validation of the downscaling method using TROPOMI as a high-resolution reference for the Ostrava/Katowice area for July through September 2018. Panel A shows the original OMI data (gridded at 0.25° × 0.25°). Panel B shows the result of downscaling the OMI data using the QUARK NO2 dataset (https://ec.europa.eu/environment/air/pdf/NO2%20exposure%20technical%20manual.pdf, accessed on 30 October 2018) as a proxy. For a direct comparison, panel C shows the original TROPOMI data gridded to a spatial resolution of 0.05° × 0.05°. Panel D shows the relative difference between the downscaled OMI data and the TROPOMI data.
Figure 11: Illustration of land cover and land use within a 5 km × 5 km SEVIRI pixel. (A): urban area; (B): coastal site.
Figure 12: Scatterplots comparing the original OMI NO2 product (panel A) and the downscaled OMI NO2 product (panel B) against the TROPOMI NO2 product [in 10^15 molecules cm−2] for the area of the Czech Republic for July through September 2018. Due to its coarse resolution, a pixel of the original OMI product represents multiple TROPOMI pixels, which explains the striped patterns in panel A.
Figure 13: SAMIRA Map Viewer showing hourly averaged NRT NO2 for 23 November 2019 12:00 over the Czech Republic.
Full article ">
24 pages, 48673 KiB  
Article
MDCwFB: A Multilevel Dense Connection Network with Feedback Connections for Pansharpening
by Weisheng Li, Minghao Xiang and Xuesong Liang
Remote Sens. 2021, 13(11), 2218; https://doi.org/10.3390/rs13112218 - 5 Jun 2021
Cited by 4 | Viewed by 2203
Abstract
In most practical applications of remote sensing images, high-resolution multispectral images are needed. Pansharpening aims to generate high-resolution multispectral (MS) images from the input of high spatial resolution single-band panchromatic (PAN) images and low spatial resolution multispectral images. Inspired by the remarkable results [...] Read more.
In most practical applications of remote sensing images, high-resolution multispectral images are needed. Pansharpening aims to generate high-resolution multispectral (MS) images from the input of high spatial resolution single-band panchromatic (PAN) images and low spatial resolution multispectral images. Inspired by the remarkable results of other researchers in deep-learning-based pansharpening, we propose a multilevel dense connection network with a feedback connection. Our network consists of four parts. The first part consists of two identical subnetworks that extract features from the PAN and MS images. The second part is a multilevel feature fusion and recovery network, which fuses the images in the feature domain and encodes and decodes features at different levels so that the network can fully capture information at different levels. The third part is a continuous feedback operation, which refines shallow features through feedback. The fourth part is an image reconstruction network that recovers high-quality images by making full use of the multistage decoding features through dense connections. Experiments on different satellite datasets show that our proposed method is superior to existing methods in both subjective visual evaluation and objective evaluation indicators. Compared with the results of other models, our results achieve significant gains in the objective index values used to measure the spectral quality and spatial details of the generated images, namely the spectral angle mapper (SAM), the relative dimensionless global error in synthesis (ERGAS), and structural similarity (SSIM). Full article
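Of the quality indices cited, the spectral angle mapper (SAM) is the most self-contained; a minimal sketch, assuming (H, W, bands) arrays for the fused and reference images:

```python
import numpy as np

def sam_degrees(fused, reference, eps=1e-12):
    """Mean per-pixel spectral angle (degrees) between two (H, W, B) images."""
    dot = np.sum(fused * reference, axis=-1)
    norms = (np.linalg.norm(fused, axis=-1)
             * np.linalg.norm(reference, axis=-1))
    # Clip to guard against rounding outside the valid arccos domain.
    angles = np.arccos(np.clip(dot / (norms + eps), -1.0, 1.0))
    return float(np.degrees(angles).mean())
```

A SAM of 0 degrees means the fused spectra point in exactly the same direction as the reference spectra, i.e., no spectral distortion.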
Show Figures

Figure 1: Detailed structure of the proposed multistage densely connected network with feedback connection. Red lines denote feedback connections.
Figure 2: Specific structure of each subnet.
Figure 3: Multi-scale feature extraction block with attention mechanism. The left side shows the complete structure of the module; the right side shows the specific structure of the four branches with different receptive fields.
Figure 4: Structure of the proposed residual block and multilevel feature fusion recovery block.
Figure 5: Results on the QuickBird dataset with four bands (256 × 256 pixels): (a) reference image; (b) DWT; (c) GLP; (d) GS; (e) HPF; (f) PPXS; (g) PNN; (h) DRPNN; (i) PanNet; (j) ResTFNet; (k) TPNwFB; (l) ours.
Figure 6: Results on the WorldView-2 dataset with eight bands (256 × 256 pixels): (a) reference image; (b) DWT; (c) GLP; (d) GS; (e) HPF; (f) PPXS; (g) PNN; (h) DRPNN; (i) PanNet; (j) ResTFNet; (k) TPNwFB; (l) ours.
Figure 7: Results on the WorldView-3 dataset with eight bands (256 × 256 pixels): (a) reference image; (b) DWT; (c) GLP; (d) GS; (e) HPF; (f) PPXS; (g) PNN; (h) DRPNN; (i) PanNet; (j) ResTFNet; (k) TPNwFB; (l) ours.
Figure 8: Results on the Ikonos dataset with four bands (256 × 256 pixels): (a) reference image; (b) DWT; (c) GLP; (d) GS; (e) HPF; (f) PPXS; (g) PNN; (h) DRPNN; (i) PanNet; (j) ResTFNet; (k) TPNwFB; (l) ours.
Figure 9: Results on the QuickBird real dataset with four bands (256 × 256 pixels): (a) reference image; (b) DWT; (c) GLP; (d) GS; (e) HPF; (f) PPXS; (g) PNN; (h) DRPNN; (i) PanNet; (j) ResTFNet; (k) TPNwFB; (l) ours.
16 pages, 4871 KiB  
Article
Physical Retrieval of Rain Rate from Ground-Based Microwave Radiometry
by Wenyue Wang, Klemens Hocke and Christian Mätzler
Remote Sens. 2021, 13(11), 2217; https://doi.org/10.3390/rs13112217 - 5 Jun 2021
Cited by 13 | Viewed by 3140
Abstract
Because of their clear physical meaning, physical methods are often used with space-borne microwave radiometers to retrieve the rain rate, but they are rarely used with ground-based microwave radiometers, which are very sensitive to rainfall. In this article, an opacity physical retrieval [...] Read more.
Because of their clear physical meaning, physical methods are often used with space-borne microwave radiometers to retrieve the rain rate, but they are rarely used with ground-based microwave radiometers, which are very sensitive to rainfall. In this article, an opacity-based physical retrieval method (denoted Opa-RR) is implemented to retrieve the rain rate from ground-based microwave radiometer data (21.4 and 31.5 GHz) of the tropospheric water radiometer (TROWARA) at Bern, Switzerland, from 2005 to 2019. Opa-RR first establishes a direct connection between the rain rate and the enhanced atmospheric opacity during rain, then, based on the radiative transfer equation, iteratively adjusts the rain effective temperature to determine the rain opacity, and finally estimates the rain rate. These estimates are compared with the available simultaneous rain rates derived from rain gauge data and reanalysis data (ERA5). The results and the intercomparison demonstrate that, during moderate rain and at the 31 GHz channel, the Opa-RR method was close to the actual situation and capable of rain rate estimation. In addition, Opa-RR can reproduce the changes in cumulative rain over time (day, month, and year) well, and the monthly rain rate estimation is superior, with a rain-gauge-validated R2 of 0.77 and a root-mean-square error of 22.46 mm/month. Compared with ERA5, Opa-RR at 31 GHz achieves a competitive performance. Full article
(This article belongs to the Special Issue Remote Sensing for Precipitation Retrievals)
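The opacity step at the heart of Opa-RR follows from a single-layer radiative transfer model, TB = T_eff(1 - exp(-tau)) + T_bg exp(-tau), inverted for the zenith opacity tau; a minimal sketch with illustrative numbers (not the TROWARA calibration):

```python
import numpy as np

def zenith_opacity(tb, t_eff, t_bg=2.7):
    """Invert TB = T_eff*(1 - exp(-tau)) + T_bg*exp(-tau) for tau (nepers)."""
    return -np.log((t_eff - tb) / (t_eff - t_bg))

tau_total = zenith_opacity(tb=120.0, t_eff=280.0)   # measured during rain
tau_norain = zenith_opacity(tb=40.0, t_eff=280.0)   # interpolated baseline
tau_rain = tau_total - tau_norain                   # rain-induced enhancement
```

The rain rate is then estimated from tau_rain, with the rain effective temperature adjusted iteratively as described in the abstract.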
Show Figures

Figure 1: Example of the total zenith opacity (red), the non-rain zenith opacity (blue), and the ExWi rain gauge cumulative rain (green) versus time from 20 to 25 December 2019: (a) the total zenith opacity measured by TROWARA at 21 GHz; (b) the total zenith opacity measured by TROWARA at 31 GHz.
Figure 2: Radiative transfer process of the ground-based microwave radiometer: (a) during non-rainfall periods; (b) with the assumed additional rain layer during rainfall.
Figure 3: Examples of cumulative rain versus time for Opa-RR at 21 GHz (solid red) and at 31 GHz (dashed blue), the ExWi rain gauge (solid green), and the ExWi optical rain sensor (dashed green). The black and grey lines are the total and non-rain zenith opacities, respectively. (a) Heavy rain on 5 May 2007; (b) moderate rain on 15 April 2006; (c) light rain on 7 January 2007; (d) heavy rain on 31 May 2007; (e) moderate rain on 4 December 2006; (f) light rain on 15 February 2007; (g) 8-day stratiform rain from 24 February 2007 to 3 March 2007.
Figure 4: Scatter plots of the daily rain rates estimated by Opa-RR and provided by ERA5 versus those measured by the ExWi rain gauge over 2005–2018 (a–c) and by the Zimmerwald rain gauge over 2008–2019 (d–f) in Bern. The solid black line shows the x = y line, and the red dashed line the linear regression fit. The color shows the density of the data distribution, calculated with Gaussian kernels.
Figure 5: Same as Figure 4, but for the monthly rain rate in Bern.
Figure 6: Monthly time series of the rain rate for Opa-RR at 21 GHz (solid red) and 31 GHz (solid yellow), the rain gauges (dashed green), and ERA5 (dashed blue) in Bern. Months without rain or with missing data are assigned 0. (a) Comparison with the ExWi rain gauge; (b) comparison with the Zimmerwald rain gauge.
Figure 7: Same as Figure 6, but for the annual time series of the rain rate in Bern.
21 pages, 8829 KiB  
Article
Improved Transformer Net for Hyperspectral Image Classification
by Yuhao Qing, Wenyi Liu, Liuyan Feng and Wanjia Gao
Remote Sens. 2021, 13(11), 2216; https://doi.org/10.3390/rs13112216 - 5 Jun 2021
Cited by 135 | Viewed by 7217
Abstract
In recent years, deep learning has been successfully applied to hyperspectral image (HSI) classification problems, with several convolutional neural network (CNN)-based models achieving an appealing classification performance. However, due to the multi-band nature and the data redundancy of the hyperspectral data, the [...] Read more.
In recent years, deep learning has been successfully applied to hyperspectral image (HSI) classification problems, with several convolutional neural network (CNN)-based models achieving an appealing classification performance. However, due to the multi-band nature and the data redundancy of the hyperspectral data, the CNN model underperforms in such a continuous data domain. Thus, in this article, we propose an end-to-end transformer model, entitled SAT Net, that is appropriate for HSI classification and relies on the self-attention mechanism. The proposed model uses a spectral attention mechanism and a self-attention mechanism to extract the spectral and spatial features of the HSI, respectively. Initially, after passing through the spectral attention module, the original HSI data are remapped into multiple vectors containing a series of planar 2D patches. Each vector is then compressed by a linear transformation to the desired sequence length. During this process, we add a position-encoding vector and a learnable embedding vector to capture the long-range relationships of the continuous spectrum in the HSI. Then, we employ several multi-head self-attention modules to extract the image features and complete the proposed network with a residual structure to alleviate the gradient vanishing and over-fitting problems. Finally, we employ a multilayer perceptron for the HSI classification. We evaluate SAT Net on three publicly available hyperspectral datasets and compare our classification performance against five current classification methods using several metrics, i.e., overall and average classification accuracy and the Kappa coefficient. Our trials demonstrate that SAT Net attains a competitive classification performance, highlighting that a self-attention transformer network is appealing for HSI classification. Full article
(This article belongs to the Special Issue Machine Learning for Remote Sensing Image/Signal Processing)
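One encoder step of the kind SAT Net stacks (layer norm, multi-head self-attention, residual connection) can be sketched with PyTorch's built-in module; the dimensions below are illustrative, not the paper's configuration.

```python
import torch
import torch.nn as nn

embed_dim, num_heads, seq_len = 64, 8, 100      # placeholder sizes
attn = nn.MultiheadAttention(embed_dim, num_heads, batch_first=True)
norm = nn.LayerNorm(embed_dim)

x = torch.randn(2, seq_len, embed_dim)          # (batch, patches, embedding)
h = norm(x)
out, _ = attn(h, h, h)                          # self-attention: Q = K = V
x = x + out                                     # residual connection
```

Because every output vector attends to all input vectors, such a block captures the long-range spectral relationships that local CNN kernels miss, which is the motivation given in the abstract.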
Show Figures

Graphical abstract

Figure 1: Spectral attention mechanism. The module uses operations such as maximum pooling, average pooling, and shared weights to re-output feature maps with different weights.
Figure 2: Multi-head self-attention structure: after mapping, linear transformation, matrix operations, and other steps, the output sequence has the same length as the input sequence, and each output vector depends on all input vectors.
Figure 3: Transformer encoder block. This module is composed of norm, multi-head self-attention, dense, and other structures connected in residual form.
Figure 4: The proposed SAT Net architecture. The preprocessed HSI data are input into the spectral attention and encoder modules with multi-head self-attention to extract HSI features. The encoder modules are connected through a multi-layer residual structure, effectively reducing information loss, and the classification result is finally output through the fully connected layer.
Figure 5: Salinas images: (a) pseudo-color image; (b) ground-truth labels.
Figure 6: Indian Pines images: (a) pseudo-color image; (b) ground-truth labels.
Figure 7: University of Pavia images: (a) pseudo-color image; (b) ground-truth labels.
Figure 8: Overall classification accuracy per dataset under various encoder block sizes.
Figure 9: Overall accuracy per dataset under different training set proportions.
Figure 10: Overall classification accuracy on the three datasets at different learning rates.
Figure 11: Overall accuracy curves of the different models on the SA dataset.
Figure 12: Overall accuracy curves of the different models on the IN dataset.
Figure 13: Overall accuracy curves of the different models on the UP dataset.
Figure 14: Classification maps on the SA dataset for (a) CNN, (b) SA-MCN, (c) 3D-CNN, (d) SSRN, (e) MSRN, and (f) the proposed SAT Net.
Figure 15: Classification maps on the IN dataset for (a) CNN, (b) SA-MCN, (c) 3D-CNN, (d) SSRN, (e) MSRN, and (f) the proposed SAT Net.
Figure 16: Classification maps on the UP dataset for (a) CNN, (b) SA-MCN, (c) 3D-CNN, (d) SSRN, (e) MSRN, and (f) the proposed SAT Net.
Figure 17: Partial results for (a) MSRN and (b) SAT Net, (c) MSRN and (d) SAT Net, and (e) MSRN and (f) SAT Net on the UP dataset.
18 pages, 18659 KiB  
Article
Modify the Accuracy of MODIS PWV in China: A Performance Comparison Using Random Forest, Generalized Regression Neural Network and Back-Propagation Neural Network
by Zhaohui Xiong, Xiaogong Sun, Jizhang Sang and Xiaomin Wei
Remote Sens. 2021, 13(11), 2215; https://doi.org/10.3390/rs13112215 - 5 Jun 2021
Cited by 16 | Viewed by 2728
Abstract
Water vapor plays an important role in climate change and water cycling, but there are few water vapor products with both high spatial resolution and high accuracy that effectively monitor changes in water vapor. The high-precision Global Navigation Satellite System (GNSS) [...] Read more.
Water vapor plays an important role in climate change and water cycling, but there are few water vapor products with both high spatial resolution and high accuracy that effectively monitor changes in water vapor. The high-precision Global Navigation Satellite System (GNSS) Precipitable Water Vapor (PWV) is often used to calibrate the high-spatial-resolution Moderate-resolution Imaging Spectroradiometer (MODIS) PWV to produce a PWV product with both high accuracy and high spatial resolution. Machine learning methods perform well in this calibration task, but the accuracy improvement differs between methods and between modeling timescales. In this article, we use three machine learning methods, namely random forest (RF), generalized regression neural network (GRNN), and back-propagation neural network (BPNN), to calibrate MODIS PWV in 2019 at annual and monthly timescales, with the multiple linear regression (MLR) method used for comparison. The root-mean-square errors (RMSs) at the annual timescale for the three machine learning methods are 4.1 mm (BPNN), 3.3 mm (RF), and 3.9 mm (GRNN), and the average RMSs decrease to 2.9 mm (BPNN), 2.8 mm (RF), and 2.5 mm (GRNN) at the monthly timescale. These results are all better than those of the MLR method (5.0 mm at the annual timescale and 4.6 mm at the monthly timescale). When there is an obvious variation pattern in the training sample, the RF method can capture that pattern to achieve the best results, as shown by its best performance at the annual timescale. Dividing such samples into several sub-samples, each with higher internal consistency, can further improve the performance of the machine learning methods, especially the GRNN: the GRNN achieves the best performance at the monthly timescale, and all three machine learning methods perform better at the monthly timescale than at the annual one. The spatial and temporal variation patterns of the RMS values are significantly weakened after modeling for all three machine learning methods. Full article
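A minimal sketch of the calibration setup: learn a mapping from MODIS PWV (plus auxiliary predictors) to collocated GNSS PWV with a random forest, then apply it to correct MODIS PWV at test locations. The predictors and synthetic data below are placeholders, not the paper's inputs.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(1)
# Placeholder columns: MODIS PWV, latitude, longitude, elevation, day of year.
X = rng.normal(size=(1000, 5))
y_gnss = 0.9 * X[:, 0] + rng.normal(scale=0.3, size=1000)  # synthetic GNSS PWV

rf = RandomForestRegressor(n_estimators=300, random_state=0)
rf.fit(X[:700], y_gnss[:700])                 # train on collocated pairs
pwv_cal = rf.predict(X[700:])                 # calibrated MODIS PWV
rms = np.sqrt(mean_squared_error(y_gnss[700:], pwv_cal))
print(f"RMS after calibration: {rms:.2f}")
```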
Show Figures

Figure 1: Research area and distribution of GNSS stations.
Figure 2: MODIS–GNSS PWV BPNN calibration model.
Figure 3: MODIS–GNSS PWV RF calibration model.
Figure 4: MODIS–GNSS PWV GRNN calibration model.
Figure 5: Scatter plots of modified MODIS PWV (annual model) against observed (GNSS) PWV for all samples.
Figure 6: Scatter plots of modified MODIS PWV (monthly model) against observed (GNSS) PWV for all samples in 2019: (a) BPNN results; (b) GRNN results; (c) RF results; (d) MLR results.
Figure 7: Daily accuracy of the original and modified MODIS PWV.
Figure 8: PWV biases at GNSS sites, based on the monthly model results.
Figure 9: PWV STDs at GNSS sites, based on the monthly model results.
Figure 10: PWV RMSs at GNSS sites, based on the monthly model results.
27 pages, 15625 KiB  
Article
A Burned Area Mapping Algorithm for Sentinel-2 Data Based on Approximate Reasoning and Region Growing
by Matteo Sali, Erika Piaser, Mirco Boschetti, Pietro Alessandro Brivio, Giovanna Sona, Gloria Bordogna and Daniela Stroppiana
Remote Sens. 2021, 13(11), 2214; https://doi.org/10.3390/rs13112214 - 5 Jun 2021
Cited by 12 | Viewed by 4512
Abstract
Sentinel-2 (S2) multi-spectral instrument (MSI) images are used in an automated approach built on fuzzy set theory and a region growing (RG) algorithm to identify areas affected by fires in Mediterranean regions. S2 spectral bands and their post- and pre-fire date (Δpost-pre [...] Read more.
Sentinel-2 (S2) multi-spectral instrument (MSI) images are used in an automated approach built on fuzzy set theory and a region growing (RG) algorithm to identify areas affected by fires in Mediterranean regions. S2 spectral bands and their post- minus pre-fire date (Δpost-pre) differences are interpreted as evidence of burn through soft constraints of membership functions defined from the statistics of burned/unburned training regions; the evidence of burn brought by the individual S2 spectral bands (partial evidence) is integrated using ordered weighted averaging (OWA) operators that provide synthetic score layers of the likelihood of burn (global evidence of burn), which are combined in an RG algorithm. The algorithm is defined over a training site located in Italy, Vesuvius National Park, where the membership functions are defined and the OWA and RG algorithms are first tested. Over this site, validation is carried out by comparison with reference fire perimeters derived from supervised classification of very high-resolution (VHR) PlanetScope images, leading to more than satisfactory results with a Dice coefficient > 0.84, commission error < 0.22 and omission error < 0.15. The algorithm is then tested for exportability over five sites in Portugal (1), Spain (2) and Greece (2), evaluating performance by comparison with fire reference perimeters derived from the Copernicus Emergency Management Service (EMS) database. At these sites, we estimate a commission error < 0.15, omission error < 0.1 and Dice coefficient > 0.9, with accuracy in some cases greater than the values obtained at the training site. Regression analysis confirmed the satisfactory accuracy levels achieved over all sites. The proposed algorithm has the advantage of depending only weakly on a priori/supervised selection, both of the input bands (by building on the integration of redundant partial evidence of burn) and of the criteria/thresholds used to segment the images into burned/unburned areas. Full article
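The OWA operators at the core of the method sort the per-band membership degrees (partial evidence of burn) in descending order and take a weighted sum, so the weight vector alone moves the operator between AND-like and OR-like behaviour. A minimal sketch with illustrative weight vectors (not the paper's exact quantifiers):

```python
import numpy as np

def owa(memberships, weights):
    """Ordered weighted average: sort evidence descending, dot with weights."""
    ordered = np.sort(np.asarray(memberships, dtype=float))[::-1]
    return float(np.dot(ordered, weights))

evidence = [0.9, 0.7, 0.65, 0.3]                # e.g., four S2-band memberships
print(owa(evidence, [0, 0, 0, 1]))              # OWA_AND: minimum -> 0.30
print(owa(evidence, [0.25, 0.25, 0.25, 0.25]))  # OWA_Average: mean
print(owa(evidence, [1, 0, 0, 0]))              # OWA_OR: maximum -> 0.90
```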
Show Figures

Figure 1. The CLC2012 land cover classes over the six sites located in Southern Europe (a): Vesuvius, Italy (b), Leiria, Portugal (c), Calar, Spain (d), Huelva, Spain (e), Kalamos, Greece (f), Zakynthos, Greece (g).
Figure 2. Pre- and post-fire S2 images (first and second column) and EMS fire grading maps (last column) for each site: Vesuvius, Italy (a–c); Leiria, Portugal (d–f); Calar, Spain (g–i); Huelva, Spain (j–l); Kalamos, Greece (m–o); Zakynthos, Greece (p–r). S2 images are displayed as RGB false color composites (SWIR2, NIR, red). Note that for the Kalamos and Zakynthos sites, Greece, no fire damage grading maps are available from EMS.
Figure 3. Pre- (a) and post-fire (b) PlanetScope images (22 April and 22 July 2017) over the Vesuvius site, displayed as RGB false color composites (NIR, red, green), and the reference burned area map (c) obtained with the RF algorithm. The black polygon shows the border of Vesuvius National Park.
Figure 4. The s-shaped (a) and z-shaped (b) sigmoid functions chosen as MFs to quantify the partial evidence of burn.
Figure 5. Flowchart of the processing steps from input S2 MSI images to the generated BA maps.
Figure 6. Partial evidence of burn as given by the input features interpreted by sigmoid MFs: Post_RE2 (a), Post_RE3 (b), Post_NIR (c), Δ_RE2 (d), Δ_RE3 (e), Δ_NIR (f) and Δ_SWIR2 (g). The color scale shows increasing degree and likelihood of burn (blue to yellow).
Figure 7. Global evidence of burn shown by the OWA score [0, 1] (left column) over the Vesuvius study site: OWA_AND (a), OWA_almostAND (b), OWA_Average (c), OWA_AlmostOR (d) and OWA_OR (e); borders of Vesuvius National Park are highlighted in black and masked areas in gray.
Figure 8. RG_score and agreement maps for the Vesuvius training site for the growing layers OWA_Average (a,d), OWA_AlmostOR (b,e) and OWA_OR (c,f).
Figure 9. Agreement maps obtained for OWA_Average (left column), OWA_AlmostOR (middle column) and OWA_OR (right column) over the exportability sites: Leiria, Portugal (a–c), Calar, Spain (d–f), Huelva, Spain (g–i), Kalamos, Greece (l–n) and Zakynthos, Greece (o–q). The four classes represent correctly classified burned pixels (orange), correctly classified unburned pixels (white), pixels mistakenly classified as burned, i.e., commission (blue), and pixels mistakenly classified as unburned, i.e., omission (green).
Figure 10. Accuracy metrics (omission error = oe, commission error = ce, Dice coefficient = dc and relative bias = relB) over the exportability and training sites.
Figure 11. Scatter plot of the proportion of each 500 m × 500 m grid cell mapped as burned in the RG output and in the reference dataset for each site and OWA_grow layer. Scatter plots are displayed as counts of cells in 0.05 steps along the x- and y-axes, to better represent overlapping points, with a logarithmic color scale. The black dotted line is the 1:1 line, while the gray continuous line is the linear regression model. The coefficient of determination (R^2), slope of the linear regression model (Slope), root mean squared error (RMSE) and total number of cells (N cells) are also shown.
Figure 12. Accuracy metrics (omission error = oe, commission error = ce, Dice coefficient = dc and relative bias = relB) over the training site for burned area maps obtained with the RG algorithm (with three cases of growing layer: OWA_Average, OWA_AlmostOR and OWA_OR) and from segmentation of the OWA global evidence (from all tested OWAs).
Figure A1. Estimates of the accuracy metrics omission error (oe), commission error (ce), Dice coefficient (dc) and relative bias (relB) over the Vesuvius site, estimated by comparison with PlanetScope (green lines) and EMS (orange line) fire reference polygons; the horizontal red dashed line shows y = 0.
Figure A2. RG output score [0, 1] estimated over the Vesuvius training site with OWA_AlmostOR (a–e), OWA_Average (f–l) and OWA_AlmostOR (m–q) as growing layers and different thresholds for identifying growing boundaries (Th) in [0.1–0.5] (left to right columns). Masked areas are in gray. In all cases the seed layer is OWA_AND > 0.9.
Figure A3. Accuracy metric estimates (omission error = oe, commission error = ce and Dice coefficient = dc) for burned area maps obtained with the region growing (RG) algorithm over the Vesuvius site. The seed layer is the OWA_AND score map and the growing layers are OWA_Average, OWA_AlmostOR and OWA_OR. Accuracy metrics are estimated for combinations of the threshold applied to the growing layers (OWA_grow, legend colors) and to the RG_score (x-axis).
Figure A4. Example of apparent commission errors for the Leiria site, Portugal (a–d): pre-fire S2 image (4 June 2017, b), post-fire S2 image (4 July 2017, c) and burned areas from RG and EMS (d). Examples of commission errors in the Huelva site, Spain (e–n): pre-fire S2 images (11 June 2017, f,l), post-fire S2 images (1 July 2017, g,m) and burned areas from RG and EMS (c). S2 RGB images are false color composites (SWIR, NIR, red). The first column shows the location of the zoom areas and the last column the RG classification (white to black background) and EMS reference perimeters (red line patterns).
15 pages, 7843 KiB  
Article
Regional Assessments of Surface Ice Elevations from Swath-Processed CryoSat-2 SARIn Data
by Natalia Havelund Andersen, Sebastian Bjerregaard Simonsen, Mai Winstrup, Johan Nilsson and Louise Sandberg Sørensen
Remote Sens. 2021, 13(11), 2213; https://doi.org/10.3390/rs13112213 - 5 Jun 2021
Cited by 5 | Viewed by 3369
Abstract
The Arctic responds rapidly to climate change, and the melting of land ice is a major contributor to the observed present-day sea-level rise. The coastal regions of these ice-covered areas are showing the most dramatic changes in the form of widespread thinning. Therefore, [...] Read more.
The Arctic responds rapidly to climate change, and the melting of land ice is a major contributor to the observed present-day sea-level rise. The coastal regions of these ice-covered areas are showing the most dramatic changes in the form of widespread thinning. Therefore, it is vital to improve the monitoring of these areas to help us better understand their contribution to present-day sea levels. In this study, we derive ice-surface elevations from the swath processing of CryoSat-2 SARIn data and evaluate the results in several Arctic regions. In contrast to the conventional retracking of radar data, swath processing greatly enhances spatial coverage, as it uses the majority of the information in the radar waveform to create a swath of elevation measurements. However, detailed validation procedures for swath-processed data are important to assess the performance of the method. Therefore, a range of validation activities were carried out to evaluate the performance of the swath processor in four different regions in the Arctic. We assessed accuracy by investigating both intramission crossover elevation differences and comparisons to independent elevation data. The validation data consisted of both air- and spaceborne laser altimetry and airborne X-band radar data. There were varying elevation biases between CryoSat-2 and the validation datasets. The best agreement was found for CryoSat-2 and ICESat-2 over the Helheim region in June 2019. To test the stability of the swath processor, we applied two different coherence thresholds. The number of data points was increased by approximately 25% when decreasing the coherence threshold in the processor from 0.8 to 0.6. However, depending on the region, this came at the cost of an increase of 33–65% in the standard deviation of the intramission differences. Our study highlights the importance of selecting an appropriate coherence threshold for the swath processor. The coherence threshold should be chosen on a case-specific basis, depending on the need for enhanced spatial coverage or accuracy. Full article
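The coherence gating that drives this coverage/accuracy trade-off reduces to a simple per-waveform mask. The sketch below is a schematic illustration, not the study's processor; the arrays and the power threshold are made-up stand-ins. It keeps only the range bins whose echo power and interferometric coherence both exceed their thresholds, which is where the 0.8 vs. 0.6 choice enters.

```python
import numpy as np

def usable_range_bins(power: np.ndarray, coherence: np.ndarray,
                      power_thresh: float, coh_thresh: float = 0.8) -> np.ndarray:
    """Boolean mask of range bins eligible for swath elevation retrieval.

    A bin contributes an elevation estimate only if its echo power exceeds
    the power threshold and its interferometric coherence exceeds the
    coherence threshold (0.8 for accuracy, 0.6 for coverage).
    """
    return (power > power_thresh) & (coherence > coh_thresh)

# Illustrative SARIn waveform: 512 range bins of power and coherence.
rng = np.random.default_rng(1)
power = rng.gamma(2.0, 1.0, size=512)
coherence = rng.uniform(0.3, 1.0, size=512)

strict = usable_range_bins(power, coherence, power_thresh=2.0, coh_thresh=0.8)
relaxed = usable_range_bins(power, coherence, power_thresh=2.0, coh_thresh=0.6)
# Lowering the coherence threshold enlarges the usable swath (about 25% more
# points in the study) at the cost of noisier elevation estimates.
print(strict.sum(), relaxed.sum())
```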
Show Figures

Graphical abstract

Figure 1. Location and coverage of the four datasets used in this study. (A) Airborne ALS data from April 2016 over the Austfonna ice cap in Svalbard; (B) airborne GeoSAR X-band data from April 2014 over Petermann glacier in Greenland; (C) Operation IceBridge ATM data from 2017 over Nioghalvfjerdsfjorden glacier; (D) ICESat-2 laser altimetry data for Helheim glacier obtained in June 2019. Gray-shaded data plotted as background over Greenland are the 1 km resolution DEM from ArcticDEM [25].
Figure 2. Coherence and power data from a CS2 track over the Austfonna ice cap on 26 April 2016. (a,c) Northern part of the ice cap; (b,d) southern part of the ice cap. (a,b) Filtered coherence with the thresholds of 0.6 and 0.8 indicated by horizontal dashed lines; the blue vertical box indicates the area above the power threshold. (c,d) Power waveform, with the power threshold indicated by the green line. Range bins for which the power was above the chosen limit are marked by shaded blue boxes.
Figure 3. Example of local phase unwrapping: gray line, original wrapped phase; red line, position of the correctly unwrapped phase for range bins with coherence above 0.8.
Figure 4. Swath-processed ice-surface elevations over the Nioghalvfjerdsfjorden glacier in April 2018 computed using the two different coherence thresholds. (left) Elevations computed using a coherence limit of 0.8, revealing gaps in the data. (right) Ice-surface elevations computed using a coherence limit of 0.6, which left fewer gaps in the data.
Figure 5. Results for the Helheim area. (a,b) Intramission crossover elevation differences for coherence limits 0.8 and 0.6, respectively. (c,d) External-mission crossover analysis with ICESat-2 data for coherence limits 0.8 and 0.6, respectively. Corresponding histograms of elevation differences are shown in the inserts.
21 pages, 8797 KiB  
Article
Fast and Fine Location of Total Lightning from Low Frequency Signals Based on Deep-Learning Encoding Features
by Jingxuan Wang, Yang Zhang, Yadan Tan, Zefang Chen, Dong Zheng, Yijun Zhang and Yanfeng Fan
Remote Sens. 2021, 13(11), 2212; https://doi.org/10.3390/rs13112212 - 5 Jun 2021
Cited by 13 | Viewed by 3424
Abstract
Lightning location provides an important means for studying the lightning discharge process and thunderstorm activity. The fine positioning capability of total lightning based on low-frequency signals has been improved in many aspects, but most methods are based on post-waveform processing, [...] Read more.
Lightning location provides an important means for studying the lightning discharge process and thunderstorm activity. The fine positioning capability of total lightning based on low-frequency signals has been improved in many aspects, but most methods are based on post-waveform processing, and the positioning speed is slow. In this study, artificial intelligence technology is introduced to lightning positioning for the first time, based on the low-frequency electric-field detection array (LFEDA). A new method based on deep-learning encoding feature matching is also proposed, which provides a means for the fast and fine location of total lightning. Compared to other LFEDA positioning methods, the new method greatly improves the matching efficiency, by more than 50%, thereby considerably improving the positioning speed. Moreover, the new algorithm has greater fine-positioning and anti-interference abilities, and maintains high-quality positioning under low signal-to-noise ratio conditions. The positioning efficiency for the return strokes of triggered lightning was 99.17%, and the standard deviation of the positioning accuracy in the X and Y directions was approximately 70 m. Full article
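The paper's novelty is the learned pulse matching; once pulses are matched across stations, the location itself is a standard time-of-arrival (TOA) solve. A minimal sketch of that final step follows, assuming flat Cartesian station coordinates in metres and speed-of-light propagation; the station layout and times are synthetic, and this is the generic TOA least-squares solve rather than the authors' implementation.

```python
import numpy as np
from scipy.optimize import least_squares

C = 2.998e8  # propagation speed (speed of light), m/s

def locate_toa(stations: np.ndarray, t_obs: np.ndarray) -> np.ndarray:
    """Least-squares TOA solve for the source position (x, y, z) and emission
    time t0, given the arrival times of one matched pulse at each station.

    stations: (n, 3) Cartesian station coordinates in metres.
    t_obs:    (n,) arrival times in seconds of the same matched pulse.
    """
    # Initial guess: above the array centroid, emitted just before first arrival.
    x0 = np.append(stations.mean(axis=0) + [0.0, 0.0, 5e3], t_obs.min())

    def residuals(p):
        xyz, t0 = p[:3], p[3]
        dist = np.linalg.norm(stations - xyz, axis=1)
        return t_obs - (t0 + dist / C)

    return least_squares(residuals, x0).x

# Synthetic check: four stations, source at (1 km, 2 km, 6 km).
stations = np.array([[0.0, 0, 0], [30e3, 0, 0], [0, 25e3, 0], [20e3, 20e3, 0]])
src = np.array([1e3, 2e3, 6e3])
t_obs = 1e-3 + np.linalg.norm(stations - src, axis=1) / C
print(locate_toa(stations, t_obs)[:3])  # approximately [1000, 2000, 6000]
```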
Show Figures

Figure 1. (A) The site of LFEDA on the map of China; the red diamond represents the location of LFEDA. (B) Geographical layout of the low-frequency electric field detection array substations in the Guangzhou area. Stations are represented by triangles, and experiments with artificially triggered lightning are conducted at the position marked by the circle. The SLC station was relocated to the ZTC station in 2017.
Figure 2. Pictures of a low-frequency E-field detection array substation (digitizer, fast antenna, and GPS antenna).
Figure 3. (A) Layout of the triggered lightning test site: ➀ control room; ➁ region for measurement of electromagnetic signals; ➂ artificially triggered lightning device, including six rocket launchers, lightning rods, and the lightning current measurement equipment in the cabin. (B) A photo of a triggered lightning flash.
Figure 4. Schematic of the new algorithm.
Figure 5. The synchronous electric field waveforms of CHJ (a), XTC (b), SGC (c) and ZCJ (d) observed by LFEDA at 16:26:00 on 15 August 2015 (Beijing time). The red dots mark the positions of the identified pulse peaks, and the numbers give the specific values of the pulse peaks. The two large pulse signals of each station are denoted station name-1 and station name-2.
Figure 6. (A) Positioning results of a lightning process at 16:26 on 15 August 2015 by the new algorithm and (B) the partial development. In each subgraph: (a) height–time plots; (b) north–south vertical projection; (c) height distribution of radiation events; (d) plan view; and (e) east–west vertical projection of lightning radiation sources.
Figure 7. (A) Positioning results of a lightning process at 16:26 on 15 August 2015 by the TOA method based on pulse-peak feature matching and (B) the partial development. In each subgraph: (a) height–time plots; (b) north–south vertical projection; (c) height distribution of radiation events; (d) plan view; and (e) east–west vertical projection of lightning radiation sources.
Figure 8. (A) Positioning results of a lightning process at 16:26 on 15 August 2015 by the TOA–TR method and (B) the partial development. Subgraphs (a–e) as in Figure 6. Refer to Figure 6 of Chen et al. [46]; the graph is redrawn based on the data from https://zenodo.org/record/2644811#.YEB-PU7isdV (accessed on 4 March 2021).
Figure 9. (A) Positioning results of a lightning process at 16:26 on 15 August 2015 by the EMD–TOA method and (B) the partial development. Subgraphs (a–e) as in Figure 6. Refer to Figure 12 of Fan et al. [44]; the graph is redrawn based on the data from https://zenodo.org/record/1133810#.YEB-6E7isdV (accessed on 4 March 2021).
Figure 10. The positioning results of the new algorithm for the artificially triggered lightning return strokes detected by LFEDA in 2015 and 2017. The red triangle represents the actual position of the artificially triggered lightning device, and the black dots represent the points located by the new algorithm.
Figure 11. (A) The positioning results of the lightning process at 16:26 on 15 August 2015 under 5 dB SNR signals by the new method and (B) the partial developments. Subgraphs (a–e) as in Figure 6.
Figure 12. (A) The positioning results of the lightning process at 16:26 on 15 August 2015 under 5 dB SNR signals by the TOA method based on pulse-peak feature matching and (B) the partial developments. Subgraphs (a–e) as in Figure 6.
Figure 13. (A) The positioning results of the lightning process at 16:26 on 15 August 2015 under 5 dB SNR signals by the TOA–TR method and (B) the partial developments. Subgraphs (a–e) as in Figure 6. Refer to Figure 11 of Chen et al. [46]; the graph is redrawn based on the data from https://zenodo.org/record/2644811#.YEB-PU7isdV (accessed on 4 March 2021).
22 pages, 90116 KiB  
Article
A Random Forest-Based Data Fusion Method for Obtaining All-Weather Land Surface Temperature with High Spatial Resolution
by Shuo Xu, Jie Cheng and Quan Zhang
Remote Sens. 2021, 13(11), 2211; https://doi.org/10.3390/rs13112211 - 5 Jun 2021
Cited by 25 | Viewed by 4431
Abstract
Land surface temperature (LST) is an important parameter for mirroring the water–heat exchange and balance on the Earth’s surface. Passive microwave (PMW) LST can make up for the lack of thermal infrared (TIR) LST caused by cloud contamination, but its resolution is relatively [...] Read more.
Land surface temperature (LST) is an important parameter for mirroring the water–heat exchange and balance on the Earth's surface. Passive microwave (PMW) LST can make up for the lack of thermal infrared (TIR) LST caused by cloud contamination, but its resolution is relatively low. In this study, we developed a TIR and PMW LST fusion method based on the random forest (RF) machine learning algorithm to obtain all-weather LST with high spatial resolution. Since LST is closely related to land cover (LC) types, terrain, vegetation conditions, moisture conditions, and solar radiation, these variables were selected as candidate auxiliary variables to establish the best model and obtain the fusion results for mainland China during 2010. In general, the fused LST had higher spatial integrity than the MODIS LST and higher accuracy than the downscaled AMSR-E LST. Additionally, the magnitude of the LST data in the fusion results was consistent with the general spatiotemporal variations of LST. Compared with in situ observations, the RMSE of clear-sky fused LST and cloudy-sky fused LST were 2.12–4.50 K and 3.45–4.89 K, respectively. Combining the RF method and the DINEOF method, a complete all-weather LST with a spatial resolution of 0.01° can be obtained. Full article
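A minimal sklearn sketch of the general fusion idea follows: train a random forest that links auxiliary predictors to clear-sky MODIS (TIR) LST, then use it to fill cloud-contaminated pixels. The feature columns, sample sizes, and hyperparameters here are illustrative placeholders, not the paper's exact predictor set or model configuration.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

# Illustrative per-pixel feature table; columns might be downscaled PMW LST,
# NDVI, elevation, slope, albedo, land cover code, ... (placeholders only).
rng = np.random.default_rng(2)
n_pixels = 10_000
X = rng.normal(size=(n_pixels, 6))
modis_lst = 290 + 10 * X[:, 0] + 3 * X[:, 1] + rng.normal(0, 1, n_pixels)
clear = rng.uniform(size=n_pixels) > 0.4  # True where TIR LST is cloud-free

# Train on clear-sky pixels, where both the TIR target and predictors exist...
rf = RandomForestRegressor(n_estimators=200, min_samples_leaf=5, n_jobs=-1)
rf.fit(X[clear], modis_lst[clear])

# ...then predict LST for cloudy pixels from the predictors alone.
fused = modis_lst.copy()
fused[~clear] = rf.predict(X[~clear])
```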
(This article belongs to the Special Issue Fusion of High-Level Remote Sensing Products)
Show Figures

Graphical abstract

Figure 1. The locations of the study area and the two verification regions ((a) TP and (b) HRB). The locations of the two verification regions are indicated by two red rectangles, and the locations of the sites are marked in (a,b) with green circle symbols.
Figure 2. The flowchart for implementing model iii.
Figure 3. The R^2 and RMSE of the fusion results obtained by the four RF models.
Figure 4. Spatial distributions of MODIS LST during the daytime of the 17th, 107th, 196th, and 291st days of the year 2010, representing different months.
Figure 5. Spatial distributions of MODIS LST during the nighttime of the 17th, 107th, 196th, and 291st days of the year 2010, representing different months.
Figure 6. Spatial distribution of the fused LSTs for the daytime of the 17th, 107th, 196th, and 291st days of the year 2010, representing different months.
Figure 7. Spatial distribution of the fused LSTs for the nighttime of the 17th, 107th, 196th, and 291st days of the year 2010, representing different months.
Figure 8. The RMSE of the fusion results.
Figure 9. The scatter plots of the in situ temperature and MODIS LST at each site.
Figure 10. The scatter plots of the in situ temperature and the fused LST at each site.
Figure 11. The daily variations of the fused LST, in situ temperature, and MODIS LST at each site: (a) daytime, (b) nighttime.
Figure 12. Spatial distributions of the complete all-weather LST during the daytime of the 17th, 43rd, 75th, 107th, 139th, 163rd, 196th, 227th, 259th, 291st, 317th, and 345th days of the year 2010, representing different months.
Figure 13. Spatial distribution of the complete all-weather LST during the nighttime of the 17th, 46th, 76th, 103rd, 141st, 164th, 196th, 228th, 260th, 292nd, 318th, and 346th days of the year 2010, representing different months.
Figure 14. The scatter plot of the in situ temperature and the all-weather LST data.
Figure 15. The scatter plots of missing value proportions and RMSE: (a) daytime; (b) nighttime.
Figure 16. The variable importance plots.
Figure 17. Scatter density plots: (a) MODIS LST and fused LST; (b) MODIS LST and downscaled AMSR-E LST. The first row shows daytime data and the second row nighttime data.
15 pages, 5096 KiB  
Article
Sensitivity of C-Band SAR Polarimetric Variables to the Directionality of Surface Roughness Parameters
by Zohreh Alijani, John Lindsay, Melanie Chabot, Tracy Rowlandson and Aaron Berg
Remote Sens. 2021, 13(11), 2210; https://doi.org/10.3390/rs13112210 - 5 Jun 2021
Cited by 8 | Viewed by 4203
Abstract
Surface roughness is an important factor in many soil moisture retrieval models. Therefore, any mischaracterization of the surface roughness parameters (root mean square height, RMSH, and correlation length, ℓ) may result in unreliable predictions and soil moisture estimations. In many environments, but particularly in [...] Read more.
Surface roughness is an important factor in many soil moisture retrieval models. Therefore, any mischaracterization of the surface roughness parameters (root mean square height, RMSH, and correlation length, ℓ) may result in unreliable predictions and soil moisture estimations. In many environments, but particularly in agricultural settings, surface roughness parameters may show different behaviours with respect to orientation or azimuth. Consequently, the relationship between SAR polarimetric variables and surface roughness parameters may vary depending on the measurement orientation. Generally, the roughness used in many SAR-based studies is estimated using pin profilers, with profiles that may, or may not, be collected with careful attention to orientation relative to the satellite look angle. In this study, we characterized surface roughness parameters in multi-azimuth mode using a terrestrial laser scanner (TLS). We characterized the surface roughness parameters in different orientations and then examined the sensitivity between polarimetric variables and surface roughness parameters; further, we compared these results to roughness profiles obtained using traditional pin profilers. The results showed that the polarimetric variables were more sensitive to the surface roughness parameters at higher incidence angles (θ). Moreover, when surface roughness measurements were conducted at the look angle of RADARSAT-2, more significant correlations were observed between polarimetric variables and surface roughness parameters. Our results also indicated that TLS can provide more reliable results than pin profilers in the measurement of surface roughness parameters. Full article
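Both roughness parameters have standard definitions that a short sketch makes concrete: RMSH is the standard deviation of detrended surface heights, and the correlation length is commonly taken as the lag at which the profile's normalized autocorrelation falls below 1/e. The sketch below assumes a 1-D height profile at constant spacing (e.g., one azimuthal transect extracted from a TLS point cloud); the 1/e criterion and linear detrending are common conventions, not necessarily the authors' exact choices.

```python
import numpy as np

def roughness(profile: np.ndarray, spacing: float) -> tuple[float, float]:
    """RMSH (m) and correlation length (m, 1/e criterion) of a height profile.

    profile: surface heights (m) sampled at constant spacing (m) along one
    orientation, e.g. a transect extracted from a TLS point cloud.
    """
    # Remove the linear trend (field slope) before computing statistics.
    x = np.arange(profile.size) * spacing
    z = profile - np.polyval(np.polyfit(x, profile, 1), x)

    rmsh = z.std()

    # Normalized autocorrelation as a function of lag.
    acf = np.correlate(z, z, mode="full")[z.size - 1:]
    acf /= acf[0]

    # Correlation length: first lag where the ACF drops below 1/e.
    below = np.nonzero(acf < 1 / np.e)[0]
    corr_len = below[0] * spacing if below.size else np.nan
    return rmsh, corr_len

# Toy profile: gentle undulation plus noise, 1 cm sampling.
rng = np.random.default_rng(3)
x = np.arange(500) * 0.01
prof = 0.02 * np.sin(2 * np.pi * x / 0.8) + rng.normal(0, 0.005, x.size)
print(roughness(prof, 0.01))
```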
Show Figures

Figure 1. The location of the Elora Research Station (ERS) in Ontario, Canada. The white polygons on the left image indicate the fields studied in May, and the black polygons indicate the November fields.
Figure 2. The flowchart of the RADARSAT-2 image processing steps.
Figure 3. Variation of surface roughness characteristics across orientations for three representative fields (a–c). (a) A homogeneous field with low variation in RMSH and correlation length across the whole field. (b) A moderately smooth field with some distinct variability across some orientations and more ridges and furrows than field (a). (c) A rough field with the highest variability of RMSH and correlation length and more distinct ridges and furrows across orientations.
Figure 4. The comparison of mean RMSH measured by TLS and pin profiler at two different orientations.
Figure 5. Sensitivity of RADARSAT-2 parameters with respect to the orientation of mean RMSH at incidence angles of 45° and 49°, with look angles of 38.5° and 41.5°, respectively. (a) The sensitivity of the linear backscatter coefficients (σ° HH and σ° VV) to the directionality of RMSH at incidence angles of 45° and 49°. (b) The sensitivity of the cross-polarized ratios (VV/VH, HH/HV, HV/VV), total power and pedestal height with respect to the RMSH orientation at both incidence angles.
22 pages, 12670 KiB  
Article
Occurrence of GPS Loss of Lock Based on a Swarm Half-Solar Cycle Dataset and Its Relation to the Background Ionosphere
by Michael Pezzopane, Alessio Pignalberi, Igino Coco, Giuseppe Consolini, Paola De Michelis, Fabio Giannattasio, Maria Federica Marcucci and Roberta Tozzi
Remote Sens. 2021, 13(11), 2209; https://doi.org/10.3390/rs13112209 - 4 Jun 2021
Cited by 15 | Viewed by 3899
Abstract
This paper discusses the occurrence of Global Positioning System (GPS) loss of lock events obtained by considering total electron content (TEC) measurements carried out by the three satellites of the European Space Agency Swarm constellation from December 2013 to December 2020, which represents [...] Read more.
This paper discusses the occurrence of Global Positioning System (GPS) loss of lock events obtained by considering total electron content (TEC) measurements carried out by the three satellites of the European Space Agency Swarm constellation from December 2013 to December 2020, which represents the longest dataset ever used to perform such an analysis. After describing the approach used to classify a GPS loss of lock, the corresponding occurrence is analyzed as a function of latitude, local time, season, and solar activity to identify well-defined patterns. Moreover, the strict relation between the occurrence of GPS loss of lock events and well-defined values of both the rate of change of electron density index (RODI) and the rate of change of TEC index (ROTI) is highlighted. The scope of this study is, on the one hand, to characterize the background conditions of the ionosphere for such events and, on the other hand, to pave the way for their possible future modeling. The results shown, especially the fact that GPS loss of lock events tend to happen for well-defined values of both RODI and ROTI, are of utmost importance in light of the mitigation of Space Weather effects. Full article
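ROTI is conventionally defined as the standard deviation of the rate of change of TEC over a short sliding window, and RODI is the analogous index computed on electron density. A minimal numpy sketch of that family of indices follows; the window length, sampling rate, and synthetic series are assumptions for illustration, not the paper's processing choices.

```python
import numpy as np

def rate_of_change_index(values: np.ndarray, t: np.ndarray,
                         window: int = 10) -> np.ndarray:
    """ROTI/RODI-style index: sliding standard deviation of d(value)/dt.

    values: sTEC (TECU) for ROTI, or electron density (el/m^3) for RODI,
            sampled along the orbit at times t (s).
    window: number of consecutive rate samples per std estimate (assumed).
    """
    rate = np.diff(values) / np.diff(t)
    windows = np.lib.stride_tricks.sliding_window_view(rate, window)
    return windows.std(axis=1)

# Toy 1 Hz series with a burst of irregularities in the middle.
rng = np.random.default_rng(4)
t = np.arange(600.0)
tec = 20 + 0.01 * t + np.where((t > 250) & (t < 350),
                               rng.normal(0.0, 1.0, t.size), 0.0)
roti = rate_of_change_index(tec, t, window=10)
print(roti.argmax(), roti.max())  # the index peaks inside the disturbed interval
```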
(This article belongs to the Special Issue Space Geodesy and Ionosphere)
Show Figures

Graphical abstract

Figure 1. Example of a GPS LoL event identified at around 14 UT by Swarm A during its ascending orbit between 13:46:35 and 14:06:50 UT on 17 March 2015, related to PRN = 9. Magenta curves identify the GPS LoL duration, which in this case is 191 s. Left, middle, and right panels refer respectively to sTEC, ROTI, and RODI. Upper panels represent the values in geographic coordinates, while the lower ones show the corresponding latitude vs. value plots. 1 TEC unit (TECU) = 10^16 electrons/m^2.
Figure 2. Distribution of the duration (in seconds), between 1 and 100 s, for the cumulative dataset of LoL events suffered by Swarm A, B, and C between December 2013 and December 2020. Results are given as percentage of occurrence in bins one second wide. The total number of LoL events is 44,731.
Figure 3. Geographic distribution of the GPS LoL occurrence affecting Swarm A, B, and C, from December 2013 to December 2020, considering (top panel) all LoL events, independently of duration, and (bottom panel) only those events whose duration is greater than 1 s. The magenta curve visible in both panels represents the magnetic equator.
Figure 4. Distribution of the GPS LoL occurrence affecting Swarm A, B, and C, from December 2013 to December 2020, as a function of the QD latitude and the day of the year.
Figure 5. Distribution of the GPS LoL events affecting Swarm A, B, and C, from December 2013 to December 2020, as a function of the QD latitude and MLT. (Top panel) Global projection of the occurrence. (Bottom panels) Polar projections (where the radius is the QD latitude and the azimuth angle is the MLT) of the probability density for (left) the Northern hemisphere (QD latitude from 50° to the North magnetic pole) and (right) the Southern hemisphere (QD latitude from −50° to the South magnetic pole).
Figure 6. Same as Figure 4 but for each year of the considered dataset, from 2014 to 2020. Polar projections (where the radius is the QD latitude and the azimuth angle is the day of the year) (first row) for the Northern hemisphere and (second row) for the Southern hemisphere, (third row) global projection, (fourth row) daily values of the F10.7 solar index (in blue) and the corresponding 81-day running mean F10.7_81 (in red).
Figure 7. (Top panel) Same as Figure 3 but for RODI values at GPS loss of lock. The black curve represents the magnetic equator. (Bottom panel) RODI median values obtained for the period December 2013–December 2020 taking into account all RODI values, not only those corresponding to a LoL event.
Figure 8. Same as Figure 7 but for ROTI values.
Figure 9. Joint probability density distribution of RODI and ROTI values at GPS loss of lock, independently of the QD latitude, for the period December 2013–December 2020. On the top and right panels, histograms of the probability density for RODI and ROTI are also reported.
Figure 10. Same as Figure 9 but selecting only GPS LoL events that occurred for |QD latitude| > 45°.
Figure 11. Same as Figure 9 but selecting only GPS LoL events that occurred for |QD latitude| ≤ 45°.
17 pages, 4161 KiB  
Technical Note
CPS-Det: An Anchor-Free Based Rotation Detector for Ship Detection
by Yi Yang, Zongxu Pan, Yuxin Hu and Chibiao Ding
Remote Sens. 2021, 13(11), 2208; https://doi.org/10.3390/rs13112208 - 4 Jun 2021
Cited by 18 | Viewed by 3034
Abstract
Ship detection is a significant and challenging task in remote sensing. At present, owing to their faster speed and higher accuracy, deep learning methods have been widely applied in the field of ship detection. In ship detection, targets usually have the characteristics [...] Read more.
Ship detection is a significant and challenging task in remote sensing. At present, owing to their faster speed and higher accuracy, deep learning methods have been widely applied in the field of ship detection. In ship detection, targets usually have the characteristics of arbitrary orientation and large aspect ratio. In order to take full advantage of these features to improve speed and accuracy on the basis of deep learning methods, this article proposes an anchor-free method, referred to as CPS-Det, for ship detection using rotatable bounding boxes. The main improvements of CPS-Det, as well as the contributions of this article, are as follows. First, an anchor-free deep learning network was used to improve speed with fewer parameters. Second, an annotation method for the oblique rectangular frame is proposed, which addresses the fact that the periodic angle and bounded coordinates, in conjunction with the regression calculation, can lead to loss anomalies. For this annotation scheme, a method for calculating the Angle Loss is proposed, which makes the loss function of the angle near the boundary value more accurate and greatly improves the accuracy of angle prediction. Third, the centerness calculation of feature points is optimized so that the center weight distribution of each point is suitable for rotation detection. Finally, a scheme combining centerness and positive sample screening is proposed and its effectiveness in ship detection is proved. Experiments on the public remote sensing dataset HRSC2016 show the effectiveness of our approach. Full article
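The abstract does not give the exact formula, but the boundary problem it describes is easy to illustrate: a naive L1 loss on the raw angle treats 89° and −89° as nearly maximally far apart even though the two oriented boxes almost coincide, whereas a period-aware loss measures the shortest angular distance modulo the box's 180° symmetry. The sketch below is that generic period-aware loss, not CPS-Det's exact Angle Loss.

```python
import numpy as np

def naive_angle_loss(pred_deg: float, target_deg: float) -> float:
    """Plain L1 on the raw angle; spikes anomalously at the periodic boundary."""
    return abs(pred_deg - target_deg)

def periodic_angle_loss(pred_deg: float, target_deg: float,
                        period: float = 180.0) -> float:
    """L1 on the shortest angular distance modulo the box symmetry period.

    A rectangle maps onto itself under a 180-degree rotation, so predictions
    on opposite sides of a [-90, 90) angle range (e.g. 89 vs. -89) should
    incur a small loss, matching the geometry rather than the raw numbers.
    """
    d = abs(pred_deg - target_deg) % period
    return min(d, period - d)

print(naive_angle_loss(89.0, -89.0))     # 178.0: anomalous spike at the boundary
print(periodic_angle_loss(89.0, -89.0))  # 2.0: consistent with the geometry
```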
Show Figures

Figure 1. Network of CPS-Det.
Figure 2. The annotation form we defined for the target's RBox.
Figure 3. The process of positive sample screening.
Figure 4. The equipotential lines of the weight distribution.
Figure 5. Loss descent curve of the baseline.
Figure 6. AP curves of the ablation study; "Exp" denotes the experiment number in Table 2.
Figure 7. Loss descent curve of CPS-Det.
Figure 8. The effect of Angle Loss.
Figure 9. Visualization of detection results from CPS-Det on the HRSC2016 dataset.
17 pages, 13087 KiB  
Article
Aircraft Detection in High Spatial Resolution Remote Sensing Images Combining Multi-Angle Features Driven and Majority Voting CNN
by Fengcheng Ji, Dongping Ming, Beichen Zeng, Jiawei Yu, Yuanzhao Qing, Tongyao Du and Xinyi Zhang
Remote Sens. 2021, 13(11), 2207; https://doi.org/10.3390/rs13112207 - 4 Jun 2021
Cited by 26 | Viewed by 4458
Abstract
Aircraft are a means of transportation and weaponry, and detecting them in remote sensing images is crucial for civil and military applications. However, detecting aircraft effectively is still a problem due to the diversity of the pose, size, and position of the aircraft [...] Read more.
Aircraft are a means of transportation and weaponry, and detecting them in remote sensing images is crucial for civil and military applications. However, detecting aircraft effectively is still a problem due to the diversity of the pose, size, and position of the aircraft and the variety of objects in the image. At present, target detection methods based on convolutional neural networks (CNNs) lack sufficient extraction of remote sensing image information and post-processing of detection results, which results in a high missed detection rate and false alarm rate when facing complex and dense targets. To address the above problems, we propose a target detection model based on Faster R-CNN, which combines a multi-angle feature-driven strategy with a majority voting strategy. Specifically, we designed a multi-angle transformation module to transform the input image and realize multi-angle feature extraction of the targets in the image. In addition, we added a majority voting mechanism at the end of the model to process the results of the multi-angle feature extraction. The average precision (AP) of this method reaches 94.82% and 95.25% on the public and private datasets, respectively, which is 6.81% and 8.98% higher than that of Faster R-CNN. The experimental results show that the method can detect aircraft effectively, obtaining better performance than mature target detection networks. Full article
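A minimal sketch of the voting step follows: run the detector on several rotated copies of the scene, rotate each detection back into the original frame, and keep only detections confirmed by a majority of the views. Here detections are reduced to box centers and grouped by center distance as a simplification of IoU matching; the rotation convention, the radius, and the vote count are illustrative assumptions rather than the paper's exact post-processing.

```python
import numpy as np

def rotate_points(pts: np.ndarray, angle_deg: float, center: np.ndarray) -> np.ndarray:
    """Rotate (n, 2) points by angle_deg (counter-clockwise) around center."""
    a = np.deg2rad(angle_deg)
    rot = np.array([[np.cos(a), -np.sin(a)], [np.sin(a), np.cos(a)]])
    return (pts - center) @ rot.T + center

def majority_vote(centers_per_angle: list, angles: list,
                  image_center: np.ndarray, radius: float = 20.0) -> np.ndarray:
    """Keep detections confirmed in more than half of the rotated views.

    centers_per_angle[i] holds the (n_i, 2) detected box centers from the
    view rotated by angles[i]; grouping by center distance stands in for
    the IoU-based matching a full implementation would use.
    """
    # Map every detection back into the original image frame.
    back = [rotate_points(c, -a, image_center)
            for c, a in zip(centers_per_angle, angles)]
    votes_needed = len(angles) // 2 + 1

    kept = []
    for p in np.vstack(back):
        # Count the views containing a detection within `radius` of p.
        hits = sum(np.any(np.linalg.norm(b - p, axis=1) < radius) for b in back)
        if hits >= votes_needed and not any(
                np.linalg.norm(p - q) < radius for q in kept):
            kept.append(p)
    return np.array(kept)

# Toy check: a target detected in 3 of 4 rotated views survives the vote.
angles = [0.0, 90.0, 180.0, 270.0]
ctr = np.array([512.0, 512.0])
true_pt = np.array([[100.0, 200.0]])
dets = [rotate_points(true_pt, a, ctr) for a in angles[:3]] + [np.empty((0, 2))]
print(majority_vote(dets, angles, ctr))  # approximately [[100, 200]]
```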
Show Figures

Graphical abstract

Figure 1. (Top) Original image and feature map. (Bottom) 270° rotation and feature map.
Figure 2. Aircraft detection process for remote sensing images based on the multi-angle feature-driven strategy.
Figure 3. The processing flow of the detection boxes based on the majority voting strategy.
Figure 4. The structure of the binary classification network.
Figure 5. Object detection dataset subsets. (a) Dataset I. (b) Dataset II. (c) Dataset III.
Figure 6. Subsets of the binary classification network dataset. (a) Positive samples. (b) Negative samples.
Figure 7. Results on different datasets. (a) Results on Dataset II. (b) Results on Dataset III.
Figure 8. P-R curves of the different models on (a) Dataset II and (b) Dataset III.
Figure 9. Comparison of detection results on the test set (the second row shows partial enlarged views; the yellow dashed ellipses emphasize the changed areas). (a,c) Results of Faster R-CNN. (b,d) Results after the multi-angle feature-driven strategy (different color boxes represent the preliminary detection results). (a,b) Dataset II. (c,d) Dataset III.
Figure 10. Results after the majority voting strategy (the second row shows partial enlarged views; the yellow dashed ellipses emphasize the changed areas). (a,c) Results after the multi-angle feature-driven strategy (different color boxes represent the preliminary detection results). (b,d) Results after the majority voting strategy. (a,b) Dataset II. (c,d) Dataset III.
22 pages, 46802 KiB  
Article
Spatial Autocorrelation of Martian Surface Temperature and Its Spatio-Temporal Relationships with Near-Surface Environmental Factors across China’s Tianwen-1 Landing Zone
by Yaowen Luo, Jianguo Yan, Fei Li and Bo Li
Remote Sens. 2021, 13(11), 2206; https://doi.org/10.3390/rs13112206 - 4 Jun 2021
Cited by 11 | Viewed by 3662
Abstract
Variations in the Martian surface temperature indicate patterns of surface energy exchange. The Martian surface temperature at a location is similar to that at adjacent locations, but an understanding of temperature clusters across multiple locations will deepen our knowledge of planetary surface processes [...] Read more.
Variations in the Martian surface temperature indicate patterns of surface energy exchange. The Martian surface temperature at a location is similar to that at adjacent locations, but an understanding of temperature clusters across multiple locations will deepen our knowledge of planetary surface processes overall. The spatial coherence of the Martian surface temperature (ST) at different locations, the spatio-temporal variations in temperature clusters, and the relationships between ST and near-surface environmental factors, however, are not well understood. To fill this gap, we studied an area to the south of Utopia Planitia, the landing zone for the Tianwen-1 Mars exploration mission. The spatial aggregation of three Martian ST indicators (STIs), including sol average temperature (SAT), sol temperature range (STR), and sol-to-sol temperature change (STC), was quantitatively evaluated using clustering analysis at the global and local scales. In addition, we detected the spatio-temporal variations in the relations between the STIs and seven potential driving factors, including thermal inertia, albedo, dust, elevation, slope, and zonal and meridional winds, across the study area during sols 81 to 111 in Martian years 29–32, based on a geographically and temporally weighted regression (GTWR) model. We found that the SAT, STR, and STC were not randomly distributed over space but exhibited signs of significant spatial aggregation. Thermal inertia and dust made the greatest contribution to the fluctuation in the STIs over time. The local surface temperature was likely affected by the slope, wind, and local circulation, especially in areas with a large slope and low thermal inertia. In addition, the sheltering effects of the mountains at the edge of the basin likely contributed to the spatial differences in SAT and STR. These results are a reminder that the spatio-temporal variation in the local driving factors associated with Martian surface temperature cannot be neglected. Our research contributes to the understanding of the surface environment that might compromise the survival and operations of the Tianwen-1 lander on the Martian surface. Full article
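The global clustering test behind statements like "not randomly distributed over space" is typically Moran's I, and the LISA maps in the figures are its per-location decomposition. A minimal sketch of global Moran's I is below, assuming a value vector and a binary spatial weights matrix; the toy line of cells is purely illustrative.

```python
import numpy as np

def morans_i(values: np.ndarray, w: np.ndarray) -> float:
    """Global Moran's I for values and a spatial weights matrix w.

    I = (n / S0) * (z' W z) / (z' z), where z are the mean-centred values
    and S0 is the sum of all weights. I > 0 indicates clustering (similar
    temperatures in neighbouring cells), I < 0 dispersion, ~0 randomness.
    """
    z = values - values.mean()
    s0 = w.sum()
    return values.size / s0 * (z @ w @ z) / (z @ z)

# Toy example: six cells on a line with rook-style neighbour weights,
# holding one cold cluster and one warm cluster.
vals = np.array([10.0, 11.0, 10.5, 20.0, 21.0, 19.5])
w = np.zeros((6, 6))
for i in range(5):
    w[i, i + 1] = w[i + 1, i] = 1.0
print(morans_i(vals, w))  # ~0.6: neighbouring cells resemble each other
```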
(This article belongs to the Special Issue Cartography of the Solar System: Remote Sensing beyond Earth)
Show Figures

Graphical abstract

Figure 1. The location of the study area (a), the elevation distribution (b), and the average dust distribution (c) during sols 81–111 from Martian years 29 to 32 across the study area.
Figure 2. Geological map of the study area. A description of the geological units is given in Appendix A.
Figure 3. The average surface temperature during sols 81–111 from Martian years 29 to 32.
Figure 4. The change in sol average temperature (SAT), sol temperature range (STR), and sol-to-sol temperature change (STC) over time during sols 81–111 in Martian years 29–32.
Figure 5. The trend in global spatial autocorrelation (p < 0.001) for sol average temperature (a), sol temperature range (b), and sol-to-sol temperature change (c) during sols 81–111 from Martian years 29 to 32 in the study area.
Figure 6. LISA map for sol average temperature (SAT) during sols 81–92 in Martian year 29. The LISA maps for SAT during sols 81–111 from Martian years 29 to 32 are provided in the Supplementary Materials (Figures S1A–D).
Figure 7. LISA map for sol temperature range (STR) during sols 81–92 in Martian year 29. The LISA maps for STR during sols 81–111 from Martian years 29 to 32 are provided in the Supplementary Materials (Figures S2A–D).
Figure 8. LISA map for sol-to-sol temperature change (STC) during sols 81–92 in Martian year 29. The LISA maps for STC during sols 81–111 from Martian years 29 to 32 are provided in the Supplementary Materials (Figures S3A–D).
Figure 9. The temporal trend in the coefficients of the independent variables for SAT based on the GTWR model during sols 81–111 from Martian years 29 to 32.
Figure 10. The temporal trend in the coefficients of the independent variables for STR based on the GTWR model during sols 81–111 from Martian years 29 to 32.
Figure 11. The distribution of the coefficients of the affecting factors (thermal inertia (a), albedo (b), elevation (c), dust (d), slope (e), zonal wind (f), and meridional wind (g)) for SAT.
Figure 12. The distribution of the coefficients of the affecting factors (thermal inertia (a), albedo (b), elevation (c), dust (d), slope (e), zonal wind (f), and meridional wind (g)) for STR.
21 pages, 41697 KiB  
Article
Mapping Outburst Floods Using a Collaborative Learning Method Based on Temporally Dense Optical and SAR Data: A Case Study with the Baige Landslide Dam on the Jinsha River, Tibet
by Zhongkang Yang, Jinbing Wei, Jianhui Deng, Yunjian Gao, Siyuan Zhao and Zhiliang He
Remote Sens. 2021, 13(11), 2205; https://doi.org/10.3390/rs13112205 - 4 Jun 2021
Cited by 10 | Viewed by 3811
Abstract
Outburst floods resulting from giant landslide dams can cause devastating damage along hundreds or thousands of kilometres of a river. Accurate and timely delineation of flood-inundated areas is essential for disaster assessment and mitigation. There have been significant advances in flood mapping [...] Read more.
Outburst floods resulting from giant landslide dams can cause devastating damage along hundreds or thousands of kilometres of a river. Accurate and timely delineation of flood-inundated areas is essential for disaster assessment and mitigation. There have been significant advances in flood mapping using remote sensing images in recent years, but little attention has been devoted to outburst flood mapping. The short-duration nature of these events and observation constraints from cloud cover have significantly challenged outburst flood mapping. This study used the outburst flood of the Baige landslide dam on the Jinsha River on 3 November 2018 as an example to propose a new flood mapping method that combines optical images from Sentinel-2, synthetic aperture radar (SAR) images from Sentinel-1 and a Digital Elevation Model (DEM). First, in the cloud-free region, a comparison of four spectral indices calculated from a time series of Sentinel-2 images indicated that the normalized difference vegetation index (NDVI), with a threshold of 0.15, provided the best separation of the flooded area. Subsequently, in the cloud-covered region, an analysis of dual-polarization RGB false color composite images and backscattering coefficient differences of Sentinel-1 SAR data showed an apparent response to the changes in ground roughness caused by the flood. We then built a flood range prediction model based on the random forest algorithm. Training samples consisted of 13 feature vectors obtained from the hue-saturation-value (HSV) color space, backscattering coefficient differences/ratios, DEM data, and a label set from the flood range prepared from the Sentinel-2 images. Finally, a field investigation and a confusion matrix were used to test the prediction accuracy of the end-of-flood map. The overall accuracy and Kappa coefficient were 92.3% and 0.89, respectively. The full extent of the outburst flood was successfully obtained within five days of its occurrence. The multi-source data merging framework and the massive sample preparation method with SAR images proposed in this paper provide a practical demonstration for similar machine learning applications using remote sensing. Full article
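The two per-pixel ingredients of the method are simple to sketch: the NDVI computed from Sentinel-2 red/NIR reflectance with the reported 0.15 threshold for the cloud-free part of the scene, and the pre- versus post-event VV backscatter difference that feeds the random forest where clouds block the optical view. The arrays below are toy stand-ins; band scaling, speckle filtering, and the remaining features of the 13-element vector are omitted.

```python
import numpy as np

def ndvi_flood_mask(red: np.ndarray, nir: np.ndarray,
                    threshold: float = 0.15) -> np.ndarray:
    """Flooded pixels from post-event Sentinel-2 reflectance: NDVI < threshold.

    Freshly scoured or silted channels lose vegetation cover, so low NDVI
    separates the flooded area in the cloud-free part of the scene.
    """
    ndvi = (nir - red) / np.clip(nir + red, 1e-6, None)
    return ndvi < threshold

def vv_difference_db(vv_pre_db: np.ndarray, vv_post_db: np.ndarray) -> np.ndarray:
    """Pre-minus-post VV backscatter change (dB): the flood alters surface
    roughness, so this difference is one of the SAR features handed to the
    random forest under cloud."""
    return vv_pre_db - vv_post_db

# Toy 3x3 scene: the first two columns are flooded (low NDVI).
red = np.array([[0.10, 0.10, 0.30]] * 3)
nir = np.array([[0.12, 0.11, 0.60]] * 3)
print(ndvi_flood_mask(red, nir))
```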
Show Figures

Figure 1. Study area location and image range (numbers indicate township codes; Nos. 2, 17, 39 and 42 correspond to Bo luo, Zhu balong, Ju dian, and Shi gu townships, respectively).
Figure 2. Field validation sites. Left: the distribution of validation sites. Right: (a–c) scene photos documenting flood damage.
Figure 3. Examples of the Sentinel-2 images used for the different outburst flood scenes. Left: spatial distribution of the three flood mapping scenes. Right: image examples of the corresponding scenes: (a) open flood, landslide barrier lake; (b) open flood, ongoing flood; (c) end-of-flood in the cloudless area; (d) end-of-flood in the cloud-covered area.
Figure 4. The workflow of the proposed method for the outburst floods.
Figure 5. The four Sentinel-2 spectral indices. The time series runs from 11.09 to 11.19. From left to right: NDVI, NDWI, MNDWI, AWEI. From top to bottom: observation dates 11.09, 11.14 and 11.19; the blue dashed line between the images of 11.14 and 11.19 indicates that the flood occurred on 11.15. Line b is the river section location in Figure 6, and rectangle a is the test area of Figure 7.
Figure 6. Time series of the four spectral indices along profile b.
Figure 7. Flood mapping experiments obtained by applying NDVI (middle row) and NDWI (last row) to the test site in Ju Dian Town using Sentinel-2 images on 19 November 2018. (a) Google 3D view of the test site; (b) flood transiting scene photos at 8:00 a.m., 15 November 2018.
Figure 8. Sentinel-1 FCC images with different combinations of SAR polarizations. (a) VV SAR image on 2018.11.03; (b) VV SAR image on 2018.11.15; (c) R: 11.03 VV, G: 11.15 VV, B: 11.15 VH; (d) R: 11.03 VH, G: 11.15 VV, B: 11.15 VV; (e) R: 11.03 VV, G: 11.15 VH, B: 11.15 VV; (f) R: 11.03 VV, G: 11.15 VV, B: 11.15 VV. The white dotted line is the inundation range from the Sentinel-2 data.
Figure 9. Sentinel-1 SAR FCC images and VV-polarized backscattering coefficient differences. (a–c) FCC images from C1, C2 and C4 in Figure 1, respectively. (d–f) VV-polarized backscatter coefficient differences.
Figure 10. RF algorithm flood mapping results. (a) Spatial distribution of training samples and prediction samples in the cloud occlusion area C4 (Shigu Town, No. 42); (b) flood prediction results; (c) the detailed mapping results for the red box.
Figure 11. Accuracy evaluation of the mapping results. (a) Google image; (b,c) flood transit scene photos (filming time: 12:00 a.m., 15 November 2018); (d) the classification results of the region using the CDAT algorithm.
Figure 12. The spatial pattern of the flooded area of the "1103" Baige landslide outburst flood.
Figure 13. Photos of scouring and silting damage after the flood. (A–D) correspond to areas (A–D) in Figure 9.
Figure 14. Feature importance ranked by the RF algorithm. HD and S are the relative height difference and slope; Hu, St, and Va are the hue, saturation, and value of the HSV color space; V and v are VV_N+1 and VV_N1, and H and h are VH_N+1 and VH_N1, respectively.