Remote Sens., Volume 14, Issue 8 (April-2 2022) – 186 articles

Cover Story (view full-size image): The spaceborne hyperspectral missions foreseen to be launched in the next few years will provide an unprecedented amount of spectroscopic data, enabling new research possibilities within several fields of natural resources, including the “Agriculture and Food Security” domain. To efficiently exploit this data stream and extract useful information for sustainable agriculture applications, new processing methods and techniques need to be studied and implemented. This work evaluated the potential of a hybrid approach (radiative transfer model plus machine learning) to assess maize traits (chlorophyll and nitrogen content at leaf and canopy level) within the framework of the future CHIME mission. The promising results obtained in this study support the feasibility of crop trait retrieval from spaceborne imaging spectroscopy. View this paper.
  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive table of contents of newly released issues.
  • PDF is the official format for papers published in both HTML and PDF forms. To view a paper in PDF format, click on the "PDF Full-text" link and use the free Adobe Reader to open it.
29 pages, 5415 KiB  
Article
Predicting Landslides Susceptible Zones in the Lesser Himalayas by Ensemble of Per Pixel and Object-Based Models
by Ujjwal Sur, Prafull Singh, Sansar Raj Meena and Trilok Nath Singh
Remote Sens. 2022, 14(8), 1953; https://doi.org/10.3390/rs14081953 - 18 Apr 2022
Cited by 14 | Viewed by 3656
Abstract
Landslide susceptibility mapping is a contemporary method for delineating landslide hazard zones and holistically mitigating future landslide risks for planning and decision-making. The significance of this study is that it is the first application of the ‘geon’ model, as a contemporary landslide susceptibility mapping (LSM) technique, to the complex lesser Himalayan topography. This study adopted per-pixel ensemble approaches through the modified frequency ratio (MFR) and fuzzy analytical hierarchy process (FAHP) and compared them with the ‘geons’ (object-based) aggregation method to produce an LSM for the lesser Himalayan Kalsi-Chakrata road corridor. For the landslide susceptibility models, 14 landslide conditioning factors were carefully chosen, namely slope, slope aspect, elevation, lithology, rainfall, seismicity, normalized difference vegetation index, stream power index, land use/land cover, soil, topographic wetness index, and proximity to drainage, road, and fault. The inventory data for past landslides were derived from preceding satellite images, intensive field surveys, and validation surveys. These inventory data were divided into training and test datasets following the commonly accepted 70:30 ratio. GIS-based statistical techniques were adopted to establish the correlation between landslide training sites and conditioning factors. To determine the accuracy of the model output, the LSMs were validated through the statistical methods of receiver operating characteristics (ROC) and the relative landslide density index (R-index). The accuracy results indicate that the object-based geon methods produced higher accuracy (geon FAHP: 0.934; geon MFR: 0.910) than the per-pixel approaches (FAHP: 0.887; MFR: 0.841). The results also showed that the geon method constructs meaningful regional units for future mitigation strategies and development. The present study may significantly benefit decision-makers and regional planners in selecting appropriate risk mitigation procedures at a local scale to counter the potential damages and losses from landslides in the area. Full article
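As a hedged illustration of the per-pixel workflow described above (not the authors' code), the sketch below computes a frequency-ratio weight for one conditioning-factor class map and validates a susceptibility index against held-out landslide pixels with ROC/AUC; the array names are assumptions.

```python
# Minimal sketch: frequency-ratio weighting and ROC/AUC validation for a
# landslide susceptibility raster, assuming NumPy arrays for the landslide
# inventory mask and one conditioning-factor class map (hypothetical names).
import numpy as np
from sklearn.metrics import roc_auc_score

def frequency_ratio(factor_classes: np.ndarray, landslide_mask: np.ndarray) -> dict:
    """FR of a class = (landslide pixels in class / all landslide pixels)
    divided by (pixels in class / all pixels)."""
    fr = {}
    total_pixels = factor_classes.size
    total_slides = float(landslide_mask.sum())
    for cls in np.unique(factor_classes):
        in_class = factor_classes == cls
        slide_share = landslide_mask[in_class].sum() / total_slides
        area_share = in_class.sum() / total_pixels
        fr[cls] = slide_share / area_share if area_share > 0 else 0.0
    return fr

def validate_lsm(lsi: np.ndarray, test_mask: np.ndarray) -> float:
    # ROC/AUC of the susceptibility index against held-out test landslide pixels.
    return roc_auc_score(test_mask.ravel(), lsi.ravel())
```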
Show Figures

Graphical abstract

Figure 1. Location map of the study area.
Figure 2. Active landslides along the Kalsi-Chakrata road corridor: (a) rock fall between Chapanu and Sahiya; (b) Amroha landslide site in 2017, with a damaged retaining wall.
Figure 3. Landslide inventory along the Kalsi-Chakrata road corridor showing the spatial distribution of (a) the test and training sites selected for model building and (b) landslide areas as polygons.
Figure 4. Landslide conditioning factors used in this study: (a) slope angle, (b) aspect, (c) elevation, (d) distance to drainage, (e) lithological units, (f) land use/land cover (LULC), (g) soil, (h) NDVI, (i) rainfall, (j) seismicity, (k) distance to road, (l) distance to faults, (m) TWI and (n) SPI.
Figure 5. Methodology adopted for this study.
Figure 6. LSI mapping using (a) the MFR model; (b) the FAHP model.
Figure 7. Percentage area under landslide susceptible zones obtained from the (a) MFR, (b) FAHP, (c) geon MFR and (d) geon FAHP models.
Figure 8. LSI mapping using (a) MFR geons and (b) FAHP geons for the Kalsi-Chakrata road corridor.
Figure 9. ROC curve showing the precision for the MFR, FAHP, geon MFR and geon FAHP models.
Figure 10. R-index for LSI classes.
19 pages, 15180 KiB  
Article
Discussion on InSAR Identification Effectivity of Potential Landslides and Factors That Influence the Effectivity
by Jingtao Liang, Jihong Dong, Su Zhang, Cong Zhao, Bin Liu, Lei Yang, Shengwu Yan and Xiaobo Ma
Remote Sens. 2022, 14(8), 1952; https://doi.org/10.3390/rs14081952 - 18 Apr 2022
Cited by 17 | Viewed by 2702
Abstract
The southwest mountainous area of China is one of the most landslide-prone areas in the world. In this paper, we used Ya’an City and Garzê Tibetan Autonomous Prefecture in Sichuan Province as the research areas to explore how effectively potential landslides can be identified over large areas using synthetic aperture radar (SAR) data with different wavelengths (Sentinel-1, ALOS-2), different processing methods (SBAS-InSAR, Stacking-InSAR), and different geological environmental conditions. The results show the following: (1) The effect of identifying landslides with different slope directions is largely affected by the satellite orbit direction; when landslide hazards are identified across a large area, the joint monitoring mode of ascending and descending orbit data is required. (2) The period of monitoring affects the identification of potential landslides; when landslide identification is carried out in southwestern China, an InSAR monitoring period of more than 2 years is recommended. (3) In different geological environmental regions, SBAS technology and Stacking technology have their own advantages; Stacking technology identifies more potential landslides, while SBAS technology identifies potential landslides with higher accuracy. (4) The degree of vegetation coverage has a great impact on the landslide identification performance of different SAR data sources. In low-density vegetation coverage areas, the landslide identification result using Sentinel-1 data seems to be better than the result using ALOS-2 data; in high-density vegetation coverage areas, the result using ALOS-2 data is better than that using Sentinel-1 data. Full article
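For readers unfamiliar with the Stacking-InSAR method mentioned above, the following minimal sketch (not the authors' processing chain) shows the standard stacking estimator: a phase-weighted average over a set of unwrapped interferograms converted to a line-of-sight rate. Array shapes and the C-band wavelength default are illustrative assumptions.

```python
# Minimal sketch: Stacking-InSAR mean deformation rate from a stack of
# unwrapped interferograms, which suppresses random atmospheric noise.
import numpy as np

def stacking_rate(unwrapped_phases: np.ndarray, time_spans_yr: np.ndarray,
                  wavelength_m: float = 0.0556) -> np.ndarray:
    """unwrapped_phases: (k, rows, cols) stack of unwrapped interferograms [rad];
    time_spans_yr: (k,) temporal baselines in years (default wavelength is
    roughly Sentinel-1 C-band, ~5.56 cm). Returns line-of-sight rate in m/yr."""
    # Weighted phase-rate estimate: sum(phi_i) / sum(dt_i)
    phase_rate = unwrapped_phases.sum(axis=0) / time_spans_yr.sum()
    # Convert phase rate to displacement rate along the radar line of sight.
    return phase_rate * wavelength_m / (4 * np.pi)
```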
Show Figures

Graphical abstract

Figure 1. The location of the study area and the distribution range of SAR data.
Figure 2. The topography of the study area.
Figure 3. Rainfall distribution map of the study area.
Figure 4. Flow chart of InSAR data processing in the study area.
Figure 5. Identification flow chart of potential landslides based on InSAR technology.
Figure 6. (a) Deformation map of the surface area from InSAR. (b) Optical satellite image of the ground deformation area on 1 February 2021.
Figure 7. (a) Deformation map of the surface area from InSAR. (b) Optical satellite image of the ground deformation area on 30 September 2020.
Figure 8. (a) Deformation map of the surface area from InSAR. (b) Optical satellite image of the ground deformation area on 27 August 2020.
Figure 9. Overall distribution map of potential landslides identified in the study area. The upper-right corner shows the statistics of the percentage of the Sentinel-1 data visibility distribution area in the study area.
Figure 10. Statistics of the number of landslides identified by ascending and descending orbit data in the study area.
Figure 11. Statistics of landslide identification results by SBAS technology and Stacking technology in the study area.
Figure 12. Schematic diagram of potential landslide identification in the ascending and descending track data (the red zone represents the location of the landslide; in the descending track data, AB is the area with a good identification effect and CD is the shaded area; in the ascending track data, CD is the area with a good identification effect and AB is the shaded area).
Figure 13. Sentinel-1 ascending orbit (left) and descending orbit (right) data visibility distribution maps.
Figure 14. Schematic map of the joint monitoring mode of ascending and descending orbit data (θ is the incidence angle of the radar).
Figure 15. Comparison of landslide identification results from ascending and descending orbit data.
Figure 16. Comparison of deformation rate maps and identification results for different monitoring periods based on SBAS-InSAR technology. (a) One-year monitoring period; (b) two-year monitoring period; (c) three-year monitoring period.
Figure 17. Statistics of landslide identification by SBAS technology and Stacking technology in different regions.
Figure 18. (a,b) Correlation coefficient graphs for the same area, Garze Prefecture, generated from Sentinel-1 data at a summer interval of 48 days and from ALOS-2 data at a summer interval of 168 days, respectively, without filter processing; (c,d) correlation coefficient graphs for the same area, the Ya’an city area, obtained with the same filtering method and filtering window, generated from Sentinel-1 data at a 364-day interval and from ALOS-2 data at a 12-day interval in summer; (e) coherence statistics at different intervals, obtained with Sentinel-1 data for the area selected in (a), between 7 May 2018 and 18 July 2018; (f) coherence statistics at different intervals, obtained with Sentinel-1 data for the area selected in (c), between 10 November 2018 and 21 January 2019.
Figure 19. Comparison of Stacking-InSAR results of different waveband data in low-density vegetation coverage areas (left: result from Sentinel-1 data; right: result from ALOS-2 data).
Figure 20. InSAR identification results and on-site verification photos of the local area. (a) Result from ALOS-2 data; (b) result from Sentinel-1 data; (c) optical satellite image of the area; (d) site photo of the H03 landslide; (e) site photo of the H04 landslide; (f) photo of deformation characteristics of the H03 landslide; (g) photo of deformation characteristics of the H01 landslide; (h) photo of deformation characteristics of the H02 landslide; (i) photo of deformation characteristics of the H05 landslide; (j) photo of deformation characteristics of the H06 landslide; (k) photo of deformation characteristics of the H07 landslide. Red arrows in (d–h,k) indicate the sliding direction; red arrows in (i,j) indicate the location of cracks.
Figure 21. Comparison of Stacking-InSAR results of different waveband data in high-density vegetation coverage areas. (a) Result from Sentinel-1 data; (b) result from ALOS-2 data.
Figure 22. InSAR identification and on-site verification photos of a typical landslide. (a) InSAR result from ALOS-2 data; (b) InSAR result from Sentinel-1 data; (c) optical satellite image of the landslide; (d–g) photos showing the deformation characteristics of the landslide. Red arrows in (d,e,g) indicate the location of cracks.
26 pages, 10862 KiB  
Article
Hyperspectral Image Classification Based on Spectral Multiscale Convolutional Neural Network
by Cuiping Shi, Jingwei Sun and Liguo Wang
Remote Sens. 2022, 14(8), 1951; https://doi.org/10.3390/rs14081951 - 18 Apr 2022
Cited by 6 | Viewed by 4096
Abstract
In recent years, convolutional neural networks (CNNs) have been widely used for hyperspectral image classification and have shown good performance. However, compared with classification using sufficient training samples, classification accuracy on hyperspectral images degrades readily when only a small number of samples is available. Moreover, although CNNs can effectively classify hyperspectral images, the rich spatial and spectral information of these images means that the efficiency of feature extraction still needs to be improved. To address these problems, a spatial–spectral attention fusion network is proposed that uses a four-branch multiscale block (FBMB) to extract spectral features and 3D-Softpool to extract spatial features. The network consists of three main parts, connected in turn to fully extract the features of hyperspectral images. In the first part, four branches with different convolution kernel sizes extract spectral features, and a spectral attention block follows each branch. In the second part, spectral features are reused through dense connection blocks, and the spectral attention module then refines the extracted spectral features. The third part mainly extracts spatial features: a DenseNet module and a spatial attention block jointly extract spatial features, which are then fused with the previously extracted spectral features. Experiments were carried out on four commonly used hyperspectral data sets. The results show that the proposed method achieves better classification performance than several existing methods when a small number of training samples is used. Full article
(This article belongs to the Special Issue Recent Advances in Processing Mixed Pixels for Hyperspectral Image)
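A minimal sketch of the four-branch multiscale idea described in the abstract, written as an interpretation rather than the authors' SFBMSN code: each branch convolves along the spectral axis with a different kernel size and the branch outputs are concatenated. Channel counts and kernel sizes are assumptions.

```python
# Minimal sketch: four parallel spectral branches with different kernel sizes.
import torch
import torch.nn as nn

class FourBranchMultiscaleBlock(nn.Module):
    def __init__(self, in_channels: int = 1, branch_channels: int = 8):
        super().__init__()
        # One branch per kernel size; padding keeps the spectral length unchanged.
        self.branches = nn.ModuleList([
            nn.Sequential(
                nn.Conv1d(in_channels, branch_channels, kernel_size=k, padding=k // 2),
                nn.BatchNorm1d(branch_channels),
                nn.ReLU(inplace=True),
            )
            for k in (1, 3, 5, 7)
        ])

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, in_channels, bands); output: (batch, 4*branch_channels, bands)
        return torch.cat([branch(x) for branch in self.branches], dim=1)

# Example: a batch of 16 pixels with 200 spectral bands.
block = FourBranchMultiscaleBlock()
features = block(torch.randn(16, 1, 200))  # -> torch.Size([16, 32, 200])
```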
Show Figures

Graphical abstract

Figure 1. The overall framework of the proposed SFBMSN network.
Figure 2. Schematic diagram of the spectral self-attention module.
Figure 3. Schematic diagram of DenseNet.
Figure 4. Schematic diagram of 3D-Softpool.
Figure 5. Schematic diagram of the spatial self-attention module.
Figure 6. (a) The OA effect of n on the four hyperspectral image data sets. (b) The OA effect of spatial size on the four hyperspectral image data sets.
Figure 7. Classification maps of the IN data set using 3% training samples: (a) false color image, (b) ground truth map (GT) and (c–j) the classification maps and overall accuracy of different algorithms.
Figure 8. Classification maps of the UP data set using 0.3% training samples: (a) false color image, (b) ground truth map (GT) and (c–j) the classification maps and overall accuracy of different algorithms.
Figure 9. Classification maps of the SV data set using 0.5% training samples: (a) false color image, (b) ground truth map (GT) and (c–j) the classification maps and overall accuracy of different algorithms.
Figure 10. Classification maps of the KSC data set using 5% training samples: (a) false color image, (b) ground truth map (GT) and (c–j) the classification maps and overall accuracy of different algorithms.
Figure 11. Ablation experiments of the modules of the proposed method on different data sets: (a) attention mechanism, (b) FBMB, (c) 3D-Softpool and (d) dense connection.
Figure 12. Schematic diagram of FBMB.
Figure 13. Comparative experimental results of FBMB, FBSSB, C1, C2, C3 and C4 on different data sets.
Figure 14. The classification performance of different methods compared under different training sample ratios on the IN, UP, SV and KSC data sets: (a) IN data set, (b) UP data set, (c) SV data set and (d) KSC data set.
21 pages, 10401 KiB  
Article
An Adaptive Focal Loss Function Based on Transfer Learning for Few-Shot Radar Signal Intra-Pulse Modulation Classification
by Zehuan Jing, Peng Li, Bin Wu, Shibo Yuan and Yingchao Chen
Remote Sens. 2022, 14(8), 1950; https://doi.org/10.3390/rs14081950 - 18 Apr 2022
Cited by 15 | Viewed by 3039
Abstract
To solve the difficulty associated with radar signal classification in the case of few-shot signals, we propose an adaptive focus loss algorithm based on transfer learning. Firstly, we trained a one-dimensional convolutional neural network (CNN) with radar signals of three intra-pulse modulation types in the source domain, which were effortlessly obtained and had sufficient samples. Then, we transferred the knowledge obtained by the convolutional layer to nine types of few-shot complex intra-pulse modulation classification tasks in the target domain. We propose an adaptive focal loss function based on the focal loss function, which can estimate the parameters based on the ratio of hard samples to easy samples in the data set. Compared with other existing algorithms, our proposed algorithm makes good use of transfer learning to transfer the acquired prior knowledge to new domains, allowing the CNN model to converge quickly and achieve good recognition performance in case of insufficient samples. The improvement based on the focal loss function allows the model to focus on the hard samples while estimating the focusing parameter adaptively instead of tediously repeating experiments. The experimental results show that the proposed algorithm had the best recognition rate at different sample sizes with an average recognition rate improvement of 4.8%, and the average recognition rate was better than 90% for different signal-to-noise ratios (SNRs). In addition, upon comparing the training processes of different models, the proposed method could converge with the least number of generations and the shortest time under the same experimental conditions. Full article
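As background for the loss function discussed above, the sketch below implements the standard focal loss, FL(p_t) = −(1 − p_t)^γ log(p_t); the paper's contribution, estimating the focusing parameter γ adaptively from the ratio of hard to easy samples, is not reproduced here.

```python
# Minimal sketch: standard focal loss for multi-class classification (the
# adaptive gamma estimation described in the abstract is not shown).
import torch
import torch.nn.functional as F

def focal_loss(logits: torch.Tensor, targets: torch.Tensor, gamma: float = 2.0) -> torch.Tensor:
    """logits: (batch, num_classes) raw scores; targets: (batch,) class indices."""
    log_probs = F.log_softmax(logits, dim=1)
    # log p_t for the true class of each sample
    log_pt = log_probs.gather(1, targets.unsqueeze(1)).squeeze(1)
    pt = log_pt.exp()
    # Down-weight easy samples (pt close to 1) and focus on hard ones.
    return (-(1.0 - pt) ** gamma * log_pt).mean()

# Example with 9 modulation classes, as in the target domain of the paper.
loss = focal_loss(torch.randn(4, 9), torch.tensor([0, 3, 5, 8]))
```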
Show Figures

Figure 1. Convolution and pooling process.
Figure 2. The architecture of the one-dimensional convolutional neural network with adaptive focal loss function based on transfer learning.
Figure 3. Differentiation between LFM signals based on the noise level: (a) −5 dB; (b) 0 dB; (c) 5 dB; (d) noiseless.
Figure 4. The learning curve of classification accuracy versus number of samples.
Figure 5. Different types of intra-pulse modulated radar signals in the source domain: (a) SCF; (b) LFM; (c) SFM.
Figure 6. Different types of intra-pulse modulated radar signals in the target domain: (a) BPSK; (b) BFSK; (c) QFSK; (d) FRANK; (e) EQFM; (f) DLFM; (g) MLFM; (h) LFM–BFSK; (i) BPSK–BFSK.
Figure 7. The average accuracy value during the training process.
Figure 8. The accuracy of the five models during the training process at an SNR of 0 dB with 450 training samples (50 for each signal).
Figure 9. The number of iterations and the time used to train the model with different methods.
Figure 10. The classification accuracy of different methods with different training sets at 0 dB.
Figure 11. The classification accuracy of different methods at different SNRs when the sample size was 900 (100 for each signal).
Figure 12. The average classification accuracy over all SNRs for different focusing parameters with different sample sizes.
Figure 13. Classification accuracies in different noise environments.
19 pages, 19673 KiB  
Article
A Pre-Operational System Based on the Assimilation of MODIS Aerosol Optical Depth in the MOCAGE Chemical Transport Model
by Laaziz El Amraoui, Matthieu Plu, Vincent Guidard, Flavien Cornut and Mickaël Bacles
Remote Sens. 2022, 14(8), 1949; https://doi.org/10.3390/rs14081949 - 18 Apr 2022
Cited by 6 | Viewed by 2520
Abstract
In this study, we present a pre-operational forecasting and assimilation system for different types of aerosols. This system has been developed within MOCAGE, the chemistry-transport model of Météo-France, and assimilates the Aerosol Optical Depth (AOD) from MODIS (Moderate Resolution Imaging Spectroradiometer) onboard both Terra and Aqua. It is based on the AOD assimilation system within the MOCAGE model and operates on a daily basis with a global configuration of 1° × 1° (longitude × latitude). The motivation for such a development is the capability to predict and anticipate extreme events and their impacts on air quality and aviation safety, for instance in the case of a large volcanic eruption. The pre-operational system outputs were validated in terms of AOD against global AERONET observations over two complete years (January 2018–December 2019). The comparison between the two datasets shows that the correlation between the MODIS assimilated outputs and AERONET over the whole study period is 0.77, whereas the bias and the RMSE (Root Mean Square Error) are 0.006 and 0.135, respectively. The ability of the pre-operational system to predict extreme events in near real time, such as desert dust transport and the propagation of biomass burning plumes, was tested and evaluated. We particularly present and document the desert dust outbreak that occurred over Greece in late March 2018 as well as the wildfire event that occurred in Australia between July 2019 and February 2020. Only these two events are presented here, but the assimilation chain has shown that it is capable of predicting desert dust and biomass burning aerosol events all over the globe. Full article
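The validation scores quoted above (correlation, bias, RMSE) can be computed from collocated AOD pairs as in the minimal sketch below; the array names are assumptions, not the operational code.

```python
# Minimal sketch: correlation, bias and RMSE between assimilated MODIS AODs
# (`analysis`) and matching AERONET observations (`aeronet`).
import numpy as np

def validation_scores(analysis: np.ndarray, aeronet: np.ndarray) -> dict:
    bias = np.mean(analysis - aeronet)
    rmse = np.sqrt(np.mean((analysis - aeronet) ** 2))
    corr = np.corrcoef(analysis, aeronet)[0, 1]
    return {"correlation": corr, "bias": bias, "rmse": rmse}

# Example with synthetic collocated AOD pairs.
rng = np.random.default_rng(0)
truth = rng.uniform(0.05, 0.8, size=1000)
scores = validation_scores(truth + rng.normal(0, 0.1, size=1000), truth)
```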
Show Figures

Graphical abstract

Figure 1. OMF (observation minus forecast) and OMA (observation minus analysis) assimilation diagnostics for the two years of operation of the pre-operational system, 2018 and 2019.
Figure 2. Statistics in terms of correlation, bias and RMSE between AERONET observations and MODIS analyses in terms of AOD for both years of assimilation, 2018 and 2019. All AERONET observations (Version 3) available over the globe during the whole two-year period are considered in this comparison.
Figure 3. Scatter plots of AERONET AOD versus assimilated MODIS AOD for the two years of study, 2018 and 2019. The colours represent the number of counts for each comparison. The comparisons cover 1 January to 31 December 2018 (left panel) and 1 January to 31 December 2019 (right panel). The thick black line in each panel is the regression line for each dataset.
Figure 4. Pictures illustrating the desert dust outbreak of 22–23 March 2018 over Greece. (a) Credit Greek Reporter (https://greece.greekreporter.com/2018/03/26/athens-acropolis-covered-in-african-dust-photos/, accessed on 10 April 2022). (b) Credit The Sun (https://www.thesun.co.uk/news/5886355/crete-orange-dust-sahara-desert-winds-africa-greek-island/, accessed on 10 April 2022). (c) Credit The Watchers (https://watchers.news/2018/03/23/severe-dust-storm-hits-crete-greece/, accessed on 10 April 2022).
Figure 5. (a) Geopotential height (in m) and wind flow (in m·s⁻¹); (b) temperature (in K), both on 22 March 2018 at 12:00 at the 700 hPa pressure level. (c,d) Same as (a,b), respectively, but for 23 March 2018 at 12:00.
Figure 6. Map of the AERONET stations used for the validation of the MODIS AOD assimilation run during the desert dust event over the eastern Mediterranean. The colour code refers to the number of observations at each station used within the comparison period, February–April 2018.
Figure 7. Time series of AOD at 550 nm from the MODIS AOD analyses (red) compared to the AERONET in situ measurements (green circles) between 1 February 2018 and 30 April 2018. The black line corresponds to the optical depth of the desert dust. The title of each panel gives the station name and coordinates (longitude and latitude). The correlation, bias and RMSE scores for the comparison at all stations are presented in Table 2.
Figure 8. Longitude–latitude maps issued from the pre-operational assimilation system for 22 March 2018 at 12:00 for different aerosol products: (a) surface desert dust concentration in mg·m⁻³, superimposed with the wind direction and intensity shown by black arrows; (b) desert dust aerosol optical depth; (c) surface PM10 concentration in mg·m⁻³.
Figure 9. (a) The measurement orbits of the CALIOP instrument during 6 January 2020 over the Australian region; the colour code refers to the number of vertical profiles from the beginning of measurement. (b) The vertical profiles of CALIOP observations in terms of backscatter coefficient (m⁻¹·sr⁻¹); the x-axis corresponds to the profile numbers shown in (a). The corresponding profiles from the model free run and the assimilated product are given in (c,d), respectively. (e) The mean vertical profiles of the ratios CALIOP/MOCAGE (cyan) and CALIOP/ASSIMILATION (orange), respectively; the corresponding vertical profiles of the standard deviations are presented in (f). The dashed vertical line in (e) represents a mean ratio of 1.
Figure 10. (Top) Aerosol Optical Depth of Organic Carbon (OC) over the Australian continent from the model free run (a) and the MODIS assimilated field (b) for 6 January 2020; the difference between (b) and (a) is presented in (c). (d) The zonal cross-section of OC concentration from the model free run at 37.5°S between 110° and 300° longitude and between 0 and 20 km in the vertical. (e) Same as (d) but for the MODIS assimilated field. The differences between the assimilation and the model free run fields are presented in (f).
19 pages, 4002 KiB  
Article
Tropical Species Classification with Structural Traits Using Handheld Laser Scanning Data
by Meilian Wang, Man Sing Wong and Sawaid Abbas
Remote Sens. 2022, 14(8), 1948; https://doi.org/10.3390/rs14081948 - 18 Apr 2022
Cited by 10 | Viewed by 2792
Abstract
Information about tree species plays a pivotal role in sustainable forest management. Light detection and ranging (LiDAR) technology has demonstrated its potential to obtain species information using the structural features of trees. Several studies have explored the structural properties of boreal or temperate trees from terrestrial laser scanning (TLS) data and applied them to species classification, but studies of the structural properties of tropical trees for species classification are rare. Compared to conventional static TLS, handheld laser scanning (HLS) is able to effectively capture point clouds of an individual tree with flexible movability. Therefore, in this study, we characterized the structural features of tropical species from HLS data as 23 LiDAR structural parameters, comprising 6 branch, 11 crown and 6 entire-tree parameters, and used these parameters to classify the species via 5 machine-learning (ML) models. The performance of each parameter was further evaluated and compared. Classification results showed that the employed parameters can achieve a classification accuracy of 84.09% using the support vector machine with a polynomial kernel. The evaluation of parameters indicated that one or two parameters are insufficient to classify the four species, and ten parameters were recommended to achieve satisfactory accuracy. The combination of different types of parameters, such as branch and crown parameters, can significantly improve classification accuracy. Finally, five sets of optimal parameters were suggested according to their importance and performance. This study also showed that the time- and cost-efficient HLS instrument could be a promising tool for tree-structure-related studies, such as structural parameter estimation, species classification, forest inventory, as well as sustainable tree management. Full article
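A hedged sketch of the classification step reported above: a polynomial-kernel support vector machine trained on per-tree structural parameters. The feature matrix and labels below are synthetic placeholders, not the HLS dataset.

```python
# Minimal sketch: species classification from structural parameters with a
# polynomial-kernel SVM (placeholder data, illustrative hyperparameters).
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(42)
X = rng.normal(size=(120, 23))          # 23 structural parameters per tree
y = rng.integers(0, 4, size=120)        # 4 species labels (placeholder)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)
model = make_pipeline(StandardScaler(), SVC(kernel="poly", degree=3, C=1.0))
model.fit(X_train, y_train)
print("test accuracy:", model.score(X_test, y_test))
```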
Show Figures

Graphical abstract

Figure 1. Location of the research area and distribution of trees of the four selected species. The colored icons in the dashed red circle indicate the general location of the selected four species. The icons in the solid red circle are specific sample locations of each species.
Figure 2. Examples of point clouds of the four species. From left to right: Aleurites moluccana, Ficus altissima, Delonix regia and Hibiscus tiliaceus.
Figure 3. Illustration of crown parameters. H_spreadiest is the height with the largest projection area. CH_l_min and CH_l_max are the minimum and maximum heights of the low crown, respectively. SH_crown is the crown start height. D_s and D_c are the maximum crown spread and maximum crown cross-spread, respectively.
Figure 4. Workflow of the classification method. Vali in each fold of the training data means validation data. The classification model is trained on the Train split of each fold and validated on Vali. After training on each fold, the parameters of the trained model are saved and used as the initial parameters of the classification model for the next fold. The final trained model is evaluated using Test (30%).
Figure 5. Comparison of basic tree parameters of the four species between manually measured values and values extracted from the 3D tree model. RMSE and R² are the root mean squared error and r-squared error of each parameter.
Figure 6. Boxplot of all structural parameter values of the four selected species. The individual points outside the box are the outliers of each parameter for each species. The horizontal lines below and above the box are the minimum and maximum values excluding outliers, respectively. The bottom and top lines of the box are the median values of the lower and upper halves of the parameter for each species, respectively. The bold line and square in each box are the median and mean values of each parameter for each species, respectively.
Figure 7. Correlation coefficient (r) values between the proposed structural parameters. The darker the blue color, the more positive the correlation between structural parameters; the darker the red color, the more negative the correlation.
Figure 8. Boxplot of the overall weighted accuracy achieved by all classification models using different numbers of structural parameters. The bold line is the median of all weighted accuracies achieved by the corresponding number of parameters. The individual black points are the outliers of all weighted accuracies. The bottom and top lines of the box are the medians of the lower and upper halves of all weighted accuracies achieved by the corresponding number of parameters.
Figure 9. Importance values of the employed parameters calculated from the top 200 weighted accuracies. The more frequently a parameter occurs in the first 200 parameter sets, the more important the parameter is, and the closer its importance value is to 1.
22 pages, 32990 KiB  
Article
Global Mapping of Soil Water Characteristics Parameters— Fusing Curated Data with Machine Learning and Environmental Covariates
by Surya Gupta, Andreas Papritz, Peter Lehmann, Tomislav Hengl, Sara Bonetti and Dani Or
Remote Sens. 2022, 14(8), 1947; https://doi.org/10.3390/rs14081947 - 18 Apr 2022
Cited by 17 | Viewed by 4119
Abstract
Hydrological and climatic modeling of near-surface water and energy fluxes is critically dependent on the availability of soil hydraulic parameters. Key among these parameters is the soil water characteristic curve (SWCC), a function relating soil water content (θ) to matric potential (ψ). The direct measurement of SWCC is laborious, hence, reported values of SWCC are spatially sparse and usually have only a small number of data pairs (θ, ψ) per sample. Pedotransfer function (PTF) models have been used to correlate SWCC with basic soil properties, but evidence suggests that SWCC is also shaped by vegetation-promoted soil structure and climate-modified clay minerals. To capture these effects in their spatial context, a machine learning framework (denoted as Covariate-based GeoTransfer Functions, CoGTFs) was trained using (a) a novel and comprehensive global dataset of SWCC parameters and (b) global maps of environmental covariates and soil properties at 1 km spatial resolution. Two CoGTF models were developed: one model (CoGTF-1) was based on predicted soil covariates because measured soil data are not generally available, and the other (CoGTF-2) used measured soil properties to model SWCC parameters. The spatial cross-validation of CoGTF-1 resulted, for the predicted van Genuchten SWCC parameters, in concordance correlation coefficients (CCC) of 0.321–0.565. To validate the resulting global maps of SWCC parameters and to compare the CoGTF framework to two pedotransfer functions from the literature, the predicted water contents at 0.1 m, 3.3 m, and 150 m matric potential were evaluated. The accuracy metrics for CoGTF were considerably better than PTF-based maps. Full article
(This article belongs to the Special Issue Global Gridded Soil Information Based on Machine Learning)
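For reference, the mapped parameters (α, n, θr, θs) enter the standard van Genuchten soil water characteristic curve, sketched below; the parameter values in the example are illustrative, not taken from the maps.

```python
# Minimal sketch: the van Genuchten SWCC,
# theta(psi) = theta_r + (theta_s - theta_r) / (1 + (alpha*|psi|)^n)^(1 - 1/n).
import numpy as np

def van_genuchten(psi_m: np.ndarray, alpha: float, n: float,
                  theta_r: float, theta_s: float) -> np.ndarray:
    """psi_m: matric potential expressed as a positive head in metres;
    alpha in 1/m; returns volumetric water content (m^3/m^3)."""
    m = 1.0 - 1.0 / n
    return theta_r + (theta_s - theta_r) / (1.0 + (alpha * np.abs(psi_m)) ** n) ** m

# Example: water contents at 0.1 m, 3.3 m and 150 m matric potential,
# with illustrative parameter values.
theta = van_genuchten(np.array([0.1, 3.3, 150.0]), alpha=2.0, n=1.4,
                      theta_r=0.05, theta_s=0.45)
```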
Show Figures

Figure 1. Collection of SWCC samples used to produce global maps of van Genuchten parameters. The spatial distribution of SWCC data is shown with different categories; the colors are assigned to different quality criteria as explained in Figure A1.
Figure 2. Importance of the covariates for modeling vG parameters with a random forest model. The x-axis displays the average increase in node purity (the larger the value, the more important the covariate). The 7 most important covariates are shown for both models (CoGTF-1 with predicted soil covariates and CoGTF-2 with measured soil covariates). Plots (a,e) show the importance for the common logarithm log10 α of the inverse air entry pressure parameter (unit of α: m⁻¹), (b,f) for the shape parameter n, (c,g) for the residual water content θr, and (d,h) for the saturated water content θs. Sand content, bulk density (BD), soil depth (SD), and clay content belong to the soil covariates. Elevation is one of the terrain covariates. Temperature seasonality (TS), minimum temperature of warmest month (MTCM), annual average land surface temperature (LST), minimum temperature of coldest month (MTWM), precipitation of driest month (PDM), mean annual temperature (AMT), diffuse irradiation (DI), and mean annual precipitation (AMP) belong to the climate category.
Figure 3. Correlation between observations and SCV predictions of van Genuchten parameters for the CoGTF-1 model using predicted soil covariates (left) and for the CoGTF-2 model using measured soil properties (right). Plots (a,e) show the inverse air entry pressure parameter log10 α (unit of α: m⁻¹), (b,f) the shape parameter n, (c,g) the residual water content θr, and (d,h) the saturated water content θs. The term ‘measured’ on the axes relates to parameter estimates obtained by fitting the vG model to measured SWCCs. The color code represents the number of observations in each hexagonal bin. The solid black line is the 1:1 line, and the blue dashed line is the LOWESS (locally weighted scatter plot smoothing) curve.
Figure 4. Global maps of van Genuchten parameters at the soil surface (marked as ‘0 cm’) calculated with the CoGTF-1 model based on predicted soil covariates. The maps show (a) the inverse air entry pressure parameter log10 α (unit of α: m⁻¹), (b) the shape parameter n, (c) the residual water content θr, and (d) the saturated water content θs. The parameter n is high for sandy soils. Predictably, θs is affected by the soil bulk density map (high θs values in high latitudes dominated by organic soils with low bulk density). In contrast, the parameter α shows low values in these northern latitudes (large pores and low capillarity).
Figure 5. Visual comparison of global maps of the shape parameter n between (a) CoGTF-1 (based on predicted soil covariates), (b) HiHydroSoil v2.0, and (c) Rosetta 3. Large values are shown for regions with high sand content for CoGTF-1 and Rosetta 3. In contrast to Rosetta 3, large n values were also predicted by CoGTF-1 for cold regions with high bulk densities. The values from the HiHydroSoil v2.0 map were consistently small in all regions.
Figure 6. Probability density functions (PDF) of vG parameters for the CoGTF-1 map (black), Rosetta 3 map (red), HiHydroSoil v2.0 map (orange), and of the ‘measured’ values of 11,705 SWCCs from the GSHP dataset. The plots show (a) the inverse air entry pressure parameter log10 α (unit of α: m⁻¹), (b) the shape parameter n, (c) the residual water content θr, and (d) the saturated water content θs. Note that the HiHydroSoil v2.0 map reports only 2 distinct values for θr: 0.04 m³/m³ if the sand content is larger than 2% and 0.17 m³/m³ if the sand content is below 2% (only very few locations).
Figure A1. Conceptual workflow used to generate the different CoGTF models using the SWCCs shown in Figure 1. The CoGTF-A model was generated using all 15,259 SWCCs, whereas the CoGTF-1 model used only the 11,705 SWCCs with predicted soil properties. The CoGTF-2 model was produced using the 9958 SWCCs that had measured soil properties.
Figure A2. Cumulative distribution function (CDF) for global maps of vG parameters at different depths (0, 30, 60, and 100 cm) predicted by CoGTF-1: (a) inverse air entry pressure parameter log10 α (unit of α: m⁻¹), (b) shape parameter n, (c) residual water content θr, and (d) saturated water content θs.
Figure A3. Visual comparison between (a) CoGTF-1 (model based on predicted soil covariates), (b) HiHydroSoil v2.0, and (c) Rosetta 3 maps of the parameter log10 α (unit of α: m⁻¹).
Figure A4. Visual comparison between (a) CoGTF-1 (model based on predicted soil covariates), (b) HiHydroSoil v2.0, and (c) Rosetta 3 maps of the residual water content θr.
Figure A5. Visual comparison between (a) CoGTF-1 (model based on predicted soil covariates), (b) HiHydroSoil v2.0, and (c) Rosetta 3 maps of the saturated water content θs.
Figure A6. Results of SCV of SWCC parameter predictions showing the effect of mixing soil information for model calibration (using measured soil properties) and computing predictions (using predicted soil properties): (a) inverse air entry pressure parameter log10 α (unit of α: m⁻¹), (b) shape parameter n, (c) residual water content θr, and (d) saturated water content θs. The term ‘measured’ on the axes relates to parameter estimates obtained by fitting the vG model to measured SWCCs.
17 pages, 11313 KiB  
Article
Large-Scale Detection of the Tableland Areas and Erosion-Vulnerable Hotspots on the Chinese Loess Plateau
by Kai Liu, Jiaming Na, Chenyu Fan, Ying Huang, Hu Ding, Zhe Wang, Guoan Tang and Chunqiao Song
Remote Sens. 2022, 14(8), 1946; https://doi.org/10.3390/rs14081946 - 18 Apr 2022
Cited by 9 | Viewed by 2771
Abstract
Tableland areas, characterized by flat and broad landforms, provide precious land resources for agricultural production and human settlements over the Chinese Loess Plateau (CLP). However, severe gully erosion triggered by extreme rainfall and intense human activities makes tableland areas shrink continuously. Preventing the loss of tableland areas is therefore urgent, and generating an accurate map of their distribution is the critical prerequisite. However, a plateau-scale inventory of tableland areas is still lacking across the Loess Plateau. This study proposed a large-scale approach for tableland area mapping. Sentinel-2 imagery was used for the initial delineation based on object-based image analysis and a random forest model. Subsequently, drainage networks extracted from the AW3D30 DEM were applied to correct commission and omission errors, based on the rule that rivers and streams rarely appear on tableland areas. The automatic mapping approach performs well, with overall accuracies over 90% in all four investigated subregions. After strict quality control by manual inspection, a high-quality inventory of tableland areas at 10 m resolution was generated, showing that tableland areas occupy 9507.31 km² across the CLP. Cultivated land is the dominant land-use type on the tableland areas, yet multi-temporal observations indicated that it decreased by approximately 500 km² between 2000 and 2020. In contrast, forest and artificial surfaces increased by 57.53% and 73.10%, respectively. Additionally, we detected 455 vulnerable hotspots of the tableland with a width of less than 300 m. Particular attention should be paid to these areas to prevent the potential splitting of a large tableland, accompanied by damage to roads and buildings. This plateau-scale tableland inventory and the erosion-vulnerable hotspots are expected to support environmental protection policymaking for sustainable development in the CLP region, which is severely threatened by soil erosion and land degradation. Full article
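A minimal sketch of the per-object classification step implied by the abstract (an interpretation, not the authors' pipeline): image objects from segmentation, described by spectral and terrain features, are labelled tableland versus non-tableland with a random forest. The feature table and labels are placeholders.

```python
# Minimal sketch: random forest classification of segmented image objects.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
object_features = rng.normal(size=(5000, 12))   # e.g., band means, NDVI, slope stats
labels = rng.integers(0, 2, size=5000)          # 1 = tableland, 0 = other (placeholder)

X_tr, X_te, y_tr, y_te = train_test_split(object_features, labels, test_size=0.3,
                                          random_state=0)
rf = RandomForestClassifier(n_estimators=300, random_state=0)
rf.fit(X_tr, y_tr)
print("overall accuracy:", rf.score(X_te, y_te))
```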
Show Figures

Figure 1. Photos of loess tableland taken by UAV-mounted cameras. (a) Tableland areas are characterized by flat terrain surrounded by deeply cut gullies; (b) tableland areas provide a precious land resource for agricultural production and living settlements; (c) gully development may cause the shrinkage of tableland areas and even cut large tableland areas into several disconnected parts.
Figure 2. The distribution of the study area and four subregions: eastern Gansu (EGS), northern Wei River (NWR), northern Shaanxi (NSX), and western Shanxi (WSX).
Figure 3. Overall workflow of the proposed method for loess tableland area mapping.
Figure 4. The distribution of training and test samples over the study area. (a) The selected training and test areas; (b–e) four enlarged areas in each subregion.
Figure 5. Comparison between the initial drainage networks (left) and the revised drainage networks (right), which are regarded as the terrain skeleton for modifying the mapping results.
Figure 6. Manual editing for generating the inventory of loess tableland areas. (a,b) Commission and omission errors are removed by changing the type of segments. (c,d) Manual delineation is sometimes necessary if the object is over-segmented.
Figure 7. Steps for identifying the areas vulnerable to gully development. (a) Potential gully heads were selected within a buffer zone. (b) Gully head pairs located in different catchments were detected. (c) The erosion-vulnerable area and its width were determined with manual interpretation.
Figure 8. Distribution pattern of the extracted tableland areas. (a) Spatial distribution of tableland areas over the Loess Plateau. (b) Spatial variation of the tableland ratio. Zoom-ins of five subregions are shown in (c) Qingyang, (d) Changwu, (e) Chunhua, (f) Luochuan, and (g) Daning.
Figure 9. The frequency distribution of slope (a) and elevation (b) on the tableland areas.
Figure 10. Land use on the tableland areas in (a) 2000, (b) 2010, and (c) 2020. The two enlarged areas are Qingyang (left) and Luochuan (right), respectively.
Figure 11. Distribution of the extracted risk points over the study area.
Figure 12. A typical region suffering from tableland shrinkage. (a) Imagery captured on 19 October 2012; (b) imagery captured on 26 October 2017.
15 pages, 7300 KiB  
Technical Note
Characteristics of Precipitation and Floods during Typhoons in Guangdong Province
by Yan Yan, Guihua Wang, Huan Wu, Guojun Gu and Nergui Nanding
Remote Sens. 2022, 14(8), 1945; https://doi.org/10.3390/rs14081945 - 18 Apr 2022
Cited by 4 | Viewed by 2480
Abstract
The spatial and temporal characteristics of precipitation and floods during typhoons in Guangdong province were examined by using TRMM TMPA 3B42 precipitation data and the Dominant River Routing Integrated with VIC Environment (DRIVE) model outputs for the period 1998–2019. The evaluations based on [...] Read more.
The spatial and temporal characteristics of precipitation and floods during typhoons in Guangdong province were examined by using TRMM TMPA 3B42 precipitation data and the Dominant River Routing Integrated with VIC Environment (DRIVE) model outputs for the period 1998–2019. Evaluations based on gauge-measured and model-simulated streamflow show the reliability of the DRIVE model. The tracks of typhoons that made landfall on or influenced Guangdong province are divided into five categories. Generally, the spatial distributions of precipitation and floods differ among typhoon tracks. Precipitation has a spatial distribution similar to that of flood duration (FD) but substantially different from that of flood intensity (FI). The average precipitation over Guangdong province usually reaches its peak at the landfall time of typhoons, while the average FD and FI reach their peaks several hours later than the precipitation peak. The lagged correlations between precipitation and FD/FI are hence always higher than their simultaneous correlations. Full article
(This article belongs to the Topic Advanced Research in Precipitation Measurements)
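To illustrate the kind of simultaneous and lagged correlation analysis summarized above (with synthetic series, not the study's data), a minimal sketch:

```python
# Minimal sketch: simultaneous vs. lagged correlation between an area-averaged
# precipitation series and a flood-duration series, both sampled every 3 h as in
# TMPA 3B42. The series below are synthetic placeholders.
import numpy as np

rng = np.random.default_rng(1)
precip = rng.gamma(2.0, 1.0, size=33)                    # 24 h before .. 72 h after landfall
flood = np.roll(precip, 2) + 0.3 * rng.normal(size=33)   # flood responds ~6 h after rain

def lagged_corr(x, y, lag):
    """Correlation of x(t) with y(t + lag); positive lag means y responds later."""
    if lag == 0:
        return np.corrcoef(x, y)[0, 1]
    return np.corrcoef(x[:-lag], y[lag:])[0, 1]

for lag in range(0, 5):
    print(f"lag {3*lag:2d} h  r = {lagged_corr(precip, flood, lag):.2f}")
```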
Figure 1. Topography and locations of stations in Guangdong; blue lines are rivers.
Figure 2. DRIVE-simulated streamflow against observed data for eight stations in Guangdong province, and the metrics for model performance in the streamflow simulation.
Figure 3. Typhoon tracks of the five classifications from 1998–2019.
Figure 4. Averaged spatial distribution of cumulative precipitation from 24 h before through 24 h after typhoons made landfall (for non-landing typhoons, data were accumulated from 24 h before to 24 h after the time at which the typhoon most closely approached the coast of Guangdong).
Figure 5. The same as Figure 4, but for the cumulative flood duration.
Figure 6. The same as Figure 4, but for the flood intensity.
Figure 7. Time series of precipitation, flood duration, and flood intensity from 24 h before through 72 h after typhoons made landfall (24 h represents the landing or nearest time, and all data were standardized).
Figure 8. Simultaneous and lagged correlations between precipitation and flood duration/flood intensity in Guangdong province during typhoon periods.
Figure 9. Scatterplots of precipitation and flood duration for the five classes of typhoon tracks. “*” indicates that the correlation coefficient is above the 90% confidence level.
Figure 10. Scatterplots of precipitation and flood intensity for the five classes of typhoon tracks.
18 pages, 9268 KiB  
Article
Feedback Refined Local-Global Network for Super-Resolution of Hyperspectral Imagery
by Zhenjie Tang, Qing Xu, Pengfei Wu, Zhenwei Shi and Bin Pan
Remote Sens. 2022, 14(8), 1944; https://doi.org/10.3390/rs14081944 - 18 Apr 2022
Cited by 8 | Viewed by 2299
Abstract
Powered by advanced deep-learning technology, multi-spectral image super-resolution methods based on convolutional neural networks have recently achieved great progress. However, the single hyperspectral image super-resolution remains a challenging problem due to the high-dimensional and complex spectral characteristics of hyperspectral data, which make it [...] Read more.
Powered by advanced deep-learning technology, multi-spectral image super-resolution methods based on convolutional neural networks have recently achieved great progress. However, single hyperspectral image super-resolution remains a challenging problem due to the high-dimensional and complex spectral characteristics of hyperspectral data, which make it difficult for general 2D convolutional neural networks to simultaneously capture spatial and spectral prior information. To deal with this issue, we propose a novel Feedback Refined Local-Global Network (FRLGN) for the super-resolution of hyperspectral images. To be specific, we develop a new Feedback Structure and a Local-Global Spectral block to alleviate the difficulty of spatial and spectral feature extraction. The Feedback Structure transfers high-level information to guide the generation of low-level features, which is achieved by a recurrent structure with finite unfoldings. Furthermore, in order to effectively use the high-level information passed back, a Local-Global Spectral block is constructed to handle the feedback connections. The Local-Global Spectral block utilizes the feedback high-level information to correct the low-level features from local spectral bands and generates powerful high-level representations among global spectral bands. By incorporating the Feedback Structure and Local-Global Spectral block, the FRLGN can fully exploit spatial-spectral correlations among spectral bands and gradually reconstruct high-resolution hyperspectral images. Experimental results indicate that FRLGN presents advantages on three public hyperspectral datasets. Full article
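A minimal, hypothetical sketch of a feedback-style recurrent block of the sort described above (layer sizes and structure are illustrative, not FRLGN's actual architecture):

```python
# Minimal sketch: a recurrent block in PyTorch in which high-level features from the
# previous unfolding are fused with low-level features at the next step.
import torch
import torch.nn as nn

class FeedbackBlock(nn.Module):
    def __init__(self, ch=32):
        super().__init__()
        self.fuse = nn.Conv2d(2 * ch, ch, 3, padding=1)   # fuse feedback with low-level features
        self.refine = nn.Conv2d(ch, ch, 3, padding=1)

    def forward(self, low, feedback):
        x = torch.relu(self.fuse(torch.cat([low, feedback], dim=1)))
        return torch.relu(self.refine(x))

block = FeedbackBlock()
low = torch.randn(1, 32, 16, 16)          # low-level features of one band group (toy size)
state = torch.zeros_like(low)             # feedback state, zero at the first unfolding
for _ in range(3):                        # finite number of unfoldings
    state = block(low, state)
print(state.shape)
```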
Figure 1. The overview of our proposed Feedback Refined Local-Global Network (FRLGN). The blue arrows are the feedback connections, and the Local-Global Spectral Block (LGSB), represented by the trapezoidal block, is specifically designed for the task of super-resolution of hyperspectral images.
Figure 2. (a) The illustration of the Feedback Structure (FS) in the proposed FRLGN network; the blue arrow is the feedback connection. (b) The architecture of the traditional recurrent structure.
Figure 3. The network architecture of the Local-Global Spectral Block (LGSB): (a) the Local-Global Spectral Block, (b) the Residual Block.
Figure 4. Mean error maps of the superballs and paints hyperspectral images from the CAVE testing dataset with a scale factor of 4.
Figure 5. Mean error maps of two hyperspectral images from the Harvard testing dataset with the scale factor 4.
Figure 6. The third reconstructed hyperspectral image from the Chikusei testing dataset with the scale factor 4, in which the bands 70-100-36 are treated as R-G-B.
Figure 7. The fourth reconstructed hyperspectral image from the Chikusei testing dataset with the scale factor 4, in which the bands 70-100-36 are treated as R-G-B.
Figure 8. Mean spectral difference curve of two hyperspectral images from the CAVE testing dataset with the scale factor 4: (a) sushi, (b) chart_and_stuffed.
Figure 9. Mean spectral difference curve of two hyperspectral images (a,b) from the Harvard testing dataset with the scale factor 4.
Figure 10. Mean spectral difference curve of two hyperspectral images (a,b) from the Chikusei testing dataset with the scale factor 4.
2 pages, 167 KiB  
Editorial
Editorial for the Special Issue: “3D Virtual Reconstruction for Cultural Heritage”
by Sara Gonizzi Barsanti
Remote Sens. 2022, 14(8), 1943; https://doi.org/10.3390/rs14081943 - 18 Apr 2022
Cited by 1 | Viewed by 1782
Abstract
The use of 3D modelling, computer-aided design (CAD), augmented reality (AR) and virtual reality (VR) for the acquisition and virtual reconstruction of Cultural Heritage is of great importance in the analysis, study, documentation and dissemination of the past [...] Full article
(This article belongs to the Special Issue 3D Virtual Reconstruction for Cultural Heritage)
24 pages, 4907 KiB  
Article
GA-Net-Pyramid: An Efficient End-to-End Network for Dense Matching
by Yuanxin Xia, Pablo d’Angelo, Friedrich Fraundorfer, Jiaojiao Tian, Mario Fuentes Reyes and Peter Reinartz
Remote Sens. 2022, 14(8), 1942; https://doi.org/10.3390/rs14081942 - 17 Apr 2022
Cited by 1 | Viewed by 2874
Abstract
Dense matching plays a crucial role in computer vision and remote sensing, to rapidly provide stereo products using inexpensive hardware. Along with the development of deep learning, the Guided Aggregation Network (GA-Net) achieves state-of-the-art performance via the proposed Semi-Global Guided Aggregation layers and [...] Read more.
Dense matching plays a crucial role in computer vision and remote sensing, rapidly providing stereo products using inexpensive hardware. Along with the development of deep learning, the Guided Aggregation Network (GA-Net) achieves state-of-the-art performance via the proposed Semi-Global Guided Aggregation layers and reduces the use of costly 3D convolutional layers. To address the large GPU memory consumption of GA-Net, we design a pyramid architecture to modify the model. Starting from a downsampled stereo input, the disparity is estimated and continuously refined through the pyramid levels. Thus, the full disparity search is applied only to a small-sized stereo pair and is then confined within a short residual range for minor corrections, leading to highly reduced memory usage and runtime. Tests on close-range, aerial, and satellite data demonstrate that the proposed algorithm achieves significantly higher efficiency (around eight times faster, consuming only 20–40% of the GPU memory) and comparable results with GA-Net on remote sensing data. Thanks to this coarse-to-fine estimation, we successfully process remote sensing datasets with very large disparity ranges, which could not be processed with GA-Net due to GPU memory limitations. Full article
(This article belongs to the Special Issue Remote Sensing Based Building Extraction II)
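The coarse-to-fine idea can be illustrated with a toy winner-takes-all matcher (not GA-Net-Pyramid itself); the residual window, image sizes, and true shift below are arbitrary:

```python
# Minimal sketch: estimate disparity on a downsampled pair over the full range, then
# refine at full resolution within a small residual window around the upsampled guess.
import numpy as np

rng = np.random.default_rng(0)
left = rng.random((4, 256))
right = np.roll(left, -12, axis=1)     # every left pixel x matches right pixel x - 12

def wta_disparity(L, R, candidates):
    """Winner-takes-all disparity from absolute-difference matching costs."""
    costs = np.stack([np.abs(L - np.roll(R, d, axis=1)) for d in candidates])
    return np.asarray(list(candidates))[np.argmin(costs, axis=0)]

# Coarse level: half resolution, full 0..31 search range
d_coarse = wta_disparity(left[:, ::2], right[:, ::2], range(32))

# Fine level: search only a small residual window around the upsampled coarse guess
d_guess = int(np.median(d_coarse)) * 2
d_fine = wta_disparity(left, right, range(d_guess - 2, d_guess + 3))
print("coarse median:", int(np.median(d_coarse)), "| refined median:", int(np.median(d_fine)))
```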
Figure 1. GA-Net-Pyramid with explicit downsampling. The input stereo pair is downsampled explicitly according to the resolution required by each pyramid level. At the pyramid top, the stereo correspondences are located within an absolute disparity range in low resolution. The following pyramid levels perform disparity refinement within a pre-defined residual disparity range until the original resolution is recovered at the pyramid bottom. SPN indicates the Spatial Propagation Network, an optional module for depth boundary enhancement, as described in Section 3.3.
Figure 2. GA-Net-Pyramid with implicit downsampling. The feature extractor is applied to the stereo pair in original resolution, with the intermediate feature maps from its decoder feeding each pyramid level according to the expected resolution. SPN indicates the Spatial Propagation Network, an optional module for depth boundary enhancement, as described in Section 3.3.
Figure 3. Visual comparison on Scene Flow data. Two test cases are displayed in subfigures (a,b). In each subfigure, the disparity maps from the ground truth, GA-Net-PyramidID+SPN and GA-Net are displayed from left to right in the first row. The second row provides the master epipolar image and the corresponding error map of each model. Regions where the proposed algorithm outperforms GA-Net are marked with red arrows.
Figure 4. Visual comparison on KITTI-2012 data. Two test cases are displayed in subfigures (a,b). In each subfigure, the disparity maps from the ground truth, GA-Net-PyramidID+SPN and GA-Net are displayed from left to right in the first row. The second row provides the master epipolar image and the corresponding error map of each model. Regions where the proposed algorithm outperforms GA-Net are marked with red arrows.
Figure 5. Visual comparison on aerial data. Two test cases regarding vegetation and building areas are displayed in subfigures (a,b), respectively. In each subfigure, the reference disparity map and the stereo results from GA-Net-PyramidED, GA-Net and SGM are displayed from left to right in the first row. The second row provides the master epipolar image and the corresponding error map of each model.
Figure 6. Visual comparison on satellite data. Two test cases regarding vegetation and building areas are displayed in subfigures (a,b), respectively. In each subfigure, the reference disparity map and the stereo results from GA-Net-PyramidED, GA-Net and SGM are displayed from left to right in the first row. The second row provides the master epipolar image and the corresponding error map of each model.
Figure 7. A showcase of the ability of our pyramid network to process a remote sensing stereo pair with a large baseline. The test image and the corresponding stereo reconstructions from the reference disparity map (lower left) and our pyramid model (lower right) are shown. The reconstructed region is highlighted by the green rectangle, with a size of 19,791 × 15,639 pixels. Test region: Matterhorn mountain, Switzerland. Test model: GA-Net-PyramidED.
21 pages, 2191 KiB  
Article
Global Identification of Unelectrified Built-Up Areas by Remote Sensing
by Xumiao Gao, Mingquan Wu, Zheng Niu and Fang Chen
Remote Sens. 2022, 14(8), 1941; https://doi.org/10.3390/rs14081941 - 17 Apr 2022
Cited by 6 | Viewed by 2978
Abstract
Access to electricity (the proportion of the population with access to electricity) is a key indicator of the United Nations’ Sustainable Development Goal 7 (SDG7), which aims to provide affordable, reliable, sustainable, and modern energy services for all. Accurate and timely global [...] Read more.
Access to electricity (the proportion of the population with access to electricity) is a key indicator of the United Nations’ Sustainable Development Goal 7 (SDG7), which aims to provide affordable, reliable, sustainable, and modern energy services for all. Accurate and timely global data on access to electricity in all countries are important for the achievement of SDG7. Current survey-based access-to-electricity datasets suffer from short time spans, slow updates, high acquisition costs, and a lack of location data. Accordingly, a new method for identifying the electrification status of built-up areas based on the remote sensing of night-time light is proposed in this study. More specifically, the method overlays global built-up area data with night-time light remote sensing data to determine whether built-up areas are electrified based on a threshold night-time light value. By using our approach, electrified and unelectrified built-up areas were extracted at 500 m resolution on a global scale for the years 2014 and 2020. The results show a significant reduction in unelectrified built-up area between 2014 and 2020, from 51,301.14 km2 to 22,192.52 km2, or from 3.05% to 1.32% of the total built-up area. Compared to 2014, 117 countries or territories had improved access to electricity, and 18 increased their proportion of unelectrified built-up area by >0.1%. The identification accuracy was evaluated by using a random sample of 10,106 points. The accuracies in 2014 and 2020 were 97.29% and 98.9%, respectively, with an average of 98.1%. The outcomes of this method are in high agreement with the spatial distribution of the access-to-electricity data reported by the World Bank. This study is the first to investigate the global electrification of built-up areas by using remote sensing. It provides an important supplement to global data on access to electricity, which can aid in the achievement of SDG7. Full article
(This article belongs to the Special Issue Remote Sensing for Engineering and Sustainable Development Goals)
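A minimal sketch of the threshold-based overlay described above, using synthetic rasters and a hypothetical radiance threshold:

```python
# Minimal sketch: overlay a built-up mask with a night-time light raster and label
# built-up pixels below a radiance threshold as unelectrified. All values are placeholders.
import numpy as np

rng = np.random.default_rng(0)
built_up = rng.random((100, 100)) > 0.7              # True = built-up pixel
ntl = rng.exponential(scale=2.0, size=(100, 100))    # night-time light radiance

THRESHOLD = 0.5                                      # hypothetical radiance cut-off
unelectrified = built_up & (ntl < THRESHOLD)

pixel_area_km2 = 0.25                                # ~500 m pixels
print("unelectrified built-up area: %.1f km2" % (unelectrified.sum() * pixel_area_km2))
print("share of built-up area: %.2f%%" % (100 * unelectrified.sum() / built_up.sum()))
```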
Graphical abstract.
Figure 1. Technical flowchart of the method of non-electrified area identification.
Figure 2. Histogram of night-light values in non-electrified areas (a) and electrified built-up areas (b) in 2014.
Figure 3. Electrified and non-electrified built-up areas in Wau, South Sudan, in 2014 (a) and 2020 (b).
Figure 4. Distribution of the percentage of unelectrified built-up areas by country, 2014.
Figure 5. Distribution of validation sample points.
Figure 6. Spatial distribution of non-access to electricity data by country, 2014.
Figure 7. Changes in the percentage of the unelectrified built-up area between 2014 and 2020.
17 pages, 2870 KiB  
Article
BDS/GPS/UWB Adaptively Robust EKF Tightly Coupled Navigation Model Considering Pedestrian Motion Characteristics
by Jian Zhang, Jian Wang, Ximin Cui and Debao Yuan
Remote Sens. 2022, 14(8), 1940; https://doi.org/10.3390/rs14081940 - 17 Apr 2022
Cited by 4 | Viewed by 2124
Abstract
In the indoor and outdoor transition area, the BDS/GPS SPP (single-point positioning combining the BeiDou Navigation Satellite System (BDS) and the Global Positioning System (GPS)) is unable to provide an effective positioning service because of its poor availability in a complex positioning environment. In [...] Read more.
In the indoor and outdoor transition area, the BDS/GPS SPP (single-point positioning combining the BeiDou Navigation Satellite System (BDS) and the Global Positioning System (GPS)) is unable to provide an effective positioning service because of its poor availability in a complex positioning environment. In view of the poor positioning accuracy and low sampling rate of BDS/GPS SPP, and of gross errors such as the non-line-of-sight error of UWB (Ultra-Wide-Band), which degrade the positioning results, a BDS/GPS/UWB tightly coupled navigation model considering pedestrian motion characteristics is proposed to make positioning results in the transition area more reliable and accurate. The core content of this paper is divided into the following three parts: (1) Firstly, the dynamic model and positioning theories of BDS/GPS SPP and UWB are introduced, respectively. (2) Secondly, the BDS/GPS/UWB tightly coupled navigation model is proposed. An environment discrimination factor is introduced to adaptively adjust the variance factor of the system state. At the same time, a gross error detection factor is constructed from the a posteriori residuals so that the variance factor of the measurement information of the combined positioning system can be adjusted intelligently, eliminating the interference of gross error observations on the positioning results. On the other hand, pedestrian motion characteristics are introduced to establish a constraint equation to improve the consistency of the positioning accuracy. (3) Thirdly, actual measured data are used to demonstrate and analyze the reliability of the positioning model proposed in this paper. The experimental results show that the BDS/GPS/UWB tightly coupled navigation model can effectively improve the accuracy and availability of positioning. Compared with BDS/GPS SPP, the accuracy of this model is improved by 57.8%, 76.0% and 56.5% in the E, N and U directions, respectively. Full article
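As an illustration of residual-based down-weighting of suspect measurements (an IGG-style three-segment scheme, not necessarily the exact factors used in the paper), consider:

```python
# Minimal sketch: inflate the measurement variance of a single observation when its
# standardized a posteriori residual exceeds a gate. Thresholds and data are illustrative.
import numpy as np

def robust_variance(residual, sigma, k0=1.5, k1=3.0):
    """Return an inflated variance based on the standardized residual."""
    v = abs(residual) / sigma
    if v <= k0:
        return sigma**2                  # keep the nominal variance
    if v <= k1:
        return sigma**2 * v / k0         # down-weight a suspicious range
    return np.inf                        # reject an obvious gross error (e.g. NLOS)

ranges = np.array([10.02, 9.97, 10.05, 13.8])   # UWB ranges; the last one is NLOS-biased
predicted = 10.0
sigma_uwb = 0.1
for r in ranges:
    print(r, "->", robust_variance(r - predicted, sigma_uwb))
```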
Figure 1. CUT0 one-day BDS/GPS SPP positioning results, where (a) represents the BDS/GPS SPP positioning error when the observation values are not corrected by TGD, and (b) represents the positioning error after the observation values are corrected by TGD.
Figure 2. UWB simulation experiment results.
Figure 3. BDS/GPS/UWB tightly coupled positioning model.
Figure 4. Experimental scene of the BDS/GPS/UWB tight combination positioning. (a) Overall overview of the experimental scene. (b) Walking path.
Figure 5. BDS/GPS SPP positioning results (the positioning error is the difference between BDS/GPS SPP and the reference value, obtained epoch by epoch). (a) Positioning trajectory. (b) Positioning error.
Figure 6. Positioning results of pure UWB. (a) Positioning trajectory. (b) Positioning error.
Figure 7. Pedestrian positioning trajectory in the transition area.
Figure 8. Influence of the pedestrian motion characteristics constraint information on positioning results.
Figure 9. Positioning residuals of each scheme.
Figure 10. Correlation analysis between positioning results and reference values of each positioning scheme in the E-direction.
Figure 11. Correlation analysis between positioning results and reference values of each positioning scheme in the N-direction.
Figure 12. Correlation analysis between positioning results and reference values of each positioning scheme in the U-direction.
18 pages, 4263 KiB  
Article
Simulative Evaluation of the Underwater Geodetic Network Configuration on Kinematic Positioning Performance
by Menghao Li, Yang Liu, Yanxiong Liu, Guanxu Chen, Qiuhua Tang, Yunfeng Han and Yuanlan Wen
Remote Sens. 2022, 14(8), 1939; https://doi.org/10.3390/rs14081939 - 17 Apr 2022
Cited by 9 | Viewed by 4494
Abstract
The construction of underwater geodetic networks (UGN) is crucial in marine geodesy. To provide high-precision kinematic positioning for underwater submersibles, an underwater acoustic geodetic network configuration of three seafloor base stations, one subsurface buoy, and one sea surface buoy is proposed. The simulation [...] Read more.
The construction of underwater geodetic networks (UGN) is crucial in marine geodesy. To provide high-precision kinematic positioning for underwater submersibles, an underwater acoustic geodetic network configuration of three seafloor base stations, one subsurface buoy, and one sea surface buoy is proposed. The simulation results show that, for a 3 km-deep sea, based on the proposed UGN, the submersible positioning range and positioning accuracy are primarily affected by the size of the seafloor base station array, while the height of the subsurface buoy has a greater impact on the submersible positioning accuracy than on the positioning range. Considering current acoustic ranging technology, the kinematic positioning performance of the UGN is optimal when the seafloor base stations are 9~13 km apart and the subsurface buoy is less than 2.5 km above the seafloor, which can achieve a submersible positioning accuracy of better than 30 m within an underwater space of 25 km × 25 km × 3 km. The proposed cost-effective UGN configuration can provide high-precision submersible kinematic positioning for seafloor surveying and ocean precision engineering. The impact of the underwater environment on the acoustic transmission characteristics should be further investigated. Full article
(This article belongs to the Special Issue Remote Sensing Technology for New Ocean and Seafloor Monitoring)
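A minimal sketch of how PDOP can be computed for an assumed "3+1+1" geometry (station coordinates and the user position below are hypothetical, and the DOP formulation mirrors the familiar GNSS one):

```python
# Minimal sketch: PDOP of an underwater user from the geometry of acoustic stations,
# using unit line-of-sight vectors plus a range-bias column.
import numpy as np

stations = np.array([          # x, y, z (m): 3 seafloor stations, 1 subsurface buoy, 1 surface buoy
    [ 5000,     0, -3000],
    [-2500,  4330, -3000],
    [-2500, -4330, -3000],
    [    0,     0,  -500],
    [    0,     0,     0],
], dtype=float)
user = np.array([2000.0, 1000.0, -1500.0])

rows = []
for s in stations:
    los = (s - user) / np.linalg.norm(s - user)   # unit line-of-sight vector
    rows.append(np.append(los, 1.0))              # last column for the common range bias
G = np.array(rows)
Q = np.linalg.inv(G.T @ G)
pdop = np.sqrt(np.trace(Q[:3, :3]))
print("PDOP = %.2f" % pdop)
```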
Graphical abstract.
Figure 1. Schematic diagram of the underwater geodetic network configuration and user navigation application. The network consists of a seafloor component and a mooring component.
Figure 2. User positioning indicators of the “3+1+1” network configuration. (a) Heat map of the percentage of EPGs with the number of available stations ≥3. (b) Box chart of the PDOP variation with the seafloor base station distance and the subsurface buoy height. The red line denotes the median, the blue box denotes the IQR (interquartile range), the solid black lines denote the maximum and minimum values, respectively, and the ±1.5 × IQR criterion is used for outlier detection and removal. The shaded area denotes the steep variation of the PDOP.
Figure 3. User positioning indicators under various distance and height conditions. (a) EPGs with various available stations: blue, orange, yellow, purple, green, and red indicate available station numbers of 0~5, respectively. (b) Changes in the average, standard deviation, and maximum of PDOP: the blue box denotes the average, the black vertical line the standard deviation, and the orange asterisk line the maximum value. (c,d) Spatial distribution of the ASN when the subsurface buoy height is 2.5 km and the seafloor base station distances are 7 km and 9 km, respectively.
Figure 4. User positioning indicators of the “3+0+0”, “3+0+1”, and “3+1+0” network configurations. (a,b) Heat maps of the percentage of EPGs with the number of available base stations ≥3. (c,d) Average, standard deviation, and maximum PDOP: the box denotes the average, the black vertical line the standard deviation, and the asterisk line the maximum value. The results for the “3+0+0” and “3+0+1” configurations are shown in subfigure (c), whereas the “3+1+0” configuration with a subsurface buoy height of 0~2.9 km is shown in subfigure (d).
Figure 5. User positioning indicators of the “4+1+1” network configuration. The geometries of the seafloor base station arrays are a triangular-center-form array and a square-form array. (a,b) Heat maps of the percentage of EPGs with the number of available base stations ≥3. (c) Average, standard deviation, and maximum of PDOP: the box denotes the average, the black vertical line the standard deviation, and the asterisk line the maximum value.
Figure 6. Simulated ocean environment with a calm sea surface, flat seafloor, and Munk sound speed profile.
Figure 7. Simulation of acoustic ray bending. (a) Acoustic propagation paths of the base station at different depths, with rays grazing the sea surface or the seafloor; blue and orange denote rays grazing the sea surface and the seafloor, respectively. (b) Base station maximum service range on the sea surface and the seafloor; blue and orange denote rays grazing the sea surface and the seafloor, respectively.
Figure 8. Propagation time deviation between the reflected signal and the direct signal. (a,b) Propagation time deviation of the base station at depths of 500 m and 2998 m, respectively. Blue, orange, yellow, purple, and green denote user depths of 100 m, 500 m, 1000 m, 2500 m, and 2900 m, respectively.
Figure 9. Signal transmission loss. (a,b) Signal transmission loss of the base station at 500 m depth and its variation with distance and depth. (c,d) Signal transmission loss of the base station at 2998 m depth and its variation with distance and depth. Blue, orange, yellow, purple, and green in subfigures (b,d) denote user depths of 100 m, 500 m, 1000 m, 2500 m, and 2900 m, respectively.
14 pages, 5654 KiB  
Article
Estimating Tree Defects with Point Clouds Developed from Active and Passive Sensors
by Carli J. Morgan, Matthew Powers and Bogdan M. Strimbu
Remote Sens. 2022, 14(8), 1938; https://doi.org/10.3390/rs14081938 - 17 Apr 2022
Cited by 6 | Viewed by 2697
Abstract
Traditional inventories require large investments of resources and a trained workforce to measure tree sizes and characteristics that affect wood quality and value, such as the presence of defects and damages. Handheld light detection and ranging (LiDAR) and photogrammetric point clouds developed using [...] Read more.
Traditional inventories require large investments of resources and a trained workforce to measure tree sizes and characteristics that affect wood quality and value, such as the presence of defects and damages. Handheld light detection and ranging (LiDAR) and photogrammetric point clouds developed using Structure from Motion (SfM) algorithms have achieved promising results in tree detection and dimensional measurements. However, few studies have utilized handheld LiDAR or SfM to assess tree defects or damages. We used a Samsung Galaxy S7 smartphone camera to photograph trees and create digital models using SfM, and a handheld GeoSLAM Zeb Horizon to create LiDAR point cloud models of some of the main tree species of the Pacific Northwest. We compared measurements of damage count and damage length obtained from handheld LiDAR, SfM photogrammetry, and traditional field methods using linear mixed-effects models. The field method recorded nearly twice as many damages per tree as the handheld LiDAR and SfM methods, but there was no evidence that damage length measurements varied between the three survey methods. The lower damage counts derived from LiDAR and SfM were likely driven by the limited point cloud reconstructions of the upper stems: usable tree heights were achieved, on average, at 13.6 m for LiDAR and 9.3 m for SfM, even though the mean field-measured tree height was 31.2 m. Our results suggest that handheld LiDAR and SfM approaches show potential for the detection and measurement of tree damages, at least on the lower stem. Full article
(This article belongs to the Special Issue The Future of Remote Sensing: Harnessing the Data Revolution)
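A minimal sketch of a linear mixed-effects comparison of damage counts across survey methods, with a random effect per tree; the data are synthetic, not the study's measurements:

```python
# Minimal sketch: fit damage_count ~ method with a random intercept for each tree.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
trees = np.repeat(np.arange(30), 3)                      # 30 trees, 3 methods each
method = np.tile(["field", "lidar", "sfm"], 30)
tree_effect = rng.normal(0, 1, 30)[trees]                # tree-level random variation
counts = 4 + tree_effect + np.where(method == "field", 2.0, 0.0) + rng.normal(0, 0.5, 90)

df = pd.DataFrame({"tree": trees, "method": method, "damage_count": counts})
model = smf.mixedlm("damage_count ~ method", df, groups=df["tree"]).fit()
print(model.summary())
```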
Figure 1. Vicinity map of OSU McDonald forest and research site.
Figure 2. Scanning in McDonald research forest using the GeoSLAM Zeb Horizon.
Figure 3. Handheld LiDAR model for plot three: (a) extent and (b) detail.
Figure 4. Sample photogrammetric point cloud for a tree in plot three developed with Structure from Motion: (a) whole tree extent and (b) detail.
22 pages, 2695 KiB  
Article
Fourfold Bounce Scattering-Based Reconstruction of Building Backs Using Airborne Array TomoSAR Point Clouds
by Xiaowan Li, Fubo Zhang, Xingdong Liang, Yanlei Li, Qichang Guo, Yangliang Wan, Xiangxi Bu and Yunlong Liu
Remote Sens. 2022, 14(8), 1937; https://doi.org/10.3390/rs14081937 - 17 Apr 2022
Cited by 6 | Viewed by 2157
Abstract
Building reconstruction using high-resolution tomographic synthetic aperture radar (TomoSAR) point clouds has been very attractive in numerous applications, such as urban planning and dynamic city modeling. However, for side-looking TomoSAR, it is a challenge to reconstruct the obscured backs of buildings using traditional [...] Read more.
Building reconstruction using high-resolution tomographic synthetic aperture radar (TomoSAR) point clouds has been very attractive in numerous applications, such as urban planning and dynamic city modeling. However, for side-looking TomoSAR, it is a challenge to reconstruct the obscured backs of buildings using traditional single-bounce scattering-based methods. Notably, the higher-order scattering points in airborne array TomoSAR point clouds may provide rich information on the backs of buildings. In this paper, the fourfold bounce (FB) scattering model of combined buildings in airborne array TomoSAR is derived, which not only explains the cause of FB scattering but also gives the distribution pattern of FB scattering points. Furthermore, a novel FB scattering-based method for the reconstruction of building backs is proposed. First, a two-step geometric constraint is used to detect the candidate FB scattering points. Subsequently, the FB scattering points are further detected by seed point selection and density estimation in the radar coordinate system. Finally, the backs of buildings can be reconstructed using the footprint inverted from the FB scattering points and the height information of the illuminated facades. To verify the FB scattering model and the effectiveness of the proposed method, results from simulated point clouds and real airborne array TomoSAR point clouds are presented. Compared with traditional roof point-based methods, the outstanding advantage of the proposed method is that it allows for the high-precision reconstruction of building backs, even when the roof points are poor. Moreover, this paper may provide a novel perspective for the three-dimensional (3D) reconstruction of dense urban areas. Full article
(This article belongs to the Special Issue Recent Progress and Applications on Multi-Dimensional SAR)
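A toy sketch of the candidate detection and cylinder-based density estimation steps (thresholds and the synthetic point cloud are placeholders, not the paper's parameters):

```python
# Minimal sketch: flag candidate FB points by a near-zero-elevation constraint, then keep
# seeds with enough neighbours inside a cylinder in the (azimuth, range, elevation) system.
import numpy as np

rng = np.random.default_rng(0)
background = rng.normal(size=(1500, 3)) * [50.0, 50.0, 10.0]
fb_cluster = rng.normal(size=(100, 3)) * [2.0, 2.0, 0.3] + [20.0, -15.0, 0.0]
pts = np.vstack([background, fb_cluster])          # (a, r, s) coordinates

TE1, TE2, RAD = 1.0, 1.0, 5.0
candidates = pts[np.abs(pts[:, 2]) < TE1]          # simplified geometric constraint

def cylinder_count(seed, cloud, rad=RAD, half_height=TE2):
    d_ar = np.linalg.norm(cloud[:, :2] - seed[:2], axis=1)   # distance in azimuth-range
    d_s = np.abs(cloud[:, 2] - seed[2])                      # distance in elevation
    return int(np.sum((d_ar < rad) & (d_s < half_height)))

kept = [p for p in candidates if cylinder_count(p, pts) > 20]
print(f"{len(candidates)} candidates, {len(kept)} kept after density estimation")
```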
Graphical abstract.
Figure 1. Illustration of multi-view observation of combined buildings. The back of the lower building is always obscured.
Figure 2. The 3D observation geometry of airborne array TomoSAR.
Figure 3. The schematic of airborne array TomoSAR on the ground–height plane.
Figure 4. The schematic of FB scattering between combined buildings of airborne array TomoSAR.
Figure 5. Schematic of the distribution of FB scattering points on (a) the ground–height plane and (b) the range–elevation plane. For (b), the flat-earth phase has been compensated.
Figure 6. Schematic of the FB scattering distribution between combined buildings of airborne array TomoSAR. (a) Case 1. (b) Case 2.
Figure 7. Workflow of the proposed method. Shahzad, M., 2015 [14].
Figure 8. Illustration of FB scattering point detection in the radar coordinate system (a, r, s). The absolute elevation of the seed is less than the threshold Te1. A cylinder with radius rad and height 2Te2 is centered at the seed. Only the seed and the points within the cylinder are involved in density estimation.
Figure 9. Image of (a) the simulated scene 1, (b) the simulated scene 2, (c) the single-channel scattering points of the simulated scene 1, and (d) the single-channel scattering points of the simulated scene 2. The color maps in (c,d) indicate the normalized scattering intensity.
Figure 10. Illustration of the whole reconstruction process of the back of building 1 in the simulated scene 1. TomoSAR point cloud and candidate FB scattering point detection in (a) the ground–height plane and (b) the range–elevation plane. (c) Detection of FB scattering points in the range–elevation plane. (d) Reconstruction results of the back of building 1, where the line colored by height indicates the true value.
Figure 11. Results of the simulated scene 2. (a) TomoSAR point cloud; the colormap indicates the normalized scattering intensity. (b) Reconstruction results of the back of building 1, where the line colored by height indicates the true value.
Figure 12. Images of the studied scene in Experiment 1. (a) Optical image. (b) SAR image 1. (c) SAR image 2.
Figure 13. The original TomoSAR point cloud of the area marked by the yellow rectangular box in SAR image 2.
Figure 14. Results of Slice 1 in Experiment 1. (a) TomoSAR point cloud; the colormap indicates the normalized scattering intensity. (b) Reconstruction results of the back of building 1, where the line colored by height indicates the true value.
Figure 15. Optical images of the studied scene in Experiment 2. (a) Image 1. (b) Image 2.
Figure 16. SAR image of the studied scene in Experiment 2.
Figure 17. Results of Slice 1 in Experiment 2. (a) TomoSAR point cloud; the colormap indicates the normalized scattering intensity. (b) Reconstruction results of the back of building 1, where the line colored by height indicates the reference true value.
Figure 18. Illustration of the whole reconstruction process of the backs of the buildings of Slice 2 in Experiment 2. TomoSAR point cloud and candidate FB scattering point detection in (a) the ground–height plane and (b) the range–elevation plane. (c) Detection of FB scattering points in the range–elevation plane. (d) Reconstruction results of the backs of buildings 3 and 4, where the line colored by height indicates the reference true value.
20 pages, 1517 KiB  
Article
On Turbulent Features of E × B Plasma Motion in the Auroral Topside Ionosphere: Some Results from CSES-01 Satellite
by Giuseppe Consolini, Virgilio Quattrociocchi, Simone Benella, Paola De Michelis, Tommaso Alberti, Mirko Piersanti and Maria Federica Marcucci
Remote Sens. 2022, 14(8), 1936; https://doi.org/10.3390/rs14081936 - 17 Apr 2022
Cited by 4 | Viewed by 1944
Abstract
The recent Chinese Seismo-Electromagnetic Satellite (CSES-01) provides a good opportunity to investigate some features of plasma properties and its motion in the topside ionosphere. Using simultaneous measurements from the electric field detector and the magnetometers onboard CSES-01, we investigate some properties of the [...] Read more.
The recent Chinese Seismo-Electromagnetic Satellite (CSES-01) provides a good opportunity to investigate some features of plasma properties and its motion in the topside ionosphere. Using simultaneous measurements from the electric field detector and the magnetometers onboard CSES-01, we investigate some properties of the plasma E × B drift velocity for a case study during a crossing of the Southern auroral region in the topside ionosphere. In detail, we analyze the spectral and scaling features of the plasma drift velocity and provide evidence of the turbulent character of the E × B drift. Our results provide evidence of the occurrence of 2D E × B intermittent convective turbulence for the plasma motion in the topside ionospheric F2 auroral region at scales from tens of meters to tens of kilometers. The intermittent character of the observed turbulence suggests that the macro-scale intermittent structure is isomorphic to a quasi-1D fractal structure, as happens, for example, in the case of a filamentary or thin-tube-like structure. Furthermore, in the analyzed range of scales we found that both magnetohydrodynamic and kinetic processes may affect the plasma dynamics at spatial scales below 2 km. The results are discussed and compared with previous results reported in the literature. Full article
(This article belongs to the Special Issue Ionosphere Monitoring with Remote Sensing)
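A minimal sketch of the q-th order generalized structure functions analysed above, applied here to a synthetic signal rather than CSES-01 data:

```python
# Minimal sketch: S_q(tau) = <|v(t + tau) - v(t)|^q> and a rough estimate of the scaling
# exponent from the log-log slope. The signal is a Brownian-like stand-in for v_perp.
import numpy as np

rng = np.random.default_rng(0)
v = np.cumsum(rng.normal(size=20000))

def structure_function(signal, tau, q):
    incr = signal[tau:] - signal[:-tau]
    return np.mean(np.abs(incr) ** q)

taus = [1, 2, 4, 8, 16, 32, 64]
for q in (2, 3, 4):
    sq = [structure_function(v, t, q) for t in taus]
    zeta = np.polyfit(np.log(taus), np.log(sq), 1)[0]   # slope ~ scaling exponent zeta(q)
    print(f"q={q}: zeta(q) ~ {zeta:.2f}")
```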
Figure 1. Electric (solid lines, left vertical axes) and magnetic (dashed lines, right vertical axes) field measurements collected by the CSES-01 EFD and HPM instruments for the time interval under consideration. The three components are in the geographic coordinate system.
Figure 2. Drift velocity (v_D = (E × B)/B^2) in the geographic coordinate system.
Figure 3. The two components of the drift velocity (v_D = (E × B)/B^2) perpendicular to the local magnetic field direction.
Figure 4. Top: the instantaneous convection cells as reconstructed from SuperDARN observations in Antarctica; green arrows show the overall plasma convection. Bottom: a zoom of the region of the CSES-01 trajectory; colored lines refer to the velocity vector field in the XY-plane, with colors (from red to light violet) indicating time.
Figure 5. Motion of the velocity vector tip in the plane perpendicular to the magnetic field. The color is associated with the universal time UT (see the color bar).
Figure 6. The trace S(f) of the PSD of the perpendicular components of the E × B drift velocity. The two power laws are that expected for K41 theory (~f^(-5/3)) and that observed for 2D E × B convective turbulence (~f^(-2)) in a quasi-steady state [40].
Figure 7. The generalized structure functions S^(q)(τ) of the velocity field components perpendicular to the magnetic field direction. The black line refers to a linear scaling, i.e., S^(3)(τ) ≃ τ.
Figure 8. The compensated generalized structure functions τ^(-1) S^(q)(τ) of the velocity field components perpendicular to the magnetic field direction for q = 2, 3 and 4. The black lines refer to power-law fits.
Figure 9. The relative scaling of the qth-order structure functions S^(q)(τ) of the velocity field components perpendicular to the magnetic field direction versus the corresponding 3rd-order one. The black dashed lines are power-law fits.
Figure 10. The relative scaling exponents γ(q) as a function of the moment order q. The dashed line is the expected trend of γ(q) for the K41 theory of turbulence (γ(q) = q/3, since ζ(3) = 1). The blue solid line is a nonlinear best fit using the Meneveau and Sreenivasan P-model [44]. The solid green line is the γ(q) trend for the She-Leveque model of Equation (12) [45].
Figure 11. The generalized kurtosis Γ4(τ). The solid curve is a guide for the eye. The vertical dashed line indicates the expected timescale τ_η(O+) ≃ 0.3 s corresponding to the O+ inertial length η, assuming a density in the range [1, 2] × 10^5 cm^(-3).
Figure 12. The PDFs of the velocity increments in the timescale interval from 4 ms to ~0.5 s. Data are rescaled by the corresponding standard deviation.
Figure 13. The Kullback–Leibler (KL) distance between the PDFs. The vertical dashed line corresponds to the timescale τ_η(O+) ~ 0.35 s associated with the O+ inertial length η for a density of ~10^5 cm^(-3). The solid horizontal line indicates the 95% critical threshold, KL* = 1 × 10^(-3), below which the KL-distance between PDFs is not significant (grey region).
21 pages, 8374 KiB  
Article
Evaluation and Comparison of MODIS C6 and C6.1 Deep Blue Aerosol Products in Arid and Semi-Arid Areas of Northwestern China
by Leiku Yang, Xinyao Tian, Chao Liu, Weiqian Ji, Yu Zheng, Huan Liu, Xiaofeng Lu and Huizheng Che
Remote Sens. 2022, 14(8), 1935; https://doi.org/10.3390/rs14081935 - 17 Apr 2022
Cited by 9 | Viewed by 2433
Abstract
The Moderate Resolution Imaging Spectroradiometer (MODIS) Deep Blue (DB) algorithm was developed for aerosol retrieval on bright surfaces. Although the global validation accuracy of the DB product is satisfactory, there are still some regions found to have very low accuracy. To this end, [...] Read more.
The Moderate Resolution Imaging Spectroradiometer (MODIS) Deep Blue (DB) algorithm was developed for aerosol retrieval on bright surfaces. Although the global validation accuracy of the DB product is satisfactory, there are still some regions found to have very low accuracy. To this end, DB has updated the surface database in the latest version of the Collection 6.1 (C6.1) algorithm. Some studies have shown that the DB aerosol optical depth (AOD) of the old Collection 6 (C6) version is seriously underestimated in Northwestern China. However, the status of the new C6.1 product in this region is still unknown. This study aims to comprehensively evaluate the performance of the MODIS DB product in Northwestern China. The DB AOD with high quality (Quality Flag = 2 or 3) was selected for validation against 23 sites from the China Aerosol Remote Sensing Network (CARSNET) and the Aerosol Robotic Network (AERONET) during the period 2002–2014. The overall analysis indicates that both C6 and C6.1 show significant underestimation, with a large fraction of more than 54% of collocations falling below the Expected Error (EE = ±(0.05 + 20% AOD_ground)) envelope and a large negative Mean Bias (MB) of less than −0.14. Furthermore, the new C6.1 products failed to achieve reasonable improvements in Northwestern China. Besides, C6.1 has slightly fewer collocations than C6 because some pixels with systematic biases were removed from the new surface reflectance database. At the site scale, the scatter plot of C6.1 is similar to that of C6 at most sites, and a significant underestimation of DB AOD was observed at most sites, with the most severe underestimation at two sites located in the Taklimakan Desert region. Among the 23 sites in Northwestern China, there are only two sites where C6.1 has largely improved the underestimation of C6. It is also interesting to note that there are two sites where the accuracy of the new C6.1 has declined, and one site where a large overestimation observed in C6 was improved in C6.1. Additionally, we found a constant value of about 0.05 for both C6 and C6.1 at several sites with low aerosol loading, which is an obvious artifact. The significant improvements of C6.1 were observed in the Middle East and Central Asia but not at most sites in Northwestern China. The results of this study will be beneficial to further improvements in the MODIS DB algorithm. Full article
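The validation metrics quoted above (fraction of collocations within the EE envelope, MB, RMSE) can be computed as in the following sketch, shown here with synthetic AOD pairs rather than the study's collocations:

```python
# Minimal sketch: fraction within EE = +/-(0.05 + 20% AOD_ground), mean bias and RMSE.
import numpy as np

rng = np.random.default_rng(0)
aod_ground = rng.gamma(2.0, 0.15, size=300)
aod_modis = aod_ground * 0.8 - 0.02 + rng.normal(0, 0.05, 300)   # underestimating retrieval

ee = 0.05 + 0.20 * aod_ground
within_ee = np.abs(aod_modis - aod_ground) <= ee
mb = np.mean(aod_modis - aod_ground)
rmse = np.sqrt(np.mean((aod_modis - aod_ground) ** 2))

print(f"within EE: {100 * within_ee.mean():.1f}%  MB: {mb:+.3f}  RMSE: {rmse:.3f}")
```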
Graphical abstract.
Figure 1. True color image of Northwestern China and the geolocation of CARSNET and AERONET sites. The blue and red dots denote CARSNET and AERONET sites, respectively.
Figure 2. Validation of Aqua MODIS C6 (left) and C6.1 (right) Deep Blue 10 km AOD against AERONET AOD at IASBS, Kandahar, Dushanbe and Solar_Village. The one-one line, linear regression line, and the expected error (EE) envelopes of ±(0.05 + 20%AOD_AERONET) are plotted as green solid, red solid, and black dashed lines. (a) IASBS, Iran; (b) Kandahar, Afghanistan; (c) Dushanbe, Tajikistan; (d) Solar_Village, Saudi Arabia.
Figure 3. Validation of MODIS C6 (left) and C6.1 (right) DB AOD against ground-based AOD in Northwestern China. The one-one line, linear regression line, and the EE envelopes of ±(0.05 + 20%AOD_ground) are plotted as red solid, green solid, and black dashed lines.
Figure 4. Box plots of AOD bias (MODIS AOD minus ground-based AOD) and the percentage of retrievals falling within the EE envelopes (dashed curves) for MYD04 C6 and C6.1 DB AOD against ground-based AOD measurements at 0.55 μm as a function of aerosol loading. Cyan and purple represent the results of C6 and C6.1, respectively. The black horizontal solid line represents zero bias. The two black dashed curves represent the EE envelopes: ±(0.05 + 20%AOD_ground). In each box, the middle, lower, and upper horizontal lines represent the AOD bias median and the 25th and 75th percentiles, respectively.
Figure 5. Validation of MODIS C6 (left) and C6.1 (right) DB AOD against ground-based AOD at 12 desert sites. The one-one line, linear regression line, and the EE envelopes are plotted as red solid, green solid, and black dashed lines. (a) Tazhong; (b) Hotan; (c) Minqin; (d) Ulate; (e) AOE_Baotou; (f) Zhurihe; (g) Zhangbei; (h) Xilinhot; (i) Hami; (j) Jiuquan; (k) Dunhuang; (l) Ejina.
Figure 6. Validation of MODIS C6 (left) and C6.1 (right) DB AOD against ground-based AOD at 5 sites in the transitional region. The one-one line, linear regression line, and the EE envelopes are plotted as red solid, green solid, and black dashed lines. (a) Dongsheng; (b) Yanan; (c) Yulin; (d) SACOL; (e) Mt. Gaolan.
Figure 7. Validation of MODIS C6 (left) and C6.1 (right) DB AOD against ground-based AOD at 4 sites in the urban and built-up region. The one-one line, linear regression line, and the EE envelopes are plotted as red solid, green solid, and black dashed lines. (a) Lanzhou; (b) Urumqi; (c) Yinchuan; (d) Datong.
Figure 8. Validation of MODIS C6 (left) and C6.1 (right) DB AOD against ground-based AOD at 2 remote sites. The one-one line, linear regression line, and the EE envelopes are plotted as red solid, green solid, and black dashed lines. (a) Mt. Waliguan; (b) Akedala.
Figure 9. Maps showing the best performing of the MODIS DB products for the following statistical metrics: (a) correlation coefficient, (b) fraction of points within the EE, (c) RMSE, and (d) MB.
14 pages, 6332 KiB  
Communication
Seasonal and Interannual Variability of Tidal Mixing Signatures in Indonesian Seas from High-Resolution Sea Surface Temperature
by Raden Dwi Susanto and Richard D. Ray
Remote Sens. 2022, 14(8), 1934; https://doi.org/10.3390/rs14081934 - 16 Apr 2022
Cited by 13 | Viewed by 3129
Abstract
With their complex narrow passages and vigorous mixing, the Indonesian seas provide the only low-latitude pathway between the Pacific and Indian Oceans and thus play an essential role in regulating Pacific–Indian Ocean exchange, regional air–sea interaction, and ultimately, global climate phenomena. While previous investigations using remote sensing and numerical simulations strongly suggest that this mixing is tidally driven, the monsoon and the El Niño–Southern Oscillation (ENSO) must also play an important role in modulating it. Here we use high-resolution sea surface temperature from June 2002 to June 2021 to reveal monsoon and ENSO modulations of mixing. The largest spring–neap (fortnightly) signals are found to be localized in the narrow passages/straits and sills, with more vigorous tidal mixing during the southeast (boreal summer) monsoon and El Niño than during the northwest (boreal winter) monsoon and La Niña. Therefore, tidal mixing, which necessarily responds to seasonal and interannual changes in stratification, must also play a feedback role in regulating seasonal and interannual variability of water mass transformations and the Indonesian throughflow. The findings have implications for longer-term variations and changes of Pacific–Indian Ocean water mass transformation, circulation, and climate. Full article
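For readers who want to reproduce this kind of fortnightly (MSf, T ≈ 14.77 days) amplitude estimate, a harmonic least-squares fit to an SST time series is the simplest route. The sketch below uses synthetic data and hypothetical variable names and is not the authors' processing chain:

```python
import numpy as np

T_MSF = 14.765  # MSf period in days

def msf_amplitude(t_days, sst):
    """Least-squares fit of a mean plus an MSf sine/cosine pair; returns amplitude (deg C)."""
    w = 2.0 * np.pi / T_MSF
    A = np.column_stack([np.ones_like(t_days), np.cos(w * t_days), np.sin(w * t_days)])
    coef, *_ = np.linalg.lstsq(A, sst, rcond=None)
    return np.hypot(coef[1], coef[2])

# Synthetic daily SST with a 0.05 degC fortnightly modulation plus noise
t = np.arange(0, 19 * 365.0)
sst = 28.0 + 0.05 * np.cos(2 * np.pi * t / T_MSF) + 0.3 * np.random.randn(t.size)
print(f"estimated MSf amplitude: {msf_amplitude(t, sst):.3f} degC")
```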
Figure 1
<p>Indonesian throughflow (ITF) pathways (purple lines) overlaid with the sea surface temperature anomaly in the Indo-Pacific region during (<b>A</b>) the 1997 El Niño and (<b>B</b>) the 2015 La Niña.</p>
Full article ">Figure 2
<p>Average SST from June 2002 to June 2021 during (<b>A</b>) the southeast monsoon (boreal summer); (<b>B</b>) during the northwest monsoon (boreal winter); (<b>C</b>) El Niño; (<b>D</b>) La Niña, and (<b>E</b>) Neutral year.</p>
Full article ">Figure 3
<p>The amplitude of the fortnightly signal (MSf): (<b>A</b>) During the southeast monsoon (boreal summer) and (<b>B</b>) during the northwest monsoon (boreal winter). The tidal mixing signatures are more robust/stronger during the boreal summer (when the ITF is generally stronger) than that during boreal winter.</p>
Full article ">Figure 4
<p>Temperature–salinity diagrams of the evolution of water masses from north of Lombok Strait through Lombok Strait into the Indian Ocean, taken in (<b>A</b>) June 2005, representing the southeast monsoon, and (<b>B</b>) December 2019, representing the northwest monsoon.</p>
Full article ">Figure 5
<p>(<b>A</b>) Map of the main ITF pathways from the Pacific to Indian Oceans (adopted from Susanto et al. [<a href="#B42-remotesensing-14-01934" class="html-bibr">42</a>]). Numbers denote annual mean ITF transports in each strait with negative values representing transport toward the Indian Ocean; (<b>B</b>) Map of the Makassar and Lombok Straits showing the location of the stations used in (<b>C</b>); (<b>C</b>) TS diagram of the evolution of water masses from north of Makassar Strait through Lombok Strait into the Indian Ocean taken during the Indonesian throughflow monitoring program, revealing vigorous mixing in Makassar and Lombok Straits for potential densities σ<sub>θ</sub> = 23–27 kg m<sup>–3</sup>.</p>
Full article ">Figure 6
<p>(<b>A</b>) SST spectrum based on 20 years of daily GHRSST data south of Lombok and Alas Straits (red and blue curves), and at the location of the INSTANT mooring at the northern end of Lombok Strait (green curve). Note that the strong MSf peaks in the red and blue curves are accompanied by two side-peaks, which are precisely one cycle-per-year (cpy) from the central MSf peaks [<a href="#B21-remotesensing-14-01934" class="html-bibr">21</a>], thus indicating substantial annual modulation, which <a href="#remotesensing-14-01934-f003" class="html-fig">Figure 3</a> explicitly shows; (<b>B</b>) Velocity spectrum of the INSTANT time series. Labeled vertical dashed lines mark frequencies Mf (T = 13.66-days); MSf (T = 14.77-days), and Mm (T = 27.55-days); the two lines at the far left mark annual and semiannual frequencies (Sa and Ssa); (<b>C</b>) Estimates of fortnightly (MSf) SST amplitude in milli °C (zoom in <a href="#remotesensing-14-01934-f003" class="html-fig">Figure 3</a>A). Locations of observations for the spectrum analysis (red, green, and blue curves).</p>
Full article ">Figure 7
<p>The amplitude of the fortnightly signal (MSf): (<b>A</b>) during the El Niño year; (<b>B</b>) during the La Niña year, and (<b>C</b>) during the normal year. Tidal mixing signatures are stronger/more robust during the El Niño year, when the ITF is weaker than during La Niña.</p>
Full article ">Figure 8
<p>Niño3.4 index (magenta line) and Dipole Mode Index (DMI) (gray line). Values above and below 0.5 are shaded (blue for El Niño and magenta for La Niña). Shaded in cyan denotes Indian Ocean Dipole Mode (IOD) positive, while shaded in gray denotes IOD negative. Blue vertical lines denote El Niño and magenta vertical lines denote La Niña.</p>
Full article ">
15 pages, 2703 KiB  
Article
A Camera-Based Method for Collecting Rapid Vegetation Data to Support Remote-Sensing Studies of Shrubland Biodiversity
by Erin J. Questad, Marlee Antill, Nanfeng Liu, E. Natasha Stavros, Philip A. Townsend, Susan Bonfield and David Schimel
Remote Sens. 2022, 14(8), 1933; https://doi.org/10.3390/rs14081933 - 16 Apr 2022
Cited by 3 | Viewed by 3421
Abstract
The decline in biodiversity in Mediterranean-type ecosystems (MTEs) and other shrublands underscores the importance of understanding the trends in species loss through consistent vegetation mapping over broad spatial and temporal ranges, which is increasingly accomplished with optical remote sensing (imaging spectroscopy). Airborne missions planned by the National Aeronautics and Space Administration (NASA) and other groups (e.g., US National Ecological Observatory Network, NEON) are essential for improving high-quality maps of vegetation and plant species. These surveys require robust and efficient ground calibration/validation data; however, barriers to ground-data collection exist, such as steep terrain, which is a common feature of Mediterranean-type ecosystems. We developed and tested a method for rapidly collecting ground-truth data for shrubland plant communities across steep topographic gradients in southern California. Our method utilizes semi-aerial photos taken with a high-resolution digital camera mounted on a telescoping pole to capture groundcover, and a point-intercept image-classification program (Photogrid) that allows efficient sub-sampling of field images to derive vegetation percent-cover estimates while reducing human bias. Here, we assessed the quality of data collection using the image-based method compared to a traditional point-intercept ground survey and performed time trials to compare the efficiency of various survey efforts. The results showed no significant difference in estimates of percent cover and Simpson’s diversity derived from the point-intercept and those derived using the image-based method; however, there was lower correspondence in estimates of species richness and evenness. The image-based method was overall more efficient than the point-intercept surveys, reducing the total survey time by 13 to 46 min per plot depending on sampling effort. The difference in survey time between the two methods became increasingly greater when the vegetation height was above 1 m. Due to the high correspondence between estimates of species percent cover derived from the image-based compared to the point-intercept method, we recommend this type of survey for the verification of remote-sensing datasets featuring percent cover of individual species or closely related plant groups, for use in classifying UAS imagery, and especially for use in MTEs that have steep, rugged terrain and other situations such as tall, dense-growing shrubs where traditional field methods are dangerous or burdensome. Full article
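The percent-cover and diversity metrics compared in the study follow directly from the per-gridpoint class labels that a Photogrid-style classification produces. A minimal sketch (generic Python with made-up class labels; not the Photogrid program itself):

```python
from collections import Counter

def cover_and_diversity(labels, non_plant=("bare ground", "rock", "litter")):
    """Percent cover per class, Simpson's diversity (1/D), and Simpson's evenness
    from a flat list of gridpoint class labels."""
    counts = Counter(labels)
    n = sum(counts.values())
    percent_cover = {cls: 100.0 * c / n for cls, c in counts.items()}
    plant = {cls: c for cls, c in counts.items() if cls not in non_plant}
    n_plant = sum(plant.values())
    p = [c / n_plant for c in plant.values()]
    simpson_d = sum(pi * pi for pi in p)      # Simpson's index D
    inv_d = 1.0 / simpson_d                   # diversity 1/D
    evenness = inv_d / len(plant)             # evenness E = (1/D)/S
    return percent_cover, inv_d, evenness

# Hypothetical gridpoint labels from one plot's photos
labels = ["Annual Grass"] * 20 + ["Eriogonum fasciculatum"] * 15 + ["bare ground"] * 7
print(cover_and_diversity(labels))
```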
Figure 1
<p>Map of the study area in the Angeles National Forest (green), Los Angeles County, CA, USA. The Copper and Sayre fires burned approximately 25,000 acres between 2002 and 2008.</p>
Full article ">Figure 2
<p>Diagram of photo-survey method for (<b>A</b>) steep plots: surveyor walks along a 45-m transect taking 10 5-m × 5-m photos (grey boxes) of the plot below, (<b>B</b>) a field crew member taking photos with camera attached to a pole extended 5 m, and (<b>C</b>) diagram of photo survey for flat plots: surveyor walks along two crossed 45-m transects taking 10 5-m × 5-m photos from above.</p>
Full article ">Figure 3
<p>Photogrid classification process: (<b>A</b>) in the field, aboveground photos were marked with a smartphone app to aid with later species ID; (<b>B</b>) back in the lab, photos were uploaded into the Photogrid classifier program and number of gridpoints per photo (generally 42) were chosen, which instructs the program to populate each photo with gridpoints to classify; (<b>C</b>) for each gridpoint, the user must select the dominant species/cover class within that cell from a pre-entered list; (<b>D</b>) after completing classification for all ten plot photos, a table is generated with percent cover of each class identified, out of 100% (Annual Grass was a composite category that included annual grass species that could not be distinguished in the photos such as <span class="html-italic">Bromus</span> spp., <span class="html-italic">Avena</span> spp., etc.).</p>
Full article ">Figure 4
<p>Relationship between species richness of field observations (S<sub>field</sub>) and Photogrid surveys (S<sub>photo</sub>) for Flat and Steep protocols. Data are from 2018 field surveys including 69 steep (shown in black) and 14 flat plots (shown in grey). The dashed line is the one-to-one line. Results of a linear mixed-effects regression showed a significant effect of S<sub>field</sub> (F<sub>1,79</sub> = 95.55, <span class="html-italic">p</span> &lt; 0.0001) and protocol (F<sub>1,79</sub> = 7.91, <span class="html-italic">p</span> = 0.006) on S<sub>photo</sub>, with no significant interaction (F<sub>1,79</sub> = 0.47, <span class="html-italic">p</span> &gt; 0.05). Pearson correlation coefficients between S<sub>photo</sub> and S<sub>field</sub> are shown.</p>
Full article ">Figure 5
<p>Regressions between field (e.g., S<sub>p-i</sub>) and Photogrid (e.g., S<sub>p</sub>) vegetation metrics from the 2019 data, shown for the 12 Photogrid configurations of (<b>A</b>) Simpson’s species diversity (1/D); (<b>B</b>) % Cover; (<b>C</b>) species richness (S); and (<b>D</b>) Simpson’s evenness. The heavy black lines show the one-to-one relationship. Dashed lines represent the regression between Photogrid configuration 1 (highest sampling effort) and the field point-intercept method; colored lines are those configurations that did not produce significantly different results from configuration 1, and grey lines represent those that were significantly different than configuration 1, based on a Tukey HSD test. There was no significant difference among the configurations for 1/D<sub>p</sub> or % Cover<sub>p</sub>. Five configurations produced significantly lower values for S<sub>p</sub>, and three configurations produced significantly greater values for E<sub>p</sub> when compared to configuration 1.</p>
Full article ">Figure 6
<p>Survey time in minutes by survey method and vegetation height class. Height classes were low (&lt;1 m), mid (1 m to 1.5 m), and high (&gt;1.5 m). Crossbars represent the mean ±95% confidence interval for 13 field plots within three vegetation height classes surveyed in 2019 by both the point-intercept and Photogrid methods. There were no significant effects of method, height class, or their interaction on survey time. The graph is provided to illustrate a trend that the photo method time was lower in mid and high height classes.</p>
Full article ">
17 pages, 3294 KiB  
Article
Polarization Aberrations in High-Numerical-Aperture Lens Systems and Their Effects on Vectorial-Information Sensing
by Yuanxing Shen, Binguo Chen, Chao He, Honghui He, Jun Guo, Jian Wu, Daniel S. Elson and Hui Ma
Remote Sens. 2022, 14(8), 1932; https://doi.org/10.3390/rs14081932 - 16 Apr 2022
Cited by 14 | Viewed by 4001
Abstract
The importance of polarization aberrations has been recognized and studied in numerous optical systems and related applications. It is known that polarization aberrations are particularly crucial in certain photogrammetry and microscopy techniques that are related to vectorial information—such as polarization imaging, stimulated emission depletion microscopy, and structured illumination microscopy. Hence, a reduction in polarization aberrations would be beneficial to different types of optical imaging/sensing techniques with enhanced vectorial information. In this work, we first analyzed the intrinsic polarization aberrations induced by a high-NA lens theoretically and experimentally. The aberrations of depolarization, diattenuation, and linear retardance were studied in detail using the Mueller matrix polar-decomposition method. Based on an analysis of the results, we proposed strategies to compensate the polarization aberrations induced by high-NA lenses for hardware-based solutions. The preliminary imaging results obtained using a Mueller matrix polarimeter equipped with multiple coated aspheric lenses for polarization-aberration reduction confirmed that the conclusions and strategies proposed in this study had the potential to provide more precise polarization information of the targets for applications spanning across classical optics, remote sensing, biomedical imaging, photogrammetry, and vectorial optical-information extraction. Full article
(This article belongs to the Special Issue Advanced Light Vector Field Remote Sensing)
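For context on the aberration parameters analyzed above, the diattenuation of a measured Mueller matrix follows directly from its first row; the full Lu–Chipman polar decomposition (MMPD) used in the paper involves additional steps not shown here. A minimal sketch of the diattenuation term only:

```python
import numpy as np

def diattenuation(M):
    """Diattenuation D of a 4x4 Mueller matrix: D = sqrt(M01^2 + M02^2 + M03^2) / M00."""
    M = np.asarray(M, dtype=float)
    return np.sqrt(M[0, 1] ** 2 + M[0, 2] ** 2 + M[0, 3] ** 2) / M[0, 0]

# Ideal horizontal linear polarizer: D should be 1
M_pol = 0.5 * np.array([[1, 1, 0, 0],
                        [1, 1, 0, 0],
                        [0, 0, 0, 0],
                        [0, 0, 0, 0]])
print(diattenuation(M_pol))   # -> 1.0
```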
Figure 1
<p>Schematics of light-beam propagation through a lens. (<b>a</b>) Focusing and (<b>b</b>) collection of light in a high-NA lens system.</p>
Full article ">Figure 2
<p>Four designs used in high-NA lens systems. The red solid lines show the light paths, the yellow solid lines show the reflection light paths of the interface surface, the black dotted lines show the normal lines of the interface, and the coated lenses are represented by the gray boundary on the surface.</p>
Full article ">Figure 3
<p>Schematic of Mueller matrix polarimeter.</p>
Full article ">Figure 4
<p>Schematic of a simplified high-NA 4-f lens system.</p>
Full article ">Figure 5
<p>Mueller matrices of the spectrum of plane waves from the exit pupil for (<b>a</b>) uncoated spherical lens model (theoretical calculation), (<b>b</b>) uncoated aspherical lens system, and (<b>c</b>) coated aspherical lens system. The areas corresponding to different NA values are labeled with black dotted lines in the M11 element. The M11 elements were normalized to [0, 1], while all the other elements were normalized with the M11 to [–1, 1]. The color bars in (<b>a</b>–<b>c</b>) are [0, 1] for the M11 element; [0.8, 1] for the M22, M33, and M44 elements; and [−0.25, 0.25] for other nondiagonal elements.</p>
Full article ">Figure 6
<p>(<b>a</b>) Measured Mueller matrix of the spectrum of plane waves from exit pupil for aspherical multiple lenses coated with CaF2. The areas of different NA values are labeled with black lines in the M11. The M11 element was normalized to [0, 1], while all other elements were normalized with the M11 to [−1, 1]. The color bar is [0.8, 1] for the M22, M33, and M44 elements; and [−0.25, 0.25] for other nondiagonal elements. (<b>b</b>) Average values of Mueller matrices for four high-NA lens systems: uncoated spherical lens (pink lines), uncoated aspherical lens (blue lines), coated aspherical lens (orange lines) and coated aspherical multiple lenses (purple lines), with NA increases from 0 to 0.2, 0.4, 0.6, and 0.78. The black dotted lines show the theoretical values. Here, air was used as the standard sample. All Mueller matrix elements were normalized with the M11. The measured average values with NA 0, 0.2, 0.4, 0.6, and 0.78 of the M44 element for the uncoated aspherical lens were 0.999 ± 7.09 × 10<sup>−5</sup>, 0.999 ± 7.70 × 10<sup>−5</sup>, 0.997 ± 7.61 × 10<sup>−4</sup>, 0.990 ± 8.41 × 10<sup>−5</sup>, and 0.981 ± 8.85 × 10<sup>−5</sup>, respectively; and of the M24 element for the coated aspherical lens were −0.00340 ± 1.45 × 10<sup>−4</sup>, 0.0312 ± 1.70 × 10<sup>−3</sup>, 0.0279 ± 8.93 × 10<sup>−4</sup>, 0.0238 ± 8.83 × 10<sup>−4</sup>, and 0.0242 ± 5.19 × 10<sup>−4</sup>, respectively.</p>
Full article ">Figure 7
<p>Experimental results of MMPD parameters (<b>a</b>) Δ of air, (<b>b</b>) D of a linear polarizer, and (<b>c</b>) δ of a quarter-wave retarder, for four high-NA lens systems: uncoated spherical lens (pink lines), uncoated aspherical lens (blue lines), coated aspherical lens (orange lines), and coated aspherical multiple lenses (purple lines), with NA increases from 0 to 0.2, 0.4, 0.6, and 0.78. The black dotted lines show the theoretical values.</p>
Full article ">Figure 8
<p>Comparative validation experiment between a common objective lens and a lens combination in suppressing polarization aberrations. (<b>a</b>) Schematic of the vortex retarder sample; (<b>b</b>) from left to right, the MMPD δ and θ results measured by the common objective lens; and (<b>c</b>) from left to right, the MMPD δ and θ results measured by the cascading coated aspherical multiple-lens combination.</p>
Full article ">
20 pages, 7136 KiB  
Article
Automated Inventory of Broadleaf Tree Plantations with UAS Imagery
by Aishwarya Chandrasekaran, Guofan Shao, Songlin Fei, Zachary Miller and Joseph Hupy
Remote Sens. 2022, 14(8), 1931; https://doi.org/10.3390/rs14081931 - 16 Apr 2022
Cited by 4 | Viewed by 3123
Abstract
With the increased availability of unmanned aerial systems (UAS) imagery, digitalized forest inventory has gained prominence in recent years. This paper presents a methodology for automated measurement of tree height and crown area in two broadleaf tree plantations of different species and ages using two different UAS platforms. Using structure from motion (SfM), we generated canopy height models (CHMs) for each broadleaf plantation in Indiana, USA. From the CHMs, we calculated individual tree parameters automatically through an open-source web tool developed using the Shiny R package and assessed the accuracy against field measurements. Our analysis shows higher tree measurement accuracy with the datasets derived from multi-rotor platform (M600) than with the fixed wing platform (Bramor). The results show that our automated method could identify individual trees (F-score > 90%) and tree biometrics (root mean square error < 1.2 m for height and <1 m2 for the crown area) with reasonably good accuracy. Moreover, our automated tool can efficiently calculate tree-level biometric estimations for 4600 trees within 30 min based on a CHM from UAS-SfM derived images. This automated UAS imagery approach for tree-level forest measurements will be beneficial to landowners and forest managers by streamlining their broadleaf forest measurement and monitoring effort. Full article
(This article belongs to the Special Issue UAV Applications for Forest Management: Wood Volume, Biomass, Mapping)
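The detection accuracies quoted above (F-score, omission and commission errors) come from matching algorithm-derived treetops against reference treetops. A compact illustration of local-maximum treetop detection on a CHM and the F-score arithmetic (a generic sketch, not the authors' Shiny R tool; the window size, height threshold, and synthetic CHM are arbitrary):

```python
import numpy as np
from scipy.ndimage import maximum_filter

def detect_treetops(chm, window=5, min_height=2.0):
    """Local maxima of a canopy height model above a height threshold."""
    local_max = (chm == maximum_filter(chm, size=window)) & (chm > min_height)
    rows, cols = np.nonzero(local_max)
    return list(zip(rows, cols))

def f_score(tp, fp, fn):
    recall = tp / (tp + fn)        # 1 - omission error
    precision = tp / (tp + fp)     # 1 - commission error
    return 2 * precision * recall / (precision + recall)

chm = np.random.gamma(shape=2.0, scale=3.0, size=(200, 200))  # stand-in CHM (m)
tops = detect_treetops(chm)
print(len(tops), "candidate treetops;", "F-score example:", round(f_score(90, 8, 6), 3))
```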
Figure 1
<p>Orthomosaic and point cloud illustration of the two plantations in this study, (<b>a</b>) red oak plantation and (<b>b</b>) black walnut plantation, with their tree height statistics, at Martell Forest, Indiana.</p>
Full article ">Figure 2
<p>Workflow of tree-level information extraction using UAS-based imagery from broadleaf tree plantations.</p>
Full article ">Figure 3
<p>Illustration of issues involved in treetop detection and crown segmentation of individual trees performed using the automated technique proposed in this study. Panels (<b>a</b>,<b>d</b>) represent perfect segmentation. Panels (<b>b</b>,<b>e</b>) represent undersegmentation. Panels (<b>c</b>,<b>f</b>) represent oversegmentation with orthophoto and CHM.</p>
Full article ">Figure 4
<p>Comparison of different accuracy parameters for the two plantations using different UAS platforms. (<b>a</b>) Recall, (<b>b</b>) Precision, (<b>c</b>) Omission error, (<b>d</b>) Commission error, (<b>e</b>) F-score and (<b>f</b>) True positive rates are presented by using manually detected treetops as a reference for the accuracy assessment.</p>
Full article ">Figure 5
<p>Results of regression analysis between ground-measured and algorithm-derived estimates for the red oak plantation. (<b>a</b>) Crown diameter measured using different UAS datasets (M600 and Bramor, respectively). (<b>b</b>) Tree height measured using different UAS datasets (M600 and Bramor, respectively). *** indicates <span class="html-italic">p</span> ≤ 0.001 (statistical significance).</p>
Full article ">Figure A1
<p>Location map of the two plantation areas. Top: black walnut; bottom: red oak; located in Indiana, USA.</p>
Full article ">Figure A2
<p>Illustration for calculating the TP, FP, and FN from the segmented data.</p>
Full article ">Figure A3
Full article ">
18 pages, 12317 KiB  
Article
Approaches for Joint Retrieval of Wind Speed and Significant Wave Height and Further Improvement for Tiangong-2 Interferometric Imaging Radar Altimeter
by Guo Li, Yunhua Zhang and Xiao Dong
Remote Sens. 2022, 14(8), 1930; https://doi.org/10.3390/rs14081930 - 16 Apr 2022
Cited by 3 | Viewed by 1944
Abstract
The interferometric imaging radar altimeter (InIRA) adopts a short baseline along with small incidence angles to acquire interferometric signals from the sea surface with high accuracy, thus the wide-swath sea surface height (SSH) and backscattering coefficient (σ0) can be obtained simultaneously. This work presents an approach to jointly retrieve the wind speed and significant wave height (SWH) for the Chinese Tiangong-2 interferometric imaging radar altimeter (TG2-InIRA). This approach utilizes a multilayer perceptron (MLP) joint retrieval model based on σ0 and SSH data. By comparing with the European Center for Medium-Range Weather Forecasts (ECMWF) reanalysis data, the root mean square errors (RMSEs) of the retrieved wind speed and the SWH are 1.27 m/s and 0.36 m, respectively. Based on the retrieved SWH, two enhanced wind speed retrieval models are developed for high sea states and low sea states, respectively. The results show that the RMSE of the retrieved wind speed is 1.12 m/s when the SWHs < 4 m; the RMSE is 0.73 m/s when the SWHs ≥ 4 m. Similarly, two enhanced SWH retrieval models for relatively larger and relatively smaller wind speed regions are developed based on the retrieved wind speed with corresponding RMSEs of 0.19 m and 0.16 m, respectively. The comparison between the retrieved results and the buoy data shows that they are highly consistent. The results show that the additional information of SWH can be used to improve the accuracy of wind speed retrieval at small incidence angles, and also the additional information of wind speed can be used to improve the SWH retrieval. The stronger the correlation between wind speed and SWH, the greater the improvement of the retrieved results. The proposed method can achieve joint retrieval of wind speed and SWH accurately, which complements the existing wind speed and SWH retrieval methods for InIRA. Full article
(This article belongs to the Section Ocean Remote Sensing)
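The joint retrieval described above maps σ<sub>0</sub> and SSH-derived features to wind speed and SWH with a multilayer perceptron. A scikit-learn sketch of that idea on synthetic data (the layer sizes, input features, and toy forward model are assumptions, not the TG2-InIRA configuration):

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(0)
n = 5000
wind = rng.uniform(2, 20, n)                                   # m/s
swh = 0.3 * wind + rng.normal(0, 0.5, n)                       # crude synthetic sea state
sigma0 = 20 - 0.8 * wind - 0.5 * swh + rng.normal(0, 0.3, n)   # toy backscatter (dB)
ssh_std = 0.05 * swh + rng.normal(0, 0.02, n)                  # toy SSH variability

X = np.column_stack([sigma0, ssh_std])
y = np.column_stack([wind, swh])

model = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=2000, random_state=0)
model.fit(X[:4000], y[:4000])
pred = model.predict(X[4000:])
rmse = np.sqrt(mean_squared_error(y[4000:], pred, multioutput="raw_values"))
print(f"RMSE wind: {rmse[0]:.2f} m/s, RMSE SWH: {rmse[1]:.2f} m")
```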
Figure 1
<p>Sixty tracks of TG2-InIRA data used in this study.</p>
Full article ">Figure 2
<p>Ocean observation images of TG2-InIRA. (<b>a</b>) SSH; (<b>b</b>) <math display="inline"><semantics> <msub> <mi>σ</mi> <mn>0</mn> </msub> </semantics></math>.</p>
Full article ">Figure 3
<p>Distribution of NDBC buoys matching to TG2-InIRA. The red points denote the positions of the buoys while the blue squares represent the positions of the collocated TG2-InIRA data.</p>
Full article ">Figure 4
<p>Mean values of <math display="inline"><semantics> <msub> <mi>σ</mi> <mn>0</mn> </msub> </semantics></math> at different wind speeds and SWHs at 5° incidence angle. Different colored curves represent different SWH ranges.</p>
Full article ">Figure 5
<p>Correlation coefficients between wind speed and SWH at different incidence angles.</p>
Full article ">Figure 6
<p>Histograms of the collocated data for training. (<b>a</b>) original SWH; (<b>b</b>) re-sampled SWH; (<b>c</b>) re-sampled wind speed.</p>
Full article ">Figure 7
<p>Structure of the four-layer MLP model with one input layer, one output layer and two hidden layers.</p>
Full article ">Figure 8
<p>Block diagram of the proposed joint retrieval approach.</p>
Full article ">Figure 9
<p>Joint retrieval results compared with ECMWF reanalysis data. (<b>a</b>) wind speed; (<b>b</b>) SWH. The red line represents the reference line ’y = x’.</p>
Full article ">Figure 10
<p>Wind speed–SWH distribution of (<b>a</b>) the number of training samples and (<b>b</b>) the RMSE of the retrieved results (wind speed + SWH).</p>
Full article ">Figure 11
<p>Retrieved results. (<b>a</b>) retrieved wind speed after adding SWH. (<b>b</b>) retrieved SWH after adding wind speed. The black lines are ‘y = x’ for references.</p>
Full article ">Figure 12
<p>Spearman’s rank correlation coefficients between wind speed and SWH at 5° incidence angle. (<b>a</b>) at different wind speeds; (<b>b</b>) at different SWHs.</p>
Full article ">Figure 13
<p>Block diagram of the enhanced joint retrieval method.</p>
Full article ">Figure 14
<p>Retrieved wind speeds by the enhanced model with SWH divided into SWHs &lt; 4 m (blue color) and SWHs ≥ 4 m (red color) regions. The black line is ‘y = x’ for reference.</p>
Full article ">Figure 15
<p>Retrieved SWHs by the enhanced model with wind speed divided into wind speeds &lt; 9 m/s (blue color) and wind speeds ≥ 9 m/s (red color) regions. The black line is ‘y = x’ for reference.</p>
Full article ">Figure 16
<p>Comparison of the retrieved results and buoys’ data. (<b>a</b>) comparison of the joint retrieved wind speeds, enhanced wind speeds and buoy wind speeds; (<b>b</b>) difference between joint retrieved wind speeds and buoy, and enhanced retrieved wind speeds and buoy; (<b>c</b>) comparison of the joint retrieved SWHs, enhanced SWHs and buoy SWHs; (<b>d</b>) difference between joint retrieved SWHs and buoy, and enhanced retrieved SWHs and buoy.</p>
Full article ">Figure 17
<p>Wind speed–SWH distribution of the training data set.</p>
Full article ">
15 pages, 853 KiB  
Article
Spatial-Temporal Neural Network for Rice Field Classification from SAR Images
by Yang-Lang Chang, Tan-Hsu Tan, Tsung-Hau Chen, Joon Huang Chuah, Lena Chang, Meng-Che Wu, Narendra Babu Tatini, Shang-Chih Ma and Mohammad Alkhaleefah
Remote Sens. 2022, 14(8), 1929; https://doi.org/10.3390/rs14081929 - 16 Apr 2022
Cited by 10 | Viewed by 3733
Abstract
Agriculture is an important regional economic industry in Asia, and ensuring food security and stabilizing the food supply are priorities. In response to the frequent occurrence of natural disasters caused by global warming in recent years, the Agriculture and Food Agency (AFA) in Taiwan has conducted agricultural and food surveys to address those issues. To improve the accuracy of agricultural and food surveys, AFA uses remote sensing technology to conduct surveys on the planting area of agricultural crops. Unlike optical images that are easily disturbed by rainfall and cloud cover, synthetic aperture radar (SAR) images are not affected by climatic factors, which makes them more suitable for forecasting crop production. This research proposes a novel spatial-temporal neural network called a convolutional long short-term memory rice field classifier (ConvLSTM-RFC) for rice field classification from Sentinel-1A SAR images of Yunlin and Chiayi counties in Taiwan. The proposed model ConvLSTM-RFC is implemented with multiple convolutional long short-term memory attention blocks (ConvLSTM Att Block) and a bi-tempered logistic loss function (BiTLL). Moreover, a convolutional block attention module (CBAM) was added to the residual structure of the ConvLSTM Att Block to focus on rice detection in different periods on SAR images. The proposed model ConvLSTM-RFC achieved the highest accuracy of 98.08%, with a rice false-positive rate as low as 15.08%. The results indicate that the proposed ConvLSTM-RFC produces the highest area under the curve (AUC) value of 88% compared with other related models. Full article
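A pixel-wise spatial-temporal classifier of this kind stacks convolutional LSTM layers over the SAR time dimension. The Keras sketch below shows only the bare ConvLSTM idea; the paper's ConvLSTM-RFC additionally uses CBAM attention blocks and a bi-tempered logistic loss, which are not reproduced here, and the input shape is an assumption:

```python
import tensorflow as tf

# (time steps, height, width, polarization channels) for a Sentinel-1 patch stack
inputs = tf.keras.Input(shape=(12, 64, 64, 2))
x = tf.keras.layers.ConvLSTM2D(32, kernel_size=3, padding="same",
                               return_sequences=True)(inputs)
x = tf.keras.layers.BatchNormalization()(x)
x = tf.keras.layers.ConvLSTM2D(32, kernel_size=3, padding="same",
                               return_sequences=False)(x)
outputs = tf.keras.layers.Conv2D(1, 1, activation="sigmoid")(x)   # rice / non-rice map

model = tf.keras.Model(inputs, outputs)
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy", tf.keras.metrics.AUC()])
model.summary()
```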
Figure 1
<p>Yunlin and Chiayi counties’ data was received from Sentinel-1A in February 2017, where the height and width represent the number of pixels. White indicates the distribution of rice fields.</p>
Full article ">Figure 2
<p>(<b>a</b>) The ground truth data of study areas from the Agriculture and Food Agency (AFA). (<b>b</b>) The ground truth data of study areas from the Taiwan Agriculture Research Institute (TARI). White indicates the non-rice and black indicates the rice fields, where the height and width represent the number of pixels.</p>
Full article ">Figure 3
<p>Flowchart of this study.</p>
Full article ">Figure 4
<p>The architectural framework of the proposed ConvLSTM-RFC.</p>
Full article ">Figure 5
<p>A flowchart of the training and testing process.</p>
Full article ">Figure 6
<p>Sentinel-1A image and the rice field classification results of all the models. (<b>a</b>) Predicted result of GRU. (<b>b</b>) Predicted result of 3D CNN. (<b>c</b>) Predicted result of ConvLSTM. (<b>d</b>) Predicted result of ConvLSTM-RFC model. (<b>e</b>) Ground truth data from Agriculture and Food Agency (AFA).</p>
Full article ">Figure 7
<p>ROC curve of all the models.</p>
Full article ">
26 pages, 53673 KiB  
Article
Investigating the Potential of Sentinel-2 MSI in Early Crop Identification in Northeast China
by Mengfan Wei, Hongyan Wang, Yuan Zhang, Qiangzi Li, Xin Du, Guanwei Shi and Yiting Ren
Remote Sens. 2022, 14(8), 1928; https://doi.org/10.3390/rs14081928 - 15 Apr 2022
Cited by 11 | Viewed by 3502
Abstract
Early crop identification can provide timely and valuable information for agricultural planting management departments to make reasonable and correct decisions. At present, there is still a lack of systematic summary and analysis on how to obtain real-time samples in the early stage, what the optimal feature sets are, and what level of crop identification accuracy can be achieved at different stages. First, this study generated training samples with the help of historical crop maps in 2019 and remote sensing images in 2020. Then, a feature optimization method was used to obtain the optimal features in different stages. Finally, the differences among the four classifiers in identifying crops and the variation of crop identification accuracy at different stages were analyzed. These experiments were conducted at three sites in Heilongjiang Province to evaluate the reliability of the results. The results showed that the earliest identification time of corn can be obtained in early July (the seven-leaf period) with an identification accuracy of up to 86%. In the early stages, its accuracy was 40~79%, which was too low to meet accuracy requirements. In the middle stages, a satisfactory recognition accuracy of 79~100% could be achieved. The late stage had a higher recognition accuracy of 90~100%. The accuracy of soybeans at each stage was similar to that of corn, and the earliest identification time of soybeans could also be obtained in early July (the blooming period) with an identification accuracy of up to 87%. Its accuracy in the early growth stage was 35~71%; in the middle stage, it was 69~100%; and in the late stage, it was 92~100%. Unlike corn and soybeans, the earliest identification time of rice could be obtained at the end of April (the flooding period) with an identification accuracy of up to 86%. In the early stage, its accuracy was 58~100%; in the middle stage, its accuracy was 93~100%; and in the late stage, its accuracy was 96~100%. In terms of crop identification accuracy over the whole growth period, GBDT and RF performed better than other classifiers in our three study areas. This study systematically investigated the potential of early crop recognition in Northeast China, and the results are helpful for relevant applications and decision making of crop recognition in different crop growth stages. Full article
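The per-stage accuracies above come from training conventional classifiers on optimized Sentinel-2 feature sets. A minimal scikit-learn sketch of that workflow with a random forest (the feature matrix, class labels, and 70:30 split below are placeholders, not the study's data):

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score, cohen_kappa_score

rng = np.random.default_rng(1)
X = rng.normal(size=(3000, 20))      # e.g., band reflectances + spectral indices per pixel
y = rng.integers(0, 4, size=3000)    # 0=corn, 1=soybean, 2=rice, 3=other (toy labels)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=1)
clf = RandomForestClassifier(n_estimators=300, random_state=1).fit(X_tr, y_tr)
pred = clf.predict(X_te)
print("OA:", accuracy_score(y_te, pred), "kappa:", cohen_kappa_score(y_te, pred))
```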
Figure 1
<p>Location of the three regions.</p>
Full article ">Figure 2
<p>The image coverage of Sentinel-2 used in the three study areas.</p>
Full article ">Figure 3
<p>Illustration of early crop identification.</p>
Full article ">Figure 4
<p>Technical process of automatic acquisition of training samples.</p>
Full article ">Figure 5
<p>Distribution of training sample points in the three study areas.</p>
Full article ">Figure 6
<p>RF classification results in the three study areas.</p>
Full article ">Figure 7
<p>Overall accuracy and kappa of RF classification results in three study areas.</p>
Full article ">Figure 8
<p>The recognition accuracy of corn in different periods in the three study areas.</p>
Full article ">Figure 9
<p>The recognition accuracy of soybeans in different periods in the three study areas.</p>
Full article ">Figure 10
<p>The recognition accuracy of rice in different periods in the three study areas.</p>
Full article ">
18 pages, 14526 KiB  
Project Report
Earth Observation to Investigate Occurrence, Characteristics and Changes of Glaciers, Glacial Lakes and Rock Glaciers in the Poiqu River Basin (Central Himalaya)
by Tobias Bolch, Tandong Yao, Atanu Bhattacharya, Yan Hu, Owen King, Lin Liu, Jan B. Pronk, Philipp Rastner and Guoqing Zhang
Remote Sens. 2022, 14(8), 1927; https://doi.org/10.3390/rs14081927 - 15 Apr 2022
Cited by 14 | Viewed by 3725
Abstract
Meltwater from the cryosphere contributes a significant fraction of the freshwater resources in the countries receiving water from the Third Pole. Within the ESA-MOST Dragon 4 project, we addressed in particular changes of glaciers and proglacial lakes and their interaction. In addition, we investigated rock glaciers in permafrost environments. Here, we focus on the detailed investigations which have been performed in the Poiqu River Basin, central Himalaya. We used in particular multi-temporal stereo satellite imagery, including high-resolution 1960/70s Corona and Hexagon spy images and contemporary Pleiades data. Sentinel-2 data was applied to assess the glacier flow. The results reveal that glacier mass loss continuously increased with a mass budget of −0.42 ± 0.11 m w.e.a−1 for the period 2004–2018. The mass loss has been primarily driven by an increase in summer temperature and is further accelerated by proglacial lakes, which have become abundant. The glacial lake area more than doubled between 1964 and 2017. The termini of glaciers that flow into lakes moved on average twice as fast as glaciers terminating on land, indicating that dynamical thinning plays an important role. Rock glaciers are abundant, covering approximately 21 km2, which was more than 10% of the glacier area (approximately 190 km2) in 2015. With ongoing glacier wastage, rock glaciers can become an increasingly important water resource. Full article
(This article belongs to the Special Issue ESA - NRSCC Cooperation Dragon 4 Final Results)
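The glacier mass budget quoted above is a geodetic estimate: elevation change between two DEMs is averaged over the glacier area and converted to water equivalent with a density assumption. A back-of-the-envelope sketch of that conversion (the 850 kg m−3 density, the array values, and the time span are illustrative assumptions, not numbers taken from the paper):

```python
import numpy as np

def geodetic_mass_balance(dem_t1, dem_t2, glacier_mask, years, density=850.0):
    """Mean specific mass balance in m w.e. per year from DEM differencing."""
    dh = np.where(glacier_mask, dem_t2 - dem_t1, np.nan)   # elevation change (m)
    return np.nanmean(dh) * (density / 1000.0) / years     # m w.e. per year

# Toy example: 14 years, average thinning of ~7 m over the glacier
dem_t1 = np.full((100, 100), 5500.0)
dem_t2 = dem_t1 - 7.0 + np.random.normal(0, 0.5, (100, 100))
mask = np.ones((100, 100), dtype=bool)
print(round(geodetic_mass_balance(dem_t1, dem_t2, mask, 14.0), 2), "m w.e. per year")
```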
Figure 1
<p>Study region, its glaciers and glacial lakes; background image: Sentinel-2 MSI image mosaic from Dec 2021, false colour composite (SWIR-NIR-RED).</p>
Full article ">Figure 2
<p>Jialong Co (<b>left</b>, photo: T. Bolch) and Garlong Co (<b>right</b>: photo: J.B. Pronk), moraine-dammed proglacial lakes located in the Poiqu River Basin.</p>
Full article ">Figure 3
<p>Area change of glacial lakes between 1964 and 2017 (data source: [<a href="#B42-remotesensing-14-01927" class="html-bibr">42</a>]).</p>
Full article ">Figure 4
<p>Glacier elevation changes in the Poiqu basin and surroundings (example): DEM difference between 1974 (KH9) and-2000 (SRTM Data) (data source: [<a href="#B6-remotesensing-14-01927" class="html-bibr">6</a>]).</p>
Full article ">Figure 5
<p>Surface velocity of glaciers in the Poiqu basin and surrounding area, inset: centreline velocities of the ablation areas of lake- and land-terminating glaciers in the Poiqu River basin (data source: [<a href="#B52-remotesensing-14-01927" class="html-bibr">52</a>]).</p>
Full article ">Figure 6
<p>Rock glacier inventory of the Poiqu basin, background image: True colour composite of a Sentinel-2 image, acquired 12 September 2016 (data source: [<a href="#B51-remotesensing-14-01927" class="html-bibr">51</a>]).</p>
Full article ">Figure 7
<p>Comparison of average annual mass change rate of glaciers for the whole Poiqu Region with ERA5 Land climate variables and the mass change of the Langtang Basin with measurements at Nielamu station (adjusted based on [<a href="#B47-remotesensing-14-01927" class="html-bibr">47</a>]).</p>
Full article ">
29 pages, 9691 KiB  
Article
Range-Ambiguous Clutter Suppression via FDA MIMO Planar Array Radar with Compressed Sensing
by Yuzhuo Wang, Shengqi Zhu, Lan Lan, Ximin Li, Zhixin Liu and Zhixia Wu
Remote Sens. 2022, 14(8), 1926; https://doi.org/10.3390/rs14081926 - 15 Apr 2022
Cited by 4 | Viewed by 2040
Abstract
Range-ambiguous clutter is an inevitable issue for airborne forward-looking array radars, especially at high pulse repetition frequency (PRF). In this paper, a method to suppress the range-ambiguous clutter is proposed for an FDA-MIMO radar with a forward-looking planar array. Compressed sensing FDA technology is used to suppress the range-ambiguous clutter and the non-uniform short-range clutter of the forward-looking radar. Specifically, the range-ambiguous clutter in different regions is first separated using the elevation-dimension characteristics of the planar array and the range coupling of the FDA radar. Meanwhile, to address the main lobe of the FDA radar moving between coherent pulses, a main lobe correction (MLC) algorithm is proposed so that the FDA radar can coherently accumulate signals even without full-angle illumination. Finally, compressed sensing technology and elevation-dimension filtering are utilized to suppress the range-ambiguous clutter at the receiver, alleviating the range dependence of clutter in the observation region. An approximately ideal clutter covariance matrix can be obtained from a small number of clutter snapshots through compressed-sensing sparse recovery. The method not only reduces the number of training samples, but also overcomes the problem of clutter non-uniformity in the forward-looking array. Therefore, the clutter suppression problems faced by the high-repetition-frequency airborne radar forward-looking array structure are solved. At the analysis stage, a comparison among the conventional MIMO and FDA methods is carried out by analyzing the improvement factor (IF) curves. Numerical results verify the effectiveness of the proposed method in range-ambiguous clutter suppression. Full article
(This article belongs to the Special Issue Radar Techniques for Structures Characterization and Monitoring)
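The compressed-sensing step amounts to sparse recovery of clutter amplitudes over a dictionary of space-time steering vectors, from which an approximate clutter covariance matrix is rebuilt from few snapshots. A greatly simplified sketch with a hand-rolled orthogonal matching pursuit for complex data (the dictionary, dimensions, and sparsity level are placeholders, not the paper's FDA-MIMO signal model):

```python
import numpy as np

def omp(A, y, k):
    """Orthogonal matching pursuit for complex-valued A (m x n) and y (m,)."""
    residual, support = y.copy(), []
    x = np.zeros(A.shape[1], dtype=complex)
    for _ in range(k):
        idx = int(np.argmax(np.abs(A.conj().T @ residual)))
        support.append(idx)
        x_s, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ x_s
    x[support] = x_s
    return x

rng = np.random.default_rng(0)
m, n, k = 64, 400, 5                       # snapshot length, dictionary size, sparsity
A = np.exp(1j * rng.uniform(0, 2 * np.pi, (m, n))) / np.sqrt(m)   # toy steering dictionary
x_true = np.zeros(n, dtype=complex)
x_true[rng.choice(n, k, replace=False)] = rng.normal(size=k) + 1j * rng.normal(size=k)
y = A @ x_true + 0.01 * (rng.normal(size=m) + 1j * rng.normal(size=m))

x_hat = omp(A, y, k)
R_clutter = (A * np.abs(x_hat) ** 2) @ A.conj().T   # reconstructed clutter covariance
print("recovered support:", np.nonzero(np.abs(x_hat) > 0.1)[0])
```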
Figure 1
<p>Geometry of FDA-MIMO planar array radar. (<b>a</b>) Schematic diagram of FDA-MIMO planar array radar clutter geometric distribution; (<b>b</b>) schematic diagram of the planar array geometric distribution.</p>
Full article ">Figure 2
<p>Schematic diagram of main lobe movement in equivalent transmission pattern.</p>
Full article ">Figure 3
<p>Schematic diagram of main lobe moving correction of emission pattern.</p>
Full article ">Figure 4
<p>Receiving signal processing block diagram.</p>
Full article ">Figure 5
<p>Schematic diagram of range-ambiguous clutter pulse echoes.</p>
Full article ">Figure 6
<p>D domain clutter power spectrum diagram. (<b>a</b>) MIMO; (<b>b</b>) FDA; (<b>c</b>) MLC FDA.</p>
Full article ">Figure 7
<p>Simulation diagram. (<b>a</b>) Multiple range gate clutter ridge distribution; (<b>b</b>) desired range gate clutter ridge distribution.</p>
Full article ">Figure 8
<p>Compressed sensing MLC FDA radar signal processing flow chart.</p>
Full article ">Figure 9
<p>Simulation diagram of FDA main lobe moving between pulses. (<b>a</b>) Conventional non-full angle FDA; (<b>b</b>) after MLC non-full angle FDA; (<b>c</b>) coherent pulse number <span class="html-italic">K</span> = 5 MLC FDA.</p>
Figure 9 Cont.">
Full article ">Figure 10
<p>dB ambiguous clutter power spectrum. (<b>a</b>) MIMO; (<b>b</b>) FDA; (<b>c</b>) MLC FDA.</p>
Figure 10 Cont.">
Full article ">Figure 11
<p>dB range ambiguous clutter power spectrum. (<b>a</b>) MIMO; (<b>b</b>) FDA; (<b>c</b>) MLC FDA.</p>
Full article ">Figure 12
<p>Ambiguous clutter power spectrum. (<b>a</b>) MIMO; (<b>b</b>) FDA; (<b>c</b>) MLC FDA.</p>
Figure 12 Cont.">
Full article ">Figure 13
<p>IF curves.</p>
Full article ">Figure 14
<p>Ambiguous clutter power spectrum. (<b>a</b>) Proposed method; (<b>b</b>) ideal power spectrum; (<b>c</b>) MIMO; (<b>d</b>) FDA.</p>
Full article ">Figure 15
<p>IF curves.</p>
Full article ">
15 pages, 4404 KiB  
Article
Laboratory Heat Flux Estimates of Seawater Foam for Low Wind Speeds
by C. Chris Chickadel, Ruth Branch, William E. Asher and Andrew T. Jessup
Remote Sens. 2022, 14(8), 1925; https://doi.org/10.3390/rs14081925 - 15 Apr 2022
Cited by 1 | Viewed by 2127
Abstract
Laboratory experiments were conducted to measure the heat flux from seafoam continuously generated in natural seawater. Using a control volume technique, heat flux was calculated from foam and foam-free surfaces as a function of ambient humidity (ranged from 40% to 78%), air–water temperature difference (ranged from −9 °C to 0 °C), and wind speed (variable up to 3 m s−1). Water-surface skin temperature was imaged with a calibrated thermal infrared camera, and near-surface temperature profiles in the air, water, and foam were recorded. Net heat flux from foam surfaces increased with increasing wind speed and was shown to be up to four times greater than a foam-free surface. The fraction of the total heat flux due to the latent heat flux was observed for foam to be 0.75, with this value being relatively constant with wind speed. In contrast, for a foam-free surface the fraction of the total heat flux due to the latent heat flux decreased at higher wind speeds. Temperature profiles through foam are linear and have larger gradients, which increased with wind speed, while foam free surfaces show the expected logarithmic profile and show no variation with temperature. The radiometric surface temperatures show that foam is cooler and more variable than a foam-free surface, and bubble-resolving thermal images show that radiometrically transparent bubble caps and burst bubbles reveal warm foam below the cool surface layer, contributing to the enhanced variability. Full article
(This article belongs to the Special Issue Passive Remote Sensing of Oceanic Whitecaps)
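The control-volume bookkeeping reduces to an enthalpy budget on the air flowing over the water: the net flux is the air mass flow times its gain in sensible and latent enthalpy, divided by the water surface area. A rough numerical sketch of that budget (all inlet/outlet values, duct geometry, and constants are illustrative assumptions, not measurements from the experiment):

```python
import math

RHO_AIR = 1.2        # kg m^-3
CP_AIR = 1005.0      # J kg^-1 K^-1
L_V = 2.45e6         # J kg^-1, latent heat of vaporization

def sat_specific_humidity(t_c, p_hpa=1013.25):
    """Saturation specific humidity (kg/kg) via the Magnus formula."""
    e_s = 6.112 * math.exp(17.62 * t_c / (243.12 + t_c))   # hPa
    return 0.622 * e_s / (p_hpa - 0.378 * e_s)

def control_volume_flux(u, duct_area, surf_area, t_in, rh_in, t_out, rh_out):
    """Sensible, latent, and net heat flux from the water surface (W m^-2)."""
    mdot = RHO_AIR * u * duct_area                       # air mass flow, kg s^-1
    q_in = rh_in * sat_specific_humidity(t_in)
    q_out = rh_out * sat_specific_humidity(t_out)
    q_sensible = mdot * CP_AIR * (t_out - t_in) / surf_area
    q_latent = mdot * L_V * (q_out - q_in) / surf_area
    return q_sensible, q_latent, q_sensible + q_latent

print(control_volume_flux(u=1.0, duct_area=0.09, surf_area=1.0,
                          t_in=18.0, rh_in=0.50, t_out=19.0, rh_out=0.55))
```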
Figure 1
<p>(<b>a</b>) IR image of the beach and surfzone at the USACE Field Research Facility in Duck NC, looking NNE. The camera tilt was ~15° below horizontal (75° incidence). Bright regions in the image indicate breaking wave crests that appear warmer than quiescent water and the much cooler region of the beach (lower left). (<b>b</b>) Space-time plot of IR temperature anomaly (mean removed) along the blue transect in (<b>a</b>). Wave crests approach the beach from right to left and the bright fronts of breaking waves are visible near the beach and over a submerged bar (at X = 180 m). (<b>c</b>) Near vertical traces of cooler (darker) residual foam appear almost instantaneously after individual breaking events, as seen in this enlargement. A wind of 9 m/s was blowing from the north (left to right in panel (<b>a</b>)), air temperature was 24.3 °C, and water temperature was 23.8 °C.</p>
Full article ">Figure 2
<p>Diagram of the laboratory foam tank–wind tunnel, where airflow is from left to right. Instruments include long-wave (LWIR), mid-wave (MWIR), visual foam profile and surface imagers (EO), and humidity and temperature probes. The wind tunnel fan, plenum, and air conditioning are not shown.</p>
Full article ">Figure 3
<p>Typical wind speed profiles during the experiment for foam covered (solid) and foam-free (dashed) water.</p>
Full article ">Figure 4
<p>Enthalpy flux estimates from foam (squares) and foam free (circles) plotted against wind speed and shaded according to (<b>a</b>) the air–water temperature difference and (<b>b</b>) the ambient relative humidity.</p>
Full article ">Figure 5
<p>Latent (<span class="html-italic">Q<sub>L</sub></span>), sensible (<span class="html-italic">Q<sub>S</sub></span>) and radiative (<span class="html-italic">Q<sub>R</sub></span>) heat flux components plotted against wind speed for (<b>a</b>) foam-free (<b>b</b>) and foam covered water surfaces for the same range of ambient air temperature and relative humidity conditions.</p>
Full article ">Figure 6
<p>Ratio of (<b>a</b>) the latent component (<span class="html-italic">Q<sub>L</sub></span>) and (<b>b</b>) the sensible component (<span class="html-italic">Q<sub>S</sub></span>) to the total heat flux (<span class="html-italic">Q<sub>T</sub></span>) for foam and foam-free surfaces plotted against relative humidity.</p>
Full article ">Figure 7
<p>Temperature profiles on the upper 2 cm from (<b>a</b>) foam-free and (<b>b</b>) foam tests. Line colors indicate the net heat flux of the measurement. (<b>c</b>) Bulk temperature gradients for foam-free (dots) and foam (squares) measurement from the surface to 1.8 cm depth are plotted against observed net heat flux.</p>
Full article ">Figure 8
<p>(<b>a</b>) Average bulk-skin temperature difference plotted versus observed enthalpy flux for foam and foam free-observations. (<b>b</b>) Skin temperature variability, calculated as the standard deviation, for foam and foam-free observations also plotted versus measured net heat flux.</p>
Full article ">Figure 9
<p>The calculated enthalpy exchange coefficient, <span class="html-italic">C<sub>k</sub></span>, for foam (squares) and foam-free (circles) surfaces is plotted versus wind speed and shaded to indicate (<b>a</b>) the air–bulk water difference and (<b>b</b>) ambient relative humidity observed. A curve generated from <span class="html-italic">C<sub>k</sub></span> determined by [<a href="#B36-remotesensing-14-01925" class="html-bibr">36</a>] from observations in a laboratory wind-wave flume for non-foam conditions is plotted for comparison.</p>
Full article ">Figure 10
<p>(<b>a</b>) Thermal IR image of cool foam (dark) and warmer foam-free water (light) taken after the foam generator was shut off. (<b>b</b>) An enlargement of the foam surface shows individual warm bubbles, burst bubble pockets, and cooler surrounding fluid.</p>
Full article ">
9 pages, 2356 KiB  
Communication
Low-SNR Doppler Data Processing for the InSight Radio Science Experiment
by Dustin Buccino, James S. Border, William M. Folkner, Daniel Kahan and Sebastien Le Maistre
Remote Sens. 2022, 14(8), 1924; https://doi.org/10.3390/rs14081924 - 15 Apr 2022
Cited by 6 | Viewed by 2012
Abstract
Radio Doppler measurements between the InSight lander and NASA’s Deep Space Network have been acquired for measuring the rotation of Mars. Unlike previous landers used for this purpose that utilized steerable high-gain antennas, InSight uses two fixed medium-gain antennas, which results in a lower radio signal-to-noise ratio (SNR). Lower SNR results in additional thermal noise for Doppler measurements using standard processes. Through a combination of phase averaging and traditional data compression, the increased thermal noise due to low SNR can be removed for most of the signal of interest, resulting in more accurate Doppler measurements. During the first 900 days of InSight operations, Doppler measurements were improved by ~25% on average using this method. Full article
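The described noise-reduction scheme can be emulated in a few lines: average the 0.1 s phase residuals over 20 s windows to suppress white phase noise, then compress the resulting frequency residuals to a 60 s count time. A schematic numpy version (synthetic residuals; not the actual DSN/JPL processing software):

```python
import numpy as np

def compress_phase_then_frequency(phase, dt=0.1, t_phase=20.0, t_freq=60.0):
    """Average phase (cycles) over t_phase windows, difference to frequency (Hz),
    then average frequency points up to the t_freq count time."""
    n_avg = int(t_phase / dt)
    p = phase[: (phase.size // n_avg) * n_avg].reshape(-1, n_avg).mean(axis=1)
    f20 = np.diff(p) / t_phase                        # 20 s frequency residuals
    n_blk = int(t_freq / t_phase)
    usable = (f20.size // n_blk) * n_blk
    return f20[:usable].reshape(-1, n_blk).mean(axis=1)

rng = np.random.default_rng(2)
t = np.arange(0, 3600, 0.1)
phase = 0.02 * t + rng.normal(0, 0.05, t.size)    # cycles: slow drift + white phase noise
print(compress_phase_then_frequency(phase)[:5])   # ~0.02 Hz residual frequency
```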
Figure 1
<p>Simplified block of the closed-loop Doppler measurement process.</p>
Full article ">Figure 2
<p>Doppler residuals at 0.1 s count-time for Sol 345 (<b>left</b>), and histogram distribution of the residuals (<b>right</b>). The standard deviation of the residuals is 0.463 Hz.</p>
Full article ">Figure 3
<p>(<b>a</b>) Power spectral density of 0.1 s residual frequency for the Sol 345 pass, showing the white phase noise dominance. (<b>b</b>) Allan deviation (fractional frequency stability) of the Sol 345 pass. Fits of the three primary noise component models (white phase noise, white frequency noise, and Kolmogorov noise) are shown.</p>
Full article ">Figure 4
<p>Residual phase at the original 0.1 s count-time and the resulting averaged phase residuals at a 20 s integration time for the Sol 345 pass.</p>
Full article ">Figure 5
<p>Doppler residuals at 60 s count-time for Sol 345 with two compression methods. (<b>a</b>) Phase compression from 0.1 s to 20 s by phase averaging, followed by frequency compression from 20 s to 60 s; (<b>b</b>) frequency compression from 0.1 s to 60 s only.</p>
Full article ">Figure 6
<p>Standard deviation of Doppler residuals at 60 s count time for all passes in the first 900 days of operations. Only the standard deviation of the described phase/frequency compression method and the traditional frequency compression are shown.</p>
Full article ">