Remote Sens., Volume 15, Issue 3 (February-1 2023) – 325 articles

Cover Story: The Clouds and the Earth’s Radiant Energy System (CERES) has monitored clouds and radiation since 2000 with broadband radiometers collocated with cloud properties retrieved from radiances measured by the MODerate-Resolution Imaging Spectroradiometer (MODIS) on Terra and Aqua. The CERES cloud record is continued using radiances from the Visible Infrared Imaging Radiometer Suite (VIIRS) on the Suomi National Polar-orbiting Partnership satellite. The MODIS retrieval techniques are adapted to VIIRS to minimize spectral and channel differences, with several additional changes. With a few exceptions, the resulting 2012–2020 VIIRS cloud amount, phase, height, optical depth, and hydrometeor size retrievals agree reasonably well with their Aqua MODIS counterparts. Various sources of disagreement are identified and will be mitigated in future editions.
  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive the table of contents of newly released issues.
  • PDF is the official format for papers, which are published in both HTML and PDF forms. To view a paper in PDF format, click on the "PDF Full-text" link and use the free Adobe Reader to open it.
25 pages, 5852 KiB  
Article
Improved Analytical Formula for the SAR Doppler Centroid Estimation Standard Deviation for a Dynamic Sea Surface
by Siqi Qiao, Baochang Liu and Yijun He
Remote Sens. 2023, 15(3), 867; https://doi.org/10.3390/rs15030867 - 3 Feb 2023
Viewed by 2187
Abstract
The existing formulas for the synthetic aperture radar (SAR) Doppler centroid estimation standard deviation (STD) suffer from various limitations, especially for a dynamic sea surface. In this study, we derive an improved version of these formulas in three steps. First, by considering the ocean wavenumber spectrum information, the new formula adopts a new strategy for determining the number of independent samples of the sea wave velocity field. This is contrary to the method used in the existing formulas, where the number of SAR geometric resolution cells is taken as the number of samples under the assumption that adjacent SAR resolution cells are statistically uncorrelated. Second, the pulse repetition frequency and Doppler bandwidth are decoupled in the new formula, unlike in the existing formulas, where they are rigidly tied to each other. Third, the effects of thermal noise and Doppler aliasing are jointly quantified in a mathematically exact manner instead of being treated separately, as in the existing formulas. Comprehensive SAR raw data simulations for the ocean surface show that the new formula predicts the Doppler centroid estimation STD better than the existing formulas.
(This article belongs to the Section Ocean Remote Sensing)
Show Figures
Figure 1: Doppler power spectrum of the SAR and the spectrum sharpness factor m.
Figure 2: Doppler power spectra with and without Doppler aliasing. The solid, dashed, and dash–dotted lines plot the unaliased Doppler power spectrum, the aliased part of the Doppler power spectrum, and the total observed Doppler power spectrum, respectively. The receiver thermal noise power spectrum is drawn as a horizontal line.
Figure 3: Curves of the RMS radial velocity versus (a) the wind speed and (b) the relative angle between the sea wave propagation and radar look directions.
Figure 4: Illustration of the wavenumber power spectrum of the sea wave velocity field and its equivalent spectral width: (a) top view; (b) side view.
Figure 5: Curves of Δk versus U computed with Equations (28) and (29).
Figure 6: Flowchart of the Monte Carlo simulations.
Figure 7: PM spectrum.
Figure 8: 2D wave height field.
Figure 9: 2D NRCS field.
Figure 10: 2D radial sea wave velocity field.
Figure 11: 2D frequency spectrum of the simulated SAR raw data.
Figure 12: Processed SAR image.
Figure 13: Fluctuation of the Doppler power spectrum.
Figure 14: Scatter diagram of the Doppler centroid estimates obtained from 390 Monte Carlo runs. The bias and the STD of these 390 estimates are annotated in the plot.
Figure 15: (a) Curves of the Doppler centroid estimation STD against wind speed; (b) relative prediction error (in percent), defined as the absolute value of the difference between the predicted and measured values of the Doppler centroid estimation STD, divided by the latter.
Figure 16: (a) Curves of the Doppler centroid estimation STD against the SNR; (b) relative prediction error.
Figure 17: (a) Curves of the Doppler centroid estimation STD against the azimuth oversampling ratio; (b) relative prediction error.
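The Monte Carlo experiment summarized in Figures 6 and 14 can be pictured with a much-simplified sketch: impose a known Doppler centroid on a band-limited complex signal, estimate it with the classic pulse-pair (lag-one correlation) estimator, and measure the bias and STD over repeated runs. The moving-average signal model, PRF, SNR, and centroid value below are illustrative assumptions, not the paper's simulation chain:

```python
import numpy as np

def doppler_centroid(z, prf):
    """Pulse-pair (lag-one correlation) Doppler centroid estimator."""
    acf1 = np.mean(np.conj(z[:-1]) * z[1:])          # lag-one autocorrelation
    return prf * np.angle(acf1) / (2.0 * np.pi)

rng = np.random.default_rng(0)
prf, n, f_dc_true, snr_db, n_runs = 1500.0, 4096, 200.0, 10.0, 390

estimates = np.empty(n_runs)
for i in range(n_runs):
    # band-limited complex clutter: a moving average gives a nonzero lag-one ACF
    w = rng.standard_normal(n + 8) + 1j * rng.standard_normal(n + 8)
    clutter = np.convolve(w, np.ones(8) / 8.0, mode="valid")[:n]
    clutter /= np.sqrt(np.mean(np.abs(clutter) ** 2))        # unit signal power
    t = np.arange(n) / prf
    z = clutter * np.exp(2j * np.pi * f_dc_true * t)         # impose the centroid
    noise = (rng.standard_normal(n) + 1j * rng.standard_normal(n)) / np.sqrt(2.0)
    z += noise * 10.0 ** (-snr_db / 20.0)                    # thermal noise at given SNR
    estimates[i] = doppler_centroid(z, prf)

print(f"bias = {estimates.mean() - f_dc_true:+.2f} Hz, "
      f"STD = {estimates.std(ddof=1):.2f} Hz")
```

A spectrally white signal would defeat the estimator, since its lag-one autocorrelation vanishes, which is why the sketch band-limits the clutter before imposing the Doppler shift.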
17 pages, 6452 KiB  
Article
Spatiotemporal Distribution and Main Influencing Factors of Grasshopper Potential Habitats in Two Steppe Types of Inner Mongolia, China
by Jing Guo, Longhui Lu, Yingying Dong, Wenjiang Huang, Bing Zhang, Bobo Du, Chao Ding, Huichun Ye, Kun Wang, Yanru Huang, Zhuoqing Hao, Mingxian Zhao and Ning Wang
Remote Sens. 2023, 15(3), 866; https://doi.org/10.3390/rs15030866 - 3 Feb 2023
Cited by 12 | Viewed by 2864
Abstract
Grasshoppers can greatly interfere with agriculture and husbandry, and they breed and grow rapidly in suitable habitats. It is therefore necessary to extract the distribution of the grasshopper potential habitat (GPH), analyze its spatial-temporal characteristics, and detect the different effects of key environmental factors in meadow and typical steppes. To achieve this goal, this study took the two steppe types of Xilingol (Inner Mongolia Autonomous Region, China) as the research object and coupled the MaxEnt model with multisource remote sensing data. First, the environmental factors, including meteorological, vegetation, topographic, and soil factors, that affect the developmental stages of grasshoppers were obtained. Secondly, the GPHs associated with the meadow and typical steppes from 2018 to 2022 were extracted based on the MaxEnt model. Then, the spatial-temporal characteristics of the GPHs were analyzed. Finally, the effects of the habitat factors in the two steppe types were explored. The results demonstrated that the most suitable and moderately suitable areas were distributed mainly in the southern part of the meadow steppe and the eastern and southern parts of the typical steppe. Additionally, most areas in the towns of Gaorihan, Honggeergaole, and Jirengaole, as well as the border of Wulanhalage and Haoretugaole, became more suitable for grasshoppers from 2018 to 2022. This paper also found that the soil temperature in the egg stage, the vegetation type, the soil type, and the precipitation amount in the nymph stage were significant factors in both the meadow and typical steppes. The slope and the precipitation in the egg stage played more important roles in the typical steppe, whereas the aspect had a greater contribution in the meadow steppe. These findings can provide methodical guidance for grasshopper control and management and further ensure the security of agriculture and husbandry.
Show Figures
Graphical abstract
Figure 1: (a) Location of the study area; (b) location of the meadow steppe area and grasshopper occurrence points from 2018 to 2022; (c) location of the typical steppe area and grasshopper occurrence points from 2018 to 2022.
Figure 2: Flowchart of the spatiotemporal distribution and main influencing factors of grasshopper potential habitats in two steppe types in Inner Mongolia, China.
Figure 3: Accuracy of the GPH extraction results in the meadow steppe and the typical steppe from 2018 to 2022.
Figure 4: Spatial distribution of the GPHs in the meadow steppe from 2018 to 2022.
Figure 5: Spatial distribution of the GPHs in the typical steppe from 2018 to 2022.
Figure 6: Trends of the suitability index in (a) the meadow steppe and (b) the typical steppe.
Figure 7: Environmental factor correlation results from 2018 to 2022: (a) 2018; (b) 2019; (c) 2020; (d) 2021; (e) 2022.
Figure 8: Environmental factor contributions from 2018 to 2022 in (a) the meadow steppe and (b) the typical steppe.
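For readers who want to prototype the habitat-suitability step, a logistic regression fitted on presence versus background samples is a common stand-in for MaxEnt (the two are closely related exponential-family models). The sketch below uses scikit-learn and entirely synthetic features as placeholders for the paper's meteorological, vegetation, topographic, and soil factors; it is not the authors' MaxEnt configuration:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(1)

# Hypothetical per-pixel environmental features (e.g., soil temperature in the
# egg stage, precipitation in the nymph stage, slope) as stand-ins.
n_bg, n_pres = 2000, 200
background = rng.normal(size=(n_bg, 3))                       # random background pixels
presence = rng.normal(loc=[1.0, 0.5, -0.3], size=(n_pres, 3))  # shifted "niche"

X = np.vstack([background, presence])
y = np.hstack([np.zeros(n_bg), np.ones(n_pres)])

# Logistic regression on presence vs. background approximates MaxEnt's
# exponential habitat-suitability model.
model = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
model.fit(X, y)

suitability = model.predict_proba(background)[:, 1]           # 0-1 suitability score
print("mean background suitability:", suitability.mean().round(3))
```

Thresholding the suitability score then yields the "most suitable" and "moderately suitable" classes mapped in Figures 4 and 5.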
24 pages, 4693 KiB  
Article
Design and Application of a UAV Autonomous Inspection System for High-Voltage Power Transmission Lines
by Ziran Li, Yanwen Zhang, Hao Wu, Satoshi Suzuki, Akio Namiki and Wei Wang
Remote Sens. 2023, 15(3), 865; https://doi.org/10.3390/rs15030865 - 3 Feb 2023
Cited by 48 | Viewed by 7665
Abstract
As the scale of the power grid continues to expand, human-based inspection struggles to meet the needs of efficient grid operation and maintenance. Existing UAV inspection systems on the market generally suffer from short endurance, demanding flight operation requirements, a low degree of autonomous flight, low accuracy of intelligent identification, slow generation of inspection reports, and other problems. In view of these shortcomings, this paper designs an intelligent inspection system based on self-developed UAVs, including autonomous planning of inspection paths, a sliding mode control algorithm, a mobile inspection scheme, and intelligent fault diagnosis. In the first stage, basic data such as latitude, longitude, altitude, and the length of the cross-arms are obtained from the cloud database of the power grid, the lateral and vertical displacements during inspection drone operation are calculated, and the inspection flight path is generated autonomously according to the inspection type. In the second stage, to make the UAV's flight more stable, a reference-model-based sliding mode control algorithm is introduced to improve control performance. Meanwhile, during flight, the intelligent UAV uploads the captured photos to the cloud in real time. In the third stage, a mobile inspection program is designed to improve inspection efficiency, and the transfer of equipment is realized during the UAV inspection process. Finally, to improve detection accuracy, a high-precision object detector is designed based on the YOLOX network model; the improved model increases the mAP0.5:0.95 metric by 2.22 percentage points over the original YOLOX_m for bird's-nest detection. After a large number of flight verifications, the inspection system designed in this paper greatly improves the efficiency of power inspection, shortens the inspection cycle, reduces the investment of inspection manpower and material resources, and successfully integrates the object detection algorithm into the field of high-voltage power transmission line inspection.
Show Figures
Graphical abstract
Figure 1: Comparison of the traditional inspection solution and our solution.
Figure 2: The structure of the system.
Figure 3: The process of fine inspection.
Figure 4: (a) The process of arc-chasing inspection; (b) the process of channel inspection.
Figure 5: The structure of the control system.
Figure 6: Intelligent machine nest. Green is the drone inspection route and gray is the vehicle route.
Figure 7: The structure of the improved YOLOX.
Figure 8: The structure of the coordinate attention.
Figure 9: (a) Graphical explanation of the SIoU loss function; (b) the definition of IoU.
Figure 10: Actual flight environment: (a) preparation stage of the autonomous UAV inspection operation; (b) intelligent machine nest test; (c) single UAV performing an autonomous inspection operation; (d) multiple UAVs performing autonomous inspection operations.
Figure 11: (a) The trajectory of fine inspection; (b) the trajectory of arc-chasing inspection; (c) the trajectory of channel inspection. The red points are mission points and the blue lines are the flight paths of the drones.
Figure 12: (a) Speed-tracking results for fine inspection; (b) speed-tracking results for arc-chasing inspection; (c) speed-tracking results for channel inspection.
Figure 13: Fine inspection task data collection.
Figure 14: Arc-chasing inspection and channel inspection task data collection.
Figure 15: Comparison of the detection results of the different models. The red rectangular boxes are the detection results of the models; yellow borders enclose incorrect detection results and blue borders enclose correct detection results.
Figure 16: (a) The grading ring is displaced; (b) the locking pin is defective; (c) the insulator string is tilted.
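The reference-model-based sliding mode controller itself is not reproduced in this listing, but a textbook boundary-layer sliding mode controller on a simplified one-axis UAV model conveys the idea: drive the tracking error onto a sliding surface s = e_dot + lambda*e and hold it there with a switching term whose gain dominates the disturbance. The gains, the disturbance, and the double-integrator plant below are illustrative assumptions, not the paper's design:

```python
import numpy as np

# Sliding-mode tracking for a double-integrator axis model x_ddot = u + d,
# a crude stand-in for one translational axis of the inspection UAV.
dt, T = 0.002, 10.0
lam, K, phi = 2.0, 3.0, 0.05          # surface slope, switching gain, boundary layer
sat = lambda v: np.clip(v, -1.0, 1.0)  # saturation replaces sign() to limit chatter

t = np.arange(0.0, T, dt)
x_ref = np.sin(t)                      # reference position
v_ref, a_ref = np.cos(t), -np.sin(t)   # its first and second derivatives

x = v = 0.0
err = np.empty_like(t)
for k, tk in enumerate(t):
    d = 0.8 * np.sin(3.0 * tk)         # bounded wind-like disturbance, |d| < K
    e, e_dot = x_ref[k] - x, v_ref[k] - v
    s = e_dot + lam * e                # sliding surface
    u = a_ref[k] + lam * e_dot + K * sat(s / phi)   # equivalent + switching control
    v += (u + d) * dt                  # Euler integration of the plant
    x += v * dt
    err[k] = e

print(f"RMS tracking error over the last second: "
      f"{np.sqrt(np.mean(err[-int(1/dt):]**2)):.4f}")
```

With K larger than the disturbance bound, s is driven to the boundary layer and the tracking error decays at rate lambda, which is the robustness property sliding mode control contributes to flight stability.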
18 pages, 8116 KiB  
Article
X-Band Radar Attenuation Correction Method Based on LightGBM Algorithm
by Qiang Yang, Yan Feng, Li Guan, Wenyu Wu, Sichen Wang and Qiangyu Li
Remote Sens. 2023, 15(3), 864; https://doi.org/10.3390/rs15030864 - 3 Feb 2023
Cited by 2 | Viewed by 2797
Abstract
X-band weather radar can provide high spatial and temporal resolution data, which is essential for precipitation observation and the prediction of mesoscale and microscale weather. However, X-band weather radar is susceptible to precipitation attenuation. This paper presents an X-band attenuation correction method based on the light gradient boosting machine (LightGBM) algorithm (the XACL method) and compares it with the ZH correction method and the ZH-KDP comprehensive correction method. The XACL method was validated using observations from two radars in July 2021: the X-band dual-polarization weather radar at the Shouxian National Climatology Observatory of China (SNCOC) and the S-band dual-polarization weather radar at Hefei. For the rainfall cases in July 2021, the results of the attenuation correction were used for precipitation estimation and verified against rainfall data from 1204 automatic ground-based meteorological network stations in Anhui Province, China. We found that the XACL method produced a significant correction effect and reduced the anomalous corrections seen in the comparison methods. The results show that the average error in precipitation estimation by the XACL method was reduced by 39.88% over the 1204 meteorological stations, outperforming the other two correction methods. The XACL method thus showed good local adaptability and provides a new X-band attenuation correction scheme.
Show Figures
Figure 1: Geographic distribution of the X-band and S-band dual-polarization radars. The circle enclosed by a dotted line represents the detection range of the radars.
Figure 2: Sample data establishment process.
Figure 3: Construction of the X-band radar XACL method and its correction diagram.
Figure 4: Importance distribution of the sample variables.
Figure 5: Radar reflectivity at 2° elevation PPI at 10:46 (UTC+8) on 8 July 2021: (a) X-band radar reflectivity before attenuation correction; (b) S-band radar reflectivity; and reflectivity corrected by (c) the XACL method, (d) the ZH method, and (e) the ZH-KDP comprehensive method.
Figure 6: Radar reflectivity at 2° elevation PPI at 15:23 (UTC+8) on 27 July 2021: (a) X-band radar reflectivity before attenuation correction; (b) S-band radar reflectivity; and reflectivity corrected by (c) the XACL method, (d) the ZH method, and (e) the ZH-KDP comprehensive method.
Figure 7: Line charts of radial mean reflectivity at 2° elevation PPI: (a) 125–135° azimuth at 10:46 (UTC+8) on 8 July 2021; (b) 120–130° azimuth at 15:23 (UTC+8) on 27 July 2021.
Figure 8: (a) CR of the X-band radar and (b) CR of the S-band radar matched to the X-band radar at 10:46 (UTC+8) on 8 July 2021.
Figure 9: (a) CR after correction by the XACL method, (b) CR after correction by the ZH method, and (c) CR after correction by the ZH-KDP comprehensive method at 10:46 (UTC+8) on 8 July 2021.
Figure 10: (a) CR of the X-band radar and (b) CR of the S-band radar matched to the X-band radar at 15:23 (UTC+8) on 27 July 2021.
Figure 11: (a) CR after correction by the XACL method, (b) CR after correction by the ZH method, and (c) CR after correction by the ZH-KDP comprehensive method at 15:23 (UTC+8) on 27 July 2021.
Figure 12: Comparison of X-band radar CR and S-band radar CR (a) before correction and after correction by (b) the XACL method, (c) the ZH method, and (d) the ZH-KDP comprehensive method.
Figure 13: Distribution of the 1204 automatic weather stations and the detection range of the X-band radar. Blue dots indicate automatic weather stations, the black circle indicates the detection range of the X-band radar, and the red inverted triangle marks the location of the X-band radar.
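In spirit, the XACL method is a supervised regression: attenuated X-band observables are the features and the matched, nearly unattenuated S-band reflectivity is the target. A minimal LightGBM sketch of that setup follows; the feature set, the synthetic attenuation relation, and all hyperparameters are assumptions for illustration, not the paper's configuration:

```python
import numpy as np
import lightgbm as lgb
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(2)

# Hypothetical training set per range gate: attenuated X-band reflectivity
# (dBZ), differential reflectivity ZDR (dB), specific differential phase KDP
# (deg/km), and range (km); the matched S-band reflectivity is the target.
n = 20000
zh_x = rng.uniform(10, 55, n)
zdr = rng.uniform(0, 4, n)
kdp = rng.uniform(0, 6, n)
rng_km = rng.uniform(5, 75, n)
# synthetic target: stronger KDP over a longer path means more loss to restore
zh_s = zh_x + 0.25 * kdp * rng_km / 10 + rng.normal(0, 1.0, n)

X = np.column_stack([zh_x, zdr, kdp, rng_km])
X_tr, X_te, y_tr, y_te = train_test_split(X, zh_s, test_size=0.2, random_state=0)

model = lgb.LGBMRegressor(n_estimators=400, learning_rate=0.05, num_leaves=63)
model.fit(X_tr, y_tr)

rmse = np.sqrt(np.mean((model.predict(X_te) - y_te) ** 2))
print(f"corrected-reflectivity RMSE: {rmse:.2f} dB")
```

The fitted model's feature importances (cf. Figure 4) then indicate which polarimetric variables drive the learned correction.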
15 pages, 1154 KiB  
Technical Note
Machine Learning of Usable Area of Gable-Roof Residential Buildings Based on Topographic Data
by Leszek Dawid, Kacper Cybiński and Żanna Stręk
Remote Sens. 2023, 15(3), 863; https://doi.org/10.3390/rs15030863 - 3 Feb 2023
Cited by 1 | Viewed by 1713
Abstract
In real estate appraisal, especially of residential buildings, one of the primary evaluation parameters is the property's usable area. When determining the property price, Polish appraisers use data from comparable transactions included in the Real Estate Price Register (REPR), which is highly incomplete, especially regarding properties' usable areas. This incompleteness makes the identification of comparable transactions challenging and may lead to incorrect prediction of the property price. We address this challenge by applying machine learning methods to estimate the usable area of buildings with gable roofs based only on their topographic data, which is widely available in Poland in the Database of Topographic Objects (BDOT10k) of Light Detection and Ranging (LiDAR) origin. We show that three features are enough to make accurate predictions of the usable area: the covered area, the building's height, and, optionally, the number of stories. A neural network trained on buildings from architectural bureaus reached a 4% median percentage error on data from the same source and 15% on real buildings from the city of Koszalin, Poland. The proposed method can therefore be applied by appraisers to estimate the usable area of buildings with known transaction prices and solve the problem of finding comparable properties for appraisal.
(This article belongs to the Special Issue Remote Sensing Image Processing in Poland)
Show Figures
Graphical abstract
Figure 1: Sketch of the building used to model the relationship between the usable area and the heights of the building and the knee wall. H is the height of the building and h the height of the knee wall. The red values marked on the sketch are the verging heights for the three norms described in Section 2.1: 140, 190, and 220 cm.
Figure 2: A render of the roof model used for our Monte Carlo simulation, with a rendered measurement grid overlaid on top and the roof ridge marked in orange. Subplot (a) corresponds to the LoD1 measurement grid, with a grid spacing of 1 × 1 m. Subplot (b) corresponds to the LoD2 measurement grid, with a grid spacing of 1/3 × 1/4 m. The dimensions used here are width = 12.44 m, depth = 15.65 m, and height = 2 m.
Figure 3: Worst-case scenario measurement error (WCSME) for LoD1 and LoD2 measurements as a function of roof height and roof width.
Figure 4: Mean measurement error (MME) for LoD1 and LoD2 measurements as a function of roof height and roof width.
Figure 5: Predictions of (a) the linear regression and (b) the neural network with 10 hidden neurons for the usable area of buildings from architectural bureaus, distinguishing one-story (diamonds) and two-story (dots) buildings. Input features are the covered area and the building's height.
Figure 6: Predictions of (a) the linear regression and (b) the neural network with 10 hidden neurons for the usable area of buildings from architectural bureaus, distinguishing one-story (diamonds) and two-story (dots) buildings. Input features are the covered area, the building's height, and the number of stories.
Figure 7: Predictions of the neural networks with 10 hidden neurons for the usable area of buildings in Koszalin, Poland, distinguishing one-story (diamonds) and two-story (dots) buildings. Input features are (a) the covered area and the building's height, and (b) additionally the number of stories.
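A minimal sketch of this kind of estimator, assuming scikit-learn and synthetic training data in place of the architectural-bureau dataset: a small neural network with 10 hidden neurons maps (covered area, building height, stories) to usable area and is scored with the median percentage error used in the abstract. The ground-truth formula below is a crude stand-in, not the paper's data:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(3)

# Synthetic gable-roof buildings: covered area (m^2), height (m), stories.
n = 1000
covered = rng.uniform(60, 200, n)
height = rng.uniform(4, 10, n)
stories = rng.integers(1, 3, n)
# crude stand-in truth: taller / multi-story buildings yield more usable area
usable = covered * (0.55 + 0.25 * (stories - 1)) + 2.0 * height \
         + rng.normal(0, 5, n)

X = np.column_stack([covered, height, stories])
model = make_pipeline(StandardScaler(),
                      MLPRegressor(hidden_layer_sizes=(10,), max_iter=5000,
                                   random_state=0))
model.fit(X[:800], usable[:800])               # train on the first 800 buildings

pred = model.predict(X[800:])
mpe = np.median(np.abs(pred - usable[800:]) / usable[800:]) * 100
print(f"median percentage error: {mpe:.1f}%")
```

Dropping the stories column from X reproduces the two-feature variant compared in Figures 5 and 7a.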
17 pages, 6796 KiB  
Article
Stability Analysis of Rocky Slopes on the Cuenca–Girón–Pasaje Road, Combining Limit Equilibrium Methods, Kinematics, Empirical Methods, and Photogrammetry
by Xavier Delgado-Reivan, Cristhian Paredes-Miranda, Silvia Loaiza, Michelle Del Pilar Villalta Echeverria, Maurizio Mulas and Luis Jordá-Bordehore
Remote Sens. 2023, 15(3), 862; https://doi.org/10.3390/rs15030862 - 3 Feb 2023
Cited by 7 | Viewed by 2683
Abstract
The 3D point clouds obtained from the low-cost, remote, and precise SfM (Structure from Motion) technique allow discontinuities and their characteristics to be extracted both manually, with the compass and virtual ruler of the CloudCompare software, and automatically with the DSE (Discontinuity Set Extractor) program, which is faster, more accurate, and safer. Several control planes were used, which basically consist of identifying one or several fractures and taking measurements on them both manually and remotely. The difference between the two types of measurements is around 5°, which we believe is reasonable since it is within the precision and repeatability of measurements with a geologist's compass. This work analyzes the stability of six slopes (five excavated and one natural) by applying five different analysis methodologies based on rock mass classification systems (SMR, RHRSmod, and Qslope), kinematic analysis, and analytical (limit equilibrium) analysis. The results were compared with field observations to identify the analysis methodologies best adjusted to reality. The parameters necessary for analyzing each slope, such as the orientation, quantity, spacing, and persistence of the discontinuities, were obtained from the automatic analysis. This type of analysis eliminates the subjectivity of the authors, although the findings are related to and resemble those obtained manually. The main contribution of the article is the application of fast and low-cost techniques to the evaluation of slopes. This type of analysis is in high demand today in many Andean countries, and this work aims to provide an answer. Methodologies suggested by scientific articles such as this one can later be integrated into formal procedures and taken into account in technical reports. The results show that, with the available information and applying low-cost techniques, the SMR system is the methodology that presents the best results and adjusts best to the reality of the study area. SMR is therefore a necessary parameter for determining rockfall hazards through the modified RHRS.
Show Figures
Figure 1: General aspects of the study area: (a) location of the study area, Azuay (Ecuador); (b) aerial view of the analyzed area with the main structural lineament [31]; (c) geological map of the study area: Saraguro group (E3n1S) and Jubones formation (n1n2Jb), IIGE (2005, modified). The numbers included in the figure are the studied slopes.
Figure 2: General view of the slopes analyzed on the Cuenca–Girón–Pasaje route, east–west direction: (a) Slope 1; (b) Slope 2; (c) Slope 3; (d) Slope 4; (e) Slope 5; (f) Slope 6.
Figure 3: Construction of the 3D point cloud with the Agisoft program (for example, Slope 2, located at km 80+900): (a) 3D point cloud and camera orientation; (b) orientation control points and ground control points.
Figure 4: Discontinuities automatically identified with DSE on Slope 2, located at km 80+900: (a) J1; (b) J2; (c) J3.
Figure 5: Automatic identification of discontinuities with the support of the DSE program: (a) Slope 1; (b) Slope 2; (c) Slope 3; (d) Slope 4; (e) Slope 5; (f) Slope 6.
Figure 6: Kinematic analysis: (a) Slope 1 (possible wedge failure J2–J3); (b) Slope 2 (possible planar failure by J3); (c) Slope 3 (possible overturning of J4); (d) Slope 4 (possible planar failure by J2 and J3); (e) Slope 5 (possible overturning of J2); (f) Slope 6 (possible planar failure by J1).
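The kinematic screening behind Figure 6 can be reduced to a few geometric inequalities. Below is a sketch of the classic Markland test for planar sliding: the joint must daylight in the slope face, dip more steeply than the friction angle, and strike within roughly 20° of the face. The orientations and friction angle used are made-up examples, not values measured on the studied slopes:

```python
import numpy as np

def planar_failure_feasible(dipdir_j, dip_j, dipdir_s, dip_s, phi, delta=20.0):
    """Markland-style kinematic test for planar sliding on a discontinuity.

    dipdir_*/dip_* are dip direction and dip in degrees (joint j, slope face
    s); phi is the joint friction angle in degrees. All three classic
    conditions must hold simultaneously.
    """
    daylight = dip_j < dip_s                       # plane daylights in the face
    frictional = dip_j > phi                       # steeper than friction angle
    # signed angular misfit between joint and face dip directions, in (-180, 180]
    misfit = (dipdir_j - dipdir_s + 180.0) % 360.0 - 180.0
    aligned = abs(misfit) <= delta                 # within +/- delta of the face
    return daylight and frictional and aligned

# Hypothetical joint set tested against a 70°-dipping face with dip direction 140°
print(planar_failure_feasible(dipdir_j=150, dip_j=45,
                              dipdir_s=140, dip_s=70, phi=30))   # -> True
```

Analogous cone-and-plane constructions on the stereonet handle the wedge and overturning (toppling) modes flagged for the other slopes.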
18 pages, 10418 KiB  
Article
Distribution of Enhanced Potentially Toxic Element Contaminations Due to Natural and Coexisting Gold Mining Activities Using Planet Smallsat Constellations
by Satomi Kimijima, Masahiko Nagai and Masayuki Sakakibara
Remote Sens. 2023, 15(3), 861; https://doi.org/10.3390/rs15030861 - 3 Feb 2023
Cited by 4 | Viewed by 1920
Abstract
Potentially toxic elements (PTEs) from natural and anthropogenic activities threaten the environment and human health. The association of PTEs with natural hazards can be a powerful and prominent mechanism for releasing PTEs, considerably hastening their multiple contaminations and widespread distribution. This study investigated the enhanced potential distribution of PTE contamination (arsenic, lead, and mercury) from coexisting gold mining operations combined with massive riverbank erosion in Indonesia from 2002 to 2022, where soil and water are highly contaminated naturally, using PlanetScope smallsat constellations, Google Earth imagery, and hydrographic datasets. According to the findings, increased barren extents were found because of mining deposits and road network development. Enhanced natural and anthropogenic PTE runoff would be transported across two different sub-basins, affecting broader parts of the Bone River. Between 2002 and 2022, a 139.3% river expansion was identified, eroding a maximum of 3,436,139.4 m3 of contaminated soil. In particular, land surfaces were repeatedly transformed from river to agricultural land in the lower Bone River, possibly contaminated by fertilizer spills. The combination of PTE contributions from different sources would further exacerbate the contamination level at the estuary. These findings are expected to aid in the timely monitoring and estimation of the volumes, rates, and distribution of PTEs from various natural and anthropogenic activities, and to provide alerts on PTE contamination risks to ecosystems and human health. Future work in this area should investigate contamination levels at the estuary, where contaminated materials from both natural and anthropogenic activities accumulate.
(This article belongs to the Special Issue Small Satellites for Disaster and Environmental Monitoring)
Show Figures
Figure 1: Overall methodological workflow used in this study.
Figure 2: (a) Regional overview; (b) geological characteristics and river sub-basins in the study area overlaid on data from the Shuttle Radar Topography Mission. The study area in the right panel corresponds to the assessment of mining development; the study area in the left panel corresponds to the assessment of riverbank erosion.
Figure 3: Pools of water mixed with hydrogen peroxide (a) and mercury mixed with cyanide (b) for immersing materials at the study sites [22].
Figure 4: Georeferenced map of water and sediment sampling points and the distribution of plants (Pteris vittata) at the study site.
Figure 5: Land cover classification using the PlanetScope series: (a–g) land cover transformations in the coexisting mining site (2019–2022); (h,i) overview of the target area.
Figure 6: Time series of land cover changes of built-up and barren extents by target deposit.
Figure 7: Transformations of the west-side river extents using the PlanetScope series: (a–g) transformations of river extents with differences highlighted (2002–2022); (h) site overview.
Figure 8: Transformations of the east-side river extents using the PlanetScope series: (a–g) transformations of river extents with differences highlighted (2002–2022); (h) site overview.
Figure 9: Time series of the extents of the entire Bone River, east and west sides, and eroded volumes from 2017 to 2022.
Figure 10: Time series of the monthly precipitation in the study area.
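The river-extent change that drives an eroded-volume estimate can be prototyped from multispectral imagery with a water index. The sketch below applies McFeeters' NDWI to synthetic green/NIR bands at a PlanetScope-like 3 m pixel size and approximates eroded volume as the expanded water area times an assumed mean bank height; the threshold, pixel size, and bank height are all assumptions, and the paper's actual delineation and volume workflow may differ:

```python
import numpy as np

def ndwi(green, nir):
    """McFeeters NDWI; water pixels typically have NDWI > 0."""
    return (green - nir) / (green + nir + 1e-9)

def river_area_m2(green, nir, pixel_size=3.0, threshold=0.0):
    """Water area for a PlanetScope-like scene (3 m pixels assumed)."""
    water = ndwi(green, nir) > threshold
    return water.sum() * pixel_size ** 2

# two hypothetical epochs of the same river reach
rng = np.random.default_rng(4)
green_t1 = rng.uniform(0.05, 0.3, (512, 512))
nir_t1 = rng.uniform(0.1, 0.4, (512, 512))
green_t2 = green_t1.copy()
green_t2[:, 200:260] += 0.3            # a widened channel in the later epoch
nir_t2 = nir_t1

a1 = river_area_m2(green_t1, nir_t1)
a2 = river_area_m2(green_t2, nir_t2)
bank_height = 4.0                      # assumed mean bank height, m
print(f"expansion: {100 * (a2 - a1) / a1:.1f} %, "
      f"eroded volume ~ {(a2 - a1) * bank_height:,.0f} m^3")
```

Repeating this differencing per epoch yields the kind of expansion and volume time series plotted in Figure 9.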
24 pages, 181744 KiB  
Article
AI-Prepared Autonomous Freshwater Monitoring and Sea Ground Detection by an Autonomous Surface Vehicle
by Sebastian Pose, Stefan Reitmann, Gero Jörn Licht, Thomas Grab and Tobias Fieback
Remote Sens. 2023, 15(3), 860; https://doi.org/10.3390/rs15030860 - 3 Feb 2023
Cited by 5 | Viewed by 2483
Abstract
Climate change poses special and new challenges to inland waters, requiring intensive monitoring. An application based on an autonomously operating surface vehicle (ASV) is being developed that will provide simulations, spatially and depth-resolved water parameter monitoring, bathymetry detection, and respiration measurement. A clustered payload system is integrated with a high-resolution sonar system and compared with underwater photogrammetry of objects. Additionally, a holistic 3D survey of the water body above and below the water surface is generated. The collected data are used in a simulation environment to train artificial intelligence (AI) in virtual reality (VR); these algorithms are used to improve the autonomous control of the ASV. In addition, augmented reality (AR) can be used to visualize the measurement data and to support future ASV assistance systems. The results of the investigation of a flooded quarry are explained and discussed. The outcome is a comprehensive, high-potential, simple, and rapid monitoring method for inland waters that is suitable for a wide range of scientific investigations and commercial uses related to climate change, covering simulation, monitoring, analysis, and work preparation.
Show Figures
Figure 1: Schematic description of the investigations with the autonomously operating ASV, with continuous, three-dimensional, multisensory recording of inland water bathymetry, depth-resolved water parameters, detection of groundwater inflow, measurement of water body respiration, validation of the results by scientific divers, and presentation of the results using AI and VR methods.
Figure 2: The hardware foundation, the swimming robot (Section 2.2); the three measurement cases of sonar sampling (Section 3.1), water parameters (Section 2.4), and gas exhaust measurement (Section 2.4); and, as the roof, the software functions (Section 4) and visualization.
Figure 3: The ASVs used: (a) "Elisabeth", a Clearpath Kingfisher, and (b) "Ferdinand", with the operating system and the drives.
Figure 4: (a) Photo of the first prototype, (b) 3D model of the second prototype sensor node [33], and (c) CAD model of the winch system.
Figure 5: The in situ measurement lab devices for water parameters, such as electrical conductivity, pH value, and redox potential, housed for use by scientific divers.
Figure 6: (a) Sonar data acquisition with the swimming platform "Ferdinand" at a flooded quarry in Germany and (b) respiration measurement in the wetlands of Brazil.
Figure 7: 3D model of the sonar point cloud with the reconstruction parameters for Agisoft Metashape.
Figure 8: Course of the sound profile and the water parameters during the echo sounder measurement.
Figure 9: Results of the reconstructed landscape and the reconstruction parameters of Agisoft Metashape [38], as the digital height model of the manually cleaned drone images.
Figure 10: Reconstructed underwater objects: (a) injection pump, (b) weapons, (c) switch box, and (d) wheel, with the scale and the local coordinate system.
Figure 11: Comparison of (a) the sonar data, (b) a detailed view of the sonar data, and (c) the photogrammetric data of two gravestones.
Figure 12: Combination of the sonar data processed with Qinsy [39] and the drone recordings and underwater models aligned and stitched with Metashape into a holistic georeferenced model.
Figure 13: Result of the combined underwater and landscape reconstruction model in the EPSG:25833 coordinate system.
Figure 14: Pipeline of depth-sensing data analysis with AI for the robotic boat.
Figure 15: (a) Robotic boat "Elisabeth" within the virtual environment; movements of the water surface are implemented through displacement maps and shrink-wrap modifiers. (b) Color textures enable accurate renderings for visualization.
Figure 16: (a) Rigged fish model with four bones and bone constraints (green colored, copy rotation of the main bone at the head region) to create a wiggle animation; (b) the model used in a particle system influenced by a force field in the virtual environment created by photogrammetry (MatCap coloring for 3D representation).
Figure 17: Side-scan sonar results for synthetic generation of point cloud data (perspective view).
Figure 18: Dynamic scan of a virtual fish swarm (left: side perspective; right: top perspective).
Figure 19: (a) The 3D mesh representation with colored material for segmentation with the virtual sonar sensor; (b) segmented point cloud with fish (red) and ground (green) classes.
Figure 20: Exemplary scene in (a) the Freiberg X-SITE CAVE to visualize field trials using historic data of the runs embedded in 3D scenes (b).
Figure 21: Virtual scene in the Unity game engine to display large point clouds and stream them to an HMD, in our case a Microsoft HoloLens 2 (HL2).
Figure 22: Construction model and ASV prototype for the next stage, with higher performance and a streamlined design of the winch and sonar system.
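A classical baseline for the fish-versus-ground point-cloud segmentation shown in Figure 19 is a RANSAC plane fit: points close to the dominant bed plane become the ground class, the rest the fish class. The synthetic cloud and thresholds below are illustrative assumptions; the paper trains AI models on simulated sonar data rather than using this geometric shortcut:

```python
import numpy as np

rng = np.random.default_rng(5)

# Synthetic sonar point cloud: a gently sloping lake bed plus a fish swarm.
n_ground = 5000
xy = rng.uniform(0, 50, (n_ground, 2))
z_ground = -20.0 + 0.05 * xy[:, 0] + rng.normal(0, 0.05, n_ground)
ground = np.column_stack([xy, z_ground])
swarm = rng.normal([25, 25, -12], [2, 2, 1], (400, 3))   # fish above the bed
cloud = np.vstack([ground, swarm])

def ransac_plane(points, n_iter=300, tol=0.2, rng=rng):
    """Fit z = a*x + b*y + c by RANSAC; returns the inlier ('ground') mask."""
    best_mask = np.zeros(len(points), dtype=bool)
    for _ in range(n_iter):
        sample = points[rng.choice(len(points), 3, replace=False)]
        A = np.column_stack([sample[:, :2], np.ones(3)])
        try:
            a, b, c = np.linalg.solve(A, sample[:, 2])   # plane through 3 points
        except np.linalg.LinAlgError:
            continue                                     # degenerate sample
        resid = np.abs(points[:, 0] * a + points[:, 1] * b + c - points[:, 2])
        mask = resid < tol
        if mask.sum() > best_mask.sum():
            best_mask = mask                             # keep the best consensus
    return best_mask

is_ground = ransac_plane(cloud)
print(f"ground points: {is_ground.sum()}, fish-class points: {(~is_ground).sum()}")
```

Such a geometric split is also a cheap way to generate labeled training data for the learned segmentation the paper targets.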
24 pages, 17018 KiB  
Article
Impacts of Green Fraction Changes on Surface Temperature and Carbon Emissions: Comparison under Forestation and Urbanization Reshaping Scenarios
by Faisal Mumtaz, Jing Li, Qinhuo Liu, Aqil Tariq, Arfan Arshad, Yadong Dong, Jing Zhao, Barjeece Bashir, Hu Zhang, Chenpeng Gu and Chang Liu
Remote Sens. 2023, 15(3), 859; https://doi.org/10.3390/rs15030859 - 3 Feb 2023
Cited by 21 | Viewed by 4781
Abstract
Global land cover dynamics alter energy, water, and greenhouse gas exchange between land and atmosphere, affecting weather and climate from local to global scales. Although reforestation can provide localized cooling, ongoing land use land cover (LULC) shifts are expected to exacerbate urban heat island impacts. In this study, we monitored spatiotemporal changes in green cover in response to the land use transformations associated with the Khyber Pakhtunkhwa (KPK) provincial government's Billion Tree Tsunami Project (BTTP) and the Ravi Urban Development Plan (RUDP) initiated by the provincial government of Punjab, both in Pakistan. The land change modeler (LCM) was used to assess land cover changes and transformations between 2000 and 2020 across Punjab and KPK. Furthermore, a curve fit linear regression model (CFLRM) and sensitivity analysis were employed to analyze the impacts of land cover dynamics on land surface temperature (LST) and carbon emissions (CE). Results indicated a significant increase in green fraction of +5.35% under the BTTP, achieved by utilizing bare land, with an effective transition of 4375.87 km2. Across the Punjab province, however, an alarming reduction in green fraction cover of −1.77% and an increase in artificial surfaces of +1.26% were noted. A significant decrease in mean monthly LST of −4.3 °C was noted in response to the BTTP policy, while an increase of 5.3 °C was observed in association with the RUDP. A substantial increase in LST of 0.17 °C was observed in association with the transformation of vegetation to artificial surfaces, and an effective decrease of −0.21 °C was observed for the opposite transition. Furthermore, sensitivity analysis suggested that LST fluctuations affect the percentage of CO2 emissions. These findings can assist policymakers in revisiting their policies to promote ecological conservation and sustainability in urban planning.
Show Figures
Figure 1: Maps showing the locations of the study sites in the KPK and Punjab provinces, Pakistan.
Figure 2: Flowchart indicating the detailed procedures for data collection, preprocessing, and evaluation of results.
Figure 3: (A) Spatial patterns of land use land cover classes in KPK province, including the sample sites (a) Peshawar, (a1) 2000, (a2) 2010, (a3) 2020; and (b) Swat, (b1) 2000, (b2) 2010, (b3) 2020. (B) Spatial patterns of land use land cover classes in Punjab province, including the sample site (c) Lahore, (c1) 2000, (c2) 2010, (c3) 2020.
Figure 4: The LULC transition categories for KPK (left) and Punjab (right) from 2000 to 2020.
Figure 5: (A) Gains and losses for all LULC classes over KPK province from 2000–2020: (a) cultivated land, (b) vegetation, (c) water bodies, (d) artificial surfaces, (e) bare land, and (f) permanent snow and ice. (B) Gains and losses for all LULC classes over Punjab province from 2000–2020: (a) cultivated land, (b) vegetation, (c) water bodies, (d) artificial surfaces, and (e) bare land. (C) Gains and losses for all LULC classes over 2000–2020 at sites (a) Peshawar (top), (b) Swat (middle), and (c) Lahore (bottom).
Figure 6: Spatial patterns of LST over sites a, b, and c from 2000–2020: (a1) Peshawar 2000, (a2) Peshawar 2010, (a3) Peshawar 2020; (b1) Swat 2000, (b2) Swat 2010, (b3) Swat 2020; (c1) Lahore 2000, (c2) Lahore 2010, and (c3) Lahore 2020.
Figure 7: Spatial patterns of the normalized difference vegetation index (NDVI) over sites a, b, and c from 2000–2020: (a1) Peshawar 2000, (a2) Peshawar 2010, (a3) Peshawar 2020; (b1) Swat 2000, (b2) Swat 2010, (b3) Swat 2020; (c1) Lahore 2000, (c2) Lahore 2010, and (c3) Lahore 2020.
Figure 8: Correlation between LST and NDVI over the sample sites (a, b, and c): (a1) Peshawar 2000, (a2) Peshawar 2010, (a3) Peshawar 2020; (b1) Swat 2000, (b2) Swat 2010, (b3) Swat 2020; (c1) Lahore 2000, (c2) Lahore 2010, and (c3) Lahore 2020.
Figure 9: Maps of carbon stock over the sites: (a1) Peshawar 2000, (a2) Peshawar 2010, (a3) Peshawar 2020; (b1) Swat 2000, (b2) Swat 2010, (b3) Swat 2020; (c1) Lahore 2000, (c2) Lahore 2010, and (c3) Lahore 2020.
Figure 10: Sensitivity analysis between LST and carbon emissions.
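The LST-green fraction coupling at the heart of the study can be prototyped as a per-scene regression of LST on NDVI, followed by a first-order sensitivity estimate. The synthetic samples and the rough one-to-one mapping of green fraction onto NDVI below are assumptions for illustration, not the paper's CFLRM:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(6)

# Hypothetical per-pixel samples for one scene: NDVI and LST (deg C).
ndvi = rng.uniform(0.05, 0.8, 500)
lst = 42.0 - 12.0 * ndvi + rng.normal(0, 1.5, 500)   # greener pixels run cooler

fit = stats.linregress(ndvi, lst)                     # LST ~ slope * NDVI + intercept
print(f"slope = {fit.slope:.2f} degC per unit NDVI, r = {fit.rvalue:.2f}")

# First-order sensitivity: LST change for a +5.35% green-fraction increase
# (the BTTP gain reported in the abstract), assuming green fraction maps
# roughly one-to-one onto NDVI in this toy setting.
print(f"dLST ~ {fit.slope * 0.0535:.2f} degC")
```

Applying the same regression per epoch and site reproduces the negative LST-NDVI correlations shown in Figure 8.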
17 pages, 26072 KiB  
Technical Note
Drought Vulnerability Curves Based on Remote Sensing and Historical Disaster Dataset
by Huicong Jia, Fang Chen, Enyu Du and Lei Wang
Remote Sens. 2023, 15(3), 858; https://doi.org/10.3390/rs15030858 - 3 Feb 2023
Viewed by 2344
Abstract
As drought vulnerability assessment is fundamental to risk management, it is urgent to develop scientific and reasonable assessment models to determine such vulnerability. The vulnerability curve is key to the risk assessment of various disasters, connecting hazard analysis and risk analysis. To date, research on the vulnerability curves of earthquakes, floods, and typhoons is relatively mature. However, there are few studies on the drought vulnerability curve, and its application value needs to be further confirmed and popularized. In this study, on the basis of historical disaster data collected from 52 drought events in China from 2009 to 2013, three remote sensing drought indexes were selected as disaster-causing factors, the affected population was selected to reflect the overall disaster situation, and five typical regional drought vulnerability curves were constructed. The results showed that (1) overall, according to the statistics of the probability distributions, most of the normalized difference vegetation index (NDVI) and temperature vegetation drought index (TVDI) variance ratios were concentrated between 0 and ~0.15, and most of the enhanced vegetation index (EVI) variance ratios were concentrated between 0.15 and ~0.6. From a regional perspective, the NDVI and EVI variance ratio values of the northwest inland perennially arid area (NW), the southwest mountainous area with successive years of drought (SW), and the Hunan-Hubei-Jiangxi area with sudden change from drought to waterlogging (HJ) were close to each other and significantly higher than the TVDI variance ratio values. (2) Most of the losses (drought at-risk populations, DRP) were concentrated in 0~0.3, with a cumulative proportion of about 90.19%. Hypothesis testing showed that, at the chosen significance level, DRP obeys the Weibull distribution, with optimal parameters. (3) The drought vulnerability curves conformed to the distribution rule of the logistic curve, with the loss rate growing from 0 to 1. It was found that the arid and ecologically fragile area in the farming-pastoral ecotone (AP) was always a high-risk area with high vulnerability, which should be the focus of drought risk prevention and reduction. The study reduces the difficulty of developing vulnerability curves, indicating that the method can be widely applied to other regions in the future. Furthermore, the research results are of great significance for accurate drought risk early warning and for deciding whether to implement the national drought disaster emergency rescue response.
Show Figures
Figure 1: Location of the study area.
Figure 2: Flow chart of the methodology of this research.
Figure 3: Probability density and cumulative probability functions: (a) PDF for NDVI; (b) CDF for NDVI; (c) PDF for EVI; (d) CDF for EVI; (e) PDF for TVDI; (f) CDF for TVDI.
Figure 4: Probability plots of the variance ratios of NDVI, EVI, and TVDI in sub-regions of China: (a) NW; (b) SW; (c) HJ; (d) AP; (e) YR.
Figure 5: Probability plots of DRP with 95% confidence intervals: (a) lognormal; (b) exponential; (c) gamma; (d) Weibull.
Figure 6: Probability density and cumulative probability of the standardized DRP: (a) probability density; (b) cumulative probability.
Figure 7: Vulnerability curves for the five regions of China: (a) NW; (b) SW; (c) HJ; (d) AP; (e) YR.
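The two statistical building blocks of the paper's curves, a Weibull fit to the loss variable and a logistic vulnerability curve, are easy to sketch with SciPy. The synthetic DRP sample and hazard-loss pairs below are placeholders for the 52-event historical dataset:

```python
import numpy as np
from scipy import optimize, stats

rng = np.random.default_rng(7)

# Standardized drought at-risk population (DRP) ratios, mostly in 0-0.3 as in
# the abstract; a synthetic Weibull draw stands in for the 52-event sample.
drp = stats.weibull_min.rvs(c=1.3, scale=0.12, size=52, random_state=0)

c, loc, scale = stats.weibull_min.fit(drp, floc=0)   # fit with location fixed at 0
print(f"Weibull shape c = {c:.2f}, scale = {scale:.3f}")

# Vulnerability curve: loss rate vs. hazard intensity follows a logistic curve
# that grows from 0 to 1, as found for the five regions.
def logistic(x, k, x0):
    return 1.0 / (1.0 + np.exp(-k * (x - x0)))

hazard = np.linspace(0, 1, 52)                        # e.g., an index variance ratio
loss = logistic(hazard, 9.0, 0.5) + rng.normal(0, 0.03, 52)
(k, x0), _ = optimize.curve_fit(logistic, hazard, loss, p0=[5.0, 0.5])
print(f"fitted logistic: k = {k:.1f}, x0 = {x0:.2f}")
```

A goodness-of-fit test (e.g., Kolmogorov-Smirnov against the fitted Weibull) then plays the role of the hypothesis testing reported in result (2).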
19 pages, 9095 KiB  
Article
Analysis of Temperature Semi-Annual Oscillations (SAO) in the Middle Atmosphere
by Ming Shangguan and Wuke Wang
Remote Sens. 2023, 15(3), 857; https://doi.org/10.3390/rs15030857 - 3 Feb 2023
Cited by 1 | Viewed by 1997
Abstract
The middle atmosphere plays an important role in research on various dynamical and energy processes. Microwave Limb Sounder (MLS) observations, reanalyses, and model simulations with NCAR's Whole Atmosphere Community Climate Model (WACCM) in the range between 100 and 0.1 hPa from 2005 to 2020 have been analyzed with a focus on temperature semi-annual oscillations (SAO). A significant temperature SAO is prominent in the tropical region (20°S–20°N) around 1–3 hPa, which is consistent with previous studies. We also found significant SAO in the northern hemisphere (NH) high latitudes between 8 and 0.3 hPa and in the southern hemisphere (SH) high latitudes between 0.5 and 0.1 hPa, which has received less attention in previous studies. The thermal budget based on MERRA2 and the simulations is used to explain the mechanism of the SAO in the middle atmosphere. In the tropics, the two temperature peaks are mainly determined by radiative processes. In the NH high latitudes of the stratosphere, the temperature peak in January is mainly related to dynamical processes, while the temperature peak in July is determined by a combination of dynamical and radiative processes. In the NH high latitudes of the lower mesosphere, the first peak in June is primarily associated with dynamical and radiative processes, while the second peak in December is primarily associated with dynamical processes. In the SH high latitudes of the lower mesosphere, the first temperature peak in July is mainly due to dynamical processes, while the second temperature peak in December is mainly due to radiative processes. The SH and NH high-latitude SAO in the lower mesosphere thus present distinct features. Furthermore, we performed model simulations with and without SAO in sea surface temperatures (SST-SAO) to study the connection between SST and the temperature SAO. WACCM6 results indicate that the SAO in the middle atmosphere is partially affected by the existence of an SST-SAO: removing the SAO in SST decreases the PSD magnitude of the SAO in the tropical region and increases it in the polar region, and the amplitudes of the total heating rates are also modified. The WACCM experiment confirms the relationship between the SST-SAO and the temperature SAO in the middle atmosphere.
Show Figures

Figure 1: The power spectrum densities (PSDs) of the temperature SAO based on MLS (a), ERA5 (b), MERRA2 (c) and the model simulation (d) in the period 2005–2020. The dots mark the significant area at an FDR level of 0.05.
Figure 2: The ratio between the SAO and the maximum PSD based on MLS (a), ERA5 (b), MERRA2 (c) and the model simulation (d) in the period 2005–2020. The dots mark the SAO significant area at an FDR level of 0.05.
Figure 3: The monthly annual mean linearly detrended temperature around the tropical region (10°S–10°N) from MLS (a), ERA5 (b), MERRA2 (c) and the model simulation (d) in the period 2005–2020.
Figure 4: The monthly annual mean temperature at 2 hPa for the tropical region (10°S–10°N) (a) and the NH high latitudes (70°N–82°N) (b), and at 0.3 hPa for the SH high latitudes (58°S–70°S) (c) and the NH high latitudes (70°N–82°N) (d), in the period 2005–2020.
Figure 5: Annual cycle of the zonal mean temperature and heating rates at 2 hPa averaged over the tropical region (10°S–10°N) (a) and the NH high latitudes (70°N–82°N) (b), based on MERRA2 data. The red, blue, dashed blue and dashed red lines indicate the heating rates related to the analysis tendency (DTDTANA), dynamics (DTDTDYN), radiation (DTDTRAD) and gravity wave drag (DTDTGWD), respectively. Positive total heating rates are shaded orange and negative total heating rates blue. The total heating rates, the sum of the analysis, dynamical, radiative and GWD heating rates, are enlarged 10 times for visibility.
Figure 6: As in Figure 5, but at 0.3 hPa averaged over the SH high latitudes (58°S–70°S) (a) and the NH high latitudes (70°N–82°N) (b).
Figure 7: The monthly annual mean MLS ozone (blue) and temperature (orange) at 2 hPa for the tropical region (10°S–10°N) (a) and the NH high latitudes (70°N–82°N) (b), and at 0.3 hPa for the SH high latitudes (58°S–70°S) (c) and the NH high latitudes (70°N–82°N) (d), in the period 2005–2020.
Figure 8: The correlation between the MLS temperature SAO signal at 2 hPa, averaged over the tropical region (10°S–10°N) (a) and the NH high latitudes (70°N–82°N) (b), and the SST-SAO. The dots mark the area significant at the 95% level.
Figure 9: As in Figure 8, but at 0.3 hPa averaged over the SH high latitudes (58°S–70°S) (a) and the NH high latitudes (70°N–82°N) (b).
Figure 10: (a) The PSD of the temperature SAO in the period 2005–2020 based on the simulation with the SST-SAO removed (rmSAO). (b) As (a), but for the simulation with the SST-SAO removed only in the tropics (rmSAO-TP). The dots mark SAO significant areas at an FDR level of 0.05. (c) The relative difference in temperature SAO PSD between rmSAO and the control simulation, (rmSAO-Control)/Control × 100, in the period 2005–2020. (d) As (c), but for (rmSAO-TP-Control)/Control × 100.
Figure 11: Annual cycle of the heating rates at 2 hPa averaged over the tropical region (10°S–10°N) (a) and the NH high latitudes (70°N–82°N) (b). The red, blue and yellow lines indicate the heating rates related to dynamics (DTDTDYN), radiation (DTDTRAD) and GWD processes (DTDTGWD), respectively; the total heating rates, the sum of the three, are shown as black lines. The solid, dashed and dotted-dashed lines indicate the control, rmSAO and rmSAO-TP simulations, respectively.
Figure 12: As in Figure 11, but at 0.3 hPa averaged over the SH high latitudes (58°S–70°S) (a) and the NH high latitudes (70°N–82°N) (b).
24 pages, 8115 KiB  
Article
An Object-Oriented Method for Extracting Single-Object Aquaculture Ponds from 10 m Resolution Sentinel-2 Images on Google Earth Engine
by Boyi Li, Adu Gong, Zikun Chen, Xiang Pan, Lingling Li, Jinglin Li and Wenxuan Bao
Remote Sens. 2023, 15(3), 856; https://doi.org/10.3390/rs15030856 - 3 Feb 2023
Cited by 14 | Viewed by 3562
Abstract
Aquaculture plays a key role in achieving the Sustainable Development Goals (SDGs), but it is difficult to accurately extract single-object aquaculture ponds (SOAPs) from medium-resolution remote sensing images (Mr-RSIs). Because of the limited spatial resolution of Mr-RSIs, most studies have aimed to map aquaculture areas rather than SOAPs. This study proposed an object-oriented method for extracting SOAPs. We developed an iterative algorithm combining grayscale morphology and edge detection to segment water bodies, and proposed a segmentation degree detection approach to select and edit potential SOAPs. A classification decision tree combining aquaculture knowledge about the morphological, spectral, and spatial characteristics of SOAPs was then constructed for object filtering. We selected a 707.26 km2 study region in Sri Lanka and implemented our method on Google Earth Engine (GEE). A 25.11 km2 plot was chosen for verification, where 433 SOAPs were manually labeled from 0.5 m high-resolution RSIs. The results showed that our method extracts SOAPs with high accuracy: the relative error of the total area between the extracted results and the labeled dataset was 1.13%, and the MIoU of the proposed method was 0.6965, an improvement of between 0.1925 and 0.3268 over the comparative segmentation algorithms provided by GEE. The proposed method offers a workable solution for extracting SOAPs over large regions and shows high spatiotemporal transferability and potential for identifying other objects. Full article
(This article belongs to the Special Issue Remote Sensing of Wetlands and Biodiversity)
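As a concrete illustration of the segmentation degree detection step, the sketch below applies the LSI/RPOC thresholds shown in Figure 6; the exact definitions of the two metrics follow the paper and are only assumed here:

```python
def segmentation_degree(lsi, rpoc, lsi_thr=2.5, rpoc_thr=1.5):
    """Label a candidate water object using the thresholds of Figure 6.

    lsi:  shape-complexity index of the object (definition per the paper).
    rpoc: ratio flagging merged objects (semantics assumed here).
    """
    if rpoc > rpoc_thr:
        return "under-segmented"          # merged ponds; needs editing
    if lsi > lsi_thr:
        return "over-segmented"           # fragmented outline
    return "appropriately segmented"      # kept as a potential SOAP

print(segmentation_degree(1.8, 1.2))      # -> appropriately segmented
```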
Show Figures

Graphical abstract
Figure 1: World aquaculture and fishery production and consumption. Data source: [14].
Figure 2: Study region selected in this study: (a) location of the study region; (b) overview of the study region; (c) Sentinel-2 true-color RSI; (d) detail from the 0.5 m Hr-RSI, where SOAPs are clearly visible; (e) detail from the 10 m Sentinel-2 Mr-RSI, where SOAPs are hard to distinguish. All maps in this paper are projected using the Cylindrical Equal Area projection (ESRI: 54034).
Figure 3: Framework of the proposed method for SOAP extraction.
Figure 4: Flowchart of the iterative algorithm combining grayscale morphology and Canny edge detection (Y = yes; N = no).
Figure 5: Example of the proposed iterative algorithm segmenting water pixels: (a) 0.5 m Hr-RSI; (b) 10 m Mr-RSI from Sentinel-2; (c) maximum NDWI image (MNI); (d) Canny edge image (CEI) generated from the MNI at the first iteration, where a few aquaculture ponds are segmented completely; (e) CEI at the second iteration, where most aquaculture ponds are segmented completely; (f) CEI at the third iteration, where all the aquaculture ponds are segmented completely.
Figure 6: Introduction of the segmentation degree detection method: (a) examples of over-segmented objects (LSI > 2.5 and RPOC ≤ 1.5); (b) examples of appropriately segmented objects (LSI ≤ 2.5 and RPOC ≤ 1.5); (c) examples of under-segmented objects (LSI > 2.5 and RPOC > 1.5); (d) examples of under-segmented objects (LSI ≤ 2.5 and RPOC > 1.5); (e) an illustration of these segmentation degrees.
Figure 7: Decision tree for extracting aquaculture ponds from potential SOAPs (Y = yes; N = no).
Figure 8: Extraction result of SOAPs in the study region: (a) overview of the distribution of aquaculture ponds in the study region; (b) extracted SOAPs with shapes and sizes similar to the ground-truth data; (c) most extracted aquaculture ponds were separated from adjacent waters and extracted as SOAPs; (d) abandoned ponds excluded from the extraction result.
Figure 9: (a) Number of SOAPs extracted in the study region by areal range and (b) histogram of the size distribution of SOAPs from 0 to 10,000 m2.
Figure 10: Comparison between extracted and labeled SOAPs: (a) location of the verification region; (b) distribution of omission and commission SOAPs; (c) example of commission SOAPs; (d) example of omission SOAPs.
Figure 11: Image segmentation results of the proposed, K-Means, G-Means, and SNIC methods; the red contours represent the labeled SOAPs.
Figure 12: Segmentation accuracy comparison between the proposed, K-Means, G-Means, and SNIC methods in different SOAP size classes: (a) RMSEs of the four methods for SOAPs in different size ranges; (b) as in (a), but for the MAEs; (c) as in (a), but for the MAPEs; (d) as in (a), but for the MIoUs.
17 pages, 3806 KiB  
Article
Time-Dependent Systematic Biases in Inferring Ice Cloud Properties from Geostationary Satellite Observations
by Dongchen Li, Masanori Saito and Ping Yang
Remote Sens. 2023, 15(3), 855; https://doi.org/10.3390/rs15030855 - 3 Feb 2023
Cited by 2 | Viewed by 2204
Abstract
Geostationary satellite-based remote sensing is a powerful tool for observing and understanding the spatiotemporal variation of cloud optical-microphysical properties and their climatologies. Solar reflectances measured by the Advanced Baseline Imager (ABI) instruments aboard Geostationary Operational Environmental Satellites 16 and 17 have different spatial pixel resolutions, from 0.5 km in a visible band up to 2 km in infrared bands. For multi-band retrievals of cloud properties, reflectances with finer spatial resolution need to be resampled (averaged or sub-sampled) to match the coarsest resolution. Averaging all small pixels within a larger pixel footprint is more accurate but computationally demanding when the data volume is large, so NOAA operational cloud products resample the reflectance data by sub-sampling (selecting one high-resolution pixel), which can introduce retrieval biases. In this study, we examine various error sources of retrieval biases in cloud optical thickness (COT) and cloud effective radius (CER) caused by sub-sampling, including the solar zenith angle, viewing zenith angle, pixel resolution, and cloud type. CER retrievals from ice clouds based on sub-sampling have larger biases and uncertainties than COT retrievals. The relative error with respect to pixel averaging is positive for clouds with small COT or CER and negative for clouds with large COT or CER. The relative error of COT decreases as the pixel resolution becomes coarser. The COT retrieval biases are attributed mainly to cirrus and cirrostratus clouds, while the largest biases in CER retrievals are associated with cirrus clouds. Full article
(This article belongs to the Special Issue Scattering by Ice Crystals in the Earth's Atmosphere)
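To make the three resampling schemes of Figure 1 concrete, here is a minimal sketch of Cases A-C for one 2-km super-pixel; the array layout and values are illustrative, not NOAA's code:

```python
import numpy as np

def resample_cases(block_05km):
    """block_05km: (4, 4) array of 0.5-km reflectances in one 2-km pixel.
    Returns the 2-km reflectance under the three schemes of Figure 1."""
    case_a = block_05km.mean()          # Case A: average all 16 sub-pixels
    case_b = block_05km[:2, :2].mean()  # Case B: average one 1-km block, then sub-sample it
    case_c = block_05km[0, 0]           # Case C: sub-sample one 0.5-km pixel
    return case_a, case_b, case_c

block = np.array([[0.30, 0.32, 0.35, 0.33],
                  [0.31, 0.29, 0.36, 0.34],
                  [0.40, 0.42, 0.45, 0.44],
                  [0.41, 0.43, 0.46, 0.47]])
print(resample_cases(block))  # Cases B and C drift from the Case A benchmark
```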
Show Figures

Graphical abstract
Figure 1: Illustration of the three resampling cases. Case A averages all 0.5-km pixels within the 2-km pixel to represent the reflectance of the whole 2-km pixel. Case B averages a block of four 0.5-km sub-pixels to represent the reflectance of a 1-km pixel, and uses that 1-km pixel as the sub-sampled value for the 2-km super-pixel. Case C uses the reflectance of one 0.5-km sub-pixel to represent the whole 2-km super-pixel.
Figure 2: Schematic diagram of the GOES-17 sun-view geometry.
Figure 3: Distribution of the projected effective super-pixel resolution in a GOES-17 ABI full-disk image.
Figure 4: Nakajima-King schematic diagram based on the MODIS Collection 6 ice particle model. The yellow circles and lines correspond to the retrievals from Case A, taken as the benchmark. The red and green arrows indicate cases in which the reflectances from Case B or C are larger or smaller than the Case A reflectance, and the red and green lines show the corresponding COT and CER retrievals, respectively.
Figure 5: Relative errors of Case B and Case C as a function of (a) COT or (b) CER retrieved from pixel averaging as in Case A. The shaded regions show ±1 weighted standard deviation.
Figure 6: Box plot of COT in various CER intervals based on Case A retrievals.
Figure 7: 2-D histograms of the COT and CER relative mean bias errors for cirrus (a,d), cirrostratus (b,e), and cumulonimbus (c,f) as a function of SZA (x-axis) and effective pixel resolution (y-axis). The colors depict the relative mean bias error, defined as the retrieved COT or CER of Case C minus that of Case A.
Figure 8: Relative error of (a) COT and (b) CER in Case B and Case C as a function of LT, based on GOES-17 retrievals for high clouds.
Figure 9: Cloud type fraction, based on ISCCP cloud classification criteria, using the COT retrievals from Case A and the GOES-17 cloud phase and CTP products.
Figure 10: Contribution of each high cloud type to the relative error of (a) COT and (b) CER retrievals as a function of LT with Case C sub-sampling.
24 pages, 11773 KiB  
Article
Estimation of Leaf Nitrogen Content in Rice Using Vegetation Indices and Feature Variable Optimization with Information Fusion of Multiple-Sensor Images from UAV
by Sizhe Xu, Xingang Xu, Clive Blacker, Rachel Gaulton, Qingzhen Zhu, Meng Yang, Guijun Yang, Jianmin Zhang, Yongan Yang, Min Yang, Hanyu Xue, Xiaodong Yang and Liping Chen
Remote Sens. 2023, 15(3), 854; https://doi.org/10.3390/rs15030854 - 3 Feb 2023
Cited by 30 | Viewed by 5416
Abstract
LNC (leaf nitrogen content) in crops is significant for diagnosing crop growth status and guiding fertilization decisions. UAV (unmanned aerial vehicle) remote sensing now plays an important role in estimating the nitrogen nutrition of crops at the field scale. However, most existing methods for evaluating crop nitrogen from UAV imagery use a single image type, such as RGB or multispectral images, and seldom consider fusing information from different types of UAV imagery. In this study, GS (Gram-Schmidt pan sharpening) was used to fuse images from a digital RGB camera and a multispectral camera (blue, green, red, red-edge and NIR bands) mounted on a UAV. The HSV (Hue-Saturation-Value) color space transformation was used to separate soil background noise from crops, exploiting the high spatial resolution of UAV images. Two feature-variable optimization methods, the Successive Projection Algorithm (SPA) and the Competitive Adaptive Reweighted Sampling method (CARS), combined with two regularized regression algorithms, LASSO and RIDGE, were adopted to estimate LNC and compared with the commonly used Random Forest algorithm. The results showed that: (1) LNC estimation using the fusion images was distinctly more accurate than with the original multispectral images; (2) the denoised images performed better than the original multispectral images in evaluating rice LNC; (3) the RIDGE-SPA combination, using SPA to select MCARI, SAVI and OSAVI, performed best, with an R2 of 0.76 and an RMSE of 10.33%. Fusing multi-sensor UAV imagery and optimizing the feature variables can thus estimate rice LNC more effectively, and can inform fertilization decisions in rice fields. Full article
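The final regression step can be pictured in a few lines of scikit-learn: a hedged sketch of an L2-regularized (RIDGE) fit on three columns standing in for the SPA-selected indices (MCARI, SAVI, OSAVI); the values are synthetic, not the study's measurements:

```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.metrics import r2_score

rng = np.random.default_rng(1)
# Columns stand in for the SPA-selected indices: MCARI, SAVI, OSAVI.
X = rng.uniform(0.1, 0.9, size=(60, 3))
y = 20 + 15 * X @ np.array([0.5, 0.3, 0.2]) + rng.normal(0, 1.0, 60)  # synthetic LNC

model = Ridge(alpha=1.0).fit(X[:45], y[:45])  # L2 penalty stabilizes collinear VIs
print("R2 on held-out plots:", r2_score(y[45:], model.predict(X[45:])))
```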
Show Figures

Figure 1: Overview of the study area and experimental design.
Figure 2: UAV system: DJI P4M UAV (upper), image sensor (lower-right corner) and reflectance panel (lower-left corner).
Figure 3: Pre-processing workflow of the UAV imagery.
Figure 4: Image fusion with GS (Gram-Schmidt pan sharpening).
Figure 5: Process of the two HSV color space transformations.
Figure 6: Process of background noise removal.
Figure 7: Correlation coefficients between VIs and LNC based on the raw multispectral images at different stages.
Figure 8: Correlation coefficients between VIs and LNC based on the processed images at different stages.
Figure 9: Process of variable extraction by SPA: (a) the variation of RMSE; (b) the selection of optimal variables (the VI value is the actual value of the vegetation index, and the variable index is the number of vegetation indices entered into the algorithm).
Figure 10: Process of variable extraction by CARS: (a) the variation of RMSE; (b) the selection of the optimal number of variables.
Figure 11: Predictive performance for each stage; (I–III) represent the rice jointing, booting and filling stages, respectively. (a) RIDGE regression on the original multispectral images; (b) LASSO regression on the original multispectral images; (c) RIDGE regression on the fusion images; (d) LASSO regression on the fusion images.
Figure 12: As in Figure 11, but for the denoised images: (a) RIDGE and (b) LASSO regression on the denoised original multispectral images; (c) RIDGE and (d) LASSO regression on the denoised fusion images.
Figure 13: As in Figure 11, but comparing feature selection methods on the denoised images: (a) RIDGE-SPA and (b) RIDGE-CARS on the denoised original multispectral images; (c) RIDGE-SPA and (d) RIDGE-CARS on the denoised fusion images.
Figure 14: Spatial distribution of LNC in the rice canopy based on the SPA-RIDGE method.
24 pages, 5911 KiB  
Article
Investigating the Potential of Crop Discrimination in Early Growing Stage of Change Analysis in Remote Sensing Crop Profiles
by Mengfan Wei, Hongyan Wang, Yuan Zhang, Qiangzi Li, Xin Du, Guanwei Shi and Yiting Ren
Remote Sens. 2023, 15(3), 853; https://doi.org/10.3390/rs15030853 - 3 Feb 2023
Cited by 10 | Viewed by 3348
Abstract
Currently, remote sensing crop identification is mostly based on all available images acquired throughout crop growth. However, image and data resources in the early growth stage are limited, which makes early crop identification challenging. Different crop types have different phenological and seasonal rhythm characteristics, and their growth rates differ over time. Making full use of crop growth characteristics to augment the information on crop growth differences at different times is therefore key to early crop identification. In this study, we first calculated differential features between different periods as new features, based on images acquired during the early growth stage. Secondly, multi-temporal difference features for each period were constructed by combination, and a feature optimization method was used to obtain the optimal feature set over all possible combinations of periods; the key early identification characteristics of different crops, as well as their stage-change characteristics, were explored. Finally, the performance of the classification and regression tree (Cart), Random Forest (RF), Gradient Boosting Decision Tree (GBDT), and Support Vector Machine (SVM) classifiers in recognizing crops in different periods was analyzed. The results show that: (1) there were key differences between crops, with rice changing significantly in period F, corn in periods E, M, L, and H, and soybean in periods E, M, N, and H. (2) For the early identification of rice, the land surface water index (LSWI), simple ratio index (SR), B11, and normalized difference tillage index (NDTI) contributed most, while B11, normalized difference red-edge3 (NDRE3), LSWI, the green vegetation index (VIgreen), red-edge spectral index (RESI), and normalized difference red-edge2 (NDRE2) contributed greatly to corn and soybean identification. (3) Rice could be identified as early as 13 May, with PA and UA as high as 95%. Corn and soybeans were identified as early as 7 July, with PA and UA as high as 97% and 94%, respectively. (4) Recognition accuracy increased as more temporal features were added; GBDT and RF performed best in identifying the three crops early in the season. This study demonstrates the feasibility of using crop growth difference information for early crop recognition and provides a new approach to the problem. Full article
(This article belongs to the Special Issue Within-Season Agricultural Monitoring from Remotely Sensed Data)
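The core feature construction, pairwise differences of a spectral feature between early-season dates, is simple to sketch; this is an illustration of the idea, not the authors' implementation:

```python
import numpy as np

def difference_features(stack):
    """stack: (T, N) array of one spectral feature (e.g., a vegetation
    index) for N pixels at T early-season dates. Returns the pairwise
    between-date differences used as 'growth difference' features."""
    T = stack.shape[0]
    pairs = [(i, j) for i in range(T) for j in range(i + 1, T)]
    return np.stack([stack[j] - stack[i] for i, j in pairs])  # (T*(T-1)/2, N)

ndvi = np.array([[0.20, 0.25],    # date 1, two pixels
                 [0.35, 0.28],    # date 2
                 [0.60, 0.33]])   # date 3
print(difference_features(ndvi)) # three difference layers per pixel
```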
Show Figures

Figure 1: Geographical location of the study region.
Figure 2: Workflow for early crop identification based on multi-temporal difference information.
Figure 3: Mean spectral values of the three crops at different periods: (a) corn; (b) soybean; (c) rice.
Figure 4: Diagram of the multi-temporal crop discrimination information in the early crop stage of 2020.
Figure 5: Maximum OA changes of the classifiers across the different periods of 2020.
Figure 6: Maximum kappa coefficient changes of the classifiers across the different periods of 2020.
Figure 7: Optimal identification results for different periods of 2020: (a) 8 May; (b) 13 May; (c) 28 May; (d) 12 June; (e) 7 July.
Figure 8: Percentage of different feature types for rice in different periods of 2020.
Figure 9: Frequency of the key change periods on 13 May 2020.
Figure 10: Percentage of different feature types for corn in different periods of 2020.
Figure 11: Frequency of the key change periods on 7 July 2020.
Figure 12: Percentage of different feature types for soybeans in different periods of 2020.
Figure 13: Frequency of the key change periods on 7 July 2020.
12 pages, 3277 KiB  
Technical Note
Evaluation of Sea Surface Wind Products from Scatterometer Onboard the Chinese HY-2D Satellite
by Sheng Yang, Lu Zhang, Mingsen Lin, Juhong Zou, Bo Mu and Hailong Peng
Remote Sens. 2023, 15(3), 852; https://doi.org/10.3390/rs15030852 - 3 Feb 2023
Cited by 4 | Viewed by 1524
Abstract
The new Chinese marine dynamic environment satellite HY-2D was launched on 19 May 2021, carrying a Ku-band scatterometer (HSCAT-D). In this study, HSCAT-D wind products were validated against wind data from U.S. National Data Buoy Center (NDBC) buoys and the European Centre for Medium-Range Weather Forecasts (ECMWF) model. The statistics show that the HSCAT-D winds agree well with the buoy measurements: in comparison with buoy winds, the wind speed standard deviation (STD) and the root-mean-squared error (RMSE) of direction were 0.78 m/s and 14.10°, respectively. Wind data from other scatterometers were also used for comparison, including the HY-2B scatterometer (HSCAT-B), HY-2C scatterometer (HSCAT-C), and MetOp-B scatterometer (ASCAT-B). The errors of the HSCAT-D winds are smaller than those of HSCAT-C but slightly larger than those of HSCAT-B. Spectral analysis shows that the HSCAT-D wind products contain less small-scale information than ASCAT-B. Moreover, the Extended Triple Collocation (ETC) results show that the HSCAT-D wind product is of good quality and well calibrated. Given these encouraging validation results, we believe the HSCAT-D wind products will be useful to the scientific community. Full article
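A minimal sketch of the collocation statistics reported here (speed bias/STD and direction RMSE), with directions wrapped so that errors across 0°/360° are handled correctly; the sample values are illustrative:

```python
import numpy as np

def validation_stats(spd_sat, spd_buoy, dir_sat, dir_buoy):
    """Speed bias/STD and direction RMSE for collocated winds; direction
    differences are wrapped into [-180, 180) degrees so that, e.g.,
    359 deg vs 1 deg counts as a 2 deg error."""
    dspd = np.asarray(spd_sat) - np.asarray(spd_buoy)
    ddir = (np.asarray(dir_sat) - np.asarray(dir_buoy) + 180.0) % 360.0 - 180.0
    return {"speed_bias": dspd.mean(),
            "speed_std": dspd.std(ddof=1),
            "dir_rmse": float(np.sqrt(np.mean(ddir ** 2)))}

print(validation_stats([5.1, 7.8], [5.0, 8.2], [359.0, 10.0], [1.0, 20.0]))
```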
Show Figures

Figure 1: Locations of the collocated buoys.
Figure 2: Scatterplots of wind speed (a) and direction (b) derived from HSCAT-D versus NDBC buoy winds.
Figure 3: Bias of scatterometer wind speed (a) and direction (b) versus buoys from June 2021 to October 2022.
Figure 4: Wind speed probability distribution functions of each scatterometer (a) and PDFs of the wind speed deviation of the scatterometers vs. ECMWF data (b).
Figure 5: Scatterplots of wind speed (a) and direction (b) derived from HSCAT-D versus ECMWF winds.
Figure 6: Wind speed bias between scatterometers and model winds as a function of average wind speed (a), and the standard deviation of the wind direction difference between scatterometers and ECMWF as a function of average wind speed (b).
Figure 7: Cross-track variations in the comparisons between scatterometer winds and ECMWF winds: (a) STD of the wind speed differences; (b) STD of the wind direction differences.
Figure 8: Spectra of the HSCAT-B (red), HSCAT-C (green), HSCAT-D (black) and ASCAT-B (pink) winds and the ECMWF background wind (blue) for the (a) u-component and (b) v-component.
17 pages, 3981 KiB  
Article
Mapping Land Use/Land Cover Changes and Forest Disturbances in Vietnam Using a Landsat Temporal Segmentation Algorithm
by Katsuto Shimizu, Wataru Murakami, Takahisa Furuichi and Ronald C. Estoque
Remote Sens. 2023, 15(3), 851; https://doi.org/10.3390/rs15030851 - 3 Feb 2023
Cited by 13 | Viewed by 4979
Abstract
Accurately mapping land use/land cover changes (LULCC) and forest disturbances provides valuable information for understanding the influence of anthropogenic activities on the environment at regional and global scales. Many approaches using satellite remote sensing data have been proposed for characterizing these long-term changes. However, spatially and temporally consistent mapping of both LULCC and forest disturbances at medium spatial resolution is still limited, despite their critical contributions to the carbon cycle. In this study, we examined the applicability of Landsat time series temporal segmentation and random forest classifiers to mapping LULCC and forest disturbances in Vietnam. We used the LandTrendr temporal segmentation algorithm to derive key features of land use/land cover transitions and forest disturbances from annual Landsat time series data. We developed separate random forest models for classifying land use/land cover and detecting forest disturbances at each segment, and then derived LULCC and forest disturbances that coincided with each other during 1988–2019. Both the LULCC classification and the forest disturbance detection achieved low accuracy in several classes (e.g., producer's and user's accuracies of 23.7% and 78.8%, respectively, for the forest disturbance class); however, the accuracy was comparable to that of existing datasets assessed with the same reference samples in the study area. We found relatively high confusion between several land use/land cover classes (e.g., grass/shrub, forest, and cropland), which explains the lower overall accuracies of 67.6% and 68.4% in 1988 and 2019, respectively. The mapping of forest disturbances and LULCC suggested that most forest disturbances were followed by forest recovery, not by transitions to other land use/land cover classes. Landscape complexity and ephemeral forest disturbances contributed to the lower classification and detection accuracies in this study area. Nevertheless, the temporal segmentation and derived features from LandTrendr were useful for consistent mapping of LULCC and forest disturbances. We recommend that future studies focus on improving forest disturbance detection, especially in areas with subtle landscape changes, as well as land use/land cover classification in ambiguous and complex landscapes. More training samples and effective variables would potentially improve the classification and detection accuracies. Full article
(This article belongs to the Section Forest Remote Sensing)
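To illustrate the kind of per-segment features a LandTrendr-style segmentation yields for the classifiers, here is a small sketch (not the authors' implementation); the vertex years and index values are made up:

```python
def segment_features(vertex_years, vertex_values):
    """Derive per-segment features (duration, magnitude, rate) from the
    vertices of a temporal segmentation of an annual spectral index;
    features like these can feed the random forest models."""
    feats = []
    for (y0, y1), (v0, v1) in zip(zip(vertex_years, vertex_years[1:]),
                                  zip(vertex_values, vertex_values[1:])):
        dur = y1 - y0
        mag = v1 - v0
        feats.append({"start": y0, "end": y1, "duration": dur,
                      "magnitude": mag, "rate": mag / dur})
    return feats

# A stable segment, an abrupt disturbance, then a slow recovery.
print(segment_features([1988, 1995, 1996, 2019], [0.80, 0.75, 0.30, 0.70]))
```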
Show Figures

Figure 1: Study area in northern Vietnam. The tree canopy cover in 2000 from Hansen et al. [49] is overlaid on a digital elevation model from the Shuttle Radar Topography Mission [50]. The country boundary dataset was sourced from Global Administrative Areas, v3.4 [51].
Figure 2: Processing flow for mapping land use/land cover changes (LULCC) and forest disturbances.
Figure 3: Illustration of the LULC classification and forest disturbance detection based on the LandTrendr temporal segmentation.
Figure 4: (A) LULCC and forest disturbance mapping of 1988–2019 for the entire study area. (B) Subset map of LULC in 1988. (C) Subset map of LULC in 2019. (D) Subset map of forest disturbances colored by disturbance year. (E) Subset map of the 2019 Landsat RGB composite.
13 pages, 3867 KiB  
Technical Note
A Survey on SAR and Optical Satellite Image Registration
by Oscar Sommervold, Michele Gazzea and Reza Arghandeh
Remote Sens. 2023, 15(3), 850; https://doi.org/10.3390/rs15030850 - 3 Feb 2023
Cited by 23 | Viewed by 7298
Abstract
After decades of research, automatic synthetic aperture radar (SAR)-optical registration remains an unsolved problem. SAR and optical satellites use different imaging mechanisms, producing imagery with dissimilar, heterogeneous characteristics, and transforming these characteristics into a shared domain has been the main challenge in SAR-optical matching for many years. Combining the two sensors would improve the quality of existing and future remote sensing applications across multiple industries. Several approaches have emerged as promising candidates for combining SAR and optical imagery, and recent research indicates that machine learning-based approaches have great potential for filling the information gap left by using only one sensor type in Earth observation applications. However, several challenges remain, and combining the two modalities is a multi-step process with no one-size-fits-all approach. This article reviews traditional, state-of-the-art, and emerging trends in SAR-optical co-registration methods. Full article
(This article belongs to the Section Remote Sensing Image Processing)
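The sliding-window similarity search of Figure 2 is easy to express directly; below is a brute-force normalized cross-correlation (NCC) sketch in Python, without the FFT acceleration a production matcher would use:

```python
import numpy as np

def ncc_map(reference, template):
    """Brute-force NCC between an optical reference and a smaller SAR
    template (the sliding-window scheme of Figure 2). With a 256x256
    reference and a 192x192 template the map is 65x65 = 4225 scores."""
    rh, rw = reference.shape
    th, tw = template.shape
    t = (template - template.mean()) / (template.std() + 1e-12)
    out = np.empty((rh - th + 1, rw - tw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            win = reference[i:i + th, j:j + tw]
            w = (win - win.mean()) / (win.std() + 1e-12)
            out[i, j] = (w * t).mean()   # correlation score in [-1, 1]
    return out

rng = np.random.default_rng(4)
ref = rng.random((64, 64))
tpl = ref[20:52, 12:44]                  # exact 32x32 crop as the template
print(np.unravel_index(ncc_map(ref, tpl).argmax(), (33, 33)))  # -> (20, 12)
```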
Show Figures

Graphical abstract
Figure 1: Step-by-step example of optical-optical image matching: (a) two optical images; (b) features detected by the SIFT descriptor on each image separately; (c) features matched between the two images after evaluating the feature correspondence; (d) affine correction and image alignment. The images are taken from the SEN1-2 dataset [7], a benchmark dataset for SAR-optical co-registration, sampled around the world and in all meteorological seasons; images are acquired from Sentinel-1 (SAR) and Sentinel-2 (optical) at 10 m/pixel resolution.
Figure 2: Computing image similarity between the smaller SAR template (T) and the larger optical reference image (R) by iteratively moving the template across the reference image. Although precise, the sliding-window technique is computationally expensive: with a 256 × 256 reference image and a 192 × 192 template, the resulting similarity map requires 4225 comparisons of 192 × 192 images. The example image is taken from the SEN1-2 dataset.
Figure 3: A general Siamese template-matching architecture. The red arrow indicates the possible weight sharing between the two CNN-based feature extractors. Since the CNNs translate the images into a homogeneous space, similarity metrics previously shown to be inadequate, such as SSIM and NCC, can be used to assess the similarity between the CNN feature maps. The satellite images are taken from the SEN1-2 dataset.
Figure 4: Siamese-CNN architecture. The dilation rate of the dilated convolutions is denoted along the z-axis. Abbreviations: convolution (Conv), batch normalization (BN), rectified linear unit (ReLU).
Figure 5: CNN architecture of the Deep Dense Feature Network.
Figure 6: The U-Net architecture adopted in the FFT U-Net paper, shown here in a shallower configuration for illustration; the actual network has four encoder-decoder stages.
20 pages, 29608 KiB  
Technical Note
Construction of a Database of Pi-SAR2 Observation Data by Calibration and Scattering Power Decomposition Using the ABCI
by Yuya Arima, Toshifumi Moriyama, Yoshio Yamaguchi, Ryosuke Nakamura, Chiaki Tsutsumi and Shoichiro Kojima
Remote Sens. 2023, 15(3), 849; https://doi.org/10.3390/rs15030849 - 2 Feb 2023
Cited by 1 | Viewed by 1981
Abstract
Pi-SAR2 is an airborne polarimetric synthetic aperture radar operated by the National Institute of Information and Communications Technology. The polarimetric observation data of Pi-SAR2 are very valuable because of their high resolution, but they cannot be used effectively because the data are not well calibrated with respect to elevation. We therefore calibrated the data according to the observation conditions. The Pi-SAR2 observation data are very large owing to their high resolution and require substantial computational resources to process. We used the AI Bridging Cloud Infrastructure (ABCI), constructed and operated by the National Institute of Advanced Industrial Science and Technology, for this processing. This paper reports on the calibration, scattering power decomposition, and orthorectification of the Pi-SAR2 observation data using the ABCI. Full article
(This article belongs to the Special Issue SAR, Interferometry and Polarimetry Applications in Geoscience)
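One calibration diagnostic used here, the HV/VH phase difference Arg(Z_HV·Z_VH*) mapped in Figure 6, reduces to a one-line estimator; the synthetic check below is illustrative, not Pi-SAR2 data:

```python
import numpy as np

def crosspol_phase_bias(z_hv, z_vh):
    """Mean HV/VH phase difference, Arg(Z_HV * conj(Z_VH)), from complex
    SLC samples. By reciprocity the cross-polarized channels should be
    in phase, so a nonzero mean indicates a channel imbalance to
    calibrate out."""
    z_hv, z_vh = np.asarray(z_hv), np.asarray(z_vh)
    return np.angle(np.mean(z_hv * np.conj(z_vh)))

# Synthetic check: inject a 30 degree bias and recover it.
rng = np.random.default_rng(2)
truth = rng.standard_normal(1000) + 1j * rng.standard_normal(1000)
biased = truth * np.exp(1j * np.radians(30))
print(np.degrees(crosspol_phase_bias(biased, truth)))  # ~30.0
```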
Show Figures

Figure 1: Process flow.
Figure 2: MGP_VVm image around Mount Hakone on 3 December 2015 (center: N35.218, E139.031).
Figure 3: Observation geometry of Pi-SAR2: (a) interferometric observation; (b) polarimetric observation.
Figure 4: MGP_H_VVm image around Mount Hakone on 3 December 2015 (center: N35.218, E139.031).
Figure 5: Scattering power decomposition image with corner reflectors on a beach near Niigata Univ. on 21 August 2014 (center: N37.881, E138.951).
Figure 6: Phase difference of HV and VH around Mount Hakone on 3 December 2015 (center: N35.218, E139.031): (a) MGP_H_HVm; (b) MGP_H_VHm; (c) Arg(Z_HV·Z_VH*).
Figure 7: Phase differences of HV and VH for each observation series: (a) 2011-02-22; (b) 2011-08-26; (c) 2011-10-06; (d) 2012-01-11; (e) 2012-11-02; (f) 2013-01-11; (g) 2013-08-26; (h) 2013-10-17; (i) 2014-08-22; (j) 2015-02-12; (k) 2015-02-05; (l) 2015-12-06; (m) 2016-04-17; (n) 2016-10-13; (o) 2017-11-14.
Figure 8: MGP_C_VVm image around Mount Hakone on 3 December 2015 (center: N35.218, E139.031).
Figure 9: G4U image before calibration around Mount Hakone on 3 December 2015 (center: N35.218, E139.031).
Figure 10: G4U image after calibration around Mount Hakone on 3 December 2015 (center: N35.218, E139.031).
Figure 11: 6SD image before calibration around Mount Hakone on 3 December 2015 (center: N35.218, E139.031).
Figure 12: 6SD image after calibration around Mount Hakone on 3 December 2015 (center: N35.218, E139.031).
Figure 13: Another G4U image before calibration near Niigata Univ. on 25 August 2013 (center: N37.852, E138.965).
Figure 14: Another G4U image after calibration near Niigata Univ. on 25 August 2013 (center: N37.852, E138.965).
Figure 15: Another 6SD image before calibration near Niigata Univ. on 25 August 2013 (center: N37.852, E138.965).
Figure 16: Another 6SD image after calibration near Niigata Univ. on 25 August 2013 (center: N37.852, E138.965).
Figure 17: Observation geometry of Pi-SAR2.
Figure 18: Elevation extracted from the GSI DEM around Mount Hakone.
Figure 19: VVm image after orthorectification around Mount Hakone on 3 December 2015 (center: N35.218, E139.031).
28 pages, 7900 KiB  
Article
Two-Branch Convolutional Neural Network with Polarized Full Attention for Hyperspectral Image Classification
by Haimiao Ge, Liguo Wang, Moqi Liu, Yuexia Zhu, Xiaoyu Zhao, Haizhu Pan and Yanzhong Liu
Remote Sens. 2023, 15(3), 848; https://doi.org/10.3390/rs15030848 - 2 Feb 2023
Cited by 17 | Viewed by 3004
Abstract
In recent years, convolutional neural networks (CNNs) have been introduced for pixel-wise hyperspectral image (HSI) classification. However, several problems remain insufficiently addressed, such as the receptive field, small sample sizes, and feature fusion. To tackle these problems, we propose a two-branch convolutional neural network with a polarized full attention mechanism for HSI classification. In the proposed network, two CNN branches efficiently extract the spectral and spatial features, respectively. The kernel sizes of the convolutional layers are simplified to reduce network complexity, which makes the network easier to train and better suited to small-sample conditions. The one-shot connection technique is applied to improve the efficiency of feature extraction, and an improved full attention block, named polarized full attention, fuses the feature maps and provides global contextual information. Experimental results on several public HSI datasets confirm the effectiveness of the proposed network. Full article
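The one-shot connection of Figure 1c is simple to sketch in PyTorch: every convolution feeds the next, and all intermediate outputs are concatenated once at the end of the block, rather than at every layer as in a dense block. Channel count and depth below are illustrative, not the paper's configuration:

```python
import torch
import torch.nn as nn

class OneShotBlock(nn.Module):
    """One-shot connection (cf. Figure 1c): convs are chained, and all
    intermediate outputs are concatenated once at the block's end,
    unlike dense connections, which concatenate at every layer."""
    def __init__(self, channels=32, depth=3):
        super().__init__()
        self.convs = nn.ModuleList([
            nn.Sequential(nn.Conv2d(channels, channels, 3, padding=1),
                          nn.BatchNorm2d(channels),
                          nn.ReLU(inplace=True))
            for _ in range(depth)])
        self.fuse = nn.Conv2d(channels * (depth + 1), channels, 1)  # 1x1 fusion

    def forward(self, x):
        outs = [x]
        for conv in self.convs:
            outs.append(conv(outs[-1]))           # chain: each conv feeds the next
        return self.fuse(torch.cat(outs, dim=1))  # one-shot concatenation

print(OneShotBlock()(torch.randn(2, 32, 9, 9)).shape)  # torch.Size([2, 32, 9, 9])
```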
Show Figures

Figure 1: Illustration of the residual, dense, and one-shot connections: (a) residual connection; (b) dense connection; (c) one-shot connection.
Figure 2: Details of the fully attentional block. In the implementation, H × W is the spatial size of the input feature map, with H equal to W; S denotes the dimension after the merge operator for clarity of illustration, with S equal to H and W. (a) Workflow of the fully attentional block. (b) Workflow of the construction block.
Figure 3: Structure of the proposed network: (a) workflow of the PTCN; (b) workflow of the PFLA.
Figure 4: Heatmap of the normalized confusion matrix for the UP dataset: (a) SVM; (b) DBMA; (c) DBDA; (d) PCIA; (e) SSGC; (f) OSDN; (g) PTCN.
Figure 5: Full-factor classification maps for the UP dataset: (a) false-color map; (b) ground-truth map; (c) SVM; (d) DBMA; (e) DBDA; (f) PCIA; (g) SSGC; (h) OSDN; (i) PTCN.
Figure 6: As in Figure 4, but for the HH dataset.
Figure 7: As in Figure 5, but for the HH dataset.
Figure 8: As in Figure 4, but for the JX dataset.
Figure 9: As in Figure 5, but for the JX dataset.
Figure 10: As in Figure 4, but for the HU dataset.
Figure 11: As in Figure 5, but for the HU dataset.
Figure 12: OAs of the methods under different training sample proportions: (a) UP; (b) HH; (c) JX; (d) HU.
Figure 13: Investigation of the spatial patch size of the PTCN.
Figure 14: Investigation of the number of PCA components of the PTCN.
Figure 15: Ablation experiments of the PTCN on the HSI datasets: (a) two-branch network; (b) one-shot connection; (c) self-attention block; (d) FLAT and PFLAT.
17 pages, 12730 KiB  
Article
Coupling Flank Collapse and Magma Dynamics on Stratovolcanoes: The Mt. Etna Example from InSAR and GNSS Observations
by Giuseppe Pezzo, Mimmo Palano, Lisa Beccaro, Cristiano Tolomei, Matteo Albano, Simone Atzori and Claudio Chiarabba
Remote Sens. 2023, 15(3), 847; https://doi.org/10.3390/rs15030847 - 2 Feb 2023
Cited by 10 | Viewed by 3337
Abstract
Volcano ground deformation is a tricky puzzle in which different phenomena contribute to the surface displacements with different spatial-temporal patterns. We documented highly variable deformation patterns in response to the different volcanic and seismic activities at Mt. Etna over the January 2015–March 2021 period by exploiting an extensive dataset of GNSS and InSAR observations. The most spectacular pattern is the superfast seaward motion of the eastern flank. We also observed a rare flank motion reversal, indicating that short-term contraction of the volcano occasionally overcomes the gravity-controlled sliding of the eastern flank. Conversely, fast dike intrusion accelerates the sliding flank, which could evolve into sudden collapse, fault creep, and seismic release, increasing the hazard. A better understanding of these interactions is relevant for short-term scenarios, allowing a tentative forecast of the amount of magma accumulating within the plumbing system. Full article
(This article belongs to the Special Issue Assessment and Prediction of Volcano Hazard Using Remote Sensing)
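The horizontal and vertical velocity maps of Figures 5 and 6 come from combining ascending and descending LoS velocities; a minimal sketch of that per-pixel inversion follows, neglecting the poorly constrained north component. The unit-vector values are typical of right-looking C-band geometry and are assumptions, not the paper's numbers:

```python
import numpy as np

def los_to_east_up(v_los_asc, v_los_desc, e_asc, u_asc, e_desc, u_desc):
    """Invert two LoS velocities (positive toward the satellite) for the
    east and up components, neglecting the north component, which
    near-polar SAR orbits barely sense. e_*, u_* are the east and up
    entries of each LoS unit vector."""
    A = np.array([[e_asc, u_asc],
                  [e_desc, u_desc]])
    return np.linalg.solve(A, np.array([v_los_asc, v_los_desc]))

# Ascending looks east (satellite west of the target: e < 0), descending
# the opposite; ~39 deg incidence gives |e| ~ sin(39), u ~ cos(39).
v_east, v_up = los_to_east_up(0.010, -0.020, -0.63, 0.78, 0.63, 0.78)
print(f"east: {v_east:+.4f} m/yr, up: {v_up:+.4f} m/yr")
```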
Show Figures

Graphical abstract

Graphical abstract
Full article ">Figure 1
<p>(<b>a</b>) Tectonic map of central Mediterranean. Abbreviations are: ME, Mt. Etna; MEFS, Malta Escarpment fault system. (<b>b</b>) Simplified geologic map of Mt. Etna. Legend: (1) volcanic rocks; (2) Early Quaternary clays; (3) Pre-Quaternary sedimentary rocks. Continuous GNSS stations are reported as red diamonds. Abbreviations are as follows: NE-Rift, North-East Rift; W-Rift, Western Rift; SE-Rift, South-East Rift; P-PF, Provenzana-Pernicana fault; TFS, Timpe fault system; SVF, Santa Venerina fault; STF, Santa Tecla fault; FF, Fiandaca fault; NF, Nizzeti fault system; TF, Trecastagni fault; MTF, Mascalucia-Tremestieri fault; ATF, Aci Trezza fault; RF, Ragalna fault; ESEL, ESE lineament.</p>
Figure 2: Timeline cartoon reporting the main deforming events and periods, from T1 to T4, as described in the main text.
Figure 3: Ascending (panels a,c) and descending (panels b,d) LoS velocity maps relative to the T1 (18 January 2015–22 December 2018) and T2 (28 December 2018–2 June 2019) time intervals. Positive values (blue colors) represent scatterers approaching the satellite; negative values (red colors) represent scatterers moving away. Main faults are also reported as black lines.
Figure 4: Ascending (panels a,c) and descending (panels b,d) velocity maps relative to the T3 (2 June 2019–15 February 2021) and T4 (15 February 2021–29 March 2021) time intervals. Positive values (blue colors) represent scatterers approaching the satellite; negative values (red colors) represent scatterers moving away. Main faults are also reported as black lines.
Figure 5: Horizontal (panels a,c) and vertical (panels b,d) component velocity maps relative to the T1 (18 January 2015–22 December 2018) and T2 (28 December 2018–2 June 2019) time intervals. Positive values (blue colors) represent scatterers moving eastward in panels (a,c) and uplifting in panels (b,d); negative values (red colors) represent scatterers moving westward in panels (a,c) and subsiding in panels (b,d). GNSS velocity vectors of the horizontal and vertical components for the same time period are reported in each panel as black arrows.
Figure 6: Horizontal (panels a,c) and vertical (panels b,d) component velocity maps relative to the T3 (2 June 2019–15 February 2021) and T4 (15 February 2021–29 March 2021) time intervals. Positive values (blue colors) represent scatterers moving eastward in panels (a,c) and uplifting in panels (b,d); negative values (red colors) represent scatterers moving westward in panels (a,c) and subsiding in panels (b,d). GNSS velocity vectors of the horizontal and vertical components for the same time period are reported in each panel as black arrows.
24 pages, 9088 KiB  
Article
A Modified 2-D Notch Filter Based on Image Segmentation for RFI Mitigation in Synthetic Aperture Radar
by Zewen Fu, Hengrui Zhang, Jianhui Zhao, Ning Li and Fengbin Zheng
Remote Sens. 2023, 15(3), 846; https://doi.org/10.3390/rs15030846 - 2 Feb 2023
Cited by 10 | Viewed by 3116
Abstract
Synthetic aperture radar (SAR), as an active microwave sensor, inevitably receives radio frequency interference (RFI) generated by various electromagnetic equipment. When the SAR system receives RFI, SAR imaging is degraded and the application of SAR images is limited. Among RFI mitigation methods, notch filtering is a classical approach with high efficiency and robust performance. However, existing notch filtering methods pay no attention to protecting the useful signals. This paper proposes a modified 2-D notch filter based on image segmentation for RFI mitigation with a signal-protection capability. (1) The adaptive gamma correction (AGC) approach is utilized to enhance the SAR image with RFI in the range-frequency and azimuth-time domain. (2) The modified selective binary and Gaussian filtering regularized level set (SBGFRLS) model is utilized to further process the image after AGC and accurately extract the contour of the useful signals contaminated by interference, which helps protect the interference-free useful signals. (3) The Generalized Singular Value Thresholding (GSVT)-based low-rank sparse decomposition (LRSD) model is utilized to separate the RFI signals from the useful signals. The useful signals are then restored to the raw data. Simulation experiments and measured data experiments show that the proposed method can effectively mitigate RFI and protect the useful signals, whether the RFI originates from a single source or multiple sources. Full article
(This article belongs to the Special Issue SAR-Based Signal Processing and Target Recognition)
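As a simplified companion to step (3), the sketch below performs a low-rank plus sparse decomposition by alternating singular value thresholding and soft thresholding: the RFI-dominated component is modeled as low-rank and the remainder as sparse. It uses plain SVT rather than the paper's GSVT, and the thresholds and synthetic data are illustrative only.

```python
# Simplified LRSD sketch: RFI tends to be low-rank in the range-frequency /
# azimuth-time plane, while the useful signal is treated as the residual.
import numpy as np

def svt(M, tau):
    """Singular value thresholding: shrink singular values by tau."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

def soft(M, tau):
    """Elementwise soft thresholding."""
    return np.sign(M) * np.maximum(np.abs(M) - tau, 0.0)

def lrsd(D, tau_l=1.0, tau_s=0.1, n_iter=50):
    L = np.zeros_like(D)          # low-rank part (RFI estimate)
    for _ in range(n_iter):
        S = soft(D - L, tau_s)    # sparse part (useful-signal estimate)
        L = svt(D - S, tau_l)
    return L, S

rng = np.random.default_rng(0)
D = np.outer(rng.standard_normal(64), rng.standard_normal(128))  # rank-1 "RFI"
D += (rng.random((64, 128)) < 0.05) * rng.standard_normal((64, 128)) * 3
L, S = lrsd(D)
print(np.linalg.matrix_rank(L, tol=1e-6), np.count_nonzero(S))
```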
Figure 1: Common sources of RFI.
Figure 2: The 3-dimensional diagram in the range-frequency and azimuth-time domain. (a) The data containing RFI with a single source. (b) The data containing RFI with multiple sources.
Figure 3: Structural analysis of RFI in the range-frequency domain. (a) Spectrogram of SAR echoes contaminated by RFI. (b) Eigenvalue sequences and analysis corresponding to (a).
Figure 4: Flowchart of the proposed method.
Figure 5: Original SAR image.
Figure 6: RFI mitigation performance for the FNF, TNF, and proposed methods under different SINR conditions.
Figure 7: ROIs in Figure 6. (a–d) Mitigation results for the FNF method. (e–h) Mitigation results for the TNF method. (m–p) Mitigation results for the proposed method.
Figure 8: SAR image of the measured Sentinel-1 data. (a) The RFI-corrupted burst with a single source. (b) The RFI-corrupted burst with multiple sources.
Figure 9: The image after the modified SBGFRLS model in the case of RFI with a single source.
Figure 10: The 3-dimensional diagram in the range-frequency and azimuth-time domain after RFI mitigation: (a) FNF; (b) the proposed method.
Figure 11: The RFI mitigation results for the SAR data containing RFI with a single source: (a) FNF; (b) TNF; (c) the proposed method.
Figure 12: The image after the modified SBGFRLS model in the case of RFI with multiple sources.
Figure 13: The 3-dimensional diagram in the range-frequency and azimuth-time domain after RFI mitigation: (a) FNF; (b) the proposed method.
Figure 14: The RFI mitigation results for the SAR data containing RFI with multiple sources: (a) FNF; (b) TNF; (c) the proposed method.
20 pages, 4583 KiB  
Article
Very High Resolution Automotive SAR Imaging from Burst Data
by Mattia Giovanni Polisano, Marco Manzoni, Stefano Tebaldini, Andrea Monti-Guarnieri, Claudio Maria Prati and Ivan Russo
Remote Sens. 2023, 15(3), 845; https://doi.org/10.3390/rs15030845 - 2 Feb 2023
Cited by 10 | Viewed by 2332
Abstract
This paper proposes a method for efficient and accurate removal of grating lobes in automotive Synthetic Aperture Radar (SAR) images. Grating lobes can be mistaken for real targets, inducing false alarms in the target detection procedure. Grating lobes are present whenever SAR focusing is performed using data acquired on a non-continuous basis. This kind of acquisition is typical in the automotive scenario, where regulations do not allow continuous operation of the radar. Radar pulses are thus transmitted and received in bursts, leading to a signal spectrum containing gaps. We start by deriving a suitable reference frame in which SAR images are focused. It is shown that working in this coordinate system is particularly convenient, since it yields a signal spectrum that is space-invariant, with spectral gaps described by a simple one-dimensional function. After an inter-burst calibration step, we exploit these spectral characteristics of the signal by implementing a compressive sensing algorithm aimed at removing grating lobes. The proposed approach is validated using real data acquired by an eight-channel automotive radar operating in burst mode at 77 GHz. Results demonstrate the practical possibility of processing a synthetic aperture up to 2 m long, reaching in this way extremely fine angular resolutions. Full article
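The toy sketch below reproduces the core phenomenon and a simple remedy: a gapped, burst-mode aperture produces grating lobes under plain matched filtering, while a greedy sparse-recovery pass (OMP) over the same steering dictionary pins down the true direction. The geometry, burst timing, and solver are illustrative assumptions and not the paper's compressive sensing algorithm.

```python
# Toy sketch: grating lobes from a gapped (burst-mode) aperture and a greedy
# sparse-recovery pass. All parameters are toy values, not the radar's.
import numpy as np

lam = 3e8 / 77e9                       # wavelength at 77 GHz
x = np.arange(0, 512) * lam / 4        # uniform aperture positions
mask = (np.arange(512) % 128) < 48     # on/off bursts -> gapped aperture
u_true = 0.3                           # target direction (sin of angle)
y = np.exp(2j * np.pi * x[mask] / lam * 2 * u_true)  # two-way phase history

u_grid = np.linspace(-0.6, 0.6, 601)
A = np.exp(2j * np.pi * np.outer(x[mask], u_grid) / lam * 2)  # steering dict

# Matched filter: the gaps create near-equal grating lobes around the peak.
bf = np.abs(A.conj().T @ y)
print("matched-filter peak at u =", u_grid[np.argmax(bf)])

def omp(A, y, k=3):
    """Orthogonal matching pursuit: pick k atoms greedily."""
    r, idx = y.copy(), []
    for _ in range(k):
        idx.append(int(np.argmax(np.abs(A.conj().T @ r))))
        sub = A[:, idx]
        coef, *_ = np.linalg.lstsq(sub, y, rcond=None)
        r = y - sub @ coef
    return idx, coef

idx, coef = omp(A, y)
print("true u:", u_true, "recovered u:", u_grid[idx[0]])
```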
Graphical abstract
Figure 1: (a) Real array configuration for an 8-channel automotive radar, made up of two transmitting antennas (in blue) and four receiving antennas (in red) in forward-looking configuration. (b) Virtual array for an 8-channel automotive radar spaced by λ/4 in forward-looking configuration.
Figure 2: Uniform linear array with spacing distance d = λ/4 in forward-looking SAR configuration, with a reference point target p0 moving at a speed v with a pulse repetition interval PRI.
Figure 3: Grating lobe suppression workflow block diagram.
Figure 4: Burst acquisition mode representation: while the vehicle is moving, the radar works for a period of time Ton = N × PRI, where N is the number of slow-time samples for each burst, and it is silent for a period of time Toff.
Figure 5: Back-projection grid representation in a Cartesian reference system for a forward-looking automotive SAR.
Figure 6: Back-projection grid representation in a polar reference system for a forward-looking automotive SAR.
Figure 7: (a) Scene focused in the (r,e) reference system. (b) CLEAN-processed image; the image is much less dense since the grating lobes have been removed. (c) Spatial spectrum image; for each burst there is a column of data, which is space-invariant in the validity region of the CSA.
Figure 8: (a) Image geocoded into the Cartesian reference system. (b) Image geocoded into the Cartesian reference system without the grating lobes.
Figure 9: Optical reference taken from a camera placed on top of the vehicle.
Figure 10: (a) SAR image of a gate detail before the grating lobe suppression algorithm. (b) SAR image of a gate detail after the grating lobe suppression algorithm. (c) Optical reference of the gate detail.
Figure 11: (a) SAR image of a car detail before the grating lobe suppression algorithm. (b) SAR image of a car detail after the grating lobe suppression procedure. (c) Optical reference of the car detail.
Figure 12: (a) Car detail of a SAR image generated using the synthetic aperture of a single burst length. (b) Car detail of a SAR image generated using the incoherent sum of the three bursts. (c) Car detail of a SAR image generated using the coherent sum of the three bursts. (d) Car detail of a SAR image generated using the grating lobe removal procedure.
Figure 13: (a) Gate detail of a SAR image generated with the synthetic aperture of a single burst length. (b) Gate detail of a SAR image generated using the incoherent sum of the three bursts. (c) Gate detail of a SAR image generated using the coherent sum of the three bursts. (d) Gate detail of a SAR image generated using the grating lobe removal procedure.
26 pages, 11788 KiB  
Article
Mountain Tree Species Mapping Using Sentinel-2, PlanetScope, and Airborne HySpex Hyperspectral Imagery
by Marcin Kluczek, Bogdan Zagajewski and Tomasz Zwijacz-Kozica
Remote Sens. 2023, 15(3), 844; https://doi.org/10.3390/rs15030844 - 2 Feb 2023
Cited by 22 | Viewed by 5435
Abstract
Europe’s mountain forests, which are naturally valuable areas due to their high biodiversity and well-preserved natural characteristics, are experiencing major alterations, so an important component of monitoring is obtaining up-to-date information concerning species composition, extent, and location. An important aspect of mapping tree stands is the selection of remote sensing data, which vary in temporal, spectral, and spatial resolution, as well as in open or commercial access. For the Tatra Mountains area, which is a unique alpine ecosystem in central Europe, we classified 13 woody species by iterative machine learning methods, using random forest (RF) and support vector machine (SVM) algorithms trained on more than 1000 polygons collected in the field. For this task, we used free Sentinel-2 multitemporal satellite data (10 m pixel size, 12 spectral bands, and 21 acquisition dates), commercial PlanetScope data (3 m pixel size, 8 spectral bands, and 3 acquisition dates), and airborne HySpex hyperspectral data (2 m pixel size, 430 spectral bands, and a single acquisition), fused with topographic derivatives based on Shuttle Radar Topography Mission (SRTM) and airborne laser scanning (ALS) data. The iterative classification method achieved the highest F1-score with HySpex (0.95 RF; 0.92 SVM) imagery, but the multitemporal Sentinel-2 data cube, which consisted of 21 scenes, offered comparable results (0.93 RF; 0.89 SVM). The three images of the high-resolution PlanetScope produced slightly less accurate results (0.89 RF; 0.87 SVM). Full article
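A minimal sketch of the iterative evaluation idea follows: repeated stratified 50:50 train/validation splits, one random forest fit per iteration, and aggregated F1-scores. The feature matrix is synthetic, standing in for the multitemporal band stack and topographic derivatives, and 20 iterations are used here instead of the paper's 100.

```python
# Sketch of the iterative 50:50 evaluation scheme with a random forest.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import f1_score

rng = np.random.default_rng(1)
X = rng.standard_normal((3000, 25))   # pixels x features (synthetic stand-in)
y = rng.integers(0, 13, size=3000)    # 13 woody species labels
X += y[:, None] * 0.15                # make the classes weakly separable

scores = []
for seed in range(20):                # the paper repeats this 100 times
    Xtr, Xva, ytr, yva = train_test_split(X, y, test_size=0.5,
                                          stratify=y, random_state=seed)
    clf = RandomForestClassifier(n_estimators=200, random_state=seed)
    clf.fit(Xtr, ytr)
    scores.append(f1_score(yva, clf.predict(Xva), average="macro"))
print(f"macro F1: {np.mean(scores):.3f} +/- {np.std(scores):.3f}")
```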
Graphical abstract
Figure 1: Location of the study area (right image: Sentinel-2 RGB composition acquired 21 July 2022; ESA Copernicus Open Access Hub).
Figure 2: Broadleaf (A) and coniferous (B) forest stands in the Tatra Mountains. Photo credit: M. Kluczek, 2022.
Figure 3: Sentinel-2 cloudless acquisition dates for the years 2018–2022, depending on the vegetation season; the start and end dates of vegetation differed in the analyzed years. Source: calculated based on ERA5 climate reanalysis data [34]. In the case of the HySpex imagery, the area of the park was acquired in different periods due to frequent cloud cover, the size of the area, and significant elevation differences (which required flightlines at different altitudes to acquire comparable pixel sizes for the whole area).
Figure 4: Main steps of the processing procedure. As input data, we used: (1) airborne hyperspectral HySpex, thermal images, and LiDAR point cloud data; (2) multitemporal Sentinel-2, PlanetScope images, and Shuttle Radar Topography Mission (SRTM) satellite data; and (3) field-verified patterns of dominant tree species as reference data. First, we optimized the parameters of the random forest and support vector machine classifiers. Second, iterative classification was performed, repeating the procedure 100 times based on the prepared training and validation (50:50) sets. As output variables, we determined the importance of the used satellite spectral bands, obtained maps of tree species, and calculated their classification accuracies.
Figure 5: Location of polygons acquired during field campaigns (digital elevation model source: Shuttle Radar Topography Mission, NASA; background image: Sentinel-2 RGB composition, 21 July 2022, ESA).
Figure 6: F1-score aggregated for all tree species (background classes excluded) depending on the number of training pixels per class: random forest (RF) and support vector machine (SVM) classifiers; 100 iterations. IQR, interquartile range; Q1, lower quartile; Q3, upper quartile; TF, topographic features (digital elevation model, canopy height model, and slope and aspect maps).
Figure 7: Impact of variables on the improvement in mean F1-score values for 100 iterations. Explanation: TF, topographic features (digital elevation model, canopy height model, and slope and aspect maps); TIR, thermal infrared.
Figure 8: F1-score values for classes based on 100 iterations for 700 pixels using random forest as the classifier. IQR, interquartile range; Q1, lower quartile; Q3, upper quartile; TF, topographic features (digital elevation model, canopy height model, and slope and aspect maps).
Figure 9: F1-score values for classes based on 100 iterations for 700 pixels with support vector machine as the classifier. IQR, interquartile range; Q1, lower quartile; Q3, upper quartile; TF, topographic features (digital elevation model, canopy height model, and slope and aspect maps).
Figure 10: Comparison of single-date classification (19 May 2022) between Sentinel-2 and PlanetScope data. F1-score values for dominant tree species based on 100 iterations. TF, topographic features based on Shuttle Radar Topography Mission data: digital elevation model, and slope and aspect maps.
Figure 11: Map of the occurrence of dominant woody species in the study area. Classification based on multitemporal Sentinel-2 data with Shuttle Radar Topography Mission derivatives (digital elevation model, and slope and aspect maps) and the random forest classifier.
Figure 12: Comparison of the obtained classification maps based on HySpex hyperspectral images, PlanetScope, and Sentinel-2 data, with topographic features (digital elevation model, canopy height model, and slope and aspect maps).
28 pages, 6416 KiB  
Article
Urbanization Trends Analysis Using Hybrid Modeling of Fuzzy Analytical Hierarchical Process-Cellular Automata-Markov Chain and Investigating Its Impact on Land Surface Temperature over Gharbia City, Egypt
by Eman Mostafa, Xuxiang Li and Mohammed Sadek
Remote Sens. 2023, 15(3), 843; https://doi.org/10.3390/rs15030843 - 2 Feb 2023
Cited by 18 | Viewed by 2524
Abstract
Quick population increase and the desire for urbanization are the main drivers accelerating urban expansion onto agricultural lands in Egypt. This issue is obvious in governorates with no desert backyards. This study aims to (1) explore the trend of Land Use Land Cover Change (LULCC) through the period of 1991–2018; (2) upgrade the reliability of predicting LULCC by integrating the Cellular Automata (CA)-Markov chain and fuzzy analytical hierarchy process (FAHP); and (3) analyze the impact of urbanization on LST trends over the Gharbia governorate so that decision makers can implement effective strategies for sustainable land use. Multi-temporal Landsat images were used to monitor LULCC dynamics from 1991 to 2018 and then simulate LULCC in 2033 and 2048. Two comparable models were adopted for the simulation of spatiotemporal land use dynamics in the study area: the CA-Markov chain and a hybrid FAHP-CA-Markov chain model. The second model upgrades the predictive potential of the CA-Markov chain by integrating it with FAHP, which can determine the locations with high potential to be urbanized. The outcomes revealed significant LULCC in Gharbia during the study period, specifically urban sprawl on agricultural land, and this trend is predicted to continue. The agricultural sector represented 91.2% of the area in 1991 and was reduced to 83.7% in 2018. The built-up area almost doubles by 2048 with respect to 2018. The regression analysis revealed an LST increase due to urbanization, causing an urban heat island phenomenon. Criteria-based analysis reveals each district's vulnerability to rapid urbanization and is efficient for zones with data gaps. The FAHP-CA-Markov model simulated LULCC plausibly, considering the driving forces of LULCC, while the plain CA-Markov chain results were relatively random; the hybrid FAHP-CA-Markov chain should therefore be relied upon for future projection. The findings of this work provide a better understanding of LULCC trends over the years, supporting decision makers toward sustainable land use. Thus, further urbanization should be planned to avert the loss of agricultural land and continually increasing temperatures. Full article
(This article belongs to the Special Issue Urban Planning Supported by Remote Sensing Technology)
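The sketch below illustrates the Markov half of the hybrid model: a class transition matrix is estimated from two co-registered LULC maps and used to project future class proportions. The maps and class codes are synthetic, equal time steps are assumed, and the CA spatial-allocation step (guided by FAHP suitability in the paper) is omitted.

```python
# Sketch of the Markov-chain step of CA-Markov: transition matrix from two
# LULC maps, then projection of class proportions. Data are synthetic.
import numpy as np

rng = np.random.default_rng(2)
lulc_t0 = rng.integers(0, 3, size=(200, 200))   # 0=agri, 1=urban, 2=water
lulc_t1 = lulc_t0.copy()
grow = (lulc_t0 == 0) & (rng.random((200, 200)) < 0.08)
lulc_t1[grow] = 1                                # some agri -> urban

k = 3
T = np.zeros((k, k))
for i in range(k):                               # row-normalized transitions
    for j in range(k):
        T[i, j] = np.sum((lulc_t0 == i) & (lulc_t1 == j))
T /= T.sum(axis=1, keepdims=True)

p = np.bincount(lulc_t1.ravel(), minlength=k) / lulc_t1.size
for step in range(3):                            # project forward in time
    p = p @ T
    print(f"step {step + 1}: class proportions {np.round(p, 3)}")
```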
Figure 1: The study area of Gharbia governorate, Egypt [35].
Figure 2: The conceptual flowchart of the adopted methodology. (a) Supervised classification of the processed images for obtaining LULC maps; (b) simulating the LULCC using the CA-Markov chain model and the integrated model of the CA-Markov chain with FAHP; (c) estimation of LST and investigating the impact of LULCC on LST based on regression analysis; and (d) conducting regression analysis to obtain the relationship between LULC and the corresponding LST.
Figure 3: Spatial distribution of LULC over the study area during the study period: (a) LULC map in 1991; (b) LULC map in 2003; and (c) LULC map in 2018.
Figure 4: The simulated LULC for 2018 obtained based on the traditional CA-Markov model.
Figure 5: Simulated LULC map for 2018 based on the hybrid model of the FAHP-CA-Markov chain.
Figure 6: Projected land cover for 2033 and 2048, respectively.
Figure 7: Spatial distribution of daytime land surface temperature (LST) over the study area during the study period in June 1991, 2003, and 2018, all at around 8:00 am.
Figure 8: Land surface temperature maps illustrating UHI intensity and UHI zones. The white color represents non-UHI zones, and the UHI intensity is mentioned for each image.
Figure 9: Regression analysis to retrieve the relationship between the LST and LULC over Gharbia in (a) 1991, (b) 2003, and (c) 2018.
24 pages, 16710 KiB  
Article
Remote Sensing Image Change Detection Based on Deep Multi-Scale Multi-Attention Siamese Transformer Network
by Mengxuan Zhang, Zhao Liu, Jie Feng, Long Liu and Licheng Jiao
Remote Sens. 2023, 15(3), 842; https://doi.org/10.3390/rs15030842 - 2 Feb 2023
Cited by 30 | Viewed by 5317
Abstract
Change detection is a technique for dynamically observing changes on the surface of the earth and is one of the most significant tasks in remote sensing image processing. In the past few years, with their ability to extract rich deep image features, deep learning techniques have gained popularity in the field of change detection. In many deep learning-based methods, an attention mechanism is added at the decoder and output stages to obtain more distinct image change information, but these approaches often neglect to strengthen the ability of the encoders and feature extractors to extract representative features. To resolve this problem, this study proposes a deep multi-scale multi-attention siamese transformer network (DMMSTNet). A contextual attention module combining convolution and self-attention is introduced into the siamese feature extractor to enhance its global representation ability, and a lightweight efficient channel attention block is added to capture the information interaction among different channels. Furthermore, a multi-scale feature fusion module is proposed to fuse the features from different stages of the siamese feature extractor, enabling the detection of objects of different sizes and irregular shapes. To further increase the accuracy of the proposed approach, a transformer module is utilized to model the long-range context in the two-phase images. The experimental results on the LEVIR-CD and CCD datasets show the effectiveness of the proposed network. Full article
(This article belongs to the Special Issue Remote Sensing and Machine Learning of Signal and Image Processing)
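To make one of the named ingredients concrete, the sketch below implements a standard efficient channel attention (ECA-style) block of the kind added to the siamese feature extractor: global average pooling, a 1-D convolution across channels, and a sigmoid gate. The kernel size and tensor shapes are typical assumptions, not necessarily the paper's configuration.

```python
# ECA-style channel attention sketch (PyTorch).
import torch
import torch.nn as nn

class ECABlock(nn.Module):
    def __init__(self, k_size: int = 3):
        super().__init__()
        self.conv = nn.Conv1d(1, 1, kernel_size=k_size,
                              padding=k_size // 2, bias=False)
        self.sigmoid = nn.Sigmoid()

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (B, C, H, W) -> per-channel descriptor (B, C, 1, 1)
        y = x.mean(dim=(2, 3), keepdim=True)
        # A 1-D conv over the channel axis captures local cross-channel
        # interaction without the dimensionality reduction of SE blocks.
        y = self.conv(y.squeeze(-1).transpose(1, 2)).transpose(1, 2).unsqueeze(-1)
        return x * self.sigmoid(y)

feat = torch.randn(2, 64, 32, 32)   # features from one siamese branch
print(ECABlock()(feat).shape)       # torch.Size([2, 64, 32, 32])
```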
Figure 1: The architecture of the proposed DMMSTNet.
Figure 2: The architecture of the feature extractor.
Figure 3: The detailed calculation process of the CoT module.
Figure 4: The details of the ECA module.
Figure 5: (a) The details of the MFF module. (b) The implementation of deformable convolution.
Figure 6: (a) The details of the transformer module. (b) The details of the spatial attention module.
Figure 7: Bitemporal images and ground truth maps from the LEVIR and CCD datasets. The first row represents the image at Time 1, the second row the image at Time 2, and the third row the ground truth map.
Figure 8: Visualization on the LEVIR-CD dataset. (a) The images at Time 1. (b) The images at Time 2. (c) The ground truth. (d) The results of IFN. (e) The results of SNUNet. (f) The results of STANet. (g) The results of BiT. (h) The results of MSPSNet. (i) The results of DMMSTNet. True positives are marked in white, true negatives in black, false positives in red, and false negatives in green.
Figure 9: Visualization results on the CCD dataset. (a) The images at Time 1. (b) The images at Time 2. (c) The ground truth. (d) The results of IFN. (e) The results of SNUNet. (f) The results of STANet. (g) The results of BiT. (h) The results of MSPSNet. (i) The results of DMMSTNet.
Figure 10: Ablation (Base1) experimental results on the LEVIR-CD dataset. (a) The images at Time 1. (b) The images at Time 2. (c) The ground truth. (d) The results of Base1. (e) The results of "Base1 + CoT". (f) The results of "Base1 + Transformer".
Figure 11: Ablation (Base1) experimental results on the CCD dataset. (a) The images at Time 1. (b) The images at Time 2. (c) The ground truth. (d) The results of Base1. (e) The results of "Base1 + CoT". (f) The results of "Base1 + Transformer".
Figure 12: Ablation (Base2) experimental results on the LEVIR-CD dataset. (a) The images at Time 1. (b) The images at Time 2. (c) The ground truth. (d) The results of Base2. (e) The results of "Base2 + ECA". (f) The results of "Base2 + MFF". (g) The results of DMMSTNet.
Figure 13: Ablation (Base2) experimental results on the CCD dataset. (a) The images at Time 1. (b) The images at Time 2. (c) The ground truth. (d) The results of Base2. (e) The results of "Base2 + ECA". (f) The results of "Base2 + MFF". (g) The results of DMMSTNet.
19 pages, 1464 KiB  
Article
The Deep Atmospheric Composition of Jupiter from Thermochemical Calculations Based on Galileo and Juno Data
by Frank Rensen, Yamila Miguel, Mantas Zilinskas, Amy Louca, Peter Woitke, Christiane Helling and Oliver Herbort
Remote Sens. 2023, 15(3), 841; https://doi.org/10.3390/rs15030841 - 2 Feb 2023
Cited by 1 | Viewed by 2915
Abstract
The deep atmosphere of Jupiter is obscured beneath thick clouds. This makes direct observations difficult, and thermochemical equilibrium models fill in the observational gaps. This research uses Galileo and Juno data together with the Gibbs free energy minimization code GGchem to update the gas phase and condensation equilibrium chemistry of the deep atmosphere of Jupiter down to 1000 bars. Specifically, the Galileo data provide helium abundances and, with the incorporated Juno data, we use new enrichment values for oxygen, nitrogen, carbon and sulphur. The temperature profile in Jupiter’s deep atmosphere is obtained following recent interior model calculations that fit the gravitational harmonics measured by Juno. Following this approach, we produced pressure–mixing ratio plots for H, He, C, N, O, Na, Mg, Si, P, S and K that give a complete chemical model of all species occurring at abundances down to a 10⁻²⁰ mixing ratio. The influence of the increased elemental abundances can be directly seen in the concentration of the dominant carriers for each element: the mixing ratio of NH3 increased by a factor of 1.55 compared with the previous literature, N2 by 5.89, H2O by 1.78, CH4 by 2.82 and H2S by 2.69. We investigate the influence of the water enrichment values observed by Juno on these models and find that no liquid water clouds form at the oxygen enrichment measured by Galileo, EH2O = 0.47, while they do form at the higher water abundance measured by Juno. We update the mixing ratios of important gas phase species, such as NH3, H2O, CO, CH4 and H2S, and find that new gas phase species, such as CN, (NaCN)2, S2O and K+, and new condensates, namely H3PO4 (s), LiCl (s), KCl (s), NaCl (s), NaF (s), MgO (s), Fe (s) and MnS (s), form in the atmosphere. Full article
(This article belongs to the Special Issue Remote Sensing Observations of the Giant Planets)
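As a tiny worked example of how such elemental inputs are set, the snippet below scales solar element-to-hydrogen ratios by per-element enrichment factors; both the solar ratios and the factors shown are illustrative placeholders, not the exact values adopted for the GGchem runs.

```python
# Illustrative scaling of solar elemental abundances by enrichment factors.
solar_ratio_to_H = {"O": 5.4e-4, "C": 2.9e-4, "N": 6.8e-5, "S": 1.4e-5}
enrichment = {"O": 0.47, "C": 4.0, "N": 4.0, "S": 3.0}   # assumed factors

for elem, ratio in solar_ratio_to_H.items():
    jovian = ratio * enrichment[elem]
    print(f"{elem}/H: solar {ratio:.2e} -> Jupiter input {jovian:.2e}")
```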
Figure 1: The temperature–pressure profile used in this work. The grey shaded area denotes data from the Galileo entry probe.
Figure 2: The most abundant equilibrium gas phase compounds in the Jovian (deep) atmosphere.
Figure 3: Gas phase chemistry for all carbon-bearing species that reach a peak mixing ratio of more than 10⁻²⁰ at some point in the atmosphere. The species are split into two panels for clarity.
Figure 4: The same as in Figure 3 but for the nitrogen gas phase chemistry.
Figure 5: The same as in Figure 3 but for the oxygen gas phase chemistry.
Figure 6: The same as in Figure 3 but for the sodium (left) and magnesium (right) gas phase chemistries.
Figure 7: The same as in Figure 3 but for the silicon (top left), phosphorus (top right), sulphur (bottom left) and potassium (bottom right) gas phase chemistries.
Figure 8: Condensation chemistry in the deep Jovian atmosphere. Species denoted [l] are liquids; the others are solids.
Figure 9: Comparison between the gas chemistry for minimal (dashed lines, EH2O = 0.47) and maximal (solid lines, EH2O = 10) H2O enrichment values.
Figure 10: Comparison between the condensate chemistry for minimal (dashed lines, EH2O = 0.47) and maximal (solid lines, EH2O = 10) H2O enrichment values.
20 pages, 2079 KiB  
Article
BiTSRS: A Bi-Decoder Transformer Segmentor for High-Spatial-Resolution Remote Sensing Images
by Yuheng Liu, Yifan Zhang, Ye Wang and Shaohui Mei
Remote Sens. 2023, 15(3), 840; https://doi.org/10.3390/rs15030840 - 2 Feb 2023
Cited by 8 | Viewed by 2519
Abstract
Semantic segmentation of high-spatial-resolution (HSR) remote sensing (RS) images has been extensively studied, and most of the existing methods are based on convolutional neural network (CNN) models. However, CNNs are regarded as less powerful in global representation modeling. In the past few years, transformer-based methods have attracted increasing attention and generated improved results in semantic segmentation of natural images, owing to their powerful ability to acquire global information. Nevertheless, these transformer-based methods exhibit limited performance in semantic segmentation of RS images, probably because of the lack of comprehensive understanding in the feature decoding process. In this paper, a novel transformer-based model named the bi-decoder transformer segmentor for remote sensing (BiTSRS) is proposed, aiming to achieve flexible feature decoding through a bi-decoder design for semantic segmentation of RS images. In the proposed BiTSRS, the Swin transformer is adopted as the encoder to take both global and local representations into consideration, and an Input Transform Module (ITM) is designed to deal with the input size limitation of the Swin transformer. Furthermore, BiTSRS adopts a bi-decoder structure consisting of a Dilated-Uper decoder and a fully deformable convolutional network (FDCN) module embedded with focal loss, with which it is capable of decoding a wide range of features and local detail deformations. Both ablation and comparison experiments were conducted on three representative RS image datasets. The ablation analysis demonstrates the contributions of the specifically designed modules of the proposed BiTSRS to the performance improvement, and the comparison results illustrate that the proposed BiTSRS clearly outperforms some state-of-the-art semantic segmentation methods. Full article
(This article belongs to the Special Issue Signal Processing Theory and Methods in Remote Sensing)
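As a pointer to what the Swin encoder computes, the sketch below shows the standard window partitioning behind window multi-head self-attention: the feature map is split into non-overlapping windows so attention is applied locally. The window size and tensor shapes are typical values, not necessarily the paper's.

```python
# Swin-style window partitioning sketch (PyTorch).
import torch

def window_partition(x: torch.Tensor, ws: int) -> torch.Tensor:
    """(B, H, W, C) -> (num_windows*B, ws, ws, C); H, W divisible by ws."""
    B, H, W, C = x.shape
    x = x.view(B, H // ws, ws, W // ws, ws, C)
    return x.permute(0, 1, 3, 2, 4, 5).reshape(-1, ws, ws, C)

x = torch.randn(1, 56, 56, 96)        # tokens after patch embedding
windows = window_partition(x, ws=7)   # attention runs within each window
print(windows.shape)                  # torch.Size([64, 7, 7, 96])
```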
Figure 1: The overall architecture of the proposed BiTSRS. The architecture includes three main parts: an Input Transform Module (ITM) to transform large-scale inputs into small-scale feature maps, a Swin transformer encoder to acquire global and local representations, and a bi-decoder structure to decode coarse and fine information.
Figure 2: The structure of the ITM. Transform blocks with residual connections and global average pooling (GAP) are used to preserve the original information.
Figure 3: Illustration of different transformer blocks. (a) The standard transformer block. (b) Two successive Swin transformer blocks, the window block and the shifted window block, which compute window multi-head self-attention (W-MSA) and shifted window multi-head self-attention (SW-MSA), respectively. LN and MLP denote layer normalization and multi-layer perceptron, respectively.
Figure 4: The structure of the Dilated-Uper decoder.
Figure 5: Comparison of visualization results on the ISPRS Vaihingen dataset. (a) FCN. (b) DeepLabv3+. (c) SETR. (d) HRNet. (e) UNetFormer. (f) ST-UNet. (g) Baseline. (h) BiTSRS.
Figure 6: Comparison of visualization results on the ISPRS Potsdam dataset. (a) FCN. (b) DeepLabv3+. (c) SETR. (d) HRNet. (e) UNetFormer. (f) ST-UNet. (g) Baseline. (h) BiTSRS.
Figure 7: Comparison of visualization results on the LoveDA dataset. (a) FCN. (b) DeepLabv3+. (c) SETR. (d) HRNet. (e) UNetFormer. (f) ST-UNet. (g) Baseline. (h) BiTSRS.
19 pages, 10071 KiB  
Article
Extension of Scattering Power Decomposition to Dual-Polarization Data for Tropical Forest Monitoring
by Ryu Sugimoto, Ryosuke Nakamura, Chiaki Tsutsumi and Yoshio Yamaguchi
Remote Sens. 2023, 15(3), 839; https://doi.org/10.3390/rs15030839 - 2 Feb 2023
Cited by 6 | Viewed by 2817
Abstract
A new scattering power decomposition method is developed for accurate tropical forest monitoring that utilizes data in dual-polarization mode instead of quad-polarization (POLSAR) data. This improves the forest classification accuracy and helps to realize rapid deforestation detection, because dual-polarization data are acquired more frequently than POLSAR data. The proposed method involves constructing scattering power models for dual-polarization data considering the radar scattering scenario of tropical forests (i.e., ground scattering, volume scattering, and helix scattering). Then, a covariance matrix is created for the dual-polarization data and decomposed to obtain the three scattering powers. We evaluated the proposed method using simulated dual-polarization data for the Amazon, Southeast Asia, and Africa. The proposed method showed excellent forest classification performance, with both user’s accuracy and producer’s accuracy above 98% for window sizes greater than 7 × 14 pixels, regardless of the transmission polarization. It also showed deforestation detection performance comparable to that obtained by POLSAR data analysis. Moreover, the proposed method showed better classification performance than vegetation indices and was found to be robust regardless of the transmission polarization. When applied to actual dual-polarization data from the Amazon, it provided accurate forest mapping and deforestation detection. The proposed method will serve tropical forest monitoring very effectively, not only for future dual-polarization data but also for accumulated data that have not been fully utilized. Full article
(This article belongs to the Special Issue SAR, Interferometry and Polarimetry Applications in Geoscience)
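The sketch below illustrates only the generic first step of such a method: forming the multilooked 2 × 2 covariance matrix from dual-polarization (HH/HV) data. The SLC values are synthetic stand-ins, the 7 × 14-pixel window echoes the accuracy analysis, and the paper's scattering power models themselves are not reproduced.

```python
# Dual-pol covariance matrix sketch with boxcar multilooking.
import numpy as np

rng = np.random.default_rng(3)
shp = (100, 200)
hh = rng.standard_normal(shp) + 1j * rng.standard_normal(shp)   # stand-in SLC
hv = 0.4 * (rng.standard_normal(shp) + 1j * rng.standard_normal(shp))

def boxcar(a, wy=7, wx=14):
    """Multilook (boxcar) average over a wy x wx window."""
    ky, kx = np.ones(wy) / wy, np.ones(wx) / wx
    a = np.apply_along_axis(np.convolve, 0, a, ky, mode="same")
    return np.apply_along_axis(np.convolve, 1, a, kx, mode="same")

C11 = boxcar(np.abs(hh) ** 2)        # <|HH|^2>
C22 = boxcar(np.abs(hv) ** 2)        # <|HV|^2>
C12 = boxcar(hh * np.conj(hv))       # <HH HV*>
print(C11.mean(), C22.mean(), np.abs(C12).mean())
```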
Figure 1: Study sites: (a) Rio Branco in Brazil, (b) Ucayali River in the Amazon, (c) Kalimantan in Indonesia, and (d) the Congo Basin in the Republic of the Congo. The white solid and dotted rectangles indicate subareas A and B, respectively. The red square in (a) shows the reference data area described in Section 2.1.2. The mosaic images used as the base map were acquired by Planet/Dove in (a) December 2017–May 2018, (b) June–November 2016, (c) June–November 2016, and (d) December 2015–May 2016.
Figure 2: Forest map at Rio Branco generated by using the proposed method with HH/HV data acquired on 5 and 19 January 2018, and a window size of 10 × 20 pixels. The red rectangle shows the reference data area. (a) Mosaic image acquired by Planet/Dove from December 2017 to May 2018. The white rectangle shows the observation area of PALSAR-2. (b) RGB image generated by the proposed method: ground scattering is in red and blue and volume scattering is in green. (c) Forest map generated by the proposed method.
Figure 3: Forest maps of the Ucayali River in the Amazon using the proposed method and the 6SD method with POLSAR data acquired on 16 April 2016 and a window size of 10 × 20 pixels. (a) Mosaic image acquired by Planet/Dove from June to November 2016. The white rectangle shows the observation area of PALSAR-2. (b) 6SD RGB image: double-bounce scattering is in red, volume scattering is in green, and surface scattering is in blue. (c) Forest map using the 6SD method. (d) RGB image of the proposed method: ground scattering is in red and blue, and volume scattering is in green. (e) Forest map using the proposed method.
Figure 4: Forest maps of Kalimantan in Indonesia using the proposed method and the 6SD method with the POLSAR data acquired on 29 October 2016 and a window size of 10 × 20 pixels. (a) Mosaic image acquired by Planet/Dove from June to November 2016. The white rectangle shows the observation area of PALSAR-2. (b–e) Same as in Figure 3. The white dotted rectangle shows the oil palm plantation.
Figure 5: Forest maps of the Congo Basin in the Republic of the Congo using the proposed method and the 6SD method with the POLSAR data acquired on 7 May 2016 and a window size of 10 × 20 pixels. (a) Mosaic image acquired by Planet/Dove from December 2015 to May 2016. The white rectangle shows the observation area of PALSAR-2. (b–e) Same as in Figure 3.
Figure 6: Distributions of the (a) volume scattering power, (b) RFDI, and (c) RVI using the forest and non-forest pixels of the reference data. These values were derived using HH/HV data and a window size of 10 × 20 pixels.
Figure 7: Forest map of Altamira using the proposed method with actual dual-polarization data acquired on 4 April 2019. (a) Forest map generated by the proposed method. (b) JAXA Global PALSAR-2 Forest/Non-Forest map in 2018. The white rectangle shows the observation area of the dual-polarization data. The red arrows indicate deforestation in 2019, as visually confirmed by Planet/Dove mosaic images.
Figure 8: Deforestation at Altamira detected by using the proposed method with actual dual-polarization data acquired on 4 April 2019 and 12 May 2022. The yellow outline shows the area detected by the proposed method. Mosaic images acquired by Planet/Dove in (a) June–November 2019 and (b) April 2022. (c) Drone image of the detected area acquired in February 2022. RGB images generated by the proposed method using dual-polarization data acquired on (d) 4 April 2019 and (e) 12 May 2022. Ground scattering is indicated in red and blue, and volume scattering is in green. (f) Box and whisker plot of the scattering power components using pixels in the yellow outline. The whiskers extend to 1.5 times the interquartile range.
Figure 9: Forest degradation at Altamira not detected by using the proposed method with actual dual-polarization data acquired on 4 April 2019 and 12 May 2022. The yellow outline shows the deforestation area based on visual interpretation of Planet/Dove mosaic images. (a–f) Same as in Figure 8.
Figure 10: Box and whisker plots of the scattering power components using the forest pixels of the reference data: (a) proposed method with HH/HV data, (b) proposed method with VV/VH data, and (c) 6SD method with POLSAR data. The whiskers extend to 1.5 times the interquartile range. In (c), Po is the summation of the helix scattering, oriented dipole scattering, and compound dipole scattering powers.
21 pages, 3894 KiB  
Article
An Integrated Multi-Factor Coupling Approach for Marine Dynamic Disaster Assessment in China’s Coastal Waters
by Lin Zhou, Meng Sun, Yueming Liu, Yongzeng Yang, Tianyun Su and Zhen Jia
Remote Sens. 2023, 15(3), 838; https://doi.org/10.3390/rs15030838 - 2 Feb 2023
Cited by 1 | Viewed by 1782
Abstract
Marine dynamic disasters, such as storm surges and huge waves, can cause large economic and human losses. The assessment of marine dynamic disasters is thus important, but improvements to its reliability are needed. The current study improved and integrated the assessment from the perspective of multi-factor coupling. Using a weighted index system, a marine dynamic disaster assessment indicator system suitable for China’s coastal waters was established, and a method for calculating the weights of the disaster indicators was proposed from the perspective of rapid assessment. To reduce the assessment deviation in coastal waters, a multi-factor coupling algorithm was proposed. This algorithm obtains the amplitude variations of wave orbital motion in the horizontal and vertical directions, which are used to evaluate the influence of the background current and terrain slope on coastal ocean waves. Landsat 8 remote sensing images were used to carry out an object-oriented extraction of raft and cage aquaculture areas in China’s coastal waters; the aquaculture density was then used as the main basis for a vulnerability assessment. Finally, the whole assessment system was integrated and verified during a typical storm surge process in the coastal waters around the Shandong Peninsula in China. The coupled variations were also added to the assessment process and increased the risk value by an average of 12% in the high sea states of the case study. Full article
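A minimal sketch of the weighted-index idea follows: each hazard indicator is min-max normalized and combined with indicator weights into a per-cell risk score. The indicator names, values, and weights are illustrative stand-ins, not the paper's calibrated system.

```python
# Weighted-index risk score sketch with illustrative indicators.
import numpy as np

indicators = {            # raw hazard values for a set of grid cells
    "wave_height_m":  np.array([1.2, 3.5, 5.1, 2.0]),
    "surge_level_m":  np.array([0.3, 1.1, 1.8, 0.6]),
    "wind_speed_mps": np.array([8.0, 18.0, 26.0, 12.0]),
}
weights = {"wave_height_m": 0.5, "surge_level_m": 0.3, "wind_speed_mps": 0.2}

def minmax(v):
    """Normalize an indicator to [0, 1]."""
    return (v - v.min()) / (v.max() - v.min())

risk = sum(weights[k] * minmax(v) for k, v in indicators.items())
print(np.round(risk, 2))  # one risk score per cell, higher = more hazardous
```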
Figure 1: Proportion of major disaster-causing factors in direct economic losses and human losses from marine disasters in China.
Figure 2: Indicator system for marine dynamic disaster risk assessment.
Figure 3: Experimental results of the effect of background current and terrain slope on coastal ocean waves.
Figure 4: Change in coastal ocean waves at different times during the storm surge process at the same location.
Figure 5: Change in coastal ocean waves caused by the nearshore strengthening effect.
Figure 6: Comparison between (a) the assessment results without the coupled strengthening effect and (b) the assessment results with the coupled strengthening effect.
Figure 7: Aquaculture areas extracted from Landsat 8. (a) Classified by aquaculture type; (b) classified by aquaculture density.
Figure 8: Terrain slope around the Shandong Peninsula.
Figure 9: Comprehensive assessment result considering the risk assessment and vulnerability assessment.