Remote Sens., Volume 11, Issue 4 (February-2 2019) – 102 articles

Cover Story (view full-size image): As the cost of satellite missions grows, government agencies are working to increase the relevance and usefulness of the Earth science data they produce. The US National Aeronautics and Space Administration's (NASA) Early Adopter Program seeks to formalize partnerships with data users early in the satellite planning and development process to maximize the value of each mission. NASA's Earth Science Division (ESD) delivers a suite of Earth observation datasets in which the multiple-use nature of its investments is pivotal: observations are designed to serve both curiosity-driven and applications-oriented science while simultaneously delivering societal benefits. View this paper.
  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive table of contents of newly released issues.
  • PDF is the official format for papers, which are published in both HTML and PDF forms. To view a paper in PDF format, click on the "PDF Full-text" link and open it with the free Adobe Reader.
27 pages, 4553 KiB  
Article
Sub-Nyquist SAR via Quadrature Compressive Sampling with Independent Measurements
by Huizhang Yang, Chengzhi Chen, Shengyao Chen and Feng Xi
Remote Sens. 2019, 11(4), 472; https://doi.org/10.3390/rs11040472 - 25 Feb 2019
Cited by 8 | Viewed by 3976
Abstract
This paper presents an efficient sampling system for the acquisition of synthetic aperture radar (SAR) data at a sub-Nyquist rate. The system adopts a quadrature compressive sampling architecture, which uses modulation, filtering, sampling, and digital quadrature demodulation to produce sub-Nyquist or compressive measurements. In the sequential transmit-receive procedure of SAR, the analog echoes are modulated by random binary chipping sequences to inject randomness into the measurement projection, and the chipping sequences are independent from one observation to another. As a result, the system generates a sequence of independent structured measurement matrices, and the resulting sensing matrix has a better restricted isometry property, as proved by theoretical analysis. As a standard recovery problem in compressive sensing, image formation from the sub-Nyquist measurements therefore has significantly improved performance, which in turn permits a low sampling/data rate. Moreover, the resulting sensing matrix has structures suitable for fast matrix–vector products, based on which we provide a first-order fast image formation algorithm. The performance of the proposed sampling system is assessed on synthetic and real data sets. Simulation results suggest that the proposed system is a valid candidate for sub-Nyquist SAR. Full article
(This article belongs to the Special Issue SAR in Big Data Era)
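As a rough, self-contained illustration of the compressive-sensing idea this abstract describes (independent per-pulse measurement matrices followed by sparse recovery), the Python sketch below simulates independent ±1 chipping projections and recovers a sparse scene with plain ISTA. The dimensions, sparsity level, and regularization settings are illustrative assumptions, not the authors' QuadCS system.

```python
# Minimal sketch: per-pulse independent random +/-1 "chipping" projections and a basic
# ISTA recovery of a sparse reflectivity profile. All sizes and settings are assumed.
import numpy as np

rng = np.random.default_rng(0)
n, m, n_pulses, k = 256, 16, 8, 10       # scene length, measurements per pulse, pulses, sparsity

x = np.zeros(n)
x[rng.choice(n, k, replace=False)] = rng.standard_normal(k)   # sparse scene

# Independent measurement matrix for every transmit-receive cycle
Phis = [rng.choice([-1.0, 1.0], size=(m, n)) / np.sqrt(m) for _ in range(n_pulses)]
y = np.concatenate([Phi @ x for Phi in Phis])
A = np.vstack(Phis)                       # stacked sensing matrix (n_pulses*m x n)

# Basic ISTA (iterative soft-thresholding) for l1-regularised recovery
step = 1.0 / np.linalg.norm(A, 2) ** 2
lam = 0.1 * np.max(np.abs(A.T @ y))       # heuristic: fraction of lambda_max
x_hat = np.zeros(n)
for _ in range(1000):
    r = x_hat - step * (A.T @ (A @ x_hat - y))
    x_hat = np.sign(r) * np.maximum(np.abs(r) - step * lam, 0.0)

print("relative recovery error:", np.linalg.norm(x_hat - x) / np.linalg.norm(x))
```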
Figure 1: The architecture of QuadCS. The first subsystem performs low-rate random measurement; the second subsystem is a classical digital quadrature demodulation system [20] used to extract compressive in-phase and quadrature measurements.
Figure 2: Illustration of stripmap SAR imaging geometry.
Figure 3: Block diagram of simulations. We take synthetic images, real images, and real raw data as input, respectively. The simulated data output by QuadCS and Xampling are processed by a sparse recovery algorithm.
Figure 4: Scenes of interest in simulations. (a) A synthetic image; (b) a real SAR SLC image collected by TerraSAR-X on 1 May 2008 over an area near the coast of Barcelona; (c) an SLC image focused from RADARSAT-1 raw data, in which the red-framed area is the Tsawwassen Ferry Port.
Figure 5: Recovery of the synthetic image in 10 dB AWGN. From top to bottom: QuadCS with independent chipping sequences (QuadCS-IndSeq), QuadCS with equal chipping sequences (QuadCS-EquSeq), and Xampling. From left to right: recovered images and corresponding residuals at compression ratios 1/8 and 1/16. Range and azimuth are along the horizontal and vertical axes, respectively. The relative errors (in dB) are shown at the lower left corners of the residual images.
Figure 6: RRMSE vs. sparsity (averaged over 100 runs). From top to bottom: compression ratios α = 1/8, 1/16, and 1/32. From left to right: SNR = 20 dB, 10 dB, and 3 dB. QuadCS-IndSeq has the smallest RRMSE in all cases.
Figure 7: Recovery of the TerraSAR-X SLC image. From top to bottom: QuadCS-IndSeq, QuadCS-EquSeq, and Xampling. From left to right: recovered images and corresponding residuals at compression ratios 1/8 and 1/16. Range and azimuth are along the horizontal and vertical axes, respectively. The relative errors are shown at the lower left corners of the residual images. For this sparse scene, QuadCS-IndSeq achieves −5.7 dB even at a compression ratio of 1/16.
Figure 8: Recovery of the RADARSAT-1 image. From top to bottom: QuadCS-IndSeq, QuadCS-EquSeq, and Xampling. From left to right: recovered images and corresponding residuals at compression ratios 1/4 and 1/8. Range and azimuth are along the horizontal and vertical axes, respectively. The relative errors are shown at the lower left corners of the residual images. Due to the lack of scene sparsity, none of the simulated systems provides satisfactory performance.
18 pages, 4360 KiB  
Article
Photon-Counting Lidar: An Adaptive Signal Detection Method for Different Land Cover Types in Coastal Areas
by Yue Ma, Wenhao Zhang, Jinyan Sun, Guoyuan Li, Xiao Hua Wang, Song Li and Nan Xu
Remote Sens. 2019, 11(4), 471; https://doi.org/10.3390/rs11040471 - 25 Feb 2019
Cited by 33 | Viewed by 6395
Abstract
Airborne or space-borne photon-counting lidar can provide successive photon clouds of the Earth’s surface. The distribution and density of signal photons are very different because different land cover types have different surface profiles and reflectance, especially in coastal areas, where the land cover types are varied and complex. A new adaptive signal photon detection method is proposed to extract the signal photons for different land cover types from the raw photons captured by the MABEL (Multiple Altimeter Beam Experimental Lidar) photon-counting lidar in coastal areas. First, the surface types at 30 m resolution are obtained by matching the geographic coordinates of the MABEL trajectory with the NLCD (National Land Cover Database) datasets. Second, in each along-track segment with a specific land cover type, an improved DBSCAN (Density-Based Spatial Clustering of Applications with Noise) algorithm with adaptive thresholds and a JONSWAP (Joint North Sea Wave Project) wave algorithm is proposed and integrated to detect signal photons on different surface types. The result in Pamlico Sound indicates that this new method can effectively detect signal photons and successfully eliminate noise photons below the water level, whereas the MABEL result failed to extract the signal photons in vegetation segments and failed to discard the after-pulsing noise photons. In the Atlantic Ocean and Pamlico Sound, the errors of the RMS (Root Mean Square) wave height between our result and the in-situ result are −0.06 m and 0.00 m, respectively. However, between the MABEL and in-situ results, the errors are −0.44 m and −0.37 m, respectively. The mean vegetation height between the East Lake and Pamlico Sound was also calculated as 15.17 m using the signal photons detected by our method, which agrees well with the result (15.56 m) from the GFCH (Global Forest Canopy Height) dataset. Overall, for different land cover types in coastal areas, our study indicates that the proposed method can significantly improve the performance of signal photon detection for photon-counting lidar data, and the detected signal photons can further be used to obtain water levels and vegetation heights. The proposed approach can also be extended to ICESat-2 (Ice, Cloud, and land Elevation Satellite-2) datasets in the future. Full article
(This article belongs to the Special Issue Applications of Remote Sensing in Coastal Areas)
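For readers unfamiliar with the clustering step mentioned in the abstract, the sketch below shows density-based screening of a photon cloud with scikit-learn's DBSCAN, switching parameters by land cover type. The eps/min_samples values and the synthetic photon cloud are assumptions; the paper's adaptive thresholds and JONSWAP wave model are not reproduced here.

```python
# Minimal sketch of density-based signal-photon screening on (along-track distance, elevation)
# pairs. Parameter values are assumed placeholders, not the paper's adaptive thresholds.
import numpy as np
from sklearn.cluster import DBSCAN

def detect_signal(photons, land_cover):
    """photons: (N, 2) array of (distance_m, elevation_m); land_cover: 'water' or 'vegetation'."""
    params = {"water": dict(eps=1.5, min_samples=8),       # assumed values
              "vegetation": dict(eps=3.0, min_samples=5)}  # looser for spread-out canopy returns
    labels = DBSCAN(**params[land_cover]).fit_predict(photons)
    return labels != -1        # True for photons assigned to any dense cluster (signal)

rng = np.random.default_rng(1)
noise = np.column_stack([rng.uniform(0, 100, 400), rng.uniform(-50, 10, 400)])
signal = np.column_stack([rng.uniform(0, 100, 400), rng.normal(0.0, 0.3, 400)])  # flat water surface
photons = np.vstack([noise, signal])
mask = detect_signal(photons, "water")
print("photons kept as signal:", int(mask.sum()))
```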
Figure 1: Pamlico Sound and its location in the USA. The MABEL trajectory (red lines) ran from the ocean (east) to the land (west). The Oregon Inlet Marina Station is inside the sound, and Duck Station is on a platform above the Atlantic Ocean (yellow filled circles).
Figure 2: Flow chart of the method for detecting signal photons.
Figure 3: MABEL trajectory on Google Maps (A), the photons captured by the MABEL lidar (B), and the detected signal photons (C). The top figure shows the trajectory on 21/09/2012, when the MABEL lidar flew over Pamlico Sound (in North Carolina, USA). The MABEL raw data, illustrated using green filled circles, are very noisy because they contain both reflected laser photons (i.e., signal photons) and noise photons caused by the background, backscatter, and detector noise. The signal photons of the MABEL result are illustrated by blue circles in the middle figure (B), and the signal photons of our result are illustrated by red circles in the bottom figure (C). In the middle and bottom figures, the abscissa represents the along-track distance and the vertical coordinate represents elevation. All photons are transformed from three-dimensional coordinates (i.e., geographic coordinates including latitude, longitude, and elevation) to two-dimensional coordinates (i.e., along-track coordinates including distance and elevation). The elevation is based on the WGS84 ellipsoidal height. The along-track distance starts from the beginning of the MABEL trajectory (at the farthest east, where the distance is zero). Note that, to show the signal photons more clearly, only photons within the elevation range of −50 to 10 m are illustrated (the range gate of the MABEL lidar is approximately 1500 m). In both the middle and bottom figures, the boundaries of the different land cover types along the trajectory are illustrated using dashed vertical lines.
Figure 4: Enlarged details of the along-track segment from 4 to 10 km in Figure 3. The top figure (A) illustrates the corresponding high-resolution aerial image from Google Earth. The middle figure (B) illustrates the MABEL result of extracted signal photons, and the bottom figure (C) shows our result. In the blue boxes of the top (A), middle (B), and bottom (C) figures, a tower was captured by the MABEL lidar.
Figure 5: Enlarged details of the along-track segment from 44 to 52 km in Figure 2. The top figure (A) illustrates the corresponding high-resolution aerial image from Google Earth. The middle figure (B) illustrates the MABEL result of extracted signal photons, and the bottom figure (C) shows our result.
27 pages, 8095 KiB  
Article
A Coarse-to-Fine Registration Strategy for Multi-Sensor Images with Large Resolution Differences
by Kai Li, Yongsheng Zhang, Zhenchao Zhang and Guangling Lai
Remote Sens. 2019, 11(4), 470; https://doi.org/10.3390/rs11040470 - 25 Feb 2019
Cited by 17 | Viewed by 4715
Abstract
Automatic image registration for multi-sensor data has always been an important task for remote sensing applications. However, registration of images with large resolution differences has not been fully considered. A coarse-to-fine registration strategy for images with large differences in resolution is presented. The strategy consists of three phases. First, a feature-based registration method is applied to the resampled sensed image and the reference image. Edge point features acquired from the edge strength map (ESM) of the images are used to pre-register the two images quickly and robustly. Second, normalized mutual information-based registration is applied to the two images to obtain more accurate transformation parameters. Third, the final transformation parameters are acquired through direct registration between the original high- and low-resolution images. Ant colony optimization (ACO) for the continuous domain is adopted to optimize the similarity metrics throughout the three phases. The proposed method has been tested on image pairs with different resolution ratios from different sensors, including satellite and aerial sensors. Control points (CPs) extracted from the images are used to calculate the registration accuracy of the proposed method and other state-of-the-art methods. The feature-based preregistration validation experiment shows that the proposed method effectively narrows the value range of the registration parameters. The registration results indicate that the proposed method performs best and achieves sub-pixel registration accuracy for images with resolution differences from 1 to 50 times. Full article
(This article belongs to the Section Remote Sensing Image Processing)
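The second phase of the strategy relies on a mutual-information similarity metric. The following sketch computes a normalized mutual information score from a joint histogram of two images, which is the kind of quantity such a registration step maximizes; the bin count and test images are illustrative, and the ACO optimization over transform parameters is not shown.

```python
# Minimal sketch of a normalized-mutual-information similarity measure for intensity-based
# registration, computed from a joint grey-level histogram. Bin count is an assumption.
import numpy as np

def normalized_mutual_information(img1, img2, bins=64):
    hist, _, _ = np.histogram2d(img1.ravel(), img2.ravel(), bins=bins)
    pxy = hist / hist.sum()
    px, py = pxy.sum(axis=1), pxy.sum(axis=0)
    def entropy(p):
        p = p[p > 0]
        return -np.sum(p * np.log(p))
    hx, hy, hxy = entropy(px), entropy(py), entropy(pxy.ravel())
    return (hx + hy) / hxy      # >= 1, larger means better alignment

rng = np.random.default_rng(2)
ref = rng.random((128, 128))
sensed = ref + 0.05 * rng.standard_normal((128, 128))   # roughly aligned pair
shifted = np.roll(ref, 20, axis=1)                       # badly misaligned pair
print(normalized_mutual_information(ref, sensed), normalized_mutual_information(ref, shifted))
```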
Figure 1: Flowchart of the proposed method. ACO: ant colony optimization.
Figure 2: Illustration of the edge strength maps (ESMs) of a noisy image: (a) a synthetic cartoon image with noise; (b) small-scale gradient-based ESM; (c) ANDD-based ESM with the same edge resolution; (d) fused ESM.
Figure 3: Simple examples of the measurement D. (a) The distances between the dots and the nearest triangle (connected with a dotted line) are 1. (b) The distances between the dots and the nearest triangle are 1, with an outlier in the lower right corner at a distance of 10.
Figure 4: Image pairs to be registered. The specific parameters of Figure 4a–j are listed in Table 1. Red points in the images show the spatial distribution of the control points (CPs) extracted using Automatic Point Measurement (APM).
Figure 5: Feature-based preregistration results using different edge point extraction methods. (a,d) use the Canny algorithm to obtain edge points; (b,e) use the phase convergence model to obtain edge points; (c,f) use the ANDD-based ESM to obtain edge points.
Figure 6: Manually registered images using ERDAS AutoSync. (a) The green and red components are the reference image and the warped sensed image of the No. 1 image pair, respectively; (b) the green and red components are the reference image and the warped sensed image of the No. 4 image pair, respectively.
Figure 7: D, I, and SMI values under different horizontal translations between the manually registered images in Figure 6a. (a–c) show the D, I, and SMI values when the horizontal displacement is within [−800, 800] at 1-pixel intervals; (d–f) show the D, I, and SMI values when the horizontal displacement is within [−10, 10] at 0.1-pixel intervals.
Figure 8: D, I, and SMI values under different horizontal translations between the manually registered images in Figure 6b. (a–c) show the D, I, and SMI values when the horizontal displacement is within [−600, 600] at 1-pixel intervals; (d–f) show the D, I, and SMI values when the horizontal displacement is within [−10, 10] at 0.1-pixel intervals.
Figure 9: Registration results using the proposed method, displayed as checkerboard mosaicked images. (a–e) are the registered images for the No. 1–5 image pairs.
Figure 10: Feature-based preregistration results using different edge point extraction methods for the No. 1 image pair. (a,d) use the Canny algorithm to obtain edge points; (b,e) use the phase convergence model to obtain edge points; (c,f) use the ANDD-based ESM to obtain edge points.
Figure 11: Feature-based preregistration results using different edge point extraction methods for the No. 4 image pair. (a,d) use the Canny algorithm to obtain edge points; (b,e) use the phase convergence model to obtain edge points; (c,f) use the ANDD-based ESM to obtain edge points.
20 pages, 5481 KiB  
Article
Analyzing Performances of Different Atmospheric Correction Techniques for Landsat 8: Application for Coastal Remote Sensing
by Christopher O. Ilori, Nima Pahlevan and Anders Knudby
Remote Sens. 2019, 11(4), 469; https://doi.org/10.3390/rs11040469 - 25 Feb 2019
Cited by 99 | Viewed by 11368
Abstract
Ocean colour (OC) remote sensing is important for monitoring marine ecosystems. However, inverting the OC signal from the top-of-atmosphere (TOA) radiance measured by satellite sensors remains a challenge, as the retrieval accuracy is highly dependent on the performance of the atmospheric correction as well as sensor calibration. In this study, the performances of four atmospheric correction (AC) algorithms implemented for Landsat 8 Operational Land Imager (OLI) data were evaluated: the Atmospheric and Radiometric Correction of Satellite Imagery (ARCSI), Atmospheric Correction for OLI ‘lite’ (ACOLITE), the Landsat 8 Surface Reflectance (LSR) Climate Data Record (Landsat CDR), herein referred to as LaSRC (Landsat 8 Surface Reflectance Code), and the Sea-Viewing Wide Field-of-View Sensor (SeaWiFS) Data Analysis System (SeaDAS). The OLI-derived remote sensing reflectance (Rrs) products (also known as Level-2 products) were tested against near-simultaneous in-situ data acquired from the OC component of the Aerosol Robotic Network (AERONET-OC). Analyses of the match-ups revealed that the generic atmospheric correction methods (i.e., ARCSI and LaSRC), which perform reasonably well over land, provide inaccurate Level-2 products over coastal waters, in particular in the blue bands. Of the water-specific AC methods (i.e., SeaDAS and ACOLITE), SeaDAS was found to perform better over complex waters, with root-mean-square error (RMSE) varying from 0.0013 to 0.0005 sr⁻¹ for the 443 and 655 nm channels, respectively. An assessment of the effects of dominant environmental variables revealed that AC retrieval errors were influenced by the solar zenith angle and wind speed for ACOLITE and SeaDAS in the 443 and 482 nm channels. Recognizing that the AERONET-OC sites are not representative of inland waters, extensive research and analyses are required to further evaluate the performance of various AC methods for high-resolution imagers like Landsat 8 and Sentinel-2 under a broad range of aquatic/atmospheric conditions. Full article
(This article belongs to the Special Issue Atmospheric Correction of Remote Sensing Data)
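The match-up evaluation summarized in the abstract boils down to per-band error statistics between satellite-derived and in-situ Rrs. A minimal sketch of those statistics (bias, RMSE, R²) is given below, with made-up placeholder values rather than the study's AERONET-OC data.

```python
# Minimal sketch of per-band match-up statistics between in-situ and satellite-derived Rrs.
# The sample arrays are illustrative placeholders only.
import numpy as np

def matchup_stats(rrs_insitu, rrs_satellite):
    d = np.asarray(rrs_satellite) - np.asarray(rrs_insitu)
    bias = d.mean()
    rmse = np.sqrt(np.mean(d ** 2))
    r = np.corrcoef(rrs_insitu, rrs_satellite)[0, 1]
    return bias, rmse, r ** 2

insitu = np.array([0.0050, 0.0062, 0.0041, 0.0078, 0.0055])   # sr^-1, illustrative only
derived = np.array([0.0048, 0.0065, 0.0039, 0.0080, 0.0052])
print("bias=%.4f  RMSE=%.4f  R2=%.3f" % matchup_stats(insitu, derived))
```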
Figure 1: Map showing the 14 validation sites from the ocean colour (OC) component of the Aerosol Robotic Network (AERONET-OC). 1: Galata, 2: Gloria, 3: GOT Seaprism, 4: Gustav Dalen Tower, 5: Helsinki, 6: Lake Erie, 7: Long Island Sound Coastal Observatory (LISCO), 8: Martha’s Vineyard Coastal Observatory (MVCO), 9: Palgrunden, 10: Thornton C-Power, 11: USC Seaprism, 12: Venise, 13: WaveCIS Site CSI-6, 14: Zeebrugge-MOW1.
Figure 2: Number of match-ups between Landsat 8 OLI scenes and AERONET-OC site measurements within a ±30-min window of the Landsat 8 overpass (GAL: Galata, GLO: Gloria, GOT: GOT Seaprism, GUS: Gustav Dalen Tower, HEL: Helsinki, ERIE: Lake Erie, LIS: LISCO, MVC: MVCO, PAL: Palgrunden, THO: Thornton C-Power, USC: USC Seaprism, VEN: Venise, WAV: WaveCIS Site CSI-6, ZEB: Zeebrugge-MOW1). Dark blue represents the total number of initial match-ups within a ±30-min time window of the Landsat 8 overpass times for each site. Light blue represents the total number of final match-ups used for analysis after excluding scenes with sunglint and performing the match-up exercise.
Figure 3: Scatterplots of the relationship between in-situ measurements (x-axis) and OLI estimates (y-axis) for each OLI band acquired over the 14 AERONET-OC sites. Regression lines are shown in colours, while the thick dotted black lines are 1:1 lines. (a) 443 nm; (b) 482 nm; (c) 561 nm; (d) 651 nm.
Figure 4: Overall band-by-band RMSE and mean bias results for all algorithms.
Figure 5: Scatterplots of the error (sr⁻¹) showing the dependency of the Rrs retrieval accuracy of both ACOLITE and SeaDAS on (a) AOT(869), (b) SZA, and (c) wind speed. AOT(869) and wind speed were derived from coincident measurements at each AERONET-OC site used in this study, while SZA was obtained by subtracting the sun elevation angle provided in the Landsat 8 metadata from 90°. Each circle represents a match-up data point, for a total of 54 data points across the 14 AERONET-OC sites. The 54 match-ups and their corresponding environmental parameter values are tabulated in Table A2.
Figure A1: Root-mean-square errors showing the impacts of per-band spectral adjustment on AERONET-OC match-ups. For all AC methods, there is no noticeable effect in the 443 nm channel. Similarly, for the land-based AC methods, there are no observable differences in the 443 and 482 nm channels. Band adjustment improves the results for bands 2, 3, and 4 for SeaDAS, decreasing RMSE values by 16.6, 23.9, and 43.8% at the 482, 561, and 655 nm wavelengths, respectively, and also improves the results for bands 3 and 4 for ACOLITE by 15.6 and 24.2%, respectively. For SeaDAS, the largest observable difference is in the 655 nm channel; this is by far the largest improvement from band adjustment across all bands and AC methods. Overall, SeaDAS is the method most sensitive to spectral band differences, with the largest difference (improvement) in the 655 nm channel. (a) ARCSI; (b) LaSRC; (c) ACOLITE; (d) SeaDAS.
Figure A2: Line graphs showing the Rrs spectra of each of the 14 AERONET-OC stations (results were averaged for each station except GOT Seaprism, for which only one match-up is available).
21 pages, 10147 KiB  
Article
Spatially Explicit Mapping of Soil Conservation Service in Monetary Units Due to Land Use/Cover Change for the Three Gorges Reservoir Area, China
by Shicheng Li, Zilu Bing and Gui Jin
Remote Sens. 2019, 11(4), 468; https://doi.org/10.3390/rs11040468 - 25 Feb 2019
Cited by 87 | Viewed by 30343
Abstract
Studies of land use/cover change (LUCC) and its impact on ecosystem service (ES) in monetary units can provide information that governments can use to identify where protection and restoration is economically most important. Translating ES in monetary units into decision making strongly depends on the availability of spatially explicit information on LUCC and ES. Yet such datasets are unavailable for the Three Gorges Reservoir Area (TGRA) despite its perceived soil conservation service value (SCSV). The availability of remote sensing-based datasets and advanced GIS techniques has enhanced the potential of spatially explicit ES mapping exercises. Here, we first explored LUCC in the TGRA for four time periods (1995–2000, 2000–2005, 2005–2010, and 2010–2015). Then, applying a value transfer method with an equivalent value factor spatialized using the normalized difference vegetation index (NDVI), we estimated the changes of monetary SCSV in response to LUCC in a spatially explicit way. Finally, the sensitivity of SCSV changes in response to LUCC was determined. Major findings: (i) Expansion of construction land and water bodies and contraction of cropland characterized the LUCC in all periods. Their driving factors include the relocation of residents, construction of the Three Gorges Dam, urbanization, and the Grain for Green Program; (ii) The SCSV for TGRA was generally stable for 1995–2015, declining slightly (<1%), suggesting a sustainable human–environment relationship in the TGRA. The SCSV prevails in regions with elevations (slopes) of 400–1600 m (0°–10°); for Chongqing and its surrounding regions it decreased significantly during 1995–2015; (iii) SCSV’s sensitivity index was 1.04, 0.53, 0.92, and 1.25 in the four periods, respectively, which is generally low. Chongqing and its surrounding regions, with their pervasive urbanization and dense populations, had the highest sensitivity. For 1995–2015, 70.63% of the study area underwent increases in this sensitivity index. Our results provide crucial information for policymaking concerning ecological conservation and compensation. Full article
(This article belongs to the Special Issue Remote Sensing for Terrestrial Ecosystem Health)
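As a hedged illustration of the NDVI-spatialized value-transfer step described in the abstract, the sketch below scales a per-class equivalent value factor by each pixel's NDVI relative to the scene mean and converts it to monetary units. The EVF table, unit value, and pixel size are placeholder assumptions, not the paper's calibrated coefficients, and the sensitivity analysis is not shown.

```python
# Minimal sketch of an NDVI-spatialised value-transfer step for a soil conservation
# service value (SCSV) map. All numeric constants below are assumed placeholders.
import numpy as np

EVF = {1: 1.03, 2: 2.32, 3: 0.00}            # e.g. 1 cropland, 2 forest, 3 water (assumed)
UNIT_VALUE_USD_PER_HA = 80.0                  # assumed monetary value of one EVF unit
PIXEL_AREA_HA = 0.09                          # 30 m x 30 m pixel

def scsv_map(land_cover, ndvi):
    """Per-pixel SCSV in USD: class EVF modulated by NDVI_i / NDVI_mean."""
    evf = np.vectorize(EVF.get)(land_cover).astype(float)
    ndvi_mean = ndvi[ndvi > 0].mean()
    return evf * (ndvi / ndvi_mean) * UNIT_VALUE_USD_PER_HA * PIXEL_AREA_HA

rng = np.random.default_rng(3)
lc = rng.choice([1, 2, 3], size=(100, 100))
ndvi = np.clip(rng.normal(0.5, 0.2, size=(100, 100)), 0.01, 1.0)
print("total SCSV (USD):", round(scsv_map(lc, ndvi).sum(), 2))
```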
Figure 1: Location of the Three Gorges Reservoir Area (TGRA). The TGRA can be divided into three parts: the head, central, and tail regions. The division between the head and central regions is the provincial boundary of Hubei Province and Chongqing Municipality. The digital elevation model (DEM) was downloaded from the Geospatial Data Cloud.
Figure 2: This study's framework. EVF: equivalent value factor; NDVI: Normalized Difference Vegetation Index; LUCC: land use/cover change; TGRA: Three Gorges Reservoir Area; SCSV: soil conservation service value; LUI: land use intensity.
Figure 3: Land use/cover maps of the Three Gorges Reservoir Area for (a) 1995, (b) 2000, (c) 2005, (d) 2010, and (e) 2015. The data are sourced from references [58,62] and can be downloaded from the Resource and Environment Data Cloud Platform. The number in parentheses is the area percentage of each land use type.
Figure 4: Land use transformation in the Three Gorges Reservoir Area for (a) 1995–2000, (b) 2000–2005, (c) 2005–2010, and (d) 2010–2015. The numbers in parentheses indicate the total area (unit: km²) transformed.
Figure 5: Soil conservation service values (SCSV) of the Three Gorges Reservoir Area for (a) 1995, (b) 2000, (c) 2005, (d) 2010, and (e) 2015; the respective total SCSVs are 2.447, 2.441, 2.444, 2.440, and 2.424 billion USD.
Figure 6: Vertical spatial heterogeneity of the soil conservation service value (SCSV): the SCSV percentage change along (a) elevation and (b) slope in the Three Gorges Reservoir Area.
Figure 7: Changes of soil conservation service values in the Three Gorges Reservoir Area for (a) 1995–2000, (b) 2000–2005, (c) 2005–2010, and (d) 2010–2015.
Figure 8: Spatial patterns of the sensitivity index of soil conservation service value (SCSV) changes in response to land use/cover change (LUCC) in the Three Gorges Reservoir Area for the periods (a) 1995–2000, (b) 2000–2005, (c) 2005–2010, and (d) 2010–2015. No LUCC occurred in the blank (white) areas.
15 pages, 2307 KiB  
Article
Drifting Effects of NOAA Satellites on Long-Term Active Fire Records of Europe
by Helga Weber and Stefan Wunderle
Remote Sens. 2019, 11(4), 467; https://doi.org/10.3390/rs11040467 - 25 Feb 2019
Cited by 11 | Viewed by 4172
Abstract
Explicit knowledge of different error sources in long-term climate records from space is required to understand and mitigate their impacts on the resulting time series. Imagery of the heritage Advanced Very High Resolution Radiometer (AVHRR) provides unique potential for climate research dating back to the 1980s, flying onboard a series of successive National Oceanic and Atmospheric Administration (NOAA) and Meteorological Operational (MetOp) satellites. However, the NOAA satellites are affected by severe orbital drift that results in spurious trends in time series. We identified the impact and extent of the orbital drift in 1 km AVHRR long-term active fire data. This record contains data on European fire activity from 1985–2016 and was analyzed on a regional scale and extended across Europe. Inconsistent sampling of the diurnal active fire cycle due to orbital drift, with a maximum delay of ∼5 h over the NOAA-14 lifetime, revealed a ∼90% decline in the number of observed fires. However, interregional results were less conclusive, and other error sources as well as interannual variability were more pronounced. Solar-illumination-related changes in background temperatures, measured via the sun zenith angle (SZA), were significant for all regions and afternoon satellites, with major changes of −0.03 to −0.09 K deg⁻¹ for BT34 (p < 0.001). Based on example scenes, we simulated the influence of changing temperatures related to changes in the SZA on the detection of active fires. These simulations showed a profound influence on the active fire detection capabilities, dependent on biome and land cover characteristics. The strong decrease in the relative changes in the apparent number of active fires calculated over the satellites' lifetimes highlights that a correction of the orbital drift effect is essential even over short time periods. Full article
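The drift diagnostic described in the abstract amounts to regressing background brightness-temperature differences against the sun zenith angle over a satellite's lifetime. A minimal sketch of that regression on synthetic monthly means is shown below; the −0.06 K deg⁻¹ slope in the synthetic series is an assumed value chosen only to mimic the reported range, not a result from the paper.

```python
# Minimal sketch of the drift diagnostic: regress a background brightness-temperature
# difference against the sun zenith angle and report the slope in K per degree of SZA.
# The synthetic monthly series below is purely illustrative.
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)
sza = np.linspace(35.0, 75.0, 120)                                    # drifting afternoon orbit
bt34 = 5.0 - 0.06 * (sza - sza[0]) + rng.normal(0, 0.3, sza.size)     # assumed -0.06 K/deg drift

slope, intercept, r, p, stderr = stats.linregress(sza, bt34)
print(f"trend = {slope:.3f} K per degree SZA (p = {p:.1e})")
```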
Figure 1: The map displays the geographical location of the five regions across Europe's major biomes, which include the Mediterranean, temperate, and boreal biomes from south to north and east to west. The boxes illustrate the selected regions at 1° × 1° spatial resolution and their subregions (with a size of 0.5°).
Figure 2: Annual mean local hour and relative number of mean yearly active fire pixels retrieved from the Advanced Very High Resolution Radiometer (AVHRR) active fire product (AVH18). The local hour refers to the overpass time of the National Oceanic and Atmospheric Administration (NOAA) satellites above Bern, Switzerland. Some satellites overpassed Europe twice at daytime, ∼102 min apart, which is indicated by the annual mean minimum and maximum overpass times (a). However, each of the regions was typically covered by only one of these overpasses. The relative number of mean yearly active fire pixels is shown for Europe (b) and relative to the maximum observed value for regions 1 to 5 (c–g). Note that very few fires were detected in region 5, and none of them by NOAA-12.
Figure 3: Sun zenith angle (SZA) and calculated monthly trends in ΔBT34 shown for the five study regions for NOAA-11 and NOAA-18. Both platforms exhibit pronounced orbital drifting, indicated by the increase in SZA over each platform's lifetime, whereby NOAA-11 (a) experiences a stronger drift over a shorter lifetime compared to NOAA-18 (c). Note that NOAA-18 was drifting towards noon until 2009, resulting in lower SZA values. The gap in the year 2009 of the NOAA-18 time series in region 5 results from clouds obscuring the region. Panels (b) for NOAA-11 and (d) for NOAA-18 illustrate the calculated monthly mean trends in ΔBT34.
Figure 4: Relative change in the apparent number of active fires calculated over the satellites' lifetimes: (a) NOAA-11 image, 1990-07-29; (b) NOAA-14 image, 1995-07-23; and (c) NOAA-18 image, 2007-07-25. Only three examples are shown here for illustration purposes, but identical results were found for other scenes and satellites. The calculations are based on two scenarios, with calculated trends only in non-fire background temperatures (solid lines; scenario 1) and including the temperatures of the fire pixels (dashed lines; scenario 2), based on the SZA trends for both the regional (2, 3, 4) and the European scale. Note the different lifetimes and the related impact of the orbital drift of the NOAA satellites.
19 pages, 3682 KiB  
Article
Estimation of Surface Air Specific Humidity and Air–Sea Latent Heat Flux Using FY-3C Microwave Observations
by Qidong Gao, Sheng Wang and Xiaofeng Yang
Remote Sens. 2019, 11(4), 466; https://doi.org/10.3390/rs11040466 - 24 Feb 2019
Cited by 6 | Viewed by 5386
Abstract
Latent heat flux (LHF) plays an important role in the global hydrological cycle and is therefore necessary to understand global climate variability. It has been reported that the near-surface specific humidity is a major source of error for satellite-derived LHF. Here, a new empirical model relating multichannel brightness temperatures (TB) obtained from the Fengyun-3 (FY-3C) microwave radiometer and sea surface air specific humidity (Qa) is proposed. It is based on the relationship between TB, Qa, sea surface temperature (SST), and the water vapor scale height. Compared with in situ data, the new satellite-derived Qa and LHF both exhibit better statistical results than previous estimates. For Qa, the bias, root mean square difference (RMSD), and correlation coefficient (R²) between satellite and buoy in the mid-latitude region are 0.08 g/kg, 1.76 g/kg, and 0.92, respectively. For LHF, the bias, RMSD, and R² are 2.40 W/m², 34.24 W/m², and 0.87, respectively. The satellite-derived Qa values are also compared with the National Oceanic and Atmospheric Administration (NOAA) Cooperative Institute for Research in Environmental Sciences (CIRES) humidity datasets, with a bias, RMSD, and R² of 0.02 g/kg, 1.02 g/kg, and 0.98, respectively. The proposed method can also be extended in the future to observations from other space-borne microwave radiometers. Full article
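To make the retrieval chain concrete, the sketch below pairs an assumed linear TB-to-Qa regression with the standard bulk formula LHF = ρa·Le·Ce·U·(Qs − Qa). The regression coefficients and exchange coefficient are placeholders, not the paper's fitted model.

```python
# Minimal sketch: (i) an assumed linear regression from two brightness-temperature channels
# to near-surface specific humidity Qa, and (ii) the standard bulk LHF formula.
# Coefficients are illustrative placeholders only.
import numpy as np

def qa_from_tb(tb_23v, tb_89h, coeffs=(-60.0, 0.35, -0.05)):   # g/kg, assumed coefficients
    a0, a1, a2 = coeffs
    return a0 + a1 * tb_23v + a2 * tb_89h

def latent_heat_flux(qs_gkg, qa_gkg, wind_ms, rho_a=1.22, le=2.5e6, ce=1.2e-3):
    """Bulk LHF in W/m^2; qs is the saturation specific humidity at the sea surface."""
    return rho_a * le * ce * wind_ms * (qs_gkg - qa_gkg) * 1e-3   # g/kg -> kg/kg

qa = qa_from_tb(tb_23v=250.0, tb_89h=230.0)
print("Qa  = %.1f g/kg" % qa)
print("LHF = %.1f W/m^2" % latent_heat_flux(qs_gkg=20.0, qa_gkg=qa, wind_ms=7.0))
```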
Figure 1: Methodological framework to estimate global air–sea Qa and latent heat flux (LHF) using FY-3C satellite observations.
Figure 2: (a–e) The relationship between sea surface temperature (SST), column water vapor (W), and surface water vapor mixing ratio qv (g/kg, color) for different ranges of the water vapor scale height Hv (m). Each panel shows a cubic regression curve calculated from SST and W. (f) SST compared to the increase ratio (W/SST, kg/m²·°C) over the five Hv ranges, calculated from daily data and averaged over the range of 60° S to 60° N.
Figure 3: Scatter diagrams showing the relationship between TB,23V, Qa, and Hv, using the Sample 1 data. (a) For Hv ≤ 1300 m; (b) for 1300 m < Hv ≤ 1800 m; (c) for 1800 m < Hv ≤ 2300 m; (d) for 2300 m < Hv ≤ 2800 m; (e) for Hv > 2800 m; and (f) for all Hv ranges. Each panel shows a linear regression line calculated using data from all Hv ranges. Color depth indicates the density of data.
Figure 4: Scatter diagrams showing the relationship between TB,89H, Qa, and Hv, using the Sample 1 data. (a) For Hv ≤ 1300 m; (b) for 1300 m < Hv ≤ 1800 m; (c) for 1800 m < Hv ≤ 2300 m; (d) for 2300 m < Hv ≤ 2800 m; (e) for Hv > 2800 m; and (f) for all Hv ranges. Each panel shows a linear regression line calculated using data from all Hv ranges. Color depth indicates the density of data.
Figure 5: The relationship between TB,23V, Qa, and SST. (a–e) For Hv ≤ 1300 m; (f–j) for 1300 m < Hv ≤ 1800 m; (k–o) for 1800 m < Hv ≤ 2300 m; (p–t) for 2300 m < Hv ≤ 2800 m; and (u–y) for Hv > 2800 m. Each subfigure shows a black linear regression line calculated using all data in the same Hv range; the red line was calculated using the corresponding SST range data.
Figure 6: The relationship between TB,89H, Qa, and SST. (a–e) For Hv ≤ 1300 m; (f–j) for 1300 m < Hv ≤ 1800 m; (k–o) for 1800 m < Hv ≤ 2300 m; (p–t) for 2300 m < Hv ≤ 2800 m; and (u–y) for Hv > 2800 m. Each subfigure shows a black linear regression line calculated using all data in the same Hv range; the red line was calculated using the corresponding SST range data.
Figure 7: SST compared with the observation sensitivity from a linear fit for each SST range. (a) For 23V GHz and (b) for 89H GHz.
Figure 8: Comparisons between in-situ observed and four satellite-derived Qa values (g/kg). (a) For SC95; (b) for IW12; (c) for TO18; and (d) for the proposed model.
Figure 9: Zonal distributions of the comparative statistics between in-situ observed and satellite-derived Qa values for the four algorithms, including (a) averaged bias and (b) root mean square difference (RMSD) and standard deviation difference (SDD, plotted as red lines). Statistics were averaged over each 2° bin.
Figure 10: The common coverage area of the NOAA CIRES Multi-Satellite Humidity product (dark blue strip) and FY-3C humidity (colored strip) on 6 October 2014. Color depth indicates the value of the specific air humidity. (a) Ascending orbit and (b) descending orbit.
Figure 11: Results of the comparison between NOAA CIRES Multi-Satellite Humidity and FY-3C humidity on 6 October 2014. Color depth indicates the density of data.
28 pages, 11016 KiB  
Article
Detection and Validation of Tropical Peatland Flaming and Smouldering Using Landsat-8 SWIR and TIRS Bands
by Parwati Sofan, David Bruce, Eriita Jones and Jackie Marsden
Remote Sens. 2019, 11(4), 465; https://doi.org/10.3390/rs11040465 - 24 Feb 2019
Cited by 22 | Viewed by 6861 | Correction
Abstract
A Tropical Peatland Combustion Algorithm (ToPeCAl) was first established from Landsat-8 images acquired in 2015, which were used to detect peatland combustion in flaming and smouldering stages. Detection of smouldering combustion from space remains a challenge due to its low temperature and generally small spatial extent. ToPeCAl consists of the Shortwave Infrared Combustion Index based on reflectance (SICIρ), the Top of Atmosphere (TOA) reflectance in Shortwave Infrared band-7 (SWIR-2), the TOA brightness temperature of Thermal Infrared band-10 (TIR-1), and the TOA reflectance of band-1, the Landsat-8 aerosol band. The implementation of ToPeCAl was then validated using terrestrial and aerial images (helicopter and drone) collected during fieldwork in Central Kalimantan, Indonesia, in the 2018 fire season, on the same days as Landsat-8 overpasses. The overall accuracy of ToPeCAl was found to be 82%, with omission errors in small areas (less than 30 m × 30 m) from mixtures of smouldering and vegetation pixels, and commission errors (with a minimum area of 30 m × 30 m) on highly reflective building rooftops in urban areas. These errors were further reduced by masking and removing urban areas prior to analysis using land use Geographic Information System (GIS) data, improving the overall mapping accuracy to 93%. For comparison, the day- and night-time VIIRS (375 m) active fire product (VNP14IMG) was utilised, obtaining a lower probability of fire detection of 71% compared to ground truth, and 57–72% agreement within a buffer distance of 375 m to 1500 m when compared to the Landsat-8 ToPeCAl results. The night-time VNP14IMG data were found to correspond better with the ToPeCAl results from Landsat 8 than the day-time data. This finding could lead to a potential merger of ToPeCAl with VNP14IMG to fill the temporal gaps in peatland fire information when using Landsat. However, the VNP14IMG product exhibited overestimation compared with the results of ToPeCAl applied to Landsat-8. Full article
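A hedged sketch of a ToPeCAl-style per-pixel test is shown below: a SWIR reflectance-ratio combustion index combined with SWIR-2 reflectance, TIR brightness temperature, and an aerosol-band check to suppress bright rooftops. All thresholds are placeholder assumptions rather than the published ToPeCAl values.

```python
# Minimal sketch of a ToPeCAl-like per-pixel combustion test. The index here is the
# SWIR-2/SWIR-1 TOA reflectance ratio; every threshold is an assumed placeholder.
import numpy as np

def topecal_like_mask(rho_swir1, rho_swir2, bt_tir1_k, rho_band1,
                      sici_thr=1.0, swir2_thr=0.15, bt_thr=300.0, aerosol_thr=0.2):
    sici = rho_swir2 / np.maximum(rho_swir1, 1e-6)        # SWIR combustion index (reflectance ratio)
    candidate = (sici > sici_thr) & (rho_swir2 > swir2_thr) & (bt_tir1_k > bt_thr)
    not_bright_rooftop = rho_band1 < aerosol_thr          # suppress highly reflective built-up pixels
    return candidate & not_bright_rooftop

# One smouldering-like pixel and one vegetation pixel (illustrative numbers)
print(topecal_like_mask(np.array([0.12, 0.20]), np.array([0.25, 0.10]),
                        np.array([315.0, 296.0]), np.array([0.05, 0.08])))
```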
Figure 1: (A) The algorithm development site, analysed using Getis–Ord optimised hotspot analysis applied to 1 km grid cell counts of Moderate Resolution Imaging Spectroradiometer (MODIS) active fires (2007–2017) and overlaid on the peatland map of Central Kalimantan, and (B) the validation site in Riau Province. The Hot Spots in red tones represent a high density of MODIS active fires with statistically significant confidence levels from 90% to 99%, while the Cold Spots, in blue tones, represent a low density of fire hotspots with statistical significance from 90% to 99%. The white colour is not statistically significant. The MODIS active fires are sourced from the National Aeronautics and Space Administration Fire Information for Resource Management System (NASA-FIRMS). The peatland map is sourced from the Ministry of Agriculture (MoA). The administrative boundaries are from the Geospatial Information Agency (BIG).
Figure 2: (A) An example of a combustion area on the 764 Red Green Blue (RGB) Landsat-8 image of September 20, 2015, with a combustion area surrounding non-combustion areas (forest, plantation, burnt area). (B) The Top of Atmosphere (TOA) planetary reflectance values of the Landsat-8 bands; solid lines refer to objects under clear sky conditions, while dotted lines relate to smoky objects. (C) The median values of the TOA brightness temperature of Thermal Infrared band-10 (TIR-1) plotted as clustered columns, while the median values of the ratio of Shortwave Infrared band-7 (SWIR-2) and SWIR-1 reflectance, the Shortwave Infrared Combustion Index (SICIρ), are shown as a black line with markers. Smoky conditions are shown in the light blue shaded area on the right, with clear sky conditions on the left. The red line is the SICIρ threshold used to separate flaming and smouldering from other background objects.
Figure 3: (A) Sample combustion area in the 764 RGB Landsat-8 image of 19 August 2015, which had high SWIR-1 and SWIR-2 reflectance; (B) enlargement of the combustion area showing 10 sample pixels; (C) the extracted SWIR-1 and SWIR-2 reflectances with SICIρ values; pixels #2, #3, and #7 had SICIρ ≤ 1.
Figure 4: Flowchart of Landsat-8 image processing for mapping tropical peatland combustion.
Figure 5: An example of buffered VNP14IMG point data overlaid with ToPeCAl results from Landsat-8. The VNP14IMG data and ToPeCAl mapping relate to September 28, 2018; there is a 3 and 15 hour time difference from Landsat-8 to the VNP14IMG acquisition times. The buffer zones are 187.5 m, 375 m, 500 m, 750 m, 1000 m, 1250 m, and 1500 m from both VNP14IMG points (5:30 UTC and 18:12 UTC).
Figure 6: (A,B) An example of peatland combustion surrounded by cloud cover on a 764 RGB Landsat-8 image of 4 September 2015, with the classified peatland combustion; (C,D) peatland combustion in heavy smoke conditions and the classified peatland combustion on 20 September 2015.
Figure 7: Comparison of the Global Operational Land Imager (GOLI) algorithm and the ToPeCAl implementation for peatland combustion using Landsat-8 on 28 September 2018 in Central Kalimantan. (A) The 764 RGB Landsat-8 image of the peatland combustion area; (B) the GOLI active fire (magenta) overlaid on the RGB image; (C) the ToPeCAl mapping result on the RGB image, shown in three classes of peatland combustion (S, FS, F); (D) an overlay of GOLI and ToPeCAl showing a larger area of combustion from ToPeCAl, especially for smouldering fire; (E,F) drone image and its enlargement showing smouldering peatland but no active/flaming fire.
Figure 8: The coverage of the ground truth collection in Central Kalimantan (27 September–6 October 2018) overlaid on the Landsat-8 images of 28 September 2018 (right) and 5 October 2018 (left).
Figure 9: (A) The 764 RGB Landsat-8 image of 28 September 2018; (B) the result of ToPeCAl mapping, which consists of smouldering (S), a mixture of flaming and smouldering (FS), and flaming (F), with ground truth base station locations marked with magenta triangles; (C) pictures of peatland fires taken from a helicopter on 28 September 2018. Field pictures were sourced from the local Indonesian Disaster Management Agency (BNPB) of Central Kalimantan Province.
Figure 10: (A) The 764 RGB Landsat-8 image of 28 September 2018 and (B) the result of peatland combustion mapping with the ground truth location marked with a red square; (C) a picture of a smouldering peatland fire taken from a drone on 1 October 2018.
Figure 11: (A) The 764 RGB Landsat-8 image of 5 October 2018, showing a combustion area that was predominantly vegetated; (B) the result of peatland combustion mapping; (C) drone image and (D) terrestrial picture of the smouldering vegetation area on 6 October 2018.
Figure 12: (A,B) The 764 RGB Landsat-8 image of 20 August 2016 showing a combustion area, with the result of peatland combustion mapping, and (C) an oblique helicopter picture taken on the same day in Riau Province; (D,E) the 764 RGB Landsat-8 image of 4 April 2018 and the result of peatland combustion mapping with the ground truth location marked with a red square; (F) pictures of peatland fires from the ground and a helicopter, sourced from the local BNPB of Riau Province on 4 April 2018.
Figure 13: POD of peatland combustion (S, FS, and F) detected by VNP14IMG from day-time and night-time data in buffer zones of 187.5 m to 1500 m.
16 pages, 9609 KiB  
Article
Mapping of River Terraces with Low-Cost UAS Based Structure-from-Motion Photogrammetry in a Complex Terrain Setting
by Hui Li, Lin Chen, Zhaoyang Wang and Zhongdi Yu
Remote Sens. 2019, 11(4), 464; https://doi.org/10.3390/rs11040464 - 24 Feb 2019
Cited by 19 | Viewed by 7155
Abstract
River terraces are the principal geomorphic features for unraveling tectonics, sea level, and climate conditions during the evolutionary history of a river. The increasing availability of high-resolution topography data generated by low-cost Unmanned Aerial Systems (UAS) and modern photogrammetry offer an opportunity to [...] Read more.
River terraces are the principal geomorphic features for unraveling tectonics, sea level, and climate conditions during the evolutionary history of a river. The increasing availability of high-resolution topography data generated by low-cost Unmanned Aerial Systems (UAS) and modern photogrammetry offers an opportunity to identify and characterize these features. In this paper, we assessed the capabilities of UAS-based Structure-from-Motion (SfM) photogrammetry, coupled with a river terrace detection algorithm, for mapping river terraces over a 1.9 km2 valley in a complex terrain setting, with a focus on the performance of this latest technology over such complex terrains. With the proposed image acquisition approach and SfM photogrammetry, we constructed a 3.8 cm resolution orthomosaic and digital surface model (DSM). The vertical accuracy of the DSM was assessed against 196 independent checkpoints measured with a real-time kinematic (RTK) GPS. The results indicated that the root mean square error (RMSE) and mean absolute error (MAE) were 3.1 cm and 2.9 cm, respectively. These encouraging results suggest that this low-cost, logistically simple method can deliver high-quality terrain datasets even in complex terrain, competitive with those obtained using more expensive laser scanning. A simple algorithm was then employed to detect river terraces from the generated DSM. The results showed that three levels of river terraces and a high-level floodplain were identified. Most of the detected river terraces were confirmed by field observations. Despite the highly erosive nature of fluvial systems, this work obtained good results, allowing fast analysis of fluvial valleys and their comparison. Overall, our results demonstrated that the low-cost UAS-based SfM technique could yield highly accurate ultrahigh-resolution topography data over complex terrain settings, making it particularly suitable for quick and cost-effective mapping of micro to medium-sized geomorphic features under such terrains in remote or poorly accessible areas. Methods discussed in this paper can also be applied to produce highly accurate digital terrain data over large spatial extents for other areas of complex terrain. Full article
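The accuracy figures quoted above (RMSE of 3.1 cm and MAE of 2.9 cm against 196 RTK checkpoints) come from a straightforward comparison of DSM elevations with surveyed elevations. A minimal sketch of that comparison is given below, assuming hypothetical arrays of checkpoint elevations; it is illustrative only and not the authors' processing chain.

    import numpy as np

    def dsm_vertical_accuracy(z_dsm, z_rtk):
        # RMSE and MAE of DSM elevations against RTK-GPS checkpoint elevations (same units).
        dh = np.asarray(z_dsm, dtype=float) - np.asarray(z_rtk, dtype=float)
        rmse = float(np.sqrt(np.mean(dh ** 2)))
        mae = float(np.mean(np.abs(dh)))
        return rmse, mae

    # z_dsm = dsm_values_at_checkpoints   # hypothetical: DSM sampled at the 196 checkpoints
    # z_rtk = rtk_checkpoint_elevations   # hypothetical: surveyed RTK-GPS elevations
    # rmse, mae = dsm_vertical_accuracy(z_dsm, z_rtk)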
Graphical abstract
Figure 1
Geographical location of the study area in relation to (a) China (as indicated by red dot), and (b) Dashi River watershed (as indicated by a red square). (c) The color shaded digital surface model (DSM) and (d) panoramic photograph illustrating the topography and landscape of the study area. Flow is west to east and photo is taken downstream looking upstream.
Figure 2
Unmanned Aerial Systems (UAS) aerial image acquisition and ground control points (GCP) collection setup. (a) Phantom 4p quadcopter at takeoff. (b) Flight lines of two flights (as shown by red and yellow color) and corresponding take-off/landing locations (as indicated by cyan stars). (c) Real-time kinematic (RTK) GPS base station setup. Examples of ground targets: (d) roadside painted targets and (e) 1 × 1 m painted canvas.
Figure 3
The locations of ground control points (GCPs) and ground truth points (GTPs) overlaid on the generated orthomosaic of the study area (a), as well as GCP targets of red paint (b) and painted canvas (c) seen from the acquired images.
Figure 4
DSM accuracy based on 196 GTPs. The histogram (a) shows the distribution of dH errors. The graph (b) shows the elevation estimate from the DSM as a function of the RTK-DGPS elevation.
Figure 5
The distribution of potential river terraces (a) and validated river terraces (b) overlaid on the shaded DSM of the study valley.
Figure 6
Fieldwork validation of the potential terrace areas detected in the study river valley. (a) Riser of T0 floodplain adjacent to the modern river channel. (b) Riser of T2 terrace. (c) Fluvial gravels on T2 terrace tread. (d) A side view of T3 bedrock strath terrace, showing fluvial sediments on the bedrock.
Figure 7
Camera locations and image overlaps. The legend indicates the number of overlapping images of that location.
Figure 8
A 3D representation of the study area, illustrating the river terraces of different levels. The numbers in parentheses indicate the heights of terrace treads above the present bankfull level.
16 pages, 2424 KiB  
Technical Note
Information Needs of Next-Generation Forest Carbon Models: Opportunities for Remote Sensing Science
by Céline Boisvenue and Joanne C. White
Remote Sens. 2019, 11(4), 463; https://doi.org/10.3390/rs11040463 - 23 Feb 2019
Cited by 26 | Viewed by 6588
Abstract
Forests are integral to the global carbon cycle, and as a result, the accurate estimation of forest structure, biomass, and carbon are key research priorities for remote sensing science. However, estimating and understanding forest carbon and its spatiotemporal variations requires diverse knowledge from [...] Read more.
Forests are integral to the global carbon cycle, and as a result, the accurate estimation of forest structure, biomass, and carbon are key research priorities for remote sensing science. However, estimating and understanding forest carbon and its spatiotemporal variations requires diverse knowledge from multiple research domains, none of which currently offer a complete understanding of forest carbon dynamics. New large-area forest information products derived from remotely sensed data provide unprecedented spatial and temporal information about our forests, which is information that is currently underutilized in forest carbon models. Our goal in this communication is to articulate the information needs of next-generation forest carbon models in order to enable the remote sensing community to realize the best and most useful application of its science, and perhaps also inspire increased collaboration across these research fields. While remote sensing science currently provides important contributions to large-scale forest carbon models, more coordinated efforts to integrate remotely sensed data into carbon models can aid in alleviating some of the main limitations of these models; namely, low sample sizes and poor spatial representation of field data, incomplete population sampling (i.e., managed forests exclusively), and an inadequate understanding of the processes that influence forest carbon accumulation and fluxes across spatiotemporal scales. By articulating the information needs of next-generation forest carbon models, we hope to bridge the knowledge gap between remote sensing experts and forest carbon modelers, and enable advances in large-area forest carbon modeling that will ultimately improve estimates of carbon stocks and fluxes. Full article
(This article belongs to the Section Forest Remote Sensing)
Graphical abstract
Figure 1
Potential limits to vegetation net primary production (NPP) based on the fundamental physiological limits of vapor pressure deficit, water balance, light, and temperature [15]. NPP is the rate of carbon accumulation in plants after losses from plant respiration and other metabolic processes (which are necessary to maintain the plant’s living systems) are taken into account [29].
13 pages, 31129 KiB  
Article
Source Parameter Estimation of the 2009 Ms6.0 Yao’an Earthquake, Southern China, Using InSAR Observations
by Wei Qu, Bing Zhang, Zhong Lu, Jin Woo Kim, Qin Zhang, Yuan Gao, Ming Hao, Wu Zhu and Feifei Qu
Remote Sens. 2019, 11(4), 462; https://doi.org/10.3390/rs11040462 - 23 Feb 2019
Cited by 8 | Viewed by 4061
Abstract
On 9 July 2009, an Ms6.0 earthquake occurred in mountainous area of Yao’an in Yunnan province of Southern China. Although the magnitude of the earthquake was moderate, it attracted the attention of many Earth scientists because of its threat to the safety of [...] Read more.
On 9 July 2009, an Ms6.0 earthquake occurred in a mountainous area of Yao’an in Yunnan Province, Southern China. Although the magnitude of the earthquake was moderate, it attracted the attention of many Earth scientists because of its threat to the safety of the population and its harm to the local economy. However, the source parameters remain poorly understood due to the sparse distribution of seismic and GNSS (Global Navigation Satellite System) stations in this mountainous region. Therefore, in this study, two L-band ALOS (Advanced Land Observing Satellite-1) PALSAR (Phased Array type L-band Synthetic Aperture Radar) images from an ascending track are used to investigate the coseismic deformation field, and to further determine the location, fault geometry and slip distribution of the earthquake. The results show that the Yao’an earthquake was a strike-slip event with a down-dip slip component. The slip mainly occurred at depths of 3–8 km, with a maximum slip of approximately 70 cm at a depth of 6 km, which is shallower than the reported focal depth of ~10 km. An analysis of the seismic activity and tectonics of the Yao’an area reveals that the 9 July 2009 Yao’an earthquake was the result of regional stress accumulation, which eventually led to the rupture of the northwesternmost part of the Maweijing fault. Full article
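One quantity that ties an inverted slip distribution of this kind back to the catalog magnitude is the seismic moment, obtained by summing slip over the fault patches. The sketch below shows this standard bookkeeping step under an assumed crustal shear modulus; the patch sizes and slip values are placeholders rather than the study's actual model.

    import numpy as np

    def moment_magnitude(slip_m, patch_area_m2, shear_modulus_pa=3.0e10):
        # Seismic moment M0 = mu * sum(A_i * s_i), and moment magnitude Mw (M0 in N m).
        m0 = shear_modulus_pa * np.sum(np.asarray(slip_m) * np.asarray(patch_area_m2))
        mw = (2.0 / 3.0) * (np.log10(m0) - 9.1)
        return m0, mw

    # Placeholder example: a 22 km x 12 km plane split into 1 km x 1 km patches,
    # each carrying 0.1 m of slip (purely illustrative, not the inverted model):
    # slip = np.full(22 * 12, 0.1)
    # area = np.full(22 * 12, 1.0e6)
    # m0, mw = moment_magnitude(slip, area)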
(This article belongs to the Special Issue Remote Sensing of Tectonic Deformation)
Graphical abstract
Figure 1
(a) Tectonic location of Yao’an and its surroundings in China, as well as the major plate boundaries in and near mainland China. The red box indicates Yao’an and its adjacent tectonic belts (b). The white solid lines represent major active faults [30]. The small red circles represent major cities. The red star indicates Yao’an. The focal mechanism balls represent the 2009 Yao’an Ms6.0 earthquake and the 2000 Yao’an Ms6.5 earthquake (GCMT). The characters in italics represent the major faults of the region, such as Ganzi-Yushu (1), Xianshuihe (2), Longriba (3), Longmenshan (4), Anninghe (5), Zemuhe (6), Xiaojinghe (7), Lijiang–Xiaojinhe (8), Jinshajiang (9), Nujiang (10), Red River (11), Lancangjiang (12), Nahua-Chuxiong (13), and Maweijing (14). The block surrounded by the black dotted line is the Sichuan-Yunnan block (SYB), also called the Chuandian Block. The rhombic block enclosed by the black thin solid line is the Yunnan block (YNB), also called the Diandong Block.
Figure 2
Coseismic interferogram (a) and resampled image (b) of the 2009 Yao’an earthquake.
Figure 3
Coseismic deformation and model for the uniform slip inversion of the Yao’an earthquake; (a) observed coseismic interferogram (the black line represents the modeled fault trace); (b) model synthetic interferogram; (c) residual interferogram between observation and modeling; (d) a–a’ is a profile across the fault trace, with the profiles of line-of-sight (LOS) displacement (blue dots), model LOS displacement (red dots) and topography (grey).
Figure 4
Trade-off curve between roughness and relative fitting residuals. The cross represents the location of the smoothing factor used in modeling.
Figure 5
Coseismic deformation and model for the distributed slip inversion of the Yao’an earthquake; (a) observed coseismic deformation; (b) modeled deformation for the distributed slip model; and (c) residual, which is the difference between (a) and (b).
Figure 6
(a) Slip distribution for a fault plane of 22 km length, 12 km downdip width, and 87° dip; (b) 3-D view of the fault from the WSW; (c) 1σ uncertainty in fault slip estimated through a Monte Carlo process with 100 inversions. The thick black line at the top of each subfigure represents the projection of the fault trace on the surface.
Figure 7
The focal mechanisms of the 2000 Ms6.5 Yao’an earthquake and the 2009 Ms6.0 Yao’an earthquake [47,49], and the earthquake sequence of the 2009 Ms6.0 Yao’an earthquake. The blue solid triangle represents the major city.
23 pages, 43417 KiB  
Article
Multi-Frequency, Multi-Sonar Mapping of Shallow Habitats—Efficacy and Management Implications in the National Marine Park of Zakynthos, Greece
by Elias Fakiris, Philippe Blondel, George Papatheodorou, Dimitris Christodoulou, Xenophon Dimas, Nikos Georgiou, Stavroula Kordella, Charalampos Dimitriadis, Yuri Rzhanov, Maria Geraga and George Ferentinos
Remote Sens. 2019, 11(4), 461; https://doi.org/10.3390/rs11040461 - 23 Feb 2019
Cited by 61 | Viewed by 9407
Abstract
In this work, multibeam echosounder (MBES) and dual frequency sidescan sonar (SSS) data are combined to map the shallow (5–100 m) benthic habitats of the National Marine Park of Zakynthos (NMPZ), Greece, a Marine Protected Area (MPA). NMPZ hosts extensive prairies of the [...] Read more.
In this work, multibeam echosounder (MBES) and dual frequency sidescan sonar (SSS) data are combined to map the shallow (5–100 m) benthic habitats of the National Marine Park of Zakynthos (NMPZ), Greece, a Marine Protected Area (MPA). NMPZ hosts extensive prairies of the protected Mediterranean phanerogams Posidonia oceanica and Cymodocea nodosa, as well as reefs and sandbanks. Seafloor characterization is achieved using the multi-frequency acoustic backscatter of: (a) the two simultaneous frequencies of the SSS (100 and 400 kHz) and (b) the MBES (180 kHz), as well as the MBES bathymetry. Overall, these high-resolution datasets cover an area of 84 km2 with ground coverage varying from 50% to 100%. Image texture, terrain and backscatter angular response analyses are applied to the above, to extract a range of statistical features. Those have different spatial densities and so they are combined through an object-based approach based on the full-coverage 100-kHz SSS mosaic. Supervised classification is applied to data models composed of operationally meaningful combinations of the above features, reflecting single-sonar or multi-sonar mapping scenarios. Classification results are validated against a detailed expert interpretation habitat map making use of extensive ground-truth data. The relative gain of one system or one feature extraction method over another is thoroughly examined. The frequency-dependent separation of benthic habitats showcases the potential of multi-frequency backscatter and bathymetry from different sonars, improving evidence-based interpretations of shallow benthic habitats. Full article
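The supervised classification and validation workflow summarized above (segment-level features from several sonars, classified and then scored against an expert habitat map with Cohen's kappa) can be illustrated with a short generic sketch. The feature table and labels below are hypothetical stand-ins, and a random forest is used simply as an example classifier; the paper's own feature selection and data models are not reproduced.

    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.metrics import cohen_kappa_score
    from sklearn.model_selection import train_test_split

    # Hypothetical segment-level feature matrix: one row per SSS segment, columns such as
    # 100-kHz / 400-kHz backscatter statistics, texture measures and bathymetry derivatives.
    rng = np.random.default_rng(0)
    X = rng.normal(size=(500, 12))
    y = rng.integers(0, 4, size=500)   # stand-in habitat labels (e.g., P. oceanica, sand, ...)

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
    clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
    kappa = cohen_kappa_score(y_te, clf.predict(X_te))   # agreement with the reference map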
(This article belongs to the Special Issue Advances in Undersea Remote Sensing)
Graphical abstract
Figure 1
The study area of the National Marine Park of Zakynthos (NMPZ), Greece, overlaid by the geophysical and tow-camera survey lines. Blue boxes indicate areas used as examples later in the text. At the top-left, the research vessel and the main instrumentation used in the field work are presented.
Figure 2
Chirp shallow seismic profiles showing the different geological and benthic habitats of the study area.
Figure 3
Flow diagram of the object-based feature extraction method, starting from the swath sonar datasets (left column), followed by the feature extraction stage (middle column) showing representative examples of features organized per feature extraction method, and resulting in the segment-averaged corresponding features (right column).
Figure 4
The detailed manual habitat map generated taking all available marine acoustic and ground truth data into account. Bottom: example underwater images of each habitat type.
Figure 5
Schematic table of the 49 features, organized by sonar system, feature extraction method and data model. Grey boxes indicate features that were initially considered in each data model, while black ones correspond to the ones eventually selected through the feature selection stage. Vertical or horizontal colored bars indicate groups of features that compose a particular data model.
Figure 6
An example showing how ICA decomposition was used for separating 400-kHz transmit interference and nadir zone artefacts in the 100-kHz SSS mosaic feature maps. The GLCM contrast map (right side) is arbitrarily chosen to exhibit the noise removal process.
Figure 7
Left (a): angular response ranges as defined in [65]. Right (b): angular ranges as proposed by FA (factor analysis) loadings of all stacked ARCs constructed from the MBES data.
Figure 8
Segmentation (right) of the 100 kHz SSS mosaic (left) using the Segmentation by Weighted Aggregation (SWA) method.
Figure 9
Average angular response curves (ARCs) per habitat type, as derived from the MBES backscatter data.
Figure 10
A dual frequency SSS record (Area A in Figure 1) with P. oceanica, C. nodosa and sand beds. (a) The 100 kHz record, (b) the 400 kHz record and (c) a pseudo-color image with green and blue channels corresponding to 100 kHz and 400 kHz, respectively.
Figure 11
The frequency-dependent separation of bottom types as made evident in the available backscatter mosaics of a sub-region (Area B in Figure 1): (a) 100 kHz SSS, (b) 400 kHz SSS and (c) 180 kHz MBES. P. oceanica and C. nodosa beds are indicated with arrows.
Figure 12
The classification accuracy results (Cohen’s Kappa coefficient) per data model (described in Figure 5).
Figure 13
Comparison between the manual habitat map (a), calculated as the most frequent class per segment, and the best automated one (b), corresponding to data model m10 (all sonar systems and feature extraction techniques considered).
Figure 14
Per-habitat-type classification accuracy comparison between the data models that correspond to single-sonar mapping using either the SSS (m3) or the MBES (m4, m6 and m8) systems, and data model m10, which corresponds to the multi-sonar mapping scenario. Data models are described in Figure 5, while “Bs” stands for backscatter.
25 pages, 12067 KiB  
Article
Assessment of MERRA-2 Surface PM2.5 over the Yangtze River Basin: Ground-based Verification, Spatiotemporal Distribution and Meteorological Dependence
by Lijie He, Aiwen Lin, Xinxin Chen, Hao Zhou, Zhigao Zhou and Peipei He
Remote Sens. 2019, 11(4), 460; https://doi.org/10.3390/rs11040460 - 23 Feb 2019
Cited by 81 | Viewed by 6875
Abstract
A good understanding of how meteorological conditions exacerbate or mitigate air pollution is critical for developing robust emission reduction policies. Thus, based on a multiple linear regression (MLR) model in this study, the quantified impacts of six meteorological variables on PM2.5 (i.e., [...] Read more.
A good understanding of how meteorological conditions exacerbate or mitigate air pollution is critical for developing robust emission reduction policies. Thus, based on a multiple linear regression (MLR) model in this study, the quantified impacts of six meteorological variables on PM2.5 (i.e., particulate matter with a diameter of 2.5 µm or less) and its major components were estimated over the Yangtze River Basin (YRB). The 38-year (1980–2017) daily PM2.5 and meteorological data were derived from the newly-released Modern-Era Retrospective analysis for Research and Applications, Version 2 (MERRA-2) products. The MERRA-2 PM2.5 was underestimated compared with ground measurements, partly due to the bias in the MERRA-2 Aerosol Optical Depth (AOD) assimilation. An overall increasing trend in each PM2.5 component occurred over the whole study period; however, this has been curbed since 2007. The MLR model suggested that meteorological variability could explain up to 67% of the PM2.5 changes. PM2.5 was robustly anti-correlated with surface wind speed, precipitation and boundary layer height (BLH), but was positively correlated with temperature throughout the YRB. The relationship of relative humidity (RH) and total cloud cover with PM2.5 showed regional dependencies, with negative correlation in the Yangtze River Delta (YRD) and positive correlation in the other areas. In particular, PM2.5 was most sensitive to surface wind speed, and the sensitivity was approximately −2.42 µg m−3 m−1 s. This study highlighted the impact of meteorological conditions on PM2.5 growth, although it was much smaller than the anthropogenic emissions impact. Full article
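The multiple linear regression used above (daily PM2.5 regressed on six meteorological drivers, with the fitted coefficients interpreted as sensitivities such as −2.42 µg m−3 per m s−1 of wind speed) is a standard least-squares fit. A minimal per-grid-cell sketch is shown below; the variable names and synthetic data are placeholders, not the MERRA-2 fields themselves.

    import numpy as np
    from sklearn.linear_model import LinearRegression

    # Hypothetical daily series for one grid cell: wind, temperature, precipitation,
    # relative humidity, total cloud cover and boundary layer height as predictors.
    rng = np.random.default_rng(1)
    met = rng.normal(size=(365, 6))                     # standardized meteorological predictors
    pm25 = 40 - 2.4 * met[:, 0] + rng.normal(size=365)  # synthetic PM2.5 response (ug m-3)

    model = LinearRegression().fit(met, pm25)
    betas = model.coef_          # beta_1 ... beta_6, the sensitivities mapped in Figures 12-17
    r2 = model.score(met, pm25)  # fraction of PM2.5 variability explained by meteorology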
(This article belongs to the Section Atmospheric Remote Sensing)
Figure 1
Distribution of ground air monitoring stations and China Aerosol Remote Sensing Network (CARSNET) sites over the Yangtze River Basin (YRB). The YRB was divided into four regions, labeled from 1 to 4, which are the Yangtze River Delta (YRD), Central China (CC), Sichuan Basin (SB) and the source of the YRB (SYR).
Figure 2
Scatter density maps between MERRA-2 and ground-based daily mean PM2.5 concentrations over the (a) YRB, (b) YRD, (c) CC, (d) SB and (e) SYR for the period of Jan. 2015–Dec. 2016. The parameter N represents the number of matchups, R is the correlation coefficient, RMSE is the root mean square error, RMB is the relative mean bias and MAE is the mean absolute error. The color bars represent the density of the matchups.
Figure 3
Monthly variations in PM2.5 concentrations derived from MERRA-2 reanalysis (red dashed lines) and ground sites (blue dashed lines) over the (a) YRB, (b) YRD, (c) CC, (d) SB and (e) SYR for the period of Jan. 2015–Dec. 2016. The shading represents the standard deviation of MERRA-2 (red) and ground-based PM2.5 (gray).
Figure 4
Scatter plots of daily averaged (a) and annual mean AOD (b) between MERRA-2 and CARSNET AOD at five ground sites of the YRB during 2013–2016. In (b), the vertical and horizontal bars refer to the standard deviations of the annual mean MERRA-2 and CARSNET AOD, respectively.
Figure 5
(a) Daily variations in AOD from satellite retrievals (Aqua and Terra), ground observations (CARSNET) and MERRA-2 reanalysis at Wuhan in 2016. The contributions of each aerosol species derived from MERRA-2 reanalysis are also shown. (b) Daily variations in AOD during a heavy pollution episode.
Figure 6
Annual mean distributions of each MERRA-2 PM2.5 component, including (a) PM2.5, (b) Sulfate PM2.5, (c) Black Carbon PM2.5, (d) Organic Carbon PM2.5, (e) Dust PM2.5 and (f) Sea Salt PM2.5 over the YRB during 1980–2017.
Figure 7
Seasonal variations of each MERRA-2 PM2.5 component: (left) PM2.5, (middle) SO4 and (right) BC over the YRB during 1980–2017.
Figure 8
Seasonal variations of each MERRA-2 PM2.5 component: (left) OC, (middle) Dust2.5 and (right) SS2.5.
Figure 9
Annual mean trends in MERRA-2 PM2.5 and its five major components over the YRB from 1980 to 2017, in units of µg m−3.
Figure 10
Spatial patterns of each MERRA-2 PM2.5 component over the YRB from 1980 to 2017. The pixels marked with black dots represent significant trends (p < 0.05).
Figure 11
Coefficients of determination (R2) of the MLR model between PM2.5 and the six meteorological variables.
Figure 12
Multiple linear regression coefficients (β1) of each PM2.5 component with surface wind speed (Wind), in units of µg m−3 m−1 s. (a–f) refer to the regression coefficients of PM2.5, SO4, BC, OC, Dust2.5 and SS2.5 with surface wind speed, respectively.
Figure 13
Multiple linear regression coefficients (β2) of each PM2.5 component with surface temperature (T), in units of µg m−3 K−1. (a–f) refer to the regression coefficients of PM2.5, SO4, BC, OC, Dust2.5 and SS2.5 with surface temperature, respectively.
Figure 14
Multiple linear regression coefficients (β3) of each PM2.5 component with surface precipitation (Precip), in units of µg m−3 mm−1 day. (a–f) refer to the regression coefficients of PM2.5, SO4, BC, OC, Dust2.5 and SS2.5 with precipitation, respectively.
Figure 15
Multiple linear regression coefficients (β4) of each PM2.5 component with relative humidity (RH), in units of µg m−3 %−1. The blank grids indicate invalid values. (a–f) refer to the regression coefficients of PM2.5, SO4, BC, OC, Dust2.5 and SS2.5 with relative humidity, respectively.
Figure 16
Multiple linear regression coefficients (β5) of each PM2.5 component with total cloud cover (Cloud), in units of µg m−3 %−1. (a–f) refer to the regression coefficients of PM2.5, SO4, BC, OC, Dust2.5 and SS2.5 with total cloud cover, respectively.
Figure 17
Multiple linear regression coefficients (β6) of each PM2.5 component with boundary layer height (BLH), in units of µg m−3 m−1. (a–f) refer to the regression coefficients of PM2.5, SO4, BC, OC, Dust2.5 and SS2.5 with boundary layer height, respectively.
Figure A1
Scatter plot between GWR and ground-based annual mean PM2.5 concentrations derived from 476 air quality monitoring stations of the YRB for the period of Jan. 2015–Dec. 2016. The black line is the 1:1 line and the red line is a linear fit line.
Figure A2
The spatial (top) and frequency (bottom) distributions of multiyear-averaged surface PM2.5 concentrations derived from MERRA-2 (a,b) and GWR-1 (c,d) over the YRB for the period of 1998–2016.
Figure A3
Temporal variations of annual mean PM2.5 concentration derived from MERRA-2 (black) and GWR (red) over the YRB from 1998 to 2016.
Figure A4
Spatial distribution of the PM2.5 annual trend derived from MERRA-2 (a) and GWR (b) over the YRB from 1998 to 2016. Grids marked by grey points represent PM2.5 trends that did not exceed the 95% significance level.
Figure A5
Coefficients of determination (R2) of the MLR model between PM2.5 and the six meteorological variables in spring (a), summer (b), autumn (c) and winter (d).
Figure A6
Annual trends in meteorological factors over the YRB from 1980 to 2017.
23 pages, 2768 KiB  
Article
Detecting Square Markers in Underwater Environments
by Jan Čejka, Fabio Bruno, Dimitrios Skarlatos and Fotis Liarokapis
Remote Sens. 2019, 11(4), 459; https://doi.org/10.3390/rs11040459 - 23 Feb 2019
Cited by 26 | Viewed by 6015
Abstract
Augmented reality can be deployed in various application domains, such as enhancing human vision, manufacturing, medicine, military, entertainment, and archeology. One of the least explored areas is the underwater environment. The main benefit of augmented reality in these environments is that it can [...] Read more.
Augmented reality can be deployed in various application domains, such as enhancing human vision, manufacturing, medicine, military, entertainment, and archeology. One of the least explored areas is the underwater environment. The main benefit of augmented reality in these environments is that it can help divers navigate to points of interest or present interesting information about archaeological and touristic sites (e.g., ruins of buildings, shipwrecks). However, the harsh sea environment affects computer vision algorithms and complicates the detection of objects, which is essential for augmented reality. This paper presents a new algorithm for the detection of fiducial markers that is tailored to underwater environments. It also proposes a method that generates synthetic images with such markers in these environments. This new detector is compared with existing solutions using synthetic images and images taken in the real world, showing that it performs better than other detectors: it finds more markers than faster algorithms and runs faster than robust algorithms that detect the same amount of markers. Full article
(This article belongs to the Special Issue Underwater 3D Recording & Modelling)
Graphical abstract
Figure 1
General workflow of algorithms detecting square markers.
Figure 2
Workflows of ARUco, the Base version of UWARUco, and the Masked version of UWARUco.
Figure 3
(a) shows a marker recorded in bad visibility conditions. ARUco uses adaptive thresholding and compares pixels to a threshold that is decreased by seven (see (b)); notice a discontinuous border of the marker. The Base version of UWARUco does not decrease the threshold (see (c)), which preserves the border, but introduces many small objects.
Figure 4
Brightness mask (a) and noise mask (b) computed from the input image in Figure 3a, and the final mask computed by ANDing both masks together (c). When applied to the thresholded image in Figure 3c, the number of objects is reduced, while the border is still preserved (d). Small pixel-size objects are further removed by a median filter.
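The two captions above describe the core difference between the detectors: ARUco subtracts a constant (seven) from the local adaptive threshold, while the UWARUco variants keep the full threshold and instead suppress spurious blobs with brightness/noise masks and a median filter. The snippet below is a rough sketch of the thresholding step only, using OpenCV's adaptiveThreshold; the block size is an arbitrary assumption, and the paper's mask construction (panels a–c) is only indicated by comments.

    import cv2

    def binarize(gray, offset):
        # Local mean thresholding; 'offset' is the constant subtracted from the local mean
        # (ARUco-like behaviour with offset=7, UWARUco 'Base'-like behaviour with offset=0).
        return cv2.adaptiveThreshold(gray, 255, cv2.ADAPTIVE_THRESH_MEAN_C,
                                     cv2.THRESH_BINARY_INV, 15, offset)

    # gray = cv2.imread("frame.png", cv2.IMREAD_GRAYSCALE)   # hypothetical input frame
    # aruco_like = binarize(gray, 7)     # crisp, but may break marker borders underwater
    # base_like = binarize(gray, 0)      # keeps borders, adds many small noise objects
    # cleaned = cv2.medianBlur(base_like, 3)   # removes pixel-sized objects, as in Figure 4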
Figure 5">
Figure 5
Effect of turbidity and glow in a real image. Due to the turbidity, distant objects are less visible than close objects, as in foggy images. Furthermore, notice a small glow around the central and the right marker.
Figure 6
Pipeline of the synthetic image generator. See the text for further details.
Figure 7
Synthetically-generated image.
Figure 8
Results of the reference test. Top row: results of marker-detecting algorithms; bottom row: results of image-improving algorithms; left column: percentage of detected markers; right column: detection time in milliseconds. Many solutions detected 100% of markers, so their results overlap. CLAHE, contrast-limited adaptive histogram equalization; WB, white balancing; MBUWWB, marker-based underwater white balancing.
Figure 9
Results of the test with bad visibility. Top row: results of marker-detecting algorithms; bottom row: results of image-improving algorithms; left column: percentage of detected markers; right column: detection time in milliseconds. Since the detection time was evaluated only when the markers were detected, there were no values when markers were lost.
Figure 10
Results of the test with glowing markers. Top row: results of marker-detecting algorithms; bottom row: results of image-improving algorithms; left column: percentage of detected markers; right column: detection time in milliseconds. As with the reference test, many solutions again detected 100% of markers, and their results overlap.
15 pages, 1837 KiB  
Article
Spectral Heterogeneity Predicts Local-Scale Gamma and Beta Diversity of Mesic Grasslands
by H. Wayne Polley, Chenghai Yang, Brian J. Wilsey and Philip A. Fay
Remote Sens. 2019, 11(4), 458; https://doi.org/10.3390/rs11040458 - 23 Feb 2019
Cited by 21 | Viewed by 5290
Abstract
Plant species diversity is an important metric of ecosystem functioning, but field assessments of diversity are constrained in number and spatial extent by labor and other expenses. We tested the utility of using spatial heterogeneity in the remotely-sensed reflectance spectrum of grassland canopies [...] Read more.
Plant species diversity is an important metric of ecosystem functioning, but field assessments of diversity are constrained in number and spatial extent by labor and other expenses. We tested the utility of using spatial heterogeneity in the remotely-sensed reflectance spectrum of grassland canopies to model both spatial turnover in species composition and abundances (β diversity) and species diversity at aggregate spatial scales (γ diversity). Shannon indices of γ and β diversity were calculated from field measurements of the number and relative abundances of plant species at each of two spatial grains (0.45 m2 and 35.2 m2) in mesic grasslands in central Texas, USA. Spectral signatures of reflected radiation at each grain were measured from ground-level or an unmanned aerial vehicle (UAV). Partial least squares regression (PLSR) models explained 59–85% of variance in γ diversity and 68–79% of variance in β diversity using spatial heterogeneity in canopy optical properties. Variation in both γ and β diversity was associated most strongly with heterogeneity in reflectance in blue (350–370 nm), red (660–770 nm), and near infrared (810–1050 nm) wavebands. Modeled diversity was more sensitive by a factor of three to a given level of spectral heterogeneity when derived from data collected at the smaller than at the larger spatial grain. As estimated from calibrated PLSR models, β diversity was greater, but γ diversity was smaller for restored grassland on a lowland clay soil than on an upland silty clay soil. Both γ and β diversity of grassland can be modeled by using spatial heterogeneity in vegetation optical properties provided that the grain of reflectance measurements is conserved. Full article
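The PLSR models described above relate a vector of per-waveband coefficients of variation (spectral heterogeneity among plots or patches) to a single diversity value. A compact sketch of that model form is given below using scikit-learn's PLSRegression; the arrays are synthetic placeholders, and the number of components is an arbitrary choice rather than the one selected in the study.

    import numpy as np
    from sklearn.cross_decomposition import PLSRegression

    rng = np.random.default_rng(2)
    # Hypothetical predictors: CV of brightness-normalized reflectance in 200 wavebands,
    # one row per patch or community sample; response: Shannon gamma (or beta) diversity.
    cv_reflectance = rng.uniform(0.05, 0.5, size=(48, 200))
    diversity = rng.uniform(1.0, 3.0, size=48)

    pls = PLSRegression(n_components=5).fit(cv_reflectance, diversity)
    predicted = pls.predict(cv_reflectance).ravel()
    weights = pls.coef_.ravel()   # analogous to the waveband weightings in Figures 4 and 7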
(This article belongs to the Special Issue Applications of Spectroscopy in Agriculture and Vegetation Research)
Graphical abstract
Figure 1
Illustration of the sampling design employed to develop partial least squares regression (PLSR) models relating spectral signatures of grassland canopies to species diversity: (a) cover by species was measured in 76 cm diameter plots, (b) eight of which were randomly located in each 7 m diameter patch. (c) Patches were positioned in pasture, native prairie, and eight stands of restored grassland, the latter sampled in both spring and summer (n = 46 samples in total). Diversity was modeled at the patch (plot aggregate) and community (patch aggregate) spatial scales. Patch diversity was calculated by aggregating species data across the eight 76 cm plots sampled per patch. Community diversity was calculated by aggregating species cover data across random combinations of sampled patches, including combinations of four (an example of which is illustrated), 8, 16, 24, 32, and 40 patches. Diversity at the patch and community scales was modeled using the CV in reflectance among plots or patches at each waveband measured. Note that illustrations of the eight rectangular stands in restored grassland and of grassland types and patches (c) are not scaled to actual size.
Figure 2
Mean values of reflectance as a function of wavelength for 76 cm diameter plots that differed in functional group dominance. Reflectance per waveband was averaged across plots in which a given functional group contributed >50% of plant cover (n = 4–6 plots per group).
Figure 3
Relationships between measured diversity of eight-plot (each 0.45 m2) aggregates of grassland vegetation (patches) and diversity predicted from spatial heterogeneity in brightness-normalized values of canopy reflectance using PLSR: PLSR explained 59% and 68% of the variance in (a) γ and (b) β diversity, respectively (n = 48; root mean predicted residual sum of squares (PRESS) = 1.02 and 1.04, respectively). Lines show the 1:1 relationship with measured diversity.
Figure 4
Standardized weightings of regression coefficients from PLSR relationships between two indices of species diversity and spatial heterogeneity in brightness-normalized values of canopy reflectance: (a) γ diversity and (b) β diversity across eight-plot (each 0.45 m2) aggregates of grassland vegetation (patches; n = 48).
Figure 5
Gamma (γ) diversity of random assemblages of plots (each 0.45 m2) sampled in 7 m diameter patches with high (remnant prairie), intermediate (restored grassland), and low diversity (pasture). The value of γ diversity approached saturation as the number of plots sampled per patch was increased from six to eight, as indicated by hyperbolic regression fits to data at high (adj. r2 = 0.95; standard error of estimate (SEE) = 0.77) and intermediate diversity (adj. r2 = 0.71; SEE = 1.09). There was no significant relationship between γ diversity and plot number for the low diversity pasture (P = 0.29).
Figure 6
Relationships between measured diversity of grassland communities and diversity predicted from spatial heterogeneity in brightness-normalized values of canopy reflectance using PLSR: (a) γ and (b) unit interval of β diversity (β0-1) of grassland communities created by randomly selecting between 4 and 40 patches. PLSR explained 85% and 79% of the variance in γ diversity and β0-1, respectively (n = 47; root mean predicted residual sum of squares, PRESS = 0.65 and 1.09, respectively). Lines show the 1:1 relationship with measured diversity. Shannon β diversity ranged from 2.7 to 1.9 and is inversely correlated with β0-1.
Figure 7
Standardized weightings of regression coefficients from PLSR relationships between two indices of species diversity and spatial heterogeneity in brightness-normalized values of canopy reflectance: (a) γ diversity and (b) unit interval of β diversity (β0-1) of combinations of 7 m diameter patches of grassland vegetation (communities; n = 47).
Figure 8
Diversity of restored grassland on clay and silty clay soils during spring 2016–2018: (a) γ and (b) β diversity were estimated from reflectance values using PLSR models of relationships between diversity and the CV in brightness-normalized reflectance values of visible through near infrared wavebands.
21 pages, 3985 KiB  
Article
Retrieving the Lake Trophic Level Index with Landsat-8 Image by Atmospheric Parameter and RBF: A Case Study of Lakes in Wuhan, China
by Yadong Zhou, Baoyin He, Fei Xiao, Qi Feng, Jiefeng Kou and Hui Liu
Remote Sens. 2019, 11(4), 457; https://doi.org/10.3390/rs11040457 - 22 Feb 2019
Cited by 17 | Viewed by 4985
Abstract
The importance of atmospheric correction is pronounced for retrieving physical parameters in aquatic systems. To improve the retrieval accuracy of trophic level index (TLI), we built eight models with 43 samples in Wuhan and proposed an improved method by taking atmospheric water vapor [...] Read more.
The importance of atmospheric correction is pronounced for retrieving physical parameters in aquatic systems. To improve the retrieval accuracy of the trophic level index (TLI), we built eight models with 43 samples in Wuhan and proposed an improved method by taking atmospheric water vapor (AWV) information and Landsat-8 (L8) remote sensing imagery into the input layer of a radial basis function (RBF) neural network. All image information used in the RBF models has been radiometrically calibrated. Except for model(a), the image data used in the other seven models were not atmospherically corrected. The eight models have different inputs and the same output (TLI). The models are as follows: (1) model(a), the inputs are seven single bands; (2) model(c), besides seven single bands (b1, b2, b3, b4, b5, b6, b7), we added the AWV parameter k1 to the inputs; (3) model(c1), the inputs are the AWV difference coefficient k2 and the seven bands; (4) model(c2), the input layers include the seven single bands, k1 and k2; (5) model(b), seven band ratios (b3/b5, b1/b2, b3/b7, b2/b5, b2/b7, b3/b6, and b3/b4) were used as input parameters; (6) model(b1), the inputs are k1 and the seven band ratios; (7) model(b2), the inputs are k2 and the seven band ratios; (8) model(b3), the inputs are k1, k2, and the seven band ratios. We evaluated the models with the root mean squared error (RMSE): model(a) > model(b3) > model(b1) > model(c2) > model(c) > model(b) > model(c1) > model(b2). The RMSEs of the eight models are 12.762, 11.274, 10.577, 8.904, 8.361, 6.396, 5.389, and 5.104, respectively. Models b2 and c1 are the two best models in these experiments, which confirms that both the seven single bands and the seven band ratios, when combined with k2, are superior to the other input combinations. Results also corroborate that most lakes in the Wuhan urban area are in mesotrophic and light eutrophic states. Full article
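The abstract above describes feeding band (or band-ratio) values together with the atmospheric water vapor terms k1 and k2 into an RBF neural network and scoring the candidate input sets by RMSE. The sketch below imitates that comparison with an RBF-kernel ridge regression as a stand-in for the paper's RBF network; the sample data, feature sets and hyperparameters are all hypothetical placeholders.

    import numpy as np
    from sklearn.kernel_ridge import KernelRidge
    from sklearn.metrics import mean_squared_error
    from sklearn.model_selection import cross_val_predict

    rng = np.random.default_rng(3)
    bands = rng.uniform(0.0, 0.3, size=(43, 7))   # radiometrically calibrated band values (placeholder)
    k1 = rng.uniform(1.0, 4.0, size=(43, 1))      # AWV parameter k1 (placeholder)
    k2 = rng.uniform(-0.5, 0.5, size=(43, 1))     # AWV difference coefficient k2 (placeholder)
    tli = rng.uniform(30.0, 70.0, size=43)        # measured trophic level index (placeholder)

    candidates = {"bands only": bands,
                  "bands + k1": np.hstack([bands, k1]),
                  "bands + k2": np.hstack([bands, k2])}
    for name, X in candidates.items():
        pred = cross_val_predict(KernelRidge(kernel="rbf", alpha=1.0, gamma=0.5), X, tli, cv=5)
        print(name, "RMSE =", np.sqrt(mean_squared_error(tli, pred)))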
(This article belongs to the Special Issue Satellite Monitoring of Water Quality and Water Environment)
Graphical abstract
Figure 1
Study area and stations.
Figure 2
The workflow of this research.
Figure 3
Spectra of the thermal infrared channels of Landsat-8 and AVHRR, made with ENVI 5.3.
Figure 4
Scatter diagram of measured versus retrieved TLI based on seven single bands and atmospheric parameters.
Figure 5
Scatter diagram of measured versus retrieved TLI based on seven band ratios and atmospheric parameters.
Figure 6
Distribution of TLI-retrieved data based on single bands and atmospheric parameters.
Figure 7
Distribution of TLI-retrieved data based on band ratios and atmospheric parameters.
28 pages, 11311 KiB  
Article
A New Remote Sensing Dryness Index Based on the Near-Infrared and Red Spectral Space
by Jieyun Zhang, Qingling Zhang, Anming Bao and Yujuan Wang
Remote Sens. 2019, 11(4), 456; https://doi.org/10.3390/rs11040456 - 22 Feb 2019
Cited by 27 | Viewed by 7620
Abstract
Soil moisture, as a crucial indicator of dryness, is an important research topic for dryness monitoring. In this study, we propose a new remote sensing dryness index for measuring soil moisture from spectral space. We first established a spectral space with remote sensing [...] Read more.
Soil moisture, as a crucial indicator of dryness, is an important research topic for dryness monitoring. In this study, we propose a new remote sensing dryness index for measuring soil moisture from spectral space. We first established a spectral space with remote sensing reflectance data at the near-infrared (NIR) and red (R) bands. Considering the distribution regularities of soil moisture in this space, we formulated the Ratio Dryness Monitoring Index (RDMI) as a new dryness monitoring indicator. We compared RDMI values with in situ soil moisture content data measured at 0–10 cm depth. Results showed that there was a strong negative correlation (R = −0.89) between the RDMI values and in situ soil moisture content. We further compared RDMI with existing remote sensing dryness indices, and the results demonstrated the advantages of the RDMI. We applied the RDMI to the Landsat-8 imagery to map dryness distribution around the Fukang area on the Northern slope of the Tianshan Mountains, and to the MODIS imagery to detect the spatial and temporal changes in dryness for the entire Xinjiang in 2013 and 2014. Overall, the RDMI index constructed, based on the NIR–Red spectral space, is simple to calculate, easy to understand, and can be applied to dryness monitoring at different scales. Full article
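The index proposed above is built from the position of each pixel inside the triangular scatter of the NIR–Red spectral space (see Figures 7–9 below). Without reproducing the paper's exact geometric construction, the sketch that follows illustrates a common first step, fitting the upper and lower boundary lines of that scatter by per-bin extremes; the bin count, percentiles and array names are assumptions for illustration, and the paper's own boundary extraction and final ratio computation differ in detail.

    import numpy as np

    def fit_boundary_edges(red, nir, nbins=50):
        # Fit the upper and lower boundary lines of the NIR-Red scatter by per-bin extremes.
        red = np.asarray(red).ravel()
        nir = np.asarray(nir).ravel()
        bins = np.linspace(red.min(), red.max(), nbins + 1)
        centers, upper, lower = [], [], []
        for lo, hi in zip(bins[:-1], bins[1:]):
            sel = (red >= lo) & (red < hi)
            if sel.sum() < 10:
                continue
            centers.append(0.5 * (lo + hi))
            upper.append(np.percentile(nir[sel], 99))   # candidate upper-boundary points
            lower.append(np.percentile(nir[sel], 1))    # candidate lower-boundary points
        s_up, i_up = np.polyfit(centers, upper, 1)      # slope/intercept of the upper edge
        s_lo, i_lo = np.polyfit(centers, lower, 1)      # slope/intercept of the lower edge
        return (s_up, i_up), (s_lo, i_lo)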
(This article belongs to the Special Issue Advances in Remote Sensing-based Disaster Monitoring and Assessment)
Figure 1
Study area (Xinjiang Uygur Autonomous Region, China). The inset image shows the surrounding area of Fukang City and the locations where the fifty-one soil samples were taken.
Figure 2
The positions of the fifty-one soil moisture samples with different land cover types in the surrounding areas of Fukang City. Typical land cover types, such as desert, farmland, bare land, and wetlands, are marked with text in the map.
Figure 3
(a) The triangle distribution characteristics in the near-infrared (NIR)–Red spectral space. (b) The PVI and soil moisture (SM) iso-lines [49]. NIR and Red represent near-infrared and red band reflectance, respectively.
Figure 4
The definition of PDI; the PDI value is defined as the distance of E and F in the NIR–Red spectral space [43].
Figure 5
The sketch map of the TVDI. TVDI is established in the two-dimensional space of NDVI and LST, and the TVDI value is defined as the ratio of the distance from the pixel to the wet edge (A) to the distance between the dry and wet edges (B) [46].
Figure 6
The definition of the Temperature–Vegetation–Soil Moisture Dryness Index (TVMDI). TVMDI is based on PVI, LST, and SM as the axes of a three-dimensional space. The TVMDI value is defined as the square root of the sum of squares of PVI, LST, and SM at the pixel position [49].
Figure 7
The definition of the triangle edges. AB is the soil edge, AC is the wet edge, and BC is the dry edge. All pixels are distributed in this triangular region in the NIR–Red spectral space.
Figure 8
Development of the Ratio Dryness Monitoring Index (RDMI) based on the NIR–Red spectral space. DP and DE are the lines passing through the pixel P that are parallel to the soil edge (AB). The RDMI value is the ratio of DP and DE.
Figure 9
Flowchart for calculating the RDMI. (a) Establishment of the NIR–Red spectral space, (b) extraction of the triangle boundary point set, and (c) fitting of the triangle boundary according to the point set, and calculation of the RDMI.
Figure 10
The triangle distribution of the NIR and red band values in the NIR–Red spectral space generated with real Landsat-8 OLI data. The yellow points represent the soil pixels and the blue points represent the wet pixels.
Figure 11
(a) The relationship between the RDMI and in situ soil moisture (n = 26). (b) The correlation between the estimated soil moisture content and the in situ soil moisture content.
Figure 12
The dryness maps of the surrounding area of Fukang City using Landsat-8 imagery of 12 June 2014. (a) RDMI, (b) the false color composite image, (c) PDI, (d) MPDI, (e) TVDI, and (f) TVMDI. (All the dryness values in these maps were normalized. The spatial resolution of these maps is 30 m.)
Figure 13
A comparison of the frequency distributions of the RDMI, MPDI, PDI, TVDI, and TVMDI dryness maps.
Figure 14
The RDMI maps of the Xinjiang province and the differences in dryness spatial distributions. (a) RDMI map for 2 June 2013; (b) illustration of the Xinjiang area; (c) RDMI map for 2 June 2014; (d) the RDMI difference on the same date of different years; (e) RDMI map for 21 August 2014; (f) the RDMI difference on different dates of the same year.
Figure 15
The soil and wet edge pixels and the fitting parameters of the NIR–Red spectral spaces in Xinjiang on three different dates (2 June 2013, 2 June 2014, and 21 August 2014). Ssoil and Isoil are the slopes and intercepts of the soil edge; Swet and Iwet are the slopes and intercepts of the wet edge.
Figure 16
A comparison of the RDMI frequency distribution on different dates. (a) A comparison between 2 June 2013 and 2 June 2014; and (b) a comparison between 2 June 2014 and 21 August 2014.
Figure 17
The RDMI average values on different dates for different land cover conditions in Xinjiang.
18 pages, 8112 KiB  
Article
Crop Classification Based on a Novel Feature Filtering and Enhancement Method
by Limin Wang, Qinghan Dong, Lingbo Yang, Jianmeng Gao and Jia Liu
Remote Sens. 2019, 11(4), 455; https://doi.org/10.3390/rs11040455 - 22 Feb 2019
Cited by 25 | Viewed by 7172
Abstract
Vegetation indices, such as the normalized difference vegetation index (NDVI) or enhanced vegetation index (EVI) derived from remote sensing images, are widely used for crop classification. However, vegetation index profiles for different crops with a similar phenology lead to difficulties in discerning these [...] Read more.
Vegetation indices, such as the normalized difference vegetation index (NDVI) or enhanced vegetation index (EVI) derived from remote sensing images, are widely used for crop classification. However, vegetation index profiles for different crops with a similar phenology lead to difficulties in discerning these crops both spectrally and temporally. This paper proposes a feature filtering and enhancement (FFE) method to map soybean and maize, two major crops widely cultivated during the summer season in Northeastern China. Different vegetation indices are first calculated and the probability density functions (PDFs) of these indices for the target classes are established based on the hypothesis of normal distribution; the vegetation index images are then filtered using the PDFs to obtain enhanced index images where the pixel values of the target classes are “enhanced”. Subsequently, the minimum Gini index of each enhanced index image is computed, generating at the same time the weight for every index. A composite enhanced feature image is produced by summing all indices with their weights. Finally, a classification is made from the composite enhanced feature image by thresholding, which is derived automatically based on the samples. The performance of the proposed FFE method is compared with that of maximum likelihood classification (MLC), support vector machine (SVM), and random forest (RF) classifiers in a mapping operation to determine the soybean and maize distribution in a county in Northeastern China. The classification accuracies resulting from this comparison show that the FFE method outperforms MLC, and its accuracies are similar to those of SVM and RF, with an overall accuracy of 0.902 and a kappa coefficient of 0.846. This indicates that the FFE method is an appropriate method for classifying crops with a similar phenology. Our research also shows that when the sample size reaches a certain level (e.g., 2000), the mean and standard deviation of the sample are very close to the actual values, which leads to high classification accuracy. In a case where the condition of normal distribution is not fulfilled, the PDF of the vegetation index can be created by a lookup table. Furthermore, as the method is rather simple and explicit, and convenient in terms of computing, it can be used as the backbone for automatic crop mapping operations. Full article
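The core of the FFE method as summarized above, filtering each vegetation-index image through the target class's Gaussian PDF so that pixels resembling the target are "enhanced" and then combining the filtered images with index-specific weights, can be sketched in a few lines. In the snippet below the class statistics, weights and threshold are placeholders; the Gini-based weighting and the automatic threshold derivation of the paper are only indicated schematically.

    import numpy as np

    def enhance(index_img, mu, sigma):
        # Filter a vegetation-index image through the target class's normal PDF,
        # normalized so that pixels closest to the class mean approach 1.
        pdf = np.exp(-0.5 * ((index_img - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))
        return pdf / pdf.max()

    # Hypothetical example: two index images (e.g., NDVI on two dates) for the soybean class,
    # with sample-derived means/standard deviations and weights that are placeholders here.
    # e1 = enhance(ndvi_doy180, mu=0.62, sigma=0.05)
    # e2 = enhance(ndvi_doy219, mu=0.85, sigma=0.04)
    # composite = 0.6 * e1 + 0.4 * e2     # weights would come from the minimum Gini index
    # soybean_mask = composite > 0.5      # threshold derived automatically from samples in the paper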
(This article belongs to the Special Issue Remote Sensing in Support of Transforming Smallholder Agriculture)
Show Figures

Figure 1. Location of the study area (a) and the false color Landsat-8 image of the study area (near infrared (NIR), red, green as R, G, B) (b).
Figure 2. Flowchart of feature filtering and enhancement (FFE)-based crop classification. (a) Image processing; (b) index feature calculation; (c) sample collection; (d) classification based on FFE; (e) accuracy assessment. SVM: support vector machine; MLC: maximum likelihood classification; RF: random forest; NDVI: normalized difference vegetation index; MNDWI: modified normalized difference water index.
Figure 3. Landsat image (SWIR1/NIR/Red as R/G/B) and the distribution of sample blocks (a), the enlargement of one sample block (b), and its classification (c).
Figure 4. Illustration of feature filtering and enhancement based on the probability density function (PDF).
Figure 5. Reference classification result based on a RapidEye image. (a) RapidEye image on July 27, 2014; (b) classification result based on the RapidEye image; (c) partial magnification of the classification result. The reference classification result was obtained by the unsupervised Iterative Self-Organizing Data Analysis (ISODATA) and visual interpretation approaches.
Figure 6. Spectral curves of major classes on DOY 164 (a), 180 (b), 219 (c), and 260 (d), and time series of NDVI (e), MNDWI (f), SWIRmean (g), and VLI (h).
Figure 7. NDVI distributions and PDFs of soybean (a), maize (b), and others (c) on DOY 180.
Figure 8. The mean and standard deviations of NDVI of soybean, maize, and others at different growing stages.
Figure 9. NDVI images of different stages before (a–d) and after filtering (e–h) (soybean as the target class). Panels (a) and (e) are on DOY 164, (b) and (f) on DOY 180, (c) and (g) on DOY 219, and (d) and (h) on DOY 260.
Figure 10. Composite enhanced feature images of soybean (a,b) and maize (c,d).
Figure 11. Crop distribution results of the study area based on the FFE (a,e), MLC (b,f), SVM (c,g), and RF (d,h) approaches.
Figure 12. Accuracy assessment of each classification method. OA: overall accuracy; PA: producer's accuracy; UA: user's accuracy.
Figure 13. The same objects with different spectra. (a) Different colors of two sub-classes of maize in the Landsat image of DOY 260 (NIR, SWIR1 and red as R, G, B); and (b) the spectra of maize A and maize B.
Figure 14. Impact of sample size on the mean (a) and standard deviation (b) estimation (NDVI of soybean in the Landsat image of DOY 219) and (c) the classification accuracy. The actual values are calculated from all soybean pixels.
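The FFE idea summarized in the abstract above (class-specific PDF filtering of vegetation index images, index weights, and an automatically derived threshold) can be illustrated with a short, hypothetical sketch. The array names, sample values, equal weights, and the percentile-based threshold below are placeholders for illustration only, not the paper's actual data or parameter choices.

```python
import numpy as np

def normal_pdf(x, mean, std):
    """Gaussian probability density, used to 'enhance' pixels near the class mean."""
    return np.exp(-0.5 * ((x - mean) / std) ** 2) / (std * np.sqrt(2.0 * np.pi))

def enhance_index(index_image, class_samples):
    """Filter a vegetation-index image with the target-class PDF (FFE filtering step)."""
    mean, std = class_samples.mean(), class_samples.std()
    return normal_pdf(index_image, mean, std)

def composite_enhanced_image(index_images, sample_sets, weights):
    """Weighted sum of the enhanced index images (in the paper the weights come
    from the minimum Gini index of each enhanced image)."""
    enhanced = [w * enhance_index(img, s)
                for img, s, w in zip(index_images, sample_sets, weights)]
    return np.sum(enhanced, axis=0)

# Toy example: two index images (e.g., NDVI on two dates), soybean samples, equal weights.
rng = np.random.default_rng(0)
ndvi_d180 = rng.uniform(0.0, 0.9, size=(100, 100))
ndvi_d219 = rng.uniform(0.0, 0.9, size=(100, 100))
soy_d180 = rng.normal(0.55, 0.05, 2000)   # hypothetical soybean NDVI samples
soy_d219 = rng.normal(0.75, 0.04, 2000)
composite = composite_enhanced_image([ndvi_d180, ndvi_d219],
                                     [soy_d180, soy_d219],
                                     weights=[0.5, 0.5])
threshold = np.percentile(composite, 90)   # stand-in for the sample-derived threshold
soybean_mask = composite >= threshold
print(soybean_mask.sum(), "pixels mapped as the target class")
```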
16 pages, 3896 KiB  
Article
Multi-GNSS Relative Positioning with Fixed Inter-System Ambiguity
by Hua Chen, Weiping Jiang and Jiancheng Li
Remote Sens. 2019, 11(4), 454; https://doi.org/10.3390/rs11040454 - 22 Feb 2019
Cited by 15 | Viewed by 3975
Abstract
In multi-GNSS cases, two types of Double Difference (DD) ambiguity can be formed: an intra-system ambiguity and an inter-system ambiguity, defined as the DD ambiguity between satellites from the same GNSS system and from different GNSS systems, respectively. We studied relative positioning methods using intra-system DD observations and using Un-Differenced (UD) observations, and developed a frequency-free approach for fixing the inter-system ambiguity based on UD observations for multi-GNSS positioning, where the inter-system phase bias is calculated with the help of a fixed Single-Difference (SD) ambiguity. The consistency between the receiver-end uncalibrated phase delays (RUPD) and the SD ambiguity was investigated and the positioning performance of this new approach was assessed. The results show that the RUPD can be modeled as a constant if the receiver tracks satellites continuously. Furthermore, compared to the method using DD observations with only the intra-system DD ambiguity fixed, the new ambiguity fixing approach has better performance, especially in harsh environments with a large cut-off angle or severe signal obstructions. Full article
Show Figures

Figure 1. The procedure of intra- and inter-system ambiguity fixing in the actual data processing.
Figure 2. The selected baselines. GPS and Galileo are observed for each baseline.
Figure 3. STD and number of ambiguities used for WLRUPD.
Figure 4. The STD and number of ambiguities used for NLRUPD.
Figure 5. Observation time of all satellites for the baselines DUND-MQZG and CEDU-MOBS on day 063, 2017.
Figure 6. Residuals of the SD ambiguity after removing the RUPD and the integer part, with the intra-system DD ambiguities fixed.
Figure 7. Residuals of the SD ambiguity after removing the RUPD and the integer part, without the intra-system DD ambiguities fixed.
Figure 8. Inter-system phase bias derived for GPS and Galileo L1-E1 for KIR8 and KIRU.
Figure 9. Inter-system phase bias derived for GPS and Galileo L1/L2-E1/E5a for KIR8 and KIRU.
Figure 10. Simulated obstructions. The shaded part shows that satellites with an elevation under 70 degrees, whose azimuths belong to [210°, 330°], are blocked.
Figure 11. Simulated obstructions. The shaded part shows that satellites with an elevation above 70 degrees, whose azimuths belong to [120°, 240°], are blocked.
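To make the distinction between intra- and inter-system double differences concrete, here is a minimal, purely illustrative sketch. The receiver names, satellite IDs, and observation values are invented, biases and frequencies are ignored, and this is not the paper's frequency-free fixing approach itself, only the differencing step it builds on.

```python
# Minimal sketch of forming single- and double-difference quantities from
# un-differenced (UD) carrier-phase observations of two receivers.
# Satellite IDs starting with 'G' are GPS and with 'E' are Galileo; all values
# are invented and expressed in cycles.

ud_rx1 = {"G01": 11234567.21, "G05": 10987654.83, "E02": 11567890.44}
ud_rx2 = {"G01": 11234561.96, "G05": 10987649.41, "E02": 11567884.07}

def single_difference(sat):
    """Between-receiver single difference (SD) for one satellite."""
    return ud_rx1[sat] - ud_rx2[sat]

def double_difference(ref_sat, sat):
    """Between-satellite difference of two SDs. It is an intra-system DD if both
    satellites belong to the same constellation, an inter-system DD otherwise."""
    kind = "intra-system" if ref_sat[0] == sat[0] else "inter-system"
    return single_difference(sat) - single_difference(ref_sat), kind

for ref, sat in [("G01", "G05"), ("G01", "E02")]:
    dd, kind = double_difference(ref, sat)
    print(f"DD({ref}, {sat}) = {dd:.3f} cycles ({kind})")
```

Before an inter-system DD ambiguity could be fixed to an integer, the inter-system phase bias (which the paper estimates with the help of a fixed SD ambiguity) would still have to be removed.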
12 pages, 6680 KiB  
Article
A Set of Satellite-Based Near Real-Time Meteorological Drought Monitoring Data over China
by Xuejun Zhang, Zhicheng Su, Juan Lv, Weiwei Liu, Miaomiao Ma, Jian Peng and Guoyong Leng
Remote Sens. 2019, 11(4), 453; https://doi.org/10.3390/rs11040453 - 22 Feb 2019
Cited by 11 | Viewed by 4897
Abstract
A high-resolution and near real-time drought monitoring dataset has not been made readily available in drought-prone China, except for the low-resolution global product. Here we developed a set of near real-time meteorological drought data at a 0.25° spatial resolution over China, by seamlessly merging the satellite-based near real-time (RT) precipitation (3B42RTv7) into the high-quality gauge-based retrospective product (CN05.1) using the quantile-mapping (QM) bias-adjustment method. Comparing the standard precipitation index (SPI) from the satellite-gauge merged product (SGMP) with that from the retrospective ground product CN05.1 (OBS) shows that the SGMP reproduces well the observed spatial distribution of SPI and the pattern of meteorological drought across China, at both the 6-month and 12-month time scales. In contrast, the UN-SGMP, generated by merging the unadjusted raw satellite precipitation into the gauge data, shows systematic overestimation of the SPI, leaving fewer meteorological droughts to be identified. Furthermore, the SGMP is found to be able to capture the inter-annual variation of the percentage area in meteorological drought. These validation results suggest that the newly developed drought dataset is reliable for monitoring meteorological drought dynamics in near real-time. This dataset will be routinely updated as the satellite RT precipitation is made available, thus facilitating near real-time drought diagnosis in China. Full article
(This article belongs to the Special Issue Observations, Modeling, and Impacts of Climate Extremes)
Show Figures

Graphical abstract.
Figure 1. The 10 major hydrological zones of China, denoted by a unique number: 1. Songhua River; 2. Liao River; 3. Hai River; 4. Yellow River; 5. Huai River; 6. Yangtze River; 7. Pearl River; 8. Southeast region; 9. Southwest region; and 10. Northwest region.
Figure 2. Monthly mean precipitation from (a) the raw satellite-based 3B42RTV7 data and (b) the gauge-based CN05.1 data, as well as (c) the difference between the two products over the overlapping period (2001–2016). (1–10) The basins/regions shown in Figure 1.
Figure 3. Comparisons of monthly precipitation from the 3B42RTV7 before adjustment (blue dashed line), 3B42RTV7 after adjustment (red line), and the gauge-based CN05.1 data (black line) during the overlapping period (2001–2016) for the 10 hydrological zones shown in Figure 1.
Figure 4. Comparisons of monthly precipitation and the 12-month SPI (SPI12) from the UN-SGMP (blue dashed line), SGMP (red line), and OBS data (black line) during the overlapping period (2001–2016) in three example grid cells.
Figure 5. Six-month SPI (SPI6) derived from the OBS data (CN05.1; left panels), the UN-SGMP data (unadjusted 3B42RTV7–CN05.1 merged data; middle panels), and the SGMP data (adjusted 3B42RTV7–CN05.1 merged data; right panels) for January (a–c) and August 2010 (d–f) and for January (g–i) and May 2012 (j–l).
Figure 6. 12-month SPI (SPI12) derived from the gauge-based CN05.1 data (left panels), the 3B42RTV7–CN05.1 merged data (middle panels), and the adjusted 3B42RTV7–CN05.1 merged data (right panels) for January (a–c) and August 2010 (d–f) and for January (g–i) and May 2012 (j–l).
Figure 7. The 10-y percentage area in meteorological drought identified from the (a) 6-month SPI (SPI6) and (b) 12-month SPI (SPI12) based on the UN-SGMP (blue dashed line), SGMP (red line), and OBS (black line) data.
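The quantile-mapping (QM) bias adjustment named in the abstract above can be sketched with an empirical CDF-matching routine. The sketch below is a generic illustration under simplified assumptions (one grid cell, one calendar month, synthetic gamma-distributed precipitation); the function and variable names are hypothetical and the paper's exact QM implementation may differ.

```python
import numpy as np

def quantile_map(raw_rt, rt_climatology, gauge_climatology):
    """Empirical quantile-mapping bias adjustment: find each raw real-time value's
    quantile within the satellite climatology and replace it with the gauge value
    at the same quantile."""
    rt_sorted = np.sort(rt_climatology)
    gauge_sorted = np.sort(gauge_climatology)
    # quantile (empirical CDF value) of each raw value within the satellite climatology
    q = np.searchsorted(rt_sorted, raw_rt, side="right") / len(rt_sorted)
    q = np.clip(q, 0.0, 1.0)
    return np.quantile(gauge_sorted, q)

# Toy example for one grid cell and one calendar month:
rng = np.random.default_rng(1)
rt_hist = rng.gamma(shape=1.5, scale=40.0, size=192)     # 3B42RT-like monthly totals (mm)
gauge_hist = rng.gamma(shape=2.0, scale=30.0, size=192)  # CN05.1-like monthly totals (mm)
rt_new = np.array([10.0, 60.0, 150.0])                   # new real-time values to adjust
print(quantile_map(rt_new, rt_hist, gauge_hist))
```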
38 pages, 10532 KiB  
Article
Glacier Facies Mapping Using a Machine-Learning Algorithm: The Parlung Zangbo Basin Case Study
by Jingxiao Zhang, Li Jia, Massimo Menenti and Guangcheng Hu
Remote Sens. 2019, 11(4), 452; https://doi.org/10.3390/rs11040452 - 22 Feb 2019
Cited by 59 | Viewed by 9228
Abstract
Glaciers in the Tibetan Plateau are an important indicator of climate change. Automatic glacier facies mapping utilizing remote sensing data is challenging due to the spectral similarity of supraglacial debris and the adjacent bedrock. Most of the available glacier datasets do not provide the boundary between clean ice and debris-covered glacier facies, while debris-covered glacier facies play a key role in mass balance research. The aim of this study was to develop an automatic algorithm to distinguish ice cover types based on multi-temporal satellite data, and the algorithm was implemented in a subregion of the Parlung Zangbo basin in the southeastern Tibetan Plateau. The classification method was built upon an automated machine learning approach: Random Forest in combination with the analysis of topographic and textural features based on Landsat-8 imagery and multiple digital elevation model (DEM) data. Very high spatial resolution Gao Fen-1 (GF-1) Panchromatic and Multi-Spectral (PMS) imagery was used to select training samples and validate the classification results. In this study, all of the land cover types were classified with overall good performance using the proposed method. The results indicated that fully debris-covered glaciers accounted for approximately 20.7% of the total glacier area in this region and were mainly distributed at elevations between 4600 m and 4800 m above sea level (a.s.l.). Additionally, an analysis of the results clearly revealed that 88.3% of small glaciers (<1 km2) were distributed at lower elevations compared to larger glaciers (≥1 km2). In addition, the majority of glaciers (in terms of both glacier number and area) were characterized by a mean slope ranging between 20° and 30°, and 42.1% of glaciers had a northeast or north orientation in the Parlung Zangbo basin. Full article
(This article belongs to the Special Issue Remote Sensing of Glaciers at Global and Regional Scales)
Show Figures

Graphical abstract.
Figure 1. The study area located in the Parlung Zangbo basin and the Landsat-8 Operational Land Imager (OLI) image acquired on 6 October 2015 (a false color composite with band combination R = shortwave infrared (band 7), G = near-infrared (band 5), and B = green (band 3)).
Figure 2. The flowchart of the automatic glacier facies mapping methodology. DEM, digital elevation model; NDWI, Normalized Difference Water Index; MID, mixed ice and debris; SGD, supraglacial debris.
Figure 3. An example of different land cover classes where training samples were selected based on visual interpretation of the Landsat image and the GF-1 image. (a) A false color composite image with a band combination of 5/4/3 (R/G/B) of Landsat-8 OLI data on 6 October 2015; (b) a false color composite image with a band combination of 4/3/2 (R/G/B) of the fused GF-1 Panchromatic and Multi-Spectral (PMS) data on 2 August 2015; (c–f) close-up details of the pink rectangles in (a) and (b). The letters in yellow indicate SI = snow-ice, MID = mixed ice and debris, SGD = supraglacial debris, BL = bare land, W = water bodies, V = vegetation, S = shadowed regions, and OL = other land cover.
Figure 4. The surface reflectance from the Landsat-8 OLI bands for all of the selected land cover samples of the 10 major land cover types.
Figure 5. An example of the land surface temperature (LST) of different land cover types at the Yanong glacier of the Parlung Zangbo basin with one transect (from point a to point b, in red) across the Yanong glacier and its surroundings: (a) a false color composite image with a band combination of 7/5/3 (R/G/B) of Landsat-8 OLI data on 6 October 2015; (b) a Landsat-8 LST image; and (c) statistics of land surface temperature across the transect (direction from NW to SE). The letters indicate MID = mixed ice and debris, SGD = supraglacial debris, and Land = bare land. The dashed line in green highlights LST = 273.15 K.
Figure 6. The conceptual workflow of the Random Forest (RF) classifier.
Figure 7. (a) The normalized feature importance for the whole 10 land cover classes in the RF classification. 1–6: Landsat-8 OLI surface reflectance (Blue, Green, Red, NIR, SWIR1, and SWIR2 bands); 7: land surface temperature; 8–10: NDSI, NDWI, and NDVI; 11–22: 12 DEM-derived features (elevation, slope, aspect, shaded relief, profile convexity, plan convexity, longitudinal convexity, cross-sectional convexity, minimum curvature, maximum curvature, root-mean-square error, and absolute elevation change); 23–70: eight textural features of each OLI band (average, variance, homogeneity, contrast, dissimilarity, entropy, second moment, and correlation). (b) Normalized feature importance for the eight textural features of each OLI band in the RF classification.
Figure 8. The normalized feature importance for four glacier classes, i.e., (a) snow-ice, (b) mixed ice and debris, (c) supraglacial debris, and (d) shadowed ice in the RF classification (features 1–70 as defined in Figure 7).
Figure 9. The preliminary classification result of Landsat data (6 October 2015) using the RF algorithm.
Figure 10. The out-of-bag (OOB) error rate plot. The dashed line in red represents the accepted error rate threshold.
Figure 11. Examples of correctly classified and misclassified areas in the preliminary classification result. (a,c) A false color composite image acquired on 6 October 2015 (band 7-SWIR, band 5-NIR, and band 3-Green for R/G/B); (b,d) the land cover map.
Figure 12. Examples of the classification results before and after overlaying. (a,c) A false color composite image (band 7-SWIR, band 5-NIR, and band 3-Green for R/G/B); (b,d) the land cover map using one image; (e) classification results after overlaying (without post-processing). The date of the image in (a) is 6 October 2015; the date of the image in (c) is 18 July 2015.
Figure 13. The final classification result after post-processing based on multi-temporal Landsat images.
Figure 14. The distribution of (a) glacier number, glacier area, and mean altitude for different size classes; (b) glacier number and glacier area for different mean slopes; and (c) glacier number and glacier area for various aspects of the study area.
Figure 15. (a) The distribution of glacier elevation (background: a false color composite image with a band combination of 7/5/3 (R/G/B) of the Landsat-8 OLI image acquired on 6 October 2015); (b) hypsometry of all glaciers in the study area.
Figure 16. A comparison of the RF classification results (black lines), the Southeastern Qinghai–Tibet Plateau Glacier Inventory (SEQTPGI, red lines), and the second Chinese Glacier Inventory (CGI2, yellow lines). (a) A false color composite image acquired on 6 October 2015 (band 7-SWIR, band 5-NIR, and band 3-Green for R/G/B); (b–d) glacier outlines of the different datasets with the Landsat-8 OLI image (6 October 2015) as a background.
Figure 17. A comparison of the RF classification results with (black lines) or without (pink lines) elevation change information, SEQTPGI (red lines) and CGI2 (yellow lines). (a) A false color composite image acquired on 6 October 2015 (band 7-SWIR, band 5-NIR, and band 3-Green for R/G/B); (b) LST map; and (c) elevation change map.
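As a rough sketch of the Random Forest workflow described above (stacking spectral, thermal, index, topographic, and textural features and classifying each pixel), the following uses scikit-learn with a simulated feature stack. The feature counts, class labels, and data are placeholders; the paper's actual implementation, features, and tuning are not reproduced here.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Hypothetical per-pixel feature stack: 6 OLI reflectance bands, LST, 3 spectral
# indices, 12 DEM-derived and 48 textural features would give 70 columns as in the
# paper's feature list; here we simply simulate a generic 70-column stack.
rng = np.random.default_rng(42)
n_train, n_features = 3000, 70
X_train = rng.normal(size=(n_train, n_features))
# 10 land cover classes (snow-ice, mixed ice and debris, supraglacial debris, ...)
y_train = rng.integers(0, 10, size=n_train)

rf = RandomForestClassifier(n_estimators=500, oob_score=True, random_state=0)
rf.fit(X_train, y_train)
print("OOB score:", round(rf.oob_score_, 3))

# Normalized feature importance, analogous to the per-feature plots in the paper.
importance = rf.feature_importances_ / rf.feature_importances_.max()
print("top-5 features:", np.argsort(importance)[::-1][:5])

# Classify a (rows, cols, n_features) image tile pixel by pixel.
tile = rng.normal(size=(50, 50, n_features))
labels = rf.predict(tile.reshape(-1, n_features)).reshape(50, 50)
print("classified tile shape:", labels.shape)
```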
20 pages, 7982 KiB  
Article
Multi-Feature Based Ocean Oil Spill Detection for Polarimetric SAR Data Using Random Forest and the Self-Similarity Parameter
by Shengwu Tong, Xiuguo Liu, Qihao Chen, Zhengjia Zhang and Guangqi Xie
Remote Sens. 2019, 11(4), 451; https://doi.org/10.3390/rs11040451 - 22 Feb 2019
Cited by 60 | Viewed by 6385
Abstract
Synthetic aperture radar (SAR) is an important means to detect ocean oil spills, which cause serious damage to the marine ecosystem. However, look-alikes, which have a similar behavior to oil slicks in SAR images, reduce the oil spill detection accuracy. Therefore, a novel oil spill detection method based on multiple features of polarimetric SAR data is proposed in this paper to improve the detection accuracy. In this method, the self-similarity parameter, which is sensitive to the randomness of the scattering target, is introduced to enhance the ability to discriminate between oil slicks and look-alikes. The proposed method uses Random Forest classification, combining the self-similarity parameter with seven well-known features, to improve oil spill detection accuracy. Evaluations and comparisons were conducted with Radarsat-2 and UAVSAR polarimetric SAR datasets, which show that: (1) the oil spill detection accuracy of the proposed method reaches 92.99% and 82.25% in the two datasets, respectively, which is higher than three well-known methods; (2) compared with the other seven polarimetric features, the self-similarity parameter has better oil spill detection capability in scenes with lower wind speeds close to 2–3 m/s, while, when the wind speed is close to 9–12 m/s, it is more suitable for oil spill detection in downwind scenes where the microwave incidence direction is similar to the sea surface wind direction, and it performs well in scenes with incidence angles ranging from 29.7° to 43.5°. Full article
(This article belongs to the Special Issue SAR in Big Data Era)
Show Figures

Graphical abstract.
Figure 1. The flow chart of the proposed method.
Figure 2. The value of features in the oil slicks, look-alikes, and water. (a) Three eigenvalues; (b) self-similarity parameter.
Figure 3. Pauli color-coded image of the Radarsat-2 polarimetric SAR image as the first dataset.
Figure 4. Pauli color-coded image of the UAVSAR polarimetric SAR image in the second dataset.
Figure 5. The probability density function (PDF) of eight polarimetric feature values within selected regions of Radarsat-2 images. The features V and Rco are log transformed for visualization purposes; (a) DoP; (b) μ; (c) logRco; (d) η; (e) Ph; (f) logV; (g) A12; and (h) rrr_s.
Figure 6. K-means classifications of the Radarsat-2 image into three classes. (a) DoP; (b) μ; (c) logRco; (d) η; (e) Ph; (f) logV; (g) A12; and (h) rrr_s.
Figure 7. Jeffreys–Matusita (J–M) distances for eight polarimetric features in a Radarsat-2 image.
Figure 8. The Random Forest classification results. (a) Radarsat-2 image; and (b) UAVSAR image.
Figure 9. The oil spill detection results of four different methods in the Radarsat-2 polarimetric SAR image. (a) FOS; (b) GRK; (c) EBM; and (d) SRF.
Figure 10. The oil spill detection results of four different methods in the UAVSAR polarimetric SAR image. (a) FOS; (b) GRK; (c) EBM; and (d) SRF.
Figure 11. Contributions of eight features in the RF classification.
Figure 12. Signal-to-noise analyses of each channel of the two sensors; the vertical bars show the mean and standard deviation of the backscatter values σ_0 in the regions indicated by the samples in Figures 2 and 3. (a) HH channel (Radarsat-2); (b) VV channel (Radarsat-2); (c) VH channel (Radarsat-2); (d) HH channel (UAVSAR); (e) VV channel (UAVSAR); (f) VH channel (UAVSAR).
Figure 13. The J–M distances between the oil slicks and the look-alikes of the polarimetric features in different wind direction scenes. E40, E60, and E80 represent the oil slicks with oil volumetric fractions of 40%, 60%, and 80%, respectively; VO represents the vegetable oil; the suffix D represents downwind and U represents upwind.
Figure 14. The J–M distances between the oil slicks and the look-alikes of the polarimetric features in different incidence angle scenes. The suffix H indicates the higher incidence angle scene (the incidence angle of the oil slick areas ranges from 39.6° to 43.5°), and L indicates the lower incidence angle scene (29.7° to 34.4°).
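The Jeffreys–Matusita (J–M) distance used above to rank feature separability can be written down compactly. The sketch below assumes each class is approximately Gaussian for a single scalar feature and uses synthetic samples; it illustrates the metric only, not the paper's feature extraction.

```python
import numpy as np

def jm_distance(x_class1, x_class2):
    """Jeffreys-Matusita distance between two classes for one scalar feature,
    computed through the Bhattacharyya distance under a Gaussian assumption."""
    m1, m2 = x_class1.mean(), x_class2.mean()
    v1, v2 = x_class1.var(), x_class2.var()
    b = 0.125 * (m1 - m2) ** 2 / ((v1 + v2) / 2.0) \
        + 0.5 * np.log(((v1 + v2) / 2.0) / np.sqrt(v1 * v2))
    return 2.0 * (1.0 - np.exp(-b))   # ranges from 0 (identical) to 2 (fully separable)

# Toy example: a polarimetric feature sampled over oil-slick and look-alike regions.
rng = np.random.default_rng(7)
oil = rng.normal(0.25, 0.05, 5000)        # hypothetical feature values over oil
lookalike = rng.normal(0.40, 0.07, 5000)  # hypothetical feature values over look-alikes
print("J-M distance:", round(jm_distance(oil, lookalike), 3))
```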
16 pages, 4039 KiB  
Article
The Influence of Spectral Pretreatment on the Selection of Representative Calibration Samples for Soil Organic Matter Estimation Using Vis-NIR Reflectance Spectroscopy
by Yi Liu, Yaolin Liu, Yiyun Chen, Yang Zhang, Tiezhu Shi, Junjie Wang, Yongsheng Hong, Teng Fei and Yang Zhang
Remote Sens. 2019, 11(4), 450; https://doi.org/10.3390/rs11040450 - 21 Feb 2019
Cited by 67 | Viewed by 5528
Abstract
In constructing models for predicting soil organic matter (SOM) by using visible and near-infrared (vis–NIR) spectroscopy, the selection of representative calibration samples is decisive. Few researchers have studied the inclusion of spectral pretreatments in the sample selection strategy. We collected 108 soil samples and applied six commonly used spectral pretreatments to preprocess soil spectra, namely, Savitzky–Golay (SG) smoothing, first derivative (FD), logarithmic function log(1/R), mean centering (MC), standard normal variate (SNV), and multiplicative scatter correction (MSC). Then, the Kennard–Stone (KS) strategy was used to select calibration samples based on the pretreated spectra, and the size of the calibration set varied from 10 samples to 86 samples (80% of the total samples). These calibration sets were employed to construct partial least squares regression models (PLSR) to predict SOM, and the built models were validated by a set of 21 samples (20% of the total samples). The results showed that 64−78% of the calibration sets selected by the inclusion of pretreatment demonstrated significantly better performance of SOM estimation. The average improved residual predictive deviations (ΔRPD) were 0.06, 0.13, 0.19, and 0.13 for FD, log(1/R), MSC, and SNV, respectively. Thus, we concluded that spectral pretreatment improves the sample selection strategy, and the degree of its influence varies with the size of the calibration set and the type of pretreatment. Full article
(This article belongs to the Section Remote Sensing in Agriculture and Vegetation)
Show Figures

Graphical abstract.
Figure 1. Maps showing the location of the sampled region, the positions of the sampling sites, and the landscapes, as indicated by a Landsat 7 enhanced thematic mapper plus (ETM+) scan line corrector off (SLC-off) image with a composition of bands 4 (red), 3 (green), and 2 (blue).
Figure 2. The spectral reflectance of soil samples (n = 106). The principal positions of spectral absorption by organics and water are highlighted.
Figure 3. The influence of pretreatment (hollow squares) on sample selection. None (black circles) denotes the result of sample selection without pretreatment. SG denotes Savitzky–Golay smoothing (a). FD denotes first derivative (b). MC denotes mean centering (c). log(1/R) denotes logarithmic function (d). MSC denotes multiplicative scatter correction (e). SNV denotes standard normal variate (f). RPD denotes residual predictive deviation.
Figure 4. The proportion of calibration sets for which pretreatment influenced sample selection positively (dark gray bars) or negatively (light gray bars), and the average ΔRPD (black bars). SG denotes Savitzky–Golay smoothing. FD denotes first derivative. MC denotes mean centering. log(1/R) denotes logarithmic function. MSC denotes multiplicative scatter correction. SNV denotes standard normal variate.
Figure 5. A boxplot of the RPD of the partial least squares regression (PLSR) model after including pretreatment in sample selection. FD denotes first derivative. log(1/R) denotes logarithmic function. MSC denotes multiplicative scatter correction. SNV denotes standard normal variate.
Figure 6. The Euclidean distance among samples of raw spectra (a) and the change in Euclidean distance after the spectra were pretreated by first derivative (FD) (b), logarithmic function (log(1/R)) (c), multiplicative scatter correction (MSC) (d), and standard normal variate (SNV) (e). All the calibration samples are sorted in ascending order according to SOM content and then numbered #1, #2, …, #85.
Figure 7. A subset of 14 samples selected based on raw and pretreated spectra. The gray ellipse shows the major difference in sample selection between raw and pretreated spectra.
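A minimal sketch of the two building blocks discussed above, one spectral pretreatment (SNV) followed by Kennard–Stone calibration sample selection, is given below. The synthetic "spectra", the chosen subset size, and the helper names are illustrative assumptions; the paper additionally evaluates SG, FD, MC, log(1/R), and MSC pretreatments and builds PLSR models on the selected sets.

```python
import numpy as np

def snv(spectra):
    """Standard normal variate: center and scale each spectrum individually."""
    mean = spectra.mean(axis=1, keepdims=True)
    std = spectra.std(axis=1, keepdims=True)
    return (spectra - mean) / std

def kennard_stone(X, n_select):
    """Kennard-Stone: start with the two most distant samples, then repeatedly add
    the sample whose nearest selected neighbour is farthest away."""
    dist = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    i, j = np.unravel_index(np.argmax(dist), dist.shape)
    selected = [int(i), int(j)]
    remaining = set(range(len(X))) - set(selected)
    while len(selected) < n_select:
        rem = np.array(sorted(remaining))
        min_to_selected = dist[rem][:, selected].min(axis=1)
        nxt = int(rem[np.argmax(min_to_selected)])
        selected.append(nxt)
        remaining.remove(nxt)
    return selected

# Toy example: 108 'soil spectra' with 200 bands, pretreat with SNV, select 86.
rng = np.random.default_rng(3)
spectra = rng.normal(size=(108, 200)).cumsum(axis=1)  # smooth-ish fake spectra
calibration_idx = kennard_stone(snv(spectra), n_select=86)
print(len(calibration_idx), "calibration samples selected")
```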
13 pages, 4892 KiB  
Article
Mapping Winter Wheat Planting Area and Monitoring Its Phenology Using Sentinel-1 Backscatter Time Series
by Yang Song and Jing Wang
Remote Sens. 2019, 11(4), 449; https://doi.org/10.3390/rs11040449 - 21 Feb 2019
Cited by 69 | Viewed by 7339
Abstract
Crop planting area mapping and phenology monitoring are of great importance to analyzing the impacts of climate change on agricultural production. In this study, crop planting area and phenology were identified based on Sentinel-1 backscatter time series in a test region of the North China Plain, East Asia, which has a stable cropping pattern and similar phenological stages across the region. Ground phenological observations acquired from a typical agro-meteorological station were used as a priori knowledge. A parallelepiped classifier processed VH (vertical transmitting, horizontal receiving) and VV (vertical transmitting, vertical receiving) backscatter signals in order to map the winter wheat planting area. An accuracy assessment showed that the total classification accuracy reached 84% and the Kappa coefficient was 0.77. Both the difference (σ_d) between VH and VV and its slope were compared with the a priori knowledge and then used to extract the phenological metrics. Our findings from the analysis of the time series showed that the seedling, tillering, overwintering, jointing, and heading of winter wheat may be closely related to σ_d and its slope. Overall, this study presents a generalizable methodology for mapping the winter wheat planting area and monitoring phenology using Sentinel-1 backscatter time series, especially in areas lacking optical remote sensing data. Our results suggest that the main change in Sentinel-1 backscatter is dominated by the vegetation canopy structure, which differs from the established methods using optical remote sensing data, and that it can be used to extract phenological metrics. Full article
(This article belongs to the Special Issue Time Series Analysis Based on SAR Images)
Show Figures

Figure 1. Location of the test region and an agro-meteorological station in the North China Plain, East Asia.
Figure 2. Time series of daily precipitation (blue) and average temperature (red) in the test region during the growth stage of winter wheat from DOY 283 to DOY 169 (1 October 2016 to 6 June 2017).
Figure 3. The parallelepiped classifier in the 3D feature space. Each axis (a, b, c) represents band values of the remote sensing image. The boxes represent the classes, which we defined as the decision boundaries using the training regions of interest (ROIs).
Figure 4. Temporal backscatter profiles of different land cover types (winter wheat, urban, other crops, water, and bare soil). The results showed the curve changes of σ_vv (left) and σ_vh (right) selected from ROIs during the growth stages of winter wheat from 9 October 2016 to 30 June 2017.
Figure 5. Extraction of winter wheat cropland in the test region by the multi-temporal stacked Sentinel-1A bands using a parallelepiped classifier. The total accuracy of the classification was 84% and the Kappa coefficient was 0.77.
Figure 6. Smoothed Sentinel-1 backscatter time series of ROIs using the Harmonic Analysis of Time Series (HANTS) algorithm for winter wheat pixels in the test region of the North China Plain.
Figure 7. (a) The σ_d (VH−VV) time series from 9 October 2016 (DOY 283) to 30 June 2017 (DOY 169). (b) The change in the slope of the σ_d time series.
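The parallelepiped classifier illustrated in Figure 3 assigns a pixel to a class when its value in every band falls inside a box of per-band bounds learned from training ROIs. Below is a generic, hypothetical sketch of that decision rule on a stacked VH/VV time series; the bound rule (mean ± k standard deviations), class IDs, and synthetic data are assumptions, not the paper's exact settings.

```python
import numpy as np

def fit_parallelepiped(train_pixels, k=1.5):
    """Decision box per class: mean +/- k standard deviations in every band,
    estimated from training ROI pixels (n_pixels x n_bands)."""
    mean, std = train_pixels.mean(axis=0), train_pixels.std(axis=0)
    return mean - k * std, mean + k * std

def classify(image, boxes, nodata=-1):
    """Assign each pixel to the first class whose box contains it in all bands."""
    rows, cols, bands = image.shape
    flat = image.reshape(-1, bands)
    labels = np.full(flat.shape[0], nodata, dtype=int)
    for class_id, (low, high) in boxes.items():
        inside = np.all((flat >= low) & (flat <= high), axis=1) & (labels == nodata)
        labels[inside] = class_id
    return labels.reshape(rows, cols)

# Toy example: a stack of VH and VV backscatter (dB) for several acquisition dates.
rng = np.random.default_rng(5)
stack = rng.normal(-15.0, 4.0, size=(80, 80, 10))          # 5 dates x (VH, VV)
wheat_roi = rng.normal(-13.0, 1.0, size=(300, 10))         # hypothetical wheat ROI pixels
other_roi = rng.normal(-18.0, 1.0, size=(300, 10))
boxes = {1: fit_parallelepiped(wheat_roi), 2: fit_parallelepiped(other_roi)}
label_map = classify(stack, boxes)
print("winter wheat pixels:", int((label_map == 1).sum()))
```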
21 pages, 5503 KiB  
Article
Research on Resource Allocation Method of Space Information Networks Based on Deep Reinforcement Learning
by Xiangli Meng, Lingda Wu and Shaobo Yu
Remote Sens. 2019, 11(4), 448; https://doi.org/10.3390/rs11040448 - 21 Feb 2019
Cited by 11 | Viewed by 4923
Abstract
Space information networks (SIN) are strongly heterogeneous, contain multiple types of resources, and are difficult to manage. To address the resource allocation problem in SIN, this paper first establishes a hierarchical, domain-controlled SIN architecture based on software-defined networking (SDN). On this basis, the transmission, caching, and computing resources of the whole network are managed uniformly. The Asynchronous Advantage Actor-Critic (A3C) algorithm from deep reinforcement learning is introduced to model the resource allocation process. The simulation results show that the proposed scheme can effectively improve the expected benefit of unit resources and the resource utilization efficiency of the SIN. Full article
Show Figures

Figure 1. Overall networking architecture of the hierarchical and domain-controlled space information network (SIN) architecture.
Figure 2. Network control architecture of the hierarchical and domain-controlled SIN architecture.
Figure 3. Geometric diagram of a low Earth orbit (LEO) satellite and a user.
Figure 4. Satellite channel model diagram.
Figure 5. Framework of the Asynchronous Advantage Actor-Critic (A3C) algorithm based on the SIN.
Figure 6. Convergence performance under different schemes.
Figure 7. Expected benefits of unit resources under different elevation angles.
Figure 8. Expected benefits of unit resources under different task content.
Figure 9. The relationship between the unit charging price for using transmission resources and the expected benefit of unit resources.
Figure 10. The relationship between the unit charging price for using caching resources and the expected benefit of unit resources.
Figure 11. The relationship between the unit charging price for using computing resources and the expected benefit of unit resources.
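For readers unfamiliar with A3C, the quantities each worker computes from a rollout (n-step returns, advantages, and the actor and critic loss terms) can be illustrated generically. The sketch below is not the paper's SIN resource-allocation model: the rewards, value estimates, action probabilities, and discount factor are invented, and the entropy regularization term and the asynchronous parameter updates are omitted.

```python
import numpy as np

def n_step_returns(rewards, bootstrap_value, gamma=0.99):
    """Discounted n-step returns R_t computed backwards from a bootstrap value,
    as an A3C worker does for the rollout it has collected."""
    returns = np.zeros(len(rewards))
    running = bootstrap_value
    for t in reversed(range(len(rewards))):
        running = rewards[t] + gamma * running
        returns[t] = running
    return returns

# Hypothetical rollout of one worker: rewards could reflect the expected benefit of
# the unit resources allocated at each step; values come from the critic network.
rewards = np.array([0.4, 0.1, 0.7, 0.3])
values = np.array([0.9, 0.8, 1.0, 0.6])               # critic estimates V(s_t)
log_probs = np.log(np.array([0.5, 0.3, 0.6, 0.4]))    # log pi(a_t | s_t) of chosen actions
bootstrap = 0.5                                       # V(s_T) for the truncated rollout

returns = n_step_returns(rewards, bootstrap, gamma=0.99)
advantages = returns - values                         # A_t = R_t - V(s_t)
policy_loss = -(log_probs * advantages).sum()         # actor term
value_loss = 0.5 * (advantages ** 2).sum()            # critic term
print("advantages:", np.round(advantages, 3))
print("policy loss:", round(policy_loss, 3), "value loss:", round(value_loss, 3))
```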
19 pages, 7598 KiB  
Article
Landsat 4, 5 and 7 (1982 to 2017) Analysis Ready Data (ARD) Observation Coverage over the Conterminous United States and Implications for Terrestrial Monitoring
by Alexey V. Egorov, David P. Roy, Hankui K. Zhang, Zhongbin Li, Lin Yan and Haiyan Huang
Remote Sens. 2019, 11(4), 447; https://doi.org/10.3390/rs11040447 - 21 Feb 2019
Cited by 51 | Viewed by 8138
Abstract
The Landsat Analysis Ready Data (ARD) are designed to make the U.S. Landsat archive straightforward to use. In this paper, the availability of the Landsat 4 and 5 Thematic Mapper (TM) and Landsat 7 Enhanced Thematic Mapper Plus (ETM+) ARD over the conterminous United States (CONUS) are quantified for a 36-year period (1 January 1982 to 31 December 2017). Complex patterns of ARD availability occur due to the satellite orbit and sensor geometry, cloud, sensor acquisition and health issues and because of changing relative orientation of the ARD tiles with respect to the Landsat orbit paths. Quantitative per-pixel and summary ARD tile results are reported. Within the CONUS, the average annual number of non-cloudy observations in each 150 × 150 km ARD tile varies from 0.53 to 16.80 (Landsat 4 TM), 11.08 to 22.83 (Landsat 5 TM), 9.73 to 21.72 (Landsat 7 ETM+) and 14.23 to 30.07 (all three sensors). The annual number was most frequently only 2 to 4 Landsat 4 TM observations (36% of the CONUS tiles), increasing to 14 to 16 Landsat 5 TM observations (26% of tiles), 12 to 14 Landsat 7 ETM+ observations (31% of tiles) and 18 to 20 observations (23% of tiles) when considering all three sensors. The most frequently observed ARD tiles were in the arid south-west and in certain mountain rain shadow regions and the least observed tiles were in the north-east, around the Great Lakes and along parts of the north-west coast. The quality of time series algorithm results is expected to be reduced at ARD tiles with low reported availability. The smallest annual number of cloud-free observations for the Landsat 5 TM are over ARD tile h28v04 (northern New York state), for Landsat 7 ETM+ are over tile h25v07 (Ohio and Pennsylvania) and for Landsat 4 TM are over tile h22v08 (northern Indiana). The greatest annual number of cloud-free observations for the Landsat 5 TM and 7 ETM+ ARD are over southern California ARD tile h04v11 and for the Landsat 4 TM are over southern Arizona tile h06v13. The reported results likely overestimate the number of good surface observations because shadows and cirrus clouds were not considered. Implications of the findings for terrestrial monitoring and future ARD research are discussed. Full article
(This article belongs to the Special Issue Science of Landsat Analysis Ready Data)
Show Figures

Graphical abstract.
Figure 1. CONUS ARD tile locations with horizontal (h = 0 to 32) and vertical (v = 0 to 20) tile coordinates. The ARD tile boundaries are shown by grey lines and are defined by 5000 × 5000 30 m pixels in the Albers equal area projection (see [2] for the projection parameters); national and state boundaries are shown as black lines. Tiles where at least 50% of their coverage is within the CONUS (defined by vector data, [16]) are shown white, otherwise grey.
Figure 2. Annual number of Landsat ARD granules over the CONUS for each sensor over the 36-year study period (1 January 1982 to 31 December 2017). The sensor count data (colored bars) are shown stacked on top of each other in periods with overlapping sensor data.
Figure 3. Seasonal number of Landsat ARD granules over the CONUS over the 36-year study period, shown in order of winter (December to February), spring (March to May), summer (June to August) and autumn (September to November) for each year. The sensor count data (colored bars) are shown stacked on top of each other in periods with overlapping sensor data.
Figure 4. Total number (see color scale) of (a) non-fill, (b) non-fill non-cloudy and (c) non-fill non-cloudy non-shadow Landsat 4 TM, 5 TM and 7 ETM+ observations at each CONUS ARD 30 m pixel location over the 36 year study period.
Figure 5. Example detailed spatial subset of Figure 4a over the northern tip of Maine (state boundary in black) showing the total number of non-fill Landsat 4 TM, 5 TM and 7 ETM+ observations over the 36 year study period for six ARD tiles (h29 to h31, v01 to v02).
Figure 6. The average annual number (μ_pixel^sensor, Equation (1)) of non-fill non-cloudy observations at each CONUS ARD 30 m pixel location over the 36 year study period for (a) Landsat 4 TM, 5 TM and 7 ETM+ combined, (b) Landsat 7 ETM+, (c) Landsat 5 TM, (d) Landsat 4 TM.
Figure 7. The interquartile range (IQR_pixel^sensor, Equation (2)) of the annual number of non-fill non-cloudy observations at each CONUS ARD 30 m pixel location for (a) Landsat 4 TM, 5 TM and 7 ETM+ combined, (b) Landsat 7 ETM+, (c) Landsat 5 TM, (d) Landsat 4 TM. The maximum CONUS ARD IQR_pixel^sensor values are 55 (a), 27 (b), 31 (c) and 6 (d). Light grey shows locations where there were observations over the sensor lifetime but where the interquartile range is zero, for example, where the annual number of non-fill non-cloudy observations was the same among the years.
Figure 8. Histogram of the temporal differences between consecutive non-fill non-cloudy Landsat observations over the 36 year study period at the most observed non-fill non-cloudy CONUS ARD pixel location (32°47′34.09″ N 114°55′47.60″ W, southern California, ARD tile h05v13) for (a) Landsat 4 TM, 5 TM and 7 ETM+ combined, (b) Landsat 7 ETM+, (c) Landsat 5 TM, (d) Landsat 4 TM.
Figure 9. The average annual number (μ_tile^sensor, Equation (3)) of non-fill non-cloudy observations across each CONUS ARD tile over the 36 year study period for (a) Landsat 4 TM, 5 TM and 7 ETM+ combined, (b) Landsat 7 ETM+, (c) Landsat 5 TM, (d) Landsat 4 TM. The ten most (red outlined) and the ten least (white outlined) observed CONUS ARD tiles (white tiles in Figure 1) are shown and are summarized in Tables 2 and 3.
Figure 10. CONUS histograms of the average annual number (μ_tile, Equation (3)) of non-fill non-cloudy observations across each ARD tile for the 36 year study period, for (a) Landsat 4 TM, 5 TM and 7 ETM+ combined, (b) Landsat 7 ETM+, (c) Landsat 5 TM, (d) Landsat 4 TM. The histogram bins are defined as <2, 2–<4, 4–<6, …, 30–<32. All results derived from the Figure 9 data.
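The per-pixel availability statistics referenced in the captions above (the average annual number of non-fill non-cloudy observations and its interquartile range, Equations (1) and (2) of the paper) can be computed from a stack of per-observation "good pixel" masks. The sketch below uses synthetic masks and hypothetical names and only mirrors the general form of those statistics; the paper's exact definitions are given by its Equations (1)–(3).

```python
import numpy as np

def annual_counts_from_masks(good_obs_masks, years):
    """Count non-fill non-cloudy observations per pixel per year.
    good_obs_masks: (n_obs, rows, cols) boolean stack; years: (n_obs,) acquisition year."""
    unique_years = np.unique(years)
    counts = np.stack([good_obs_masks[years == y].sum(axis=0) for y in unique_years])
    return counts  # shape (n_years, rows, cols)

rng = np.random.default_rng(11)
n_obs, rows, cols = 400, 60, 60
years = rng.integers(1982, 2018, size=n_obs)          # acquisition year per observation
good = rng.random((n_obs, rows, cols)) > 0.5          # hypothetical "good observation" masks

counts = annual_counts_from_masks(good, years)
mean_annual = counts.mean(axis=0)                     # Equation (1)-style per-pixel average
iqr = (np.percentile(counts, 75, axis=0)
       - np.percentile(counts, 25, axis=0))           # Equation (2)-style per-pixel IQR
print("mean annual count (image mean):", round(float(mean_annual.mean()), 2))
print("IQR (image mean):", round(float(iqr.mean()), 2))
```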
19 pages, 16827 KiB  
Article
Fusing Multimodal Video Data for Detecting Moving Objects/Targets in Challenging Indoor and Outdoor Scenes
by Zacharias Kandylakis, Konstantinos Vasili and Konstantinos Karantzalos
Remote Sens. 2019, 11(4), 446; https://doi.org/10.3390/rs11040446 - 21 Feb 2019
Cited by 11 | Viewed by 4486
Abstract
Single sensor systems and standard optical—usually RGB CCTV video cameras—fail to provide adequate observations, or the amount of spectral information required to build rich, expressive, discriminative features for object detection and tracking tasks in challenging outdoor and indoor scenes under various environmental/illumination conditions. Towards this direction, we have designed a multisensor system based on thermal, shortwave infrared, and hyperspectral video sensors and propose a processing pipeline able to perform in real-time object detection tasks despite the huge amount of the concurrently acquired video streams. In particular, in order to avoid the computationally intensive coregistration of the hyperspectral data with other imaging modalities, the initially detected targets are projected through a local coordinate system on the hypercube image plane. Regarding the object detection, a detector-agnostic procedure has been developed, integrating both unsupervised (background subtraction) and supervised (deep learning convolutional neural networks) techniques for validation purposes. The detected and verified targets are extracted through the fusion and data association steps based on temporal spectral signatures of both target and background. The quite promising experimental results in challenging indoor and outdoor scenes indicated the robust and efficient performance of the developed methodology under different conditions like fog, smoke, and illumination changes. Full article
Show Figures

Graphical abstract.
Figure 1. In challenging indoor or outdoor environments with dynamically changing conditions such as different smoke, fog, and humidity levels, standard moving object detection and tracking algorithms fail to detect moving targets based on just a single imaging (usually RGB CCTV) source.
Figure 2. The multisensor system on the left, with the thermal (top left and right), the SWIR (middle top) and the two hyperspectral snapshot sensors (middle bottom) on top of the single board computer with the embedded processor. The sensor specifications are presented in the table on the right.
Figure 3. The concurrently acquired multimodal imaging data from the developed multisensor system, including the SWIR, thermal, and the two hyperspectral (4 × 4 and 5 × 5 snapshot mosaic) sensors. The corresponding frame from a standard RGB optical CCTV sensor is presented (top left) as well.
Figure 4. The multisensor system observes the region of interest (ROI), while each sensor has a different field of view (FOV). The ROI plane is associated with an arbitrarily defined local coordinate system (LCS). In order to establish correspondences among the different FOVs, the perspective and inverse perspective transformations are estimated, which can relate image coordinates among the different oblique views of all imaging sources.
Figure 5. The developed processing pipeline for efficient multisensor data fusion.
Figure 6. The possible moving targets (PMT) from the SWIR (left) and thermal (middle) sensors are projected through the LCS onto the hyperspectral data cube and the associated/verified targets are detected (right). Indicative results for frame #118 (top) and frame #123 (bottom) from the outdoor scene dataset are presented.
Figure 7. The temporal spectral signatures of both the target and the background are calculated and employed during the target recognition and verification step.
Figure 8. Detection results from the developed method (based on the BS detector) on the outD1 dataset. Frames #33, #47, and #90 are presented along with the intermediate 'detected moving objects' and final 'verified targets' overlaid on the SWIR and hyperspectral (539 nm and 630 nm, respectively) bands.
Figure 9. Indicative successful real-time detection results on the outdoor outD1 dataset from the integrated BS, FR-CNN, and YOLO detectors on the SWIR footage.
Figure 10. Experimental results after the application of the developed framework to the outD1 dataset (indicative frames #033, #079, #087, and #108). For each frame the SWIR and three hyperspectral bands (476, 539, and 630 nm) are presented. The detected targets are annotated in red on the SWIR. Their projections are also shown on the hyperspectral images. Zoom-in views are also provided.
Figure 11. Indicative detection examples overlaid on the SWIR data for the first indoor dataset (inD1) based on the integrated BS, FR-CNN, and YOLO detectors.
Figure 12. Indicative detection examples overlaid on the SWIR data for the second indoor dataset (inD2), with the presence of significant smoke, based on the integrated BS, FR-CNN, and YOLO detectors.
27 pages, 15279 KiB  
Article
Spatiotemporal Patterns and Morphological Characteristics of Ulva prolifera Distribution in the Yellow Sea, China in 2016–2018
by Yingzhi Cao, Yichen Wu, Zhixiang Fang, Xiaojian Cui, Jianfeng Liang and Xiao Song
Remote Sens. 2019, 11(4), 445; https://doi.org/10.3390/rs11040445 - 21 Feb 2019
Cited by 44 | Viewed by 5396
Abstract
The world’s largest macroalgal blooms, Ulva prolifera, have appeared in the Yellow Sea every summer on different scales since 2007, causing great harm to the regional marine economy. In this study, the Normalized Difference Vegetation Index (NDVI) was used to [...] Read more.
The world’s largest macroalgal blooms, Ulva prolifera, have appeared in the Yellow Sea every summer on different scales since 2007, causing great harm to the regional marine economy. In this study, the Normalized Difference Vegetation Index (NDVI) was used to extract the green tide of Ulva prolifera from MODIS images of the Yellow Sea in 2016–2018, to investigate its spatiotemporal patterns, and to calculate its occurrence probability. Using the standard deviational ellipse (SDE), the morphological characteristics of the green tide, including directionality and regularity, were analyzed. The results showed that the largest distribution and coverage areas occurred in 2016, at 57,384 km2 and 2906 km2, respectively, and that the total affected region over the three years was 163,162 km2. The green tide drifted northward and died out near Qingdao, Shandong Province, which was found to be a high-risk region. The coast of Jiangsu Province was believed to be the source of Ulva prolifera, but probably not the only one. The regularity of the boundary shape of the distribution changed inversely with its scale. Several sharp increases were found in the parameters of the SDE in all three years. In conclusion, the overall situation of Ulva prolifera remained severe in recent years, and the sea area near Qingdao became the worst-hit area of the green tide event. It was also shown that the sea surface wind played an important part in its migration and morphological changes. Full article
(This article belongs to the Special Issue Advanced Topics in Remote Sensing)
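To make the morphological analysis concrete, the sketch below computes the two standard deviational ellipse (SDE) indicators discussed in the paper, the direction angle and the oblateness e = (a - b)/a, from the map coordinates of green-tide pixels, together with a simple NDVI thresholding step for extracting those pixels. It is a generic, minimal implementation under stated assumptions (a covariance-based ellipse and an illustrative NDVI threshold of 0.0), not the authors' code.

import numpy as np

def ndvi_green_tide_mask(nir, red, threshold=0.0):
    # Flag candidate U. prolifera pixels by NDVI thresholding. The threshold
    # value is a placeholder; the paper does not state the value assumed here.
    ndvi = (nir - red) / (nir + red + 1e-9)
    return ndvi > threshold

def standard_deviational_ellipse(x, y):
    # First standard deviational ellipse of a point set: returns the centre,
    # the semi-major/semi-minor axis lengths (one standard deviation, from the
    # covariance eigenvalues), the direction angle in degrees clockwise from
    # north, and the oblateness e = (a - b) / a.
    pts = np.column_stack([x, y]).astype(float)
    centre = pts.mean(axis=0)
    cov = np.cov((pts - centre).T)
    eigvals, eigvecs = np.linalg.eigh(cov)   # eigenvalues in ascending order
    b, a = np.sqrt(eigvals)                  # semi-minor, semi-major axes
    major = eigvecs[:, 1]                    # unit vector along the long axis
    theta = np.degrees(np.arctan2(major[0], major[1])) % 180.0
    return centre, a, b, theta, (a - b) / a

In practice the pixel mask would first be projected to map coordinates (the paper works in WGS_1984_UTM_zone_51N), so that the axes and the angle from true north are expressed in metres and degrees.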
Show Figures
Figure 1. Map of the study area and major cities along the coast.
Figure 2. Research flow chart for this study.
Figure 3. Data processing steps.
Figure 4. Spatial and temporal distribution of U. prolifera in the Yellow Sea, China in 2018. The area within the red line represents the distribution of U. prolifera. The blue gradient band represents the coverage area of U. prolifera; the darker the color, the denser the algae. The statistical values of the two kinds of areas appear in the upper left corner of each sub-figure; A1 denotes the affected area, and A2 represents the coverage area. (a) June 3; (b) June 12; (c) June 21; (d) June 24; (e) June 29; (f) July 15; (g) July 19; (h) July 23; (i) August 2; (j) August 11. In image (j), U. prolifera exists as only a slender filament, which has a negligible effect on the larger area; therefore, only its coverage is counted. (Coordinate system: WGS_1984_UTM_zone_51N).
Figure 5. Spatial and temporal distribution of U. prolifera in the Yellow Sea, China in 2017. (a) April 30; (b) May 7; (c) May 18; (d) May 27; (e) June 4; (f) June 14; (g) June 18; (h) June 26; (i) July 13.
Figure 6. Spatial and temporal distribution of U. prolifera in the Yellow Sea, China in 2016. (a) May 16; (b) May 25; (c) June 1; (d) June 17; (e) June 25; (f) July 2; (g) July 14; (h) July 24; (i) July 29. In image (i), only the coverage area of U. prolifera was counted, as it existed as only a slender filament. The region in the red box is the coverage of U. prolifera, which is magnified in the detail image below.
Figure 7. (a) The probability distribution of the occurrence of U. prolifera in the Yellow Sea during 2016–2018. From dark green to bright red, different colors represent different ranges of probability, with red representing the highest-incidence area of U. prolifera. (b) The pie chart shows the percentage of the region corresponding to each probability as a part of the total affected area; the percentage and the specific area are shown on the chart. At the bottom of the pie chart there is a probabilistic color bar; each color corresponds to an interval of the probability of occurrence of U. prolifera, the same as in (a).
Figure 8. (a) Migration trajectory of the average barycenter in 2018. (b) Migration trajectory of the average barycenter in 2017. (c) Migration trajectory of the average barycenter in 2016. The direction of the black arrow in the three graphs indicates the main direction of the path, and the length of the line represents the path length and, to some extent, the consistency of the subpaths. Generally speaking, the longer the migration path, the more consistent the direction trend of the subpaths and the longer the main direction line.
Figure 9. Standard deviation ellipse of the coverage area of U. prolifera in 2018. The first standard deviation ellipse was used in this paper, containing 68% of the data (the proportion of data contained does not affect the values of the two key indicators). (a) The SDE on June 3; (b) June 12; (c) June 21; (d) June 24; (e) June 29; (f) July 15; (g) July 19; (h) July 23; (i) August 2; (j) August 11. Four parameters of the standard deviation ellipse are marked in (a): the yellow dot is the elliptical center point, and the two green lines passing through the center point and perpendicular to each other are the X-axis and Y-axis, respectively. The angle between the north direction and the long axis is the directional angle, represented by an orange arc in the diagram. The annotation of these parameters is omitted from the other subgraphs. The direction and oblateness of the ellipse are indicated at the bottom of each subgraph; θ represents the direction angle, whereas e is the oblateness.
Figure 10. (A) Standard deviation ellipse of the coverage area of U. prolifera in 2017: (a) April 30; (b) May 7; (c) May 18; (d) May 27; (e) June 4; (f) June 14; (g) June 18; (h) June 26; (i) July 13. (B) Standard deviation ellipse of the coverage area of U. prolifera in 2016: (a) May 16; (b) May 25; (c) June 1; (d) June 17; (e) June 25; (f) July 2; (g) July 14; (h) July 24; (i) July 29.
Figure 11. (a–c) Line charts of changes in oblateness in 2018, 2017, and 2016, respectively. The horizontal coordinate is the date, and the vertical coordinate is the oblateness. The specific value for each date is marked above the broken line. The red segments indicate the periods when oblateness increased dramatically.
Figure 12. (a) Direction angle variation over time in 2018. (b) Direction angle variation over time in 2017. (c) Direction angle variation over time in 2016. The direction of each straight line (a, b, c, etc.) indicates the direction of the standard deviation ellipse on the corresponding date; dates and values are marked on the right side of the figure. The vertical axis of the coordinate system is the north direction. Red and blue lines highlight the process of abrupt angle changes, which are represented in the figure by magenta arcs.
Figure 13. (a) Distribution area and perimeter ratio in 2018, shown as two broken lines; the horizontal coordinate represents the time series, the left vertical coordinate is the distribution area, and the right vertical coordinate is the ratio of perimeter to area. (b) Distribution area and perimeter ratio in 2017. (c) Distribution area and perimeter ratio in 2016.
Figure 14. (a) Histogram of U. prolifera distribution and coverage area in 2018. The abscissa is the date, and the ordinate is the area in square kilometers. The stable period of the coverage area in 2018 is highlighted by magenta boxes. (b) Histogram of U. prolifera distribution and coverage area in 2017. (c) Histogram of U. prolifera distribution and coverage area in 2016.
Figure 15. (A) Weekly average surface wind speed and direction from early June to mid-August 2018. (B) Weekly average surface wind speed and direction from late April to July 2017. (C) Weekly average surface wind speed and direction from early May to July 2016. Data were obtained from Remote Sensing Systems (RSS), http://www.remss.com/. Remote Sensing Systems is supported by NASA, NOAA, and the NSF, offering research-quality sea surface wind speed and direction products and other products for use in research and climate study.
Figure 16. Three-day average surface wind speed and direction for the three periods of oblateness change in 2018, obtained from RSS. In each period, the upper sub-graph shows the sea-surface wind (SSW) at the starting time, and the lower vector graph shows the changes in wind speed and direction within the period. (a) The change of SSW in period one. (b) The change of SSW in period two. (c) The change of SSW in period three.
Figure 17. Three-day average surface wind speed and direction for the oblateness change periods in 2017, obtained from RSS. (a–c) The change of SSW in period one. (d,e) The change of SSW in period two.
Figure 18. Three-day average surface wind speed and direction for the oblateness change periods in 2016, obtained from RSS. (a–c) The change of SSW in period one. (d,e) The change of SSW in period two.
17 pages, 61620 KiB  
Article
Shrinkage of Nepal’s Second Largest Lake (Phewa Tal) Due to Watershed Degradation and Increased Sediment Influx
by C. Scott Watson, Jeffrey S. Kargel, Dhananjay Regmi, Summer Rupper, Joshua M. Maurer and Alina Karki
Remote Sens. 2019, 11(4), 444; https://doi.org/10.3390/rs11040444 - 21 Feb 2019
Cited by 18 | Viewed by 14997
Abstract
Phewa Lake is an environmental and socio-economic asset to Nepal and the city of Pokhara. However, the lake area has decreased in recent decades due to sediment influx. The rate of this decline and the areal evolution of Phewa Lake due to artificial [...] Read more.
Phewa Lake is an environmental and socio-economic asset to Nepal and the city of Pokhara. However, the lake area has decreased in recent decades due to sediment influx. The rate of this decline and the areal evolution of Phewa Lake due to artificial damming and sedimentation are disputed in the literature owing to the lack of a historical time series. In this paper, we present an analysis of the lake’s evolution from 1926 to 2018 and model the 50-year trajectory of shrinkage. The area of Phewa Lake expanded from 2.44 ± 1.02 km2 in 1926 to a maximum of 4.61 ± 0.07 km2 in 1961. However, the lake area change was poorly constrained prior to a 1957–1958 map. The contemporary lake area was 4.02 ± 0.07 km2 in April 2018, and it expands seasonally by ~0.18 km2 due to the summer monsoon. We found no evidence to support a lake area of 10 km2 in 1956–1957, despite frequent reporting of this value in the literature. Based on the rate of areal decline and sediment influx, we estimate the lake will lose 80% of its storage capacity within the next 110–347 years, which will affect recreational use, agricultural irrigation, fishing, and a one-megawatt hydroelectric power facility. Mitigation of lake shrinkage will require addressing landslide activity and sediment transport in the watershed, as well as urban expansion along the shores. Full article
(This article belongs to the Special Issue Remote Sensing of Inland Waters and Their Catchments)
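The lake-area time series behind these results ultimately comes down to classifying water pixels in each satellite scene and converting the pixel count to an area. The sketch below shows one common way to do this with a normalized difference water index (NDWI) threshold; the NDWI approach, the threshold value, and the pixel size are illustrative assumptions and not necessarily the classification method the authors used.

import numpy as np

def lake_area_km2(green, nir, pixel_size_m=5.0, ndwi_threshold=0.0):
    # Classify water with the McFeeters NDWI = (green - nir) / (green + nir)
    # and convert the water-pixel count to an area in km^2. The bands are 2-D
    # reflectance arrays; the threshold and 5 m pixel size are placeholders.
    ndwi = (green - nir) / (green + nir + 1e-9)
    water = ndwi > ndwi_threshold
    return water.sum() * (pixel_size_m ** 2) / 1e6

As a sanity check on the arithmetic, about 160,800 water pixels at 5 m resolution correspond to roughly 4.02 km2, the April 2018 lake area quoted in the abstract.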
Show Figures
Figure 1. Phewa Lake catchment and the city of Pokhara located in central Nepal. True-colour RapidEye background (10 November 2017).
Figure 2. Changes in lake area before and after the most recent dam (1982). (a) Lake area change from 1926–1976 and (b) 1988–2018. (c) Lake shoreline change from 1926–1976 shown on a 1962 Corona satellite image, and (d) 1988–2018 shown on a RapidEye satellite image (April 2018). The 1926 and 1958 outlines are dashed due to georeferencing and geodetic uncertainty.
Figure 3. Seasonal changes at Phewa Lake. (a–c) Shoreline change between high (10 October 2017) and low (27 March 2018) water levels. Backgrounds are RapidEye false colour composites (bands: near-infrared, red, and blue). (d) Water level elevation recorded at Phewa Dam with a seven-day moving average. (e) Mean lake temperature derived from ASTER Thermal Infrared (n = 10, 2000–2018), Landsat 8 Thermal Infrared Sensor (n = 48, 2013–2018), and Gurung et al. (2005) (n = 30, 2000–2002).
Figure 4. Lake bathymetry (18 May 2018). The Harpan River enters the lake at points (1) and (2).
Figure 5. Modelled area (a) and volume (b) decrease for high and low sediment influx scenarios. (c) The resulting shoreline change to the year 2067.
Figure 6. (a–f) Landscape changes in the upper part of the Phewa catchment from 1988–1994, and (g–j) lake shrinkage on the western shore. Images are Landsat 5 false colour composites (bands: near-infrared, red, and blue).
Figure 7. (a) Landslides and normalised difference vegetation index (NDVI) change in the Phewa catchment. The inset box refers to Figure S3. (b) Areas of change in NDVI from 2010–2017.
Figure 8. Changes in lake area from 2010–2018 and the locations of sediment influx. Backgrounds are RapidEye false colour composites (bands: near-infrared, red, and blue). The corresponding classified lake outlines are shown as blue outlines (yellow outline for March 2010).
Figure 9. (a) Tracks, roads, and buildings in the Phewa Lake catchment. (b) Catchment slope derived from the Shuttle Radar Topography Mission DEM. (c) Enlarged view of the lakeshore. (d) Difference in the number of buildings within 200 m of the lakeshore, aggregated to a 1 km2 grid (2004–2018).
Figure 10. (a) Phewa Lake in 1961 with a panchromatic Corona satellite image background; (b) Phewa Lake in 2017 with a RapidEye (10 November 2017) false colour composite background (bands: near-infrared, green, and blue).
15 pages, 2314 KiB  
Article
A Novel Approach for the Detection of Developing Thunderstorm Cells
by Richard Müller, Stéphane Haussler, Matthias Jerg and Dirk Heizenreder
Remote Sens. 2019, 11(4), 443; https://doi.org/10.3390/rs11040443 - 21 Feb 2019
Cited by 13 | Viewed by 5307
Abstract
This study presents a novel approach for the early detection of developing thunderstorms. To date, methods for the detection of developing thunderstorms have usually relied on accurate Atmospheric Motion Vectors (AMVs) for the estimation of the cooling rates of convective clouds, which correspond [...] Read more.
This study presents a novel approach for the early detection of developing thunderstorms. To date, methods for the detection of developing thunderstorms have usually relied on accurate Atmospheric Motion Vectors (AMVs) for the estimation of the cooling rates of convective clouds, which correspond to the updraft strengths of the cloud objects. In this study, we present a method for the estimation of the updraft strength that does not rely on AMVs. The updraft strength is derived directly from the satellite observations in the SEVIRI water vapor channels. For this purpose, the absolute value of the vector product of the spatio-temporal gradients of the SEVIRI water vapor channels is calculated for each satellite pixel, referred to as the Normalized Updraft Strength (NUS). The main idea is that vertical updraft leads to NUS values significantly above zero, whereas horizontal cloud movement leads to NUS values close to zero. Thus, NUS is a measure of the strength of the vertical updraft and can be applied to distinguish between advection and convection. The performance of the method was investigated for two summer periods in 2016 and 2017 by validation against lightning data. Critical Success Index (CSI) values of about 66% for 2016 and 60% for 2017 demonstrate the good performance of the method. The Probability of Detection (POD) values for the base case are 81.8% (2016) and 89.2% (2017), and the corresponding False Alarm Ratio (FAR) values are 22.6% (2016) and 36.4% (2017). In summary, the method has the potential to reduce forecast lead time significantly and can be quite useful in regions without a well-maintained radar network. Full article
(This article belongs to the Special Issue Remote Sensing Methods and Applications for Traffic Meteorology)
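One plausible reading of the NUS definition above is that, for every pixel, a spatio-temporal gradient (dBT/dx, dBT/dy, dBT/dt) is formed for each of the two water vapor channels (WV6.2 and WV7.3) and the magnitude of their cross product is taken, so that purely horizontal advection (near-parallel gradients) gives values close to zero while vertical updraft does not. The sketch below follows that reading; the gradient construction and the normalization are assumptions, not the authors' code, and the POD/FAR/CSI functions are the standard textbook definitions of the scores quoted in the abstract.

import numpy as np

def nus(bt62_t0, bt62_t1, bt73_t0, bt73_t1):
    # Normalized Updraft Strength field from two consecutive brightness-
    # temperature images of the SEVIRI WV6.2 and WV7.3 channels. For each
    # pixel the spatio-temporal gradients of the two channels are crossed;
    # parallel gradients (pure advection) give values near zero.
    def spatio_temporal_gradient(t0, t1):
        gy, gx = np.gradient(0.5 * (t0 + t1))   # spatial gradients (per pixel)
        gt = t1 - t0                            # temporal gradient (per time step)
        return np.stack([gx, gy, gt], axis=-1)
    g62 = spatio_temporal_gradient(bt62_t0, bt62_t1)
    g73 = spatio_temporal_gradient(bt73_t0, bt73_t1)
    cross = np.linalg.norm(np.cross(g62, g73), axis=-1)
    norm = np.linalg.norm(g62, axis=-1) * np.linalg.norm(g73, axis=-1) + 1e-9
    return cross / norm                         # assumed normalization to [0, 1]

def pod(hits, misses):
    return hits / (hits + misses)

def far(hits, false_alarms):
    return false_alarms / (hits + false_alarms)

def csi(hits, misses, false_alarms):
    return hits / (hits + misses + false_alarms)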
Show Figures
Figure 1. Simplified illustration of the generation of Cbs. Stage 1: towering cumulus. Stage 2: strong convection and updraft of the cloud (developing thunderstorm), leading to the generation of a mature Cb (Stage 3). Once the Cb has reached the tropopause, the updraft is retarded and the kinetic updraft energy is transferred into horizontal movement/development. This leads to the development of an anvil and the subsequent dissipation of the Cb after a certain lifetime.
Figure 2. SEVIRI normalized weighting function of MSG for clear sky; copyright (2002) EUMETSAT.
Figure 3. Cloud regime dominated by stratiform clouds (advection). (Left) The BT of WV7.3 for 8 June 2017, 08:45 UTC. (Right) Normalized Updraft Strength (NUS) values for the same region, derived from the 08:45 and 09:00 UTC BT images. The NUS values are not significantly above zero, and no convection is indicated; no lightning occurred over the complete region and time period.
Figure 4. Example of convective clouds and developing thunderstorms, 1 June 2017. (Top) The BT of the two water vapor channels (left: WV7.3; right: WV6.2) at 08:45 UTC. (Bottom left) The NUS values derived from the water vapor images (08:45 and 09:00 UTC). (Bottom right) Number of lightning events at 0.05 × 0.05 degree resolution occurring 4–19 min after the satellite scan of the 09:00 UTC image.
Figure 5. Skill scores (POD, FAR, CSI) in % plotted against the NUS threshold for the 2016 (left) and 2017 (right) periods, each for an SR of 32 km. POD and FAR decreased with increasing NUS threshold, while CSI was relatively stable, with the highest values around 0.015 for 2016 and 0.02 for 2017. For these experiments, the NWP filtering and the lightning settings were set in accordance with Experiment 1 in Table 1.
Figure 6. Example of the effect of false alarms due to a frontal system in a low Convective Available Potential Energy (CAPE) environment (6 June 2017, 06 UTC). (Top) The water vapor channel image; the frontal system is apparent in the WV channel as a transition region between very dry (cloud-free) and very moist (cloudy) air. (Bottom) The validation results. The red points are correctly detected developing thunderstorms, while the black points on a yellow background, indicating CAPE below 60 J/kg, are examples of false alarms in a low CAPE environment. Grey marks false alarms that were successfully filtered out. Yellow marks all regions where the NWP filtering (CAPE > 60 J/kg) would allow the detection of developing thunderstorms. Depending on the temperature gradients and CAPE, the frontal system could lead either to the development of Cbs or to false alarms.
Figure 7. Illustration of the blurring effect on the observed NUS values. Objects colored red represent developing thunderstorms; objects colored grey represent harmless clouds or missed detections caused by blurring. The cell with an NUS value of 0.04 (top left) would be observed in the relevant satellite pixels as 0.01 (bottom left) because of its location, which is a matter of chance, whereas the cells with 0.022 and 0.014 (top right) would be observed close to their real values (bottom right). Assuming a threshold of 0.15, the largest cell would not be seen and would be counted as a missed detection. However, reducing the threshold to 0.1 would produce an additional false alarm from the cell region with an NUS of 0.12, and the large cell would still be counted as a missed detection. Furthermore, the small cell with a large updraft (top right) would not be apparent in the satellite observations. Thus, increasing the NUS threshold decreases the FAR but also decreases the POD, and vice versa.