Remote Sens., Volume 9, Issue 4 (April 2017) – 96 articles

Cover Story: Ice sheets hold the largest potential for sea level rise in the upcoming decades to centuries and represent the largest source of uncertainty for projections. Ice sheet surface velocity is a fundamental observable of their dynamics that has only recently become available from space. It provides essential information for assessing past, current, and future contributions to sea level, and for understanding the physics of ice flow. In this issue, Mouginot et al., funded through NASA's MEaSUREs program, develop a new methodology to fuse multisensor data sources, including CSA's RADARSAT-2, ESA's Sentinel-1, and USGS's Landsat-8, into coherent and comprehensive time series of ice velocity over the entire Antarctic and Greenland ice sheets. These measurements of ice motion provide a quantum-leap improvement in observational constraints for ice sheet numerical [...]
Article
Interference of Heavy Aerosol Loading on the VIIRS Aerosol Optical Depth (AOD) Retrieval Algorithm
by Yang Wang, Liangfu Chen, Shenshen Li, Xinhui Wang, Chao Yu, Yidan Si and Zili Zhang
Remote Sens. 2017, 9(4), 397; https://doi.org/10.3390/rs9040397 - 23 Apr 2017
Cited by 27 | Viewed by 7495
Abstract
Aerosol optical depth (AOD) has been widely used in climate research, atmospheric environmental observations, and other applications. However, AOD retrieval under heavy aerosol loading remains challenging over heavily polluted regions such as the North China Plain (NCP). The Visible Infrared Imaging Radiometer Suite (VIIRS), designed as the successor to the Moderate Resolution Imaging Spectroradiometer (MODIS), will take over the aerosol observation mission in the coming years. Using the VIIRS AOD retrieval algorithm as an example, we analyzed the influence of heavy aerosol loading through the 6SV radiative transfer model (RTM), focusing on three aspects: cloud masking, ephemeral water body tests, and data quality estimation. First, certain pixels were mistakenly screened out as clouds or ephemeral water bodies because of heavy aerosols, resulting in the loss of AOD retrievals. Second, the greenness of the surface could not be accurately identified by the top-of-atmosphere (TOA) index, and the quality of the aggregated data may be artificially high. Thus, the AOD retrieval algorithm did not perform satisfactorily, as indicated by the low availability of data coverage (at least 37.97% of all data records were missing according to ground-based observations) and overestimation of the data quality (high-quality data increased from 63.42% to 80.97% according to radiative simulations). To resolve these problems, we suggest implementing a spatial variability cloud mask method and a surficial index to improve the algorithm.
(This article belongs to the Special Issue Remote Sensing of Atmospheric Pollution)
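The suggested fix can be illustrated with a toy version of the spatial-variability cloud mask: a pixel survives only if the local standard deviation of TOA reflectance stays below a band-specific threshold (0.005 for M1 in the paper's Figure 6), since haze is spatially smooth while cloud edges are not. The 3×3 window and all names below are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def spatial_variability_cloud_mask(rho_toa, threshold=0.005, win=3):
    """Flag pixels as cloud when the local standard deviation of TOA
    reflectance exceeds a band-specific threshold (sketch only)."""
    h, w = rho_toa.shape
    r = win // 2
    # Pad edges by reflection so border pixels get a full window.
    padded = np.pad(rho_toa, r, mode="reflect")
    std = np.empty_like(rho_toa)
    for i in range(h):
        for j in range(w):
            std[i, j] = padded[i:i + win, j:j + win].std()
    return std > threshold

# Smooth haze field (low spatial variability) with one bright cloud pixel.
scene = np.full((5, 5), 0.30)
scene[2, 2] = 0.60  # cloud pixel creates high local STD
mask = spatial_variability_cloud_mask(scene, threshold=0.005)
print(mask[2, 2], mask[0, 0])  # cloudy center flagged, smooth corner not
```

Haze raises reflectance everywhere but leaves local variability low, so the hazy background is retained for retrieval while the high-variability cloud pixel is screened out.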
Figure 1. Annual average AOD distributions over the research area in 2015 (MODIS Collection 6 Deep Blue AOD at 550 nm). The right-hand panel shows the NCP, which is marked by a black square frame in the left-hand panel.
Figure 2. National Oceanic and Atmospheric Administration (NOAA) VIIRS AOD products (all data quality) over hazy areas. The AOD products were overlaid on the true color image, and no-retrieval areas were set as transparent. Some AOD values, marked with red ellipses, are invalid because of heavy haze events.
Figure 3. IP to EDR aggregation flow chart.
Figure 4. Two-day VIIRS true color images (a,b) and NOAA cloud mask results (c,d) over the NCP on 23 December 2013 and 18 March 2016. Cloud pixels are shown in blue in the cloud mask results.
Figure 5. ρ_TOA simulation for the M1 (a) and M3 (b) bands under different AOD values (ranging from 0 to 3). The different lines represent several surface reflectance values.
Figure 6. Histograms (a,b) and cumulative histograms (c,d) of the ρ_TOA STD in the VIIRS bands M1 (left column) and M3 (right column) for three types of pixels: clouds (blue), haze (grey), and clear sky (green). The red lines are the suggested thresholds of the spatial variability test: 0.005 for M1 and 0.01 for M3.
Figure 7. VIIRS true color images on 13 January 2014 (a) and 10 March 2014 (b) and the corresponding ephemeral water body test results (c,d) over the NCP. Ephemeral water body pixels are shown in blue.
Figure 8. TOA NDVI simulation results for six types of land cover. The satellite zenith angle was 30°, the solar zenith angle was 30°, and the relative azimuth angle was 120°. In this simulation, the aerosol type was assumed to be continental, and the AOD ranged from 0 to 3.
Figure 9. (a) MODIS surface reflectance at 1.23 µm over the NCP. The NDVI_SWIR values were simulated using the surface reflectance under aerosol conditions of AOD = 0.1, 1, and 2. (b) Difference in the NDVI_SWIR simulation values between AOD = 1 and 0.1. (c) Difference in the NDVI_SWIR simulation values between AOD = 2 and 0.1. Surface type identification results under aerosol loads of (d) AOD = 0.1, (e) AOD = 1, and (f) AOD = 2.
Figure 10. Histograms of NDVI_SWIR when AOD = 0.1, 1, and 2. The red lines mark the NDVI_SWIR frequency peak under the three different atmospheric conditions.
Figure 11. EDR data quality simulation results over the NCP under different aerosol loadings: (a) AOD = 0.1, (b) AOD = 1, and (c) AOD = 2.
Article
Soil Moisture Estimation over Vegetated Agricultural Areas: Tigris Basin, Turkey from Radarsat-2 Data by Polarimetric Decomposition Models and a Generalized Regression Neural Network
by Mehmet Siraç Özerdem, Emrullah Acar and Remzi Ekinci
Remote Sens. 2017, 9(4), 395; https://doi.org/10.3390/rs9040395 - 23 Apr 2017
Cited by 27 | Viewed by 7470
Abstract
Soil moisture in agricultural fields is a significant parameter for using irrigation systems efficiently. In contrast to standard soil moisture measurements, good results can be acquired in a shorter time over large areas with remote sensing tools. To estimate the soil moisture over vegetated agricultural areas, a relationship between Radarsat-2 data and measured ground soil moistures was established using polarimetric decomposition models and a generalized regression neural network (GRNN). The experiments were executed over two agricultural sites in the Tigris Basin, Turkey. The study consists of four phases. In the first phase, Radarsat-2 data were acquired on different dates and in situ measurements were carried out simultaneously. In the second phase, the Radarsat-2 data were pre-processed and the GPS coordinates of the soil sample points were imported into these data. In the third phase, standard sigma backscattering coefficients together with the Freeman–Durden and H/A/α polarimetric decomposition models were employed for feature extraction, and a feature vector with four sigma backscattering coefficients (σ_hh, σ_hv, σ_vh, and σ_vv) and six polarimetric decomposition parameters (entropy, anisotropy, alpha angle, volume scattering, odd bounce, and double bounce) was generated for each pattern. In the last phase, the GRNN was used to estimate the regional soil moisture from the feature vectors. The results indicated that radar is a strong remote sensing tool for soil moisture estimation, with mean absolute errors around 2.31 vol %, 2.11 vol %, and 2.10 vol % for Datasets 1–3, respectively; and 2.46 vol %, 2.70 vol %, 7.09 vol %, and 5.70 vol % for combined Datasets 1 & 2, 2 & 3, 1 & 3, and 1 & 2 & 3, respectively.
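A GRNN of the kind used in the last stage is essentially Nadaraya-Watson kernel regression: the estimate for a query feature vector is a Gaussian-weighted average of the training soil moisture values. A minimal sketch, assuming a single spread parameter sigma and synthetic ten-element feature vectors (four backscattering coefficients plus six decomposition parameters); none of the numbers are the paper's.

```python
import numpy as np

def grnn_predict(X_train, y_train, X_query, sigma=0.5):
    """GRNN prediction: Gaussian-kernel-weighted average of training
    targets (pattern layer -> summation layer -> output layer)."""
    X_train = np.asarray(X_train, float)
    y_train = np.asarray(y_train, float)
    preds = []
    for x in np.atleast_2d(X_query):
        d2 = np.sum((X_train - x) ** 2, axis=1)    # squared distances
        w = np.exp(-d2 / (2.0 * sigma ** 2))       # pattern-layer activations
        preds.append(np.dot(w, y_train) / w.sum()) # normalized weighted sum
    return np.array(preds)

# Synthetic feature vectors and soil moisture targets (vol %).
rng = np.random.default_rng(0)
X = rng.normal(size=(20, 10))
y = 20.0 + X[:, 0] * 2.0   # made-up moisture signal in the first feature
est = grnn_predict(X, y, X[:1], sigma=0.3)
print(float(est[0]))
```

The single hyperparameter sigma controls smoothing: a small sigma reproduces the nearest training sample, while a very large sigma collapses toward the mean of the training targets.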
Figure 1. The location of the study area, presented as both (a) a Radarsat-2 image and (b) a Google Earth image. The black rectangular areas indicate the coverage of the two experimental sites.
Figure 2. Three Radarsat-2 images acquired over the Tigris Basin, Diyarbakır and preprocessed: (a) 27 February 2015; (b) 8 April 2015; and (c) 10 June 2015. The dual-pol (hh + vv) RGB image was obtained by combining three different bands (R = hh; G = vh; B = hh/hv) of the Radarsat-2 data.
Figure 3. Three surface scattering mechanisms.
Figure 4. The architecture of the GRNN model.
Figure 5. The resulting Radarsat-2 data from 27 February 2015 after (a) the standard sigma backscattering technique; (b) the Freeman–Durden model; and (c) the H/A/α model.
Figure 6. The relationship between the measured and estimated soil moistures (SM) for Dataset 1.
Figure 7. The relationship between the measured and estimated SM over testing areas 1–4 for Dataset 1 (a–d), respectively.
Figure 8. Radarsat-2 data from 8 April 2015 after (a) the standard sigma backscattering technique; (b) the Freeman–Durden model; and (c) the H/A/α model.
Figure 9. The relationship between measured and estimated SM for Dataset 2.
Figure 10. The relationship between measured and estimated SM over testing areas 1–4 for Dataset 2 (a–d), respectively.
Figure 11. Radarsat-2 data from 10 June 2015 after (a) the standard sigma backscattering technique; (b) the Freeman–Durden model; and (c) the H/A/α model.
Figure 12. The relationship between measured and estimated SM for Dataset 3.
Figure 13. The relationship between measured and estimated SM over testing areas 1–4 for Dataset 3 (a–d), respectively.
Figure 14. The relationship between measured and estimated SM for combined Datasets 1 & 2.
Figure 15. The relationship between measured and estimated SM for combined Datasets 1 & 3.
Figure 16. The relationship between measured and estimated SM for combined Datasets 2 & 3.
Figure 17. The relationship between measured and estimated SM for combined Datasets 1 & 2 & 3.
Article
Simulated Imagery Rendering Workflow for UAS-Based Photogrammetric 3D Reconstruction Accuracy Assessments
by Richard K. Slocum and Christopher E. Parrish
Remote Sens. 2017, 9(4), 396; https://doi.org/10.3390/rs9040396 - 22 Apr 2017
Cited by 29 | Viewed by 7171
Abstract
Structure from motion (SfM) and MultiView Stereo (MVS) algorithms are increasingly being applied to imagery from unmanned aircraft systems (UAS) to generate point cloud data for various surveying and mapping applications. To date, the options for assessing the spatial accuracy of SfM-MVS point clouds have primarily been limited to empirical accuracy assessments, which involve comparisons against reference data sets that are both independent of and of higher accuracy than the data being tested. The acquisition of these reference data sets can be expensive, time consuming, and logistically challenging. Furthermore, such experiments are almost impossible to replicate perfectly and can contain numerous confounding variables, such as sun angle, cloud cover, wind, movement of objects in the scene, and camera thermal noise, to name a few. The combination of these factors makes robust, repeatable experiments cost prohibitive, and the results are frequently site- and condition-specific. Here, we present a workflow to render computer-generated imagery using a virtual environment that can mimic the independent variables that would be experienced in a real-world UAS imagery acquisition scenario. The resulting modular workflow utilizes Blender, an open source computer graphics package, to generate photogrammetrically accurate imagery suitable for SfM processing, with explicit control of camera interior orientation, exterior orientation, texture and placement of objects in the scene, and ground control point (GCP) accuracy. The challenges and steps required to validate the photogrammetric accuracy of computer-generated imagery are discussed, and an example experiment assessing the accuracy of an SfM-derived point cloud from imagery rendered using this computer graphics workflow is presented. The proposed workflow shows promise as a useful tool for sensitivity analysis and SfM-MVS experimentation.
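The photogrammetric property being validated is that the renderer behaves as an ideal pinhole camera, with lens and sensor effects added only in postprocessing. A distortion-free projection can be sketched as follows; the camera pose, focal length, and principal point are made-up values for illustration, not those of the simUAS experiment.

```python
import numpy as np

def project_pinhole(points_world, R, t, f_px, cx, cy):
    """Project 3D world points to pixel coordinates with an ideal
    pinhole model (no lens distortion): rotate/translate into the
    camera frame, perspective-divide, then apply focal length and
    principal point."""
    P = np.asarray(points_world, float)
    cam = (R @ P.T).T + t          # world -> camera frame
    x = cam[:, 0] / cam[:, 2]      # perspective divide
    y = cam[:, 1] / cam[:, 2]
    return np.column_stack((f_px * x + cx, f_px * y + cy))

# Nadir-looking camera 40 m above the origin; identity rotation.
R = np.eye(3)
t = np.array([0.0, 0.0, 40.0])
uv = project_pinhole([[0.0, 0.0, 0.0], [4.0, 0.0, 0.0]],
                     R, t, f_px=2000.0, cx=960.0, cy=540.0)
print(uv)  # [[ 960. 540.] [1160. 540.]]
```

Checking rendered checkerboard corners against such closed-form projections is what establishes that any residual error in an SfM result comes from the reconstruction, not from the synthetic imagery itself.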
Figure 1. A cube with a 10 × 10 checkerboard pattern on each wall is used to validate the photogrammetric accuracy of the Blender Internal Render Engine.
Figure 2. A circular plane was placed so that it was encompassed by the viewing volume of only the central pixel (left) to examine the effect of antialiasing on rendered image quality. A 5 × 5 pixel image was rendered with no antialiasing (middle) and with eight-sample antialiasing (right).
Figure 3. Each black and white square in the checkerboard (left) represents one texel in the texture applied to the image with no interpolation. The same texture is rendered with interpolation (right) to demonstrate the effect. The leftmost rendered image demonstrates that the final rendered texture contains the full resolution of the desired texture, and that the Blender Internal Renderer is not artificially downsampling it.
Figure 4. Pictorial representation of the simUAS (simulated UAS) imagery rendering workflow. Note: the SfM-MVS step is shown as a "black box" to highlight that the procedure can be implemented using any SfM-MVS software, including proprietary commercial software.
Figure 5. The scene was generated in Blender to represent hilly topography (left), with 10 GCPs (center) distributed throughout the scene and a 3 m cube placed in the center (right).
Figure 6. A flight plan and GCP distribution were generated to simulate a common real-world UAS experiment design. The camera trajectory was designed for a GSD of 1.00 cm and a sidelap and overlap of 75% each.
Figure 7. The imagery from Blender, rendered using a pinhole camera model, is postprocessed to introduce lens and camera effects. The magnitudes of the postprocessing effects are set high in this example to clearly demonstrate each effect. The full-size image (left) and a close-up (right) are both shown in order to depict the large- and small-scale effects.
Figure 8. The elevation, error, number of points, and standard deviation of error are gridded to 0.5 m cells using a binning gridding algorithm and visualized.
Figure 9. A 50 cm wide section of the point cloud containing a box (3 m cube) is shown with the dense-reconstruction point clouds overlaid to demonstrate the effect of dense-reconstruction quality on accuracy near sharp edges.
Figure 10. The points along the side of a vertical plane on a box were isolated, and the error perpendicular to the plane of the box was visualized for each dense-reconstruction setting, with white regions indicating no point cloud data. Note that the region with data gaps in the point cloud from the ultra-high setting corresponds to the region of the plane with low image texture, as shown in the lower-right plot.
Figure 11. The signed error probability distribution for each of the calculated dense point clouds clearly indicates the increase in accuracy (decrease in variance) with increasing dense-reconstruction setting.
Article
Extrapolating Forest Canopy Fuel Properties in the California Rim Fire by Combining Airborne LiDAR and Landsat OLI Data
by Mariano García, Sassan Saatchi, Angeles Casas, Alexander Koltunov, Susan L. Ustin, Carlos Ramirez and Heiko Balzter
Remote Sens. 2017, 9(4), 394; https://doi.org/10.3390/rs9040394 - 22 Apr 2017
Cited by 39 | Viewed by 6915
Abstract
Accurate, spatially explicit information about forest canopy fuel properties is essential for ecosystem management strategies aimed at reducing the severity of forest fires. Airborne LiDAR technology has demonstrated its ability to accurately map canopy fuels. However, its geographical and temporal coverage is limited, making it difficult to characterize fuel properties over large regions before catastrophic events occur. This study presents a two-step methodology for integrating post-fire airborne LiDAR and pre-fire Landsat OLI (Operational Land Imager) data to estimate pre-fire canopy fuel properties important for crown fire spread, namely canopy fuel load (CFL), canopy cover (CC), and canopy bulk density (CBD). The study focused on a fire-prone area affected by the large 2013 Rim fire in the Sierra Nevada Mountains, California, USA. First, LiDAR data were used to estimate CFL, CC, and CBD across an unburned 2 km buffer with structural characteristics similar to the burned area. Second, the LiDAR-based canopy fuel properties were extrapolated over the whole area using Landsat OLI data, which yielded an R2 of 0.8, 0.79, and 0.64 and RMSE of 3.76 Mg·ha−1, 0.09, and 0.02 kg·m−3 for CFL, CC, and CBD, respectively. The uncertainty of the estimates was quantified for each pixel using a bootstrapping approach, and the 95% confidence intervals are reported. The proposed methodology provides a detailed spatial estimation of forest canopy fuel properties, along with their uncertainty, that can be readily integrated into fire behavior and fire effects models. The methodology could also be integrated into the LANDFIRE program to improve the information on canopy fuels.
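The per-pixel uncertainty step can be sketched generically: resample the LiDAR-Landsat calibration pairs with replacement, refit the model, predict at the pixel's spectral value, and take percentile bounds of the resulting predictions. The single-predictor linear model and all numbers below are illustrative assumptions; the study's actual regressions use multiple OLI-derived predictors.

```python
import numpy as np

def bootstrap_ci(x_cal, y_cal, x_new, n_boot=2000, alpha=0.05, seed=1):
    """Per-pixel 95% confidence interval for a linear spectral-index ->
    fuel-property regression via bootstrap resampling of the
    calibration pairs (sketch of the general approach only)."""
    rng = np.random.default_rng(seed)
    n = len(x_cal)
    preds = np.empty(n_boot)
    for b in range(n_boot):
        idx = rng.integers(0, n, n)                  # resample with replacement
        slope, intercept = np.polyfit(x_cal[idx], y_cal[idx], 1)
        preds[b] = slope * x_new + intercept         # prediction for this pixel
    lo, hi = np.quantile(preds, [alpha / 2, 1 - alpha / 2])
    return lo, hi

# Synthetic calibration data: a Landsat-derived index vs. CFL (Mg/ha).
x = np.linspace(0.1, 0.9, 40)
y = 50.0 * x + np.random.default_rng(0).normal(0, 2.0, 40)
lo, hi = bootstrap_ci(x, y, x_new=0.5)
print(lo, hi)
```

Mapping `hi - lo` per pixel gives the spatial uncertainty layer that accompanies the extrapolated fuel-property maps.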
Figure 1. Study area comprising the footprint of the Rim fire in the Sierra Nevada Mountains, CA.
Figure 2. Schematic representation of the method used to estimate canopy bulk density (CBD) from the airborne LiDAR data. (A) LiDAR point cloud of a 0.09 ha plot; (B) vertical distribution of canopy elements derived from the LiDAR pseudo-waveform; (C) derivation of the fuel vertical profile (FVP) from LiDAR data and estimation of CBD using the maximum of the smoothed FVP.
Figure 3. Scatter plot of LiDAR-based versus field-based CFL estimates. The solid line represents the 1:1 line.
Figure 4. Scatter plots of LiDAR-based versus Landsat OLI canopy fuel properties. Left, canopy fuel load; center, canopy cover; right, canopy bulk density. The solid line represents the 1:1 line.
Figure 5. Spatial distribution of the Landsat-based extrapolated canopy fuel properties and the associated uncertainties. Left, canopy fuel load; center, canopy cover; right, canopy bulk density.
Article
Hyperspectral and Multispectral Retrieval of Suspended Sediment in Shallow Coastal Waters Using Semi-Analytical and Empirical Methods
by Xiaochi Zhou, Marco Marani, John D. Albertson and Sonia Silvestri
Remote Sens. 2017, 9(4), 393; https://doi.org/10.3390/rs9040393 - 21 Apr 2017
Cited by 15 | Viewed by 5363
Abstract
Natural lagoons and estuaries worldwide are experiencing accelerated ecosystem degradation due to increased anthropogenic pressure. As a key driver of coastal zone dynamics, suspended sediment concentration (SSC) is difficult to monitor with adequate spatial and temporal resolution, both in the field and by remote sensing. In particular, the spatial resolution of currently available remote sensing data from satellite sensors designed for ocean color retrieval, such as MODIS (Moderate Resolution Imaging Spectroradiometer) or SeaWiFS (Sea-Viewing Wide Field-of-View Sensor), is too coarse to capture the dimension and geomorphological heterogeneity of most estuaries and lagoons. In the present study, we explore the use of hyperspectral (Hyperion) and multispectral data, i.e., Landsat TM (Thematic Mapper) and ETM+ (Enhanced Thematic Mapper Plus), ASTER (Advanced Spaceborne Thermal Emission and Reflection Radiometer), and ALOS (Advanced Land Observing Satellite), to estimate SSC through semi-analytical and empirical approaches in the Venice lagoon (Italy). Key parameters of the retrieval models are calibrated and cross-validated by matching the remote sensing estimates of SSC with in situ data from a network of water quality sensors. Our analysis shows that, despite the higher spectral resolution, hyperspectral data provide limited advantages over multispectral data, mainly due to information redundancy and cross-band correlation. Meanwhile, the limited historical archive of hyperspectral data (usually acquired on demand) severely reduces the chance of observing high-turbidity events, which are relatively rare but critical in controlling coastal sediment and geomorphological dynamics. By contrast, retrievals using available multispectral data can encompass a much wider range of SSC values thanks to their frequent acquisitions and longer historical archive. For the retrieval methods considered in this study, we find that the semi-analytical method outperforms the empirical approaches when applied to both the hyperspectral and multispectral datasets. Interestingly, the improved performance emerges more clearly when the data used for testing are kept separate from those used in calibration, suggesting a greater ability of semi-analytical models to "generalize" beyond the specific data set used for model calibration.
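Keeping test data separate from calibration data, as the abstract emphasizes, can be sketched with leave-one-out cross-validation: each station matchup is held out in turn, the retrieval model is calibrated on the rest, and the held-out SSC is predicted. The linear "empirical" model and all values below are illustrative assumptions, not the paper's model forms.

```python
import numpy as np

def loo_rmse(x, y, fit, predict):
    """Leave-one-out cross-validated RMSE for any fit/predict pair:
    calibrate on all points but one, predict the held-out point,
    and accumulate the errors."""
    errs = []
    for i in range(len(x)):
        mask = np.arange(len(x)) != i
        params = fit(x[mask], y[mask])
        errs.append(predict(x[i], params) - y[i])
    return float(np.sqrt(np.mean(np.square(errs))))

# Illustrative empirical model: linear reflectance-to-SSC regression.
fit_lin = lambda x, y: np.polyfit(x, y, 1)
pred_lin = lambda x, p: np.polyval(p, x)

rng = np.random.default_rng(2)
refl = rng.uniform(0.02, 0.12, 30)             # band reflectance
ssc = 400.0 * refl + rng.normal(0, 1.0, 30)    # SSC in mg/L (synthetic)
r = loo_rmse(refl, ssc, fit_lin, pred_lin)
print(r)
```

Because the same harness accepts any fit/predict pair, a semi-analytical inversion and an empirical regression can be compared on identical held-out data, which is the kind of comparison that revealed the semi-analytical model's better generalization.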
Figure 1. Bathymetric map of the Venice lagoon showing the location of the 10 measurement stations (white circles, VE1 to VE10), modified after [24]. The Venice lagoon is located within the black circle on the map of Italy at the upper left.
Figure 2. Comparison of reflectance estimated from the Moderate Resolution Imaging Spectroradiometer (MODIS) against the Hyperion Synthetic MODIS (HSM), with data collected on 10 February 2005 (a–c), 18 June 2005 (d–f), 4 July 2005 (g–i), 20 July 2005 (j–l), and 7 January 2006 (m–o). Colors denote the density (in percent) of the points, estimated as a two-dimensional histogram using 20 equally spaced bins in both the x and y directions; yellow (black) indicates a high (low) density of points. The Spearman's rank correlation coefficient (COR) and root-mean-square error (RMSE) are indicated in each subplot. The p-values for testing the null hypothesis of no correlation between the reflectance estimated from MODIS and HSM are all <0.001, indicating a significant correlation.
Figure 3. Sensitivity of (a) the root-mean-square error (RMSE) and of the suspended-sediment-specific (b) absorption (γ) and (c) backscattering (η) coefficients at around 400 nm to the input bottom reflectance (ρ_b) values, using all Hyperion bands.
Figure 4. (a) Root-mean-square error (RMSE) of the SSC calibration as a function of the suspended-sediment-specific absorption (γ) and backscattering (η) coefficients at around 400 nm using all Hyperion bands (RMSE values are color-coded based on the color bar legend), and (b) estimated SSC_E versus measured SSC using the optimal γ and η from (a).
Figure 5. Root-mean-square error (RMSE) of the SSC calibration as a function of the suspended-sediment-specific absorption (γ) and backscattering (η) coefficients at around 400 nm, using Hyperion bands centered at (a) 660 nm (seven spectral bands from 630 nm to 690 nm) and (b) 560 nm (nine spectral bands from 520 nm to 600 nm). RMSE values are color-coded based on the color bar legend.
Figure 6. Root-mean-square error (RMSE) obtained from (a) calibration and (b) validation of the inversion of the radiative transfer model (Equation (8)) using combinations of different numbers of Hyperion bands. In the box-whisker plots, the red central line is the median, the edges of the box are the 25th and 75th percentiles, the whiskers extend to the extreme data points within approximately ±2.7 standard deviations from the mean (covering 99.3% of the data), and outliers are plotted as red crosses. Out-of-range data (indicating complete model failure) are represented with an arrow, with the corresponding RMSE values reported next to it.
Figure 7. Root-mean-square error (RMSE) obtained from calibration (a,c) and validation (b,d) of the inversion of the radiative transfer model (Equation (8)) using different numbers of Hyperion band combinations: mean and one standard deviation, denoted by error bars (a,b), and minimum (c,d), computed at each spectral band using all possible pairs of that band with the others. Out-of-range data (indicating complete model failure) are represented with an arrow, with the corresponding RMSE values reported next to it.
Figure 8. Root-mean-square error (RMSE) obtained from calibration (a,c,e) and validation (b,d,f) of the inversion of the radiative transfer model (Equation (8)) using a single Hyperion band (a,b) and two Hyperion bands, with mean and one standard deviation denoted by error bars (c,d) and minimum (e,f), computed at each spectral band using all possible pairs of that band with the others. Out-of-range data (indicating complete model failure) are represented with an arrow, with the corresponding RMSE values reported next to it; spectral bands for which the empirical models fail are marked with a black cross.
Figure 9. Validation of the RTM-inversion method (Equation (8)) using different subset sizes of the multispectral data (52 in total). In the box-whisker plots, the red central line is the median, the edges of the box are the 25th and 75th percentiles, the whiskers extend to the extreme data points within approximately ±2.7 standard deviations from the mean (covering 99.3% of the data), and outliers are plotted as red crosses. For comparison, the median, maximum, and minimum RMSE estimated using the Hyperion data (20 in total) with two spectral bands are also plotted (black dot and arrows).
Figure 10. Hyperion Synthetic ETM+ (HSE) reflectance plotted as a function of SSC, along with the multispectral data.
Article
Modeling Biomass Production in Seasonal Wetlands Using MODIS NDVI Land Surface Phenology
by Maria Lumbierres, Pablo F. Méndez, Javier Bustamante, Ramón Soriguer and Luis Santamaría
Remote Sens. 2017, 9(4), 392; https://doi.org/10.3390/rs9040392 - 21 Apr 2017
Cited by 84 | Viewed by 11798
Abstract
Plant primary production is a key driver of several ecosystem functions in seasonal marshes, such as water purification and secondary production by wildlife and domestic animals. Knowledge of the spatio-temporal dynamics of biomass production is therefore essential for the management of resources, particularly in seasonal wetlands with variable flooding regimes. We propose a method to estimate standing aboveground plant biomass using NDVI Land Surface Phenology (LSP) derived from MODIS, which we calibrate and validate in the Doñana National Park's marsh vegetation. Of the different estimators tested, the Land Surface Phenology maximum NDVI (LSP-Maximum-NDVI) correlated best with ground-truth data of biomass production at five locations from 2001–2015 used to calibrate the models (R2 = 0.65). Estimators based on a single MODIS NDVI image performed worse (R2 ≤ 0.41). The LSP-Maximum-NDVI estimator was robust to environmental variation in precipitation and hydroperiod, and to spatial variation in the productivity and composition of the plant community. The determination of plant biomass using remote-sensing techniques, adequately supported by ground-truth data, may represent a key tool for the long-term monitoring and management of seasonal marsh ecosystems.
(This article belongs to the Special Issue Remote Sensing of Above Ground Biomass)
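The calibration described in the abstract is a log-linear regression of ground-truth biomass on the seasonal maximum NDVI. A minimal numpy sketch of such a fit, using made-up calibration pairs rather than the paper's Doñana data:

```python
import numpy as np

# Hypothetical calibration pairs (NOT the paper's data): LSP-Maximum-NDVI
# values and the corresponding ground-truth biomass (kg dw/ha).
ndvi_max = np.array([0.35, 0.45, 0.55, 0.62, 0.70, 0.78])
biomass = np.array([800., 1500., 2900., 4200., 7600., 13000.])

# The paper regresses the logarithm of biomass on the NDVI estimator,
# so fit log10(biomass) = a + b * NDVI by ordinary least squares.
b, a = np.polyfit(ndvi_max, np.log10(biomass), 1)

# Coefficient of determination (R^2) of the fit.
pred = a + b * ndvi_max
ss_res = np.sum((np.log10(biomass) - pred) ** 2)
ss_tot = np.sum((np.log10(biomass) - np.log10(biomass).mean()) ** 2)
r2 = 1 - ss_res / ss_tot

# Predict biomass for a new pixel from its seasonal maximum NDVI.
def predict_biomass(ndvi):
    return 10 ** (a + b * ndvi)

print(round(r2, 3), round(predict_biomass(0.6), 1))
```

The log transform keeps the fit from being dominated by the most productive plots; predictions are back-transformed with `10 ** (...)`.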
Graphical abstract
Figure 1
<p>(<b>A</b>) Location of the Doñana National Park in the southwest of Spain. (<b>B</b>) Location of the study area inside the Doñana National Park marsh. (<b>C</b>) Ground-truth biomass plots inside the study area. (<b>D</b>) Zoom to a MODIS validation pixel that exemplifies the sample design stratification. (<b>E</b>) Picture of the helophyte community (May 2016). (<b>F</b>) Picture of an area heavily grazed by cattle (June 2016).</p>
Figure 2
<p>Model calibration. Relationship between the best NDVI estimator tested (LSP-Maximum-NDVI) and the logarithm of biomass production (kg dw/ha). Continuous line: regression line. Dotted lines: 95% confidence intervals. The dot colors represent the five different locations of the calibration biomass plots.</p>
Figure 3
<p>Results of the validation survey. Aboveground biomass production per species at each of the nine MODIS pixels sampled. N = 12 sample plots per pixel.</p>
Figure 4
<p>Model validation. Major axis regression between measured and predicted biomass. Continuous line: regression line. Dotted lines: 95% confidence intervals.</p>
Figure 5
<p>Model predictions. Estimated biomass production (in kg dw/ha) per pixel across the study area.</p>
Figure 6
<p>Trend analysis. (<b>A</b>) Changes in biomass production from 2001 to 2016, based on the Theil-Sen slope estimator. Positive values (blue colors): increase. Negative values (red colors): decrease. (<b>B</b>) Average biomass production (kg dw/ha) from 2001 to 2016. All categories except the one for “non-significant results” indicate Theil-Sen slope estimator values significantly different from zero.</p>
Figure 7
<p>Effect of cumulative precipitation (from September to April) on biomass production (average across the study area). Continuous line: regression fit. Dotted line: 95% confidence intervals. Note the log-transformation in both axes.</p>
14257 KiB  
Article
A Novel Pan-Sharpening Framework Based on Matting Model and Multiscale Transform
by Yong Yang, Weiguo Wan, Shuying Huang, Pan Lin and Yue Que
Remote Sens. 2017, 9(4), 391; https://doi.org/10.3390/rs9040391 - 21 Apr 2017
Cited by 55 | Viewed by 8033
Abstract
Pan-sharpening aims to sharpen a low spatial resolution multispectral (MS) image by combining it with the spatial detail information extracted from a panchromatic (PAN) image. An effective pan-sharpening method should produce a high spatial resolution MS image while preserving as much spectral information as possible. Unlike traditional intensity-hue-saturation (IHS)- and principal component analysis (PCA)-based multiscale transform methods, a novel pan-sharpening framework based on the matting model (MM) and multiscale transform is presented in this paper. First, we use the intensity component (I) of the MS image as the alpha channel to generate the spectral foreground and background. Then, an appropriate multiscale transform is utilized to fuse the PAN image and the upsampled I component to obtain the fused high-resolution gray image. In the fusion, two preeminent fusion rules are proposed to fuse the low- and high-frequency coefficients in the transform domain. Finally, the high-resolution sharpened MS image is obtained by linearly compositing the fused gray image with the upsampled foreground and background images. To the best of our knowledge, the proposed framework is the first to introduce the matting model into the pan-sharpening field. A large number of experiments were conducted on various satellite datasets; the subjective visual and objective evaluation results indicate that the proposed method performs better than the IHS- and PCA-based frameworks, as well as other state-of-the-art pan-sharpening methods, both in terms of spatial quality and spectral maintenance. Full article
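The final compositing step follows the matting model, in which an image is a per-pixel linear blend of a foreground and a background weighted by an alpha channel. A schematic numpy sketch of just that step, with random stand-in arrays in place of the real foreground, background, and fused gray images:

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in data (random, for illustration only): per-band spectral
# foreground F and background B, already upsampled to PAN resolution,
# and the fused high-resolution gray image that replaces the original
# intensity (alpha) channel after the multiscale-transform fusion.
h, w, bands = 64, 64, 4
F = rng.random((h, w, bands))
B = rng.random((h, w, bands))
alpha_fused = rng.random((h, w))          # fused gray image in [0, 1]

# Matting model: each band of the sharpened MS image is a linear
# composite of foreground and background weighted by the alpha channel.
ms_sharp = alpha_fused[..., None] * F + (1.0 - alpha_fused[..., None]) * B

print(ms_sharp.shape)
```

Because the composite is a convex combination, the sharpened bands stay within the value range of the foreground and background images.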
Graphical abstract
Figure 1
<p>An example of image matting: (<b>a</b>) QuickBird multispectral (MS) image; (<b>b</b>) alpha channel; (<b>c</b>) foreground image; (<b>d</b>) background image; (<b>e</b>) <span class="html-italic">I</span> component of the MS image; (<b>f</b>) foreground color; (<b>g</b>) background color.</p>
Figure 2
<p>Three-level multiscale and multidirectional decomposition of the nonsubsampled Shearlet transform (NSST).</p>
Figure 3
<p>Decomposition diagram of NSST: (<b>a</b>) WorldView-2 satellite image; (<b>b</b>) low-frequency image; (<b>c</b>) high-frequency images.</p>
Figure 4
<p>The schematic diagram of the proposed pan-sharpening framework.</p>
Figure 5
<p>The flow chart of upsampled <span class="html-italic">I</span> and matched panchromatic (PAN) image fusion using NSST.</p>
Figure 6
<p>The fused results of the intensity-hue-saturation (IHS), PCA- and matting model-based methods on WorldView-2 dataset: (<b>a</b>) MS image; (<b>b</b>) PAN image; (<b>c</b>) reference image; (<b>d</b>) IHS–wavelet transform (WT) result; (<b>e</b>) PCA–WT result; (<b>f</b>) proposed matting model–WT result.</p>
Figure 7
<p>The fusion results of WorldView-2 datasets on coast area: (<b>a</b>) MS image; (<b>b</b>) PAN image; (<b>c</b>) reference image; (<b>d</b>) DWT–Sparse Representation (SR); (<b>e</b>) Curvelet; (<b>f</b>) NSST–SR; (<b>g</b>) Guided Filter (GF); (<b>h</b>) Additive Wavelet Luminance Proportional (AWLP); (<b>i</b>) Bilateral Filter Luminance Proportional (BFLP); (<b>j</b>) Matting Model (MM); (<b>k</b>) MM–DWT; (<b>l</b>) MM–NSST.</p>
Figure 8
<p>The fusion results of WorldView-2 datasets on urban area: (<b>a</b>) MS image; (<b>b</b>) PAN image; (<b>c</b>) reference image; (<b>d</b>) DWT–Sparse Representation (SR); (<b>e</b>) Curvelet; (<b>f</b>) NSST–SR; (<b>g</b>) Guided Filter (GF); (<b>h</b>) Additive Wavelet Luminance Proportional (AWLP); (<b>i</b>) Bilateral Filter Luminance Proportional (BFLP); (<b>j</b>) Matting Model (MM); (<b>k</b>) MM–DWT; (<b>l</b>) MM–NSST.</p>
Figure 9
<p>The fusion results of WorldView-2 datasets on an uptown area: (<b>a</b>) MS image; (<b>b</b>) PAN image; (<b>c</b>) reference image; (<b>d</b>) DWT–Sparse Representation (SR); (<b>e</b>) Curvelet; (<b>f</b>) NSST–SR; (<b>g</b>) Guided Filter (GF); (<b>h</b>) Additive Wavelet Luminance Proportional (AWLP); (<b>i</b>) Bilateral Filter Luminance Proportional (BFLP); (<b>j</b>) Matting Model (MM); (<b>k</b>) MM–DWT; (<b>l</b>) MM–NSST.</p>
Figure 10
<p>The fusion results on the IKONOS dataset: (<b>a</b>) MS image; (<b>b</b>) PAN image; (<b>c</b>) reference image; (<b>d</b>) DWT–Sparse Representation (SR); (<b>e</b>) Curvelet; (<b>f</b>) NSST–SR; (<b>g</b>) Guided Filter (GF); (<b>h</b>) Additive Wavelet Luminance Proportional (AWLP); (<b>i</b>) Bilateral Filter Luminance Proportional (BFLP); (<b>j</b>) Matting Model (MM); (<b>k</b>) MM–DWT; (<b>l</b>) MM–NSST.</p>
Figure 11
<p>The fusion results on the QuickBird dataset: (<b>a</b>) MS image; (<b>b</b>) PAN image; (<b>c</b>) reference image; (<b>d</b>) DWT–Sparse Representation (SR); (<b>e</b>) Curvelet; (<b>f</b>) NSST–SR; (<b>g</b>) Guided Filter (GF); (<b>h</b>) Additive Wavelet Luminance Proportional (AWLP); (<b>i</b>) Bilateral Filter Luminance Proportional (BFLP); (<b>j</b>) Matting Model (MM); (<b>k</b>) MM–DWT; (<b>l</b>) MM–NSST.</p>
4319 KiB  
Article
Evapotranspiration Mapping in a Heterogeneous Landscape Using Remote Sensing and Global Weather Datasets: Application to the Mara Basin, East Africa
by Tadesse Alemayehu, Ann van Griensven, Gabriel B. Senay and Willy Bauwens
Remote Sens. 2017, 9(4), 390; https://doi.org/10.3390/rs9040390 - 20 Apr 2017
Cited by 44 | Viewed by 7233
Abstract
Actual evapotranspiration (ET) is a major water use flux in a basin water balance with crucial significance for water resources management and planning. Mapping ET with good accuracy has been the subject of ongoing research. Such mapping is even more challenging in heterogeneous and data-scarce regions. The main objective of our research is to estimate ET using daily Moderate Resolution Imaging Spectroradiometer (MODIS) land surface temperature and Global Land Data Assimilation System (GLDAS) weather datasets based on the operational simplified surface energy balance (SSEBop) algorithm at a 1-km spatial scale and 8-day temporal resolution for the Mara Basin (Kenya/Tanzania). Unlike previous studies where the SSEBop algorithm was used, we use a seasonally-varying calibration coefficient for determining the “cold” reference temperature. Our results show that ET is highly variable, with a high inter-quartile range for wetlands and evergreen forest (24% to 29% of the median) and even up to 52% of the median for herbaceous land cover and rainfed agriculture. The basin average ET accounts for about 66% of the rainfall with minimal inter-annual variability. The basin-scale validation using nine years of monthly, gridded global flux tower-based ET (GFET) data reveals that our ET is able to explain 64% of the variance in GFET while the MOD16-NB (Nile Basin) explains 72%. We also observe a percent of bias (PBIAS) of 1.1% and 2.8% for SSEBop ET and MOD16-NB, respectively, indicating a good reliability in the ET estimates. Additionally, the SSEBop ET explains about 52% of the observed variability in the Normalized Difference Vegetation Index (NDVI) for a 16-day temporal resolution and 81% for the annual resolution, pointing to an increased reliability for longer aggregation periods. The annual SSEBop ET estimates are also consistent with the underlying primary (i.e., water and energy) and secondary (i.e., soil, topography, geology, land cover, etc.)
controlling factors across the basin. This paper demonstrates how to effectively estimate and evaluate spatially-distributed and temporally-varying ET in data-scarce regions, an approach that can be applied elsewhere in the world where observed hydro-meteorological variables are limited. Full article
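SSEBop-style algorithms scale a grass reference ET by an ET fraction that positions the observed land surface temperature between "hot" (zero ET) and "cold" (maximum ET) reference temperatures. A simplified sketch of that core relation (the operational algorithm includes further terms, and the paper's seasonally-varying calibration coefficient for the cold reference is not modelled here):

```python
import numpy as np

def ssebop_et(ts, t_cold, t_hot, et0, k=1.0):
    """Simplified SSEBop-style ET estimate (a sketch, not the
    operational code): the ET fraction is the position of the land
    surface temperature ts between the hot (zero ET) and cold
    (maximum ET) reference temperatures, clipped to [0, 1], and it
    scales the grass reference ET et0 (k is a scaling coefficient)."""
    etf = (t_hot - ts) / (t_hot - t_cold)
    etf = np.clip(etf, 0.0, 1.0)
    return etf * k * et0

# Toy example: three pixels at the cold reference, midway, and at the
# hot reference, with a reference ET of 5 mm/day.
ts = np.array([300.0, 305.0, 310.0])       # land surface temperature, K
et = ssebop_et(ts, t_cold=300.0, t_hot=310.0, et0=5.0)
print(et)   # cold pixel evaporates at et0, hot pixel at zero
```

The clipping keeps physically implausible fractions (pixels hotter than the hot reference or colder than the cold one) within [0, 1].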
Graphical abstract
Figure 1
<p>(<b>a</b>) Location of the Mara Basin. Elevation zones are based on the 30-m Shuttle Radar Topographic Mission (SRTM) Digital Elevation Model (DEM) along with the location of watersheds (W1–W5) with different landscape and climatic characteristics; (<b>b</b>) land-cover classes of the Mara Basin based on the Africover map [<a href="#B17-remotesensing-09-00390" class="html-bibr">17</a>]. The stripped polygons depict locations where the Normalized Vegetation Index (NDVI) is extracted.</p>
Figure 2
<p>Downscaled daily average seasonal grass reference evapotranspiration based on the Food and Agriculture Organization (FAO) Penman–Monteith equation [<a href="#B42-remotesensing-09-00390" class="html-bibr">42</a>] using a GLDAS dataset (2002–2010) at a 1-km spatial scale. The downscaling was made using the Trabucco and Zomer [<a href="#B43-remotesensing-09-00390" class="html-bibr">43</a>] global observation-based climatological <span class="html-italic">ET</span><sub>0</sub>.</p>
Figure 3
<p>Temporal dynamics of the cloud-free land surface temperature (<span class="html-italic">T<sub>s</sub></span>) for different land-cover classes. The predefined hot (<span class="html-italic">T<sub>h</sub></span>) and cold (<span class="html-italic">T<sub>c</sub></span>) reference temperatures envelope <span class="html-italic">T<sub>s</sub></span> on most of the cloud-free days. The time of overpass of the Terra satellite is 10:30 a.m.</p>
Figure 4
<p>Boxplot of the eight-day aggregated <span class="html-italic">ET</span> (2002–2010) across different land cover classes over the Mara Basin. The vertical boxes represent the interquartile range, while the horizontal line shows the median <span class="html-italic">ET</span>.</p>
Figure 5
<p>The spatial distribution of SSEBop <span class="html-italic">ET</span> at 1 km in the Mara Basin from 2002–2010. The standard deviation (STD) shows <span class="html-italic">ET</span> variability across the basin.</p>
Figure 6
<p>Illustration of monthly basin average dynamics of evapotranspiration (<span class="html-italic">ET</span>) over the Mara Basin (2002–2010) using gridded flux tower network (FLUXNET) <span class="html-italic">ET</span> (GFET), regional MODIS <span class="html-italic">ET</span> product (MOD16-NB) and SSEBop <span class="html-italic">ET</span>. The scatter plot of MOD16-NB and SSEBop <span class="html-italic">ET</span> against GFET at the monthly (<b>a</b>) and annual (<b>b</b>) temporal scale.</p>
Figure 7
<p>The spatial distribution of <span class="html-italic">ET</span> at 1 km in the Mara Basin using MODIS <span class="html-italic">ET</span> for the Nile Basin countries (MOD16-NB) from 2002–2010. The standard deviation (STD) shows <span class="html-italic">ET</span> variability across the basin.</p>
Figure 8
<p>Pixel level correlation between monthly SSEBop <span class="html-italic">ET</span> and the MODIS <span class="html-italic">ET</span> estimates for the Nile Basin countries (MOD16-NB) (2002–2010).</p>
Figure 9
<p>Density plots showing the distribution of the monthly <span class="html-italic">ET</span> from the MOD16 for the Nile Basin countries (MOD16-NB) and SSEBop (2002–2010).</p>
Figure 10
<p>Mean NDVI versus cumulative SSEBop <span class="html-italic">ET</span> at 16-day (<b>a</b>), monthly (<b>b</b>) and annual (<b>c</b>) temporal scale for selected land-cover classes. (<b>d</b>) The relationship of mean annual NDVI with the MOD16 for the Nile Basin countries (MOD16-NB). Note that both the monthly and annual relationships are statistically significant at the 95% confidence level.</p>
Figure 11
<p>Scatterplot of the evaporative index (<span class="html-italic">ET</span>/<span class="html-italic">P</span>) against the aridity index (<span class="html-italic">ET</span><sub>0</sub>/<span class="html-italic">P</span>). The markers represent different watersheds (W1–W5) while the solid lines (thin) show the relationships represented by Fu [<a href="#B57-remotesensing-09-00390" class="html-bibr">57</a>] for <span class="html-italic">w</span> values of 2.7 and 1.9.</p>
Figure 12
<p>The seasonal variability of the monthly <span class="html-italic">ET</span> for 2002–2010 in the Mara Basin. The months are arranged according to the water year (October–September).</p>
Figure 13
<p>Seasonally-aggregated SSEBop <span class="html-italic">ET</span> (upper row) and bias-corrected satellite rainfall (lower row) in the Mara Basin (2002–2010). Note that the spatial resolution is 1 km for <span class="html-italic">ET</span> and 4 km for the rainfall.</p>
10570 KiB  
Article
Unassisted Quantitative Evaluation of Despeckling Filters
by Luis Gomez, Raydonal Ospina and Alejandro C. Frery
Remote Sens. 2017, 9(4), 389; https://doi.org/10.3390/rs9040389 - 20 Apr 2017
Cited by 73 | Viewed by 7365
Abstract
SAR (Synthetic Aperture Radar) imaging plays a central role in Remote Sensing due to, among other important features, its ability to provide high-resolution, day-and-night and almost weather-independent images. SAR images are affected by a granular contamination, speckle, that can be described by a multiplicative model. Many despeckling techniques have been proposed in the literature, as well as measures of the quality of the results they provide. Assuming the multiplicative model, the observed image Z is the product of two independent fields: the backscatter X and the speckle Y. The result of any speckle filter is X̂, an estimator of the backscatter X, based solely on the observed data Z. An ideal estimator would be the one for which the ratio of the observed image to the filtered one, I = Z/X̂, is only speckle: a collection of independent identically distributed samples from Gamma variates. We then assess the quality of a filter by the closeness of I to the hypothesis that it adheres to the statistical properties of pure speckle. We analyze filters through the ratio image they produce with regard to first- and second-order statistics: the former check marginal properties, while the latter verify lack of structure. A new quantitative image-quality index is then defined and applied to state-of-the-art despeckling filters. This new measure provides results consistent with commonly used quality measures (equivalent number of looks, PSNR, MSSIM, β edge correlation, and preservation of the mean), and ranks the filters' results in agreement with their visual analysis. We conclude our study by showing that the proposed measure can be successfully used to optimize the (often many) parameters that define a speckle filter. Full article
(This article belongs to the Special Issue Learning to Understand Remote Sensing Images)
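The first-order part of this check is easy to illustrate: for an ideal filter applied to simulated single-look intensity data, the ratio image is pure Gamma speckle with unit mean and an equivalent number of looks (ENL = mean²/variance) close to the nominal number of looks. A minimal sketch under those assumptions:

```python
import numpy as np

rng = np.random.default_rng(42)

# Simulate single-look intensity speckle over a constant backscatter:
# Z = X * Y with Y ~ Gamma(shape=L, scale=1/L), L = 1 look.
L = 1
x_true = 10.0                             # constant backscatter
speckle = rng.gamma(shape=L, scale=1.0 / L, size=(512, 512))
z = x_true * speckle

# An "ideal" filter recovers the true backscatter exactly, so the
# ratio image is pure speckle.
ratio = z / x_true

# First-order checks: unit mean and ENL = mean^2 / variance close to L.
mean = ratio.mean()
enl = mean ** 2 / ratio.var()
print(round(mean, 2), round(enl, 2))
```

An oversmoothing filter would leak structure into the ratio image, pulling the estimated ENL away from L; the paper's index also adds second-order (spatial-correlation) checks that this sketch omits.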
Graphical abstract
Figure 1
<p>(<b>Top</b>): original SAR image; (<b>Middle</b>): SRAD (<math display="inline"> <semantics> <mrow> <mi>T</mi> <mo>=</mo> <mn>50</mn> </mrow> </semantics> </math>) filtered image and ratio image; (<b>Bottom</b>): zoom of a selected area within the ratio image and extracted edges by Canny’s edge detector.</p>
Figure 2
<p>A step: constant and textured versions, and their return. (<b>a</b>) Constant step and speckled return; (<b>b</b>) Textured step and speckled return.</p>
Figure 3
<p>Estimated speckle by the ideal filter and by oversmoothing.</p>
Figure 4
<p>Slowly varying backscatter, fully developed speckle, and estimated speckle. (<b>a</b>) Slowly-varying mean value and its return; (<b>b</b>) Estimated speckle.</p>
Figure 5
<p>Ratio image resulting from neglecting a slowly varying structure under fully developed speckle.</p>
Figure 6
<p>The effect of oversmoothing on an image of strips of varying width. (<b>a</b>) Strips and speckle; (<b>b</b>) Filtered strips with oversmoothing.</p>
Figure 7
<p>Estimated speckle: ideal and oversmoothing filters.</p>
Figure 8
<p>Speckled strips, result of applying a <math display="inline"> <semantics> <mrow> <mn>5</mn> <mo>×</mo> <mn>5</mn> </mrow> </semantics> </math> BoxCar filter, ratio image. (<b>a</b>) Speckled strips; (<b>b</b>) Filtered strips; (<b>c</b>) Ratio image.</p>
Figure 9
<p>Selection of mean and ENL values for the first-order measure.</p>
Figure 10
<p>Blocks and points phantom, and <math display="inline"> <semantics> <mrow> <mn>500</mn> <mo>×</mo> <mn>500</mn> </mrow> </semantics> </math> pixels simulated single-look intensity image. (<b>a</b>) Blocks and points phantom; (<b>b</b>) Speckled version, single look.</p>
Figure 11
<p>Results for the simulated single-look intensity data. Top to bottom, (<b>left</b>) results of applying the SRAD, the E-Lee, the PPB and the FANS filters. Top to bottom (<b>right</b>), their ratio images.</p>
Figure 12
<p>Zoom of the results for synthetic data: (<b>top</b>) Noisy image, (<b>first row</b>, <b>left</b>) SRAD filter, (<b>first row</b>, <b>right</b>) E-Lee filter, (<b>second row</b>, <b>left</b>) PPB filter and, (<b>second row</b>, <b>right</b>) FANS filter.</p>
Figure 13
<p>Intensity AIRSAR images, HH polarization, three looks. (<b>a</b>) Flevoland; (<b>b</b>) San Francisco bay.</p>
Figure 14
<p>Results for the Flevoland image. Top to bottom, (<b>left</b>) results of applying SRAD, E-Lee, PPB and FANS filters. Top to bottom (<b>right</b>), their ratio images.</p>
Figure 15
<p>Zoom of the results for Flevoland image: (<b>top</b>) Noisy image, (<b>first row</b>, <b>left</b>) SRAD filter, (<b>first row</b>, <b>right</b>) E-Lee filter, (<b>second row</b>, <b>left</b>) PPB filter and, (<b>second row</b>, <b>right</b>) FANS filter.</p>
Figure 16
<p>Result for the San Francisco bay image. Top to bottom, (<b>left</b>) results of applying SRAD, E-Lee, PPB and FANS. Top to bottom (<b>right</b>), their ratio images.</p>
Figure 17
<p>Zoom of the results for San Francisco image: (<b>top</b>) Noisy image, (<b>first row</b>, <b>left</b>) SRAD filter, (<b>first row</b>, <b>right</b>) E-Lee filter, (<b>second row</b>, <b>left</b>) PPB filter and, (<b>second row</b>, <b>right</b>) FANS filter.</p>
Figure 18
<p>Intensity Pi-SAR, HH one look Niigata image (<b>left</b>); Results of applying FANS filters with default parameters (<b>middle</b>) and with optimized parameters (<b>right</b>).</p>
Figure 19
<p>Ratio images for Niigata data; FANS with default parameters (<b>left</b>) and with optimized parameters (<b>right</b>).</p>
14675 KiB  
Article
A Glacier Surge of Bivachny Glacier, Pamir Mountains, Observed by a Time Series of High-Resolution Digital Elevation Models and Glacier Velocities
by Anja Wendt, Christoph Mayer, Astrid Lambrecht and Dana Floricioiu
Remote Sens. 2017, 9(4), 388; https://doi.org/10.3390/rs9040388 - 20 Apr 2017
Cited by 39 | Viewed by 8262
Abstract
Surge-type glaciers are characterised by relatively short phases of enhanced ice transport and mass redistribution after a comparatively long quiescent phase when the glacier is virtually inactive. This unstable behaviour makes it difficult to assess the influence of climate change on those glaciers. We describe the evolution of the most recent surge of Bivachny Glacier in the Pamir Mountains, Tajikistan between 2011 and 2015 with respect to changes in its topography and dynamics. For the relevant time span, nine digital elevation models were derived from TanDEM-X data; optical satellite data (Landsat 5, 7 and 8, EO-1) as well as synthetic aperture radar data (TerraSAR-X and TanDEM-X) were used to analyse ice flow velocities. The comparison of the topography at the beginning of the surge with the one observed by the Shuttle Radar Topography Mission in 2000 revealed a thickening in the upper part of the ablation area of the glacier and a thinning further down the glacier, as is typically observed during the quiescent phase. During the active phase, a surge bulge measuring up to around 80 m developed and travelled downstream for a distance of 13 km with a mean velocity of 4400 m year−1. Ice flow velocities increased from below 90 m year−1 during the quiescent phase in 2000 to up to 3400 m year−1 in spring 2014. After reaching the confluence with Fedchenko Glacier, the surge slowed down until it completely terminated in 2015. The observed seasonality of the glacier velocities with a regular speed-up during the onset of the melt period suggests a hydrological control of the surge related to the effectiveness of the subglacial drainage system. Full article
(This article belongs to the Special Issue Remote Sensing of Glaciers)
Graphical abstract
Figure 1
<p>Bivachny Glacier with its tributaries, Pamir Mountains. Background image: Landsat 7 (24 August 2000), glacier outline in black, longitudinal profile in red, contour lines from the shuttle radar topography mission (SRTM) 2000 in grey. Inset shows location map.</p>
Figure 2
<p>The displacement of surface features during the recent surge of Bivachny Glacier as seen in optical satellite images: (<b>a</b>) EO-1 image of the year 2011; (<b>b</b>) Landsat 8 image of the year 2015. The looped moraines downstream of MGU Glacier are marked in white and black, respectively.</p>
Figure 3
<p>Longitudinal elevation profile of the Bivachny Glacier central flowline in 2000 (SRTM in black), 2011 (TanDEM-X in blue) and 2015 (TanDEM-X in red) and the reconstructed bedrock topography based on equation 1 in brown. The brown arrow marks the bedrock bump mentioned in the text, black arrows mark the position of the MGU Glacier and Oshanin Glacier confluences, respectively.</p>
Figure 4
<p>(<b>a</b>) Surface velocity and (<b>b</b>) elevation change along the central flowline (see <a href="#remotesensing-09-00388-f001" class="html-fig">Figure 1</a>) for selected periods.</p>
Figure 5
<p>Total elevation difference from August 2011 to October 2015. Colour scale is from blue for elevation loss to red for elevation gain. Background image: Landsat 8.</p>
Figure 6
<p>The confluence of Bivachny Glacier flowing from upper right into Fedchenko Glacier flowing from the left towards the observer in August 2015 (Photo: A. Lambrecht).</p>
Figure 7
<p>Estimated ice volume flux along the central flowline shown in <a href="#remotesensing-09-00388-f001" class="html-fig">Figure 1</a>.</p>
8617 KiB  
Article
A Fuzzy-GA Based Decision Making System for Detecting Damaged Buildings from High-Spatial Resolution Optical Images
by Milad Janalipour and Ali Mohammadzadeh
Remote Sens. 2017, 9(4), 349; https://doi.org/10.3390/rs9040349 - 20 Apr 2017
Cited by 28 | Viewed by 6175
Abstract
In this research, a semi-automated building damage detection system based on high-spatial-resolution remotely sensed images is presented. The aim of this study was to develop a semi-automated fuzzy decision making system using a Genetic Algorithm (GA). Our proposed system contains four main stages. In the first stage, post-event optical images were pre-processed. In the second stage, textural features were extracted from the pre-processed post-event optical images using the Haralick texture extraction method. Afterwards, in the third stage, a semi-automated Fuzzy-GA (Fuzzy Genetic Algorithm) decision making system was used to identify damaged buildings from the extracted texture features. In the fourth stage, a comprehensive sensitivity analysis was performed to identify the GA parameters leading to more accurate results. Finally, the accuracy of the results was assessed using check and test samples. The proposed system was tested over the 2010 Haiti earthquake (Area 1 and Area 2) and the 2003 Bam earthquake (Area 3). The proposed system resulted in overall accuracies of 76.88 ± 1.22%, 65.43 ± 0.29%, and 90.96 ± 0.15% over Area 1, Area 2, and Area 3, respectively. On the one hand, based on the concept of the proposed Fuzzy-GA decision making system, its automation level is higher than that of other existing systems. On the other hand, based on the accuracy of our proposed system compared with four advanced machine learning techniques, i.e., bagging, boosting, random forests, and support vector machine, in the detection of damaged buildings, our proposed system appears to be robust and efficient. Full article
(This article belongs to the Special Issue Learning to Understand Remote Sensing Images)
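Haralick features are statistics of the gray-level co-occurrence matrix (GLCM). As a toy illustration of one such feature (the paper uses a fuller set of features and offsets; the function below is a hypothetical helper, not the authors' code), contrast for a horizontal offset can be computed as:

```python
import numpy as np

def glcm_contrast(img, levels=8):
    """Haralick contrast for a horizontal (dx=1) offset: build the
    gray-level co-occurrence matrix, normalize it to probabilities,
    and sum P(i, j) * (i - j)^2. A toy sketch of one textural feature;
    the paper uses more offsets and more Haralick features."""
    q = np.minimum((img * levels).astype(int), levels - 1)  # quantize [0, 1)
    glcm = np.zeros((levels, levels))
    left, right = q[:, :-1].ravel(), q[:, 1:].ravel()
    np.add.at(glcm, (left, right), 1)       # count horizontal pixel pairs
    p = glcm / glcm.sum()
    i, j = np.indices((levels, levels))
    return np.sum(p * (i - j) ** 2)

rng = np.random.default_rng(1)
smooth = np.tile(np.linspace(0, 0.99, 32), (32, 1))   # smooth gradient
rough = rng.random((32, 32))                          # noisy patch

# Rubble-like (noisy) texture yields a much higher contrast than an
# intact, smooth surface.
print(glcm_contrast(smooth) < glcm_contrast(rough))
```

The intuition behind using such features for damage detection is exactly this separation: collapsed roofs produce high local gray-level variation, intact roofs do not.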
Graphical abstract
Figure 1
<p>The workflow of our semi-automated damage detection system in this study.</p>
Figure 2
<p>A schematic presentation of a MFIS with three inputs and one output and its MFs.</p>
Figure 3
<p>The diagram of convergence of GA over (<b>a</b>) Area 1, (<b>b</b>) Area 3.</p>
Figure 4
<p>The presentation of preliminary MFs and optimized MFs in an experiment for input 2 and input 3: (<b>a</b>) preliminary MFs for input 2, (<b>b</b>) optimized MFs for input 2, (<b>c</b>) preliminary MFs for input 3, (<b>d</b>) optimized MFs for input 3.</p>
Figure 4 Cont.
<p>The presentation of preliminary MFs and optimized MFs in an experiment for input 2 and input 3: (<b>a</b>) preliminary MFs for input 2, (<b>b</b>) optimized MFs for input 2, (<b>c</b>) preliminary MFs for input 3, (<b>d</b>) optimized MFs for input 3.</p>
Figure 5
<p>Building damage maps resulting from the proposed method on (<b>a</b>) Area 1, (<b>b</b>) Area 2, (<b>c</b>) Area 3.</p>
2031 KiB  
Article
Evaluation and Improvement of SMOS and SMAP Soil Moisture Products for Soils with High Organic Matter over a Forested Area in Northeast China
by Mengjie Jin, Xingming Zheng, Tao Jiang, Xiaofeng Li, Xiao-Jie Li and Kai Zhao
Remote Sens. 2017, 9(4), 387; https://doi.org/10.3390/rs9040387 - 19 Apr 2017
Cited by 23 | Viewed by 5010
Abstract
Soil moisture (SM) retrieval from SMOS (the Soil Moisture and Ocean Salinity mission) and SMAP (the Soil Moisture Active/Passive mission) passive microwave data over forested areas with the required accuracy is of great significance and poses some challenges. In this paper, we used Ground Wireless Sensor Network (GWSN) SM measurements from 9 September to 5 November 2015 to validate SMOS and SMAP Level 3 (L3) SM products over forested areas in northeastern China. Our results show that neither SMOS nor SMAP L3 SM products were ideal, with respective RMSE (root mean square error) values of 0.31 cm3/cm3 and 0.17 cm3/cm3. Nevertheless, some improvements in SM retrieval might be achievable through refinements of the soil dielectric model with respect to the high percentage of soil organic matter (SOM) in the forested area. To that end, the potential of the semi-empirical soil dielectric model proposed by Jun Liu (Liu’s model) to improve SM retrieval results over forested areas was investigated. Introducing Liu’s model into the retrieval algorithms of both SMOS and SMAP missions produced promising results. For SMAP, the RMSE of L3 SM products improved from 0.16 cm3/cm3 to 0.07 cm3/cm3 for AM (local solar time around 06:00 am) data, and from 0.17 cm3/cm3 to 0.05 cm3/cm3 for PM (local solar time around 06:00 pm) data. For SMOS ascending orbit products, the accuracy was improved by 56%, while descending orbit products improved by 45%. Full article
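The validation boils down to comparing retrieved SM against in-situ measurements with RMSE; one plausible reading of the quoted percent improvements is the relative reduction in RMSE. A minimal sketch with made-up numbers (not the paper's data):

```python
import numpy as np

def rmse(retrieved, in_situ):
    """Root mean square error between retrieved and in-situ soil
    moisture (cm^3/cm^3)."""
    retrieved = np.asarray(retrieved, dtype=float)
    in_situ = np.asarray(in_situ, dtype=float)
    return np.sqrt(np.mean((retrieved - in_situ) ** 2))

# Made-up example values (NOT the paper's data): in-situ SM and two
# retrievals, one from the original algorithm and one after swapping
# in an organic-matter-aware dielectric model.
in_situ  = np.array([0.38, 0.41, 0.36, 0.44, 0.40])
original = np.array([0.20, 0.25, 0.19, 0.27, 0.23])
revised  = np.array([0.34, 0.38, 0.33, 0.40, 0.37])

r_orig, r_rev = rmse(original, in_situ), rmse(revised, in_situ)
improvement = 100.0 * (r_orig - r_rev) / r_orig   # percent reduction in RMSE
print(round(r_orig, 3), round(r_rev, 3), round(improvement, 1))
```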
Figure 1
<p>Land use types in the study area and locations of in situ <span class="html-italic">SM</span> stations.</p>
Figure 2
<p>Soil organic matter content (<span class="html-italic">SOM</span>) and bulk density (g/cm<sup>3</sup>) of the soil samples from the 17 EC-5 <span class="html-italic">SM</span> sensor-sites.</p>
Figure 3
<p>The calibration results of EC-5 <span class="html-italic">SM</span> sensor.</p>
Figure 4
<p>Comparisons between Ground Wireless Sensor Network (GWSN) measurements with Soil Moisture and Ocean Salinity (SMOS) (<b>a</b>) and Soil Moisture Active/Passive (SMAP) (<b>b</b>) Soil Moisture (<span class="html-italic">SM</span>) products. GWSN_AM_Average and GWSN_PM_Average indicate the average values of the sensors located in the corresponding SMOS and SMAP grids at the time of 6 am and 6 pm, respectively. The shaded area corresponds to the interval between the minimum and maximum values of GWSN measurements. For the SMOS grid, the range of GWSN measurements is 0.30–0.43 cm<sup>3</sup>/cm<sup>3</sup>, the standard deviation is 0.03 cm<sup>3</sup>/cm<sup>3</sup>, and for the SMAP grid, the range and standard deviation of GWSN measurements are 0.36–0.47 cm<sup>3</sup>/cm<sup>3</sup> and 0.03 cm<sup>3</sup>/cm<sup>3</sup>.</p>
Figure 5
<p>Comparison between the dielectric constants simulated by the Mironov model and that simulated by Liu’s model for three kinds of soil with the same clay content (13%) and different <span class="html-italic">SOM</span> (10%, 20% and 30%) at 1.4 GHz.</p>
Figure 6
<p>Comparisons of revised SMOS <span class="html-italic">SM</span> by Liu’s model with SMOS L3 <span class="html-italic">SM</span> products.</p>
Figure 7
<p>Comparisons of revised SMAP <span class="html-italic">SM</span> by Liu’s model with SMAP L3 <span class="html-italic">SM</span> products.</p>
Figure 8
<p>The differences between the <span class="html-italic">SM</span> values derived by Liu’s model (SM1) and the Mironov model (SM2). SM1 ranged from 0 to 0.6 cm<sup>3</sup>/cm<sup>3</sup>, while <span class="html-italic">SOM</span> ranged from 0 to 60%. SM1 and <span class="html-italic">SOM</span> were input into Liu’s model, and SM2 was then retrieved by the Mironov model based on the <span class="html-italic">T<sub>B</sub></span> that was simulated by Liu’s model. The soil was assumed to be bare and smooth.</p>
6188 KiB  
Article
Discriminative Sparse Representation for Hyperspectral Image Classification: A Semi-Supervised Perspective
by Zhaohui Xue, Peijun Du, Hongjun Su and Shaoguang Zhou
Remote Sens. 2017, 9(4), 386; https://doi.org/10.3390/rs9040386 - 19 Apr 2017
Cited by 17 | Viewed by 5236
Abstract
This paper presents a novel semi-supervised joint dictionary learning (S2JDL) algorithm for hyperspectral image classification. The algorithm jointly minimizes the reconstruction and classification error by optimizing a semi-supervised dictionary learning problem with a unified objective loss function. To this end, we construct a semi-supervised objective loss function which combines the reconstruction term from unlabeled samples and the reconstruction–discrimination term from labeled samples to leverage the unsupervised and supervised information. In addition, a soft-max loss is used to build the reconstruction–discrimination term. In the training phase, we randomly select the unlabeled samples and loop through the labeled samples to form the training pairs, and first-order stochastic gradient descent updates are calculated to simultaneously update the dictionary and classifier by feeding the training pairs into the objective loss function. The experimental results with three popular hyperspectral datasets indicate that the proposed algorithm outperforms other related methods. Full article
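As a rough illustration of the training idea, not the authors' implementation, the sketch below takes one labeled sample with a precomputed sparse code and applies a single first-order gradient step to a toy "reconstruction plus soft-max" objective; all dimensions, weights, and data are invented:

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def loss(D, W, y, x, label, lam=0.5):
    """Toy unified objective: reconstruction error + weighted soft-max loss."""
    rec = 0.5 * np.sum((D @ x - y) ** 2)
    return rec - lam * np.log(softmax(W @ x)[label])

def sgd_step(D, W, y, x, label, lam=0.5, lr=0.01):
    """One first-order stochastic gradient update of dictionary and classifier."""
    r = D @ x - y                    # reconstruction residual
    g = softmax(W @ x)
    g[label] -= 1.0                  # soft-max cross-entropy gradient
    return D - lr * np.outer(r, x), W - lr * lam * np.outer(g, x)

d, k, c = 8, 4, 3                    # bands, atoms, classes (invented sizes)
D = rng.standard_normal((d, k))      # dictionary
W = rng.standard_normal((c, k))      # linear classifier
y = rng.standard_normal(d)           # one labeled spectrum
x = rng.standard_normal(k)           # its sparse code (assumed precomputed)
before = loss(D, W, y, x, label=1)
D, W = sgd_step(D, W, y, x, label=1)
after = loss(D, W, y, x, label=1)
print(after < before)                # a small exact-gradient step lowers the loss
```

Looping such updates over randomly drawn unlabeled/labeled training pairs is the pattern the abstract describes; the sparse coding step itself is omitted here.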
Graphical abstract
Figure 1
<p>Graphical illustration of the proposed method.</p>
Figure 2
<p>(<b>a</b>) False color composition of the Airborne Visible Infrared Imaging Spectrometer (AVIRIS) Indian Pines scene (R: 57, G: 27, B: 17); (<b>b</b>) Ground truth-map containing 16 mutually exclusive land-cover classes.</p>
Figure 3
<p>(<b>a</b>) False color composition of the Reflective Optics Spectrographic Imaging System (ROSIS) University of Pavia scene (R: 102, G: 56, B: 31); (<b>b</b>) Ground truth map containing nine mutually exclusive land-cover classes.</p>
Figure 4
<p>(<b>a</b>) False color composition of the AVIRIS Salinas scene (R: 57, G: 27, B: 17); (<b>b</b>) Ground truth map containing 16 mutually exclusive land-cover classes.</p>
Figure 5
<p>Parameter sensitivity analysis of the proposed method for the AVIRIS Indian Pines dataset (10% of labeled samples per class are used for training and 15 labeled samples per class are used to build the dictionary). (<b>a</b>) Overall accuracy (OA) as a function of <math display="inline"> <semantics> <mi>λ</mi> </semantics> </math> and <math display="inline"> <semantics> <msub> <mi>λ</mi> <mn>1</mn> </msub> </semantics> </math>; (<b>b</b>) Overall accuracy (OA) as a function of <math display="inline"> <semantics> <mi>τ</mi> </semantics> </math>. (<b>a</b>) OA vs. <math display="inline"> <semantics> <mi>λ</mi> </semantics> </math> and <math display="inline"> <semantics> <msub> <mi>λ</mi> <mn>1</mn> </msub> </semantics> </math>; (<b>b</b>) OA vs. <math display="inline"> <semantics> <mi>τ</mi> </semantics> </math>.</p>
Figure 6
<p>The evolution of overall accuracy with SNR for different classifiers (10% of labeled samples per class are used for training and 15 labeled samples per class are used to build the dictionary).</p>
Figure 7
<p>Overall accuracy (OA) as a function of the number of atoms per class for the AVIRIS dataset (10% of labeled samples per class are used for training). Error bars indicate the standard deviations obtained by the proposed method.</p>
Figure 8
<p>Overall accuracy (OA) as a function of the ratio of labeled samples per class for the AVIRIS Indian Pines dataset (15 labeled samples per class are used to build the dictionary). Error bars indicate the standard deviations obtained by the proposed method.</p>
Figure 9
<p>Classification maps obtained by different methods for the AVIRIS Indian Pines dataset. The OA in each case is reported in the parentheses. (<b>a</b>) MOD (48.48%); (<b>b</b>) K-SVD (75.79%); (<b>c</b>) D-KSVD (62.11%); (<b>d</b>) LC-KSVD (62.65%); (<b>e</b>) OnlineDL (55.02%); (<b>f</b>) SDL (71.77%); (<b>g</b>) S<math display="inline"> <semantics> <msup> <mrow/> <mn>2</mn> </msup> </semantics> </math>JDL-Log (80.46%); (<b>h</b>) S<math display="inline"> <semantics> <msup> <mrow/> <mn>2</mn> </msup> </semantics> </math>JDL-Sof (82.25%).</p>
Figure 10
<p>Stem distributions of sparse coefficients relative to the class <span class="html-italic">Wheat</span> obtained by different methods for the AVIRIS Indian Pines dataset. The circles terminating different stems represent the sparse coefficients relative to the associated atoms which are marked with different colors representing different classes. (<b>a</b>) MOD; (<b>b</b>) K-SVD; (<b>c</b>) D-KSVD; (<b>d</b>) LC-KSVD; (<b>e</b>) OnlineDL; (<b>f</b>) SDL; (<b>g</b>) S<math display="inline"> <semantics> <msup> <mrow/> <mn>2</mn> </msup> </semantics> </math>JDL-Log; (<b>h</b>) S<math display="inline"> <semantics> <msup> <mrow/> <mn>2</mn> </msup> </semantics> </math>JDL-Sof.</p>
Figure 11
<p>Graphical illustration of sparse coefficients relative to the class <span class="html-italic">Wheat</span> obtained by different methods for the AVIRIS Indian Pines dataset. (<b>a</b>) MOD; (<b>b</b>) K-SVD; (<b>c</b>) D-KSVD; (<b>d</b>) LC-KSVD; (<b>e</b>) OnlineDL; (<b>f</b>) SDL; (<b>g</b>) S<math display="inline"> <semantics> <msup> <mrow/> <mn>2</mn> </msup> </semantics> </math>JDL-Log; (<b>h</b>) S<math display="inline"> <semantics> <msup> <mrow/> <mn>2</mn> </msup> </semantics> </math>JDL-Sof; (<b>i</b>) Ground-truth.</p>
Figure 12
<p>Reconstruction and denoising power of sparse representation for different methods by taking the class <span class="html-italic">Wheat</span> as an example. The original spectrum (top), reconstructed spectrum with RMSE value (middle), and noise (bottom) are given for each case. (<b>a</b>) MOD; (<b>b</b>) K-SVD; (<b>c</b>) D-KSVD; (<b>d</b>) LC-KSVD; (<b>e</b>) OnlineDL; (<b>f</b>) SDL; (<b>g</b>) S<math display="inline"> <semantics> <msup> <mrow/> <mn>2</mn> </msup> </semantics> </math>JDL-Log; (<b>h</b>) S<math display="inline"> <semantics> <msup> <mrow/> <mn>2</mn> </msup> </semantics> </math>JDL-Sof; (<b>i</b>) Original.</p>
Figure 13
<p>Graphical illustration of the dictionary structure learnt by different methods. The vertical dashed lines in each figure separate different atoms belonging to different classes. (<b>a</b>) MOD; (<b>b</b>) K-SVD; (<b>c</b>) D-KSVD; (<b>d</b>) LC-KSVD; (<b>e</b>) OnlineDL; (<b>f</b>) SDL; (<b>g</b>) S<math display="inline"> <semantics> <msup> <mrow/> <mn>2</mn> </msup> </semantics> </math>JDL-Log; (<b>h</b>) S<math display="inline"> <semantics> <msup> <mrow/> <mn>2</mn> </msup> </semantics> </math>JDL-Sof.</p>
Figure 14
<p>Overall accuracy (OA) as a function of the number of atoms per class for the ROSIS University of Pavia dataset (5% of labeled samples per class are used for training). Error bars indicate the standard deviations obtained by the proposed method.</p>
Figure 15
<p>Overall accuracy (OA) as a function of the ratio of labeled samples per class for the ROSIS University of Pavia dataset (15 labeled samples per class are used to build the dictionary). Error bars indicate the standard deviations obtained by the proposed method.</p>
Figure 16
<p>Classification maps obtained by different methods for the ROSIS University of Pavia dataset. The OA in each case is reported in the parentheses. (<b>a</b>) MOD (59.82%); (<b>b</b>) K-SVD (70.25%); (<b>c</b>) D-KSVD (46.96%); (<b>d</b>) LC-KSVD (48.34%); (<b>e</b>) OnlineDL (75.21%); (<b>f</b>) SDL (72.37%); (<b>g</b>) S<math display="inline"> <semantics> <msup> <mrow/> <mn>2</mn> </msup> </semantics> </math>JDL-Log (76.29%); (<b>h</b>) S<math display="inline"> <semantics> <msup> <mrow/> <mn>2</mn> </msup> </semantics> </math>JDL-Sof (78.79%).</p>
Figure 17
<p>Overall accuracy (OA) as a function of the number of atoms per class for the AVIRIS Salinas dataset (5% of labeled samples per class are used for training). Error bars indicate the standard deviations obtained by the proposed method.</p>
Figure 18
<p>Overall accuracy (OA) as a function of the ratio of labeled samples per class for the AVIRIS Salinas dataset (15 labeled samples per class are used to build the dictionary). Error bars indicate the standard deviations obtained by the proposed method.</p>
Figure 19
<p>Classification maps obtained by different methods for the AVIRIS Salinas dataset. The OA in each case is reported in the parentheses. (<b>a</b>) MOD (75.35%); (<b>b</b>) K-SVD (87.89%); (<b>c</b>) D-KSVD (82.90%); (<b>d</b>) LC-KSVD (83.56%); (<b>e</b>) OnlineDL (86.88%); (<b>f</b>) SDL (82.98%); (<b>g</b>) S<math display="inline"> <semantics> <msup> <mrow/> <mn>2</mn> </msup> </semantics> </math>JDL-Log (89.59%); (<b>h</b>) S<math display="inline"> <semantics> <msup> <mrow/> <mn>2</mn> </msup> </semantics> </math>JDL-Sof (92.50%).</p>
4832 KiB  
Article
MBES-CARIS Data Validation for Bathymetric Mapping of Shallow Water in the Kingdom of Bahrain on the Arabian Gulf
by Abderrazak Bannari and Ghadeer Kadhem
Remote Sens. 2017, 9(4), 385; https://doi.org/10.3390/rs9040385 - 19 Apr 2017
Cited by 18 | Viewed by 9295
Abstract
Sound navigation and ranging (SONAR) detection systems can provide valuable information for navigation and security, especially in shallow coastal areas. The last few years have seen an important increase in the volume of bathymetric data produced by Multi-Beam Echo-sounder Systems (MBES). Recently, the General Bathymetric Chart of the Oceans (GEBCO) released these MBES datasets, preprocessed and processed with the Computer Aided Resource Information System (CARIS), for public domain use. For the first time, this research focuses on validating the performance and robustness of these released MBES-CARIS datasets for bathymetric mapping of shallow water at the regional scale in the Kingdom of Bahrain (Arabian Gulf). The data were imported, converted, and processed in a GIS environment. Only the area covering the Bahrain national water boundary was extracted, avoiding land surfaces. As the released dataset was stored as node-grid points uniformly spaced at approximately 923 m and 834 m in the north and west directions, respectively, simple kriging was used for densification and for deriving a continuous bathymetric surface map with a 30 by 30 m pixel size. In addition to dataset cross-validation, 1200 bathymetric points representing different water depths between 0 and −30 m were selected randomly, extracted from a medium scale (1:100,000) nautical map, and used for validation purposes. The cross-validation results showed that the modeled semi-variogram was adjusted appropriately, assuring satisfactory results. Moreover, the validation results by reference to the nautical map showed that when we consider the total validation points with different water depths, linear statistical regression analysis at a 95% confidence level (p < 0.05) provided a good coefficient of correlation (R2 = 0.95), a good index of agreement (D = 0.82), and a root mean square error (RMSE) of 1.34 m. 
However, when we consider only the validation points (~800) with depth lower than −10 m, both R2 and D decreased to 0.79 and 0.52, respectively, while the RMSE increased to 1.92 m. Otherwise, when we consider exclusively shallow water points (~400) with a depth higher than −10 m, the results showed a very significant R2 (0.97), a good D (0.84) and a low RMSE (0.51 m). Certainly, the released MBES-CARIS data are more appropriate for shallow water bathymetric mapping. However, for the relatively deeper areas the obtained results are less accurate, probably because the MBES did not cover the bottom in several deeper pockmarks, given the rapid change in depth. Possibly the steep slopes and the rough seafloor affect the integrity of the acquired raw data. Moreover, the interpolation of the missed areas’ values between MBES acquisition data points may not reflect the true depths of these areas. It is also possible that the nautical map used for validation was not established with good accuracy in the deeper regions. Full article
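The three validation statistics quoted above can be sketched directly; the index of agreement D is assumed here to be Willmott's form, and the depth pairs are hypothetical, not the study's 1200 points:

```python
import numpy as np

def validation_stats(pred, obs):
    """R^2, Willmott's index of agreement D, and RMSE for depth validation."""
    p, o = np.asarray(pred, float), np.asarray(obs, float)
    r2 = np.corrcoef(p, o)[0, 1] ** 2
    d = 1.0 - np.sum((p - o) ** 2) / np.sum(
        (np.abs(p - o.mean()) + np.abs(o - o.mean())) ** 2)
    rm = float(np.sqrt(np.mean((p - o) ** 2)))
    return r2, d, rm

# Hypothetical chart depths vs. kriged MBES depths (m, negative downward).
obs = [-2.0, -5.5, -9.0, -14.0, -22.0, -28.5]
pred = [-2.3, -5.1, -9.8, -15.2, -20.7, -27.9]
r2, d, rm = validation_stats(pred, obs)
print(round(rm, 3))  # RMSE in metres
```

Splitting the point set by depth band (above/below −10 m) and rerunning the same statistics reproduces the kind of depth-dependent comparison reported in the abstract.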
Graphical abstract
Figure 1
<p>Study area, Kingdom of Bahrain national water boundary.</p>
Figure 2
<p>Methodology flowchart.</p>
Figure 3
<p>Kingdom of Bahrain national water boundary shapefile with MBES collected data.</p>
Figure 4
<p>Zoom on the node points near the Bahrain main island, superimposed on a Landsat-8 image.</p>
Figure 5
<p>Bathymetric depth map (1:100,000) with zooms on two areas.</p>
Figure 6
<p>Bathymetric map derived using simple kriging and contours lines with 4 m equidistance.</p>
Figure 7
<p>3D of the derived bathymetric map integrated with SRTM land digital elevation model.</p>
Figure 8
<p>Cross-validation procedure between the measured depths with MBES-CARIS and the predicted depths for the same points using simple kriging. <span class="html-fig-inline" id="remotesensing-09-00385-i001"> <img alt="Remotesensing 09 00385 i001" src="/remotesensing/remotesensing-09-00385/article_deploy/html/images/remotesensing-09-00385-i001.png"/></span> <span class="html-fig-inline" id="remotesensing-09-00385-i002"> <img alt="Remotesensing 09 00385 i002" src="/remotesensing/remotesensing-09-00385/article_deploy/html/images/remotesensing-09-00385-i002.png"/></span> Points with deviation from the 1:1 line.</p>
Figure 9
<p>Relationship between bathymetric depths from nautical map (observed values) and predicted values from SK based on the measured MBES-CARIS. Depths between −30 and 0 m (<b>a</b>), depths between −30 and −10 m (<b>b</b>), and depths between −10 and 0 m (<b>c</b>).</p>
9894 KiB  
Article
Capturing the Diversity of Deprived Areas with Image-Based Features: The Case of Mumbai
by Monika Kuffer, Karin Pfeffer, Richard Sliuzas, Isa Baud and Martin Van Maarseveen
Remote Sens. 2017, 9(4), 384; https://doi.org/10.3390/rs9040384 - 19 Apr 2017
Cited by 48 | Viewed by 10141
Abstract
Many cities in the Global South are facing rapid population and slum growth, but lack detailed information to target these issues. Frequently, municipal datasets on such areas do not keep up with such dynamics, with data that are incomplete, inconsistent, and outdated. Aggregated census-based statistics refer to large and heterogeneous areas, hiding internal spatial differences. In recent years, several remote sensing studies developed methods for mapping slums; however, few studies focused on their diversity. To address this shortcoming, this study analyzes the capacity of very high resolution (VHR) imagery and image processing methods to map locally specific types of deprived areas, applied to the city of Mumbai, India. We analyze spatial, spectral, and textural characteristics of deprived areas, using WorldView-2 imagery combined with auxiliary spatial data, a random forest classifier, and logistic regression modeling. In addition, image segmentation is used to aggregate results to homogeneous urban patches (HUPs). The resulting typology of deprived areas obtains a classification accuracy of 79% for four deprived types and one formal built-up class. The research successfully demonstrates how image-based proxies from VHR imagery can help extract spatial information on the diversity and cross-boundary clusters of deprivation to inform strategic urban management. Full article
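As a toy illustration of the logistic-regression side of such a workflow (the feature values and the `fit_logistic` helper are invented, not the study's proxies or software):

```python
import numpy as np

def fit_logistic(X, y, lr=0.1, steps=5000):
    """Plain gradient-descent logistic regression (illustrative stand-in)."""
    w = np.zeros(X.shape[1])
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-(X @ w)))
        w -= lr * X.T @ (p - y) / len(y)
    return w

# Toy image-based proxies per patch: [bias, roof-size proxy, vegetation share].
# Values are invented; 1 = deprived area, 0 = formal built-up.
X = np.array([[1, 0.10, 0.05], [1, 0.15, 0.10], [1, 0.20, 0.08],
              [1, 0.80, 0.40], [1, 0.90, 0.35], [1, 0.85, 0.30]])
y = np.array([1, 1, 1, 0, 0, 0])
w = fit_logistic(X, y)
pred = (1.0 / (1.0 + np.exp(-(X @ w))) > 0.5).astype(int)
print(pred.tolist())
```

In the study itself such models are fitted on texture and morphology features extracted from the VHR imagery, then aggregated to homogeneous urban patches.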
Graphical abstract
Figure 1
<p>Determinants of the typology of deprived areas (conceptualized based on the UN-Habitat, 2003).</p>
Figure 2
<p>Typology of deprived areas and their dimensions (ground photo in 2009) (* label used).</p>
Figure 3
<p>Deprived area in Mumbai along a road (ground photo in 2009).</p>
Figure 4
<p>Deprived area in Mumbai climbing up a steep slope (ground photo in 2009).</p>
Figure 5
<p>Health wards of Mumbai, classified by levels of deprivation (left; high values indicate high deprivation; Source: [<a href="#B6-remotesensing-09-00384" class="html-bibr">6</a>]) and image mosaic covering a central part of Mumbai (right; Source: DigitalGlobe).</p>
Figure 6
<p>Methodology—mapping diversity of deprived areas in Mumbai.</p>
Figure 7
<p>Morphological dimensions of deprived areas and employed image features.</p>
Figure 8
<p>Mean feature values of training and reference samples.</p>
Figure 9
<p>Mapping the typology of deprived areas on top of a land cover/use map (background image source: DigitalGlobe).</p>
Figure 10
<p>Mapping a typology of deprived areas: scope and limitations. (<b>a</b>) Formal and vegetation; (<b>b</b>) transition between slum types; (<b>c</b>) mix of deprived types; (<b>d</b>) slum pocket partial extraction.</p>
Figure 11
<p>Cross health ward clusters of deprivation (A indicates a ward with an IMD of 0.29 and B has an IMD of 0.39).</p>
21789 KiB  
Article
A Recognition and Geological Model of a Deep-Seated Ancient Landslide at a Reservoir under Construction
by Shengwen Qi, Yu Zou, Faquan Wu, Changgen Yan, Jinghui Fan, Mingdong Zang, Shishu Zhang and Ruyi Wang
Remote Sens. 2017, 9(4), 383; https://doi.org/10.3390/rs9040383 - 19 Apr 2017
Cited by 14 | Viewed by 5714
Abstract
Forty-six ancient Tibetan star-shaped towers and a village are located on a giant slope, which would be partially flooded by a nearby reservoir currently under construction. Ground survey, boreholes, and geophysical investigations have been carried out, with results indicating that the slope consists of a loose deposit with a mean thickness of approximately 80 m overlying bedrock of micaceous schist and phyllite. Ground survey and Interferometric Synthetic Aperture Radar (InSAR) indicated that the slope is experiencing some local deformations, with the appearance of cracks and the occurrence of two small landslides. Using borehole logs together with knowledge of the regional geological background, it can be inferred that the loose deposit is the result of an ancient deep-seated translational landslide. This landslide was initiated along the weak layer of the bedding plane during the last glaciation in the late Pleistocene (Q3) period, owing to deep incision of the Dadu River at that time. Although the landslide has not undergone major reactivation since the ancient Tibetan star-shaped towers were built (between 200 and 1600 AD), preliminary studies based on geological and geomorphological analyses incorporated with InSAR technology indicated that it is still deforming. Furthermore, these studies highlighted that the rate of deformation gradually decreases from the head to the toe area of the landslide, and that the deformation also exhibits relationships with seasonal rainstorms. The state of the toe area is very important for stabilizing a landslide and minimizing damage. It can be expected that the coming impoundment of the reservoir will increase pore pressure in the rupture zone at the toe area, which will then reduce resistance and accelerate the deformation. Future measures for protection of the slope should focus on toe erosion, and bank protection measures (e.g., rock armor) should be adopted in this area. 
Meanwhile, long-term monitoring instruments should be installed to gain a deeper understanding of the stability of this important slope. Full article
(This article belongs to the Special Issue Remote Sensing of Landslides)
Graphical abstract
Figure 1
<p>Location of the study area.</p>
Figure 2
<p>Stereonet projection of discontinuities in the bedrock (Equal Angle, Lower Hemisphere). Legend is as follows: 1 = bedding plane; 2 = J1—Joint Set 1; 3 = J2—Joint Set 2; 4 = J3—Joint Set 3; 5 = J4—Joint Set 4.</p>
Figure 3
<p>Geological map of the slope.</p>
Figure 4
<p>(<b>a</b>) Rock block with size of 10 × 10 × 2 m. (<b>b</b>) The structure of the first river terrace (from site of PH01 in <a href="#remotesensing-09-00383-f003" class="html-fig">Figure 3</a>).</p>
Figure 5
<p>(<b>a</b>) A block farmland sub-divided into two blocks by a crack (from site of PH02 in <a href="#remotesensing-09-00383-f003" class="html-fig">Figure 3</a>); (<b>b</b>) tilted powerline poles and trees on the slope (from site of PH03 in <a href="#remotesensing-09-00383-f003" class="html-fig">Figure 3</a>); (<b>c</b>) a small circular slump at the toe of the slope (from site of PH04 in <a href="#remotesensing-09-00383-f003" class="html-fig">Figure 3</a>); (<b>d</b>) cracks developed at the ground surface in the house of a local resident, with maximum width of 5 cm (from site of PH05 in <a href="#remotesensing-09-00383-f003" class="html-fig">Figure 3</a>).</p>
Figure 6
<p>(<b>a</b>) Tibetan star-shaped towers on the slope; (<b>b</b>) Tibetan star-shaped towers with 13 outward-pointing corners; (<b>c</b>) a leaning tower caused by local uneven deformation of the landslide.</p>
Figure 7
<p>The contour map of electrical resistivity determined from multi-electrode resistivity method: (<b>a</b>–<b>g</b>) show the geophysical survey lines and their interpretations.</p>
Figure 8
<p>Landslides traces found in samples from (<b>a</b>) borehole BH01 at a depth of 95 m and (<b>b</b>) borehole BH03 at a depth of 62.8 m.</p>
Figure 9
<p>Charred wood presented at a depth of 48.16 m in borehole BH04.</p>
Figure 10
<p>Riverbed section of the Dadu River valley (Revised after Xu et al. [<a href="#B31-remotesensing-09-00383" class="html-bibr">31</a>]).</p>
Figure 11
<p>Cross-section of I–I’.</p>
Figure 12
<p>The relative deformation since 23 December 2006 on 25 September 2007 (<b>A</b>); 12 May 2008 (<b>B</b>); 30 September 2009 (<b>C</b>); 3 October 2010 (<b>D</b>). (<b>E</b>) The average deformation rate of the landslide from 23 December 2006 to 3 January 2011 from ALOS PALSAR data.</p>
Figure 13
<p>Variation in deformation (mm) of two points P1 and P2 in the slope from 23 December 2006 to 3 January 2011 from ALOS PALSAR data vs. monthly precipitation (mm).</p>
5867 KiB  
Article
Parallel Agent-as-a-Service (P-AaaS) Based Geospatial Service in the Cloud
by Xicheng Tan, Song Guo, Liping Di, Meixia Deng, Fang Huang, Xinyue Ye, Ziheng Sun, Weishu Gong, Zongyao Sha and Shaoming Pan
Remote Sens. 2017, 9(4), 382; https://doi.org/10.3390/rs9040382 - 19 Apr 2017
Cited by 9 | Viewed by 5583
Abstract
To optimize the efficiency of the geospatial service in the flood response decision making system, a Parallel Agent-as-a-Service (P-AaaS) method is proposed and implemented in the cloud. The prototype system and comparisons demonstrate the advantages of our approach over existing methods. The P-AaaS method includes both a parallel architecture and a mechanism for adjusting computational resources: a parallel geocomputing mechanism used to execute a geospatial service, and an execution algorithm for the P-AaaS based geospatial service chain. The P-AaaS based method has the following merits: (1) it inherits the advantages of the AaaS-based method (i.e., avoiding transfer of large volumes of remote sensing data or raster terrain data, agent migration, and intelligent conversion into services to improve domain expert collaboration); (2) it improves the low performance and limited concurrent geoprocessing capability of the AaaS-based method, which is critical for special applications (e.g., highly concurrent applications and emergency response applications); and (3) it adjusts the computing resources dynamically according to the number and the performance requirements of concurrent requests, which allows the geospatial service chain to support a large number of concurrent requests by scaling up the cloud-based clusters in use and optimizes computing resources and costs by reducing the number of virtual machines (VMs) when the number of requests decreases. Full article
(This article belongs to the Special Issue Remote Sensing Big Data: Theory, Methods and Applications)
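Merit (3), scaling the cluster with the number and performance requirements of concurrent requests, can be caricatured by a simple capacity rule; the thresholds below (`requests_per_vm`, `max_vms`) are hypothetical, not values from the paper:

```python
import math

def vms_needed(concurrent_requests, requests_per_vm=20, min_vms=1, max_vms=64):
    """Scale the VM cluster with the concurrent request load (toy rule)."""
    wanted = math.ceil(concurrent_requests / requests_per_vm)
    # Never drop below the floor, never exceed the cluster maximum.
    return max(min_vms, min(max_vms, wanted))

print(vms_needed(0), vms_needed(45), vms_needed(10_000))
```

A real controller would also fold in per-request performance requirements and scale-down hysteresis, as the abstract's description implies.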
Graphical abstract
Figure 1
<p>P-AaaS architecture.</p>
Figure 2
<p>Computing resources management modules.</p>
Figure 3
<p>Geospatial task decomposition and result combination.</p>
Figure 4
<p>P-AaaS task scheduling mechanism.</p>
Figure 5
<p>Execution mechanism of the P-AaaS based geospatial service.</p>
Figure 6
<p>Flood response model.</p>
Figure 7
<p>Flood response results.</p>
Figure 8
<p>Execution times of the AaaS-based and P-AaaS based methods.</p>
Figure 9
<p>Performance of P-AaaS based method at different performance requirement levels.</p>
Figure 10
<p>Computing resource requirements of the P-AaaS based method.</p>
5701 KiB  
Article
Examining Spatial Distribution and Dynamic Change of Urban Land Covers in the Brazilian Amazon Using Multitemporal Multisensor High Spatial Resolution Satellite Imagery
by Yunyun Feng, Dengsheng Lu, Emilio F. Moran, Luciano Vieira Dutra, Miquéias Freitas Calvi and Maria Antonia Falcão De Oliveira
Remote Sens. 2017, 9(4), 381; https://doi.org/10.3390/rs9040381 - 19 Apr 2017
Cited by 40 | Viewed by 6356
Abstract
The construction of the Belo Monte hydroelectric dam began in 2011, resulting in a rapid population increase from fewer than 80,000 persons before 2010 to more than 150,000 persons in 2012 in Altamira, Pará State, Brazil. This rapid urbanization has produced many problems in urban planning and management, as well as challenging environmental conditions, requiring monitoring of urban land-cover change at high temporal and spatial resolutions. However, the frequent cloud cover in the moist tropical region is a major problem, impeding the acquisition of cloud-free optical sensor data. Thanks to the availability of different kinds of high spatial resolution satellite images in recent decades, RapidEye imagery in 2011 and 2012, Pleiades imagery in 2013 and 2014, SPOT 6 imagery in 2015, and CBERS imagery in 2016 with spatial resolutions from 0.5 m to 10 m were collected for this research. Because of the differences in spectral and spatial resolutions among these satellite images, directly detecting urban land-cover change using conventional change detection techniques, such as image differencing and principal component analysis, was not feasible. Therefore, a hybrid approach was proposed based on the integration of spectral and spatial features to classify the high spatial resolution satellite images into six land-cover classes: impervious surface area (ISA), bare soil, building demolition, water, pasture, and forest/plantation. A post-classification comparison approach was then used to detect urban land-cover change annually for the periods between 2011 and 2016. The focus was on the analysis of ISA expansion, the dynamic change between pasture and bare soil, and the changes in forest/plantation. This study indicates that the hybrid approach can effectively extract six land-cover types with an overall accuracy of over 90%. ISA increased continuously through conversion from pasture and bare soil. 
The Belo Monte dam construction resulted in building demolition in 2015 in low-lying areas along the rivers and an increase in water bodies in 2016. Because of the dam construction, forest/plantation and pasture decreased much faster, while ISA and water increased much faster in 2011–2016 than they had between 1991 and 2011. About 50% of the increased annual deforestation area can be attributed to the dam construction between 2011 and 2016. The spatial patterns of annual urban land-cover distribution and rates of dynamic change provided important data sources for making better decisions for urban management and planning in this city and others experiencing such explosive demographic change. Full article
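The post-classification comparison described above reduces, for each pair of annual maps, to a per-pixel cross-tabulation of class labels. A minimal sketch in Python/NumPy (class codes and the toy maps are illustrative, not the authors' data):

```python
import numpy as np

# Illustrative class codes for the paper's six classes:
# 0 ISA, 1 bare soil, 2 building demolition, 3 water, 4 pasture, 5 forest/plantation
N_CLASSES = 6

def change_matrix(map_t1, map_t2, n_classes=N_CLASSES):
    """Cross-tabulate two classified rasters: entry (i, j) counts pixels
    that were class i at time 1 and class j at time 2; the diagonal is
    the unchanged area, off-diagonal entries are conversions."""
    assert map_t1.shape == map_t2.shape
    idx = map_t1.ravel() * n_classes + map_t2.ravel()
    counts = np.bincount(idx, minlength=n_classes * n_classes)
    return counts.reshape(n_classes, n_classes)

# Toy 3 x 3 maps in which two pasture pixels convert to ISA
t1 = np.array([[4, 4, 5], [4, 1, 5], [0, 0, 3]])
t2 = np.array([[0, 4, 5], [0, 0, 5], [0, 0, 3]])
m = change_matrix(t1, t2)
print(m[4, 0])  # pasture -> ISA: 2 pixels
```

Summing rows of the matrix gives each class's area at time 1, summing columns its area at time 2, so annual gain/loss rates fall out of the same table.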
(This article belongs to the Special Issue Remote Sensing of Urban Ecology)
Figure 1
<p>Study area and big change in water bodies before and after dam construction. (<b>A</b>) Location of Altamira within Brazil; (<b>B</b>) Landsat color composite using near-infrared (NIR), red, and green (30-m spatial resolution) as RGB for a comparison of water bodies in 2008 (Landsat TM) and 2016 (Landsat 8 OLI); (<b>C</b>) SPOT 6 color composite (1.5 m spatial resolution) in 2015 using NIR, red, and green as RGB.</p>
Figure 2
<p>Framework of urban land-cover classification using a hybrid approach based on different sensor data with various spatial resolutions. MS and Pan represent multispectral and panchromatic data; SPOT represents Système Pour l’Observation de la Terre (French remote sensing satellite), CBERS represents China-Brazil Earth Resources Satellite, and PC1 represents the first principal component. SVM, support vector machine; NDVI, Normalized Difference Vegetation Index; NDWI, Normalized Difference Water Index.</p>
Figure 3
<p>Urban land-cover distribution annually from 2011 to 2016, showing forest and pasture distribution in a large area.</p>
Figure 4
<p>Spatial distribution of annual urban land-cover dynamic change from 2011 to 2016; (<b>a</b>–<b>e</b>) are graphics of ISA, water, pasture, bare soil, forest/plantation, respectively; 1 and 2 represent increased area and lost area, respectively.</p>
6155 KiB  
Article
InSAR Time-Series Analysis of Land Subsidence under Different Land Use Types in the Eastern Beijing Plain, China
by Chaofan Zhou, Huili Gong, Beibei Chen, Jiwei Li, Mingliang Gao, Feng Zhu, Wenfeng Chen and Yue Liang
Remote Sens. 2017, 9(4), 380; https://doi.org/10.3390/rs9040380 - 19 Apr 2017
Cited by 59 | Viewed by 8480
Abstract
In the Beijing plain, long-term groundwater overexploitation and the utilization of underground urban space have led to land subsidence. In this study, the spatial–temporal evolution of land subsidence in Beijing was assessed using the small baseline subset (SBAS) interferometric synthetic aperture radar (InSAR) technique based on 47 TerraSAR-X SAR images from 2010 to 2015. Distinct variations of the land subsidence were found in the study regions. The maximum annual land subsidence rate was 146 mm/year from 2011 to 2015. The comparison between the SBAS InSAR results and ground leveling measurements showed that the InSAR land subsidence results achieved a precision of 2 mm. In 2013, the maximum displacement rates reached 132 and 138 mm/year in the Laiguangying and Dongbalizhuang–Dajiaoting areas, respectively. Our analysis showed that serious land subsidence mainly occurred in the following land use types: water area and wetland, paddy field, upland soils, vegetable land, and peasant-inhabited land. Our results could provide a useful reference for groundwater exploitation and urban planning. Full article
(This article belongs to the Special Issue Earth Observation in Planning for Sustainable Urban Development)
Figure 1
<p>The geographic location of the study area. The red star is the location of reference points for interferometric synthetic aperture radar (InSAR) and leveling measurements. The red pushpins (BJ021, BJ031, BJ054, BJ070, BJ076, BJ135) in (<b>a</b>) and (<b>b</b>) indicate the locations of the leveling benchmarks. In (<b>a</b>), the black box represents the TerraSAR-X data spatial coverage.</p>
Figure 2
<p>A network of interferogram pairs obtained from TerraSAR-X used in small baseline subset (SBAS) InSAR.</p>
Figure 3
<p>Mean land subsidence rate based on the TerraSAR-X data from 2011 to 2015. The red lines are the study area boundary. L and D are the Laiguangying and Dongbalizhuang–Dajiaoting land subsidence areas, respectively.</p>
Figure 4
<p>Comparisons of SBAS-derived land subsidence rates and leveling measurement rates from 2011 to 2013. The locations of the benchmarks are marked as “BJ021”, “BJ031”, “BJ054”, “BJ070”, “BJ076” and “BJ135” in <a href="#remotesensing-09-00380-f001" class="html-fig">Figure 1</a>.</p>
Figure 5
<p>Displacement between 2011 and 2015 in the Laiguangying and Dongbalizhuang–Dajiaoting land subsidence areas measured by the SBAS technique using TerraSAR-X data. (<b>a</b>–<b>e</b>) are the annual deformations from 2011 to 2015, respectively.</p>
Figure 6
<p>Time series of the land displacement in the study area. L indicates the Laiguangying land subsidence area, and D indicates the Dongbalizhuang–Dajiaoting land subsidence area.</p>
Figure 7
<p>Correspondence analysis chart of land subsidence rate versus the land use types. (<b>a</b>–<b>g</b>) show the correlations between nine land use types and seven land subsidence rate classifications. The solid black arrows indicate vectors, with the arrowheads pointing in the positive direction. The black dotted lines are the extensions of the vectors. The red solid arrows indicate perpendicular lines from each land use type to the vectors. The blue solid arrows are the closest to the vectors.</p>
Figure 8
<p>Land use types are indicated with different contour colors: the blue is water area and wetland, the dark green is vegetable land, the light green is paddy field and upland soils area, and the purple is peasant-inhabited land. The gray patterns divide the compressible deposit thickness into four classes: 50 to 100 m, 100 to 150 m, 150 to 200 m, and 200 to 260 m. A–A1 and B–B1 are two profiles perpendicular to F1, F2, and F3.</p>
Figure 9
<p>Land subsidence rate with profiles A–A1 and B–B1. The red line is the location of the faults.</p>
Figure 10
<p>(<b>a1</b>–<b>d1</b>) show the relationship between groundwater level and land subsidence from 2011 to 2013 in paddy field and upland soils, vegetable land, peasant-inhabited land areas, and water area and wetland. Negative values indicate the decrease of groundwater level and land subsidence. (<b>a2</b>–<b>d2</b>) show the time series relationship between the groundwater level and land subsidence.</p>
Figure 11
<p>Distribution of land subsidence rate in the different compressible thickness layers.</p>
1708 KiB  
Article
Optical Backscattering Measured by Airborne Lidar and Underwater Glider
by James H. Churnside, Richard D. Marchbanks, Chad Lembke and Jordon Beckler
Remote Sens. 2017, 9(4), 379; https://doi.org/10.3390/rs9040379 - 18 Apr 2017
Cited by 25 | Viewed by 6268
Abstract
The optical backscattering from particles in the ocean is an important quantity that has been measured by remote sensing techniques and in situ instruments. In this paper, we compare estimates of this quantity from airborne lidar with those from an in situ instrument on an underwater glider. Both of these technologies allow much denser sampling of backscatter profiles than traditional ship surveys. We found a moderate correlation (R = 0.28, p &lt; 10<sup>−5</sup>), with differences that are partially explained by spatial and temporal sampling mismatches, variability in particle composition, and lidar retrieval errors. The data suggest that there are two different regimes with different scattering properties. For backscattering coefficients below about 0.001 m<sup>−1</sup>, the lidar values were generally greater than the glider values. For larger values, the lidar was generally lower than the glider. Overall, the results are promising and suggest that airborne lidar and gliders provide comparable and complementary information on optical particulate backscattering. Full article
Figure 1
<p>Chart of the study area off the west coast of Florida, USA. Gray lines are depth contours in m. Red line is the glider track. Black lines are the flight track segments used in the analysis. Background is satellite estimate of <span class="html-italic">b<sub>bp</sub></span> with values given by the color bar on the right.</p>
Figure 2
<p>Glider-measured particulate backscattering coefficient <span class="html-italic">b<sub>bp</sub></span> as a function of depth and longitude according to the color table at the right. Plot shows the glider data binned into 0.01° latitude by 1 m depth bins for the east to west (<b>top</b>) and return transect (<b>bottom</b>).</p>
Figure 3
<p>Particulate backscatter coefficient <span class="html-italic">b<sub>bp</sub></span> as a function of depth and longitude according to the color table at the right. Top panel is the northern transect and bottom panel is the southern transect. Plot shows the lidar data averaged over the first three flights and binned into 0.01° latitude by 1 m depth bins.</p>
Figure 4
<p>(<b>a</b>) Scatter plot of binned particulate backscatter coefficient <span class="html-italic">b<sub>bp</sub></span> as measured by lidar at 532 nm (unpolarized) as a function of those inferred from the glider data at 650 nm for the north transect. Grey squares denote a single data point within the region covered by the square (5 × 10<sup>−5</sup> m<sup>−1</sup> in each dimension). Red squares denote multiple data points from two (lightest) to 20 (darkest shade) of 1228 total. Lidar data are from the 10 July flight. Solid line is the ideal 1:1 relationship for <span class="html-italic">χ</span>(180°) = 1. Short dashes show the 1:1 line, relative to the data, for <span class="html-italic">χ</span>(180°) = 0.5, and long dashes for <span class="html-italic">χ</span>(180°) = 2.0; (<b>b</b>) Similar plot of lidar attenuation, <span class="html-italic">α</span>, with darkest red representing 11 data points.</p>
Figure 5
<p>Correlations of binned data between flights for the two, three, and five day time differences among the first three flights for the north (black bars on the left) and south (gray bars on the right) transects.</p>
Figure 6
<p>Product of depolarization, <span class="html-italic">d</span>, and particulate backscatter coefficient <span class="html-italic">b<sub>bp</sub></span> as a function of depth and longitude for the cross-polarized lidar return for the northern (top) and southern (bottom) transects. Because the lidar flights were more concentrated towards the earlier portion of the glider mission, the displayed lidar data were averaged over the first three flights for the northern transect and the second and third flights for the southern transect. They are binned into 0.01° latitude by 1 m depth bins.</p>
7566 KiB  
Article
Detection of Absorbing Aerosol Using Single Near-UV Radiance Measurements from a Cloud and Aerosol Imager
by Sujung Go, Mijin Kim, Jhoon Kim, Sang Seo Park, Ukkyo Jeong and Myungje Choi
Remote Sens. 2017, 9(4), 378; https://doi.org/10.3390/rs9040378 - 18 Apr 2017
Cited by 4 | Viewed by 5740
Abstract
The Ultra-Violet Aerosol Index (UVAI) is a practical parameter for detecting aerosols that absorb UV radiation, especially where other aerosol retrievals fail, such as over bright surfaces (e.g., deserts and clouds). However, typical UVAI retrieval requires at least two UV channels, while several satellite instruments, such as the Thermal And Near infrared Sensor for carbon Observation Cloud and Aerosol Imager (TANSO-CAI) onboard the Greenhouse gases Observing SATellite (GOSAT), provide single-channel UV radiances. In this study, a new UVAI retrieval method was developed which uses a single UV channel. A single channel aerosol index (SAI) is defined to measure the extent to which an absorbing aerosol state differs from its state with minimized absorption by aerosol. The SAI qualitatively represents absorbing aerosols by considering a 30-day minimum composite and the variability in aerosol absorption. This study examines the feasibility of detecting absorbing aerosols using satellites with limited UV capability, focusing on those with a single UV channel. The Vector LInearized pseudo-spherical Discrete Ordinate Radiative Transfer (VLIDORT) model was used to test the sensitivity of the SAI and UVAI to aerosol optical properties. The theoretical calculations showed that highly absorbing aerosols have a meaningful correlation with SAI. The retrieved SAI from OMI and the operational OMI UVAI were also in good agreement when UVAI values were greater than 0.7 (the absorption criterion of UVAI). The SAI retrieved from the TANSO-CAI data was compared with operational OMI UVAI data, demonstrating reasonable agreement and a low rate of false detection for cases of absorbing aerosols in East Asia, particularly for values greater than the absorbing threshold of 0.7. Full article
Figure 1
<p>Comparison of model-simulated UVAI and SAI with respect to AOD, SSA, aerosol types, and aerosol layer height using VLIDORT. Solid lines indicate UVAI (354, 388 nm) and dashed lines indicate SAI (388 nm) using Equations (1) and (2). The aerosol’s SSA are indicated at top left. The surface albedo and terrain pressure are 0.05 and 1014 mb, respectively. The three aerosol types (smoke, dust, and NA) and their optical properties were obtained from AERONET lv.2 Inversion data using the aerosol classifying method of Lee et al. [<a href="#B27-remotesensing-09-00378" class="html-bibr">27</a>].</p>
Figure 2
<p>A single path of OMI UVAI data is projected for all OMI data for the Korean Peninsula and surroundings (<b>top</b>). MODIS True Color image (<b>bottom</b>) with an 8 min time difference compared with OMI, showing a severe dust storm over the Korean Peninsula, which originated in north China. Highly absorbing aerosols within the dust storm were detected by OMI UVAI with values greater than 1.5.</p>
Figure 3
<p>Flowchart of the SAI retrieval algorithm using the 388 nm instrument channel of OMI. To avoid bias in the algorithms, measurement quality flags of zero, pixel quality flags of zero, and final algorithm flags of zero and one are used in the OMI lv.2 data.</p>
Figure 4
<p>(<b>a</b>–<b>j</b>) Frequency distribution of background AIs from <a href="#remotesensing-09-00378-t004" class="html-table">Table 4</a>. Upper panel shows results for 8 April 2006 and lower panel for 23 April 2006. From left to right, the frequency distributions show each background AI with cloud minimum M0 (<b>a</b>,<b>f</b>), without cloud minimum M0 (<b>b</b>,<b>g</b>), without cloud mean M0 (<b>c</b>,<b>h</b>), without cloud median M0 (<b>d</b>,<b>i</b>), and without cloud absolute minimum M0 (<b>e</b>,<b>j</b>). (<b>e</b>,<b>j</b>) plotted on a log scale on the y-axis. Among the five empirical background AI models, the minimum M0 (<b>b</b>), (<b>g</b>) shape is most likely to have a Gaussian distribution and is evenly distributed.</p>
Figure 5
<p>Scatter plot of SAIs (388 nm) versus OMI lv.2 SSA for 8 April 2006. The empirical model of SAIs from (<b>a</b>) to (<b>f</b>) is the same as that listed in <a href="#remotesensing-09-00378-t004" class="html-table">Table 4</a>. These scatterplots use an AOD criterion of greater than 0.5 and a UVAI criterion of greater than 0.5 pixels. Among the five empirical SAI models, the M2 (<b>c</b>) has the smallest RMSE value.</p>
Figure 6
<p>(<b>a</b>) MODIS RGB has an 8-min time difference compared with OMI. (<b>b</b>) OMI UVAI, (<b>c</b>) OMI SSA, and (<b>d</b>) calculated SAIs over the Korean Peninsula, comparing UTC 0319 and UTC 0458 on 23 April 2006. PM<sub>10</sub> concentrations were between 50 and 400 µg/m<sup>3</sup> on this day.</p>
Figure 7
<p>(<b>a</b>–<b>i</b>) Results of an agreement and false detection test of OMI UVAI and OMI SAI for 8 April (Case 1) and 23 April (Case 2, Case 3) 2006, respectively. The x-axis indicates the SAI absorbing threshold ranging from −0.5 to 1.5. The different line styles indicate different UVAI threshold values. The SAI value of 0.5 corresponds to a UVAI value of 0.7. The false detection rate is constant, indicating that the current SAI algorithm correctly defines the absorbing aerosol pixels.</p>
Figure 8
<p>OMI UVAI, SSA, and calculated TANSO-CAI SAI for UTC 04:29 on 17 March 2012 over the Korean Peninsula. (<b>a</b>) MODIS RGB has an 8-min time difference compared with OMI; (<b>b</b>) A single path of OMI lv.2 UVAI (354 and 388 nm) data is projected; (<b>c</b>) OMI lv.2 SSA 388 nm; (<b>d</b>) SAI calculated from TANSO-CAI has a 30-min time difference compared with OMI.</p>
Figure 9
<p>OMI UVAI, SSA, and calculated CAI-SAI for UTC 04:40 on 25 April 2012 over the Korean Peninsula. (<b>a</b>) MODIS RGB has an 8-min time difference compared with OMI; (<b>b</b>) A single path of OMI lv.2 UVAI (354 and 388 nm) data is projected. A sun-glint area near the south coast of China was removed because this area has a brighter surface than other ocean surface areas; (<b>c</b>) OMI lv.2 SSA 388 nm; (<b>d</b>) SAI calculated from TANSO-CAI with a 30-min time difference compared with OMI.</p>
Figure 10
<p>(<b>a</b>–<b>d</b>) Results of an agreement and false detection test of OMI UVAI and TANSO-CAI SAI for 17 March and 25 April 2012, respectively. The left column shows the results for OMI SSA values less than 1.0, while the right column shows the results for SSA values less than 0.95. The SAI value of 0.5 corresponds to a UVAI value of 0.7. The false detection rate is constant for moderate absorbing aerosol cases.</p>
16212 KiB  
Article
In-Field High-Throughput Phenotyping of Cotton Plant Height Using LiDAR
by Shangpeng Sun, Changying Li and Andrew H. Paterson
Remote Sens. 2017, 9(4), 377; https://doi.org/10.3390/rs9040377 - 18 Apr 2017
Cited by 94 | Viewed by 11715
Abstract
A LiDAR-based high-throughput phenotyping (HTP) system was developed for cotton plant phenotyping in the field. The HTP system consists of a 2D LiDAR and an RTK-GPS mounted on a high-clearance tractor. The LiDAR scanned three rows of cotton plots simultaneously from the top, and the RTK-GPS was used to provide the spatial coordinates of the point cloud during data collection. Configuration parameters of the system were optimized to ensure the best data quality. A height profile for each plot was extracted from the dense three-dimensional point clouds; then the maximum height and height distribution of each plot were derived. In lab tests, single plants were scanned by LiDAR using 0.5° angular resolution, and results showed an R2 value of 1.00 (RMSE = 3.46 mm) in comparison to manual measurements. In field tests using the same angular resolution, the LiDAR-based HTP system achieved average R2 values of 0.98 (RMSE = 65 mm) for cotton plot height estimation compared to manual measurements. This HTP system is particularly useful for large-field applications because it provides highly accurate measurements, and its efficiency is greatly improved compared to similar studies using a side-view scan. Full article
Figure 1
<p>Diagram of data acquisition platform during data collection in the field.</p>
Figure 2
<p>Custom data acquisition software. (<b>a</b>) Front panel of the custom data acquisition software; (<b>b</b>) Flowchart of block diagram of the custom data acquisition software.</p>
Figure 3
<p>Schematic for full scanning without gaps along a cross section from a moving tractor.</p>
Figure 4
<p>Geometric model for determining angular resolution of LiDAR.</p>
Figure 5
<p>Influence of angular resolution: (<b>a</b>) ∆<span class="html-italic">D</span><sub>1</sub> &gt; <span class="html-italic">Φ</span>; (<b>b</b>) ∆<span class="html-italic">D</span><sub>2</sub> = <span class="html-italic">Φ</span>; (<b>c</b>) ∆<span class="html-italic">D</span><sub>3</sub> &lt; <span class="html-italic">Φ</span>.</p>
Figure 6
<p>Relationship between mounting height and scanned rows: (<b>a</b>) diagram of one missed plant in Row 2; (<b>b</b>) one row was scanned; (<b>c</b>) three rows were scanned; (<b>d</b>) five rows were scanned.</p>
Figure 7
<p>Scheme of height measurement for lab tests. The LiDAR was attached on the frame at a height of 1803 mm. Points A and B are two terminal points of the frame at ground base, and O is the projected point of the LiDAR; let O be the original point, A is the position of −500 mm, and B is 500 mm.</p>
Figure 8
<p>Plants used in lab tests.</p>
Figure 9
<p>Design of field experiments and the moving direction of the tractor.</p>
Figure 10
<p>Data processing pipeline for LiDAR point cloud.</p>
Figure 11
<p>The varying pattern of the laser beam diameter and the distance between two adjacent laser points across a 1000 mm long scanning plane for lab tests at (<b>a</b>) the ground base level (0 mm), and (<b>b</b>) a height level of 1623 mm. The LiDAR was at the height of 1803 mm.</p>
Figure 12
<p>Correlation analysis for different angular resolutions.</p>
Figure 13
<p>The varying pattern of the laser beam diameter and the distance between adjacent laser points across a 2743 mm scanning length for the field tests. The LiDAR was at the height of 1803 mm.</p>
Figure 14
<p>Example of 3D reconstruction. (<b>a</b>) 3D view of the 3D structure model; (<b>b</b>) Top view of the 3D structure model.</p>
Figure 15
<p>Correlation and error analysis between LiDAR and manual measurements. Two vertical dashed lines in the histograms indicate the ±10% error range.</p>
Figure 16
<p>Height histograms for six cotton cultivars. Cultivars with the same letter in the title are not significantly different.</p>
93796 KiB  
Article
Automatic UAV Image Geo-Registration by Matching UAV Images to Georeferenced Image Data
by Xiangyu Zhuo, Tobias Koch, Franz Kurz, Friedrich Fraundorfer and Peter Reinartz
Remote Sens. 2017, 9(4), 376; https://doi.org/10.3390/rs9040376 - 17 Apr 2017
Cited by 54 | Viewed by 17399
Abstract
Recent years have witnessed the fast development of UAVs (unmanned aerial vehicles). As an alternative to traditional image acquisition methods, UAVs bridge the gap between terrestrial and airborne photogrammetry and enable flexible acquisition of high resolution images. However, the georeferencing accuracy of UAVs is still limited by the low-performance on-board GNSS and INS. This paper investigates automatic geo-registration of an individual UAV image or UAV image blocks by matching the UAV image(s) with a previously taken georeferenced image, such as an individual aerial or satellite image with a height map attached or an aerial orthophoto with a DSM (digital surface model) attached. As the biggest challenge in matching UAV and aerial images lies in their large differences in scale and rotation, we propose a novel feature matching method for nadir or slightly tilted images. The method comprises a dense feature detection scheme, a one-to-many matching strategy and a global geometric verification scheme. The proposed method is able to find thousands of valid matches in cases where SIFT and ASIFT fail. Those matches can be used to geo-register the whole UAV image block towards the reference image data. When the reference images offer high georeferencing accuracy, the UAV images can also be geolocalized in a global coordinate system. A series of experiments involving different scenarios was conducted to validate the proposed method. The results demonstrate that our approach achieves not only decimeter-level registration accuracy, but also global accuracy comparable to that of the reference images. Full article
Figure 1
<p>Typical cases from the datasets (<b>a</b>) <tt>Container</tt> and (<b>b</b>) <tt>Highway</tt> showing the results of matching UAV and aerial images using SIFT, where the left of the subfigure is a downsampled UAV image and the right is a cropped aerial image. Green lines indicate the matches detected by SIFT; almost all of them are wrong.</p>
Figure 2
<p>Datasets used in this paper: each column represents one (pre-processed) aerial reference image and two UAV target images. The UAV image in (<b>d</b>) should be matched to the aerial image (top right) and to a cropped part of a Google Maps image (<b>h</b>). (<b>a</b>) <tt>Container</tt>; (<b>b</b>) <tt>Urban1</tt>; (<b>c</b>) <tt>Pool1</tt>; (<b>d</b>) <tt>Building</tt>; (<b>e</b>) <tt>Highway</tt>; (<b>f</b>) <tt>Urban2</tt>; (<b>g</b>) <tt>Pool2</tt>; (<b>h</b>) <tt>Googlemaps</tt>.</p>
Figure 3
<p>Influence of different ratio test thresholds for the <tt>Container</tt> dataset. (<b>a</b>) Number of remaining matches after applying the ratio test (solid) and the number of correct matches among them (dashed); (<b>b</b>) ratio of correct (dashed) and incorrect (solid) matches.</p>
Figure 4
<p>Cumulative number of possible correct matches considering multiple nearest neighbors in the feature matching for the <tt>Container</tt> dataset.</p>
Figure 5
<p>Feature points highlighted in red, namely all of the pixels at the boundaries of superpixels, after removing those feature points located at homogeneous areas for (<b>a</b>) the pre-aligned UAV image and (<b>b</b>) the aerial image of the <tt>Container</tt> dataset with 1000 simple linear iterative clustering (SLIC) superpixels.</p>
Figure 6
<p>Challenge of ambiguous feature matching. One feature point at a corner of a container in the UAV image (<b>a</b>) corresponds to many feature points in the aerial image with similar descriptors (<b>b</b>). The correct match often can be found among a set of multiple nearest neighbors. These ambiguities need to be solved in order to extract the correct match.</p>
Figure 7
<p>Geometric match verification of the <tt>Container</tt> scenario with histogram voting. Distribution of pixel distances for all putative matches according to the one-to-many matching in the (<b>a</b>) row and (<b>b</b>) column direction. Distinct peaks represent unknown 2D-translation.</p>
Figure 8
<p>Recovering the unknown image rotation in the case of unavailable or inaccurate UAV IMU data. Extending the proposed method by transforming UAV feature points with multiple rotation values before the histogram voting step. The figure shows the rotation histogram for the <tt>Container</tt> dataset. The maximum number of raw matches represents unknown image rotation.</p>
Figure 9
<p>Refinement and duplicate elimination of geometric correct matches. (<b>a</b>) One feature point in the UAV image (yellow dot) and its template size (rectangle); (<b>b</b>) corresponding geometric inliers (yellow dots) in the aerial image and size of the search window for one match (red rectangle); (<b>c</b>) all geometric inliers will share the same optimized pixel location after refinement (red dot).</p>
Figure 10
<p>Additional datasets for the experiment. Top: reference images. Bottom: target images. Overlapping areas are highlighted by yellow rectangles in the reference images. (<b>a</b>) <tt>WV2</tt>; (<b>b</b>) <tt>Eichenau</tt>; (<b>c</b>) <tt>EOC</tt>.</p>
Figure 11
<p>Qualitative results of the proposed matching method according to the image pairs in <a href="#remotesensing-09-00376-f002" class="html-fig">Figure 2</a>. The first row shows the overlapped UAV and aerial image pairs after applying an estimated homography calculated from our matches (also for the figure on the bottom right). The second and third row show the distribution of the geometrically-correct matches in the UAV images (yellow dots).</p>
Figure 12
<p>Comparison of (<b>a</b>) the aerial orthophoto with 20 cm GSD and (<b>b</b>) the UAV orthophoto with 2 cm GSD of the <tt>Eichenau</tt> dataset; (<b>c</b>) 50% transparent overlap of both orthophotos; (<b>d</b>,<b>e</b>) compare cars and (<b>f</b>,<b>g</b>) show a roof on the aerial and UAV orthophoto, respectively.</p>
Figure 13
<p>Camera pose visualization for the <tt>Eichenau</tt> dataset, showing camera poses of the geo-registered UAV image block at a 100-m altitude (red) and the aerial image block at a 600-m altitude (black).</p>
Figure 14
<p>Comparison of (<b>a</b>,<b>c</b>) aerial orthophotos with 20-cm GSD and (<b>b</b>,<b>d</b>) UAV orthophotos with 2-cm GSD of the <tt>Germering</tt> dataset; (<b>e</b>,<b>f</b>) compare a manhole and (<b>g</b>,<b>h</b>) staircases on the aerial and UAV orthophoto, respectively.</p>
Figure 15
<p>Comparison of (<b>a</b>) aerial and (<b>b</b>) UAV DSM of the <tt>Eichenau</tt> dataset. 20-cm GSD for aerial and 2-cm GSD for UAV DSM. (<b>c</b>) Color map illustrating the height differences between the two DSMs in meters.</p>
Full article ">Figure 16
<p>Comparison of (<b>a</b>) aerial and (<b>b</b>) UAV DSM of the <tt>Germering</tt> dataset. 20-cm GSD for aerial and 2-cm GSD for UAV DSM. (<b>c</b>) Color map illustrating the height differences between the two DSMs in meters.</p>
Full article ">Figure 17
<p>Histograms of the height differences between the aligned DSMs generated from UAV and aerial images for the (<b>a</b>) <tt>Eichenau</tt> and (<b>b</b>) <tt>Germering</tt> datasets.</p>
Full article ">Figure 18
<p>Comparison of the dense point clouds for (<b>a</b>) only aerial images and (<b>b</b>) additional registered nadir and oblique UAV images of the <tt>EOC</tt> dataset. The combination of aerial and UAV images can enrich 3D models for more details and add facades to buildings.</p>
Full article ">
10271 KiB  
Article
Improving Fractional Impervious Surface Mapping Performance through Combination of DMSP-OLS and MODIS NDVI Data
by Wei Guo, Dengsheng Lu and Wenhui Kuang
Remote Sens. 2017, 9(4), 375; https://doi.org/10.3390/rs9040375 - 17 Apr 2017
Cited by 34 | Viewed by 6582
Abstract
Impervious surface area (ISA) is an important parameter for many studies such as urban climate, urban environmental change, and air pollution; however, mapping ISA at the regional or global scale is still challenging due to the complexity of impervious surface features. The Defense [...] Read more.
Impervious surface area (ISA) is an important parameter for many studies such as urban climate, urban environmental change, and air pollution; however, mapping ISA at the regional or global scale is still challenging due to the complexity of impervious surface features. The Defense Meteorological Satellite Program’s Operational Linescan System (DMSP-OLS) data have been used for ISA mapping, but high uncertainty has persisted due to mixed-pixel and data-saturation problems. This paper presents a new index called normalized impervious surface index (NISI), which is an integration of DMSP-OLS and Moderate Resolution Imaging Spectroradiometer (MODIS) normalized difference vegetation index (NDVI) data, in order to reduce these problems. The ISA mapping performance of the new index is then compared with that of two previously used indices, the Human Settlement Index (HSI) and the Vegetation Adjusted Nighttime light Urban Index (VANUI). We selected China as an example to map fractional ISA distribution through a support vector regression approach based on the relationship between the index and Landsat-derived ISA data. The results indicate that the proposed NISI provided better ISA estimation accuracy than HSI and VANUI, especially when the fractional ISA in a pixel is relatively large (i.e., >0.6) or very small (i.e., <0.2). This approach can be used to rapidly update ISA datasets at regional and global scales. Full article
(This article belongs to the Special Issue Recent Advances in Remote Sensing with Nighttime Lights)
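The regression step described in the abstract, relating an index to Landsat-derived fractional ISA with support vector regression, can be sketched as below. All data here are synthetic stand-ins: the index values and ISA fractions are invented for illustration, and the paper's actual NISI formula and training samples are in the article. The sketch uses scikit-learn's `SVR`.

```python
import numpy as np
from sklearn.svm import SVR

# Synthetic stand-in data: per-pixel index values (an NISI-like composite)
# and fractional ISA labels of the kind derived from Landsat. Values are
# illustrative only; the paper trains on real reference samples.
rng = np.random.default_rng(0)
index = rng.uniform(0.0, 1.0, 500)                          # composite index per pixel
isa_true = np.clip(index + rng.normal(0, 0.05, 500), 0, 1)  # fractional ISA in [0, 1]

# Fit a support vector regression model relating the index to fractional ISA.
model = SVR(kernel="rbf", C=10.0, epsilon=0.01)
model.fit(index.reshape(-1, 1), isa_true)

# Predict fractional ISA for pixels and clamp to the physically valid range.
isa_pred = np.clip(model.predict(index.reshape(-1, 1)), 0, 1)
rmse = float(np.sqrt(np.mean((isa_pred - isa_true) ** 2)))
```

In practice the fitted model would be applied wall-to-wall to the index image to produce the fractional ISA map.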
Show Figures

Graphical abstract
Full article ">Figure 1
<p>The study area—China and selected ten cities in red squares.</p>
Full article ">Figure 2
<p>Framework of mapping fractional ISA distribution through combination of multisource remote sensing data (ISA—impervious surface area; NISI—Normalized Impervious Surface Index; HSI—Human Settlement Index; VANUI—Vegetation Adjusted Nighttime light Urban Index; NDVI—Normalized Difference Vegetation Index).</p>
Full article ">Figure 3
<p>A false color composite based on (<b>a</b>) OLSnor, NDVImax, and NISI as red, green, and blue; and corresponding digital values of (<b>b</b>) OLSnor, (<b>c</b>) NDVImax, and (<b>d</b>) NISI based on the line in the urban landscape of Beijing. (Note: OLSnor is the normalized DMSP-OLS image, and NDVImax is a composite of NDVI time series data using the maximum algorithm).</p>
Full article ">Figure 4
<p>Predicted impervious surface area distributions with the support vector regression method based on (<b>a</b>) HSI, (<b>b</b>) VANUI, and (<b>c</b>) NISI. Panels (<b>a1</b>), (<b>b1</b>), and (<b>c1</b>) show a typical location of impervious surface distribution corresponding to the indices in (<b>a</b>), (<b>b</b>), and (<b>c</b>), respectively.</p>
Full article ">Figure 5
<p>The relationships between ISA estimates and reference data from different data sources and corresponding residual analysis results based on HSI (<b>a1</b>,<b>b1</b>), VANUI (<b>a2</b>,<b>b2</b>), and NISI (<b>a3</b>,<b>b3</b>).</p>
Full article ">Figure 6
<p>A comparison of estimated ISA among different cities based on four data sources.</p>
Full article ">
11612 KiB  
Article
Phenological Observations on Classical Prehistoric Sites in the Middle and Lower Reaches of the Yellow River Based on Landsat NDVI Time Series
by Yuqing Pan, Yueping Nie, Chege Watene, Jianfeng Zhu and Fang Liu
Remote Sens. 2017, 9(4), 374; https://doi.org/10.3390/rs9040374 - 17 Apr 2017
Cited by 9 | Viewed by 4452
Abstract
Buried archeological features show up as crop marks that are mostly visible using high-resolution image data. Such data are costly and restricted to small regions and time domains. However, a time series of freely available medium resolution imagery can be employed to detect [...] Read more.
Buried archeological features show up as crop marks that are mostly visible using high-resolution image data. Such data are costly and restricted to small regions and time domains. However, a time series of freely available medium-resolution imagery can be employed to detect crop growth changes to reveal subtle surface marks in large areas. This paper aims to study the classical Chinese settlements of Taosi and Erlitou over large areas using Landsat NDVI time series crop phenology to determine the optimum periods for detection and monitoring of crop anomalies. Burial areas (such as the palace area and the sacrificial area) were selected as the research area, while the surrounding empty fields with a low density of ancient features were used as reference regions. Landsat NDVI images covering two years’ growth periods of wheat and maize were computed and analyzed using Pearson’s correlation coefficient and Euclidean distance. Similarities or disparities between the burial areas and their empty areas were computed using the Hausdorff distance. Based on the phenology of crop growth, the time series NDVI images of winter wheat and summer maize were generated to analyze crop anomalies in the archeological sites. Results show that the Hausdorff distance was high during the water-critical growth stages of both crops and that images with a high Hausdorff distance provide more distinct subsurface archeological information. Full article
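The comparison of burial-area and reference-area NDVI trajectories via the Hausdorff distance can be sketched with SciPy's `directed_hausdorff`. The NDVI values below are invented for illustration (the real curves come from the Landsat time series); each pixel's curve is treated as one point in an n-dimensional space, n being the number of acquisition dates.

```python
import numpy as np
from scipy.spatial.distance import directed_hausdorff

def ndvi_hausdorff(burial_curves, reference_curves):
    """Symmetric Hausdorff distance between two sets of NDVI time series.

    Each row is one pixel's NDVI curve, treated as a point in R^n
    (n = number of acquisition dates), so the distance summarises how
    far apart the two groups of trajectories are."""
    d_ab = directed_hausdorff(burial_curves, reference_curves)[0]
    d_ba = directed_hausdorff(reference_curves, burial_curves)[0]
    return max(d_ab, d_ba)

# Illustrative curves over six dates: crop above buried features greening
# up more weakly (values are made up for the sketch).
reference = np.tile(np.array([0.2, 0.4, 0.7, 0.8, 0.6, 0.3]), (4, 1))
burial = reference - 0.15  # uniformly suppressed growth over subsurface features

d = ndvi_hausdorff(burial, reference)
```

A larger distance at a given growth stage indicates that the burial-area crop trajectories separate more clearly from the reference fields, which is what the study exploits to select the optimum acquisition periods.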
Show Figures

Graphical abstract
Full article ">Figure 1
<p>The location of study sites in China. (<b>a</b>) Both of the sites are located in the Yellow River valley. The Taosi site is located to the east of the Fenhe River, while the Erlitou site is located between the Yihe River and Luohe River. The true color images of high spatial resolution: (<b>b</b>) Taosi, 2 December 2010, Geoeye; (<b>d</b>) Erlitou, 30 December 2010, WorldView-2, and Corona panchromatic imagery: (<b>c</b>) Taosi, 5 November 1968; (<b>e</b>) Erlitou, 18 April 1962.</p>
Full article ">Figure 2
<p>The NDVI average curves of each sample with planting time and growth stages indicated. The indices peaked first at the end of the jointing stage of winter wheat and again during the summer maize season. The summer maize growth was restrained at the Taosi site. At the tillering stage of the winter wheat’s next planting season, the indices slowly fluctuated as the root system began to develop.</p>
Full article ">Figure 3
<p>Location and attributes of the samples on the high spatial resolution images: (<b>a</b>) Taosi, 2 December 2010, Geoeye; (<b>b</b>) Erlitou, 30 December 2010, WorldView-2.</p>
Full article ">Figure 4
<p>Landsat NDVI time series images from 2000–2001 of Taosi. Winter wheat: (<b>a</b>) Tillering (22 November 2000); (<b>b</b>) Over-wintering (24 December 2000); (<b>c</b>) Jointing (13 March 2001); (<b>d</b>) Jointing (30 March 2001); (<b>e</b>) Heading (14 April 2001). Summer maize: (<b>f</b>) Jointing (4 July 2001); (<b>g</b>) Heading (4 August 2001); (<b>h</b>) Filling (13 September 2001). Negative and positive crop anomalies caused by the small worship-and-sacrifice town to the southeast can be identified in images (<b>c</b>–<b>h</b>); the north sacrificial area and the dense storage area show negative anomalies in images (<b>c</b>–<b>e</b>); and the palace area shows positive anomalies in image (<b>h</b>).</p>
Full article ">Figure 5
<p>Landsat NDVI time series images from 2001–2002 of Taosi. Winter wheat: (<b>a</b>) Tillering (16 November 2001); (<b>b</b>) Over-wintering (4 February 2002); (<b>c</b>) Jointing (9 March 2002); (<b>d</b>) Jointing (24 March 2002); (<b>e</b>) Heading (18 April 2002). Summer maize: (<b>f</b>) Jointing (29 June 2002); (<b>g</b>) Heading (30 July 2002); (<b>h</b>) Flowering (31 August 2002). Negative and positive crop anomalies caused by the small worship-and-sacrifice town to the southeast can be identified in images (<b>c</b>–<b>h</b>); the dense storage area shows negative anomalies in images (<b>c</b>–<b>e</b>); and the palace area shows positive anomalies in images (<b>g</b>) and (<b>h</b>).</p>
Full article ">Figure 6
<p>Landsat NDVI time series images from 1999–2000 of Erlitou. Winter wheat: (<b>a</b>) Three-leaf (12 November 1999); (<b>b</b>) Over-wintering (7 December 1999); (<b>c</b>) Jointing (28 March 2000); (<b>d</b>) Flowering (28 April 2000); (<b>e</b>) Flowering (6 May 2000). Summer maize: (<b>f</b>) Jointing (16 June 2000); (<b>g</b>) Heading (19 August 2000); (<b>h</b>) Flowering (27 August 2000). The negative crop anomalies caused by the ancient road and the eastern wall of the imperial palace can be identified in images (<b>c</b>–<b>e</b>), (<b>g</b>) and (<b>h</b>); the southern ditch of the cast-copper relics region and the sacrificial area show negative anomalies in images (<b>c</b>–<b>e</b>); and the ancient road and the eastern wall of the imperial palace also show positive anomalies in image (<b>a</b>).</p>
Full article ">Figure 7
<p>Landsat NDVI time series images from 2000–2001 of Erlitou. Winter wheat: (<b>a</b>) Tillering (22 November 2000); (<b>b</b>) Over-wintering (26 January 2001); (<b>c</b>) Jointing (30 March 2001); (<b>d</b>) Heading (16 April 2001); (<b>e</b>) Flowering (10 May 2001). Summer maize: (<b>f</b>) Jointing (12 July 2001); (<b>g</b>) Heading (22 August 2001); (<b>h</b>) Flowering (6 September 2001). The negative crop anomalies caused by palace bases No. 5 and No. 3, the ancient road and the eastern wall of the imperial palace can be identified in images (<b>c</b>–<b>e</b>), (<b>g</b>) and (<b>h</b>), and the sacrificial area also shows negative anomalies in images (<b>c</b>–<b>e</b>).</p>
Full article ">Figure 8
<p>The distribution of negative crop anomalies and main archeological features of Taosi. (<b>left</b>) Landsat NDVI of 9 May 2001 and (<b>right</b>) Geoeye image of 2 December 2010. Google Earth image 30 August 2013: (<b>a</b>) southern sacrificial area and the ancient astronomical observatory, 2003–2005; (<b>b</b>) palace area, 2002–2007; (<b>c</b>) northern sacrifice area outside the walled site with northern wall, 2000–2004; (<b>d</b>) regions of dense cellar holes, 2001. The features have since been refilled after an excavation project.</p>
Full article ">Figure 9
<p>The distribution of negative crop anomalies and the main archeological features of Erlitou. (<b>A</b>) Landsat NDVI of 6 May 2000; (<b>B</b>) Landsat NDVI of 10 May 2001; (<b>C</b>) Landsat NDVI of 21 August 2001. (<b>D</b>) Google Earth image of 19 March 2010: (<b>a</b>) sacrifice area and aggregate pit, 1983 partly; (<b>b1</b>) the excavation image of palace bases No. 5 and No. 3 on WorldView-2 image at 30 December 2010; (<b>b2</b>) the eastern wall of the imperial palace, palace bases No. 5 and No. 3, 2003–2010; (<b>b3</b>) the eastern wall of the imperial palace, palace bases No. 5 and No. 3 at images 15 August 2012; (<b>c1</b>) the southern ditch of cast copper relics regions, 1983; (<b>c2</b>) the southern ditch on WorldView-2 image of 30 December 2010. The features have since been refilled following an excavation project.</p>
Full article ">Figure 10
<p>The distribution of positive crop anomalies and main archeological features. (<b>left</b>) Landsat NDVI of 13 September 2001. (<b>right</b>) Google Earth image of 30 August 2013: (<b>a</b>) southern sacrifice area and the ancient astronomical observatory, 2003–2005; (<b>b</b>) palace area, 2002–2007; (<b>c</b>) northern sacrifice area outside the walled site with northern wall, 2000–2004; (<b>d</b>) regions of dense cellar holes, 2001. The features have been refilled after an excavation project.</p>
Full article ">Figure 11
<p>NDVI image series of the early winter wheat stages in the 1999 and 2000 planting periods at the Erlitou site. Year 1999: (<b>a</b>) Emergence (28 October 1999); (<b>b</b>) Three-leaf (12 November 1999); (<b>c</b>) Tillering (28 November 1999). Year 2000: (<b>d</b>) Emergence (29 October 2000); (<b>e</b>) Emergence (30 October 2000); (<b>f</b>) Tillering (22 November 2000). The positive crop anomalies displayed in the images acquired at early stages, from emergence to three-leaf, disappeared from the images acquired at the tillering stage.</p>
Full article ">
8909 KiB  
Article
Multispectral LiDAR Point Cloud Classification: A Two-Step Approach
by Biwu Chen, Shuo Shi, Wei Gong, Qingjun Zhang, Jian Yang, Lin Du, Jia Sun, Zhenbing Zhang and Shalei Song
Remote Sens. 2017, 9(4), 373; https://doi.org/10.3390/rs9040373 - 17 Apr 2017
Cited by 44 | Viewed by 8680
Abstract
Target classification techniques using spectral imagery and light detection and ranging (LiDAR) are widely used in many disciplines. However, none of the existing methods can directly capture spectral and 3D spatial information simultaneously. Multispectral LiDAR was proposed to solve this problem as its [...] Read more.
Target classification techniques using spectral imagery and light detection and ranging (LiDAR) are widely used in many disciplines. However, none of the existing methods can directly capture spectral and 3D spatial information simultaneously. Multispectral LiDAR was proposed to solve this problem as its data combines spectral and 3D spatial information. Point-based classification experiments have been conducted with the use of multispectral LiDAR; however, the low signal to noise ratio creates salt and pepper noise in the spectral-only classification, thus lowering overall classification accuracy. In our study, a two-step classification approach is proposed to eliminate this noise during target classification: routine classification based on spectral information using spectral reflectance or a vegetation index, followed by neighborhood spatial reclassification. In an experiment, a point cloud was first classified with a routine classifier using spectral information and then reclassified with the k-nearest neighbors (k-NN) algorithm using neighborhood spatial information. Next, a vegetation index (VI) was introduced for the classification of healthy and withered leaves. Experimental results show that our proposed two-step classification method is feasible if the first spectral classification accuracy is reasonable. After the reclassification based on the k-NN algorithm was combined with neighborhood spatial information, accuracies increased by 1.50–11.06%. Regarding identification of withered leaves, VI performed much better than raw spectral reflectance, with producer accuracy increasing from 23.272% to 70.507%. Full article
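The second step of the two-step scheme, replacing each point's spectral label by a k-NN majority vote over its spatial neighbourhood, can be sketched as follows. The point cloud, labels, and k = 7 here are synthetic placeholders; in the paper the first-step labels come from a classifier on spectral reflectance or a vegetation index, not from the toy assignment used below.

```python
import numpy as np
from scipy.spatial import cKDTree

def knn_spatial_reclassify(xyz, labels, k=7):
    """Step two of the two-step scheme: replace each point's spectral label
    with the majority label among its k nearest spatial neighbours,
    suppressing salt-and-pepper noise in the classified point cloud."""
    tree = cKDTree(xyz)
    # Query k+1 neighbours because each point is its own nearest neighbour.
    _, idx = tree.query(xyz, k=k + 1)
    neigh_labels = labels[idx]
    out = np.empty_like(labels)
    for i, row in enumerate(neigh_labels):
        vals, counts = np.unique(row, return_counts=True)
        out[i] = vals[np.argmax(counts)]  # majority vote per point
    return out

# Toy scene: two well-separated spatial clusters with a few injected
# 'salt and pepper' misclassifications standing in for spectral noise.
rng = np.random.default_rng(1)
cloud_a = rng.normal([0, 0, 0], 0.1, (50, 3))
cloud_b = rng.normal([5, 5, 5], 0.1, (50, 3))
xyz = np.vstack([cloud_a, cloud_b])
labels = np.array([0] * 50 + [1] * 50)
noisy = labels.copy()
noisy[[3, 17, 60, 85]] = 1 - noisy[[3, 17, 60, 85]]  # inject label noise

clean = knn_spatial_reclassify(xyz, noisy, k=7)
```

Because the flipped points sit deep inside spatially coherent clusters, the neighbourhood vote restores their original labels, which is the mechanism behind the accuracy gains the abstract reports.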
Show Figures

Graphical abstract
Full article ">Figure 1
<p>Photo of the realistic experimental scene.</p>
Full article ">Figure 2
<p>Point cloud of the experimental materials. The white wall points appear almost white because of their high reflectance, which is why they are nearly invisible in this picture.</p>
Full article ">Figure 3
<p>Normalized spectral reflectance variation of seven targets at four wavelengths: (<b>a</b>) 556; (<b>b</b>) 670; (<b>c</b>) 700; and (<b>d</b>) 780 nm. The upper edge, lower edge, and middle line of each box represent the 75th percentile, the 25th percentile, and the median of the point cloud, respectively. The dotted whiskers extend to 1.5 times the box length, or to the most extreme points if those lie closer. Outliers are marked with crosses.</p>
Full article ">Figure 4
<p>Manually labeled multispectral light detection and ranging (LiDAR) point cloud in the seven materials: white wall, white paper box, cactus, ceramic flowerpot, healthy scindapsus leaves, withered scindapsus leaves and plastic foam, which are shown in blue, red, orange, yellow, green, brown and purple, respectively.</p>
Full article ">Figure 5
<p>Classification results based on raw spectral reflectance. Different colors represent the different results of targets. The representative color is the same as the training samples shown in <a href="#remotesensing-09-00373-f004" class="html-fig">Figure 4</a>.</p>
Full article ">Figure 6
<p>Classification of artificial (white wall, white paper box, ceramic flowerpot, and plastic foam) and vegetable (cactus and scindapsus leaves) targets on the basis of (<b>a</b>) raw spectral reflectance; and (<b>b</b>) five VIs. Red and green points represent the artificial and vegetable samples, respectively.</p>
Full article ">Figure 7
<p>Variation of the five vegetation indexes for healthy (red) and withered (blue) scindapsus leaves: Chlorophyll Absorption Reflectance Index 1, Normalized Difference Red Edge, Modified Triangular Vegetation Index 1, Green Normalized Difference Vegetation Index, and Gitelson. The upper edge, lower edge, and middle line of each box represent the 75th percentile, the 25th percentile, and the median of the VI value, respectively. The dotted whiskers extend to 1.5 times the box length, or to the most extreme points if those lie closer. Outliers are marked with red crosses.</p>
Full article ">Figure 8
<p>Healthy and withered scindapsus leaves classification results based on (<b>a</b>) VI and (<b>b</b>) spectral reflectance. Green and brown points indicate healthy and withered leaves, respectively. The result indicates that the VI is more sensitive to the growing condition of leaves, which makes it helpful for discriminating between healthy and withered leaves.</p>
Full article ">Figure 9
<p>Reclassification result of seven individual targets based on the k-NN algorithm with spatial information. Different colors represent different targets. The representative colors are the same as those of the training samples in <a href="#remotesensing-09-00373-f004" class="html-fig">Figure 4</a>.</p>
Full article ">Figure 10
<p>Colorful points represent the points whose class changed after reclassification based on the k-NN algorithm with spatial information. Gray, green, and red points represent the unchanged points, the correctly changed points, and the falsely changed points, respectively.</p>
Full article ">Figure 11
<p>Classification result of the seven individual targets based on Gong’s method [<a href="#B38-remotesensing-09-00373" class="html-bibr">38</a>]. Different colors represent different targets. The representative colors are the same as those of the training samples in <a href="#remotesensing-09-00373-f004" class="html-fig">Figure 4</a>.</p>
Full article ">
3648 KiB  
Article
Evaluation of Remote Sensing Inversion Error for the Above-Ground Biomass of Alpine Meadow Grassland Based on Multi-Source Satellite Data
by Baoping Meng, Jing Ge, Tiangang Liang, Shuxia Yang, Jinglong Gao, Qisheng Feng, Xia Cui, Xiaodong Huang and Hongjie Xie
Remote Sens. 2017, 9(4), 372; https://doi.org/10.3390/rs9040372 - 16 Apr 2017
Cited by 54 | Viewed by 8380
Abstract
It is not yet clear whether there is any difference in using remote sensing data of different spatial resolutions and filtering methods to improve the above-ground biomass (AGB) estimation accuracy of alpine meadow grassland. In this study, field measurements of AGB and spectral [...] Read more.
It is not yet clear whether there is any difference in using remote sensing data of different spatial resolutions and filtering methods to improve the above-ground biomass (AGB) estimation accuracy of alpine meadow grassland. In this study, field measurements of AGB and spectral data at Sangke Town, Gansu Province, China, in three years (2013–2015) are combined to construct AGB estimation models of alpine meadow grassland based on these different remotely-sensed NDVI data: MODIS, HJ-1B CCD of China and Landsat 8 OLI (denoted as NDVIMOD, NDVICCD and NDVIOLI, respectively). This study aims to investigate the estimation errors of AGB from the three satellite sensors, to examine the influence of different filtering methods on MODIS NDVI for the estimation accuracy of AGB and to evaluate the feasibility of large-scale models applied to a small area. The results showed that: (1) filtering the MODIS NDVI using the Savitzky–Golay (SG), logistic and Gaussian approaches can reduce the AGB estimation error; in particular, the SG method performs the best, with the smallest errors at both the sample plot scale (250 m × 250 m) and the entire study area (33.9% and 34.9%, respectively); (2) the optimum estimation model of grassland AGB in the study area is the exponential model based on NDVIOLI, with estimation errors of 29.1% and 30.7% at the sample plot and the study area scales, respectively; and (3) the estimation errors of grassland AGB models previously constructed at different spatial scales (the Tibetan Plateau, Gannan Prefecture and Xiahe County) are higher than those directly constructed based on the small area of this study by 11.9%–36.4% and 5.3%–29.6% at the sample plot and study area scales, respectively. This study presents an improved monitoring algorithm of alpine natural grassland AGB estimation and provides a clear direction for future improvement of the grassland AGB estimation and grassland productivity from remote sensing technology. Full article
(This article belongs to the Special Issue Remote Sensing of Above Ground Biomass)
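Two of the ingredients named in the abstract, Savitzky–Golay filtering of a MODIS NDVI time series and an exponential AGB–NDVI regression, can be sketched with SciPy. All numbers below (NDVI curve shape, noise level, AGB coefficients, window length) are synthetic placeholders, not the paper's fitted values.

```python
import numpy as np
from scipy.signal import savgol_filter
from scipy.optimize import curve_fit

# Step 1: smooth a noisy annual NDVI series with a Savitzky-Golay filter,
# the filtering approach that gave the smallest AGB errors in the study.
t = np.arange(23)  # e.g., 16-day MODIS composites over one year
ndvi_clean = 0.2 + 0.5 * np.exp(-0.5 * ((t - 11) / 4.0) ** 2)  # idealised green-up
rng = np.random.default_rng(2)
ndvi_noisy = ndvi_clean + rng.normal(0, 0.03, t.size)
ndvi_sg = savgol_filter(ndvi_noisy, window_length=7, polyorder=3)

# Step 2: fit an exponential AGB-NDVI model, AGB = a * exp(b * NDVI).
# The coefficients below are made up for the sketch, not the paper's values.
def agb_model(ndvi, a, b):
    return a * np.exp(b * ndvi)

ndvi_samples = np.linspace(0.2, 0.8, 30)
agb_samples = agb_model(ndvi_samples, 120.0, 3.0) * rng.normal(1, 0.05, 30)
(a_hat, b_hat), _ = curve_fit(agb_model, ndvi_samples, agb_samples, p0=(100.0, 2.0))
```

The smoothed series reduces the noise-driven error in the NDVI input, and the fitted exponential model is then applied per pixel to map AGB.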
Show Figures

Graphical abstract
Full article ">Figure 1
<p>Distributions of experimental sample areas (A–E), with sample plots (yellow squares) and sub-plots (red small squares with their identification numbers as yellow), in Xiahe County, Gansu Province. Each sample area has a similar situation of grass growth; each sample plot is a MODIS pixel of 250 m × 250 m; each sub-plot is a 30 m × 30 m plot with five sample quadrats (see the details in <a href="#sec2dot2-remotesensing-09-00372" class="html-sec">Section 2.2</a>).</p>
Full article ">Figure 2
<p>Distributions of the five quadrats (1.5 m × 1.5 m each) in each sub-plot of 30 m × 30 m. Each quadrat consists of 9 sub-quadrats of 0.5 m × 0.5 m. The sub-quadrat identification number (1–9) is the order that we used to sample grass each time, e.g., Sub-quadrat 1 was used the first time of any year, 2 was used in second time of the same year, etc.</p>
Full article ">Figure 3
<p>The spectral curves of 13 sample plots on 28–29 July 2014.</p>
Full article ">Figure 4
<p>The best fit models constructed based on NDVI<sub>MOD</sub> (<b>a</b>), NDVI<sub>SG</sub> (<b>b</b>), NDVI<sub>CCD</sub> (<b>c</b>) and NDVI<sub>OLI</sub> (<b>d</b>).</p>
Full article ">
3427 KiB  
Article
Surface Motion and Structural Instability Monitoring of Ming Dynasty City Walls by Two-Step Tomo-PSInSAR Approach in Nanjing City, China
by Fulong Chen, Yuhua Wu, Yimeng Zhang, Issaak Parcharidis, Peifeng Ma, Ruya Xiao, Jia Xu, Wei Zhou, Panpan Tang and Michael Foumelis
Remote Sens. 2017, 9(4), 371; https://doi.org/10.3390/rs9040371 - 15 Apr 2017
Cited by 32 | Viewed by 7048
Abstract
Spaceborne Multi-Temporal Synthetic Aperture Radar (SAR) Interferometry (MT-InSAR) has been a valuable tool in mapping motion phenomena in different scenarios. Recently, the capabilities of MT-InSAR for risk monitoring and preventive analysis of heritage sites have increasingly been exploited. Considering the limitations of conventional [...] Read more.
Spaceborne Multi-Temporal Synthetic Aperture Radar (SAR) Interferometry (MT-InSAR) has been a valuable tool in mapping motion phenomena in different scenarios. Recently, the capabilities of MT-InSAR for risk monitoring and preventive analysis of heritage sites have increasingly been exploited. Considering the limitations of conventional MT-InSAR techniques, in this study a two-step Tomography-based Persistent Scatterers (PS) Interferometry (Tomo-PSInSAR) approach is proposed for monitoring ground deformation and structural instabilities over the Ancient City Walls (Ming Dynasty) in Nanjing city, China. For the purpose of this study we utilized 26 Stripmap acquisitions from TerraSAR-X and TanDEM-X missions, spanning from May 2013 to February 2015. As a first step, regional-scale surface deformation rates on single PSs were derived (ranging from −40 to +5 mm/year) and used for identifying deformation hotspots as well as for the investigation of a potential correlation between urbanization and the occurrence of surface subsidence. As a second step, structural instability parameters of ancient walls (linear motion rates, non-linear motions and material thermodynamics) were estimated by an extended four-dimensional Tomo-PSInSAR model. The model applies a two-tier network strategy: the most reliable single PSs are first detected in a first-tier Delaunay triangulation network, and the remaining single PSs and double PSs are then detected in a second-tier local star network anchored to the single PSs extracted in the first tier. Consequently, a preliminary phase calibration relevant to the Atmospheric Phase Screen (APS) is not needed. Motion heterogeneities in the spatial domain, either caused by thermal kinetics or displacement trends, were also considered. This study underlines the potential of the proposed Tomo-PSInSAR solution for the monitoring and conservation of cultural heritage sites.
The proposed approach offers a quantitative indicator to local authorities and planners for assessing potential damage and for designing remediation activities. Full article
(This article belongs to the Special Issue Radar Systems for the Societal Challenges)
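The structural-instability parameters estimated in the second step (linear motion rate and thermal amplitude) come from a phase model of roughly the form φ_i = (4π/λ)(v·t_i + k_T·T_i). The sketch below solves a noiseless, already-unwrapped version of this model by least squares; the actual Tomo-PSInSAR estimator works on wrapped phases over a two-tier network, and the wavelength, acquisition dates, and temperatures here are illustrative stand-ins (TerraSAR-X operates in X-band, wavelength about 3.1 cm).

```python
import numpy as np

# Per-acquisition observation geometry (illustrative values).
lam = 0.031                              # radar wavelength [m], X-band
t = np.linspace(0, 1.75, 26)             # temporal baselines [years], 26 acquisitions
temp = 15 + 12 * np.sin(2 * np.pi * t)   # air temperature at acquisitions [deg C]

# Simulate a PS with a linear sinking trend plus thermal dilation.
v_true = -0.008                          # linear motion rate [m/year] (-8 mm/yr)
kt_true = 0.0002                         # thermal dilation [m per deg C]
phase = (4 * np.pi / lam) * (v_true * t + kt_true * temp)  # unwrapped phase [rad]

# Least-squares estimation of the two deformation parameters from the
# design matrix of temporal and thermal baselines.
A = (4 * np.pi / lam) * np.column_stack([t, temp])
v_hat, kt_hat = np.linalg.lstsq(A, phase, rcond=None)[0]
```

Separating the thermal term from the linear term is what lets the method distinguish seasonal material thermodynamics from genuine structural motion of the walls.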
Show Figures

Graphical abstract
Full article ">Figure 1
<p>(<b>a</b>) Location of Nanjing City indicated by the red rectangle; (<b>b</b>) Photograph of city wall remains and (<b>c</b>) the preserved sections marked by red lines overlayed on Google Earth imagery.</p>
Full article ">Figure 2
<p>Surface motion rates in the surroundings of downtown Nanjing, derived from region-scale Tomo-PSInSAR. City-wall remains (red lines) and current subway lines (yellow lines) are shown. Local land subsidence patterns are indicated (pink arrows). Cross-sections 1, 2 and 3 (white rectangles) were used for the monument-scale Tomo-PSInSAR investigation. Precise leveling data from two observation points are indicated by white crosses.</p>
Full article ">Figure 3
<p>Cracks and fissures observed during field campaign undertaken in November 2016, (<b>a</b>) Zhanyuan area and (<b>b</b>) local subsidence pattern west of Qinghuai River.</p>
Full article ">Figure 4
<p>Section-1 (highlighted by the white rectangle in <a href="#remotesensing-09-00371-f002" class="html-fig">Figure 2</a>) of wall remains. (<b>a</b>) Deformation rates and (<b>b</b>) thermal amplitude along the linear wall remains; (<b>c</b>,<b>d</b>) motion velocities of PS points along two profiles (highlighted by pink lines); (<b>e</b>,<b>f</b>) motion time series of PS points p1 and p2 (indicated by white crosses) located in mildly sinking subzones (indicated by white rectangles); (<b>g</b>) photo obtained from field visits.</p>
Full article ">Figure 5
<p>Section-2 (see <a href="#remotesensing-09-00371-f002" class="html-fig">Figure 2</a>) of wall remains showing (<b>a</b>) annual deformation rates; (<b>b</b>) thermal amplitudes with difference values on Yifeng Gate (white rectangle); (<b>c</b>,<b>d</b>) are the motion time series of PS points “p1 and p2” on Yifeng Gate (white crosses).</p>
Full article ">Figure 6
<p>Section-3 (see <a href="#remotesensing-09-00371-f002" class="html-fig">Figure 2</a>) of wall remains showing, annual deformation rates with heterogeneous motions and the displacement time series of a PS target marked by white cross (<b>a</b>). Triggering factors of demolition and construction activities are shown using a series of multi-temporal Google Earth images acquired on (<b>b</b>) April 2013, (<b>c</b>) March 2014 and (<b>d</b>) October 2014. Cracks were found on hotspots with maximum motion gradients (pink circles).</p>
Full article ">Figure 7
<p>(<b>a</b>) Calculated heights (relative to the ground) of wall remains as well as the Yifeng Gate by Tomo-PSInSAR; (<b>b</b>) scatter plot of measured heights from InSAR and laser distance-meter.</p>
Full article ">Figure 8
<p>Unfavorable observation sections of ancient city walls constrained by (<b>a</b>) SAR viewing geometry and the occurrence of vegetation, e.g., relics sheltered by trees indicated by pink arrows in (<b>b</b>,<b>c</b>).</p>
Full article ">
3319 KiB  
Article
Prototyping of LAI and FPAR Retrievals from MODIS Multi-Angle Implementation of Atmospheric Correction (MAIAC) Data
by Chi Chen, Yuri Knyazikhin, Taejin Park, Kai Yan, Alexei Lyapustin, Yujie Wang, Bin Yang and Ranga B. Myneni
Remote Sens. 2017, 9(4), 370; https://doi.org/10.3390/rs9040370 - 15 Apr 2017
Cited by 23 | Viewed by 7096
Abstract
Leaf area index (LAI) and fraction of photosynthetically active radiation (FPAR) absorbed by vegetation are key variables in many global models of climate, hydrology, biogeochemistry, and ecology. These parameters are being operationally produced from Terra and Aqua MODIS bidirectional reflectance factor (BRF) data. [...] Read more.
Leaf area index (LAI) and fraction of photosynthetically active radiation (FPAR) absorbed by vegetation are key variables in many global models of climate, hydrology, biogeochemistry, and ecology. These parameters are being operationally produced from Terra and Aqua MODIS bidirectional reflectance factor (BRF) data. The MODIS science team has developed, and plans to release, a new version of the BRF product using the multi-angle implementation of atmospheric correction (MAIAC) algorithm from Terra and Aqua MODIS observations. This paper presents analyses of LAI and FPAR retrievals generated with the MODIS LAI/FPAR operational algorithm using Terra MAIAC BRF data. Direct application of the operational algorithm to MAIAC BRF resulted in underestimation of LAI by up to 10% relative to the MODIS Collection 6 (C6) standard product. The difference was attributed to a −2% to +8% disagreement between MAIAC and MODIS BRFs over vegetation in the red spectral band, suggesting different accuracies in the BRF products. The operational LAI/FPAR algorithm was adjusted for uncertainties in the MAIAC BRF data. Its performance, evaluated on a limited set of MAIAC BRF data from North and South America, suggests an increase in spatial coverage of the best quality, high-precision LAI retrievals of up to 10%. Overall MAIAC LAI and FPAR are consistent with the standard C6 MODIS LAI/FPAR. The increase in spatial coverage of the best quality LAI retrievals resulted in a better agreement of MAIAC LAI with field data compared to the C6 LAI product, with the RMSE decreasing from 0.80 LAI units (C6) down to 0.67 (MAIAC) and the R2 increasing from 0.69 to 0.80. The slope (intercept) of the satellite-derived vs. field-measured LAI regression line has changed from 0.89 (0.39) to 0.97 (0.25). Full article
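The validation statistics quoted at the end of the abstract (RMSE, R², and the slope and intercept of the satellite-vs-field regression) can be computed for any pair of LAI series as below. The sample values are synthetic placeholders, not the paper's field measurements.

```python
import numpy as np

def agreement_stats(retrieved, field):
    """Validation metrics of the kind quoted in the abstract: RMSE, R^2,
    and the slope/intercept of the retrieved-vs-field ordinary least
    squares regression line."""
    rmse = float(np.sqrt(np.mean((retrieved - field) ** 2)))
    r = np.corrcoef(retrieved, field)[0, 1]
    slope, intercept = np.polyfit(field, retrieved, 1)
    return rmse, float(r ** 2), float(slope), float(intercept)

# Illustrative LAI pairs (made up; not the paper's field data): a retrieval
# with slope ~0.97, intercept ~0.25, and moderate scatter, mimicking the
# kind of agreement the abstract reports for MAIAC LAI.
rng = np.random.default_rng(3)
field_lai = rng.uniform(0.5, 6.0, 40)
maiac_lai = 0.97 * field_lai + 0.25 + rng.normal(0, 0.3, 40)

rmse, r2, slope, intercept = agreement_stats(maiac_lai, field_lai)
```

A slope near 1 and intercept near 0 indicate an unbiased retrieval, while RMSE captures the remaining scatter against the field measurements.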
Show Figures

Graphical abstract
Figure 1">
Figure 1
<p>Retrieval index (vertical axis) and RMSE (color bar) as a function of single scattering albedo at the red (ω<sub>red</sub>) and NIR (ω<sub>NIR</sub>) spectral bands (horizontal plane) for broadleaf crops (Biome 3). One can see a subset of pairs (ω<sub>red</sub>, ω<sub>NIR</sub>) at which high values of the RI and low values of RMSE remain almost invariant. The LAI histograms, however, exhibit strong variation for these single scattering albedos. The calibration procedure aims to find a pair (ω<sub>red</sub>, ω<sub>NIR</sub>) from this subset that minimizes the disagreement between LAI histograms generated by the main algorithm retrievals and the MODIS C6 LAI product. The solution for the MAIAC BRF is shown as a star, which corresponds to ω<sub>red</sub> = 0.15, ω<sub>NIR</sub> = 0.94, RI = 99.3, and RMSE = 0.20. The diamond shows the single scattering albedos used in the MODIS C6 operational algorithm: ω<sub>red</sub> = 0.10, ω<sub>NIR</sub> = 0.94.</p>
Figure 2">
Figure 2
<p>(<b>a</b>) Look-up-table (LUT) entries on the near-infrared (NIR) vs. red spectral plane adjusted for MOD09GA (circles) and MAIAC (asterisks) BRF data. (<b>b</b>) The retrieval domain of the algorithm calibrated for MAIAC BRF data. The main LAI algorithm can retrieve an LAI value only if the observed pair (BRF<sub>red</sub>, BRF<sub>NIR</sub>) of MAIAC BRF at the red and NIR spectral bands falls within the retrieval domain. Color bars show the returned LAI values on the red vs. NIR spectral plane. The LUT entries and retrieval domain are for broadleaf forests (Biome 6), a solar zenith angle between 22.5° and 37.5°, a view zenith angle between 0° and 8.5°, and a relative azimuth angle between 0° and 25°.</p>
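The retrieval-domain logic described in the caption can be illustrated with a minimal sketch. This is NOT the operational MODIS/MAIAC code: the toy LUT entries, the relative-uncertainty acceptance rule, and the `rel_unc` parameter are illustrative assumptions; the real algorithm compares modeled and observed BRFs within band-specific uncertainties over many LUT entries per biome and sun-sensor geometry bin.

```python
# Hedged sketch of a LUT-based retrieval domain test (illustrative only).

def retrieve_lai(brf_red, brf_nir, lut, rel_unc=0.2):
    """Return the mean LAI over LUT entries consistent with the observed
    (BRF_red, BRF_NIR) pair, or None if the pair falls outside the
    retrieval domain (the case that triggers the backup algorithm)."""
    accepted = [
        lai
        for (lut_red, lut_nir, lai) in lut
        if abs(lut_red - brf_red) <= rel_unc * brf_red
        and abs(lut_nir - brf_nir) <= rel_unc * brf_nir
    ]
    if not accepted:
        return None  # observation outside retrieval domain
    return sum(accepted) / len(accepted)

# Toy LUT for one biome/geometry bin: (BRF_red, BRF_NIR, LAI)
lut = [(0.05, 0.30, 1.0), (0.04, 0.40, 2.5), (0.03, 0.50, 4.0)]
print(retrieve_lai(0.045, 0.38, lut))  # pair inside the domain
print(retrieve_lai(0.30, 0.05, lut))   # pair outside the domain -> None
```

A pixel whose observed pair matches no LUT entry returns `None`, mirroring the "BackUp-G"/"BackUp-O" paths tallied in Figure 6.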
Figure 3">
Figure 3
<p>Comparison of MODIS C6 and MAIAC LAI over selected regions with good-quality input for grasses and cereal crops (Biome 1, (<b>a</b>–<b>c</b>)), broadleaf crops (Biome 3, (<b>d</b>–<b>f</b>)), deciduous broadleaf forests (Biome 6, (<b>g</b>–<b>i</b>)), and deciduous needleleaf forests (Biome 7, (<b>j</b>–<b>l</b>)) generated by the main algorithm during the compositing period between 4 and 11 July 2002. Shown are MAIAC versus C6 LAI scatterplots (first column), histograms of MAIAC (blue) and C6 (red) LAI (second column), and the difference between retrievals before (red) and after (blue) calibration of the algorithm for MAIAC data (third column). The dashed vertical lines show the mean values of the histograms.</p>
Figure 4">
Figure 4
<p>Comparisons of (<b>a</b>) C6 and (<b>b</b>) MAIAC with ground-measured LAI. Effective and true LAI are shown as triangles and circles, respectively. There were 25 field measurements available for our analysis.</p>
Figure 5">
Figure 5
<p>Seasonal variations in (<b>a</b>) C6 LAI and (<b>b</b>) MAIAC LAI for forest biome types (B5–B8), and in (<b>c</b>) C6 FPAR and (<b>d</b>) MAIAC FPAR for non-forest biome types (B1–B4), in the year 2002. Retrievals generated by the main algorithm over selected regions were used to derive the LAI and FPAR trajectories. LAI of non-forest and FPAR of forest biome types are shown in <a href="#app1-remotesensing-09-00370" class="html-app">Figure S2</a>.</p>
Figure 6">
Figure 6
<p>Distribution of the algorithm path QA flag over pixels with good-quality MOD09GA (left bars) and MAIAC (right bars) input BRFs in the year 2002. The QA flag indicates whether LAI was generated by the main or backup algorithm. In the first case, an LAI value can be retrieved from non-saturated (legend “Main”) or saturated (legend “Main-S”) surface BRF. The backup algorithm is utilized if the main algorithm fails due to the sun-sensor geometry (legend “BackUp-G”) or other reasons (legend “BackUp-O”). (<b>a</b>) Distribution of QA flags for different biome types. (<b>b</b>) Seasonal variation of QA for all eight biome types.</p>
">
Article
Comparative Assessments of the Latest GPM Mission’s Spatially Enhanced Satellite Rainfall Products over the Main Bolivian Watersheds
by Frédéric Satgé, Alvaro Xavier, Ramiro Pillco Zolá, Yawar Hussain, Franck Timouk, Jérémie Garnier and Marie-Paule Bonnet
Remote Sens. 2017, 9(4), 369; https://doi.org/10.3390/rs9040369 - 13 Apr 2017
Cited by 58 | Viewed by 7075
Abstract
The new IMERG and GSMaP-v6 satellite rainfall estimation (SRE) products from the Global Precipitation Measurement (GPM) mission have been available since January 2015. With a finer grid box of 0.1°, these products should provide more detailed information than their widely-used counterparts at the relatively coarser 0.25° spatial scale. Integrated Multi-satellitE Retrievals for GPM (IMERG) and Global Satellite Mapping of Precipitation version 6 (GSMaP-v6) are assessed by comparing their rainfall estimates with 247 rain gauges over Bolivia from 2014 to 2016. The comparisons were made at annual, monthly and daily temporal scales over the three main national watersheds (Amazon, La Plata and TDPS), for both wet and dry seasons to assess the seasonal variability, and across slope classes to assess the topographic influence on SREs. To observe the potential enhancement in rainfall estimates brought by these two recently released products, the widely-used TRMM Multi-satellite Precipitation Analysis (TMPA) product is also considered in the analysis. The performance of all the products increases during the wet season. Although slightly less accurate than TMPA, IMERG nearly achieves its main objective, which is to ensure the continuity of TMPA rainfall measurements while enhancing the discrimination of rainy and non-rainy days. It also provides the most accurate estimates among all products over the arid Altiplano region. GSMaP-v6 is the least accurate product over the region and tends to underestimate rainfall over the Amazon and La Plata regions, where SRE performance is related to topographic features, with the highest bias observed over high-slope regions. Over the TDPS watershed, the high spatial variability of rainfall, with marked wet and arid regions, is the main factor influencing SREs.
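The continuous part of the gauge-vs-SRE comparison described in the abstract typically reduces to relative bias and relative RMSE at each temporal scale. This is a hedged sketch, not the authors' code: the pairing of gauges to 0.25°/0.1° pixels and the exact metric normalizations are common conventions assumed here (both expressed as a percentage of the gauge totals/mean).

```python
# Hedged sketch of relative bias and relative RMSE between co-located
# satellite rainfall estimates (SRE) and gauge records (illustrative).
import math

def rel_bias_pct(sre, gauge):
    """Relative bias (%) = 100 * (sum(SRE) - sum(gauge)) / sum(gauge)."""
    return 100.0 * (sum(sre) - sum(gauge)) / sum(gauge)

def rel_rmse_pct(sre, gauge):
    """RMSE expressed as a percentage of the mean gauge value."""
    n = len(gauge)
    rmse = math.sqrt(sum((s - g) ** 2 for s, g in zip(sre, gauge)) / n)
    return 100.0 * rmse / (sum(gauge) / n)

# Toy monthly totals (mm) at co-located gauge pixels
gauge = [120.0, 80.0, 40.0, 10.0]
sre = [110.0, 90.0, 35.0, 15.0]
print(round(rel_bias_pct(sre, gauge), 1))   # -> 0.0
print(round(rel_rmse_pct(sre, gauge), 1))   # -> 12.6
```

With these percentage forms, products and regions with very different rainfall totals (e.g., Amazon vs. Altiplano) can be compared on a common scale, as in Figure 4.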
Figure 1">
Figure 1
<p>(<b>a</b>) The study area; (<b>b</b>) the number of rain gauges included in the studied 0.25° SRE pixels; (<b>c</b>) the 0.25° mean slope per pixel derived from SRTM-GL1; and (<b>d</b>–<b>f</b>) mean monthly rainfall amounts derived from TMPA over the 1998–2015 period for each of the considered regions.</p>
Figure 2">
Figure 2
<p>Annual rainfall pattern for all SREs. Rainfall amounts are in mm.</p>
Figure 3">
Figure 3
<p>Taylor diagram for monthly rainfall over the whole of Bolivia (<b>a</b>) and the Amazon (<b>b</b>), La Plata (<b>c</b>) and TDPS (<b>d</b>) regions separately. The continuous curved lines represent RMSE values.</p>
Figure 4">
Figure 4
<p>Absolute bias (%) and RMSE (%) for different slope classes. Black squares, blue triangles and green points represent TMPA, IMERG and GSMaP-v6, respectively.</p>
Figure 5">
Figure 5
<p>Performance diagram for the whole of Bolivia (<b>a</b>) and the Amazon (<b>b</b>), La Plata (<b>c</b>) and TDPS (<b>d</b>) regions. Straight and curved lines represent the B and CSI values, respectively.</p>
Figure 6">
Figure 6
<p>POD and FAR for different slope classes. Black squares, blue triangles and green points represent TMPA, IMERG and GSMaP-v6, respectively.</p>
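The categorical scores behind Figures 5 and 6 (POD, FAR, frequency bias B, and CSI) all derive from a 2×2 rain/no-rain contingency table. The sketch below uses the standard definitions of these scores; the 1 mm/day rain threshold is an illustrative assumption, not necessarily the one used in this study.

```python
# Sketch of rain-detection contingency counts and the derived scores
# (standard verification definitions; threshold is illustrative).

def contingency(sre, gauge, thr=1.0):
    """Count hits, misses, and false alarms at a rain/no-rain threshold."""
    hits = misses = false_alarms = 0
    for s, g in zip(sre, gauge):
        if g >= thr and s >= thr:
            hits += 1          # both gauge and SRE detect rain
        elif g >= thr:
            misses += 1        # gauge rains, SRE misses it
        elif s >= thr:
            false_alarms += 1  # SRE rains, gauge does not
    return hits, misses, false_alarms

def scores(hits, misses, false_alarms):
    pod = hits / (hits + misses)                 # probability of detection
    far = false_alarms / (hits + false_alarms)   # false alarm ratio
    b = (hits + false_alarms) / (hits + misses)  # frequency bias (B)
    csi = hits / (hits + misses + false_alarms)  # critical success index
    return pod, far, b, csi

# Toy daily series (mm/day) at one gauge pixel
gauge = [0.0, 5.0, 2.0, 0.0, 8.0, 0.2]
sre = [1.5, 4.0, 0.0, 0.0, 6.0, 0.0]
h, m, f = contingency(sre, gauge)
print(scores(h, m, f))
```

On a performance diagram such as Figure 5, a product plots at POD on the y-axis and 1 − FAR (success ratio) on the x-axis, with the B = 1 diagonal marking unbiased rain-day frequency.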
">