Remote Sens., Volume 11, Issue 10 (May-2 2019) – 115 articles

Cover Story (view full-size image): With the launch of the Sentinel-2 mission, new opportunities have arisen for mapping tree species, owing to its spatial, spectral, and temporal resolution. We evaluated the utility of the Sentinel-2 time series for mapping tree species in the complex, mixed forests of the Polish Carpathian Mountains. We used 18 Sentinel-2 images from 2018. Different combinations of Sentinel-2 imagery were selected based on the mean decrease in accuracy and mean decrease in Gini measures, in addition to temporal phenological pattern analysis. Tree species discrimination was performed using the random forest classification algorithm. Our results show that the use of the Sentinel-2 time series instead of single-date imagery significantly improved forest tree species mapping, by approximately 5–10% in overall accuracy. In particular, combining images from spring and autumn resulted in better species discrimination.
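The two feature-ranking measures named above map directly onto standard tooling. As a minimal, hedged sketch (all array shapes, band counts, and class labels below are hypothetical placeholders, not the study's data), scikit-learn's RandomForestClassifier exposes the mean decrease in Gini as feature_importances_, while the mean decrease in accuracy corresponds to permutation importance:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.random((500, 18 * 10))    # 500 samples x (18 dates x 10 bands), hypothetical
y = rng.integers(0, 8, size=500)  # 8 hypothetical tree species labels

X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)
rf = RandomForestClassifier(n_estimators=500, random_state=0).fit(X_tr, y_tr)

gini_importance = rf.feature_importances_      # mean decrease in Gini
perm = permutation_importance(rf, X_te, y_te, n_repeats=10, random_state=0)
mda = perm.importances_mean                    # mean decrease in accuracy
top_features = np.argsort(mda)[::-1][:10]      # most informative date/band features
```

Dates ranked highly by both measures would then be the candidates for the reduced image combinations.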
  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive table of contents of newly released issues.
  • PDF is the official format for papers, which are published in both HTML and PDF forms. To view the papers in PDF format, click on the "PDF Full-text" link and use the free Adobe Reader to open them.
26 pages, 9011 KiB  
Article
Object-Based Time-Constrained Dynamic Time Warping Classification of Crops Using Sentinel-2
by Ovidiu Csillik, Mariana Belgiu, Gregory P. Asner and Maggi Kelly
Remote Sens. 2019, 11(10), 1257; https://doi.org/10.3390/rs11101257 - 27 May 2019
Cited by 79 | Viewed by 11505
Abstract
The increasing volume of remote sensing data with improved spatial and temporal resolutions generates unique opportunities for the monitoring and mapping of crops. We compared multiple single-band and multi-band object-based time-constrained Dynamic Time Warping (DTW) classifications for crop mapping based on Sentinel-2 time series of vegetation indices. We tested them on two complex and intensively managed agricultural areas in California and Texas. DTW is a time-flexible method for comparing two temporal patterns by considering the temporal distortions in their alignment. For crop mapping, using time constraints in computing DTW is recommended in order to consider the seasonality of crops. We tested different time constraints in DTW (15, 30, 45, and 60 days) and compared the results with those obtained using Euclidean distance or DTW without a time constraint. The best classification results were obtained for time delays of both 30 and 45 days in California: 79.5% for single-band DTW and 85.6% for multi-band DTW. In Texas, 45 days was best for single-band DTW (89.1%), while 30 days yielded the best results for multi-band DTW (87.6%). Using temporal information from five vegetation indices instead of one increased the overall accuracy in California by 6.1%. We discuss the implications of DTW dissimilarity values in understanding the classification errors. Considering the possible sources of errors and their propagation throughout our analysis, we had combined errors of 22.2% and 16.8% for the California study area and 24.6% and 25.4% for the Texas study area. The proposed workflow is the first implementation of DTW in an object-based image analysis (OBIA) environment and represents a promising step towards generating fast, accurate, and ready-to-use agricultural data products.
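Since the abstract's central device is DTW with a maximum time delay, a compact sketch may help. The following is a minimal, hedged implementation of time-constrained DTW (a Sakoe-Chiba-style band of half-width w, in time steps) for two 1-D vegetation-index profiles; the profile data and the daily sampling grid are illustrative assumptions, not the paper's code:

```python
import numpy as np

def dtw_time_constrained(s, t, w):
    """DTW dissimilarity between temporal profiles s and t, evaluating
    only matrix cells with |i - j| <= w (the maximum time delay)."""
    n, m = len(s), len(t)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(max(1, i - w), min(m, i + w) + 1):
            cost = abs(s[i - 1] - t[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]  # last matrix element m[S,T]: the DTW dissimilarity

# Hypothetical NDVI profiles resampled to a common daily grid:
a = np.sin(np.linspace(0.0, 3.0, 365))
b = np.sin(np.linspace(0.2, 3.2, 365))
print(dtw_time_constrained(a, b, w=45))  # 45-day constraint, as in DTW45
```

Setting w to the full series length recovers unconstrained DTW, while w = 0 degenerates to a time-rigid diagonal comparison, which is the contrast the abstract draws with Euclidean distance.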
Figure 1. Test area locations in California (first test area (TA1)) (a) and Texas (second test area (TA2)) (b). The false color composites for TA1 (1 May 2017) and TA2 (20 July 2017) are depicted in (c) and (d), respectively.
Figure 2. Timeline of the two satellite image time series (SITS) used, composed of Sentinel-2A and 2B images with an irregular temporal distribution.
Figure 3. Workflow of our object-based dynamic time warping (DTW) classifications using multiple vegetation indices extracted from Sentinel-2 SITS, as shown in Table 1.
Figure 4. The temporal patterns of the classes analyzed for TA1, with the values of five vegetation indices shown on the vertical axis. We analyzed a single agricultural year; the horizontal axis shows the day of the time series, namely from 23 September 2016 to 23 September 2017 (365 days). Two temporal patterns were identified for alfalfa (1 and 2). The vegetation indices are the Normalized Difference Vegetation Index (NDVI), Green Normalized Difference Vegetation Index (GNDVI), Normalized Difference Red-Edge (NDRE), Soil Adjusted Vegetation Index (SAVI), and Normalized Difference Water Index (NDWI), derived as shown in Table 1.
Figure 5. The temporal patterns of the classes analyzed for TA2, with the values of five vegetation indices shown on the vertical axis (NDVI, GNDVI, NDRE, SAVI, and NDWI). We analyzed a single agricultural year; the horizontal axis shows the day of the time series, namely from 18 October 2016 to 2 November 2017 (380 days). Two temporal patterns were identified for winter wheat (wheat 1 and 2).
Figure 6. Comparison between two sequences: (a) while Euclidean distance is time-rigid, (b) dynamic time warping (DTW) is time-flexible in dealing with possible time distortion between the sequences. This flexibility is desirable for crop mapping, to deal with the intra-class phenological discrepancies caused by different environmental conditions.
Figure 7. Computing the alignment between two sequences of TA1 (a) and TA2 (b). The vertical and horizontal values represent the date of an image from the SITS, from 1 to 365 for TA1 and from 1 to 380 for TA2. Indices i and j are used to parse the matrix by line and by column, respectively. In these two examples, a maximum time delay w of 45 days is depicted, meaning that only the elements of the matrix that fall within this constraint (orange) are computed. The black dots represent the main diagonal of the DTW matrix (resembling Euclidean distance). After computing the matrix from upper left to lower right, its last element, m[S,T], is returned as the measure of DTW dissimilarity between the two compared sequences.
Figure 8. Best classification results for single-band (a) and multi-band DTW (b) for TA1, using in both cases a time constraint of 30 days (DTW30). Best classification results for single-band (c) and multi-band DTW (d) for TA2, using 45- and 30-day time constraints, respectively (DTW45 and DTW30). For clarity, the developed/low-to-medium-intensity areas are masked with white.
Figure 9. DTW dissimilarity values for single-band (a) and multi-band DTW (b) for TA1, using in both cases a time constraint of 30 days (DTW30). DTW dissimilarity values for single-band DTW45 (c) and multi-band DTW30 (d) for TA2, using 45- and 30-day time constraints, respectively. For clarity, the developed/low-to-medium-intensity areas are masked with white.
Figure 10. Scatter plots of DTW dissimilarity values computed for each class for the single-band DTW30 of California, with R² values in the upper left of the diagonal. Classes analyzed are wheat, alfalfa1, alfalfa2, other hay/non-alfalfa, sugarbeets, onions, sod/grass seed, fallow/idle cropland, vegetables, and water.
Figure 11. Scatter plots of DTW dissimilarity values computed for each class for the single-band DTW45 of Texas, with R² values in the upper left of the diagonal. Classes analyzed are corn, cotton, winter wheat1, winter wheat2, alfalfa, fallow/idle cropland, grass/pasture, and double crop.
20 pages, 4762 KiB  
Article
Analysis of Factors Affecting Asynchronous RTK Positioning with GNSS Signals
by Bao Shu, Hui Liu, Yanming Feng, Longwei Xu, Chuang Qian and Zhixin Yang
Remote Sens. 2019, 11(10), 1256; https://doi.org/10.3390/rs11101256 - 27 May 2019
Cited by 4 | Viewed by 4823
Abstract
For short-baseline real-time kinematic (RTK) positioning, atmospheric and broadcast ephemeris errors can usually be eliminated in double-differenced (DD) processing of synchronous observations. However, when there is communication latency, these errors may not be eliminated in DD treatments because they vary during the latency time. In addition, the time variation of these errors may exhibit different characteristics among GPS, GLONASS, BDS, and Galileo due to different satellite orbit and clock types. In this contribution, formulas for studying the broadcast orbit error, broadcast clock offset error, and atmosphere error in the asynchronous RTK (ARTK) model are proposed, and a comprehensive experimental analysis is performed to numerically show the time variations of these errors and their impacts on short-baseline RTK results for the four systems. Compared with synchronous RTK, the degradation of position precision for ARTK can reach a few centimeters, but the degree of accuracy degradation differs among systems. BDS and Galileo usually outperform GPS and GLONASS in ARTK due to the smaller variation of the broadcast ephemeris error. The variation of the broadcast orbit error is generally negligible compared with the variation of the broadcast clock offset error for GPS, BDS, and Galileo. Specifically, for a month of data, the root mean square (RMS) values of the variation of the broadcast ephemeris error over 15 seconds are 11.2, 16.9, 7.3, and 3.0 mm for GPS, GLONASS, BDS, and Galileo, respectively. The variation of the ionosphere error for some satellites over 15 seconds can reach a few centimeters during active sessions on a day of normal ionospheric activity. In addition, compared with the other systems, BDS ARTK shows an advantage under high ionospheric activity, which may be attributed to the five GEO satellites in the BDS constellation.
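The quantity reported here, the RMS of the error variation over a given latency, is straightforward to compute once a per-satellite error series is available. Below is a hedged sketch under stated assumptions: the error series, sampling rate, and noise model are hypothetical placeholders, and latency_variation_rms is an illustrative helper, not the authors' software:

```python
import numpy as np

def latency_variation_rms(err, rate_hz, latency_s):
    """RMS of e(t) - e(t - tau): the part of an error series (e.g., the
    broadcast clock offset error, in metres) that does NOT cancel in
    double differencing when the base-station data are tau seconds old."""
    k = int(round(latency_s * rate_hz))  # samples spanned by the latency
    d = err[k:] - err[:-k]
    return np.sqrt(np.mean(d ** 2))

# Hypothetical 1 Hz clock-offset error series for one satellite:
t = np.arange(3600)
rng = np.random.default_rng(0)
err = 0.02 * np.sin(2 * np.pi * t / 1800) + 0.003 * rng.standard_normal(t.size)
for tau in (1, 5, 10, 15):  # latencies examined in the paper, in seconds
    print(tau, latency_variation_rms(err, 1.0, tau))
```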
Figure 1. Simulated latency time for asynchronous real-time kinematic (ARTK).
Figure 2. Real-time kinematic (RTK) position errors resulting from synchronous (top) and asynchronous (bottom) data in Session A1 for Bs.1.
Figure 3. RTK position errors resulting from synchronous (top) and asynchronous (bottom) data in Session A1 for Bs.2.
Figure 4. Position RMS value of GPS ARTK using different precise ephemeris for Bs.2.
Figure 5. Illustration of L1 carrier residuals for GPS ARTK using broadcast and c2t-5s ephemeris.
Figure 6. RMS result for 1, 5, 10, and 15 s latency variation of broadcast orbit error on 10 January 2018.
Figure 7. RMS result for 1, 5, 10, and 15 s latency variation of broadcast clock offset error on 10 January 2018.
Figure 8. RMS result for 1, 5, 10, and 15 s latency variation of broadcast ephemeris error on 10 January 2018.
Figure 9. RMS result for 15 s latency variation of broadcast ephemeris error in four different days.
Figure 10. ARTK position errors of Sessions A2 (top) and A3 (bottom) for Bs.2.
Figure 11. L1 carrier residuals for GPS, GLONASS, BDS, and Galileo ARTK using broadcast, c2t ephemeris.
Figure 12. Average 15 s latency variation of ionosphere error (top), elevation angle (middle), and the change rate of elevation angle (bottom) in Session A2.
Figure 13. L1 carrier residuals of Galileo ARTK using broadcast ephemeris in Session A1 and A3 for Bs.2.
13 pages, 20118 KiB  
Article
Improvement of Full Waveform Airborne Laser Bathymetry Data Processing based on Waves of Neighborhood Points
by Tomasz Kogut and Krzysztof Bakuła
Remote Sens. 2019, 11(10), 1255; https://doi.org/10.3390/rs11101255 - 27 May 2019
Cited by 16 | Viewed by 3650
Abstract
Measurements of the topography of the sea floor are one of the main tasks of hydrographic organizations worldwide. Any disaster in maritime traffic can contaminate the environment for many years. Therefore, increasing attention is being paid to the development of effective methods for detecting and monitoring possible obstacles on transport routes. Bathymetric laser scanners record the full waveform reflected from the target, and its transformation yields information about the water surface, the water column, the seabed, and the objects on it. However, subsequent returns cannot be identified in every waveform, which leads to a loss of information about the situation under the water. On the basis of the studies conducted, we concluded that a secondary analysis of the full waveform of airborne laser bathymetry allowed objects on the seabed to be identified. It allowed us to detect additional points in the point cloud, which are necessary for identifying objects on the seabed. The results of the experiment showed that, in the experimental areas where objects were located on the seabed, the number of points increased by between 150% and 550%, and the height accuracy of the seabed elevation model improved by up to 50%, to the level of 0.30 m with reference to sonar data, depending on the type of object.
(This article belongs to the Special Issue Applications of Full Waveform Lidar)
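The secondary analysis hinges on fitting a Gaussian to the samples of a late window of the waveform (window wII in Figure 5 below) to recover a second echo. Here is a hedged sketch of that single step; the synthetic waveform, window position, and noise level are invented for illustration, not taken from the paper's data:

```python
import numpy as np
from scipy.optimize import curve_fit

def gaussian(t, a, mu, sigma):
    return a * np.exp(-0.5 * ((t - mu) / sigma) ** 2)

# Hypothetical samples of window wII of one full-waveform ALB return:
t = np.arange(60.0, 100.0)
rng = np.random.default_rng(1)
wave = gaussian(t, 12.0, 80.0, 2.5) + rng.normal(0.0, 0.4, t.size)

# Fit a Gaussian to the wII samples; mu locates the second echo in time,
# from which an additional seabed point can be derived.
p0 = (wave.max(), t[np.argmax(wave)], 2.0)
(a, mu, sigma), _ = curve_fit(gaussian, t, wave, p0=p0)
print(f"second echo at sample {mu:.2f}, amplitude {a:.2f}")
```

A fit whose amplitude stays at the noise level would be rejected, which is consistent with some waveforms still yielding no subsequent echo (Figure 7 below).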
Figure 1. Location of the test object, Baltic Sea.
Figure 2. Workflow of the experiment.
Figure 3. View of the seabed and points with one echo on the water surface (blue) (A), and view of the seabed with gaps in the data caused by the lack of a second echo (B).
Figure 4. Visualization of the 3D data registered in a sequence: waves in pink contain the registration of two echoes; the other waves registered one return.
Figure 5. Example full waveform from airborne laser bathymetry with the marked windows wI and wII.
Figure 6. Gaussian fitted to the samples from window wII representing the second echo.
Figure 7. Example waves in which no subsequent echoes were found in the secondary analysis.
Figure 8. Visual effect of detecting new points using Gaussian decomposition in the secondary analysis of Airborne Laser Bathymetry (ALB) data: hillshade models of the seabed overlaid with the difference of the preliminary results in reference to sonar data (A), the improved results in reference to sonar data (B), and the difference between the secondary and preliminary results (C); 3D visualization of the seabed elevation model for the preliminary results (D), the improved results (E), and the reference sonar data (F).
Figure 9. Distribution of differences between the new points in the point cloud on the bottom and data from multibeam sonar.
26 pages, 13368 KiB  
Article
Transition Characteristics of the Dry-Wet Regime and Vegetation Dynamic Responses over the Yarlung Zangbo River Basin, Southeast Qinghai-Tibet Plateau
by Liu Liu, Qiankun Niu, Jingxia Heng, Hao Li and Zongxue Xu
Remote Sens. 2019, 11(10), 1254; https://doi.org/10.3390/rs11101254 - 27 May 2019
Cited by 18 | Viewed by 4578
Abstract
The dry-wet transition is of great importance for vegetation dynamics; however, the response mechanism of vegetation variations is still unclear due to the complicated effects of climate change. As a critical, ecologically fragile area located in the southeast Qinghai-Tibet Plateau, the Yarlung Zangbo River (YZR) basin, selected as the typical area in this study, is significantly sensitive and vulnerable to climate change. The standardized precipitation evapotranspiration index (SPEI) and the normalized difference vegetation index (NDVI), based on the GLDAS-NOAH products and the GIMMS-NDVI remote sensing data from 1982 to 2015, were employed to investigate the spatio-temporal characteristics of the dry-wet regime and the vegetation dynamic responses. The results showed that: (1) The spatio-temporal patterns of the precipitation and temperature simulated by GLDAS-NOAH fitted well with those of the in-situ data. (2) During the period 1982–2015, the whole YZR basin exhibited an overall wetting tendency. However, the spatio-temporal characteristics of the dry-wet regime exhibited a reversal before and after 2000, jointly identified by the SPEI and runoff: the YZR basin showed a wetting trend before 2000 and a drying trend after 2000, and the arid areas in the basin showed a tendency of wetting whereas the humid areas exhibited a trend of drying. (3) The region where NDVI was positively correlated with SPEI accounted for approximately 70% of the basin area, demonstrating a similar spatio-temporal reversal of the vegetation around 2000 and indicating that the dry-wet condition is of great importance for the evolution of vegetation. (4) The SPEI showed a much more significant positive correlation with the soil water content, which accounted for more than 95% of the basin area, implying that the soil water content is an important indicator for identifying the dry-wet transition in the YZR basin.
(This article belongs to the Special Issue Remote Sensing of the Terrestrial Hydrologic Cycle)
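Result (3) rests on a per-pixel correlation between the SPEI and NDVI series. A minimal, hedged sketch of that computation follows; the stack shapes, the synthetic data, and the 0.05 significance threshold are illustrative assumptions rather than the paper's exact setup:

```python
import numpy as np
from scipy.stats import pearsonr

# Hypothetical stacks: years x rows x cols of growing-season SPEI and NDVI.
rng = np.random.default_rng(2)
spei = rng.standard_normal((34, 50, 60))              # 1982-2015
ndvi = 0.5 * spei + rng.standard_normal((34, 50, 60))

r = np.empty((50, 60))
p = np.empty((50, 60))
for i in range(50):
    for j in range(60):                               # pixel-wise correlation
        r[i, j], p[i, j] = pearsonr(spei[:, i, j], ndvi[:, i, j])

positive_share = (r > 0).mean()  # fraction of the basin with positive SPEI-NDVI correlation
significant = p < 0.05           # mask for the significance test
```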
Figure 1. Location of the Yarlung Zangbo River basin and distribution of the hydro-meteorological stations.
Figure 2. GLDAS-NOAH and measured monthly precipitation and temperature from 1982 to 2015.
Figure 3. Spatial performance of the measured and GLDAS-NOAH precipitation (left) and temperature (right).
Figure 4. Changes of SPEI at different time scales.
Figure 5. Changes of the annual (left) and growing season (right) SPEI.
Figure 6. Area proportions of the dry, wet, and normal areas indicated by the annual (left) and growing season (right) SPEI in different ranges.
Figure 7. Annual (a,b) and growing season (c,d) spatial distributions of SPEI.
Figure 8. Annual (a,b) and growing season (c,d) variation trends of SPEI with the significance test.
Figure 9. Changes of the annual (left) and growing season (right) NDVI.
Figure 10. Annual (left) and growing season (right) spatial distributions of NDVI.
Figure 11. Annual (a,b) and growing season (c,d) spatial variations of NDVI with the significance test.
Figure 12. Annual (a,b) and growing season (c,d) correlation analysis between the SPEI and NDVI with the significance test.
Figure 13. Annual (left) and growing season (right) spatial distributions of the soil water content.
Figure 14. Annual (a,b) and growing season (c,d) spatial variation trends of the soil water content with the significance test.
Figure 15. Annual (a,b) and growing season (c,d) correlation analysis between the SPEI and soil water content with the significance test.
Figure 16. Analysis on the spatial variation trends of the annual precipitation (a,b), temperature (c,d), and PET (e,f) with the significance test.
Figure 17. Analysis on the spatial variation trends of the growing season precipitation (a,b), temperature (c,d), and PET (e,f) with the significance test.
17 pages, 5542 KiB  
Article
Data Processing and Interpretation of Antarctic Ice-Penetrating Radar Based on Variational Mode Decomposition
by Siyuan Cheng, Sixin Liu, Jingxue Guo, Kun Luo, Ling Zhang and Xueyuan Tang
Remote Sens. 2019, 11(10), 1253; https://doi.org/10.3390/rs11101253 - 27 May 2019
Cited by 10 | Viewed by 4063
Abstract
In Arctic and Antarctic scientific expeditions, ice-penetrating radar is an effective method for studying the bedrock under the ice sheet and the ice information within it. Because of the low conductivity of ice and the relatively uniform composition of ice sheets in the polar regions, ice-penetrating radar is able to obtain deeper and more abundant data than other geophysical methods. However, it is still necessary to suppress the noise in radar data to obtain more accurate and plentiful effective information. In this paper, the entirely non-recursive Variational Mode Decomposition (VMD) is applied to noise reduction of ice-penetrating radar data. VMD is an adaptive, quasi-orthogonal signal decomposition method that decomposes airborne radar data into multiple frequency-limited, quasi-orthogonal Intrinsic Mode Functions (IMFs). The IMFs containing noise are then removed according to the information distribution among the IMF components, and the remaining IMFs are reconstructed. This paper employs the method to process real ice-penetrating radar data; it effectively eliminates the interference noise in the data, improves the signal-to-noise ratio, and yields a clearer internal layer structure of the ice. The method is thus verified to be well suited to noise reduction of polar ice-penetrating radar data, providing a better basis for data interpretation. Finally, we present the fine ice structure within the ice sheet based on the VMD-denoised radar profile.
(This article belongs to the Special Issue Recent Progress in Ground Penetrating Radar Remote Sensing)
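The decompose-select-reconstruct loop described above can be prototyped in a few lines. The sketch below assumes the third-party vmdpy package (the VMD(f, alpha, tau, K, DC, init, tol) call follows that package's documented interface, not this paper's code), and the trace, the parameter values, and the choice of which IMFs to keep are purely illustrative:

```python
import numpy as np
from vmdpy import VMD  # third-party VMD implementation (assumed installed)

# Hypothetical single radar trace: a 30 Hz reflector signal plus noise.
t = np.linspace(0.0, 1.0, 1000)
rng = np.random.default_rng(3)
trace = np.sin(2 * np.pi * 30 * t) + 0.8 * rng.standard_normal(t.size)

# Decompose into K band-limited IMFs (parameter values illustrative only).
alpha, tau, K, DC, init, tol = 2000, 0.0, 5, 0, 1, 1e-7
u, u_hat, omega = VMD(trace, alpha, tau, K, DC, init, tol)

# Keep the IMFs judged to carry signal (chosen here by index for
# illustration; the paper selects them from the information distribution
# among the IMF components) and reconstruct the denoised trace.
keep = [1, 2]
denoised = u[keep].sum(axis=0)
```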
Figure 1. (a) Synthetic data; (b) data 1; (c) data 2; (d) data 3; (e) data 4.
Figure 2. Intrinsic Mode Function (IMF) components obtained by decomposing the synthetic data with Variational Mode Decomposition (VMD).
Figure 3. IMF components obtained by decomposing the synthetic data with Empirical Mode Decomposition (EMD).
Figure 4. Relative permittivity model (left) and conductivity model (right).
Figure 5. Simulated data profile.
Figure 6. The 43rd trace of the simulated data.
Figure 7. Processed simulated data profile.
Figure 8. The 43rd trace of the processed simulated data.
Figure 9. Test data and IMF components obtained by decomposing the simulated data with VMD. (a) Test data, (b) IMF1, (c) IMF2, (d) IMF3, (e) IMF4, (f) IMF5.
Figure 10. Combination of IMF2 and IMF5 of the simulated radar data.
Figure 11. The data collection location of the airborne ice-penetrating radar in the Antarctic. The background image is Bedmap2 bed elevation.
Figure 12. The first fixed-wing airplane, Snow Eagle 601, deployed by China for Antarctic surveys with airborne geophysical instruments including the airborne HiCARS radar system [37].
Figure 13. Data profile after conventional data processing, where the red line marks the position of the 45th trace.
Figure 14. Test trace and IMF components obtained by decomposing single-trace airborne ice-penetrating radar data with VMD. (a) Test trace, (b) IMF1, (c) IMF2, (d) IMF3, (e) IMF4, (f) IMF5.
Figure 15. Original data and IMF components obtained by VMD. (a) Original data, (b) IMF1, (c) IMF2, (d) IMF3, (e) IMF4, (f) IMF5.
Figure 16. (a) Original data profile; (b) composite profile.
Figure 17. Resulting profile.
19 pages, 10362 KiB  
Article
Influence of Drone Altitude, Image Overlap, and Optical Sensor Resolution on Multi-View Reconstruction of Forest Images
by Erich Seifert, Stefan Seifert, Holger Vogt, David Drew, Jan van Aardt, Anton Kunneke and Thomas Seifert
Remote Sens. 2019, 11(10), 1252; https://doi.org/10.3390/rs11101252 - 27 May 2019
Cited by 132 | Viewed by 12296
Abstract
Recent technical advances in drones make them increasingly relevant and important tools for forest measurements. However, information on how to optimally set flight parameters and choose sensor resolution lags behind the technical developments. Our study aims to address this gap by exploring the effects of drone flight parameters (altitude, image overlap, and sensor resolution) on image reconstruction and successful 3D point extraction. The study was conducted using video footage obtained from flights at several altitudes, sampled for images at varying frequencies to obtain forward overlap ratios between 91% and 99%. Artificial reduction of image resolution was used to simulate sensor resolutions between 0.3 and 8.3 megapixels (Mpx). The resulting data matrix was analysed using commercial multi-view geometry (MVG) reconstruction software to understand the effects of drone variables on (1) reconstruction detail and precision, (2) flight times of the drone, and (3) reconstruction times during data processing. The correlations between variables were statistically analysed with a multivariate generalised additive model (GAM), based on a tensor spline smoother, to construct response surfaces. Flight time was linearly related to altitude, while processing time was mainly influenced by altitude and forward overlap, which in turn changed the number of images processed. Low flight altitudes yielded the highest reconstruction detail and best precision, particularly in combination with high image overlaps. Interestingly, this effect was nonlinear and not directly related to increased sensor resolution at higher altitudes. We suggest that image geometry and high image frequency enable the MVG algorithm to identify more points on the silhouettes of tree crowns. Our results are some of the first estimates of reasonable value ranges for flight parameter selection for forestry applications.
(This article belongs to the Section Forest Remote Sensing)
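A GAM with a tensor-product spline smoother over two flight parameters can be sketched with the third-party pyGAM library (the abstract does not name its software, so the library choice, the synthetic response, and all parameter ranges below are assumptions for illustration):

```python
import numpy as np
from pygam import LinearGAM, te  # third-party GAM library (assumed installed)

rng = np.random.default_rng(4)
alt = rng.uniform(25, 100, 300)     # altitude above ground (m), hypothetical
overlap = rng.uniform(91, 99, 300)  # forward overlap (%), hypothetical
# Synthetic response standing in for the measured tie point numbers:
tie_points = 1e6 / alt * (overlap / 91) ** 4 + rng.normal(0, 500, 300)

X = np.column_stack([alt, overlap])
# te(0, 1): tensor-product spline over (altitude, overlap), i.e. a smooth
# response surface of the kind shown in Figure 8 below.
gam = LinearGAM(te(0, 1)).fit(X, tie_points)
print(gam.predict(np.array([[50.0, 95.0]])))  # predicted tie points at 50 m, 95%
```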
Figure 1. Different parameters in drone flights. The outer box represents the target variables, while the inner box shows the parameters that can be directly influenced by the mission planner, and derivatives of those parameters.
Figure 2. The experimental stand seen in (a) a drone image and (b) at ground level.
Figure 3. Relationship between the altitude above ground and the corresponding speed calculated by the flight planning software. Equation and regression coefficients in Supplementary Table S1.
Figure 4. Relationship between sparse reconstruction and dense reconstruction with regard to (a) point numbers and (b) processing times. The percentages indicate the side overlap used to vary the tie point numbers in the sparse point cloud. Equation and regression coefficients in Supplementary Table S1.
Figure 5. Illustration of a section of the dense point cloud based on different tie point densities. (a) Dense reconstruction with a side overlap of 90%, 753,239 tie points in the sparse point cloud, and 67,582,712 points in the total dense point cloud. (b) Dense reconstruction with a side overlap of 55%, 185,349 tie points in the sparse point cloud, and 35,413,159 points in the total dense point cloud. Clearly visible are the missing points, indicated by the blue background in the deciduous trees of the lower left and right corners of (b).
Figure 6. Relationships between side overlap and (a) tie point numbers, (b) the SRMSRE, (c) the flight time, and (d) the processing time for the sparse reconstruction. Equation and regression coefficients are in Supplementary Table S1.
Figure 7. Trend observations for quality and efficiency parameters versus flight/sensor parameters. Displayed are the model predictions and a 95% prediction confidence interval (p = 0.025).
Figure 8. Generalised additive model (GAM) for identified tie point numbers in relation to drone altitude (above ground) and forward image overlap at a sensor resolution of 4 Mpx.
Figure 9. The number of tie points plotted over the forward overlap. Linear regression lines were inserted to show the general reaction pattern. The different colours denote different altitudes (red and green for 25 m, blue for 50 m). The green line denotes a rescaled flight at 25 m (GSD = 2.4 cm/px) with nearly the same ground resolution as the 50 m flight (2.2 cm/px). Equation and regression coefficients are in Supplementary Table S1.
Figure 10. Influence of side overlap and altitude on the area covered per time unit, derived from Figure 6c. Equation and regression coefficients are in Supplementary Table S1.
37 pages, 7160 KiB  
Article
Multi-Resolution Study of Thermal Unmixing Techniques over Madrid Urban Area: Case Study of TRISHNA Mission
by Carlos Granero-Belinchon, Aurelie Michel, Jean-Pierre Lagouarde, Jose A. Sobrino and Xavier Briottet
Remote Sens. 2019, 11(10), 1251; https://doi.org/10.3390/rs11101251 - 27 May 2019
Cited by 13 | Viewed by 5278
Abstract
This work is linked to the future Indian-French high spatio-temporal resolution TRISHNA (Thermal infraRed Imaging Satellite for High-resolution natural resource Assessment) mission, which includes shortwave and thermal infrared bands and is devoted, amongst other things, to the monitoring of urban heat island events. In this article, the performance of seven empirical thermal unmixing techniques applied to simulated TRISHNA satellite images of an urban scenario is studied across spatial resolutions. For this purpose, Top Of Atmosphere (TOA) images in the shortwave and Thermal InfraRed (TIR) ranges are constructed at different resolutions (20 m, 40 m, 60 m, 80 m, and 100 m) according to TRISHNA specifications (spectral bands and sensor properties). These images are synthesized by correcting and undersampling DESIREX 2008 Airborne Hyperspectral Scanner (AHS) images of Madrid at 4 m resolution. This allows the Land Surface Temperature (LST) retrieval of several unmixing techniques applied to images of different resolutions to be compared, and the evolution of each technique's performance across resolutions to be characterized. The seven unmixing techniques are: Disaggregation of radiometric surface Temperature (DisTrad), Thermal imagery sHARPening (TsHARP), Area-To-Point Regression Kriging (ATPRK), Adaptive Area-To-Point Regression Kriging (AATPRK), Urban Thermal Sharpener (HUTS), Multiple Linear Regressions (MLR), and two combinations of ground classification (index-based classification and K-means classification) with DisTrad. Studying these unmixing techniques across resolutions also allows the scale-invariance hypotheses on which the techniques hinge to be validated. Each thermal unmixing technique was tested with several shortwave indices in order to choose the best one. It is shown that (i) ATPRK outperforms the other compared techniques when characterizing the LST of Madrid, (ii) the unmixing performance of any technique degrades as the coarse spatial resolution increases, (iii) the shortwave index used does not strongly influence the unmixing performance, and (iv) even though the scale-invariance hypotheses behind these techniques remain empirical, this does not affect the unmixing performance within this range of resolutions.
(This article belongs to the Section Urban Remote Sensing)
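Most of the compared techniques share the regress-then-redistribute skeleton shown in Figure 1 below: fit LST against a shortwave index at the coarse scale, predict at the fine scale, and add back the residuals. The following is a minimal, hedged DisTrad-style sketch of that skeleton only (a global linear fit on synthetic arrays; the real DisTrad fit, data, and residual handling differ in detail):

```python
import numpy as np

def distrad_like_sharpen(lst_coarse, ndvi_coarse, ndvi_fine, factor):
    """Fit LST ~ NDVI at the coarse scale, predict at the fine scale,
    then redistribute the coarse residuals (assumed scale-invariant)."""
    a, b = np.polyfit(ndvi_coarse.ravel(), lst_coarse.ravel(), 1)
    resid = lst_coarse - (a * ndvi_coarse + b)       # coarse residuals ΔT_L
    resid_fine = np.kron(resid, np.ones((factor, factor)))
    return a * ndvi_fine + b + resid_fine            # unmixed LST + residuals

# Hypothetical scene: 100 x 100 fine NDVI, aggregated 5x to the coarse scale.
rng = np.random.default_rng(5)
ndvi_fine = rng.random((100, 100))
ndvi_coarse = ndvi_fine.reshape(20, 5, 20, 5).mean(axis=(1, 3))
lst_coarse = 320.0 - 15.0 * ndvi_coarse + rng.normal(0, 0.5, (20, 20))
lst_fine = distrad_like_sharpen(lst_coarse, ndvi_coarse, ndvi_fine, factor=5)
```

ATPRK and AATPRK replace the simple residual redistribution above with area-to-point kriging of the residuals (Equation (9) referenced in the Figure 22 caption below).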
Figure 1. Main unmixing steps of the seven empirical techniques presented in this work (linear case example). (a) Linear regression (red line) at the coarse scale L and estimation of the coarse residuals ΔT_L. (b) Unmixed Land Surface Temperature (LST) estimation without residual correction, T̃_η. (c) Residual correction of T̃_η with the fine-scale residuals ΔT_η.
Figure 2. Gray pixels are the N pixels x_{L,i} used to compute the residual of the green pixel x_η. The number of coarse pixels N can vary (in this example N = 25). The same N coarse pixels are used to compute the residuals of all the fine pixels within the center coarse one (gridded in the example). However, the weights (λ_i) can differ for each fine pixel, as the distances vary.
Figure 3. Madrid Airborne Hyperspectral Scanner (AHS) RGB image from Getafe to Universidad Autónoma at 4 m resolution. The red rectangles correspond to the regions shown in the results section for visual analysis.
Figure 4. Preprocessing procedures to obtain, from AHS radiances, satellite reflectances in the shortwave range and land surface temperatures in the Thermal InfraRed (TIR). Green lines correspond to the shortwave processing and red lines to the thermal processing.
Figure 5. Madrid city center land surface temperature. From left to right: 20 m, 40 m, 60 m, 80 m, and 100 m resolutions.
Figure 6. Atocha train station and Gran Via avenue land surface temperature. From left to right: 20 m, 40 m, 60 m, 80 m, and 100 m resolutions.
Figure 7. Madrid city center surface temperature at 20 m resolution. For the unmixed temperatures, the initial coarse scale was 60 m.
Figure 8. Madrid city center surface temperature at 20 m resolution. For the unmixed temperatures, the initial coarse scale was 100 m.
Figure 9. Madrid Atocha train station surface temperature at 20 m resolution. For the unmixed temperatures, the initial coarse scale was 60 m.
Figure 10. Madrid Atocha train station surface temperature at 20 m resolution. For the unmixed temperatures, the initial coarse scale was 100 m.
Figure 11. Madrid old town surface temperature at 20 m resolution. For the unmixed temperatures, the initial coarse scale was 60 m.
Figure 12. Madrid old town surface temperature at 20 m resolution. For the unmixed temperatures, the initial coarse scale was 100 m.
Figure 13. Gran Via avenue surface temperature at 20 m resolution. For the unmixed temperatures, the initial coarse scale was 60 m.
Figure 14. Gran Via avenue surface temperature at 20 m resolution. For the unmixed temperatures, the initial coarse scale was 100 m.
Figure 15. Top: boxplots of the temperature distribution at 20 m resolution for several unmixing techniques (T̂_20m) and the reference (T_20m): (a) unmixing from 60 m to 20 m and (b) unmixing from 100 m to 20 m. Bottom: boxplots of T_20m − T̂_20m: (c) unmixing from 60 m to 20 m and (d) unmixing from 100 m to 20 m. In all panels, the boxplots correspond, from left to right, to: reference temperature at 20 m, DisTrad, Area-To-Point Regression Kriging (ATPRK), Adaptive Area-To-Point Regression Kriging (AATPRK), K-means and DisTrad, index classification and DisTrad, bi-linear and HUTS.
Figure 16. LST vs. shortwave index for the 14 tested indices at 20 m resolution. Red lines are the linear regressions of the point clouds. (a) EVI: R² = 0.36, (b) EVI2: R² = 0.40, (c) NDVI: R² = 0.42, (d) SAVI: R² = 0.36, (e) NDBI: R² = 0.37, (f) FC: R² = 0.42, (g) VC: R² = 0.41, (h) MSR: R² = 0.41, (i) RDVI: R² = 0.39, (j) WDRVI: R² = 0.42, (k) SR: R² = 0.37, (l) PISI: R² = 0.23, (m) NBI: R² = 0.27, (n) BRBA: R² = 0.14.
Figure 17. Evolution across scales of the linear regression parameters for the LST-NDVI relationship.
Figure 18. Evolution across scales of the linear regression parameters for the LST-NDBI relationship.
Figure 19. Evolution across scales of the linear regression parameters for the LST-FC relationship.
Figure 20. Madrid city center T̃ (LST without residual correction) at 20 m resolution. For the unmixed temperatures, the initial coarse scale was 60 m.
Figure 21. Madrid city center T̃ (LST without residual correction) at 20 m resolution. For the unmixed temperatures, the initial coarse scale was 100 m.
Figure 22. Boxplots of the residuals of the linear regression of T vs. the Normalized Difference Built-up Index (NDBI). From left to right, the coarse scale of the unmixing is 40 m (a), 60 m (b), 80 m (c), and 100 m (d); the fine scale is always 20 m. The left boxplot of each figure characterizes the distribution of the residuals measured at 20 m without unmixing and is considered the reference. DisTrad1 corresponds to the residuals obtained using Equation (8), DisTrad2 to the residuals obtained with Equation (7), and ATPRK to those obtained with Equation (9).
33 pages, 40507 KiB  
Article
Net Cloud Thinning, Low-Level Cloud Diminishment, and Hadley Circulation Weakening of Precipitating Clouds with Tropical West Pacific SST Using MISR and Other Satellite and Reanalysis Data
by Terence L. Kubar and Jonathan H. Jiang
Remote Sens. 2019, 11(10), 1250; https://doi.org/10.3390/rs11101250 - 27 May 2019
Cited by 5 | Viewed by 4404
Abstract
Daily gridded Multi-Angle Imaging Spectroradiometer (MISR) satellite data are used in conjunction with CERES, TRMM, and ERA-Interim reanalysis data to investigate horizontal and vertical high cloud structure, top-of-atmosphere (TOA) net cloud forcing and albedo, and dynamics relationships against local SST and precipitation as a function of the mean Tropical West Pacific (TWP; 120°E to 155°W; 30°S–30°N) SST. As the TWP warms, the SST mode (~29.5 °C) is constant, but the area of the mode grows, indicating increased kurtosis of SSTs and decreased SST gradients overall. This is associated with weaker low-level convergence and mid-tropospheric ascent (ω500) over the highest SSTs as the TWP warms, but also a broader area of weak ascent away from the deepest convection, albeit stronger than when the mean TWP is cooler. These dynamics changes are collocated with less anvil and thick cloud cover over the highest SSTs and a similar thin cold cloud fraction when the TWP is warmer, but broadly more anvil and cirrus clouds over lower local SSTs (SST < 27 °C). For all TWP SST quintiles, anvil cloud fraction, defined as clouds with tops > 9 km and TOA albedos between 0.3 and 0.6, is closely associated with rain rate, making it an excellent proxy for precipitation; but for a given heavier rain rate, cirrus clouds are more abundant with increasing domain-mean TWP SST. Clouds locally over SSTs between 29–30 °C have a much less negative net cloud forcing, up to 25 W m⁻² greater, when the TWP is warm versus cool. When the local rain rate increases, the net cloud fraction with tops < 9 km decreases while mid-level clouds (4 km < Ztop < 9 km) modestly increase. In contrast, combined low-level and mid-level clouds decrease as the domain-wide SST increases (−10% deg⁻¹). More cirrus clouds for heavily precipitating systems exert a stronger positive TOA effect when the TWP is warmer, and anvil clouds over a higher TWP SST are less reflective and have a weaker cooling effect. For all precipitating systems, total high cloud cover increases modestly with higher TWP SST quintiles, and anvil + cirrus clouds are more expansive, suggesting more detrainment when TWP SSTs are higher. Total-domain anvil cloud fraction scales mostly with domain-mean ω500, but cirrus clouds mostly increase with domain-mean SST, invoking an explanation other than circulation. The overall thinning and greater top-heaviness of clouds over the TWP with warming are possible TWP positive feedbacks not previously identified.
(This article belongs to the Special Issue MISR)
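The cloud classes used throughout (thick, anvil, cirrus) are defined by simple thresholds on cloud-top height and TOA albedo, so the classification itself reduces to boolean masking. Here is a hedged sketch on invented gridded fields; the array shapes and random values are placeholders, and only the thresholds come from the abstract:

```python
import numpy as np

# Hypothetical 1-degree gridded fields: cloud-top height (km) and TOA albedo.
rng = np.random.default_rng(6)
z_top = rng.uniform(0.0, 16.0, (60, 95))
albedo = rng.uniform(0.05, 0.9, (60, 95))

# High clouds: tops > 9 km, split by TOA albedo as in the abstract.
high = z_top > 9.0
cirrus = high & (albedo < 0.3)
anvil = high & (albedo >= 0.3) & (albedo <= 0.6)  # the precipitation proxy
thick = high & (albedo > 0.6)

anvil_cf = anvil.mean()  # domain anvil cloud fraction
```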
Figure 1. (a) Joint PDF of 1° × 1° TOA cloud albedo versus CERES high cloud optical depth τ for raining clouds (as defined by collocated TRMM grids), for grids in which the MISR cloud fraction > 0.2 above 7 km and the high-topped cloud fraction > the low-topped cloud fraction. Mean and median values are shown for each bin. (b) Same as (a), except the joint PDF of CERES net cloud forcing versus MISR high cloud albedo. The red dashed line denotes zero net cloud forcing, and the black line depicts the cumulative PDF.
Figure 2. Map of mean (a) SST, (b) MISR high cloud fraction (tops > 9 km), and (c) MISR low + middle cloud fraction (tops < 9 km) during times when the mean TWP SST is in the first quintile (26.70 °C < SST_TWP < 27.09 °C). (d–f): Same as (a–c), except during times when the mean TWP SST is in the second quintile (27.09 °C < SST_TWP < 27.29 °C). (g–i): Same as (d–f), except for times when the mean TWP SST is in the third SST quintile (27.29 °C < SST_TWP < 27.41 °C). (j–l): Same as (g–i), except for times when the mean TWP SST is in the fourth quintile (27.41 °C < SST_TWP < 27.51 °C). (m–o): Same as (j–l), except during times when the mean TWP SST is in the fifth SST quintile (27.51 °C < SST_TWP < 27.88 °C). The MISR CF in the middle and right columns is conditional for daily raining grids only.
Figure 3. (a) Fraction of raining grids, (b) cirrus fraction of total high cloud fraction, and (c) TOA cloud albedo during times when the mean TWP SST is in the first SST quintile (26.70 °C < SST_TWP < 27.09 °C). (d–f): Same as (a–c), except during times when the mean TWP SST is in the second SST quintile (27.09 °C < SST_TWP < 27.29 °C). (g–i): Same as (d–f), except during times when the mean TWP SST is in the third quintile (27.29 °C < SST_TWP < 27.41 °C). (j–l): Same as (g–i), except during times when the mean TWP SST is in the fourth quintile (27.41 °C < SST_TWP < 27.51 °C). (m–o): Same as (j–l), except during times when the mean TWP is in the fifth quintile (27.51 °C < SST_TWP < 27.88 °C). The MISR cloud properties in the middle and right columns are conditional for daily raining grids only.
Figure 4. (a) ω500 (mb/day) and (b) divergence at 850 hPa (Div850) (×10⁻⁶ s⁻¹) during times when the mean TWP SST is in the first quintile (26.70 °C < SST_TWP < 27.09 °C). (c,d): Same as (a,b), except during times when the mean TWP SST is in the second quintile (27.09 °C < SST_TWP < 27.29 °C). (e,f): Same as (c,d), except during times when the mean TWP SST is in the third quintile (27.29 °C < SST_TWP < 27.41 °C). (g,h): Same as (e,f), except during times when the mean TWP SST is in the fourth quintile (27.41 °C < SST_TWP < 27.51 °C). (i,j): Same as (g,h), except during times when the mean TWP SST is in the fifth quintile (27.51 °C < SST_TWP < 27.88 °C). All panels are conditional for daily raining grids only.
Figure 5. (a) Distribution of local SSTs over the TWP for raining grids as a function of the TWP SST quintiles in Figure 2, as described in the text. (b) MISR total high cloud fraction versus local SST for raining grids as a function of the first, third, and fifth TWP SST quintiles, along with standard error bars showing the 90% confidence intervals. Dashed lines are for cirrus cloud fraction only. (c) Same as (a), except for SSTs as a function of SST quintiles in the northern hemisphere only. (d) Same as (b), except for clouds in the northern hemisphere TWP portion only. (e) and (f) are the same as (c) and (d), respectively, except for the southern hemispheric portion of the TWP.
Figure 6. (a–e): Contours of anvil + cirrus cloud fraction as a function of local TRMM rain rate and local SST for each of the five TWP SST quintiles. Each panel has 25 rain rate by 25 SST categories. (f) Cirrus + anvil CF versus local SST for the lowest, middle, and highest TWP SST quintiles, averaged over all rain rates. Thick dashed lines depict cirrus CF only. In (f), standard error bars for the highest and lowest TWP SST quintiles represent the 90% confidence interval.
Figure 7. (a–e): Contours of TOA cloud forcing as a function of local TRMM rain rate and local SST for each of the five TWP SST quintiles, for the (a) first, (b) second, (c) third, (d) fourth, and (e) fifth mean TWP SST quintile. (f) TOA forcing versus local SST averaged over all rain rates, with standard error bars showing the 90% confidence interval for the lowest, middle, and highest TWP SST quintiles.
Figure 8. (a–e): Contours of ω500 (mb/day) from ERA-Interim as a function of local TRMM rain rate and local SST for each of the five TWP SST quintiles, for the (a) first, (b) second, (c) third, (d) fourth, and (e) fifth mean TWP SST quintile. Results for local SSTs above 20 °C are shown. In panel (f), standard error bars represent the 90% confidence interval for the lowest and highest TWP SST quintiles at each local SST.
Figure 9. (a) MISR local thick CF vs. local rain rate for all five TWP SST quintiles. (b) Same as (a), except for anvil CF. (c) Same as (a) and (b), except for cirrus CF. (d) MISR TOA albedo versus local rain rate. (e) MISR TOA albedo versus local SST. (f) 925 hPa divergence (×10⁻⁶ s⁻¹) vs. local SST. Note that the rain rate axis in (a–d) is logarithmic. 90% confidence intervals are shown in panels (a,c–f).
Figure 10. Z-albedo histograms of MISR CF for all five TWP SST quintiles for light, moderate, and heavy rain cloud systems. (a–c) Histograms when the mean TWP SST is in the first quintile (26.70 °C < SST_TWP < 27.09 °C) for (a) light, (b) moderate, and (c) heavy rain cloud systems. (d–f) Same as (a–c), except when the mean TWP SST is in the second quintile (27.09 °C < SST_TWP < 27.29 °C). (g–i) Same as (d–f), except when the mean TWP SST is in the third quintile (27.29 °C < SST_TWP < 27.41 °C). (j–l) Same as (g–i), except when the mean TWP SST is in the fourth quintile (27.41 °C < SST_TWP < 27.51 °C). (m–o): Same as (g–i), except when the mean TWP SST is in the fifth quintile (27.51 °C < SST_TWP < 27.88 °C). Total cloud fraction is given in each panel.
Figure 11. Z-albedo histograms of TOA net cloud forcing (W m⁻²) for all five TWP SST quintiles for light, moderate, and heavy rain cloud systems. (a–c) Histograms when the mean TWP SST is in the first quintile (26.70 °C < SST_TWP < 27.09 °C) for (a) light, (b) moderate, and (c) heavy rain cloud systems. (d–f) Same as (a–c), except when the mean TWP SST is in the second quintile (27.09 °C < SST_TWP < 27.29 °C). (g–i) Same as (d–f), except when the mean TWP SST is in the third quintile (27.29 °C < SST_TWP < 27.41 °C). (j–l) Same as (g–i), except when the mean TWP SST is in the fourth quintile (27.41 °C < SST_TWP < 27.51 °C). (m–o): Same as (g–i), except when the mean TWP SST is in the fifth quintile (27.51 °C < SST_TWP < 27.88 °C). α_cloud is from CERES, and the cloud area fraction and effective heights from CERES SYN1DEG are used to construct each histogram; the partial forcings in each histogram add up to the total net cloud forcing in each precipitation category and large-scale TWP SST quintile, given in each panel.
Figure 12. (a) Cloud fraction profiles for thick (solid), thick + anvil (long dashed), and all clouds (short dashed) for lightly raining clouds for the first, third, and fifth TWP SST quintiles. Here, thick refers to any cloud height with an albedo > 0.6, and thick + anvil to any cloud height with an albedo > 0.3. (b) CERES TOA cloud forcing for lightly raining clouds for all five mean TWP SST quintiles. (c) Same as (a), except for moderately raining clouds. (d) Same as (b), except for moderately raining clouds. (e) Same as (a) and (c), except for heavily raining clouds. (f) Same as (b) or (d), except for heavily raining clouds.
Figure 13. Domain-mean properties versus either the total domain-mean TWP SST or the raining-only TWP SST for (a) TRMM rain rates (zero rain rates inclusive) (mm day⁻¹), (b) TRMM rain rates where it is raining, (c) MISR high CF (raining and non-raining), (d) MISR high CF where it is raining, and (e) MISR high cirrus CF. (f) Same as (e), except for raining grids only; (g) same as (f), except for low- plus middle-topped clouds; and (h) same as (g), except for MISR TOA cloud albedo. Linear fit slopes are given in panels where the null hypothesis of a zero slope can be rejected at the 99% confidence interval, and the percent change per degree SST (deg⁻¹) is also given in each panel.
Figure 14. (a) Joint PDF of high cloud detrainment ratio and ω500 for the first two TWP SST quintiles, collected from the 1° × 1° time-averaged raining map grids during the large-scale SST conditions. The detrainment ratio is estimated from Equation (2) in the text. (b) Same as (a), except for the upper two TWP SST quintiles. (c) Domain-averaged anvil CF versus 100 bins of mean raining (TWP SST, ω500). (d) Same as (c), except for domain-averaged cirrus CF.
19 pages, 12438 KiB  
Article
Evaluation of Environmental Influences on a Multi-Point Optical Fiber Methane Leak Monitoring System
by Claudio Floridia, Joao Batista Rosolem, João Paulo Vicentini Fracarolli, Fábio Renato Bassan, Rivael Strobel Penze, Larissa Maria Pereira and Maria Angélica Carmona da Motta Resende
Remote Sens. 2019, 11(10), 1249; https://doi.org/10.3390/rs11101249 - 27 May 2019
Cited by 8 | Viewed by 3770
Abstract
A novel system to monitor fugitive methane emissions was developed using passive optical sensors to serve the natural gas production and transportation industry. The system is based on optical time-domain reflectometry and direct optical absorption spectroscopy. The system was tested in [...] Read more.
A novel system to monitor fugitive methane emissions was developed using passive optical sensors to serve the natural gas production and transportation industry. The system is based on optical time-domain reflectometry and direct optical absorption spectroscopy. The system was tested in a gas compressor station for four months and was able to measure methane concentration at two points, showing its correlation with meteorological data, especially wind velocity and local temperature. Methane concentrations varied from 2.5% to 15% at the first monitored point (sensor 1) and from 5% to 30% at the second point (sensor 2). Both sensors exhibited a moderate negative correlation with wind velocity, with a mean Pearson coefficient of −0.61, despite the external cap designed to avoid the influence of wind. Sensor 2 received a modification to its external package that reduced this mean correlation coefficient to −0.30, considered weak to negligible. Regarding temperature, a moderate mean correlation of −0.59 was verified for sensor 1, and zero mean correlation was found for sensor 2. Based on these results, the system proved robust enough for installation in gas transportation or processing facilities. Full article
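For readers who want to reproduce this kind of analysis, a minimal sketch follows, assuming the methane and weather-station series have already been resampled onto a common time base; the smoothing window and all numbers are illustrative placeholders, not values from the paper.

```python
import numpy as np

def moving_average(x, window):
    """Trailing moving average used to smooth noisy sensor data."""
    kernel = np.ones(window) / window
    return np.convolve(x, kernel, mode="valid")

def pearson_r(a, b):
    """Pearson correlation coefficient between two equal-length series."""
    return np.corrcoef(np.asarray(a, float), np.asarray(b, float))[0, 1]

# Illustrative data: methane concentration (%) and wind velocity (m/s),
# sampled at the same instants (synthetic values, not from the paper).
rng = np.random.default_rng(0)
wind = rng.uniform(0.5, 6.0, size=500)
ch4 = 20.0 - 2.5 * wind + rng.normal(0.0, 2.0, size=500)  # anti-correlated by construction

window = 40  # cf. the 10-, 20-, and 40-sample moving averages tested in the paper
r = pearson_r(moving_average(ch4, window), moving_average(wind, window))
print(f"Pearson r between smoothed CH4 and wind: {r:.2f}")  # strongly negative here
```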
(This article belongs to the Section Atmospheric Remote Sensing)
Show Figures

Graphical abstract
Figure 1">
Figure 1
Methane absorption bands (hypothetical 5 cm cell with pure methane at 50 torr) compared with optical fiber attenuation expressed in dB/km. The best absorption band for fiber-optic application is indicated.
Figure 2
Proposed system for monitoring multiple points with a single optical fiber. The system comprises an interrogation unit, the optical cable, and the sensors. At the top of the figure are typical OTDR traces when the laser diode (LD) is tuned outside (blue curve) and inside (red curve) the absorption line of methane.
Figure 3
Laboratory results for a setup in which a pure methane cell with an optical path length of 5.5 cm at 50 torr pressure is placed 4 km from the interrogation unit: (a) collected OTDR traces, varying the wavelength from 1645.50 to 1645.60 nm; (b) the obtained attenuation at the distance of 4 km.
Figure 4
Absorption spectrum of 1% methane and pure water vapor (multiplied by 100) for a 10 cm optical path length. LD wavelengths in and out of the methane absorption band are also shown.
Figure 5
Example of real field results based on the proposed system. The reference OTDR trace (blue curve) and measurement trace (red curve) correspond to the LD wavelength tuned outside and inside the absorption line of methane, respectively. Sensors are placed at ~5 km and ~10 km from the interrogation unit. LD inside the absorption line: 1645.55 nm; LD outside the absorption line: 1645.70 nm.
Figure 6
Interrogation unit: top view of electrical and optical components.
Figure 7
Sensor elements: (a) schematic of the compact optical gas cell; (b) view of the actual compact optical gas cell used in the prototypes, 50 mm long and 5 mm in diameter.
Figure 8
Developed sensor units: (a) sensor head for flange application; (b) sensor head for vent valve application; (c) sensor head with external package for flange application; (d) sensor head with external package for vent valve application.
Figure 9
Calibration curve of the proposed system using the commercial DP-IR equipment from Heath Consultants.
Figure 10
Installation of the remote fiber-optic methane leak detection system and distance from a weather station: (a) view of the compressor station with the installation points indicated; (b) the ~8 km distance from the compressor station to the weather station (Google Maps 2019).
Figure 11
Sensor units installed at the two points of interest: (a) flange on a derivation of pressure instrumentation (sensor 1) and (b) vent valve (sensor 2).
Figure 12
In-field comparison of the proposed system with the commercial DP-IR equipment from Heath Consultants: (a) view of the performed test, with the gas inlet of the commercial equipment just above the compact sensor; (b) test results showing good agreement between the two systems.
Figure 13
Comparison between the methane concentrations for sensors S1 and S2 and meteorological data.
Figure 14
Increasing moving averages for observing the correlation between concentration data and climatic data: (a) untreated concentration data; (b) 10-sample moving average applied to sensor 1 data; (c) 20-sample moving average; and (d) 40-sample moving average.
Figure 15
Examples of dispersion diagrams with different values of r (correlation coefficient) [19].
Figure 16
Correlation between the concentration measured by sensors 1 and 2 and the wind velocity obtained from the climatic station for the 3rd data set, from May 15 to June 1, 2017: (a) sensor 1; (b) sensor 2.
Figure 17
Correlation between the concentration measured by sensor 2 and the wind velocity obtained from the climatic station: (a) 3rd data set, from May 15 to June 1, 2017; (b) 4th data set, from June 6 to June 24; (c) modification introduced in sensor 2 to reduce wind action.
Figure 18
Correlation between the concentration measured by sensors 1 and 2 and the temperature: (a) sensor 1 (moderate correlation) and (b) sensor 2 (negligible correlation).
Figure 19
Correlation of temperature and humidity: (a) temporal evolution of both meteorological parameters; (b) Pearson correlation parameter.
Figure 20
Comparison between the concentration measured by the sensors and the rainfall (pluviometry) expressed in mm.
Figure 21
View of the sensors in the presence of rain: (a) sensor 1; (b) sensor 2; (c) sensor 2 opened for verification.
22 pages, 19582 KiB  
Article
Higher-Order Conditional Random Fields-Based 3D Semantic Labeling of Airborne Laser-Scanning Point Clouds
by Yong Li, Dong Chen, Xiance Du, Shaobo Xia, Yuliang Wang, Sheng Xu and Qiang Yang
Remote Sens. 2019, 11(10), 1248; https://doi.org/10.3390/rs11101248 - 27 May 2019
Cited by 13 | Viewed by 3274
Abstract
This paper presents a novel framework to achieve 3D semantic labeling of objects (e.g., trees, buildings, and vehicles) from airborne laser-scanning point clouds. To this end, we propose a framework which consists of hierarchical clustering and higher-order conditional random fields (CRF) labeling. In [...] Read more.
This paper presents a novel framework to achieve 3D semantic labeling of objects (e.g., trees, buildings, and vehicles) from airborne laser-scanning point clouds. To this end, we propose a framework which consists of hierarchical clustering and higher-order conditional random fields (CRF) labeling. In the hierarchical clustering, the raw point clouds are over-segmented into a set of fine-grained clusters by integrating point-density clustering and the classic K-means clustering algorithm, followed by the proposed probability density clustering algorithm. Through this process, we not only obtain clusters of more uniform size and greater semantic homogeneity, but also implicitly maintain the topological relationships of each cluster's neighborhood by turning the problem of topology maintenance into a clustering problem solved with the proposed probability density clustering algorithm. Subsequently, the fine-grained clusters and their topological context are fed into the CRF labeling step, in which the fine-grained clusters' semantic labels are learned and determined by solving a multi-label energy minimization formulation that simultaneously considers unary, pairwise, and higher-order potentials. Our experiments classifying urban and residential scenes demonstrate that the proposed approach reaches 88.5% and 86.1% "mF1", estimated by averaging the F1-scores over all classes. We show that the proposed method outperforms five other state-of-the-art methods. In addition, we demonstrate the effectiveness of the proposed energy terms by using an "ablation study" strategy. Full article
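As a rough illustration of the over-segmentation stage (coarse density-based clusters, here DBSCAN as named in the paper's Figure 3, refined with K-means into roughly equal-sized fine-grained clusters), here is a minimal sketch with scikit-learn; the eps, min_samples, and target cluster size are illustrative, not the paper's settings.

```python
import numpy as np
from sklearn.cluster import DBSCAN, KMeans

def oversegment(points, eps=3.0, min_samples=10, target_size=100):
    """Coarse DBSCAN clusters, each refined with K-means into fine-grained clusters."""
    coarse = DBSCAN(eps=eps, min_samples=min_samples).fit_predict(points)
    fine = np.full(len(points), -1)
    next_label = 0
    for c in np.unique(coarse):
        if c == -1:                              # DBSCAN noise: leave unlabeled
            continue
        idx = np.where(coarse == c)[0]
        k = max(1, len(idx) // target_size)      # aim for ~target_size points per cluster
        sub = KMeans(n_clusters=k, n_init=10).fit_predict(points[idx])
        fine[idx] = sub + next_label
        next_label += k
    return fine

# Example: 1000 random 3D points as a stand-in for an ALS tile.
pts = np.random.default_rng(1).uniform(0, 50, size=(1000, 3))
labels = oversegment(pts)
print(len(np.unique(labels[labels >= 0])), "fine-grained clusters")
```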
Show Figures

Figure 1
The flowchart of the proposed method.
Figure 2
Two scenes for assessing the proposed algorithm. (a,d) The residential and urban scenes in Tianjin. (b,e) The points with semantic labels selected from (a,d) for training. The corresponding references are shown in subfigures (c,f). Please note that the point clouds in (a,d) are rendered according to elevation, while the other colors represent semantic information, i.e., blue = trees, green = buildings, and red = vehicles.
Figure 3
Coarse-grained cluster generation using the DBSCAN algorithm. (a) The raw point clouds. (b) The clustering results after using DBSCAN. (c) The reference data. Please note that in (a) the point clouds are colored by elevation. Different colors in (b) represent different clusters; a few colors have been reused, so different disjoint clusters may share the same color. The semantic colors in (c) comply with the color code, i.e., green = trees, brown = buildings, and red = vehicles.
Figure 4
Fine-grained cluster generation using the K-means algorithm. (a) The coarse-grained clusters generated by DBSCAN. (b) The fine-grained clusters obtained by the K-means algorithm. Please note that different colors represent different clusters; a few colors have been reused, so different disjoint clusters may share the same color.
Figure 5
Neighborhood topology maintenance between the fine-grained clusters. (a) The initial coarse labels. (b) The results of the proposed probability density clustering. (c) The inclusive relationship between the clusters generated by the proposed probability density clustering and the fine-grained clusters, shown clearly by the overlap of the two kinds of clusters in the enlarged view.
Figure 6
Comparison of the performance of six methods in classifying a residential scene. Subfigures (a–e) are the results generated by the state-of-the-art Methods 1 to 5. Our results are shown in (f). The color legend is defined in Figure 2.
Figure 7
Comparison of the performance of six methods in classifying an urban scene. Subfigures (a–e) are the results generated by the state-of-the-art Methods 1 to 5. Our results are shown in (f). The color legend is defined in Figure 2.
18 pages, 5806 KiB  
Article
Mapping Urban Extent at Large Spatial Scales Using Machine Learning Methods with VIIRS Nighttime Light and MODIS Daytime NDVI Data
by Xue Liu, Alex de Sherbinin and Yanni Zhan
Remote Sens. 2019, 11(10), 1247; https://doi.org/10.3390/rs11101247 - 27 May 2019
Cited by 35 | Viewed by 8623
Abstract
Urbanization poses significant challenges to sustainable development, disaster resilience, climate change mitigation, and environmental and resource management. Accurate urban extent datasets at large spatial scales are essential for researchers and policymakers to better understand urbanization dynamics and its socioeconomic drivers and impacts. While [...] Read more.
Urbanization poses significant challenges to sustainable development, disaster resilience, climate change mitigation, and environmental and resource management. Accurate urban extent datasets at large spatial scales are essential for researchers and policymakers to better understand urbanization dynamics and its socioeconomic drivers and impacts. While high-resolution urban extent data products - including the Global Human Settlements Layer (GHSL), the Global Man-Made Impervious Surface (GMIS), the Global Human Built-Up and Settlement Extent (HBASE), and the Global Urban Footprint (GUF) - have recently become available, intermediate-resolution urban extent data products, including the 1 km SEDAC Global Rural-Urban Mapping Project (GRUMP), MODIS 1 km, and MODIS 500 m products, still have many users and have been demonstrated in a recent study to be more appropriate for urbanization process analysis (around 500 m resolution) than those at higher resolutions (30 m). The objective of this study is to improve large-scale urban extent mapping at an intermediate resolution (500 m) using machine learning methods that combine the complementary nighttime Visible Infrared Imaging Radiometer Suite (VIIRS) and daytime Moderate Resolution Imaging Spectroradiometer (MODIS) data, taking the conterminous United States (CONUS) as the study area. The effectiveness of commonly used machine learning methods, including random forest (RF), gradient boosting machine (GBM), neural network (NN), and their ensemble (ESB), was explored. Our results show that these machine learning methods can achieve similarly high accuracies across all accuracy metrics (>95% overall accuracy, >98% producer's accuracy, and >92% user's accuracy) with Kappa coefficients greater than 0.90, which had not been achieved by the existing data products or by previous studies; the ESB does not produce significantly better accuracies than the individual machine learning methods; and the total misclassifications generated by GBM exceed those generated by RF, NN, and ESB by 14%, 16%, and 11%, respectively, with NN having the fewest total misclassifications. This indicates that, using these machine learning methods, especially NN and RF, with the combination of VIIRS nighttime light and MODIS daytime normalized difference vegetation index (NDVI) data, high-accuracy intermediate-resolution urban extent data products at large spatial scales can be achieved. The methodology has the potential to be applied to annual continental-to-global scale urban extent mapping at intermediate resolutions. Full article
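A compressed sketch of the classifier comparison described above, using scikit-learn stand-ins; the paper's exact features, tuning, and NN architecture are not reproduced, and the two-band feature stack (nighttime light plus NDVI) is the only ingredient taken from the abstract. All data below are synthetic.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier, GradientBoostingClassifier, VotingClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import cohen_kappa_score
from sklearn.model_selection import train_test_split

# Hypothetical per-pixel features: [VIIRS radiance, MODIS NDVI]; label 1 = urban.
rng = np.random.default_rng(42)
n = 5000
X = np.column_stack([rng.gamma(2.0, 5.0, n), rng.uniform(-0.1, 0.9, n)])
y = ((X[:, 0] > 12) & (X[:, 1] < 0.4)).astype(int)  # toy rule: bright and sparsely vegetated

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

models = {
    "RF": RandomForestClassifier(n_estimators=200, random_state=0),
    "GBM": GradientBoostingClassifier(random_state=0),
    "NN": MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=1000, random_state=0),
}
# Hard-voting ensemble over the three base models, mirroring the ESB idea.
models["ESB"] = VotingClassifier([(k, v) for k, v in models.items()], voting="hard")

for name, model in models.items():
    model.fit(X_tr, y_tr)
    print(name, "kappa =", round(cohen_kappa_score(y_te, model.predict(X_te)), 3))
```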
(This article belongs to the Special Issue Advances in Remote Sensing with Nighttime Lights)
Show Figures

Figure 1
The study area consisting of the 48 conterminous states of the US, with the urban extent data layer from the Global Rural–Urban Mapping Project (GRUMP) overlaid with state boundaries.
Figure 2
Illustration of urban and non-urban areas based on the definition of urban areas employed in this study. The size of each square is 1 km by 1 km: an urban area contains more than 50% built-up area (left), while a non-urban area contains less than 50% built-up area (right).
Figure 3
(Left) An example of the VIIRS nighttime light annual composite for the northeastern United States with stray light, lightning, lunar illumination, cloud cover, and gas flares removed (urban areas are characterized by brighter pixel clusters); (Right) an example of the MODIS NDVI annual composite for the northeastern United States with cloud contamination removed using the greenest-pixel method (urban areas are characterized by NDVI pixel clusters with lower positive values).
Figure 4
Urban and non-urban reference samples collected for the conterminous United States: (1) the cross "+" symbols indicate reference sample sites used solely for training; (2) the triangle "▲" symbols indicate reference sample sites used solely for accuracy assessment. The sample sites of (1) and (2) were collected in two separate steps and are therefore totally independent.
Figure 5
The workflow for data processing and machine learning prediction of urban extent.
Figure 6
Urban extent maps generated by random forest (RF), gradient boosting machine (GBM), neural network (NN), and their ensemble (ESB) for CONUS 2015: (a) whole urban extent maps generated by the four machine learning algorithms (a1–a4); (b) zoomed-in detailed comparison of the four urban extent maps (b1–b3).
Figure 7
Comparison between the 2015 urban extent generated by NN and the GRUMP urban extent 1995: (a1,b1,c1,d1) GRUMP urban extent 1995 (yellow); (a2,b2,c2,d2) NN-based urban extent 2015 (red).
Figure 8
Comparison between the 2015 urban extent generated by NN (red) and the GlobCover-extracted urban extent 2009 (yellow), with background imagery from 2017: (a) Baltimore; (b) Philadelphia.
1 pages, 156 KiB  
Correction
Correction: Zhang, M., et al. Estimation of Vegetation Productivity Using a Landsat 8 Time Series in a Heavily Urbanized Area, Central China. Remote Sens. 2019, 11, 133
by Meng Zhang, Hui Lin, Guangxin Wang, Hua Sun and Yaotong Cai
Remote Sens. 2019, 11(10), 1246; https://doi.org/10.3390/rs11101246 - 27 May 2019
Cited by 2 | Viewed by 2402
Abstract
The authors wish to make the following corrections to this paper [...] Full article
16 pages, 6849 KiB  
Article
Comparison of Normalized Difference Vegetation Index Derived from Landsat, MODIS, and AVHRR for the Mesopotamian Marshes Between 2002 and 2018
by Reyadh Albarakat and Venkataraman Lakshmi
Remote Sens. 2019, 11(10), 1245; https://doi.org/10.3390/rs11101245 - 25 May 2019
Cited by 52 | Viewed by 7338
Abstract
The Mesopotamian marshes are a group of water bodies located in southern Iraq, in the shape of a triangle, with the cities Amarah, Nasiriyah, and Basra located at its corners. The marshes are appropriate habitats for a variety of birds and most of [...] Read more.
The Mesopotamian marshes are a group of water bodies located in southern Iraq, in the shape of a triangle with the cities of Amarah, Nasiriyah, and Basra at its corners. The marshes are appropriate habitats for a variety of birds and for most of the commercial fisheries in the region. The normalized difference vegetation index (NDVI) has been derived using observations from various satellite sensors, namely the Moderate Resolution Imaging Spectroradiometer (MODIS), the Advanced Very-High-Resolution Radiometer (AVHRR), and Landsat, over the Mesopotamian marshlands for the 17-year period between 2002 and 2018. We chose this time series (2002–2018) to monitor the change in vegetation of the study area because it is considered a period of rehabilitation for the marshes (following a period when little to no water flowed into them). Statistical analyses were performed to monitor variability at the time of maximum biomass (the month of June). The results show a strong positive correlation between the NDVI derived from Landsat, MODIS, and AVHRR: the correlations were 0.79 between Landsat and AVHRR, 0.77 between MODIS and AVHRR, and 0.96 between Landsat and MODIS. The linear slope of NDVI (Landsat, MODIS, and AVHRR) for each pixel over the period 2002–2018 displays the long-term trend of green biomass (NDVI) change in the study area; the slope is slightly negative over most of the area, with values of −0.002 to −0.05 denoting a slight decrease in the observed vegetation index over the 17 years. Green biomass nevertheless increased over 33.2% of the total marshland area during this period. The areas of negative and positive slopes correspond to the same areas in the slope maps calculated from Landsat, MODIS, and AVHRR, although the maps differ in spatial resolution (30 m, 1 km, and 5 km, respectively). The time series of the average NDVI (2002–2018) for the three different sensors shows the highest and lowest NDVI values in the same years (for the month of June each year): the highest values were 0.19, 0.22, and 0.22 for Landsat, MODIS, and AVHRR, respectively, in 2006, and the lowest values were 0.09, 0.14, and 0.09, respectively, in 2003. Full article
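The per-pixel linear slope used above is just an ordinary least-squares fit of NDVI against time at every pixel. A minimal sketch, with array shapes and values that are purely illustrative:

```python
import numpy as np

# Hypothetical NDVI stack: 17 annual June composites on a 100 x 100 pixel grid.
years = np.arange(2002, 2019)                                   # shape (17,)
ndvi = np.random.default_rng(0).uniform(0.05, 0.25, size=(17, 100, 100))

# Least-squares slope per pixel: cov(t, NDVI) / var(t), vectorized over the grid.
t = years - years.mean()                                        # centered time axis
slope = np.tensordot(t, ndvi - ndvi.mean(axis=0), axes=(0, 0)) / (t ** 2).sum()

print("slope grid:", slope.shape)   # (100, 100), in NDVI units per year
print("mean slope:", slope.mean())  # near zero for this random stand-in data
```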
Show Figures

Graphical abstract
Figure 1">
Figure 1
False-color composite Enhanced Thematic Mapper Plus (ETM+) image of the Mesopotamian marshes for June 2018. Open water is shown in black and turquoise, vegetation in dark and light green, and barren areas in light brown, white, and gray.
Figure 2
Average normalized difference vegetation index (NDVI) over the 17-year period (2002–2018) for the three different sensors at their original spatial resolutions: 5 km, 1 km, and 30 m for the Land Long Term Data Record (LTDR) Advanced Very-High-Resolution Radiometer (AVHRR), the Moderate Resolution Imaging Spectroradiometer (MODIS), and Landsat 7 ETM+, respectively.
Figure 3
Time series of average NDVI from 2002 to 2018 for the three different sensors at their original spatial resolutions: 5 km, 1 km, and 30 m for LTDR AVHRR, MODIS, and Landsat 7 ETM+, respectively.
Figure 4
Average NDVI over the 17-year period (2002–2018) for Landsat 7 ETM+ and Terra MODIS NDVI resampled to the spatial resolution of AVHRR LTDR (5 km).
Figure 5
(a–c) Pearson's correlation analysis between NDVI from AVHRR, MODIS, and Landsat over 17 years (2002–2018): (a) Landsat–AVHRR, (b) MODIS–AVHRR, and (c) Landsat–MODIS.
Figure 6
(a–c) Comparison of annual average NDVI values from the three different sensors over 17 years: (a) Landsat versus AVHRR, (b) MODIS versus AVHRR, and (c) Landsat versus MODIS.
Figure 7
Temporal standard deviation for each pixel (2002–2018) for the three different sensors (AVHRR, MODIS, and Landsat).
Figure 8
Linear slope of NDVI (AVHRR, MODIS, and Landsat) for each pixel over the period 2002–2018, showing long-term trends of green biomass change in the Mesopotamian marshes.
Figure 9
Combined digital elevation model (DEM) image (30 × 30 m) with the increase in vegetation coverage (from the linear slope of the NDVI).
Figure 10
Correlation between vegetation (from the linear slope of the NDVI) and elevation.
Figure 11
Pixel-based significance of the linear slope over the period 2002–2018 for the Mesopotamian marshes from the three different sensors (AVHRR, MODIS, and Landsat).
25 pages, 4707 KiB  
Article
UAV and Ground Image-Based Phenotyping: A Proof of Concept with Durum Wheat
by Adrian Gracia-Romero, Shawn C. Kefauver, Jose A. Fernandez-Gallego, Omar Vergara-Díaz, María Teresa Nieto-Taladriz and José L. Araus
Remote Sens. 2019, 11(10), 1244; https://doi.org/10.3390/rs11101244 - 25 May 2019
Cited by 76 | Viewed by 8257
Abstract
Climate change is one of the primary culprits behind the stagnation of cereal crop yield gains. In order to address its effects, effort has been focused on understanding the interaction between genotypic performance and the environment. Recent advances in unmanned aerial [...] Read more.
Climate change is one of the primary culprits behind the stagnation of cereal crop yield gains. In order to address its effects, effort has been focused on understanding the interaction between genotypic performance and the environment. Recent advances in unmanned aerial vehicles (UAVs) have enabled the assembly of imaging sensors into precision aerial phenotyping platforms, so that a large number of plots can be screened effectively and rapidly. However, ground evaluations may still be an alternative in terms of cost and resolution. We compared the performance of red–green–blue (RGB), multispectral, and thermal data of individual plots, captured from the ground and from a UAV, in assessing genotypic differences in yield. Our results showed that crop vigor, together with the quantity and duration of the green biomass that contributed to grain filling, were critical phenotypic traits for the selection of germplasm better adapted to present and future Mediterranean conditions. In this sense, the use of RGB images is presented as a powerful and low-cost approach for assessing crop performance. For example, the broad-sense heritability of some RGB indices was clearly higher than that of grain yield under support irrigation (four times higher), rainfed (by 50%), and late-planting (by 10%) conditions. Moreover, there was no significant effect of platform proximity (distance between the sensor and the crop canopy) on the vegetation indices, and ground and aerial measurements performed similarly in assessing yield. Full article
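Broad-sense heritability, mentioned above, is commonly estimated on an entry-mean basis as H² = σ²_G / (σ²_G + σ²_e / r), with the genotypic and residual variance components taken from a one-way ANOVA across r replicates. A minimal sketch under that standard formulation; the data are made up and this is not the authors' pipeline:

```python
import numpy as np

def broad_sense_h2(values):
    """values: (n_genotypes, n_reps) matrix of a trait (e.g., an RGB index).
    Entry-mean broad-sense heritability H2 = s2_G / (s2_G + s2_e / r)."""
    g, r = values.shape
    grand = values.mean()
    ms_g = r * ((values.mean(axis=1) - grand) ** 2).sum() / (g - 1)              # genotype MS
    ms_e = ((values - values.mean(axis=1, keepdims=True)) ** 2).sum() / (g * (r - 1))
    s2_g = max((ms_g - ms_e) / r, 0.0)   # genotypic variance component, floored at zero
    return s2_g / (s2_g + ms_e / r)

rng = np.random.default_rng(3)
geno_effect = rng.normal(0, 1.0, size=(60, 1))                 # 60 genotypes
trait = 10 + geno_effect + rng.normal(0, 0.5, size=(60, 3))    # 3 replicates each
print(f"H2 = {broad_sense_h2(trait):.2f}")  # high here: genetic variance dominates
```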
Show Figures

Graphical abstract
Figure 1">
Figure 1
Cumulative monthly rainfall (blue line) and maximum, minimum, and mean temperatures (bars) in Colmenar de Oreja for the 2016–2017 crop cycle.
Figure 2
Red–green–blue (RGB) (A), false-color normalized difference vegetation index (NDVI) (B), and false-color thermal (C) orthomosaic examples corresponding to the late-planting trial during the heading stage at the third sampling visit. Both the multispectral and thermal mosaics are shown in false color: in the former, low NDVI values are colored red and high values green; in the latter, warmer temperatures are colored red and colder ones blue.
Figure 3
Pearson correlation coefficient heatmap of grain yield with the parameters measured from ground and aerial platforms throughout the different phenological stages and treatments. Correlations are scaled according to the key above and were computed across the 72 plots of each growing condition.
Figure 4
Relationships of grain yield with the RGB index green area (GA) (left), the multispectral index NDVI (middle), and canopy temperature (right), measured from ground level (red points) and aerial level (blue points) during grain filling for the supplementary irrigation (top), rainfed (middle), and late-planting (bottom) growing conditions. Correlations were computed across the 72 plots of each growing condition.
Figure 5
Bar graph comparing grain yield with the H² (orange) and H² × r_g (blue) indexes for a selection of indexes.
20 pages, 9161 KiB  
Article
A Multi-Primitive-Based Hierarchical Optimal Approach for Semantic Labeling of ALS Point Clouds
by Xuming Ge, Bo Wu, Yuan Li and Han Hu
Remote Sens. 2019, 11(10), 1243; https://doi.org/10.3390/rs11101243 - 24 May 2019
Cited by 8 | Viewed by 4054
Abstract
There are normally three main steps to carrying out the labeling of airborne laser scanning (ALS) point clouds. The first step is to use appropriate primitives to represent the scanning scenes, the second is to calculate the discriminative features of each primitive, and [...] Read more.
There are normally three main steps in labeling airborne laser scanning (ALS) point clouds: the first is to use appropriate primitives to represent the scanned scenes, the second is to calculate discriminative features for each primitive, and the third is to introduce a classifier to label the point clouds. This paper investigates multiple primitives to effectively represent scenes and exploit their geometric relationships. Relationships are graded according to the properties of the related primitives. Then, based on initial labeling results, a novel, hierarchical, and optimal strategy is developed to refine the semantic labeling results. The proposed approach was tested using two sets of representative ALS point clouds, namely the Vaihingen datasets and Hong Kong's Central District dataset, and the results were compared with those generated by other typical methods from previous work. Quantitative assessments for the two experimental datasets showed that the performance of the proposed approach was superior to the reference methods on both datasets: the correctness scores exceeded 98% in all cases of the Vaihingen datasets and reached up to 96% in the Hong Kong dataset. The results reveal that our approach to labeling different classes in ALS point clouds is robust and bears significance for future applications, such as 3D modeling and change detection from point clouds. Full article
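Correctness, the per-class score quoted above, is simply precision computed from the label confusion, with completeness (recall) as its usual companion. A minimal sketch; the class names and arrays are illustrative, not the paper's data:

```python
import numpy as np

def per_class_scores(y_true, y_pred, classes):
    """Correctness (precision) and completeness (recall) per class from label arrays."""
    for c in classes:
        tp = np.sum((y_pred == c) & (y_true == c))
        correctness = tp / max(np.sum(y_pred == c), 1)   # of points labeled c, fraction correct
        completeness = tp / max(np.sum(y_true == c), 1)  # of true c points, fraction recovered
        print(f"{c:>10s}  correctness={correctness:.3f}  completeness={completeness:.3f}")

classes = np.array(["ground", "roof", "tree", "facade", "other"])
rng = np.random.default_rng(7)
y_true = rng.choice(classes, size=10000)
# Simulate a labeler that is right 90% of the time and random otherwise.
y_pred = np.where(rng.random(10000) < 0.9, y_true, rng.choice(classes, size=10000))
per_class_scores(y_true, y_pred, classes)
```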
(This article belongs to the Special Issue Point Cloud Processing in Remote Sensing)
Show Figures

Figure 1
Framework of the proposed approach.
Figure 2
Multiple primitives to represent a scanning scene.
Figure 3
The strategy used in the proposed method to carry out a semantic check. (a) Point clouds in 3D space; the green and blue points represent tree and ground, respectively, and the red points represent a candidate semantic segment, e.g., plane segment A. (b) Point clouds in 2D space; the yellow circle shows the neighborhood within a cylinder radius. (c) Point clouds of Ā and a designed grid with a fixed step Δy. (d) The scanning line is not required to be parallel to the grid. (e) Points in one of the grid lines are reordered in a designed 2D system, and extreme points (e.g., points in the red circles) can be detected from the 2D curve.
Figure 4
Relationship definition of multi-primitives in the proposed method.
Figure 5
Test sites of the Vaihingen scene. From left to right: areas 1, 2, and 3 [34].
Figure 6
Multiple primitives used to split the point clouds. (a) Planar segments and semantic segments. (b) Segments and points.
Figure 7
3D view of the semantic labeling results for the three Vaihingen areas with five classes: ground (orange), building roof (blue), vegetation (green), façade (yellow), and other (red). (a) Area 1, (b) Area 2, and (c) Area 3.
Figure 8
Semantic labeling results and visualized evaluation of the three Vaihingen test sites. (a) Area 1, (b) Area 2, and (c) Area 3.
Figure 9
Two "missing" cases with areas larger than 50 m², in Area 2 and Area 3, shown in the red circles of (a) and (b), respectively.
Figure 10
Tiled image of the study area (red dotted square) in Hong Kong's Central District.
Figure 11
The training dataset and the semantic labeling results in 2D. The training dataset with manual labels is inside the red square; the region outside the square comprises the semantic labeling results with labels predicted by the proposed method.
Figure 12
The semantic labeling results of the Hong Kong dataset using the proposed method, in 3D.
Figure 13
Semantic labeling results and visualized evaluation of the Hong Kong test sites.
9 pages, 1298 KiB  
Communication
Remotely Sensed Vegetation Indices to Discriminate Field-Grown Olive Cultivars
by Giovanni Avola, Salvatore Filippo Di Gennaro, Claudio Cantini, Ezio Riggi, Francesco Muratore, Calogero Tornambè and Alessandro Matese
Remote Sens. 2019, 11(10), 1242; https://doi.org/10.3390/rs11101242 - 24 May 2019
Cited by 44 | Viewed by 4926
Abstract
The application of spectral sensors mounted on unmanned aerial vehicles (UAVs) assures high spatial and temporal resolutions. This research focused on canopy reflectance for cultivar recognition in an olive grove. The ability of 14 vegetation indices (VIs) calculated from reflectance [...] Read more.
The application of spectral sensors mounted on unmanned aerial vehicles (UAVs) assures high spatial and temporal resolutions. This research focused on canopy reflectance for cultivar recognition in an olive grove. The ability of 14 vegetation indices (VIs), calculated from reflectance in the green (520–600 nm), red (630–690 nm), and near-infrared (760–900 nm) bands after an image segmentation process, to recognize cultivars was evaluated in an open-field olive grove with 10 different scion/rootstock combinations (two scions by five rootstocks). Univariate (ANOVA) and multivariate (principal component analysis, PCA, and linear discriminant analysis, LDA) statistical approaches were applied. The efficacy of the VIs in scion recognition emerged clearly from all the approaches applied, whereas discrimination between rootstocks remained unclear. The LDA results confirmed the efficacy of applying the VIs to discriminate between scions with an accuracy of 90.9%, whereas recognition of rootstocks failed in more than 68.2% of cases. Full article
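As a rough sketch of the multivariate step, LDA can be run on a table of per-crown VI values with scion labels; scikit-learn's implementation with leave-one-out validation looks like this (features and labels below are synthetic placeholders, not the study's measurements):

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import LeaveOneOut, cross_val_score

# Hypothetical dataset: one row per olive crown, 14 VI columns, scion label 0/1.
rng = np.random.default_rng(5)
n_crowns = 110
X = rng.normal(0, 1, size=(n_crowns, 14))
y = rng.integers(0, 2, size=n_crowns)
X[y == 1] += 0.8  # give the second scion a separable VI signature for the demo

acc = cross_val_score(LinearDiscriminantAnalysis(), X, y, cv=LeaveOneOut()).mean()
print(f"leave-one-out accuracy: {acc:.1%}")
```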
(This article belongs to the Special Issue Remote Sensing for Agroforestry)
Show Figures

Graphical abstract
Figure 1">
Figure 1
Unmanned aerial vehicle (UAV) image processing flow: (a) orthomosaic; (b) digital surface model (DSM); (c) olive crown vegetation index extraction based on the canopy height model (blue numbers provide an example of the normalized difference vegetation index extracted per single crown).
Figure 2
Principal component analysis (PCA) table between vegetation indices and the different olive scion/rootstock combinations. Rootstocks: 1 = Carolea; 2 = Cipressino; 3 = Coratina; 4 = Frantoio; 5 = Leccino.
28 pages, 25877 KiB  
Article
An Adaptive Framework for Multi-Vehicle Ground Speed Estimation in Airborne Videos
by Jing Li, Shuo Chen, Fangbing Zhang, Erkang Li, Tao Yang and Zhaoyang Lu
Remote Sens. 2019, 11(10), 1241; https://doi.org/10.3390/rs11101241 - 24 May 2019
Cited by 41 | Viewed by 7061
Abstract
With the rapid development of unmanned aerial vehicles (UAVs), UAV-based intelligent airborne surveillance systems represented by real-time ground vehicle speed estimation have attracted wide attention from researchers. However, there are still many challenges in extracting speed information from UAV videos, including the dynamic [...] Read more.
With the rapid development of unmanned aerial vehicles (UAVs), UAV-based intelligent airborne surveillance systems, represented by real-time ground vehicle speed estimation, have attracted wide attention from researchers. However, there are still many challenges in extracting speed information from UAV videos, including the dynamic moving background, small target sizes, complicated environments, and diverse scenes. In this paper, we propose a novel adaptive framework for multi-vehicle ground speed estimation in airborne videos. Firstly, we build a UAV-based traffic dataset. Then, we use a deep learning detection algorithm to detect vehicles in the UAV field of view and obtain their trajectories in the image through a tracking-by-detection algorithm. Thereafter, we present a motion compensation method based on homography. This method obtains matching feature points by an optical flow method and eliminates the influence of the detected targets to accurately calculate the homography matrix and determine the real motion trajectories in the current frame. Finally, vehicle speed is estimated based on the mapping relationship between pixel distance and actual distance. The method regards the actual size of the car as prior information, adaptively recovers the pixel scale by estimating the vehicle size in the image, and then calculates the vehicle speed. In order to evaluate the performance of the proposed system, we carried out a large number of experiments on the AirSim simulation platform as well as real UAV aerial surveillance experiments. Through quantitative and qualitative analysis of the simulation results and real experiments, we verify that the proposed system has a unique ability to detect, track, and estimate the speed of ground vehicles simultaneously, even with a single downward-looking camera. Additionally, the system can obtain effective and accurate speed estimation results, even in various complex scenes. Full article
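The motion-compensation idea described above (estimate a homography from background matches, then warp the previous frame's trajectories into the current frame) can be sketched with OpenCV. This is a simplified stand-in: sparse Lucas-Kanade tracking replaces the paper's dense optical flow, and the vehicle-box masking and pixel-to-metre scale are illustrative placeholders.

```python
import cv2
import numpy as np

def compensate_trajectories(prev_gray, cur_gray, traj_pts, vehicle_boxes):
    """Warp previous-frame trajectory points into the current frame.
    traj_pts: (N, 2) float pixel coordinates; vehicle_boxes: list of (x, y, w, h)."""
    # Track background feature points between frames with sparse optical flow.
    pts = cv2.goodFeaturesToTrack(prev_gray, maxCorners=500, qualityLevel=0.01, minDistance=10)
    nxt, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, cur_gray, pts, None)
    pts, nxt = pts[status == 1].reshape(-1, 2), nxt[status == 1].reshape(-1, 2)

    # Drop matches inside detected vehicles so moving targets
    # do not bias the background motion model.
    keep = np.ones(len(pts), dtype=bool)
    for (x, y, w, h) in vehicle_boxes:
        inside = (pts[:, 0] >= x) & (pts[:, 0] <= x + w) & \
                 (pts[:, 1] >= y) & (pts[:, 1] <= y + h)
        keep &= ~inside
    H, _ = cv2.findHomography(pts[keep], nxt[keep], cv2.RANSAC, 3.0)

    # Apply the estimated homography to the stored trajectory points.
    warped = cv2.perspectiveTransform(traj_pts.reshape(-1, 1, 2).astype(np.float32), H)
    return warped.reshape(-1, 2)

# Speed then follows from the compensated displacement:
# v = ||p_t - p_{t-1}|| * metres_per_pixel * fps, where metres_per_pixel is
# recovered adaptively from a known vehicle length (the prior information).
```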
(This article belongs to the Special Issue Drone Remote Sensing)
Show Figures

Graphical abstract
Figure 1">
Figure 1
Vehicle speed estimation results of our system in different scenarios: (a) a highway, (b) an intersection, (c) a road, and (d) a parking lot entrance. The backgrounds of these scenes are complex, and the vehicles are numerous and small, which makes vehicle speed estimation difficult.
Figure 2
An overview of the proposed multi-vehicle ground speed estimation system for airborne videos. The hardware structure of the system is very simple: it is composed of a DJI MATRICE 100 UAV, a Point Grey monocular camera, and a computer, and it transmits data wirelessly. The vehicle speed estimation algorithm consists of three parts: multi-vehicle detection and tracking, motion compensation, and adaptive vehicle speed calculation.
Figure 3
Vehicle detection and tracking in a complex scene. In this scenario, there are many small vehicles and a complicated background. The system achieves good detection results using the YOLOv3 algorithm, and effective, robust target tracking is realized using the Kalman filter and intersection-over-union (IOU) calculation.
Figure 4
Motion compensation of the vehicle trajectory. The method first uses the dense optical flow method to obtain image matching points and eliminates foreground matching points by means of the detection results to calculate the homography matrix H more accurately. Then, the trajectories of the previous frame are subjected to a homography transformation to obtain the trajectories T′_j(t+1) in the current frame.
Figure 5
Some images from our training dataset, including (a) a self-built UAV-based traffic dataset, with images captured by a DJI MATRICE 100 with a Point Grey monocular camera; (b) the public traffic dataset UA-DETRAC [44].
Figure 6
Demonstration of the simulation platform: (a) construction of the simulation environment, including monitored scenes, a car, and a UAV; (b) the components of the simulation system, including two computers and an Ethernet crossover cable.
Figure 7
Vehicle speed estimation under different monitoring conditions, including (a) variable motion with the UAV and vehicle moving in the same direction; (b) uniform motion with the UAV and vehicle moving in the same direction; (c) variable motion with the UAV moving in the same direction as the vehicle while the altitude of the UAV changes constantly.
Figure 8
Vehicle speed estimation under different monitoring conditions, including (a) variable motion with the UAV and vehicle moving in opposite directions; (b) uniform motion with the UAV and vehicle moving in opposite directions; (c) variable motion with the UAV stationary.
Figure 9
Vehicle positioning results on our UAV-based traffic dataset. The test dataset contains five common traffic monitoring scenarios: an intersection, a country road, a parking entrance, highways, and crossroads. The backgrounds of these scenes are complex, with shadows, buildings, trees, etc., and the scenes contain numerous small vehicles.
Figure 10
Vehicle speed estimation process and results on our UAV-based traffic dataset, including motion-compensated optical flow, the mean changes of the Gaussian model, speed measurement results, and the speed changes of some targets throughout the monitoring process.
Figure 11
The monitoring range of the UAV in the qualitative experiment in the real environment, where road segment 1 and road segment 2 are used as landmarks in the scene; the actual distances of these road segments were measured to be 33.06 m.
Figure 12
Vehicle speed estimation under different monitoring conditions in the real environment, including (a) a UAV and a vehicle moving in the same direction; (b) a UAV and a vehicle moving in opposite directions; (c) a UAV hovering in the scene; and (d) a UAV flying from low to high in the scene.
Figure 13
Distribution of velocity measurement errors with different prior values under different monitoring conditions in a real environment, including (a) a UAV hovering at a height of 50 m; (b) a UAV hovering at a height of 70 m; and (c) a UAV and a vehicle moving in the same direction.
24 pages, 661 KiB  
Review
Challenges and Future Perspectives of Multi-/Hyperspectral Thermal Infrared Remote Sensing for Crop Water-Stress Detection: A Review
by Max Gerhards, Martin Schlerf, Kaniska Mallick and Thomas Udelhoven
Remote Sens. 2019, 11(10), 1240; https://doi.org/10.3390/rs11101240 - 24 May 2019
Cited by 187 | Viewed by 16060
Abstract
Thermal infrared (TIR) multi-/hyperspectral and sun-induced fluorescence (SIF) approaches together with classic solar-reflective (visible, near-, and shortwave infrared reflectance (VNIR)/SWIR) hyperspectral remote sensing form the latest state-of-the-art techniques for the detection of crop water stress. Each of these three domains requires dedicated sensor [...] Read more.
Thermal infrared (TIR) multi-/hyperspectral and sun-induced fluorescence (SIF) approaches, together with classic solar-reflective (visible, near-, and shortwave infrared reflectance (VNIR/SWIR)) hyperspectral remote sensing, form the latest state-of-the-art techniques for the detection of crop water stress. Each of these three domains requires dedicated sensor technology currently in place for ground and airborne applications, and each either has satellite concepts under development (e.g., HySPIRI/SBG (Surface Biology and Geology), Sentinel-8, and HiTeSEM in the TIR) or is the subject of satellite missions recently launched or scheduled within the next years (i.e., EnMAP and PRISMA (PRecursore IperSpettrale della Missione Applicativa, launched in March 2019) in the VNIR/SWIR, and the Fluorescence Explorer (FLEX) in the SIF). Identification of plant water stress or drought is of utmost importance to guarantee global water and food supply. Therefore, knowledge of crop water status over large farmland areas holds great potential for optimizing agricultural water use. As plant responses to water stress are numerous and complex, their physiological consequences affect the electromagnetic signal in different spectral domains. This review paper summarizes the importance of water stress-related applications and the plant responses to water stress, followed by a concise review of water-stress detection through remote sensing, focusing on the TIR without neglecting the comparison to other spectral domains (i.e., VNIR/SWIR and SIF) and multi-sensor approaches. Current and planned sensors at ground, airborne, and satellite level for the TIR, as well as a selection of commonly used indices and approaches for water-stress detection using the main multi-/hyperspectral remote sensing imaging techniques, are reviewed. Several important challenges are discussed that occur when using spectral emissivity, temperature-based indices, and physically based approaches for water-stress detection in the TIR spectral domain. Furthermore, challenges with data processing and the perspectives for future satellite missions in the TIR are critically examined. In conclusion, information from multi-/hyperspectral TIR, together with that from VNIR/SWIR and SIF sensors within a multi-sensor approach, can provide profound insights into actual plant (water) status and the rationale of physiological and biochemical changes. Synergistic sensor use will open new avenues for scientists to study plant functioning and the response to environmental stress in a wide range of ecosystems. Full article
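Among the temperature-based indices such a review typically covers, the crop water stress index (CWSI) is the classic example: CWSI = (Tc − Twet) / (Tdry − Twet), where Tc is canopy temperature and Twet/Tdry are the temperatures of a fully transpiring and a non-transpiring reference surface. A minimal sketch; the reference temperatures below are illustrative constants, whereas in practice they come from wet/dry reference surfaces or energy-balance modelling:

```python
import numpy as np

def cwsi(t_canopy, t_wet, t_dry):
    """Crop water stress index: 0 = unstressed (canopy at the wet baseline),
    1 = fully stressed (canopy at the dry, non-transpiring baseline)."""
    index = (t_canopy - t_wet) / (t_dry - t_wet)
    return np.clip(index, 0.0, 1.0)  # measurement noise can push values outside [0, 1]

# Illustrative thermal pixels (deg C) and reference temperatures.
t_canopy = np.array([24.5, 26.0, 28.3, 31.1])
print(cwsi(t_canopy, t_wet=23.0, t_dry=33.0))  # -> [0.15 0.3  0.53 0.81]
```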
(This article belongs to the Special Issue Applications of Spectroscopy in Agriculture and Vegetation Research)
Show Figures

Graphical abstract
Figure 1">
Figure 1
The most important relationships between primary plant stresses, the induced plant responses, and the multi-/hyperspectral remote sensing techniques for the detection of environmental stresses (modified after Jones and Vaughan [19]).
22 pages, 6698 KiB  
Article
Winter Wheat Canopy Height Extraction from UAV-Based Point Cloud Data with a Moving Cuboid Filter
by Yang Song and Jinfei Wang
Remote Sens. 2019, 11(10), 1239; https://doi.org/10.3390/rs11101239 - 24 May 2019
Cited by 39 | Viewed by 6066
Abstract
Plant height can be used as an indicator to estimate crop phenology and biomass. The Unmanned Aerial Vehicle (UAV)-based point cloud data derived from photogrammetry methods contains the structural information of crops which could be used to retrieve crop height. However, removing noise [...] Read more.
Plant height can be used as an indicator to estimate crop phenology and biomass. UAV-based point cloud data derived from photogrammetry contains structural information on crops that can be used to retrieve crop height. However, removing noise and outliers from UAV-based crop point cloud data for height extraction is challenging. The objective of this paper is to develop an alternative method for canopy height determination from UAV-based 3D point cloud datasets using a statistical analysis method and a moving cuboid filter to remove outliers. In this method, the point cloud data is first divided into many 3D columns. Then, a moving cuboid filter is applied within each column and moved downward to eliminate noise points; the threshold on the number of points inside the filter is calculated from the distribution of points in the column. After applying the moving cuboid filter, the crop height is calculated from the highest and lowest remaining points in each 3D column. The proposed method achieved high accuracy for height extraction, with a low root mean square error (RMSE) of 6.37 cm and mean absolute error (MAE) of 5.07 cm. The canopy height monitoring window for winter wheat using this method extends from the beginning of the stem extension stage to the end of the heading stage (BBCH 31 to 65). Since the height of wheat changes little after the heading stage, this method can be used to retrieve the crop height of winter wheat. In addition, the method requires only a single UAV flight over the field. It could be an effective, widely applicable method to help end-users monitor their crops and support real-time decision-making for farm management. Full article
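The column-and-filter idea reads directly as code: bin the points into vertical columns, slide a fixed-height window down each column, keep the first window whose point count clears a threshold, and take the height as the spread between that window and the lowest point. A simplified sketch; the grid size, slice height, window height, and threshold rule are stand-ins for the paper's statistically derived values:

```python
import numpy as np

def column_heights(points, cell=1.0, slice_h=0.1, win_h=0.5, min_frac=0.05):
    """points: (N, 3) array of x, y, z. Returns {(i, j): canopy height per column}.
    Slides a win_h-tall cuboid down each column until it holds enough points,
    then takes height = window top (canopy) minus lowest point (ground)."""
    cols = np.floor(points[:, :2] / cell).astype(int)
    heights = {}
    for key in {tuple(c) for c in cols}:
        z = points[(cols == key).all(axis=1), 2]
        thresh = max(int(min_frac * len(z)), 3)  # stand-in for the statistical threshold
        zc = z.max()
        while zc > z.min():                       # move the cuboid downward slice by slice
            if np.sum((z <= zc) & (z > zc - win_h)) >= thresh:
                heights[key] = zc - z.min()       # first dense window marks the canopy top
                break
            zc -= slice_h
    return heights

# Toy cloud: dense "canopy" points near z ~ 0.8 m plus sparse noise above.
rng = np.random.default_rng(2)
canopy = np.column_stack([rng.uniform(0, 10, (4000, 2)), rng.normal(0.8, 0.05, 4000)])
noise = np.column_stack([rng.uniform(0, 10, (200, 2)), rng.uniform(1.5, 3.0, 200)])
hts = column_heights(np.vstack([canopy, noise]))
print(round(np.mean(list(hts.values())), 2))  # sparse noise above the canopy is ignored
```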
Show Figures

Graphical abstract
Figure 1">
Figure 1
Study area and sampling points in the field. (a) The study area in Ontario, Canada. (b) The sampling points in the study area; the blue points are the ground control points, and the black squares are ground-measured sampling points.
Figure 2
2D UAV orthomosaic images of the study area during three growth stages ((a) 16 May, (c) 31 May, (e) 9 June) and 3D point cloud datasets for the area within the black boundary in perspective view ((b) 16 May, (d) 31 May, (f) 9 June). The color bar shows the elevation (above sea level) of the point cloud dataset.
Figure 3
Individual 3D square cross-section column within the point cloud dataset.
Figure 4
Histograms of the point distribution of a typical 3D column in the crop field at different crop growth stages. The distributions of overall points, bare-ground points, and plant points are represented by black, brown, and green bars; the x-axis is the elevation of points and the y-axis is the frequency of points. (a) The histogram of the point distribution for bare ground in October 2015. (b,c) The histograms in the early growth stage of winter wheat (BBCH ≈ 31) on 16 May 2016. (d,e) The histograms in the middle growth stage (BBCH ≈ 65) on 31 May 2016. (f,g) The histograms in the late growth stage (BBCH ≈ 83) on 9 June 2016.
Figure 5
The principle of the moving cuboid filter in a single column. The orange cuboid is the moving cuboid filter: it starts from Step 1 and moves down one slice in Step 2; i is the number of steps in the 3D column, and Step j is the final step.
Figure 6
Flow chart of the moving cuboid filter.
Figure 7
Threshold T_α determination using the relationship between the ratio (α) and the optimal mean threshold (T): (a) the relationship between α and T; (b) classification of α.
Figure 8
Raw maps of the winter wheat canopy height displayed with cubic convolution interpolation: (a) 16 May; (b) 31 May; (c) 9 June.
Figure 9
Maps of the unsolved pixels (red points) at different growing stages of winter wheat: (a) 16 May; (b) 31 May; (c) 9 June.
Figure 10
Final maps of canopy height in the study area at different growing stages: (a) 16 May; (b) 31 May; (c) 9 June. After removal of the unsolved pixels, the final maps were generated using inverse distance weighted (IDW) interpolation and displayed with cubic convolution resampling. The black dashed rectangle marks an area with a higher height estimate on the crop height map.
Figure 11
Winter wheat canopy height produced by Khanna's method: (a) canopy height map on 16 May; (b) canopy height map on 31 May; (c) canopy height map with unsolved pixels on 16 May; (d) canopy height map with unsolved pixels on 31 May.
Figure 12
The relationship between the threshold and the estimated crop canopy height for one sampling point.
Figure 13
Results after applying the proposed moving cuboid filter with different thresholds; the red points represent outliers, and the green points are those kept after filtering: (a,b) threshold of 7.4%; (c,d) threshold of 7.5%.
22 pages, 7074 KiB  
Article
Object-Based Land Cover Classification of Cork Oak Woodlands using UAV Imagery and Orfeo ToolBox
by Giandomenico De Luca, João M. N. Silva, Sofia Cerasoli, João Araújo, José Campos, Salvatore Di Fazio and Giuseppe Modica
Remote Sens. 2019, 11(10), 1238; https://doi.org/10.3390/rs11101238 - 24 May 2019
Cited by 104 | Viewed by 13055
Abstract
This paper investigates the reliability of free and open-source algorithms used in the geographical object-based image classification (GEOBIA) of very high resolution (VHR) imagery surveyed by unmanned aerial vehicles (UAVs). UAV surveys were carried out in a cork oak woodland located in central [...] Read more.
This paper investigates the reliability of free and open-source algorithms used in the geographical object-based image classification (GEOBIA) of very high resolution (VHR) imagery surveyed by unmanned aerial vehicles (UAVs). UAV surveys were carried out in a cork oak woodland located in central Portugal at two different periods of the year (spring and summer). Segmentation and classification algorithms were implemented in the Orfeo ToolBox (OTB) configured in the QGIS environment for the GEOBIA process. Image segmentation was carried out using the Large-Scale Mean-Shift (LSMS) algorithm, while classification was performed by means of two supervised machine learning classifiers, random forest (RF) and support vector machines (SVM). The informative content of the surveyed imagery, consisting of three radiometric bands (red, green, and NIR), was combined to obtain the normalized difference vegetation index (NDVI) and the digital surface model (DSM). The adopted methodology resulted in an accurate classification suitable for a structurally complex Mediterranean forest ecosystem such as cork oak woodlands, which are characterized by the presence of shrubs and herbs in the understory as well as tree shadows. To improve segmentation, which significantly affects the subsequent classification phase, several tests were performed using different values of the range radius and minimum region size parameters. Moreover, the consistent selection of training polygons proved to be critical to improving the results of both the RF and SVM classifiers. For both spring and summer imagery, the validation of the obtained results shows a very high accuracy level for both the SVM and RF classifiers, with kappa coefficient values ranging from 0.928 to 0.973 for RF and from 0.847 to 0.935 for SVM. Furthermore, the land cover class with the highest accuracy for both classifiers and for both flights was cork oak, which occupies the largest part of the study area. This study shows the reliability of fixed-wing UAV imagery for forest monitoring. The study also demonstrates the importance of planning UAV flights at solar noon to significantly reduce the shadows of trees in the obtained imagery, which is critical for classifying open forest ecosystems such as cork oak woodlands. Full article
(This article belongs to the Special Issue UAV Applications in Forestry)
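As a quick illustration of the classification stage described in the abstract, the sketch below runs the same RF-versus-SVM comparison with scikit-learn on a synthetic per-segment feature table (mean red, green, NIR, NDVI, and DSM values per segment are the assumed features) and reports the kappa coefficient. It is a minimal stand-in under those assumptions, not the authors' Orfeo ToolBox workflow.

```python
# Sketch: compare RF and SVM on per-segment features (synthetic stand-ins
# for statistics extracted from LSMS segments).
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import cohen_kappa_score

# 600 "segments", 5 features, 5 land cover classes (all synthetic).
X, y = make_classification(n_samples=600, n_features=5, n_informative=4,
                           n_redundant=1, n_classes=5,
                           n_clusters_per_class=1, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3,
                                          stratify=y, random_state=0)

for name, clf in [("RF", RandomForestClassifier(n_estimators=500, random_state=0)),
                  ("SVM", SVC(kernel="rbf", C=10.0, gamma="scale"))]:
    clf.fit(X_tr, y_tr)
    kappa = cohen_kappa_score(y_te, clf.predict(X_te))
    print(f"{name}: kappa = {kappa:.3f}")
```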
Figure 1. The top figure shows the location of the study area in central Portugal. Below, the location of the flux tower and the 70% isoline of the footprint climatology representing the study area are identified in a UAV orthomosaic.
Figure 2. Flight plan (red) and flight path (yellow) (A); the fixed-wing UAV S20 before take-off (B).
Figure 3. Workflow of the preprocessing, segmentation, classification, and validation steps implemented to derive an object-based land cover map of cork oak woodlands from UAV imagery.
Figure 4. Visual comparison from some of the numerous tests carried out to estimate segmentation accuracy. On the left, a portion of the R-G-NIR image; on the right, the same area with the superimposed vector file containing the segmentation polygons (segments). The top images (A1-A3) show the segmentation results obtained without the smoothing step and with range radii of 15 (the default value), 5, and 7. The lower images (B1 and B2) show the segmentation results with the smoothing step and spatial radii of 5 (the default value) and 30.
Figure 5. Maps of the land cover classification for the spring flight obtained from the random forest (RF) (A1) and support vector machine (SVM) (A2) algorithms. Bar charts show the surface distribution across the five defined land cover classes.
Figure 6. Maps of the land cover classification for the summer flight obtained from the random forest (RF) (B1) and support vector machine (SVM) (B2) algorithms. Bar charts show the surface distribution across the five defined land cover classes.
Figure A1. Spectral signature of cork oaks in spring and summer UAV imagery (green, red, and NIR bands). The transparent area shows the minimum-maximum value range.
15 pages, 1898 KiB  
Article
A Novel Vital-Sign Sensing Algorithm for Multiple Subjects Based on 24-GHz FMCW Doppler Radar
by Hyunjae Lee, Byung-Hyun Kim, Jin-Kwan Park and Jong-Gwan Yook
Remote Sens. 2019, 11(10), 1237; https://doi.org/10.3390/rs11101237 - 24 May 2019
Cited by 85 | Viewed by 8604
Abstract
A novel non-contact vital-sign sensing algorithm for multiple subjects is proposed. The approach uses a 24 GHz frequency-modulated continuous-wave Doppler radar with a parametric spectral estimation method. Doppler processing and spectral estimation are implemented concurrently to detect vital signs from more than one subject, with excellent results. The parametric spectral estimation method is utilized to clearly identify multiple targets, making it possible to distinguish targets located less than 40 cm apart, which is beyond the limit of the theoretical range resolution. Fourier transformation is used to extract phase information, and the result is combined with the spectral estimation result. To eliminate mutual interference, range integration is performed when combining the range and phase information. By exploiting breathing and heartbeat periodicity, the proposed algorithm applies an auto-regressive model to accurately extract vital signs in real time. The capability of contactless and unobtrusive vital-sign measurement with a millimeter-wave radar system has innumerable applications, such as remote patient monitoring, emergency surveillance, and personal health care. Full article
(This article belongs to the Special Issue Radar Remote Sensing on Life Activities)
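The heart of the method is parametric (auto-regressive) spectral estimation of the demodulated phase signal. A minimal numpy sketch of a Yule-Walker AR spectrum applied to a synthetic phase signal is shown below; the sampling rate, model order, and signal composition are illustrative assumptions, not the authors' settings.

```python
# Sketch: Yule-Walker AR spectral estimate of a vital-sign phase signal.
import numpy as np

fs = 20.0                                  # sampling rate in Hz (assumed)
t = np.arange(0, 30, 1 / fs)
# Synthetic phase signal: respiration at 0.3 Hz plus heartbeat at 1.2 Hz.
rng = np.random.default_rng(0)
x = (np.sin(2*np.pi*0.3*t) + 0.3*np.sin(2*np.pi*1.2*t)
     + 0.05*rng.standard_normal(t.size))

order = 12                                 # AR model order (illustrative)
r = np.correlate(x, x, "full")[x.size - 1:] / x.size      # autocorrelation
R = np.array([[r[abs(i - j)] for j in range(order)] for i in range(order)])
a = np.linalg.solve(R, r[1:order + 1])     # Yule-Walker AR coefficients

f = np.linspace(0.01, fs / 2, 512)
z = np.exp(-2j * np.pi * f / fs)
A = 1 - sum(a[k] * z**(k + 1) for k in range(order))
psd = 1.0 / np.abs(A) ** 2                 # AR power spectrum (unnormalized)

band = (f >= 0.8) & (f <= 2.0)             # plausible heart-rate band
hr = f[band][np.argmax(psd[band])]
print(f"Estimated heart rate: {hr * 60:.0f} bpm")
```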
Figure 1. Flow chart of the proposed algorithm: (I) data acquisition from the radar system; (II) feature extraction for range and phase information in parallel; (III) tracking of vital signs.
Figure 2. Influence of the mutual interference with varying range difference and position estimation error: (a) front subject (y12/y11); (b) rear subject (y21/y22).
Figure 3. Effect of the range integration for the rear subject (y21/y22): (a) position estimation error of −10 cm; (b) position estimation error of 10 cm.
Figure 4. Simulation setup.
Figure 5. Simulation results for two targets: (a) range estimation for targets at distances of 120 and 300 cm; (b) heart rate and respiration extraction for targets at 120 and 300 cm using the AR method; (c) range estimation for targets at 120 and 170 cm; (d) heart rate and respiration extraction for targets at 120 and 170 cm using the AR method.
Figure 6. Simulation results for two subjects at distances of 120 and 160 cm: (a) range estimation; (b) vital sign detection without range integration; (c) vital sign detection with range integration.
Figure 7. Root mean square error (RMSE) versus SNR for the proposed algorithm: (a) RMSE of range estimation at 120 and 300 cm; (b) RMSE of heart rate and respiration detection for the front target at 120 cm (rear target: 300 cm); (c) the same for the rear target at 300 cm (front target: 120 cm); (d) RMSE of range estimation at 120 and 160 cm; (e) RMSE of heart rate and respiration detection for the front target at 120 cm (rear target: 160 cm); (f) the same for the rear target at 160 cm (front target: 120 cm).
Figure 8. Experimental setup: (a) overview; (b) top view.
Figure 9. Measurement results for a single person at a distance of 300 cm: (a) range estimation; (b) heart rate and respiration; (c) range estimation varying with distance from the radar.
Figure 10. Measurement results for two targets at 130 and 300 cm: (a) range estimation; (b) real-time heart rate for the front target at 130 cm; (c) real-time heart rate for the rear target at 300 cm; (d) comparison between the proposed method and the reference sensor for the real-time data; (e) the same comparison in terms of total time.
Figure 11. Measurement results for two targets at 130 and 170 cm: (a) range estimation; (b) real-time heart rate for the front target at 130 cm; (c) real-time heart rate for the rear target at 170 cm; (d) comparison between the proposed method and the reference sensor for the real-time data; (e) the same comparison in terms of total time.
Figure 12. Compensated measurement results for two targets with range integration: (a) real-time heart rate for the front target at 130 cm; (b) real-time heart rate for the rear target at 170 cm; (c) comparison between the proposed method and the reference sensor for the real-time data; (d) the same comparison in terms of total time.
Figure 13. Measurement results for three targets at 130, 180, and 300 cm: (a) range estimation; (b) comparison between the proposed method and the reference sensor for the real-time data; (c) heart rate for target 1 in terms of total time; (d) heart rate for target 2; (e) heart rate for target 3.
16 pages, 2464 KiB  
Article
Characterizing the Variability of the Structure Parameter in the PROSPECT Leaf Optical Properties Model
by Erik J. Boren, Luigi Boschetti and Dan M. Johnson
Remote Sens. 2019, 11(10), 1236; https://doi.org/10.3390/rs11101236 - 24 May 2019
Cited by 17 | Viewed by 4224
Abstract
Radiative transfer model (RTM) inversion allows for the quantitative estimation of vegetation biochemical composition from satellite sensor data, but large uncertainties associated with inversion make accurate estimation difficult. The leaf structure parameter (Ns) is one of the largest sources of uncertainty in inversion of the widely used leaf-level PROSPECT model, since it is the only parameter that cannot be directly measured. In this study, we characterize Ns as a function of phenology by collecting an extensive dataset of leaf measurements from samples of three monocotyledon species (hard red wheat, soft white wheat, and upland rice) and one dicotyledon (soy), grown under controlled conditions over two full growth seasons. A total of 230 samples were collected: measured leaf reflectance and transmittance were used to estimate Ns from each sample. These experimental data were used to investigate whether Ns depends on phenological stage (early/mid/late) and/or irrigation regime (irrigation at 85%, 75%, and 60% of the initial saturated tray weight, and pre-/post-irrigation). The results, supported by the extensive experimental dataset, indicate a significant difference between Ns estimated for monocotyledon and dicotyledon plants, and a significant difference between Ns estimated at different phenological stages. Different irrigation regimes did not result in significant Ns differences for either plant type. To our knowledge, this study provides the first systematic record of Ns as a function of phenology for common crop species. Full article
(This article belongs to the Section Forest Remote Sensing)
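Conceptually, Ns is estimated by inverting a PROSPECT forward model against each measured reflectance/transmittance pair. The sketch below shows the generic least-squares inversion loop with scipy; `toy_prospect` is an invented stand-in for a real PROSPECT-5 implementation (swap in an actual one before use), and all parameter values are illustrative.

```python
# Sketch: estimating the structure parameter Ns (N below) by model inversion.
import numpy as np
from scipy.optimize import minimize

wl = np.arange(850, 1151, 10.0)            # NIR window used in the paper (nm)

def toy_prospect(N, cw, cm, wl):
    """Crude plate-model-flavored forward model, for demonstration only."""
    absorb = np.exp(-(cw * 50 + cm * 100) * (wl / 1000.0))
    refl = (1 - absorb) * (1 - 1 / (N + 1))
    trans = absorb / N
    return refl, trans

# Synthetic "measurement" generated with a known Ns = 1.6.
r_meas, t_meas = toy_prospect(1.6, 0.012, 0.004, wl)

def cost(theta):
    r_mod, t_mod = toy_prospect(theta[0], theta[1], theta[2], wl)
    return np.sum((r_mod - r_meas) ** 2 + (t_mod - t_meas) ** 2)

res = minimize(cost, x0=[1.2, 0.01, 0.005], method="Nelder-Mead")
print("Recovered Ns:", round(res.x[0], 3))  # should land close to 1.6
```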
Figure 1. Phenological aggregation into three classes for the species considered in this study. For each species, the top line reports the observed crop growth stage recorded using commonly used growth stage identification protocols: the growth phases and durations outlined by Arraudeau and Vergara [48] were used for upland rice, soy was identified using the protocol of Pedersen et al. [49], and both red and white wheat followed Zadok's code [50]. The bottom row for each species reports the phenological aggregation.
Figure 2. Comparison between measured and modeled leaf spectra in the NIR region (850–1150 nm): (a) before PROREF adjustment; (b) after PROREF adjustment. The dashed red line represents the modeled leaf spectrum output from PROSPECT-5; the solid black line represents the measured wheat leaf spectrum. The modeled spectrum was generated for this example with biochemical input parameters measured from a wheat leaf sample: Cw = 0.016, Cm = 0.004, and N = 1.38.
Figure 3. Comparison between measured reflectance and transmittance and estimated absorptance (absolute difference between reflectance and transmittance) in the NIR region (850–1150 nm) of the wheat leaf from Figure 1. The black X marks show the approximate locations of maximum reflectance (λ1 = 880 nm), maximum transmittance (λ2 = 1071 nm), and minimum absorptance (λ3 = 853 nm) in the NIR region.
Figure 4. Estimated Ns for the entire sample of the four species grown in the 2015 experiment: (a) white wheat; (b) red wheat; (c) soy; (d) upland rice. In all four plots, the symbol indicates the irrigation regime: Treatments 1, 2, and 3 indicate watering when the plant tray reached 85%, 75%, and 60% of the initial saturated weight, respectively.
Figure 5. Estimated Ns for the entire sample of the two species grown in the 2016 experiment: (a) red wheat; (b) soy. In each plot, the symbol indicates the irrigation regime: pre-water samples give Ns estimated before the trays were watered, and post-water samples give Ns estimated 24 h after watering.
Figure 6. Box plots of the distribution of Ns as a function of plant type and phenological class: (a) dicotyledon soy during the 2015 and 2016 experiments, by phenological class; (b) monocotyledon wheat (hard red, soft white) and upland rice during the 2015 and 2016 experiments, by phenological class; (c) both plant types during both experiments over the entire season. The box plots report the median, interquartile range (IQR), whiskers (Q3 + 1.5×IQR and Q1 − 1.5×IQR), and outliers (dots).
21 pages, 4208 KiB  
Article
Identifying Dry-Season Rice-Planting Patterns in Bangladesh Using the Landsat Archive
by Aaron M. Shew and Aniruddha Ghosh
Remote Sens. 2019, 11(10), 1235; https://doi.org/10.3390/rs11101235 - 24 May 2019
Cited by 27 | Viewed by 7684
Abstract
In many countries, in situ agricultural data are often unavailable or cost-prohibitive to obtain. While remote sensing provides a unique opportunity to map agricultural areas and management characteristics, major efforts are needed to expand our understanding of cropping patterns and the potential for remotely monitoring crop production, because this could support predictions of food shortages and improve resource allocation. In this study, we demonstrate a new method to map paddy rice using Google Earth Engine (GEE) and the Landsat archive in Bangladesh during the dry (boro) season. Using GEE and Landsat, dry-season rice areas were mapped at 30 m resolution for approximately 90,000 km² annually between 2014 and 2018. The method first reconstructs spectral vegetation indices (VIs) for individual pixels using a harmonic time series (HTS) model to minimize the effect of any sensor inconsistencies and atmospheric noise, and then combines the time series indices with a rule-based algorithm to identify characteristics of rice phenology and classify rice pixels. To our knowledge, this is the first time an annual pixel-based time series model has been applied to Landsat at the national level in a multiyear analysis of rice. Findings suggest that the harmonic-time-series-based vegetation indices (HTS-VIs) model has the potential to map rice production across fragmented landscapes and heterogeneous production practices, with results comparable to other estimates but without local management or in situ information as inputs. The HTS-VIs model identified 4.285, 4.425, 4.645, 4.117, and 4.407 million rice-producing hectares for 2014, 2015, 2016, 2017, and 2018, respectively, which correlates well with national and district estimates from official sources at an average R-squared of 0.8. Moreover, accuracy assessment with independent validation locations resulted in an overall accuracy of 91% and a kappa coefficient of 0.83 for the boro/non-boro stable rice map from 2014 to 2018. We conclude with a discussion of potential improvements and future research pathways for this approach to spatiotemporal mapping of rice in heterogeneous landscapes. Full article
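The HTS step amounts to a per-pixel harmonic regression that yields a gap-free VI series before the rule-based phenology tests. A minimal numpy sketch of a first-order (annual) harmonic fit is given below; the synthetic EVI observations and the single annual harmonic are simplifying assumptions, and the paper's GEE implementation and rice rules are more involved.

```python
# Sketch: constant + annual sine/cosine harmonic fit to a sparse VI series.
import numpy as np

doy = np.array([10, 42, 74, 106, 138, 170, 202, 234, 266, 298, 330, 362])
evi = np.array([0.18, 0.25, 0.55, 0.62, 0.40, 0.22, 0.20, 0.28, 0.35,
                0.30, 0.22, 0.19])               # synthetic observations

omega = 2 * np.pi * doy / 365.0
A = np.column_stack([np.ones_like(omega), np.cos(omega), np.sin(omega)])
coef, *_ = np.linalg.lstsq(A, evi, rcond=None)   # least-squares harmonic fit

# Reconstruct a daily, gap-free series from the fitted harmonics.
d = np.arange(1, 366)
w = 2 * np.pi * d / 365.0
evi_hat = coef[0] + coef[1] * np.cos(w) + coef[2] * np.sin(w)
print("Peak EVI day:", d[np.argmax(evi_hat)], "value:", evi_hat.max().round(3))
```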
Figure 1. Study area map: population density in Asia and Bangladesh. (a) Population (persons/km²) in Asian countries. (b) Population (persons/km²) in Bangladesh. Bangladesh is among the most densely populated countries in the world, and as population density increases, so does the demand for food staples like rice. Boro-season rice production helps meet the growing demand for food and typically improves the efficiency of rice production with higher yields per unit area. As arable land decreases due to urbanization and other factors, understanding where rice is grown will be increasingly important.
Figure 2. Images of rice production near Khulna, Bangladesh. (A) Rice seedling preparation for the boro season (January 2016) and (B) high-yielding-variety rice during the boro season (March 2015). Photo credit: Aaron Shew.
Figure 3. Crop phenology for the boro season with a high-resolution image using harmonic-time-series-based vegetation indices (HTS-VIs). (A) High-resolution satellite image from Google Earth in Nashipur (88.67918 N, 25.74691 E), Dinajpur, Bangladesh (near the Wheat Research Centre, BARI). (B) Rice phenoplot for the yellow-outlined fields in image A. (C) Wheat phenoplot for the orange-outlined fields. (D) Other-crop phenoplot for the purple-outlined fields. The phenoplots show the HTS-VIs model output for rice, wheat, and other crops, summarized by the pixel values within each highlighted field (high-resolution image and time series vegetation indices (VI) are from the 2018 boro season). Rice has a distinct flood signature in which the normalized difference flood index (NDFI) exceeds the enhanced vegetation index (EVI) before EVI trends to a maximum above 0.6. Wheat and other crops lack a flood signature before the EVI peak, though both show the EVI pattern typical of a crop, with steep slopes leading into the flowering vegetation phase and decreasing through harvest.
Figure 4. HTS-VIs rice map of Bangladesh based on the Landsat archive, 2013–2018 crop years.
Figure 5. The HTS-VIs model by division in Bangladesh for 2018 (with MODIS comparison). SR: surface reflectance; TOA: top-of-atmosphere.
Figure 6. Regional differences in rice-planting frequency. (a–h) Boro rice frequency plots at the center coordinates, with the upazila name listed above each 10 km² map. In the legend, "no rice" includes all other land use/land cover, including crops, where rice was never grown between 2013 and 2018; 1 year to 5 years represent pixels classified as boro rice for the corresponding number of years between 2013 and 2018.
Figure 7. Temporal consistency in boro rice cropping patterns.
Figure 8. Comparison of district rice area estimates (10,000 hectares) from the Bangladesh Bureau of Statistics (BBS) and HTS-VIs.
Figure 9. District-wise map of the HTS-VIs predicted rice area (left) compared to BBS estimates (right) for 2017 and 2018 (area in 10,000 hectares).
15 pages, 5123 KiB  
Article
Long-Term Monitoring of Cropland Change near Dongting Lake, China, Using the LandTrendr Algorithm with Landsat Imagery
by Lihong Zhu, Xiangnan Liu, Ling Wu, Yibo Tang and Yuanyuan Meng
Remote Sens. 2019, 11(10), 1234; https://doi.org/10.3390/rs11101234 - 24 May 2019
Cited by 59 | Viewed by 8022
Abstract
Tracking cropland change and its spatiotemporal characteristics can provide a scientific basis for assessments of ecological restoration in reclamation areas. In 1998, an ecological restoration project (Converting Farmland to Lake) was launched in Dongting Lake, China, in which original lake areas reclaimed for cropland were converted back to lake or to poplar cultivation areas. This study characterized the resulting long-term (1998–2018) change patterns using the LandTrendr algorithm with Landsat time-series data derived from the Google Earth Engine (GEE). Of the total cropland affected, ~447.48 km2 was converted to lake and 499.9 km2 was converted to poplar cultivation, with overall accuracies of 87.0% and 83.8%, respectively. The former covered a wider range, mainly distributed in the area surrounding Datong Lake, while the latter was more clustered in North and West Dongting Lake. Our methods based on GEE captured cropland change information efficiently, providing data (raster maps, yearly data, and change attributes) that can assist researchers and managers in gaining a better understanding of environmental influences related to the ongoing conversion efforts in this region. Full article
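For orientation, the sketch below shows how LandTrendr is typically invoked through the GEE Python API on an annual NDVI stack; the collection preparation is heavily abbreviated (a single Landsat 5 collection, no cloud masking or reflectance scaling), and the parameter values are illustrative defaults rather than the paper's settings. Check the GEE documentation for `ee.Algorithms.TemporalSegmentation.LandTrendr` before relying on the exact argument names.

```python
# Sketch: running LandTrendr on an annual NDVI composite stack in GEE.
import ee
ee.Initialize()

aoi = ee.Geometry.Rectangle([112.0, 28.5, 113.5, 29.6])  # rough Dongting box

def annual_ndvi(year):
    # One median, NDVI-band composite per year (greatly simplified).
    col = (ee.ImageCollection("LANDSAT/LT05/C02/T1_L2")
           .filterBounds(aoi)
           .filterDate(f"{year}-01-01", f"{year}-12-31"))
    ndvi = col.median().normalizedDifference(["SR_B4", "SR_B3"]).rename("NDVI")
    return ndvi.set("system:time_start", ee.Date.fromYMD(year, 7, 1).millis())

series = ee.ImageCollection([annual_ndvi(y) for y in range(1998, 2012)])

lt = ee.Algorithms.TemporalSegmentation.LandTrendr(
    timeSeries=series,
    maxSegments=6,
    spikeThreshold=0.9,
    recoveryThreshold=0.25,
    pvalThreshold=0.05)   # illustrative parameter values, not the paper's
```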
Figure 1. Geographic location and land cover map of the study area: (a) general location in China; (b) location in Hunan Province; (c) 1997 land-use classification (prior to Converting Farmland to Lake (CFTL) implementation) from Google Earth Engine (GEE) imagery (see Section 2.3). The red frames show the locations of the two typical CFTL areas.
Figure 2. Two example areas for each cropland change pattern. Photo pairs a1/a2 and b1/b2, from Google Earth, show conditions before and after CFTL for conversion to lake and to poplar cultivation, respectively: a1 and b1 represent 1997, and a2 and b2 represent 2018. Field photographs c1 and c2 were collected in 2018.
Figure 3. Conceptual models and examples of cropland conversion in the CFTL: (a) Normalized Difference Vegetation Index (NDVI) trajectory of conversion to lake; (b) NDVI trajectory of conversion to poplar cultivation.
Figure 4. Methodological flowchart for (a) pre-CFTL land-use classification and (b) post-CFTL analysis of conversion to lake and poplar cultivation.
Figure 5. NDVI ranges of cropland, conversion to lake, and conversion to poplar cultivation in the study area.
Figure 6. Year of conversion from cropland to lake, with four areas shown in detail: (L1) Datong Lake; (L2) West Dongting Lake; (L3) South Dongting Lake; (L4) East Dongting Lake (scale of the four detail areas is 1:130,000).
Figure 7. Year of conversion from cropland to poplar cultivation, with four areas shown in detail: (P1) West Dongting Lake; (P2–P4) portions of North Dongting Lake (scale of the four detail areas is 1:130,000).
Figure 8. Magnitude and duration of conversion from cropland to (a,b) lake and (c,d) poplar cultivation in the Dongting Lake region. Close-up views of the regions where each pattern is prevalent are displayed.
Figure 9. Landsat spectral trajectories and LandTrendr-fitted trajectories in typical areas (crosses indicate trajectory locations) for (a,c) conversion to lake and (b,d) conversion to poplar cultivation. Related imagery (a1, b1; bands 4-5-3) and the corresponding Landsat classification results (a2, b2) are also shown at left.
Figure 10. Four types of commission errors possible when assessing cropland conversion with LandTrendr: (a) transient floods misclassified as conversion to lake; (b) non-abrupt conversion to lake; (c) reed cultivation producing an NDVI signal similar to poplar cultivation; (d) non-abrupt poplar cultivation. The typical trajectory for each error (blue or purple solid line) and real cropland conversion (black dotted line) were extracted from the NDVI series.
20 pages, 6887 KiB  
Article
The Comparison of Different Methods of Texture Analysis for Their Efficacy for Land Use Classification in Satellite Imagery
by Przemysław Kupidura
Remote Sens. 2019, 11(10), 1233; https://doi.org/10.3390/rs11101233 - 24 May 2019
Cited by 105 | Viewed by 10354
Abstract
The paper presents a comparison of the efficacy of several texture analysis methods as tools for improving land use/cover classification in satellite imagery. The tested methods were gray level co-occurrence matrix (GLCM) features, Laplace filters, and granulometric analysis based on mathematical morphology. The tests assessed classification accuracy on spectro-textural datasets: spectral images augmented with images generated using the different texture analysis methods. The class nomenclature was based on spectral and textural differences and included the following classes: water, low vegetation, bare soil, urban, and two forest classes (coniferous and deciduous). Classification accuracy was assessed using overall accuracy and the kappa index of agreement, based on reference data generated through visual interpretation of the images. The analysis was performed using very high-resolution imagery (Pleiades, WorldView-2) and high-resolution imagery (Sentinel-2). The results show the efficacy of selected GLCM features and granulometric analysis as tools for providing textural data for land use/cover classification. It is also clear that texture analysis is generally a more important and effective component of classification for images of higher resolution. In addition, a Random Forest variable importance analysis was performed for the classifications using GLCM results. Full article
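As a concrete reference point for the GLCM branch of the comparison, the scikit-image sketch below computes two co-occurrence features over a single image window; the window size, offsets, and quantization level are illustrative choices, not the values benchmarked in the paper.

```python
# Sketch: GLCM texture features for one image window with scikit-image.
import numpy as np
from skimage.feature import graycomatrix, graycoprops

def glcm_features(window, levels=32):
    """Contrast and homogeneity of one window (quantized to `levels`)."""
    q = (window / window.max() * (levels - 1)).astype(np.uint8)
    glcm = graycomatrix(q, distances=[1], angles=[0, np.pi / 2],
                        levels=levels, symmetric=True, normed=True)
    return (graycoprops(glcm, "contrast").mean(),
            graycoprops(glcm, "homogeneity").mean())

band = np.random.randint(1, 255, (64, 64)).astype(float)  # stand-in band
win = 7                                                    # 7x7 window, assumed
i, j = 32, 32
contrast, homogeneity = glcm_features(band[i - win//2:i + win//2 + 1,
                                           j - win//2:j + win//2 + 1])
print(f"contrast={contrast:.2f}, homogeneity={homogeneity:.2f}")
```

In a full per-pixel texture image, this computation would be repeated over a sliding window across the whole band, producing one texture layer per GLCM feature.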
Graphical abstract
Figure 1. Exemplary mask of the Laplace filter, used in the presented research.
Figure 2. Test images: (a) 1—Pleiades, (b) 2—WorldView-2, (c) 3—Sentinel-2.
Figure 3. The methodology scheme.
Figure 4. Subsets of images of selected classification variants (test image 1, Pleiades): (a) spectral, (b) spectral + gran10, (c) spectral + GLCM7, (d) spectral + Laplacian, (e) original satellite image.
Figure 5. Subsets of images of selected classification variants (test image 2, WorldView-2): (a) spectral, (b) spectral + gran10, (c) spectral + GLCM7, (d) spectral + Laplacian, (e) original satellite image.
Figure 6. Subsets of images of selected classification variants (test image 3, Sentinel-2): (a) spectral, (b) spectral + gran10, (c) spectral + GLCM7, (d) spectral + Laplacian, (e) original satellite image.
Figure 7. Raw variable importance for spectral and GLCM variants, test image 1: Pleiades.
Figure 8. Raw variable importance for spectral and GLCM variants, test image 2: WorldView-2.
Figure 9. Raw variable importance for spectral and GLCM variants, test image 3: Sentinel-2.
Figure 10. Edge effect in simulated imagery: (a) original image, (b) GLCM entropy, (c) granulometric map; and in an actual Pleiades image: (d) original image, (e) GLCM entropy, (f) granulometric map.
16 pages, 4559 KiB  
Article
Characterization of Electromagnetic Properties of In Situ Soils for the Design of Landmine Detection Sensors: Application in Donbass, Ukraine
by Timothy Bechtel, Stanislav Truskavetsky, Gennadiy Pochanin, Lorenzo Capineri, Alexander Sherstyuk, Konstantin Viatkin, Tatyana Byndych, Vadym Ruban, Liudmyla Varyanitza-Roschupkina, Oleksander Orlenko, Pavlo Kholod, Pierluigi Falorni, Andrea Bulletti, Luca Bossi and Fronefield Crawford
Remote Sens. 2019, 11(10), 1232; https://doi.org/10.3390/rs11101232 - 24 May 2019
Cited by 13 | Viewed by 5005
Abstract
To design holographic and impulse ground penetrating radar (GPR) sensors suitable for humanitarian de-mining in the Donbass (Ukraine) conflict zone, we measured critical electromagnetic parameters of typical local soils using simple methods that could be adapted to any geologic setting. Measurements were recorded along six profiles, each crossing at least two mapped soil types. The parameters selected to evaluate GPR and metal detector sensor performance were magnetic permeability, electrical conductivity, and dielectric permittivity. Magnetic permeability measurements indicated that local soils would be conducive to metal detector performance. Electrical conductivity measurements indicated that local soils would be medium- to high-loss materials for GPR. Calculation of the expected attenuation as a function of signal frequency suggested that 1 GHz optimizes the trade-off between resolution and penetration while matching the impulse GPR system power budget. Dielectric permittivity was measured using both time domain reflectometry (TDR) and impulse GPR. For the latter, a calibration procedure based on an in situ measurement of the reflection coefficient was proposed, and the data were analyzed to show that soil conditions were suitable for the reliable use of impulse GPR. A distinct difference between the results of these two methods suggested a dry (low-dielectric) soil surface, grading downward into more moist (higher-dielectric) soils. This gradation may provide a matching layer to reduce ground surface reflections that often obscure shallow subsurface targets. In addition, the relatively high dielectric of the deeper (10 cm–20 cm) subsurface soils should provide a strong contrast with plastic-cased mines. Full article
(This article belongs to the Special Issue Recent Progress in Ground Penetrating Radar Remote Sensing)
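The attenuation screening described in the abstract follows from the standard plane-wave attenuation constant of a lossy dielectric, α = ω·sqrt(µε/2·(sqrt(1 + (σ/ωε)²) − 1)). A short sketch under illustrative soil parameters (values chosen for demonstration, not the paper's measurements):

```python
# Sketch: plane-wave attenuation (dB/m) in a lossy soil at a given frequency.
import numpy as np

eps0 = 8.854e-12          # vacuum permittivity (F/m)
mu0 = 4e-7 * np.pi        # vacuum permeability (H/m)

def attenuation_db_per_m(freq_hz, eps_r, sigma):
    """Attenuation constant of a uniform plane wave in a lossy dielectric."""
    w = 2 * np.pi * freq_hz
    eps = eps_r * eps0
    loss_tan = sigma / (w * eps)
    alpha = w * np.sqrt(mu0 * eps / 2 * (np.sqrt(1 + loss_tan**2) - 1))
    return 8.686 * alpha   # Np/m -> dB/m

# Illustrative chernozem-like values: eps_r = 10, sigma = 0.01 S/m.
for f in (0.5e9, 1e9, 2e9):
    print(f"{f/1e9:.1f} GHz: {attenuation_db_per_m(f, 10, 0.01):.1f} dB/m")
```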
Figure 1. (A) Chernozem soils of Ukraine and the study site. (B) Survey transects and the Anti-Terrorist Operations (ATO) zone.
Figure 2. Transect locations and soil map of the study area.
Figure 3. Photographs of field equipment: (1) Bartington MS2 magnetic susceptibility meter with MS2F probe; (2) L&R Instruments MiniRes with four-electrode probe; (3a) Spectrum Technologies Field Scout 300 TDR; (3b) purpose-built 1-Tx, 2-Rx impulse GPR.
Figure 4. Calibration curve for conversion of TDR pulse transit times to dielectric permittivity.
Figure 5. Schematic of the 1-Tx, 2-Rx impulse GPR for simple measurement of the surface reflection coefficient, with Rx differencing to suppress the direct coupling signal.
Figure 6. Example waveforms from the GPR system shown in Figure 3b and Figure 5.
Figure 7. Field electromagnetic parameter data from the six survey transects of different lengths.
Figure 8. Comparison of the TDR- and R-based dielectric permittivities for typical Donbass soils.
Figure 9. Empirical correction of measured attenuation based on published values [42,43] for earth materials across a range of signal frequencies.
Figure 10. Estimated attenuation of a 1 GHz signal in different local soil types.
Figure 11. Measured soil moisture contents (at the time of this study) compared to variations in dielectric properties from [43].
Figure A1. Laboratory sand test bed. In this photo, the antenna system rests on foam blocks above the sand surface; the control system is to the right of the test bed.
Figure A2. Metallic reflector on the bottom of the sand box.
Figure A3. The two voltage signals in the plot correspond to M (reflected by the metallic sheet) and S (reflected by the sand box with the buried metal sheet).
15 pages, 2549 KiB  
Article
Remote Sensing of Wetland Flooding at a Sub-Pixel Scale Based on Random Forests and Spatial Attraction Models
by Linyi Li, Yun Chen, Tingbao Xu, Kaifang Shi, Rui Liu, Chang Huang, Binbin Lu and Lingkui Meng
Remote Sens. 2019, 11(10), 1231; https://doi.org/10.3390/rs11101231 - 24 May 2019
Cited by 11 | Viewed by 3442
Abstract
Wetland flooding is significant for the flora and fauna of wetlands. High temporal resolution remote sensing images are widely used for the timely mapping of wetland flooding but are limited by their relatively low spatial resolutions. In this study, a novel method based on random forests and spatial attraction models (RFSAM) was proposed to improve the accuracy of sub-pixel mapping of wetland flooding (SMWF) using remote sensing images. A random forests-based SMWF algorithm (RM-SMWF) was first developed, and a comprehensive complexity index of a mixed pixel was formulated. The RFSAM-SMWF method was then developed. Landsat 8 Operational Land Imager (OLI) images of two wetlands of international importance included in the Ramsar List were used to evaluate RFSAM-SMWF against three other SMWF methods, and it consistently achieved more accurate sub-pixel mapping results in terms of visual and quantitative assessments in the two wetlands. The effects of the number of trees in the random forests and the complexity threshold on the mapping accuracy of RFSAM-SMWF are also discussed. The results of this study improve the mapping accuracy of wetland flooding from medium-low spatial resolution remote sensing images and therefore benefit environmental studies of wetlands. Full article
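To make the spatial-attraction idea concrete, the sketch below computes, for each sub-pixel, an inverse-distance attraction to the flooded class from the flooding fractions of the neighboring coarse pixels (scale factor 3, touching-neighbor model). This is a generic textbook version of the spatial attraction model under those assumptions, not the authors' RFSAM code, and the final per-pixel allocation step is only outlined in a comment.

```python
# Sketch: spatial attraction of sub-pixels to the "flooded" class (S = 3).
import numpy as np

S = 3
frac = np.array([[0.1, 0.6, 0.9],
                 [0.0, 0.5, 0.8],
                 [0.0, 0.2, 0.4]])          # coarse flooding fractions (toy)

rows, cols = frac.shape
attr = np.zeros((rows * S, cols * S))

for pr in range(rows):
    for pc in range(cols):
        for sr in range(S):
            for sc in range(S):
                # Sub-pixel center in coarse-pixel coordinates.
                y = pr + (sr + 0.5) / S
                x = pc + (sc + 0.5) / S
                total = 0.0
                for nr in range(max(0, pr - 1), min(rows, pr + 2)):
                    for nc in range(max(0, pc - 1), min(cols, pc + 2)):
                        if (nr, nc) == (pr, pc):
                            continue        # neighboring pixels only
                        d = np.hypot(y - (nr + 0.5), x - (nc + 0.5))
                        total += frac[nr, nc] / d   # inverse-distance pull
                attr[pr * S + sr, pc * S + sc] = total

# Allocation (omitted): within each coarse pixel, assign its round(frac*S*S)
# most-attracted sub-pixels to "flooded" so the fraction is preserved.
print(attr.round(2))
```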
Graphical abstract
Figure 1. An illustration of SMWF (scale factor = 3). (a) Fraction image of wetland flooding; (b) possible flooding distribution 1; (c) possible flooding distribution 2; (d) possible flooding distribution 3.
Figure 2. A flow chart of RFSAM-SMWF.
Figure 3. Experimental images of the two study areas: (a) East Dongting Lake Wetland; (b) Honghu Wetland.
Figure 4. Mapping results of different SMWF methods for East Dongting Lake Wetland (scale = 5).
Figure 5. Mapping results of different SMWF methods for Honghu Wetland (scale = 5).
Figure 6. Sub-pixel mapping accuracy of RFSAM-SMWF related to the number of trees (NT) for East Dongting Lake Wetland, where OA represents overall accuracy, KC represents Kappa coefficient, APA represents average producer's accuracy, and AUA represents average user's accuracy.
Figure 7. Materials for a large area (3500 × 2000 pixels) located in East Dongting Lake Wetland and the mapping results of the different methods (scale = 5).
19 pages, 7838 KiB  
Article
A Methodology to Monitor Urban Expansion and Green Space Change Using a Time Series of Multi-Sensor SPOT and Sentinel-2A Images
by Jinsong Deng, Yibo Huang, Binjie Chen, Cheng Tong, Pengbo Liu, Hongquan Wang and Yang Hong
Remote Sens. 2019, 11(10), 1230; https://doi.org/10.3390/rs11101230 - 23 May 2019
Cited by 45 | Viewed by 6837
Abstract
Monitoring urban expansion and green space change is an urgent need for planning and decision-making. This paper presents a methodology integrating Principal Component Analysis (PCA) and a hybrid classifier to undertake this kind of work using a sequence of multi-sensor SPOT images (SPOT-2, -3, -5) and Sentinel-2A data from 1996 to 2016 in Hangzhou City, the central metropolis of the Yangtze River Delta in China. In this study, orthorectification was first applied to the SPOT and Sentinel-2A images to guarantee precise geometric correction, which outperformed the conventional polynomial transformation method. After pre-processing, PCA and the hybrid classifier were used together to enhance and extract change information. Accuracy assessment combining stratified random and user-defined plot sampling strategies was performed with 930 reference points. The results indicate reasonably high accuracies for the four periods. It was further revealed that the proposed method yielded higher accuracy than the traditional post-classification comparison approach. On the whole, the developed methodology proved effective for monitoring urban expansion and green space change in this study, despite obvious confusions resulting from compound factors. Full article
(This article belongs to the Special Issue Remote Sensing of Urban Forests)
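The change-enhancement step rests on stacking the two dates and applying PCA, with change information expected to load on the minor components (the paper finds it in PC3, for instance). A compact scikit-learn sketch with random stand-in rasters:

```python
# Sketch: multi-date PCA change enhancement on two co-registered images.
import numpy as np
from sklearn.decomposition import PCA

# Hypothetical stand-ins for two co-registered 4-band dates (rows, cols, bands).
date1 = np.random.rand(200, 200, 4)
date2 = np.random.rand(200, 200, 4)

stack = np.concatenate([date1, date2], axis=2)        # 8-band stack
flat = stack.reshape(-1, stack.shape[2])

pca = PCA(n_components=stack.shape[2])
pcs = pca.fit_transform(flat).reshape(200, 200, -1)

# Stable surfaces dominate the leading components; change information tends
# to concentrate in the minor ones, so inspect PC3, PC4, ... visually.
print("Explained variance ratios:", pca.explained_variance_ratio_.round(3))
pc3 = pcs[:, :, 2]
```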
Figure 1. Location of Hangzhou City in the Yangtze River Delta.
Figure 2. The flowchart of the proposed methodology and post-classification comparison change detection.
Figure 3. Example of Principal Component Analysis (PCA)-enhanced land use change information from cropland and water to urban land in the third principal component (PC3), 2006–2016. (a) Red-green-blue (RGB) composite image of SPOT-5 (2006); (b) RGB composite image of Sentinel-2A (2016); (c) the third principal component; (d) GF-2 image (2016).
Figure 4. Example of PCA-enhanced land use change information from cropland and water to urban land in the third principal component (PC3), 2003–2006. (a) RGB composite image of SPOT-5 (2003); (b) RGB composite image of SPOT-5 (2006); (c) the third principal component; (d) field survey photo (2006).
Figure 5. Example of multi-date PCA land use change enhancement, 2003–2006. (a) RGB composite image of SPOT-3 (2003); (b) color aerial photograph (2000); (c) RGB composite image of SPOT-5 (2006); (d) field survey photo (2006); (e–l) the first to eighth principal components (PC1–PC8).
Figure 6. Example of multi-date PCA land use change enhancement, 2000–2003. (a) RGB composite image of SPOT-3 (2000); (b) color aerial photograph (2000); (c) RGB composite image of SPOT-5 (2003); (d) RGB composite image of IKONOS (2003); (e) field survey photo (2003); (f–l) the first to seventh principal components (PC1–PC7).
Figure 7. Results of land use and land use change, 2006–2016.
Figure 8. Results of land use and land use change, 2003–2006.
Figure 9. Results of land use and land use change, 2000–2003.
Figure 10. Results of land use and land use change, 1996–2000.
21 pages, 2306 KiB  
Article
Hyperspectral Image Super-Resolution by Deep Spatial-Spectral Exploitation
by Jing Hu, Minghua Zhao and Yunsong Li
Remote Sens. 2019, 11(10), 1229; https://doi.org/10.3390/rs11101229 - 23 May 2019
Cited by 22 | Viewed by 4311 | Correction
Abstract
Limited by existing imaging sensors, hyperspectral images are characterized by high spectral resolution but low spatial resolution. The super-resolution (SR) technique, which aims to enhance the spatial resolution of the input image, is a hot topic in computer vision. In this paper, we present a hyperspectral image (HSI) SR method based on a deep information distillation network (IDN) and an intra-fusion operation. Specifically, bands are first selected at a fixed band interval and super-resolved by an IDN. The IDN employs distillation blocks to gradually extract abundant and efficient features for reconstructing the selected bands. Second, the unselected bands are obtained via spectral correlation, yielding a coarse high-resolution (HR) HSI. Finally, the spectrally interpolated coarse HR HSI is intra-fused with the input HSI to achieve a finer HR HSI, making further use of the spatial-spectral information the unselected bands convey. Unlike most existing fusion-based HSI SR methods, the proposed intra-fusion operation does not require any auxiliary co-registered image as input, which makes the method more practical. Moreover, whereas the performance of most single-image HSI SR methods decreases significantly as image quality worsens, the proposed method deeply utilizes the spatial-spectral information and the mapping knowledge provided by the IDN, achieving more robust performance. Experimental data and comparative analysis have demonstrated the effectiveness of this method. Full article
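A toy view of the band-selection and spectral-correlation steps: every d-th band is (notionally) super-resolved, and each unselected band is rebuilt by linearly weighting its two nearest selected bands. The interval d = 5 and the linear weighting are simplifying assumptions; the IDN itself and the intra-fusion stage are not reproduced here.

```python
# Sketch: reconstruct unselected HSI bands from super-resolved selected bands.
import numpy as np

d = 5                                   # band-selection interval, assumed
hsi = np.random.rand(31, 32, 32)        # toy low-resolution HSI (bands, h, w)

sel = list(range(0, hsi.shape[0], d))   # indices of selected bands
# Stand-in for IDN super-resolution of the selected bands (identity here).
sr_sel = {b: hsi[b] for b in sel}

recon = np.empty_like(hsi)
for b in range(hsi.shape[0]):
    if b in sr_sel:
        recon[b] = sr_sel[b]
        continue
    lo = max(s for s in sel if s < b)            # nearest selected below
    hi_cands = [s for s in sel if s > b]
    if not hi_cands:                             # past the last selected band
        recon[b] = sr_sel[lo]
        continue
    hi = min(hi_cands)                           # nearest selected above
    w = (b - lo) / (hi - lo)                     # linear spectral weight
    recon[b] = (1 - w) * sr_sel[lo] + w * sr_sel[hi]
```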
Graphical abstract
Figure 1. Workflow of the proposed method.
Figure 2. Correlation between neighboring bands in the Pavia University scene.
Figure 3. General architecture of the deep IDN.
Figure 4. Visual exhibition of the HSIs used for validating performance; all gray images are generated from the 15th band of the corresponding HSIs.
Figure 5. (a) CCs of the two-times down-sampled Pavia with different intervals; (b) CCs of the four-times down-sampled Pavia minus those of the two-times down-sampled Pavia with different intervals.
Figure 6. Variation of PSNR and computation time with "d" for Pavia University at a scaling factor of 2.
Figure 7. Visual comparison of the fourth band of the Pavia University HSIs reconstructed by different single-image methods at a scaling factor of 4.
Figure 8. PSNRs of different bands in (a) the 8× reconstructed Pavia University HSIs and (b) the 8× reconstructed Washington DC Mall HSIs, via different single-image-based methods.
Figure 9. Visual comparison of the 90th band of the Washington DC Mall HSIs reconstructed by different single-image methods at a scaling factor of 4.
Figure 10. Spectral curves of a randomly selected point in the 8× reconstructed HR HSIs obtained without the IDN method.
Figure 11. Visual comparison of the 99th band of the Salinas HSIs reconstructed by different single-image methods at a scaling factor of 4.
Figure 12. PSNRs of different bands in (a) the 4× reconstructed Salinas HSIs and (b) the 8× reconstructed Botswana HSIs, via different single-image-based methods.
Figure 13. Visual comparison of the 198th band of the Scene02 HSIs reconstructed by different single-image methods at a scaling factor of 8.
Figure 14. PSNRs of different bands in the 8× reconstructed Scene02 HSIs via different single-image-based methods.
Figure 15. The performance gap between the IDN and the proposed method on the "fake_and_real_food" HSI at scaling factors of 2, 4, and 8.
19 pages, 5858 KiB  
Article
Detection and Analysis of C-Band Radio Frequency Interference in AMSR2 Data over Land
by Ying Wu, Bo Qian, Yansong Bao, Meixin Li, George P. Petropoulos, Xulin Liu and Lin Li
Remote Sens. 2019, 11(10), 1228; https://doi.org/10.3390/rs11101228 - 23 May 2019
Cited by 5 | Viewed by 3489
Abstract
A simplified generalized radio frequency interference (RFI) detection method and a principal component analysis (PCA) method are used to detect and attribute the sources of C-band RFI in AMSR2 L1 brightness temperature data over land during 1–16 July 2017. The consistency between the two methods provides confidence that RFI can be reliably detected using either of them; the only difference is that the RFI-contaminated area identified by the former algorithm is larger in some regions than that identified by the latter. Strong RFI signals at 6.925 GHz are mainly distributed in the United States, Japan, India, Brazil, and parts of Europe, while RFI signals at 7.3 GHz are mainly distributed in Latin America, Asia, Southern Europe, and Africa. However, no obvious 7.3 GHz RFI appears in the United States or India, indicating that the 7.3 GHz channels mitigate the effects of C-band RFI in these regions. RFI signals whose position does not vary with the Earth azimuth of the observations generally come from stable, continuous sources of active ground-based microwave radiation, while RFI signals observed only in some directions on one kind of scanning orbit (ascending/descending) mostly arise from reflected geostationary satellite signals. Full article
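The underlying test is a spectral-difference one: natural land emission at 6.925 and 7.3 GHz is nearly identical, so a large positive TB(6.925) − TB(7.3) difference flags 6.925 GHz RFI. A numpy sketch with synthetic swaths and an illustrative 5 K threshold (the generalized method in the paper normalizes this test; the threshold here is an assumption):

```python
# Sketch: simple spectral-difference RFI flag for AMSR2 C-band channels.
import numpy as np

# Hypothetical brightness-temperature swaths (K) at vertical polarization.
rng = np.random.default_rng(0)
tb_6925v = 270 + 3 * rng.standard_normal((500, 243))
tb_7300v = 270 + 3 * rng.standard_normal((500, 243))
tb_6925v[100:105, 50:55] += 40          # inject a synthetic RFI hot spot

diff = tb_6925v - tb_7300v
rfi_mask = diff > 5.0                   # 5 K threshold, illustrative only
print("Flagged pixels:", int(rfi_mask.sum()))
```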
Figure 1. Spatial distribution of RFI signals observed by AMSR2 in ascending orbits at (a,b) 6.925 GHz and (c,d) 7.3 GHz, at (a,c) horizontal and (b,d) vertical polarization, using the generalized RFI detection approach over North America during 1–16 July 2017.
Figure 2. As in Figure 1 but for Southeast Asia during 1–16 July 2017. (a) 6.925 GHz horizontal polarization; (b) 6.925 GHz vertical polarization; (c) 7.3 GHz horizontal polarization; (d) 7.3 GHz vertical polarization.
Figure 3. As in Figure 1 but for Europe during 1–16 July 2017. (a) 6.925 GHz horizontal polarization; (b) 6.925 GHz vertical polarization; (c) 7.3 GHz horizontal polarization; (d) 7.3 GHz vertical polarization.
Figure 4. PCA-based RFI distribution in descending orbits at (a,b) 6.925 GHz and (c,d) 7.3 GHz, at (a,c) horizontal and (b,d) vertical polarization, over North America during 1–16 July 2017.
Figure 5. As in Figure 4 but for Southeast Asia during 1–16 July 2017. (a) 6.925 GHz horizontal polarization; (b) 6.925 GHz vertical polarization; (c) 7.3 GHz horizontal polarization; (d) 7.3 GHz vertical polarization.
Figure 6. As in Figure 4 but for Europe during 1–16 July 2017. (a) 6.925 GHz horizontal polarization; (b) 6.925 GHz vertical polarization; (c) 7.3 GHz horizontal polarization; (d) 7.3 GHz vertical polarization.
Figure 7. Spatial distribution of RFI signals observed by AMSR2 in ascending orbits at 7.3 GHz, at (a–c) horizontal and (d–f) vertical polarization, using the spectral difference method (a,d), the generalized RFI detection method (b,e), and the PCA method (c,f), over Europe during 14–16 July 2017.
Figure 8. Spatial distribution of RFI signals observed by AMSR2 in ascending orbits at 7.3 GHz, at (a–c) horizontal and (d–f) vertical polarization, using the spectral difference method (a,d), the generalized RFI detection method (b,e), and the PCA method (c,f), over Southeast Asia during 14–16 July 2017.
Figure 9. The relationship between the brightness temperature and the Earth azimuth observation angle of AMSR2 in Box 1 (Eastern United States) during 1–16 July 2017. (a) 6.925 GHz horizontal polarization; (b) 6.925 GHz vertical polarization; (c) 7.3 GHz horizontal polarization; (d) 7.3 GHz vertical polarization.
Figure 10. As in Figure 9 but for Box 2 (most of Japan) during 1–16 July 2017. (a) 6.925 GHz horizontal polarization; (b) 6.925 GHz vertical polarization; (c) 7.3 GHz horizontal polarization; (d) 7.3 GHz vertical polarization.
Figure 11. As in Figure 9 but for Box 3 (Sumatra Island and Peninsular Malaysia) during 1–16 July 2017. (a) 6.925 GHz horizontal polarization; (b) 6.925 GHz vertical polarization; (c) 7.3 GHz horizontal polarization; (d) 7.3 GHz vertical polarization.