Search Results (30)

Search Parameters:
Keywords = canopy pixels thresholding

18 pages, 9929 KiB  
Article
Inversion of Cotton Soil and Plant Analytical Development Based on Unmanned Aerial Vehicle Multispectral Imagery and Mixed Pixel Decomposition
by Bingquan Tian, Hailin Yu, Shuailing Zhang, Xiaoli Wang, Lei Yang, Jingqian Li, Wenhao Cui, Zesheng Wang, Liqun Lu, Yubin Lan and Jing Zhao
Agriculture 2024, 14(9), 1452; https://doi.org/10.3390/agriculture14091452 - 25 Aug 2024
Viewed by 1146
Abstract
To improve the accuracy of multispectral image inversion of soil and plant analytical development (SPAD) of the cotton canopy, image segmentation methods were used to remove background interference, such as soil and shadow, from UAV multispectral images. UAV multispectral images of cotton bud-stage canopies were acquired at three flight altitudes (30 m, 50 m, and 80 m). Four methods, namely vegetation index thresholding (VIT), supervised classification by support vector machine (SVM), spectral mixture analysis (SMA), and multiple endmember spectral mixture analysis (MESMA), were used to segment cotton, soil, and shadow in the multispectral images. The segmented UAV multispectral images were used to extract the spectral information of the cotton canopy, and eight vegetation indices were calculated to construct the dataset. Partial least squares regression (PLSR), random forest (RF), and support vector regression (SVR) algorithms were used to construct the inversion model of cotton SPAD. This study analyzed the effects of the different image segmentation methods on the accuracy of spectral information extraction and of SPAD modeling in the cotton canopy. The results showed that (1) removing background interference such as soil and shadow with any of the four image segmentation methods improved the accuracy of spectral information extraction, and the correlation between the vegetation indices calculated from MESMA-segmented images and cotton canopy SPAD improved the most; (2) at all three flight altitudes, with the vegetation indices calculated by the MESMA segmentation method as input variables, the SVR model had the best SPAD inversion accuracy, with R2 of 0.810, 0.778, and 0.697, respectively; (3) at a flight altitude of 80 m, the R2 of the SVR models constructed using vegetation indices calculated from images segmented by the VIT, SVM, SMA, and MESMA methods improved by 2.2%, 5.8%, 13.7%, and 17.9%, respectively, compared to the original images. The MESMA mixed pixel decomposition method can therefore effectively remove soil and shadow in multispectral images and, in particular, provides a reference for improving the inversion accuracy of crop physiological parameters in low-resolution images with more mixed pixels.
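As a rough illustration of the final modelling step described above, the sketch below fits an SVR to a table of per-plot vegetation indices against field-measured SPAD. The file names, train/test split, and kernel settings are assumptions for illustration, not the authors' exact protocol.

```python
# Minimal sketch: vegetation indices -> SPAD via support vector regression.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import SVR
from sklearn.metrics import r2_score

X = np.loadtxt("vegetation_indices.csv", delimiter=",")  # (n_plots, 8 VIs)
y = np.loadtxt("spad_measured.csv", delimiter=",")       # field SPAD readings

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
model = SVR(kernel="rbf", C=10.0, epsilon=0.1).fit(X_tr, y_tr)
print("R2 =", r2_score(y_te, model.predict(X_te)))
```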
(This article belongs to the Special Issue Application of UAVs in Precision Agriculture—2nd Edition)
Figure 1. Overview of the study area.
Figure 2. Experimental instruments: (a) DJI M300 with a Zenmuse P1 camera; (b) DJI M210 with an MS600Pro multispectral camera. Note: the green box in (a) marks the Zenmuse P1 camera (DJI, Shenzhen, China), and the red box in (b) marks the MS600Pro multispectral camera (Yusense, Inc., Qingdao, China).
Figure 3. MESMA under different fertilization gradients: (a1–a3) RGB images; (b1–b3) MNF eigenvalues; (c1–c3) enumerating pixels in an n-dimensional visualizer; (d1–d3) output endmember (EM) spectra.
Figure 4. Distribution of pure pixels at different flight altitudes: (a) 30 m; (b) 50 m; (c) 80 m.
Figure 5. Segmentation results at different flight altitudes: (a1–a3) RGB images; (b1–b3) NDCSI vegetation index threshold segmentation; (c1–c3) SVM segmentation; (d1–d3) SMA segmentation; (e1–e3) MESMA segmentation.
Figure 6. MESMA abundance inversion result map (flight altitude 80 m): (a) cotton; (b) shadow; (c) soil.
Figure 7. Correlation between cotton SPAD and vegetation indices at 30 m.
Figure 8. Correlation between cotton SPAD and vegetation indices at 50 m.
Figure 9. Correlation between cotton SPAD and vegetation indices at 80 m.
Figure 10. Inversion results of the optimal cotton SPAD model at different flight altitudes: (a) 30 m; (b) 50 m; (c) 80 m.
Figure 11. SPAD distribution map of cotton.
22 pages, 5870 KiB  
Article
Hierarchical Integration of UAS and Sentinel-2 Imagery for Spruce Bark Beetle Grey-Attack Detection by Vegetation Index Thresholding Approach
by Grigorijs Goldbergs and Emīls Mārtiņš Upenieks
Forests 2024, 15(4), 644; https://doi.org/10.3390/f15040644 - 2 Apr 2024
Viewed by 2980
Abstract
This study aimed to examine the efficiency of the vegetation index (VI) thresholding approach for mapping deadwood caused by a spruce bark beetle outbreak. The study upscaled from individual dead spruce detection in unmanned aerial system (UAS) imagery, used as reference data, to continuous spruce deadwood mapping at the stand/landscape level via VI thresholding binary masks calculated from Sentinel-2 satellite imagery. The Normalized Difference Vegetation Index (NDVI) was most effective for distinguishing dead spruce from healthy trees, with an accuracy of 97% using UAS imagery. The results showed that the NDVI minimises the effects of cloud, dominant tree shadows, and illumination differences during UAS imagery acquisition, keeping the NDVI relatively stable across sunny and cloudy weather conditions. As in the UAS case, the NDVI calculated from Sentinel-2 (S2) imagery was the most reliable index for spruce deadwood cover mapping using a binary threshold mask at a landscape scale. Based on the accuracy assessment, the summer leaf-on period (June–July) was the most appropriate for spruce deadwood mapping by S2 imagery, with an accuracy of 85% and a deadwood detection rate of 83% in dense, close-canopy mixed conifer forests. Spruce deadwood was successfully classified by S2 imagery when the isolated dead tree cluster occupied at least 5–7 Sentinel-2 pixels.
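A minimal sketch of the NDVI binary-mask step follows, assuming reflectance bands saved as NumPy arrays. The 0.46 cutoff is the UAS-level dead-tree threshold reported in the paper's Figure 5; a scene-specific value would be derived the same way for Sentinel-2.

```python
import numpy as np

red = np.load("red_band.npy").astype(float)   # red reflectance
nir = np.load("nir_band.npy").astype(float)   # near-infrared reflectance

ndvi = (nir - red) / (nir + red + 1e-9)       # guard against divide-by-zero
deadwood_mask = ndvi < 0.46                   # below threshold: likely dead
print("deadwood cover fraction:", deadwood_mask.mean())
```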
(This article belongs to the Special Issue Forest Structure Monitoring Based on Remote Sensing)
Figure 1. The study area in the central part of Latvia, with locations of forest inventory sample plots and UAS imagery taken in March–July 2023, covered by the Sentinel-2 tiles used in this study. The two photos show spruce clusters with 0% (a) and 100% (b) defoliation.
Figure 2. Timeline of dataset acquisition in this study. S2: Sentinel-2 cloud-free imagery acquisition dates chosen for this research.
Figure 3. Overview of the study workflow.
Figure 4. Manually created reference plots of dead (red circles) and healthy (green) spruces, (a) using an NDVI-based threshold mask (red polygons) and (b) visualised on the corresponding RGB orthophoto.
Figure 5. NDVI, the best VI predictor for separating dead (90–100% defoliation) and live, healthy (0–10% defoliation) spruces in the study area: (a) histograms with an NDVI threshold of 0.46 for dead tree separation; (b) the corresponding NDVI–defoliation correlation graph.
Figure 6. Boxplots illustrating the efficiency of separating dead spruce by NDVI under the weather conditions during UAS imagery acquisition; NDVI_all includes all cases, with sunny and cloudy conditions shown in the second and third graphs, respectively.
Figure 7. Raincloud plots of the six best vegetation indices and the red band calculated from single-date Sentinel-2 imagery (8 June), compared across 207 reference plots (104 dead and 103 live spruces). Red line: possible VI threshold for deadwood cover.
Figure 8. Changes in mean reflectance, with standard deviations, of Sentinel-2 NDVI, red, and NIR bands for dead and live spruce plots across the 2023 season.
Figure 9. Non-linear correlation graph of median NDVI (dependent variable) against the area of 133 isolated deadwood clusters (independent variable) using single-date S2 imagery (8 June).
Figure 10. Deadwood classification results using S2 NDVI threshold binary masks (red polygons) over NDVI images: (a) UAS NDVI (0.1 m GSD) used to create reference circle plots of dead (red circles) and healthy (green circles) spruce; S2-based NDVI from (b) 22 April, (c) 8 June, and (d) 21 September. The NDVI images show a 120 × 220 m mixed spruce forest subset.
Figure A1. Boxplots of the spectral reflectance of 1035 dead (grey boxes) and 305 live (white boxes) reference spruces in individual DJI P4 bands (B, G, R, RedEdge, NIR) in raw DN values, and in selected VIs: RBNDVI (red–blue NDVI), BNDVI (NIR–blue normalised difference VI), RVI (simple ratio vegetation index), VARI (visible atmospherically resistant index), and NGRDI (normalised green/red difference index).
Figure A2. Boxplots of the mean bottom-of-atmosphere (BOA) reflectance of the 207 reference spruce circular plots (104 dead and 103 live) in individual Sentinel-2 bands (10 m and 20 m GSD) of the S2 image S2B_MSIL2A_20230608T093549_N0509_R036_T34VFJ.
27 pages, 5790 KiB  
Article
A New Approach for Feeding Multispectral Imagery into Convolutional Neural Networks Improved Classification of Seedlings
by Mohammad Imangholiloo, Ville Luoma, Markus Holopainen, Mikko Vastaranta, Antti Mäkeläinen, Niko Koivumäki, Eija Honkavaara and Ehsan Khoramshahi
Remote Sens. 2023, 15(21), 5233; https://doi.org/10.3390/rs15215233 - 3 Nov 2023
Cited by 1 | Viewed by 1857
Abstract
Tree species information is important for forest management, especially in seedling stands. To mitigate the spectral admixture of understory reflectance with small, sparsely foliaged seedling canopies, we proposed an image pre-processing step based on a canopy threshold (Cth) applied to drone-based multispectral images prior to feeding classifiers. This study focused on (1) improving the classification of seedlings by applying the introduced technique; (2) comparing the classification accuracies of the convolutional neural network (CNN) and random forest (RF) methods; and (3) improving classification accuracy by fusing vegetation indices with multispectral data. A classification of 5417 field-located seedlings from 75 sample plots showed that applying the Cth technique improved the overall accuracy (OA) of species classification from 75.7% to 78.5% on the Cth-affected subset of the test dataset with the CNN method (1). The OA was higher with the CNN (79.9%) than with RF (68.3%) (2). Moreover, fusing vegetation indices with the multispectral data improved the OA from 75.1% to 79.3% with the CNN (3). Further analysis revealed that shorter seedlings and tensors with a higher proportion of Cth-affected pixels negatively impact the OA in seedling forests. Based on the obtained results, the proposed method could be used to improve the species classification of single-tree-detected seedlings in operational forest inventory.
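A minimal sketch of the Cth pre-processing step, assuming a co-registered canopy height model (CHM) and multispectral stack; the 0.4 m cutoff follows the paper's Figure 2 caption, while the array names are illustrative.

```python
import numpy as np

chm = np.load("chm.npy")             # canopy height model in metres, (H, W)
msi = np.load("multispectral.npy")   # reflectance stack, (H, W, n_bands)

C_TH = 0.4                           # canopy height threshold from the paper
understory = chm <= C_TH             # ground and low understory pixels

msi_cth = msi.copy()
msi_cth[understory] = 0.0            # nullify Cth-affected pixels in all bands
```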
(This article belongs to the Special Issue Novel Applications of UAV Imagery for Forest Science)
Graphical abstract
Figure 1. Map of the study area in southern Finland (A) showing the five flight zones with drone-based red, green, blue (RGB) orthomosaics (B), together with a drone-based multispectral orthomosaic of one flight zone (C), zooming into a single forest stand (D) and a forest plot (10 × 10 m, (E)) showing tensors of field-measured trees (10 × 10 pixels). Numbers 1, 2, 3, and 4 in (E) denote the species classes pine, spruce, birch, and other, respectively. The topographical map in (B) is from the National Land Survey of Finland (NLS).
Figure 2. Visualising the effect of the canopy threshold (Cth)-based image pre-processing method introduced in this research on two sample pine trees with heights of 1.4 m (upper row) and 1.3 m (lower row). The pixel size of the RGB image is 1.4 cm; the CHM and multispectral images are 5 cm. The multispectral images (noCth and withCth) are visualised as colour-infrared bands (bands 5, 3, 2). The CHM is pseudo-coloured by height (legend in each panel). The white pixels in the CHM denote nullified (Cth-affected) pixels after image pre-processing due to a height of ≤0.4 m. The + symbol in the middle of each image marks the location of a field-measured treetop.
Figure 3. The model architecture used in this research.
Figure 4. A schematic graph of the methodological principles behind the canopy threshold (Cth)-based image pre-processing and the combination of the two subsets of the test dataset according to whether they were affected by Cth processing.
Figure 5. Overall accuracy within the canopy threshold (Cth)-affected (n = 161, 33.4%) and not-affected (n = 361, 66.6%) subsets of the test set (n = 542) for RF, CNN noVIs (without vegetation indices, five bands), and CNN withVIs (after fusing 8 VIs to tensor pixels, 13 bands) on the original (noCth) and Cth-applied (withCth) datasets.
Figure 6. Overall accuracy of species classification at different Cth-affection rates (%), where 0 refers to not-Cth-affected tensors; because tensors are 10 × 10 pixels, the percentages also correspond to pixel counts. The italic "n =" shows the number of test tensors in each bin. noVIs: without fusing vegetation indices (VIs) into the CNN; withVIs: with fused VIs.
Figure 7. Overall accuracy of species classification by seedling height (m). The italic "n =" indicates the number of tensors in each height bin. noVIs: without fusing vegetation indices (VIs) into the CNN; withVIs: with fused VIs.
Figure 8. Summary of species classification accuracies in the normalized confusion matrix, together with overall accuracy and kappa values. RF: random forest classifier; CNN: convolutional neural network; VIs: vegetation indices. Values inside each cell show recall, the proportion of a class classified correctly over the total observations of that class.
Figure 9. Overall accuracy of species classification at different Cth-affection rates (%), where 0 refers to not-Cth-affected tensors. The italic "n =" shows the number of tensors in each bin. noVIs: without fusing vegetation indices (VIs) into the CNN; withVIs: with fused VIs.
Figure 10. Overall accuracy of species classification by seedling height (m). The italic "n =" shows the number of tensors in each height bin. noVIs: without fusing vegetation indices (VIs) into the CNN; withVIs: with fused VIs.
Figure 11. Summary of species classification accuracies in the normalized confusion matrix, including overall accuracy and kappa values. RF: random forest classifier; CNN: convolutional neural network; VIs: vegetation indices. Values inside each cell are recall.
Figure 12. The five most important and not inter-correlated (<0.8) features selected for RF on the noCth and withCth datasets: correlation matrix (upper right), scatter plots with fitted regressions (lower left), and histograms (diagonal).
Figure A1. Training and validation accuracy at every epoch for the CNN models on the noCth and withCth datasets.
20 pages, 5791 KiB  
Article
How Sensitive Is Thermal Image-Based Orchard Water Status Estimation to Canopy Extraction Quality?
by Livia Katz, Alon Ben-Gal, M. Iggy Litaor, Amos Naor, Aviva Peeters, Eitan Goldshtein, Guy Lidor, Ohaliav Keisar, Stav Marzuk, Victor Alchanatis and Yafit Cohen
Remote Sens. 2023, 15(5), 1448; https://doi.org/10.3390/rs15051448 - 4 Mar 2023
Cited by 3 | Viewed by 2998
Abstract
Accurate canopy extraction and temperature calculations are crucial to minimizing inaccuracies in thermal image-based estimation of orchard water status. Currently, no quantitative comparison of canopy extraction methods exists in the context of precision irrigation. The accuracies of four canopy extraction methods were compared, and their effect on water status estimation was explored: 2-pixel erosion (2PE), where non-canopy pixels were removed by thresholding and morphological erosion; edge detection (ED), where edges were identified and morphologically dilated; vegetation segmentation (VS), using temperature histogram analysis and spatial watershed segmentation; and RGB binary masking (RGB-BM), where a binary canopy layer was statistically extracted from an RGB image for thermal image masking. The field experiments were conducted in a four-hectare commercial peach orchard during the primary fruit growth stage (III). The relationship between stem water potential (SWP) and the crop water stress index (CWSI) was established in 2018. During 2019, a large dataset of ten thermal infrared and two RGB images was acquired. The canopy extraction methods had different accuracies: on 12 August, the overall accuracy was 83% for the 2PE method, 77% for ED, 84% for VS, and 90% for RGB-BM. Despite the high accuracy of the RGB-BM method, canopy edges and between-row weeds were misidentified as canopy. Canopy temperature and CWSI were calculated using the average of 100% of canopy pixels (CWSI_T100%) and the average of the coolest 33% of canopy pixels (CWSI_T33%). The CWSI_T33% dataset produced similar SWP–CWSI models irrespective of the canopy extraction method used, while CWSI_T100% yielded different and inferior models. The results highlighted the following: (1) the contribution of the RGB images is not significant for canopy extraction; canopy pixels can be extracted with high accuracy and reliability solely from thermal images; (2) the T33% approach to canopy temperature calculation is more robust and superior to the simple mean of all canopy pixels. These findings are a step forward in implementing thermal imagery in precision irrigation management.
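A minimal sketch of the CWSI_T33% computation, using the definitions given in the figure captions (Twet = coolest 5% of canopy pixels, taken here as their mean, and Tdry = Tair + 2 °C). The input file and air temperature are placeholders, and treating the percentile pools as means is an assumption.

```python
import numpy as np

canopy_t = np.load("canopy_pixels_degC.npy").ravel()  # extracted canopy temps
t_air = 31.0                                          # assumed air temperature

t_sorted = np.sort(canopy_t)
t33 = t_sorted[: max(1, int(0.33 * t_sorted.size))].mean()    # coolest 33%
t_wet = t_sorted[: max(1, int(0.05 * t_sorted.size))].mean()  # coolest 5%
t_dry = t_air + 2.0

cwsi_t33 = (t33 - t_wet) / (t_dry - t_wet)
print(f"CWSI_T33% = {cwsi_t33:.2f}")
```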
(This article belongs to the Special Issue Crops and Vegetation Monitoring with Remote/Proximal Sensing)
Figure 1. Mishmar Hayarden peach orchard (green line) divided into 22 management cells (MCs) (black dashed squares).
Figure 2. Data acquisition and analysis of orchard canopy extraction accuracy, canopy temperature, and orchard water status using the 2-pixel erosion (2PE), edge detection (ED), vegetation segmentation (VS), and RGB binary masking (RGB-BM) canopy extraction methods (green boxes). Canopy temperature per management cell (MC) was calculated using the average of 100% of canopy pixels (T100%) (orange boxes) and the average of the coolest 33% of canopy pixels (T33%) (blue boxes). Orchard water status was estimated using the crop water stress index (CWSI) and the estimated stem water potential (SWPe). The SWPe was based on a tree-scale stem water potential (SWP)–CWSI relationship established for each canopy extraction method and each canopy temperature calculation approach.
Figure 3. Canopy area (m²) per management cell (MC) (black dots) and box plot (red) per day of image acquisition (21 July–26 August 2019). The black horizontal line is the grand mean. The green boxes indicate data from 21 July and 12 August for the 2-pixel erosion (2PE), edge detection (ED), and vegetation segmentation (VS) methods. The RGB binary masking (RGB-BM) method was performed only on these dates. Note: the Y-axis range of the VS method differs from the other methods.
Figure 4. Overall accuracy (blue bars) of canopy/non-canopy classification and precision (red bars), recall (yellow bars), and F1-score (grey bars) of canopy classification, as measured with a confusion matrix per date, for the 2-pixel erosion (2PE), edge detection (ED), vegetation segmentation (VS), and RGB binary masking (RGB-BM) canopy extraction methods.
Figure 5. Canopy temperature histogram of the whole orchard for the 2-pixel erosion (2PE), edge detection (ED), vegetation segmentation (VS), and RGB binary masking (RGB-BM) canopy extraction methods on 12 August 2019. Images of all extracted canopy temperature pixels (T100%) of management cell (MC) 5 (left image column) and the highlighted (turquoise) coolest 33% of canopy temperature pixels (T33%) (right image column) for all canopy extraction methods.
Figure 6. Canopy temperature (°C) calculated from the average of 100% (T100%) and from the average of the coolest 33% (T33%) of canopy pixels per management cell (MC) between 21 July and 26 August 2019 for the canopy extraction methods: 2-pixel erosion (2PE) (turquoise), edge detection (ED) (dark blue), vegetation segmentation (VS) (coral), and RGB binary masking (RGB-BM) (brick red).
Figure 7. Crop water stress index (CWSI) with Tcanopy (°C) calculated using the average of 100% (CWSI_T100%) and the average of the coolest 33% (CWSI_T33%) of canopy pixels. Twet = coolest 5% of canopy pixels; Tdry = Tair + 2 °C. Values per management cell (MC) between 21 July and 26 August 2019 for the canopy extraction methods: 2-pixel erosion (2PE) (turquoise), edge detection (ED) (dark blue), vegetation segmentation (VS) (coral), and RGB binary masking (RGB-BM) (brick red). The table insert shows the air temperature (Tair, °C) values.
Figure 8. Linear regression models of SWP and CWSI for the 2-pixel erosion (2PE), edge detection (ED), vegetation segmentation (VS), and RGB binary masking (RGB-BM) canopy extraction methods. CWSI with Tcanopy (°C) calculated using the average of 100% (CWSI_T100%) (red points and lines) and the average of the coolest 33% (CWSI_T33%) (blue points and lines) of canopy pixels. Twet = coolest 5% of canopy pixels; Tdry = Tair + 2 °C. Each point represents a measurement tree (n = 15).
Figure 9. Histogram of the difference between the measured and estimated stem water potential (SWPe), calculated using the canopy temperature data of the average of 100% (SWPe_T100%) (pink bars) and the average of the coolest 33% (SWPe_T33%) (blue bars) of canopy pixels, for the 2-pixel erosion (2PE), edge detection (ED), vegetation segmentation (VS), and RGB binary masking (RGB-BM) canopy extraction methods. The frequency refers to the number of management cells (MCs). The table insert provides the descriptive statistics of each dataset. Note: the Y-axis range of the RGB-BM method differs from the other methods.
Figure 10. Histogram of the percentage of estimated stem water potential (SWPe) (MPa) values, calculated using the canopy temperature data of the average of 100% (SWPe_T100%) and the average of the coolest 33% (SWPe_T33%) of canopy pixels, relative to the defined optimal SWP range for stage III: upper (−1.17 MPa, blue dashed line) and lower (−1.43 MPa, red dashed line) thresholds. Below-range SWP values indicate orchard stress, while above-range values indicate theoretical over-irrigation. The canopy extraction methods tested were 2-pixel erosion (2PE, turquoise polygon), edge detection (ED, blue polygon), vegetation segmentation (VS, pink polygon), and RGB binary masking (RGB-BM, red polygon).
11 pages, 2143 KiB  
Communication
Measurement of Overlapping Leaf Area of Ice Plants Using Digital Image Processing Technique
by Bolappa Gamage Kaushalya Madhavi, Anil Bhujel, Na Eun Kim and Hyeon Tae Kim
Agriculture 2022, 12(9), 1321; https://doi.org/10.3390/agriculture12091321 - 27 Aug 2022
Cited by 10 | Viewed by 3010
Abstract
Non-destructive and destructive leaf area estimation are critical in plant physiological and ecological experiments. In modern agriculture, ubiquitous digital cameras and scanners are largely replacing traditional leaf area measurements, and measuring leaflet dimensions is integral to analysing plant photosynthesis and growth. Leaf dimension assessment with image processing is now widely used. This investigation employed an image segmentation algorithm to classify ice plant (Mesembryanthemum crystallinum L.) canopy images, using a threshold segmentation technique based on a grey colour model and the degree of green colour in the HSV (hue, saturation, value) model. Notably, the segmentation technique is used to separate suitable surfaces from a defective, noisy background. In this work, the canopy area was measured by pixel-count statistics relative to a reference object of known area. Furthermore, this paper proposed destructive total leaf area estimation with a computer-coordinating area curvimeter and, lastly, evaluated the overlapping percentage using the total leaf area and canopy area measurements. To assess the overlapping percentage, the proposed algorithm and the curvimeter method were applied to 24 images of ice plants. The obtained results reveal that the overlapping percentage is less than 10%, as evidenced by the difference between the curvimeter results and the proposed algorithm's canopy leaf area results. Furthermore, the results show a strong correlation between the canopy and total leaf area (R2: 0.99) calculated by the proposed method. This overlapping leaf area finding offers a significant contribution to crop evaluation by using computational techniques to make monitoring easier.
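A minimal sketch of the pixel-count area calculation, assuming binary masks for the canopy and for a reference object of known area placed in the same scene; the file names and the 25 cm² reference value are placeholders.

```python
import numpy as np

canopy_mask = np.load("canopy_mask.npy")   # boolean, from segmentation
ref_mask = np.load("reference_mask.npy")   # boolean, known-area marker
REF_AREA_CM2 = 25.0                        # true area of the reference object

cm2_per_pixel = REF_AREA_CM2 / ref_mask.sum()     # scale from pixel to cm^2
canopy_area_cm2 = canopy_mask.sum() * cm2_per_pixel
print(f"canopy area = {canopy_area_cm2:.1f} cm^2")
```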
(This article belongs to the Section Digital Agriculture)
Figure 1. The ice plants (M. crystallinum) in the plant factory system under controlled environmental conditions.
Figure 2. Flowchart of the measurement of the overlapping percentage (%) of ice plant leaves.
Figure 3. Image acquisition system for the ice plant canopy.
Figure 4. Segmented image of the ice plant canopy (a), inverted binary ice plant canopy image (b), segmented reference image (c), inverted binary reference image (d).
Figure 5. Calculation of the total leaf area of ice plants (M. crystallinum) using the curvimeter method.
Figure 6. Correlation between canopy area and total leaf area.
15 pages, 7092 KiB  
Article
A Fruit Colour Development Index (CDI) to Support Harvest Time Decisions in Peach and Nectarine Orchards
by Alessio Scalisi, Mark G. O’Connell, Muhammad S. Islam and Ian Goodwin
Horticulturae 2022, 8(5), 459; https://doi.org/10.3390/horticulturae8050459 - 19 May 2022
Cited by 20 | Viewed by 4765
Abstract
Fruit skin colour is one of the most important visual fruit quality parameters driving consumer preferences. Proximal sensors such as machine vision cameras can be used to detect the skin colour of fruit visible in collected images, but their accuracy in variable orchard light conditions remains a practical challenge. This work aimed to derive a new fruit skin colour attribute, a Colour Development Index (CDI) ranging from 0 to 1 that intuitively increases as fruit becomes redder, to assess colour development in peach and nectarine fruit skin. CDI measurements were generated from high-resolution images collected on both east and west sides of the canopies of three peach cultivars and one nectarine cultivar using the commercial mobile platform Cartographer (Green Atlas). Fruit colour (RGB values) was extracted from the central pixels of detected fruit and converted into a CDI. The repeatability of CDI measurements under different light environments was tested by scanning orchards at different times of the day. The effects of cultivar and canopy side on the CDI were also determined. CDI data were related to the index of absorbance difference (IAD), an index of chlorophyll degradation correlated with ethylene emission, and the CDI response to time from harvest was modelled. The CDI was significantly altered only when measurements were taken in the middle of the morning or the middle of the afternoon, when the presence of the sun in the image caused significant alteration of image brightness. The CDI was tightly related to IAD, and CDI values plateaued (0.833 ± 0.009) at IAD ≤ 1.20 (climacteric onset) in 'Majestic Pearl' nectarine, suggesting that CDI thresholds could be used for harvest time decisions and to support logistics. To obtain comparable CDI datasets for studying colour development or forecasting harvest time, it is recommended to scan peach and nectarine orchards at night, in the early morning, at solar noon, or in the late afternoon. This study found that the CDI can serve as a standardised and objective skin colour index for peaches and nectarines.
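The CDI formula itself is not given in this listing, so the sketch below is purely illustrative: it maps the hue of a fruit's central-pixel RGB onto a 0–1 scale that increases toward red, which matches the stated behaviour of the CDI but is not the published definition from Scalisi et al.

```python
import colorsys

def redness_index(r: int, g: int, b: int) -> float:
    """Illustrative 0-1 redness score from 8-bit RGB (red -> 1, green -> 0)."""
    h, _, _ = colorsys.rgb_to_hsv(r / 255.0, g / 255.0, b / 255.0)
    h_deg = h * 360.0
    if h_deg > 180.0:                       # fold magentas back toward red
        h_deg = 360.0 - h_deg
    return max(0.0, 1.0 - h_deg / 120.0)    # 0 at or beyond green (120 deg)

print(redness_index(200, 40, 40))   # ripe red skin -> close to 1
print(redness_index(60, 180, 60))   # green skin -> 0
```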
(This article belongs to the Special Issue Precision Management of Fruit Trees)
Figure 1. CIELAB colour space and representation of hue angle (h°) and Colour Development Index (CDI) values around the colour wheel (adapted with permission from Scalisi et al. [8]).
Figure 2. Expected Colour Development Index (CDI) response in peach fruit with different skin pigmentation.
Figure 3. Spatial map of the 'O'Henry' (60 trees), 'Snow Flame 23' (30 trees), 'Snow Flame 25' (30 trees), and 'August Bright' (60 trees) experimental plots and tree positions in the stonefruit experimental orchard at the Tatura SmartFarm.
Figure 4. Spatial map of four row orientations and 720 'Majestic Pearl' nectarine trees (180 trees per row orientation block) in the Sundial orchard at the Tatura SmartFarm.
Figure 5. Average fruit Colour Development Index (CDI) per image in ready-to-harvest 'Snow Flame 23' peach fruit scanned at (A) 430, (B) 715, (C) 1015, (D) 1315, (E) 1615, (F) 1915, and (G) 2130 h (AEDT) on 14 December 2021. Images were collected on the east and west sides of the canopy. Brightness and contrast of the original images were increased by 50% to improve visualisation. In panel (H), detected fruit are shown with red detection boxes in a zoomed-in image.
Figure 6. Daily trends of the Colour Development Index (CDI) in fruit of (A) 'O'Henry', (B) 'Snow Flame 23', and (C) 'Snow Flame 25' peaches, and (D) 'August Bright' nectarine on the east and west sides of the canopies. Error bars represent 95% confidence intervals of the estimates.
Figure 7. Daily trend of the fruit Colour Development Index (CDI) in peach and nectarine cultivars with east- and west-exposed fruit pooled together. Grey bands highlight night-time before sunrise and after sunset; the dashed vertical line represents solar noon. Error bars represent 95% confidence intervals of the estimates, and different letters show significant differences (p < 0.05) after a Bonferroni post hoc test.
Figure 8. Natural logarithm of ethylene emission (ETHln) and corresponding index of absorbance difference (IAD) in a sample of 123 'Majestic Pearl' nectarine fruit collected at different times before and after harvest in 2020–2021. The horizontal dashed line shows ETHln = 0; vertical error bars represent standard errors of ETHln. The fit is a piecewise regression with a three-segment line. The vertical red line represents the identified IAD threshold for climacteric onset.
Figure 9. Polynomial regression fit (black line) describing the relationship between the Colour Development Index (CDI) and the index of absorbance difference (IAD) in 'Majestic Pearl' nectarine fruit in 2021–2022. Points represent medians of experimental units (row orientation arm) at different times from harvest (n = 80 per experimental unit); horizontal and vertical bars show standard errors of the means; green lines show 95% confidence intervals. Model equation: CDI = 0.835 (0.004) − 0.030 (0.002) × IAD² × ln(IAD); p < 0.001; R² = 0.905; S.E. = 0.015.
Figure 10. Cubic regression fit (black line) of the Colour Development Index (CDI) against time (days from harvest, DfH) in 'Majestic Pearl' nectarine fruit measured in 2021–2022. Points represent means of experimental units (row orientation arm; n = 4) at different times from harvest, and vertical bars show standard errors of the means. The red horizontal line represents the predicted CDI value at harvest (DfH = 0), and green lines represent the standard errors of the prediction. Model equation: CDI = 0.833 + 0.002 × DfH − 0.0003 × DfH² − 1.04 × 10⁻⁵ × DfH³; p < 0.001; R² = 0.973; S.E. = 0.011.
15 pages, 8372 KiB  
Article
Improved Forest Canopy Closure Estimation Using Multispectral Satellite Imagery within Google Earth Engine
by Bo Xie, Chunxiang Cao, Min Xu, Xinwei Yang, Robert Shea Duerler, Barjeece Bashir, Zhibin Huang, Kaimin Wang, Yiyu Chen and Heyi Guo
Remote Sens. 2022, 14(9), 2051; https://doi.org/10.3390/rs14092051 - 25 Apr 2022
Cited by 4 | Viewed by 2985
Abstract
The large-area estimation of forest canopy closure (FCC) using remotely sensed data is of high interest for monitoring forest changes and forest health, as well as for assessing forest ecological services. Accurate estimation of FCC at regional or global scales is challenging due to the difficulty of sample acquisition and the slow processing of large amounts of remote sensing data. To address this issue, we developed a novel bounding envelope methodology based on vegetation indices (BEVIs) for determining vegetation and bare soil endmembers, using the normalized difference vegetation index (NDVI), modified bare soil index (MBSI), and bare soil index (BSI) derived from Landsat 8 OLI and Sentinel-2 imagery within the Google Earth Engine (GEE) platform, and then combined the NDVI with the dimidiate pixel model (DPM), one of the most commonly used spectral unmixing methods, to map the FCC distribution over an area of more than 90,000 km². The key processing step was determining the threshold parameter in BEVIs that characterizes the spectral boundary of the vegetation and soil endmembers. The results demonstrated that when the threshold equals 0.1, within the given range (0, 0.3), the extraction accuracy of vegetation and bare soil endmembers is highest, and the estimated spatial distribution of FCC using both Landsat 8 and Sentinel-2 images was consistent: the area with high canopy density was mainly distributed in the western mountainous region of Chifeng city. Verification was carried out using independent field plots. The proposed approach yielded reliable results with Landsat 8 data (R2 = 0.6, RMSE = 0.13, and 1-rRMSE = 80%), and the accuracy further improved with the higher-spatial-resolution Sentinel-2 images (R2 = 0.81, RMSE = 0.09, and 1-rRMSE = 86%). The findings demonstrate that the proposed method is portable among sensors with similar spectral wavebands and can support mapping FCC at a regional scale using multispectral satellite imagery.
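A minimal sketch of the dimidiate pixel model step, in which per-pixel NDVI is rescaled between soil and full-vegetation endmembers; the endmember constants below are placeholders, whereas in the paper they come from the BEVIs envelope analysis.

```python
import numpy as np

ndvi = np.load("ndvi.npy")        # per-pixel NDVI
ndvi_soil = 0.10                  # bare-soil endmember (assumed placeholder)
ndvi_veg = 0.85                   # full-canopy endmember (assumed placeholder)

# Dimidiate pixel model: fractional canopy cover between the two endmembers
fcc = (ndvi - ndvi_soil) / (ndvi_veg - ndvi_soil)
fcc = np.clip(fcc, 0.0, 1.0)      # constrain to the physically valid range
```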
(This article belongs to the Special Issue Environmental Health Diagnosis Based on Remote Sensing)
Figure 1. The regional division (a), the location of the study area and field survey samples (b), the terrain (c), and the false-colour (RGB: 5-4-3) composite of Landsat 8 OLI (d).
Figure 2. Pictures from the field survey of forest canopy closure in September 2019 in Chifeng city ((a–c) refer to Populus spp., Larix spp., and Pinus tabulaeformis, respectively).
Figure 3. Workflow of forest canopy closure estimation (NDVI: normalized difference vegetation index; MBSI: modified bare soil index; BSI: bare soil index; BEVIs: bounding envelope method based on vegetation indices).
Figure 4. Curve of model accuracy variation with the k value.
Figure 5. Canopy closure estimation results using Landsat 8 images based on BEVIs.
Figure 6. Canopy closure estimation results using Sentinel-2 images based on BEVIs.
Figure 7. Accuracy assessment of the canopy closure prediction using Landsat 8 satellite images.
Figure 8. Accuracy assessment of the canopy closure prediction using Sentinel-2 satellite images.
20 pages, 10468 KiB  
Article
Retrieval of Crop Variables from Proximal Multispectral UAV Image Data Using PROSAIL in Maize Canopy
by Erekle Chakhvashvili, Bastian Siegmann, Onno Muller, Jochem Verrelst, Juliane Bendig, Thorsten Kraska and Uwe Rascher
Remote Sens. 2022, 14(5), 1247; https://doi.org/10.3390/rs14051247 - 3 Mar 2022
Cited by 27 | Viewed by 4995
Abstract
Mapping crop variables at different growth stages is crucial to inform farmers and plant breeders about crop status. For mapping purposes, inversion of canopy radiative transfer models (RTMs) is a viable alternative to parametric and non-parametric regression models, which often lack transferability in time and space. Due to the physical nature of RTMs, inversion outputs can be delivered in sound physical units that reflect the underlying processes in the canopy. In this study, we explored the capabilities of the coupled leaf–canopy RTM PROSAIL, applied to high-spatial-resolution (0.015 m) multispectral unmanned aerial vehicle (UAV) data, to retrieve the leaf chlorophyll content (LCC), leaf area index (LAI), and canopy chlorophyll content (CCC) of sweet and silage maize throughout one growing season. Two retrieval methods were tested: (i) applying the RTM inversion scheme to mean reflectance data derived from single breeding plots (mean reflectance approach) and (ii) applying the same inversion scheme to an orthomosaic to retrieve the target variables separately for each pixel of the breeding plots (pixel-based approach). For LCC retrieval, soil and shaded pixels were removed by simple vegetation index thresholding. Retrieval of LCC from UAV data yielded promising results compared to ground measurements (sweet maize RMSE = 4.92 µg/m², silage maize RMSE = 3.74 µg/m²) when using the mean reflectance approach. LAI retrieval was more challenging due to the blending of sunlit and shaded pixels in the UAV data, but worked well at the early developmental stages (sweet maize RMSE = 0.70 m²/m², silage maize RMSE = 0.61 m²/m² across all dates). CCC retrieval benefited significantly from the pixel-based approach compared to the mean reflectance approach (RMSE decreased from 45.6 to 33.1 µg/m²). We argue that high-resolution UAV imagery is well suited for LCC retrieval, as shadows and background soil can be precisely removed, leaving only green plant pixels for the analysis. Retrieving LAI, by contrast, proved challenging for the two distinct maize varieties, which were characterized by contrasting canopy geometry.
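A minimal sketch of the LUT inversion step from the workflow: each observed spectrum is matched to the look-up-table entry with the lowest RMSE, and that entry's parameters are returned. The LUT here is filled with random placeholders; in practice it would hold PROSAIL simulations over Latin-hypercube parameter draws (for example via the prosail Python package).

```python
import numpy as np

def invert_pixel(obs_refl, lut_refl, lut_params):
    """Return the LUT parameter vector whose simulated spectrum best
    matches the observed reflectance (minimum RMSE over bands)."""
    rmse = np.sqrt(np.mean((lut_refl - obs_refl) ** 2, axis=1))
    return lut_params[np.argmin(rmse)]

# Illustrative shapes: 50k LUT entries, 6 spectral bands, 3 target variables
lut_refl = np.random.rand(50_000, 6)     # placeholder simulated spectra
lut_params = np.random.rand(50_000, 3)   # corresponding parameter draws
obs = np.random.rand(6)                  # one pixel's observed reflectance
print(invert_pixel(obs, lut_refl, lut_params))
```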
Figure 1. Map of the maize trial in the PhenoRob Central Experiment at the agricultural research station Campus Klein-Altendorf. Two-row sweet maize plots are depicted in green, one-row sweet maize plots in blue, and silage maize plots in yellow gradient colours. The inset map shows the location of the experimental field within Germany.
Figure 2. General workflow for the retrieval of LAI and chlorophyll content using different software packages: (1) conversion of raw images to radiance, (2) scene reconstruction in photogrammetric software, (3) application of the ELM, (4) soil/shadow removal for pigment retrieval, (5) LUT construction and inversion, and (6–7) application of the inversion scheme using the two different approaches. LHS: Latin hypercube sampling.
Figure 3. Maps of LAI, LCC, and CCC on the acquisition dates. RGB orthomosaics are depicted at 0.015 m spatial resolution. The first RGB map (23 June) displays the separation between the two maize types. For fast processing, LAI and CCC maps were created using 0.09 m resolution orthomosaics. LCC maps are displayed at the original resolution of 0.015 m.
Figure 4. Comparison of predicted mean LAI, LCC, and CCC values per subplot to the reference measurements throughout the growing season. The first row shows the inversion results for the mean reflectance approach applied to the sweet maize plots (A–C), the second row the silage maize plots (D–F), and the third row both maize types (G–I). rRMSE plots for each date and variable are displayed in the lowermost row (J–L).
Figure 5. Comparison of predicted mean LAI, LCC, and CCC values per subplot to the reference measurements throughout the growing season. The first row shows the inversion results for the pixel-based approach applied to the sweet maize plots (A–C), the second row the silage maize plots (D–F), and the third row both maize types (G–I). rRMSE plots for each date and variable are displayed in the lowermost row (J–L).
Figure 6. LAI retrieval using the two approaches (left: mean reflectance; right: pixel-based) for early and late growth stages. BBCH principal growth stages 1, 3, and 5 correspond to leaf development, stem elongation, and the start of inflorescence; stages 6, 7, and 8 correspond to flowering, fruit development, and ripening.
Figure A1. SunScan probe placement (left) and SPAD measurement locations (right). Thick green lines represent maize rows.
Figure A2. Map of the irradiance measurements taken by the DLS for each image on 14 July 2021, depicted for one band only.
15 pages, 2546 KiB  
Article
A New Threshold-Based Method for Extracting Canopy Temperature from Thermal Infrared Images of Cork Oak Plantations
by Linqi Liu, Yingchao Xie, Xiang Gao, Xiangfen Cheng, Hui Huang and Jinsong Zhang
Remote Sens. 2021, 13(24), 5028; https://doi.org/10.3390/rs13245028 - 10 Dec 2021
Cited by 4 | Viewed by 2341
Abstract
Canopy temperature (Tc) is used to characterize plant water physiology, and thermal infrared (TIR) remote sensing is a convenient technology for measuring Tc in forest ecosystems. However, the images produced by this method contain background pixels of forest gaps, reducing the accuracy of Tc observations. Extracting Tc data from TIR images is therefore of great significance for understanding changes in ecosystem water status. In this study, a temperature threshold method was developed to rapidly, accurately, and automatically extract forest canopy pixels for obtaining Tc data. Specifically, the method takes as the segmentation threshold the temperature at the point with a slope of 0.5 on the curve of normalized average temperature against normalized cumulative number of pixels; this threshold separates forest gap pixels from forest canopy pixels in the TIR images, and the separated forest canopy pixels are then extracted by their pixel coordinates to obtain Tc data. Taking Tc values measured with a thermocouple as the standard, Tc extraction using the new temperature threshold method was compared with traditional methods (the Otsu algorithm and direct extraction) in cork oak plantations. The results showed that the temperature threshold method offered the highest extraction accuracy, followed by the direct extraction method and the Otsu algorithm. The temperature threshold method was thus found to be the most suitable for extracting Tc data from TIR images of cork oak plantations.
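A minimal sketch of the slope-based threshold, assuming the TIR pixel temperatures are available as a flat array; the exact slope search and the choice of which side of the threshold is canopy are assumptions.

```python
import numpy as np

t = np.sort(np.load("tir_pixels_degC.npy").ravel())   # ascending temperatures
t_avg = np.cumsum(t) / np.arange(1, t.size + 1)       # running average T

x = np.linspace(0.0, 1.0, t.size)                     # normalized pixel count
y = (t_avg - t_avg.min()) / (t_avg.max() - t_avg.min())  # normalized T_average

slope = np.gradient(y, x)
threshold = t[np.argmin(np.abs(slope - 0.5))]         # T where slope is ~0.5

# Which side of the threshold is canopy depends on whether the gaps read
# warmer or cooler than the crowns in the scene being processed.
canopy_pixels = t[t >= threshold]
```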
(This article belongs to the Section Remote Sensing in Agriculture and Vegetation)
Graphical abstract
Figure 1. Location of the experimental site.
Figure 2. TIR image of a cork oak plantation taken at 12:00 on 1 September using a FLIR A310F (yellow and light blue pixels represent the forest canopy; dark blue and black pixels represent forest gaps; the bottom bar is the pixel temperature (Tpixel) scale).
Figure 3. Curve of the average temperature (Taverage) against the cumulative number of pixels (CNOP) in a TIR image. The red lines mark the boundary between forest gap pixels and forest canopy pixels: the area within the red lines shows the change of Taverage with CNOP in the forest gap, and the area outside the red lines shows the change of Taverage with CNOP in the forest canopy.
Figure 4. Image segmented by the temperature threshold method (green pixels represent the forest canopy; black pixels represent forest gaps).
Figure 5. Image segmented using the Otsu algorithm (green pixels represent the forest canopy; black pixels represent forest gaps).
Figure 6. Differences between Tc values extracted using the temperature threshold method, the Otsu algorithm, and the direct extraction method. Panel (a) shows the differences between Tc extracted using the temperature threshold method and the Otsu algorithm; panel (b) shows the differences between Tc extracted using the temperature threshold method and the direct extraction method. The red straight line is the linear regression; the equation and R² give the fitted equation and the coefficient of determination, respectively. ** represents p < 0.01.
9 pages, 3913 KiB  
Technical Note
A Semi-Automatic Workflow to Extract Irregularly Aligned Plots and Sub-Plots: A Case Study on Lentil Breeding Populations
by Thuan Ha, Hema Duddu, Kirstin Bett and Steve J. Shirtliffe
Remote Sens. 2021, 13(24), 4997; https://doi.org/10.3390/rs13244997 - 9 Dec 2021
Viewed by 2420
Abstract
Plant breeding experiments typically contain a large number of plots, and obtaining phenotypic data is an integral part of most studies. Image-based plot-level measurements may not always produce adequate precision and will require sub-plot measurements. To perform image analysis on individual sub-plots, they must be segmented from plots, other sub-plots, and surrounding soil or vegetation. This study introduces a semi-automatic workflow to segment irregularly aligned plots and sub-plots in breeding populations. Imagery from a replicated lentil diversity panel phenotyping experiment with 324 populations was used. Image-based techniques using a convolution filter on an excess green index (ExG) were used to enhance and highlight plot rows and thus locate the plot center. Multi-threshold and watershed segmentation were then combined to separate plants, ground, and sub-plots within plots. Local maxima algorithms and pixel resizing with surface tension parameters were used to detect the centers of sub-plots. A total of 3489 reference data points were collected on 30 random plots for accuracy assessment. All plots and sub-plots were successfully extracted, with an overall plot extraction accuracy of 92%. Our methodology addressed common issues in plot segmentation, such as plot alignment and overlapping canopies in field experiments. The ability to segment and extract phenometric information at the sub-plot level provides opportunities to improve the precision of image-based phenotypic measurements at field scale.
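A minimal sketch of the ExG computation and vegetation mask; the standard definition ExG = 2g − r − b on chromatic coordinates is used here, though whether the authors normalised the channels first is not stated in this listing.

```python
import numpy as np

rgb = np.load("orthomosaic_rgb.npy").astype(float)   # (H, W, 3) image array
s = rgb.sum(axis=2) + 1e-9                           # per-pixel channel sum
r, g, b = (rgb[..., i] / s for i in range(3))        # chromatic coordinates

exg = 2.0 * g - r - b                                # excess green index
exg_mask = exg > 0                                   # vegetation pixels
```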
Figure 1. (A) The lentil breeding trial within the field boundary (red) in RGB colour. (B) Subset of the orthomosaic showing a sample crop row, plot, and sub-plot. (C) Excess green vegetation index (ExG) map of the subset.
Figure 2. The overall workflow for plot and sub-plot extraction from UAV colour (RGB) imagery. ExG: excess green index; ExG_mask: ExG > 0; ExG_conv: ExG enhanced using a convolution filter; RGB: image in red, green, and blue composite.
Figure 3. Example subsets illustrating the plot extraction process. (A) RGB image of the plots. (B) ExG index. (C) ExG after the convolution filter (ExG_convo). (D) Center row detection (in red). (E) Row mask (row_mask). (F) ExG_convo after masking out row gaps to enhance lentil plots (ExG_enhanced). (G) Plot segmentation boundary (blue line). (H) Plot boundary formation using pixel-based object resizing and plot center (purple point). (I) Final plot boundary.
Figure 4. Example illustrating sub-plot extraction from the plot boundary and ExG index. (A) Plot center and middle line. (B) Plot regions: middle, top, and bottom. (C) Watershed segmentation output. (D) Final sub-plot boundary map layer.
Figure 5. Output maps of (A) plot boundaries (yellow boxes) and (B) sub-plots, with individual rows separated by colour, and (C) vector maps (plot and sub-plot) of the whole experiment. The magnified regions are in the red boxes.
25 pages, 14050 KiB  
Article
Individual Tree Detection from UAV Imagery Using Hölder Exponent
by Elena Belcore, Anna Wawrzaszek, Edyta Wozniak, Nives Grasso and Marco Piras
Remote Sens. 2020, 12(15), 2407; https://doi.org/10.3390/rs12152407 - 27 Jul 2020
Cited by 18 | Viewed by 4637
Abstract
This article explores the application of Hölder exponent analysis for the identification and delineation of single tree crowns from very high-resolution (VHR) imagery captured by unmanned aerial vehicles (UAVs). Most present individual tree crown detection (ITD) methods are based on canopy height models (CHMs) and are very effective as long as an accurate digital terrain model (DTM) is available. This prerequisite is hard to satisfy in some environments, such as alpine forests, because of high tree density and irregular topography; in such conditions, the photogrammetrically derived DTM can be inaccurate. A novel image processing method supports the segmentation of crowns based only on a parameter related to the multifractal description of the image. In particular, multifractality is related to the deviation from strict self-similarity and can be treated as information about the level of inhomogeneity of the data considered. Multifractals, although well established in image processing and recognized by the scientific community, represent a relatively new application to VHR aerial imagery. In this work, the Hölder exponent (one of the parameters related to the multifractal description) is applied to the study of a coniferous forest in the Western Alps. The infrared dataset with 10 cm pixels was captured by a UAV-mounted optical sensor. The tree crowns were then detected by a basic workflow: the image was thresholded on the basis of the Hölder exponent, and the single crowns were segmented through a multiresolution segmentation approach. The ITD segmentation was validated through a two-level analysis that included a visual evaluation and quantitative measures based on 200 reference crowns. The results were checked against ITD performed in the same area using only spectral, textural, and elevation information. Specifically, the visual assessment included estimation of the producer's and user's accuracies and the F1 score. The quantitative measures considered were the root mean square error (RMSE) (for area, perimeter, and distance between centroids), the over-segmentation and under-segmentation indices, the Jaccard index, and the completeness index. The F1 score indicates positive results (over 73%), as does the completeness index, which does not exceed 0.23 on a scale of 0 to 1, where 0 is the best possible result. The RMSE of crown extension is 3 m², only 14% of the average extension of the reference crowns. The performance of the segmentation based on the Hölder exponent outclasses those based on spectral, textural, and elevation information. Despite the good segmentation results, the method tends to under-segment rather than over-segment, especially in sloping areas. This study lays the groundwork for future research into ITD from VHR optical imagery using multifractals.
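A minimal sketch of a per-pixel Hölder exponent estimate: a capacity measure μ(ε) is computed over window sizes ε = 1, 3, 5, and α is the regression slope of log μ against log ε. The summed-intensity ("ISO"-style) measure and the window sizes are assumptions based on the Figure 3 caption, not the authors' exact implementation.

```python
import numpy as np
from scipy.ndimage import uniform_filter

img = np.load("band.npy").astype(float) + 1e-9   # keep values strictly positive
sizes = np.array([1, 3, 5])                      # neighborhood sizes eps_i

# Capacity measure per pixel: summed intensity in an eps x eps window
# (uniform_filter gives the window mean, so multiply by the window area).
log_mu = np.stack([
    np.log(uniform_filter(img, size=s) * s * s) for s in sizes
])
log_eps = np.log(sizes)

# Per-pixel least-squares slope: alpha = cov(log_eps, log_mu) / var(log_eps)
le = log_eps[:, None, None]
alpha = (
    ((le - log_eps.mean()) * (log_mu - log_mu.mean(axis=0))).sum(axis=0)
    / ((log_eps - log_eps.mean()) ** 2).sum()
)
```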
(This article belongs to the Special Issue Individual Tree Detection and Characterisation from UAV Data)
Figure 1. The study area in Cesana Torinese. The light blue circles are the check points (CPs) and the orange squares are the ground control points (GCPs).
Figure 2. Resulting RGN orthomosaic.
Figure 3. The procedure used to calculate the Hölder exponent α, adapted from Figure 1b in Aleksandrowicz et al. [51]. Here, m and n denote the pixel position on the image; μ_i^ISO is the capacity measure calculated using Equation (1) in a pixel neighborhood of size ε_i, where i = 1, 2, 3.
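For orientation, the pointwise exponent in the caption above is conventionally defined as follows; this is a standard multifractal formulation from the general literature, not a reproduction of the paper's Equation (1):

\[
\alpha(m,n) \;=\; \lim_{\varepsilon \to 0} \frac{\log \mu_{\varepsilon}(m,n)}{\log \varepsilon}
\]

In practice, α(m,n) is obtained as the least-squares slope of log μ_{ε_i}^ISO(m,n) against log ε_i over the three window sizes i = 1, 2, 3.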
Figure 4. Yellow points indicate the location of the reference crowns within the study area.
Figure 5. Possible cases of the relationship between reference crowns (blue border) and segmented crowns (red border). (a) Match. (b) Simple omission. (c) Omission through under-segmentation. (d) Commission through over-segmentation.
Figure 6. (a,d) Details of the RGB dataset; (b,e) details of the RGN dataset for the same areas presented in (a,d); (c,f) maps of the Hölder exponents determined for the areas presented in (a,d). The Hölder exponent layer is rendered in greyscale, where 0 is black and 1 is white. Shadows are mitigated and the single crowns are easily identified, as are the grasslands, which appear as large areas of low DNs.
Figure 7. Detail of the delineation of single crowns (red border) on the RGB orthomosaic. The red square in the bottom-right corner indicates the location of the sample area within the entire study area.
Figure 8. Distribution of the Jaccard index values (y-axis) according to crown size (x-axis).
Figure 9. Plot of the over-segmentation index (OS), under-segmentation index (US), completeness index (D), Jaccard index (J), and distance between centroids (CD) calculated on the Hölder exponent dataset and the validation datasets (spectral information, NDVI, sum variance textural information, CHM, and the mixed input data).
17 pages, 11917 KiB  
Article
Individual Tree Detection in a Eucalyptus Plantation Using Unmanned Aerial Vehicle (UAV)-LiDAR
by Juan Picos, Guillermo Bastos, Daniel Míguez, Laura Alonso and Julia Armesto
Remote Sens. 2020, 12(5), 885; https://doi.org/10.3390/rs12050885 - 10 Mar 2020
Cited by 63 | Viewed by 8747
Abstract
The present study addresses the tree counting of a Eucalyptus plantation, the most widely planted hardwood in the world. Unmanned aerial vehicle (UAV) light detection and ranging (LiDAR) was used for the estimation of Eucalyptus trees. LiDAR-based estimation of Eucalyptus is a challenge [...] Read more.
The present study addresses tree counting in a plantation of Eucalyptus, the most widely planted hardwood in the world. Unmanned aerial vehicle (UAV) light detection and ranging (LiDAR) was used to estimate the number of Eucalyptus trees. LiDAR-based estimation of Eucalyptus is a challenge due to the trees' irregular shape and multiple trunks. To overcome this difficulty, the layer of the point cloud containing the stems was automatically classified and extracted according to height thresholds, and those points were horizontally projected. Two different procedures were applied to these points. One is based on creating a buffer around each single point and combining the overlapping resulting polygons. The other consists of a two-dimensional raster calculated from a kernel density estimation with an axis-aligned bivariate quartic kernel. Results were assessed against a manual interpretation of the LiDAR point cloud. The two methods yielded detection rates (DR) of 103.7% and 113.6%, respectively. Results of applying a local maxima filter to the canopy height model (CHM) depend strongly on the algorithm and the CHM pixel size. Additionally, the height of each tree was calculated from the CHM; these estimates were sensitive to spatial resolution. A resolution of 2.0 m produced an R2 of 0.99 and a root mean square error (RMSE) of 0.34 m. A finer resolution of 0.5 m produced an R2 of 0.99 and an RMSE of 0.44 m. The quality of the results is a step toward precision forestry in eucalypt plantations. Full article
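A rough sketch of the first procedure (buffer and dissolve), assuming shapely is available; the buffer radius is a hypothetical value, not one taken from the paper.

```python
from shapely.geometry import Point
from shapely.ops import unary_union

def count_trees_by_buffering(stem_xy, radius=0.3):
    """Buffer each horizontally projected stem return by `radius` metres
    and merge overlapping circles; each resulting polygon is counted as
    one detected tree."""
    merged = unary_union([Point(x, y).buffer(radius) for x, y in stem_xy])
    # unary_union returns a Polygon for one cluster, MultiPolygon otherwise
    return len(getattr(merged, "geoms", [merged]))
```

The detection rate would then be the ratio of this count to the reference count from the manual interpretation.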
(This article belongs to the Special Issue Individual Tree Detection and Characterisation from UAV Data)
Figure 1. Location of the studied plots (aerial image from Plan Nacional de Ortofotografía Aérea (PNOA) 2016, https://pnoa.ign.es).
Figure 2. Flow chart for the three methods of individual tree detection.
Figure 3. Extraction of the stem layer in a sample row of trees: (a) point cloud; (b) stem layer extracted from the point cloud; (c) horizontal projection of the extracted layer.
Figure 4. Overview of the Method 1 and Method 2 creation steps: (a) point cloud of a sample row of trees; (b) dissolve applied to overlapping polygons obtained by buffering stem points; (c) standardized density grid on the horizontal projection of the points.
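The second procedure's density grid can be sketched as a product of 1-D quartic (biweight) kernels, K(u) = (15/16)(1 - u^2)^2 for |u| <= 1; cell size and bandwidth below are hypothetical, and tree positions would then be taken as local maxima of the standardized raster.

```python
import numpy as np

def quartic_density_grid(stem_xy, cell=0.25, bw=0.5):
    """Rasterised kernel density of projected stem returns using an
    axis-aligned bivariate quartic kernel (product of 1-D kernels)."""
    pts = np.asarray(stem_xy, dtype=float)
    (xmin, ymin), (xmax, ymax) = pts.min(0) - bw, pts.max(0) + bw
    gx, gy = np.meshgrid(np.arange(xmin, xmax, cell),
                         np.arange(ymin, ymax, cell))
    dens = np.zeros_like(gx)
    for x, y in pts:
        ux = np.minimum(np.abs(gx - x) / bw, 1.0)   # 1.0 => outside support
        uy = np.minimum(np.abs(gy - y) / bw, 1.0)
        dens += (15 / 16) ** 2 * (1 - ux ** 2) ** 2 * (1 - uy ** 2) ** 2
    return dens / dens.max()   # standardised to [0, 1]
```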
Figure 5. Example of the tree height measuring method.
Figure 6. Detail of an aerial image superimposed with the laser returns corresponding to the stem layer (aerial image from Plan Nacional de Ortofotografía Aérea (PNOA) 2016, https://pnoa.ign.es). Method 1 yielded density values closer to the reference value for both plots. An example of the results yielded for a tree line by both methods is shown in Figure 7.
Figure 7. Overview of the individual tree detection (ITD) results obtained by Method 1 and Method 2: (a) point cloud of a sample row of trees; (b) buffering on the horizontal projection of the points and the located individuals; (c) standardized density raster on the horizontal projection of the points and the located individuals.
Figure 8. Examples of points that led to commission errors: (a) and (b) involve canopy returns; (c) and (d) involve snag trees.
Figure 9. False negative due to a merge of buffers.
Figure 10. Sample of the positions of detected trees for each method.
Figure 11. Estimated vs. observed tree height for two different resolutions of the canopy height model: (a) 0.5 m; (b) 2.0 m.
17 pages, 4395 KiB  
Article
Monitoring Mega-Crown Leaf Turnover from Space
by Emma R. Bush, Edward T. A. Mitchard, Thiago S. F. Silva, Edmond Dimoto, Pacôme Dimbonda, Loïc Makaga and Katharine Abernethy
Remote Sens. 2020, 12(3), 429; https://doi.org/10.3390/rs12030429 - 29 Jan 2020
Cited by 5 | Viewed by 4577
Abstract
Spatial and temporal patterns of tropical leaf renewal are poorly understood and poorly parameterized in modern Earth System Models due to lack of data. Remote sensing has great potential for sampling leaf phenology across tropical landscapes but until now has been impeded by [...] Read more.
Spatial and temporal patterns of tropical leaf renewal are poorly understood and poorly parameterized in modern Earth System Models due to lack of data. Remote sensing has great potential for sampling leaf phenology across tropical landscapes but until now has been impeded by lack of ground-truthing, cloudiness, poor spatial resolution, and the cryptic nature of incremental leaf turnover in many tropical plants. To our knowledge, satellite data have never been used to monitor individual crown leaf phenology in the tropics, an innovation that would be a major breakthrough for individual- and species-level ecology and would improve climate change predictions for the tropics. In this paper, we assessed whether satellite data can detect leaf turnover for individual trees using ground observations of a candidate tropical tree species, Moabi (Baillonella toxisperma), which has a mega-crown visible from space. We identified and delineated Moabi crowns at Lopé NP, Gabon, from satellite imagery using ground coordinates, and extracted high-spatial- and high-temporal-resolution optical and synthetic-aperture radar (SAR) time series for each tree. We normalized these data relative to the surrounding forest canopy and combined them with concurrent monthly crown observations of new, mature, and senescent leaves recorded from the ground. We analyzed the relationship between satellite and ground observations using generalized linear mixed models (GLMMs). Ground observations of leaf turnover were significantly correlated with indices derived from Sentinel-2 optical data (the normalized difference vegetation index and the green leaf index), but not with SAR data from Sentinel-1. We demonstrate, perhaps for the first time, how the leaf phenology of individual large-canopied tropical trees can directly influence the spectral signature of satellite pixels through time. Additionally, while the level of uncertainty in our model predictions is still very high, we believe this study shows that we are near the threshold for orbital monitoring of individual crowns within tropical forests, even in challenging locations such as cloudy Gabon. Further advances in remote sensing instruments toward the spatial and temporal scales relevant to organismal biological processes will unlock great potential to improve our understanding of the Earth system. Full article
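The normalization against the surrounding forest reduces to a small computation per acquisition date. The sketch below is illustrative only, assuming the index image and both masks are co-registered NumPy arrays; function names are hypothetical.

```python
import numpy as np

def ndvi(red, nir):
    """Standard normalised difference vegetation index from reflectance bands."""
    return (nir - red) / (nir + red + 1e-12)

def canopy_deviation(index_img, crown_mask, buffer_mask):
    """Crown mean minus forest-buffer mean for one acquisition date,
    mirroring the normalisation of crown pixels against the 100-m
    buffer of surrounding canopy described in the abstract."""
    return float(index_img[crown_mask].mean() - index_img[buffer_mask].mean())
```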
(This article belongs to the Special Issue Remote Sensing of Tropical Phenology)
Figure 1. Converting point observations to periods of leaf turnover. (A) Leaf senescence and loss events are defined as starting 16 days prior (D-16) to the first observation of leaf senescence or loss and ending 16 days after (D+16) the last observation of the same (in this example, the first and last observations of leaf senescence or loss are the same: Obs.0). (B) Leaf renewal events are defined as starting 16 days prior (D-16) to the first observation of new leaves and ending 16 days after (D+16) the last observation of the same, or 16 to 31 days after (D+16:+31) the last observation of leaf senescence or loss on occasions when new leaves are not directly observed. The colored rectangles indicate the likely periods of either leaf senescence and loss (brown) or leaf renewal (green) calculated in this way.
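That windowing rule is easy to express in code; the sketch below is illustrative and assumes observation dates arrive as ISO-format strings.

```python
import pandas as pd

def turnover_window(first_obs, last_obs, pad_days=16):
    """Expand the first/last monthly observations of a leaf turnover
    event into its likely period: 16 days before the first observation
    to 16 days after the last, per the rule in the caption above."""
    return (pd.Timestamp(first_obs) - pd.Timedelta(days=pad_days),
            pd.Timestamp(last_obs) + pd.Timedelta(days=pad_days))

# e.g. turnover_window("2017-03-15", "2017-04-15")
```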
Figure 2. Ground observations of Moabi leaf turnover at Lopé NP, Gabon (2015-2019). The solid and dashed lines show the scores given for canopy coverage of mature, senescing, and new leaves at each monthly observation. The brown and green rectangles show the likely periods of either leaf senescence and loss (brown) or leaf renewal (green) based on the monthly observations, as defined in Figure 1.
Figure 3. Aerial view of eight focal Moabi crowns at Lopé NP, Gabon (small yellow circles) and their 100-m buffers (surrounding yellow circles). (A) Focal Moabi crowns were identified and drawn by hand using GPS coordinates from the ground and high-resolution (<2 m pixels) imagery available in the Google Earth™ and Microsoft Bing™ platforms. We delineated forest in a 100-m buffer around the crown boundary of each focal tree, erasing the focal crown and other Moabi crowns contained within the buffer. (B) Tree 3231 is adjacent to two Moabi crowns that are not part of this study but were excluded from the 3231 forest buffer to avoid signal contamination. (C) Tree 3029 sits at the forest-savanna edge, and thus ca. 58% of the buffer was excluded for being within the savanna. Background imagery is provided by ESRI World Imagery.
Figure 4. Detecting leaf senescence and loss using satellite data. Standardized estimates (dots) and 95% confidence intervals (lines) from eight single-variate generalized linear mixed models (binomial) for the probability of detecting leaf senescence and loss from focal canopy time series (mean value of all pixels within the canopy) and normalized canopy time series (difference between the mean value of all pixels within the canopy and the mean value of all pixels in the surrounding forest buffer) of the VV and VH bands of the Sentinel-1 synthetic aperture radar (SAR) data and the normalized difference vegetation index (NDVI) and green leaf index (GLI) derived from Sentinel-2 optical data. Mixed models included tree ID and year as random effects.
Figure 5. Predicted relationship between Sentinel-2 normalized difference vegetation index (NDVI) canopy deviation and ground observations of Moabi leaf senescence and loss. The solid line shows the fixed-effect prediction from the generalized linear mixed model (binomial), the grey ribbon shows the 95% confidence intervals of the prediction, and the translucent dots show the raw data, binned in intervals along the x-axis. The mixed model included tree ID and year as random effects.
Figure 6. Detecting new leaves using satellite data. Standardized estimates (dots) and 95% confidence intervals (lines) from two separate generalized linear mixed models (binomial) of the probability of detecting new leaves using the Sentinel-2 data. The normalized difference vegetation index (NDVI) and the green leaf index (GLI) are indices derived from Sentinel-2 optical data. Mixed models included tree ID and year as random effects.
Figure 7. Predicted relationship between Sentinel-2 green leaf index (GLI) canopy deviation and ground observations of Moabi leaf renewal. The solid line shows the fixed-effect prediction from the generalized linear mixed model (binomial), the grey ribbon shows the 95% confidence intervals of the prediction, and the translucent dots show the raw data, binned in intervals along the x-axis. Mixed models included tree ID and year as random effects.
Figure 8. A recent close-range aerial image of a defoliated Moabi crown at Lopé National Park. Photo by David Lehmann.
18 pages, 8878 KiB  
Article
A Comparative Study of RGB and Multispectral Sensor-Based Cotton Canopy Cover Modelling Using Multi-Temporal UAS Data
by Akash Ashapure, Jinha Jung, Anjin Chang, Sungchan Oh, Murilo Maeda and Juan Landivar
Remote Sens. 2019, 11(23), 2757; https://doi.org/10.3390/rs11232757 - 23 Nov 2019
Cited by 65 | Viewed by 6437
Abstract
This study presents a comparative study of multispectral and RGB (red, green, and blue) sensor-based cotton canopy cover modelling using multi-temporal unmanned aircraft systems (UAS) imagery. Additionally, a canopy cover model using an RGB sensor is proposed that combines an RGB-based vegetation index [...] Read more.
This study presents a comparison of multispectral and RGB (red, green, and blue) sensor-based cotton canopy cover modelling using multi-temporal unmanned aircraft systems (UAS) imagery. Additionally, a canopy cover model using an RGB sensor is proposed that combines an RGB-based vegetation index with morphological closing. The field experiment was established in 2017 and 2018, with the whole study area divided into grids of approximately 1 × 1 m. Grid-wise percentage canopy cover was computed using both RGB and multispectral sensors over multiple flights during the growing season of the cotton crop. Initially, normalized difference vegetation index (NDVI)-based canopy cover was estimated and used as the reference for comparison with the RGB-based canopy cover estimations. To test the maximum achievable performance of RGB-based canopy cover estimation, a pixel-wise classification method was implemented. Four RGB-based canopy cover estimation methods were then implemented using RGB images, namely Canopeo, the excessive greenness index, the modified green red vegetation index, and the red green blue vegetation index, and their performance was evaluated against the NDVI-based estimation. The multispectral sensor-based canopy cover model proved more stable and accurate, whereas the RGB-based model was unstable and failed to identify the canopy once cotton leaves changed color after canopy maturation. Applying a morphological closing operation after thresholding significantly improved the RGB-based canopy cover modeling. The red green blue vegetation index turned out to be the most efficient vegetation index for extracting canopy cover, with a very low average root mean square error (2.94% for the 2017 dataset and 2.82% for the 2018 dataset) with respect to the multispectral sensor-based estimation. The proposed canopy cover model provides an affordable alternative to multispectral sensors, which are more sensitive and more expensive. Full article
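The proposed RGB pipeline (vegetation index, thresholding, morphological closing) can be sketched in a few lines. The threshold and structuring-element size below are placeholders, since the paper selects thresholds empirically (see Figure 6 further down); the function name is illustrative.

```python
import numpy as np
from scipy import ndimage

def rgb_canopy_cover(rgb, thresh=0.1, closing_size=5):
    """Sketch of the RGB-based CC pipeline: excessive greenness index
    on chromatic coordinates, thresholding, then morphological closing
    to fill small gaps in the canopy mask."""
    r, g, b = (rgb[..., i].astype(float) for i in range(3))
    total = r + g + b + 1e-12
    exg = 2 * g / total - r / total - b / total     # ExG = 2g - r - b
    canopy = exg > thresh
    canopy = ndimage.binary_closing(
        canopy, structure=np.ones((closing_size, closing_size)))
    return 100.0 * canopy.mean()                    # percent canopy cover
```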
(This article belongs to the Special Issue UAVs for Vegetation Monitoring)
Figure 1. Experimental field setup consisting of cotton in skip and solid row patterns in (a) 2017 and (b) 2018, presented with RGB (red, green, and blue) orthomosaics of the study area on June 7, 2017, and June 6, 2018.
Figure 2. RGB and multispectral sensors used for data collection: (a) the DJI Phantom 4 Pro for RGB and (b) the DJI Matrice 100 platform with the SlantRange 3p sensor for multispectral data collection.
Figure 3. Canopy cover estimation from the orthomosaic images (red square: individual crop grid; each grid is 1 × 1 m): an RGB orthomosaic image collected using the unmanned aerial systems (UAS) platform, followed by the binary classification of the orthomosaic, where white represents the canopy class and black represents the non-canopy class; the last image shows the grid-wise estimated canopy cover (CC).
Figure 4. K-means clustering-based pixel classification workflow, in which the orthomosaic is classified into five classes that are later merged into two clusters, canopy and non-canopy. The RGB orthomosaic presented was captured on 19 June 2017.
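A minimal sketch of that reference classification, assuming scikit-learn is available; which of the five clusters count as canopy must be decided by inspection, so `canopy_clusters` is a placeholder.

```python
import numpy as np
from sklearn.cluster import KMeans

def kmeans_canopy_mask(rgb, n_clusters=5, canopy_clusters=(0, 1)):
    """Cluster RGB pixels into five classes and merge a chosen subset
    into a binary canopy mask, mirroring the workflow in Figure 4."""
    h, w, _ = rgb.shape
    km = KMeans(n_clusters=n_clusters, n_init=10, random_state=0)
    labels = km.fit_predict(rgb.reshape(-1, 3).astype(float))
    return np.isin(labels, canopy_clusters).reshape(h, w)
```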
Figure 5. Procedure to generate a binary map indicating canopy and non-canopy areas. Applying the vegetation index to the RGB orthomosaic yields a grayscale image; thresholding then produces a binary image; lastly, morphological closing is applied to the binary image to improve the classification. The RGB orthomosaic presented was captured on 10 July 2017, and the excessive greenness index (ExG) was the VI used to demonstrate the methodology.
Figure 6. Procedure to select an appropriate threshold value for generating the binary canopy map. The first images are subsets of RGB images captured on 7 June 2017 and 10 July 2017. The second images result from applying the vegetation index (ExG) to the RGB images. The next three images result from applying varying threshold values, superimposed on the original RGB images (canopy-classified pixels are shown in red; non-canopy pixels are transparent).
Figure 7. CC grid maps generated at each flight in the growing season using normalized difference vegetation index (NDVI) maps for the 2017 dataset.
Figure 8. CC grid maps generated at each flight in the growing season using NDVI maps for the 2018 dataset.
Figure 9. For the 2017 experiment: (a) the average NDVI- and RGB-reference-based percentage CC for each flight in the growing season; (b) a comparison of the NDVI- and RGB-reference-based percentage CC techniques with R2.
Figure 10. For the 2018 experiment: (a) the average NDVI- and RGB-reference-based percentage CC for each flight in the growing season; (b) a comparison of the NDVI- and RGB-reference-based percentage CC techniques with R2.
Figure 11. For the 2017 experiment, the average CC estimation per grid using the NDVI-based CC estimation throughout the growing season, along with the average CC estimation using (a) Canopeo, (b) the ExG, (c) the modified green red vegetation index (MGRVI), and (d) the red green blue vegetation index (RGBVI), before and after applying the morphological closing (MC) operation.
Figure 12. For the 2018 experiment, the average CC estimation per grid using the NDVI-based CC estimation throughout the growing season, along with the average CC estimation using (a) Canopeo, (b) the ExG, (c) the MGRVI, and (d) the RGBVI, before and after applying the MC operation.
11 pages, 2838 KiB  
Letter
Determining a Threshold to Delimit the Amazonian Forests from the Tree Canopy Cover 2000 GFC Data
by Kaio Allan Cruz Gasparini, Celso Henrique Leite Silva Junior, Yosio Edemir Shimabukuro, Egidio Arai, Luiz Eduardo Oliveira Cruz e Aragão, Carlos Alberto Silva and Peter L. Marshall
Sensors 2019, 19(22), 5020; https://doi.org/10.3390/s19225020 - 18 Nov 2019
Cited by 8 | Viewed by 4126
Abstract
Open global forest cover data can be a critical component for Reducing Emissions from Deforestation and Forest Degradation (REDD+) policies. In this work, we determine the best threshold, compatible with the official Brazilian dataset, for establishing a forest mask cover within the Amazon [...] Read more.
Open global forest cover data can be a critical component of Reducing Emissions from Deforestation and Forest Degradation (REDD+) policies. In this work, we determine the best threshold, compatible with the official Brazilian dataset, for establishing a forest cover mask within the Amazon basin for the year 2000 using the Tree Canopy Cover 2000 GFC product. We compared forest cover maps produced using several thresholds (10%, 30%, 50%, 80%, 85%, 90%, and 95%) with a forest cover map for the same year from the Brazilian Amazon Deforestation Monitoring Project (PRODES), produced by the National Institute for Space Research (INPE). We also compared the forest cover classifications indicated by each of these maps to 2550 independently assessed Landsat pixels for the year 2000, providing an accuracy assessment for each map product. We found that thresholds of 80% and 85% best matched the PRODES data. Consequently, we recommend using an 80% threshold with the Tree Canopy Cover 2000 data for assessing forest cover in the Amazon basin. Full article
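The threshold search itself reduces to binarising the tree-cover layer and scoring per-pixel agreement with the PRODES mask. A sketch under the assumption that both rasters are co-registered NumPy arrays (tree cover in percent, PRODES as a boolean forest mask); names are illustrative.

```python
import numpy as np

def threshold_agreement(tree_cover, prodes_forest,
                        thresholds=(10, 30, 50, 80, 85, 90, 95)):
    """For each candidate threshold, binarise the Tree Canopy Cover 2000
    layer and report the fraction of pixels agreeing with PRODES."""
    return {t: float(((tree_cover >= t) == prodes_forest).mean())
            for t in thresholds}

# A Figure-3-style difference map (-1, 0, 1) for one threshold:
# diff = prodes_forest.astype(int) - (tree_cover >= 80).astype(int)
```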
Figure 1. Location of the state of Mato Grosso, representative biomes, and the spatial distribution of the 10 km2 plots used as samples. The smaller map shows the extent of the Amazon basin [33].
Figure 2. Spatial arrangement of each threshold assessed in the study compared to the Brazilian Amazon Deforestation Monitoring Project (PRODES) data.
Figure 3. Maps of differences between the spatial arrangements of each threshold assessed in the study and the PRODES data. Red (−1) represents pixels classified as non-forest by PRODES and forest by the Tree Canopy Cover 2000 data. White (0) represents pixels classified into the same class by both datasets. Blue (1) represents pixels classified as forest by PRODES and non-forest by the Tree Canopy Cover 2000 data.
Figure 4. Regression between the forest percentage within all 9329 (10 by 10 km) sample cells, using different thresholds from the Tree Canopy Cover 2000 data, and the corresponding percentages on a reference map developed using the PRODES data. The dashed red line is the 1:1 line. The blue line is the average regression line from 10,000 iterations for each threshold tested.