Article

Sequential PCA-based Classification of Mediterranean Forest Plants using Airborne Hyperspectral Remote Sensing

1 The Remote Sensing Laboratory, Department of Geography and Human Environment, The Porter School of the Environment and Earth Sciences, Tel-Aviv University, Tel Aviv 699780, Israel
2 The Robert H. Smith Institute of Plant Sciences and Genetics in Agriculture, The Faculty of Agriculture, The Hebrew University of Jerusalem, Rehovot 7610001, Israel
* Author to whom correspondence should be addressed.
Remote Sens. 2019, 11(23), 2800; https://doi.org/10.3390/rs11232800
Submission received: 3 October 2019 / Revised: 21 November 2019 / Accepted: 24 November 2019 / Published: 27 November 2019
(This article belongs to the Special Issue Monitoring Forest Change with Remote Sensing)

Abstract

In recent years, hyperspectral remote sensing (HRS) has become common practice for remote analyses of the physiognomy and composition of forests. Supervised classification is often used for this purpose, but demands intensive sampling and analyses, whereas unsupervised classification often requires information retrieval out of the large HRS datasets, thereby not realizing the full potential of the technology. An improved principal component analysis-based classification (PCABC) scheme is presented and intended to provide accurate and sequential image-based unsupervised classification of Mediterranean forest species. In this study, unsupervised classification and reduction of data size are performed simultaneously by applying binary sequential thresholding to principal components, each time on a spatially reduced subscene that includes the entire spectral range. The methodology was tested on HRS data acquired by the airborne AisaFENIX HRS sensor over a Mediterranean forest in Mount Horshan, Israel. A comprehensive field-validation survey was performed, sampling 257 randomly selected individual plants. The PCABC provided highly improved results compared to the traditional unsupervised classification methodologies, reaching an overall accuracy of 91%. The presented approach may contribute to improved monitoring, management, and conservation of Mediterranean and similar forests.

1. Introduction

A wide range of environmental and ecological studies, including studies of forest and vegetation landscapes, require reliable information on land cover at local to global scales [1,2]. In particular, analyses of the composition and structure of vegetation, especially in diverse forests, are important for understanding ecosystem functioning [3].
In recent years, the use of hyperspectral remote sensing (HRS) data has become common for identifying the composition and physiognomy of forests across large areas [4,5,6,7]. The high spectral resolution of HRS (50–100 nm band width) allows identification of each classified land-cover cluster according to its known spectral signature, thereby enabling detailed analyses of land covers such as mineral composition [8], soil type [9], and vegetation structure and composition [8,9,10,11,12,13,14,15,16,17,18,19]. However, the ability to interpret and accurately classify the observed data is still constrained by factors such as sensor attributes, sun altitude and azimuth, quality of ground-truth information, and the method of classification [20]. Seasonal and phenological changes contribute additional uncertainty to the classification of vegetation cover [21,22].
The large volume of data provided by HRS allows for a better distinction between classes, but significantly increases the complexity of the data processing. One of the common preprocessing methods applied to reduce the dimensionality of HRS data is principal component analysis (PCA) [23,24,25]. PCA is an eigenvector-based multivariate statistical analysis that applies eigenvalue decomposition of the data covariance (or correlation) matrix to convert correlated variables across all spectral bands into a set of values of linearly uncorrelated variables termed principal components (PCs). The number of PCs is equal to the number of bands, which are often visualized as grayscale band images presenting the “component score”.
The transformation of the data obtained by PCA is defined such that the first PC displays the largest possible variance (that is, accounts for as much of the variability in the data as possible), and each succeeding component in turn accounts for the next highest possible variance. In most cases, a second stage of inverse transformation of the PCA back to the original values is performed using only the first few PCs, without the noise or the autocorrelation between spectral bands. However, using only the coherent bands for the inverted image may result in some data loss [23,26,27,28]. A mathematical interpretation and historical review of PCA can be found in Green et al. [24], Jensen [26], and Rodarmel and Shan [25].
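To make this transformation concrete, the following is a minimal NumPy sketch (an illustration only, not the code used in this study) of computing component-score images from a hypothetical reflectance cube of shape (rows, cols, bands):

```python
import numpy as np

def pca_component_scores(cube):
    """Return per-pixel PC score images and eigenvalues for a (rows, cols, bands) cube."""
    rows, cols, bands = cube.shape
    X = cube.reshape(-1, bands).astype(np.float64)  # pixels x bands
    X -= X.mean(axis=0)                             # center each band
    cov = np.cov(X, rowvar=False)                   # bands x bands covariance matrix
    eigvals, eigvecs = np.linalg.eigh(cov)          # eigendecomposition (ascending order)
    order = np.argsort(eigvals)[::-1]               # re-order by descending variance
    eigvecs = eigvecs[:, order]
    scores = X @ eigvecs                            # component scores per pixel
    return scores.reshape(rows, cols, bands), eigvals[order]

# scores[:, :, 0] is the first-PC "component score" grayscale image described above.
```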
The PCA-reduced image is then commonly subjected to supervised or unsupervised classification approaches applied either to the inverted image or directly to the PCs [28,29,30,31,32,33,34,35,36,37,38,39]. More examples of PCA use in remote-sensing classification can be found in [28,40,41,42,43,44,45,46,47,48]. However, in most of those studies, the main purpose of using PCA was image enhancement and reduction of the autocorrelation between the spectral bands prior to a conventional classification scheme. During the last two decades, PCA techniques have been extensively applied for extracting spectral features to be used in supervised classifications [29,36]. However, small classes represented by relatively few pixels are more likely to remain undetected by the first PCs of the PCA. This is also the case when attempting to detect a phenomenon or variable that causes only subtle differences in the target reference [37,49].
In practice, it is sometimes difficult to obtain adequate training samples for supervised methods of hyperspectral image classification due to the requirement of large amounts of data from the field [50,51]. Unsupervised classifications, on the other hand, utilize statistical and numerical processes to outline groups of pixels with similar spectral features in the image without any preliminary sampling or training data [26,27,49,52]. This type of classification results in a thematic map of spectral clusters which must later be identified as classes and validated by the analyst in the field [26,27]. However, unsupervised classifications applied to PCA-reduced images are usually limited to the large and dominant classes appearing in the first PCs [37,49,52].
Implementation of efficient classification and mapping of forest plant species can contribute to forest monitoring and management [53,54,55,56,57]. However, this is a challenging task for most classification methodologies, particularly in the case of dense Mediterranean forests, which are highly diverse in species and contain many functionally and structurally similar species. To the best of our knowledge, such unsupervised classification has never been successfully implemented for a Mediterranean forest using HRS image datasets.
In this study, we test the compatibility of commonly used classifiers such as K-means and ISODATA for plant species classification in a Mediterranean forest ecosystem, and propose an improved PCA-based classification (PCABC) approach. The presented methodology provides better classification of the vegetation using the advantages of PCA for unsupervised variability detection of HRS data, while overcoming the known disadvantages: (a) low sensitivity to subtle differences in the target reference, (b) inability to detect small classes, and (c) a reduction of spectral data (bands) during the process. A Mediterranean forest in Mount Horshan, Israel was used as the test site for the methodology applied to data from Specim’s airborne AisaFENIX HRS sensor, and this study reports on the outcome of the approach.

2. Methodology

2.1. Research Site

The research was conducted in Mount Horshan (178 m ASL) in northern Israel. The annual precipitation is 584 mm (Israel Meteorological Service). The annual mean daily temperature is 20.12 °C, with means of 14.4 °C in the winter months and 25.4 °C in the summer months (Israel Meteorological Service). The area consists of both natural and planted Mediterranean forests covering an area of over 19 km2 and encompassing a wide range of typical Mediterranean plant communities, including planted pine forests, dense woodlands with evergreen and deciduous oak trees, and short and tall Mediterranean shrubland (garrigue and maquis, respectively) [58,59]. The dominant species at the site are Pinus halepensis Mill., Quercus calliprinos Webb, Quercus ithaburensis Decne., Pistacia lentiscus L., diverse vine species, sclerophyllous Mediterranean shrubs, and dwarf shrubs. The surrounding area includes urban, rural, and agricultural land uses. Figure 1 presents an aerial photo of the study area, overlaid with the Specim AisaFENIX image, including the location of the 257 ground-truth validation points.

2.2. Preprocessing

The Specim airborne AisaFENIX sensor used for this study covers a spectral range of 0.4–2.5 μm, with 448 spectral bands and a full width at half maximum of 0.0035–0.0055 μm. The analyzed image was acquired on 17 Sep 2014 at 06:55 UTC. A flight altitude of 792 m above ground level provided a spatial resolution of 1 m. ENVI image-processing package (Research Systems Inc. 1999) and ArcView GIS (ESRI Inc.) served as the major processing and analysis tools for this study.
The raw hyperspectral data were preprocessed using CaliGeoPRO software (Spectral Imaging Ltd., Oulu, Finland), resulting in a georectified radiance image. The radiance database was atmospherically corrected (ATC) using Atmospheric Correction Right Now (ACORN) software [60], with a gain factor calculated from a calibration site during the same flight based on the Supervised Vicarious Calibration method [61], resulting in a reflectance image that was validated on the ground using spectral measurements of control points acquired with an Analytical Spectral Devices (ASD) spectrometer. Topographic and illumination corrections were not used, and the shade pixels were masked out from the image using the PCABC method, as elaborated in Section 2.4. Of the original 448 spectral bands, 155 unusable bands were omitted from the dataset, mainly over the spectral region of the absorption features of atmospheric gases (1.34–1.51 μm, 1.80–2.06 μm), as well as at the edges of the image spectrum, due to low signal-to-noise ratios (0.379–0.430 μm, and 2.38–2.50 μm). The final preprocessed image contained 293 bands that were used for further processing. Areas outside the forest, such as agricultural fields, were excluded manually from the image, as they do not include Mediterranean plant species.
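As an illustration of this band-exclusion step (a sketch only, assuming band-center wavelengths are available in micrometers; not the study's actual preprocessing code), the omitted spectral regions listed above can be expressed as a simple mask:

```python
import numpy as np

# Wavelength ranges (micrometers) omitted in the text: atmospheric absorption
# features and noisy spectrum edges.
BAD_RANGES_UM = [(0.379, 0.430), (1.34, 1.51), (1.80, 2.06), (2.38, 2.50)]

def usable_band_mask(wavelengths_um):
    """Boolean mask of bands to keep, given band-center wavelengths in micrometers."""
    wl = np.asarray(wavelengths_um, dtype=float)
    bad = np.zeros(wl.shape, dtype=bool)
    for lo, hi in BAD_RANGES_UM:
        bad |= (wl >= lo) & (wl <= hi)
    return ~bad

# e.g. cube_clean = cube[:, :, usable_band_mask(wavelengths_um)]
```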

2.3. K-Means and ISODATA Classifiers

K-means and ISODATA classifiers are commonly used methodologies that do not require any preliminary input data, and were therefore chosen for this study. ISODATA and K-means unsupervised classifiers calculate class means that are evenly distributed in the data space and then iteratively cluster the remaining pixels using minimum distance techniques [26,27,62].
Each iteration recalculates means and reclassifies pixels with respect to the new means. Iterative class splitting, merging, and deletion are based on input threshold parameters. All pixels are classified to the nearest class unless a standard deviation or distance threshold is specified, in which case some pixels may be unclassified if they do not meet the selected criteria. The process continues until the number of pixels in each class changes by less than the selected pixel change threshold, or the maximum number of iterations is reached [26,27,62].
Both methods were applied to the PCA components produced for the Specim AisaFENIX reflectance image, as is commonly practiced for HRS images [39]. Table 1 presents the K-means and ISODATA classification parameters that were used. We used the ENVI image-processing package (Research Systems Inc. 1999) to apply these classifiers.
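A rough stand-in for this step is sketched below with scikit-learn's KMeans (an assumption for illustration; the study used ENVI, and the number of classes shown here is arbitrary rather than taken from Table 1):

```python
import numpy as np
from sklearn.cluster import KMeans

def kmeans_thematic_map(pc_scores, n_classes=10, max_iter=100, seed=0):
    """Cluster a (rows, cols, n_components) PCA-score image into a thematic map."""
    rows, cols, k = pc_scores.shape
    X = pc_scores.reshape(-1, k)
    km = KMeans(n_clusters=n_classes, max_iter=max_iter, n_init=10, random_state=seed)
    labels = km.fit_predict(X)          # iterative mean update + pixel reassignment
    return labels.reshape(rows, cols)
```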

2.4. PCABC Processing

The PCABC procedure is principally a binary threshold applied to the first few PC bands set for identifying and clustering endmembers in an unsupervised approach. However, the process presented here attempts to reach higher sensitivity to subtle differences in the target reference and small classes by masking out each detected class from the dataset. The masked dataset is then reintroduced to the PCA and binary masking procedure. The procedure is performed in a sequential manner until no class can be identified. During this process, inverse PCA transformation is not applied, nor are supervised or unsupervised classification schemes used. Figure 2 presents a flowchart summarizing all stages of the PCABC methodology.
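In outline, the sequential logic amounts to the loop sketched below (a schematic illustration only; in the study, each iteration's density-slice thresholding and class identification was performed interactively in ENVI, represented here by a hypothetical `density_slice_classes` callback):

```python
import numpy as np

def pcabc(cube, density_slice_classes, max_iterations=10, n_pcs=3):
    """Sequential PCA + binary masking. `cube` is a (rows, cols, bands) reflectance image."""
    rows, cols, bands = cube.shape
    unclassified = np.ones((rows, cols), dtype=bool)   # pixels still in play
    class_maps = []                                    # one boolean map per detected class
    for _ in range(max_iterations):
        X = cube[unclassified].astype(np.float64)      # spatial subset, all bands kept
        X -= X.mean(axis=0)
        # PCA of the remaining pixels (SVD used here for brevity; heavy on full scenes)
        _, _, vt = np.linalg.svd(X, full_matrices=False)
        scores = X @ vt[:n_pcs].T                      # first few PC score values
        new_classes = density_slice_classes(scores)    # list of boolean 1-D pixel masks
        if not new_classes:                            # stop when no class is identified
            break
        iteration_maps = []
        for member in new_classes:
            full = np.zeros((rows, cols), dtype=bool)
            full[unclassified] = member
            iteration_maps.append(full)
        for full in iteration_maps:
            unclassified &= ~full                      # mask the detected classes out
        class_maps.extend(iteration_maps)
    return class_maps
```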
At the first PCA iteration of the AisaFENIX reflectance image, the first PCs were analyzed in search of outlying bright and dark pixels, representing clusters that vary from the rest of the image. A close inspection of the PCs in comparison to the high-resolution orthophoto clearly showed that vegetation vs. non-vegetation pixels were the source of the highest variance. The density slice (DS) tool was used for binary thresholding and categorization of the pixel clusters. This algorithm divides image band gray-scale values into groups based on mean and standard deviation calculations (Research Systems Inc. 1999).
Figure 3 illustrates the process of producing the non-vegetation mask by the PCABC method in a subscene of the image (marked with a red square). A comparison of the orthophoto (Figure 3a) and Specim AisaFENIX RGB image (Figure 3b) to the first PC component of the first PCA iteration (Figure 3c) clearly shows that the highest detection variability is for vegetation vs. non-vegetation, the latter including ground, rock, roads, and shade (non-vegetation pixels appear in black). The DS tool was used to mark the non-vegetation pixels (non-vegetation pixels appear in red to magenta in Figure 3d,e). Finally, a non-vegetation mask was produced and applied to the image (Figure 3f).
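A crude stand-in for this thresholding step is shown below (an assumption for illustration only; the DS tool groups gray-scale values by mean and standard deviation, and the sign and cutoff here are arbitrary):

```python
import numpy as np

def non_vegetation_mask(pc1_scores, k=1.0):
    """Flag pixels whose first-PC score falls far below the mean as non-vegetation."""
    mu, sigma = pc1_scores.mean(), pc1_scores.std()
    return pc1_scores < (mu - k * sigma)   # True = masked out (sign depends on the PC)
```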
The rationale underlying the PCABC is that exclusion of dominant endmembers from the dataset may allow detection of new dominant endmembers in the remaining spatial subset, which retains all spectral bands. Reiterating this process in a sequential manner may result in a detailed classification of the dataset to most of its endmembers. This was done in the second iteration, as illustrated in Figure 4, which presents a subscene of PCA iteration 1, component 1 (Figure 4a) next to a subscene of the second iteration performed on the reflectance image including all 293 bands, and masking out only non-vegetated pixels (Figure 4b). There is a significant covariant difference between these two images; in Figure 4a, there are no significant differences among the vegetation pixels, whereas in Figure 4b, some vegetation pixels appear in white (higher PC values). The DS tool assisted in clustering these pixels (Figure 4c). Using the highest DS values, an ENVI classification image was produced (Figure 4d), representing the first identified plant species.
The identified class was masked out from the image before performing the next iteration. In this third iteration, five different clusters were identified at PCs 1–3. Of these, four were clustered from the second PC band. These clusters were transformed into five different ENVI classification images and were all masked out of the image. The fourth iteration was performed on the reflectance image with a mask including the non-vegetated pixels as well as all previously identified plant classes. At this stage (fourth iteration), no additional classes were detected.
As a result of the relatively high spectral and spatial resolution of the image, in some cases a single tree crown with high PC values was assigned several DS ranges (Figure 5a). These were therefore merged into one cluster (Figure 5b; DS values of 10.8535–40.4227). In order to ensure that the various PC clusters were consolidated accurately into one cluster, we compared them with the KKL aerial orthophoto (high-spatial-resolution photography), using two screens in parallel: one displaying the aerial photograph and the other displaying the different PC clusters. This comparison allowed us to verify that several seemingly distinct PC clusters were in fact a single cluster representing the canopy of one particular tree.
This cluster was then masked out from the image before the next iteration using ENVI’s binary mask option.
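The merging of several DS value ranges belonging to one crown into a single cluster (as described above for the range 10.8535–40.4227) amounts to a union of range masks; a tiny illustrative sketch, with hypothetical sub-ranges supplied by the analyst:

```python
import numpy as np

def merge_ds_ranges(pc_scores, ds_ranges):
    """Union of density-slice value ranges into one cluster mask."""
    mask = np.zeros(pc_scores.shape, dtype=bool)
    for lo, hi in ds_ranges:
        mask |= (pc_scores >= lo) & (pc_scores <= hi)
    return mask

# e.g. crown = merge_ds_ranges(pc2_scores, sub_ranges)  # sub_ranges chosen via the DS tool
```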

2.5. Validation Process

Accuracy assessment of the classified image is an important stage in all image classification procedures. The error matrix is the most frequently used method for accuracy assessment [63,64], as detailed in previous literature [64,65,66,67,68,69,70]. One critical step is to select a sufficient and representative number of ground-truth points for each class with a suitable sampling method. Random or stratified sampling approaches are often used for a robust assessment of classification results; however, the collection of test samples based on a purely random sampling technique was impossible here due to steep topography and high forest density. We therefore executed a validation protocol similar to those used in comparable remote areas [63]. To that end, plots ranging in size from 6000 to 14,000 square meters were defined manually using ArcView GIS software (ESRI Inc.) according to the orthophoto and were selected in different accessible locations within the image. At each plot, we manually selected random points which were used as validation target points. The survey was conducted along transects extending from the closest road to the selected point, where all individual plants that were large enough to cover at least one image pixel and that constituted a single species (i.e., did not overlap with other species) were sampled. This resulted in different densities of surveyed individuals (N = 257), depending on the local structure of the vegetation (Figure 1). We regard this routine as a semi-random selection, similar to that carried out in [63].
Each surveyed ground-truth validation point was located by GPS coordinates and marked on a high-spatial-resolution orthophoto. Each tree was botanically identified and measured for its height; in addition, trunk and canopy diameters were taken. The validation survey provided a database consisting of 30 transects of 50 m length. Overall, 257 plant individuals were overlain on the classified images for the classification performance examination stage. The same ground-truth points were used to evaluate the classified images from the PCABC, K-means, and ISODATA classifiers, and the corresponding error matrices were constructed independently. The producer's accuracy, user's accuracy, and overall accuracy were calculated according to the error matrices [66].
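For completeness, a small sketch of how the error matrix and the derived accuracies (and the Kappa coefficient reported later) can be computed from such point data, assuming matched arrays of reference and classified labels in which every class occurs at least once:

```python
import numpy as np

def accuracy_from_points(reference, classified, classes):
    """Error matrix, producer's/user's/overall accuracy and Kappa from labeled points."""
    idx = {c: i for i, c in enumerate(classes)}
    n = len(classes)
    cm = np.zeros((n, n), dtype=float)          # rows: reference, cols: classified
    for r, c in zip(reference, classified):
        cm[idx[r], idx[c]] += 1
    total = cm.sum()
    producers = np.diag(cm) / cm.sum(axis=1)    # per-class producer's accuracy
    users = np.diag(cm) / cm.sum(axis=0)        # per-class user's accuracy
    overall = np.trace(cm) / total
    expected = (cm.sum(axis=1) * cm.sum(axis=0)).sum() / total**2
    kappa = (overall - expected) / (1.0 - expected)
    return cm, producers, users, overall, kappa
```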

3. Results and Validation

3.1. K-Means and ISODATA Results

To quantitatively assess the results of the K-means and ISODATA classifiers, we conducted a validation process based on the ground-truth points collected in the field. The classes were not well-defined for either K-means or ISODATA and were mixed together with non-vegetation pixels. In general, the K-means classifier identified vegetation and non-vegetation pixels. Among the vegetation pixels, the classifier identified only two species: P. halepensis and oak trees (including both Q. calliprinos and Q. ithaburensis as one class) (Figure 6a).
The ISODATA classifier differentiated more species than the K-means classifier. The identified species were: P. halepensis, Q. calliprinos and Q. ithaburensis (a separate class for each), shrubs, and lianas. Moreover, the roads were identified as one class. Figure 6 presents subscenes of the results of these classifiers. Table 2 and Table 3 present the validation results of the K-means and ISODATA classifiers, respectively, for the PCA components image, which was produced based on the Specim AisaFENIX reflectance image. As an initial stage, we assessed the distribution of plant species in the field site based on prior knowledge of existing plant species at the research site and assigned each automatically-classified category to a specific species or a general type of vegetation (e.g., "shrubs"). Since we used an automatic classification method, i.e., one that is not based on calibration data, we used all the ground-truthed data (N = 257) to validate the results of the K-means and ISODATA. To estimate the fit between the defined categories and plant identities, each identified cluster was taken, as mentioned before, to represent crowns of similar plant species. Although the field survey included tree crown measurements, it was challenging to estimate the exact number of pixels representing each crown. Therefore, at the validation stage, in the case of a defined crown cluster covering the ground-truth validation point, it was considered an individual plant species/type rather than evaluated for its number of pixels. Hence, the confusion matrix column "Number of classified points in image" shown in Table 2 and Table 3 represents the K-means and ISODATA class clusters, respectively, appearing at the location of the ground-truth points, and the row "Number of ground-data points" represents the number of individual plants of each species/type identified in the field survey. This type of matrix is widely used in remote-sensing studies to validate classification results [26,27].
As can be seen in Table 2 and Table 3, only the P. halepensis class was identified with 100% producer and user accuracies by both K-means and ISODATA. The classes for other species were identified with lower accuracies. Classes detected by the ISODATA classifier were identified with higher accuracy. However, the overall accuracy of the K-means classifier was higher than the overall accuracy of the ISODATA classifier as it was calculated for a smaller number of classes. Moreover, both classifiers classified the roads in the research site as one cluster.

3.2. PCABC Results

Compared to the K-means and ISODATA classifiers, the PCABC resulted in the detection of six independent classes of vegetation in the studied Mediterranean forest (Figure 7). The classes appeared well-defined and spread across the image, aligned with the crowns of trees and shrubs (Figure 7). As in most unsupervised classification methodologies with no preliminary input data aside from the image itself, the classes clustered by PCABC must be identified and recognized. As mentioned before, an initial stage was conducted in which we assessed the distribution of plant species in the field site based on prior knowledge of existing plant species at the research site and assigned each automatically-classified category to a specific species or a general type of vegetation (e.g., "shrubs"). Since we used an automatic classification method, i.e., one that is not based on calibration data, we used all the ground-truthed data to validate the results of the automatic classification and to estimate the fit between the defined categories (PCABC classes) and plant identities. Classes and plant species were identified based on the comprehensive field survey.
Overall, the six vegetation classes were identified as: P. halepensis (Figure 7a); trees covered by lianas (Figure 7b); P. lentiscus (Figure 7c); shrubs, namely Calicotome villosa, Sarcopoterium spinosum, Rhamnus lycioides and Phillyrea latifolia (Figure 7d); Q. calliprinos (Figure 7e); and Q. ithaburensis (Figure 7f). Table 4 presents the validation results of the PCABC. The validation process was performed in the same way as for the K-means and ISODATA classifiers.
Table 4 presents all six identified classes as well as some unidentified pixels ("No class"). All six plant classes were detected with high accuracy. Note that the classes of P. halepensis, lianas, P. lentiscus, and Q. ithaburensis were detected with a user accuracy of 100%. The high producer accuracy and the high Kappa coefficient indicate that very few misclassifications were observed for all classes.
An examination of the total classified forest area showed that approximately 41% of the research site was classified as non-vegetation and approximately 53% was classified as plant species. Only 6% of the research site remained unclassified after the completion of four PCABC iterations. Unclassified data mostly included canopy marginal areas of the identified plant species. Figure 8 presents a thematic map excluding the non-vegetation pixels and encompassing all detected classifications.
Correspondence of the classes with the ground-truth points was high. For instance, classification by the PCABC method succeeded in detecting short trees (<2 m) with a canopy diameter as small as 1 m in the dense forest. Figure 9 presents an example of a P. halepensis sapling (height 1.5 m, canopy diameter 1 m), which was identified by the PCABC method growing within a much larger patch of shrubs. Moreover, the detection of lianas (a collection of three to five different species) growing on top of different tree canopies is not trivial due to the potential for mixed pixels. Figure 10 presents an example of a ground-truth validation point showing correct identification of a tree canopy covered by lianas.

4. Discussion

The PCABC methodology presented here was formulated to enable improved unsupervised classification of plant species and types in a Mediterranean forest. Figure 11a presents the mean spectra of all six PCABC-identified classes in the Specim AisaFENIX image. Most plant classes contained similar spectral features, with minor variations. Nonetheless, PCABC provided highly improved classification compared to the traditional unsupervised classification methodologies, resulting in the identification of all six classes of Mediterranean trees and shrubs with an overall accuracy of 91%.
Figure 11b presents the mean spectra (full lines) ± one standard deviation spectra (dashed lines) calculated for Q. ithaburensis (light blue lines) and Q. calliprinos (dark blue lines). The mean ± one standard deviation spectra of these oak tree species lie nearly on top of one another, indicating that the difference between these two dominant Mediterranean oak species is minor and mainly related to the overall albedo. This means that the spectral features are well defined and almost identical. In fact, one of the main biological differences between the two oaks is that Q. ithaburensis (upper curve) is winter deciduous with a thinner cuticle and hairy trichomes on the leaf surface, whereas Q. calliprinos (lower curve) is evergreen, having tough sclerophyllous leaves with a thick cuticle and no epidermal hairs. Moreover, as illustrated in Figure 11c, the ratio between the mean spectra of these two classes is nearly a straight line, with values above 0.8 (gray line). Nevertheless, the two oak species were identified by PCABC with high user accuracies of 92% and 100%, respectively. Since both oak species are dominant in the studied forest, the ability to separate them is essential.
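The spectral-similarity check illustrated in Figure 11 reduces to comparing class-mean spectra and their band-wise ratio; a brief sketch, assuming boolean class masks over the reflectance cube:

```python
import numpy as np

def class_mean_spectrum(cube, class_mask):
    """Mean reflectance per band over the pixels of one class."""
    return cube[class_mask].mean(axis=0)

def mean_spectrum_ratio(cube, mask_a, mask_b):
    """Band-wise ratio of two class-mean spectra; a near-constant ratio means similar shapes."""
    return class_mean_spectrum(cube, mask_a) / class_mean_spectrum(cube, mask_b)
```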
The class of shrubs was also identified with a relatively high user accuracy of 83%. Misclassification in this class was mostly related to it being a collection of up to six different shrub species with a wider range of spectral features compared to classes representing single species; therefore, they might resemble the spectral features of other classes (Figure 11).
It should be noted that the accuracy of the classification was tested for identification of particular tree species rather than the entire canopy of each plant individual, which means that parts of the canopy may not have been identified as part of the classified tree due to shading or relief. However, as long as most of the visible canopy of a tree species (appearing in the high-resolution orthophoto) was classified, it was considered to be a single tree belonging to that class.
Unidentified pixels and misclassification may be attributed to two main sources. The first is an enhanced bidirectional reflectance distribution function (BRDF). In this case, because the acquisition of the image was at 06:55 UTC, the direction of the sun, and therefore the BRDF effects, might explain why our PCABC method did not relate all pixels to one of the six classes. The second source of error might be related to the manual clustering applied in the PCABC methodology. As noted, this methodology is based on a binary categorization by the ENVI DS tool (Research Systems Inc. 1999), where the value of the ranges is statistically evaluated. Therefore, some of the pixels may not have been correctly assigned. One of the common difficulties in both supervised and unsupervised classifications is the spectral similarity between classes, especially of vegetation [26,27,49,52,71].
Previous studies have found, in other ecosystems, that the abundance of lianas and vines has a negative impact on the aboveground biomass of forest trees [72,73,74]. This may be true for Mediterranean forests as well, and if so, may be useful as a predictor of changes in tree health and biomass, thereby contributing to forest management [75]. We believe that the PCABC methodology will contribute to improved monitoring, management, and conservation of Mediterranean forests, and that future use of this methodology will expand its performance to other fields as well.

5. Summary and Conclusions

PCABC is primarily an image-based methodology with no reference input. As such, it may be considered unsupervised. The methodology is designed to overcome difficulties faced by traditional classifiers, such as the high dimensionality of hyperspectral data and the high correlation of adjacent band data [26,27,71,76]. Commonly-used unsupervised classification methodologies, including those tested here, are mostly based on class-comparative statistics (e.g., mean and standard deviation or error) and spectral-based numerical calculations [49,52,63]. These were not sensitive enough for this study. In contrast, PCABC results reached an overall accuracy of 91% and as high as 100% accuracy for most classes. The methodology displayed high efficiency at recognizing all major plant species of the Mediterranean forest as well as distinguishing between different species of the same genus having nearly similar spectral features. Moreover, trees with crowns under 2 m in diameter and lianas growing on top of different tree canopies were identified with high accuracy. To the best of our knowledge, such a detailed and accurate classification has never been achieved for diverse sclerophyllous Mediterranean forests by supervised or unsupervised classification. This ability was attributed to both the high spatial resolution of the Specim AisaFENIX data and the high performance of the PCABC.
The ability to detect subtle spectral differences between similar plant species or types and classify them clearly demonstrates PCABC's ability to overcome the known disadvantages of previous PCA-based methodologies, such as low sensitivity to subtle differences in the target reference as well as inability to detect small classes [25,36]. This is attributed to the advantages of PCA for unsupervised variability detection in HRS, enhanced by the sequential processing. The scheme presented here utilizes known practices, such as PCA, but differs from previous classification-intended research in both concept and practice. First, PCA was not used in this procedure as a preprocessing stage to reduce data size prior to classification, but as a tool for separability-based classification. Therefore, no inverse PCA was applied. Secondly, no spectral reduction was applied. To deal with the large HRS dataset, we simultaneously reduced the size of the data and classified it by spatial removal of the detected clustered areas rather than by excluding bands from the original dataset. As no inverse transformation and no spectral reduction were applied, there was no loss of spectral data during the procedure. Furthermore, we used PCABC in an iterative procedure in which PCA is run on all of the original dataset bands at every iteration. Applying PCA to a smaller spatial subset of the image at every consecutive iteration reduced the variability compared to the original dataset and therefore increased the correlation detection at each iteration with fewer variables, while using all spectral data (bands) for the classification. In contrast to the conventional methodologies that were tested, PCABC results reached an overall accuracy of 91%, and accuracies as high as 100% for several classes.
Due to its improved classification results, PCABC shows promise as a tool for unsupervised classification, enabling the mapping of diverse Mediterranean forests and potentially monitoring changes in the composition and structure of these dense forests, woodlands and shrublands. As noted, the PCABC methodology requires user-defined clustering for each iteration and is not yet fully automated. Therefore, both binary thresholding and the subjectivity of the applied manual clustering may result in misclassifications. The methodology has yet to be tested with other sensors (e.g., multispectral), backgrounds (e.g., urban, tropical forests, water bodies) or with subsequent datasets, and therefore, its further application to different forests is strongly warranted.

Author Contributions

Both authors, A.D. and M.M., performed the field work evenly, together with E.S. The writing of the article was done evenly by A.D. and M.M. E.B.-D. supervised the study, provided the scientific environment for carrying it out, and participated in the discussion. All authors contributed to the final version of the manuscript.

Funding

The study received no financial funding.

Acknowledgments

The authors would like to thank the forest rangers of KKL for their help during the conduct of this study.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Asner, G.P.; Martin, R.E.; Anderson, C.B.; Knapp, D.E. Quantifying forest canopy traits: Imaging spectroscopy versus field survey. Remote Sens. Environ. 2015, 158, 15–27. [Google Scholar] [CrossRef]
  2. Ustin, S.L.; Roberts, D.A.; Gamon, J.A.; Asner, G.P.; Green, R.O. Using Imaging Spectroscopy to Study Ecosystem Processes and Properties. BioScience 2004, 54, 523–534. [Google Scholar] [CrossRef]
  3. Jaiswal, R.K.; Mukherjee, S.; Raju, K.D.; Saxena, R. Forest fire risk zone mapping from satellite imagery and GIS. Int. J. Appl. Earth Obs. Geoinf. 2002, 4, 1–10. [Google Scholar] [CrossRef]
  4. Carlson, K.M.; Asner, G.P.; Hughes, R.F.; Ostertag, R.; Martin, R.E. Hyperspectral Remote Sensing of Canopy Biodiversity in Hawaiian Lowland Rainforests. Ecosystems 2007, 10, 536–549. [Google Scholar] [CrossRef]
  5. Clark, M.L.; Roberts, D.A.; Clark, D.B. Hyperspectral discrimination of tropical rain forest tree species at leaf to crown scales. Remote Sens. Environ. 2005, 96, 375–398. [Google Scholar] [CrossRef]
  6. Francis, E.J.; Asner, G.P. High-Resolution Mapping of Redwood (Sequoia sempervirens) Distributions in Three Californian Forests. Remote Sens. 2019, 11, 351. [Google Scholar] [CrossRef]
  7. Peng, Y.; Fan, M.; Bai, L.; Sang, W.; Feng, J.; Zhao, Z.; Tao, Z. Identification of the Best Hyperspectral Indices in Estimating Plant Species Richness in Sandy Grasslands. Remote Sens. 2019, 11, 588. [Google Scholar] [CrossRef]
  8. Aslett, Z.; Taranik, J.V.; Riley, D.N. Mapping rock forming minerals at Boundary Canyon, Death Valey National Park, California, using aerial SEBASS thermal infrared hyperspectral image data. Int. J. Appl. Earth Obs. Geoinf. 2018, 64, 326–339. [Google Scholar] [CrossRef]
  9. Aitkenhead, M.J.; Black, H.I.J. Exploring the impact of different input data types on soil variable estimation using the ICRAF-ISRIC global soil spectral database. Appl. Spectrosc. 2018, 72, 188–198. [Google Scholar] [CrossRef]
  10. Cao, Z.; Wang, Q. Retrieval of leaf fuel moisture contents from hyperspectral indices developed from dehydration experiments. Eur. J. Remote Sens. 2017, 50, 18–28. [Google Scholar] [CrossRef]
  11. Carmon, N.; Ben-Dor, E. Mapping Asphaltic Roads’ Skid Resistance Using Imaging Spectroscopy. Remote Sens. 2018, 10, 430. [Google Scholar] [CrossRef]
  12. Carmon, N.; Ben-Dor, E. Rapid Assessment of Dynamic Friction Coefficient of Asphalt Pavement Using Reflectance Spectroscopy. IEEE Geosci. Remote Sens. Lett. 2016, 13, 721–724. [Google Scholar] [CrossRef]
  13. Gholizadeh, A.; Saberioon, M.; Ben-Dor, E.; Boruvka, L. Monitoring of selected soil contaminants using proximal and remote sensing techniques: Background, state-of-the-art and future perspectives. Crit. Rev. Environ. Sci. Technol. 2018, 48, 243–278. [Google Scholar] [CrossRef]
  14. Govil, H.; Gill, N.; Rajendran, S.; Santosh, M.; Kumar, S. Identification of new base metal mineralization in Kumaon Himalaya, India, using hyperspectral remote sensing and hydrothermal alteration. Ore Geol. Rev. 2018, 92, 271–283. [Google Scholar] [CrossRef]
  15. Homolová, L.; Malenovský, Z.; Clevers, J.G.; García-Santos, G.; Schaepman, M.E. Review of optical-based remote sensing for plant trait mapping. Ecol. Complex. 2013, 15, 1–16. [Google Scholar] [CrossRef]
  16. Houborg, R.; Fisher, J.B.; Skidmore, A.K. Advances in remote sensing of vegetation function and traits. Int. J. Appl. Earth Obs. Geoinf. 2015, 43, 1–6. [Google Scholar] [CrossRef]
  17. Kokaly, R.F.; Skidmore, A.K. Plant phenolics and absorption features in vegetation reflectance spectra near 1.66 μm. Int. J. Appl. Earth Obs. Geoinf. 2015, 43, 55–83. [Google Scholar] [CrossRef]
  18. Kopačková, V.; Ben-Dor, E.; Carmon, N.; Notesco, G. Modelling Diverse Soil Attributes with Visible to Longwave Infrared Spectroscopy Using PLSR Employed by an Automatic Modelling Engine. Remote Sens. 2017, 9, 134. [Google Scholar] [CrossRef]
  19. Kopačková, V.; Ben-Dor, E. Normalizing reflectance from different spectrometers and protocols with an internal soil standard. Int. J. Remote Sens. 2016, 37, 1276–1290. [Google Scholar] [CrossRef]
  20. Arroyo, L.A.; Pascual, C.; Manzanera, J.A. Fire models and methods to map fuel types: The role of remote sensing. For. Ecol. Manag. 2008, 256, 1239–1252. [Google Scholar] [CrossRef]
  21. Shoshany, M.; Svoray, T. Multidate adaptive unmixing and its application to analysis of ecosystem transitions along a climatic gradient. Remote Sens. Environ. 2002, 82, 5–20. [Google Scholar] [CrossRef]
  22. Wittenberg, L.; Malkinson, D.; Beeri, O.; Halutzy, A.; Tesler, N. Spatial and temporal patterns of vegetation recovery following sequences of forest fires in a Mediterranean landscape, Mt. Carmel Israel. Catena 2007, 71, 76–83. [Google Scholar] [CrossRef]
  23. Deng, J.S.; Wang, K.; Deng, Y.H.; Qi, G.J. PCA-based land-use change detection and analysis using multitemporal and multisensor satellite data. Int. J. Remote Sens. 2008, 29, 4823–4838. [Google Scholar] [CrossRef]
  24. Green, A.; Berman, M.; Switzer, P.; Craig, M. A transformation for ordering multispectral data in terms of image quality with implications for noise removal. IEEE Trans. Geosci. Remote Sens. 1988, 26, 65–74. [Google Scholar] [CrossRef]
  25. Rodarmel, C.; Shan, J. Principal component analysis for hyperspectral image classification. Surv. Land Inf. Sci. 2002, 62, 115–122. [Google Scholar]
  26. Jensen, J.R. Remote Sensing of the Environment: An Earth Resource Perspective; Prentice Hall: Upper Saddle River, NJ, USA, 2000. [Google Scholar]
  27. Jensen, J.R. Introductory Digital Image Processing: A Remote Sensing Perspective; Prentice Hall: Upper Saddle River, NJ, USA, 2005. [Google Scholar]
  28. Liu, L.; Li, C.F.; Lei, Y.M.; Yin, J.Y.; Zhao, J.J. Feature extraction for hyperspectral remote sensing image using weighted PCA-ICA. Arab. J. Geosci. 2017, 10, 307. [Google Scholar] [CrossRef]
  29. Van Aardt, J.A.N.; Wynne, R.H. Examining pine spectral separability using hyperspectral data from an airborne sensor: An extension of field-based results. Int. J. Remote Sens. 2007, 28, 431–436. [Google Scholar] [CrossRef]
  30. Burai, P.; Deák, B.; Valkó, O.; Tomor, T. Classification of Herbaceous Vegetation Using Airborne Hyperspectral Imagery. Remote Sens. 2015, 7, 2046–2066. [Google Scholar] [CrossRef]
  31. Galidaki, G.; Gitas, I. Mediterranean forest species mapping using classification of Hyperion imagery. Geocarto Int. 2015, 30, 48–61. [Google Scholar] [CrossRef]
  32. Kang, X.; Xiang, X.; Li, S.; Benediktsson, J.A. PCA-Based Edge-Preserving Features for Hyperspectral Image Classification. IEEE Trans. Geosci. Remote Sens. 2017, 55, 7140–7151. [Google Scholar] [CrossRef]
  33. Kavzoglu, T.; Tonbul, H.; Erdemir, M.Y.; Colkesen, I. Dimensionality Reduction and Classification of Hyperspectral Images Using Object-Based Image Analysis. J. Indian Soc. Remote Sens. 2018, 46, 1297–1306. [Google Scholar] [CrossRef]
  34. Kruse, F.; Lefkoff, A.; Boardman, J.; Heidebrecht, K.; Shapiro, A.; Barloon, P.; Goetz, A. The spectral image processing system (SIPS)—Interactive visualization and analysis of imaging spectrometer data. Remote Sens. Environ. 1993, 44, 145–163. [Google Scholar] [CrossRef]
  35. Pu, R. Wavelet transform applied to EO-1 hyperspectral data for forest LAI and crown closure mapping. Remote Sens. Environ. 2004, 91, 212–224. [Google Scholar] [CrossRef]
  36. Pu, R.; Gong, P.; Tian, Y.; Miao, X.; Carruthers, R.I.; Anderson, G.L. Invasive species change detection using artificial neural networks and CASI hyperspectral imagery. Environ. Monit. Assess. 2008, 140, 15–32. [Google Scholar] [CrossRef]
  37. Bajwa, S.G.; Bajcsy, P.; Groves, P.; Tian, L.F. Hyperspectral image data mining for band selection in agricultural applications. Trans. ASAE 2004, 47, 895–907. [Google Scholar] [CrossRef] [Green Version]
  38. Xia, J.; Falco, N.; Benediktsson, J.A.; Du, P.; Chanussot, J. Hyperspectral image classification with rotation random forest via KPCA. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2017, 10, 1601–1609. [Google Scholar] [CrossRef]
  39. Yousefi, B.; Sojasi, S.; Castanedo, C.I.; Maldague, X.P.; Beaudoin, G.; Chamberland, M. Comparison assessment of low rank sparse-PCA based-clustering/classification for automatic mineral identification in long wave infrared hyperspectral imagery. Infrared Phys. Technol. 2018, 93, 103–111. [Google Scholar] [CrossRef]
  40. Abdelaziz, R.; El-Rahman, Y.A.; Wilhelm, S. Landsat-8 data for chromite prospecting in the Logar Massif, Afghanistan. Heliyon 2018, 4, e00542. [Google Scholar] [CrossRef] [Green Version]
  41. Acheampong, M.; Yu, Q.; Enomah, L.D.; Anchang, J.; Eduful, M. Land use/cover change in Ghana’s oil city: Assessing the impact of neoliberal economic policies and implications for sustainable development goal number one—A remote sensing and GIS approach. Land Use Policy 2018, 73, 373–384. [Google Scholar] [CrossRef]
  42. Alexandris, N.; Koutsias, N.; Gupta, S. Remote sensing of burned areas via PCA, Part 2: SVD-based PCA using MODIS and Landsat data. Open Geospat. Data Softw. Stand. 2017, 2, 21. [Google Scholar] [CrossRef]
  43. Arias, O.V.; Garrido, A.; Villeta, M.; Tarquis, A.M. Homogenisation of a soil properties map by principal component analysis to define index agricultural insurance policies. Geoderma 2018, 311, 149–158. [Google Scholar] [CrossRef]
  44. Bellón, B.; Bégué, A.; Seen, D.L.; De Almeida, C.A.; Simões, M. A Remote Sensing Approach for Regional-Scale Mapping of Agricultural Land-Use Systems Based on NDVI Time Series. Remote Sens. 2017, 9, 600. [Google Scholar] [CrossRef] [Green Version]
  45. Cartwright, J.; Johnson, H.M. Springs as hydrologic refugia in a changing climate? A remote-sensing approach. Ecosphere 2018, 9, e02155. [Google Scholar] [CrossRef]
  46. Casagli, N.; Tofani, V.; Ciampalini, A.; Raspini, F.; Lu, P.; Morelli, S. TXT-tool 2.039-3.1: Satellite remote sensing techniques for landslides detection and mapping. In Landslide Dynamics: ISDR-ICL Landslide Interactive Teaching Tools; Sassa, K., Guzzetti, F., Yamagishi, H., Arbanas, Z., Casagli, N., McSaveney, M., Dang, K., Eds.; Springer: Cham, Switzerland, 2018; pp. 235–254. [Google Scholar] [CrossRef]
  47. Geiß, C.; Schauß, A.; Riedlinger, T.; Dech, S.; Zelaya, C.; Guzmán, N.; Hube, M.A.; Arsanjani, J.J.; Taubenböck, H. Joint use of remote sensing data and volunteered geographic information for exposure estimation: Evidence from Valparaíso, Chile. Nat. Hazards 2017, 86, 81–105. [Google Scholar] [CrossRef] [Green Version]
  48. Wang, J.; Luo, C.; Huang, H.; Zhao, H.; Wang, S. Transferring Pre-Trained Deep CNNs for Remote Scene Classification with General Features Learned from Linear PCA Network. Remote Sens. 2017, 9, 225. [Google Scholar] [CrossRef] [Green Version]
  49. Thenkabail, P.S.; Lyon, J.G.; Huete, A. Fundamentals, Sensor Systems, Spectral Libraries, and Data Mining for Vegetation; CRC Press: Boca Raton, FL, USA, 2018. [Google Scholar]
  50. Huang, C.Y.; Asner, G.P. Applications of Remote Sensing to Alien Invasive Plant Studies. Sensors 2009, 9, 4869–4889. [Google Scholar] [CrossRef] [Green Version]
  51. Mack, R.N.; Simberloff, D.; Lonsdale, W.M.; Evans, H.; Clout, M.; Bazzaz, F.A. Biotic invasions: Causes, epidemiology, global consequences, and control. Ecol. Appl. 2000, 10, 689–710. [Google Scholar] [CrossRef]
  52. Thenkabail, P.S.; Lyon, J.G.; Huete, A. Hyperspectral Remote Sensing of Vegetation; CRC Press: Boca Raton, FL, USA, 2016. [Google Scholar]
  53. Boisvenue, C.; White, J.C. Information Needs of Next-Generation Forest Carbon Models: Opportunities for Remote Sensing Science. Remote Sens. 2019, 11, 463. [Google Scholar] [CrossRef] [Green Version]
  54. Fischer, F.J.; Maréchaux, I.; Chave, J. Improving plant allometry by fusing forest models and remote sensing. New Phytol. 2019, 223, 1159–1165. [Google Scholar] [CrossRef] [Green Version]
  55. Jha, S.N.; Jaiswal, P.; Narsaiah, K.; Gupta, M.; Bhardwaj, R.; Singh, A.K. Non-destructive prediction of sweetness of intact mango using near infrared spectroscopy. Sci. Hortic. 2012, 138, 171–175. [Google Scholar] [CrossRef]
  56. Moreno, A.; Neumann, M.; Mohebalian, P.M.; Thurnher, C.; Hasenauer, H. The Continental Impact of European Forest Conservation Policy and Management on Productivity Stability. Remote Sens. 2019, 11, 87. [Google Scholar] [CrossRef] [Green Version]
  57. Zellweger, F.; De Frenne, P.; Lenoir, J.; Rocchini, D.; Coomes, D. Advances in Microclimate Ecology Arising from Remote Sensing. Trends Ecol. Evol. 2019, 34, 327–341. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  58. Blondel, J.; Aronson, J. Biology and Wildlife of the Mediterranean Region; Oxford University Press: Oxford, UK, 1999. [Google Scholar]
  59. Kruger, F.J.; Mitchell, D.T.; Jarvis, J.U.M. Mediterranean-Type Ecosystems: The Role of Nutrients; Springer Science Business Media: Berlin/Heidelberg, Germany, 2012. [Google Scholar]
  60. Miller, C.J. Performance Assessment of ACORN Atmospheric Correction Algorithm. In Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery VIII; International Society for Optics and Photonics: Orlando, FL, USA, 2002; pp. 438–450. [Google Scholar] [CrossRef]
  61. Brook, A.; Ben-Dor, E. Supervised Vicarious Calibration (SVC) of Multi-Source Hyperspectral Remote-Sensing Data. Remote Sens. 2015, 7, 6196–6223. [Google Scholar] [CrossRef] [Green Version]
  62. Dunn, J.C. A Fuzzy Relative of the ISODATA Process and Its Use in Detecting Compact Well-Separated Clusters. J. Cybern. 1973, 3, 32–57. [Google Scholar] [CrossRef]
  63. Zhang, Y.; Lu, D.; Yang, B.; Sun, C.; Sun, M. Coastal wetland vegetation classification with a Landsat Thematic Mapper image. Int. J. Remote Sens. 2011, 32, 545–561. [Google Scholar] [CrossRef]
  64. Foody, G.M. Status of land cover classification accuracy assessment. Remote Sens. Environ. 2002, 80, 185–201. [Google Scholar] [CrossRef]
  65. Congalton, R.G. A Quantitative Method to Test for Consistency and Correctness in Photointerpretation. Photogramm. Eng. Remote Sens. 1983, 49, 69–74. [Google Scholar]
  66. Hudson, W.D. Correct formulation of the Kappa coefficient of agreement. Photogramm. Eng. Remote Sens. 1987, 53, 421–422. [Google Scholar]
  67. Congalton, R.G. A review of assessing the accuracy of classifications of remotely sensed data. Remote Sens. Environ. 1991, 37, 35–46. [Google Scholar] [CrossRef]
  68. Congalton, R.G.; Green, K. Assessing the Accuracy of Remotely Sensed Data: Principles and Practices; Mapping Science Series; Lewis: Boca Raton, FL, USA, 1999; ISBN 978-0-87371-986-5. [Google Scholar]
  69. Jensen, J.R.; McMaster, R.B.; Rizos, C. Manual of Geospatial Science and Technology; Informa UK Limited: Colchester, UK, 2001. [Google Scholar]
  70. Dadon, A.; Ben-Dor, E.; Beyth, M.; Karnieli, A. Examination of spaceborne imaging spectroscopy data utility for stratigraphic and lithologic mapping. J. Appl. Remote Sens. 2011, 5, 53507. [Google Scholar] [CrossRef]
  71. Thenkabail, P.S. Land Resources Monitoring, Modeling, and Mapping with Remote Sensing; CRC Press: Boca Raton, FL, USA, 2015. [Google Scholar]
  72. Lai, H.R.; Hall, J.S.; Turner, B.L.; Van Breugel, M. Liana effects on biomass dynamics strengthen during secondary forest succession. Ecology 2017, 98, 1062–1070. [Google Scholar] [CrossRef] [PubMed]
  73. Schnitzer, S. The ecology of lianas and their role in forests. Trends Ecol. Evol. 2002, 17, 223–230. [Google Scholar] [CrossRef] [Green Version]
  74. Visser, M.D.; Schnitzer, S.A.; Muller-Landau, H.C.; Jongejans, E.; De Kroon, H.; Comita, L.S.; Hubbell, S.P.; Wright, S.J.; Muller-Landau, H.C.; Kroon, H. Tree species vary widely in their tolerance for liana infestation: A case study of differential host response to generalist parasites. J. Ecol. 2017, 106, 781–794. [Google Scholar] [CrossRef] [Green Version]
  75. Ledo, A.; Illian, J.B.; Schnitzer, S.A.; Wright, S.J.; Dalling, J.W.; Burslem, D.F.R.P. Lianas and soil nutrients predict fine-scale distribution of above-ground biomass in a tropical moist forest. J. Ecol. 2016, 104, 1819–1828. [Google Scholar] [CrossRef] [Green Version]
  76. Lillesand, T.M.; Kiefer, R.W.; Chipman, J.W. Remote Sensing and Image Interpretation, 5th ed.; Wiley: New York, NY, USA, 2004. [Google Scholar]
Figure 1. Mount Horshan, northern Israel, aerial photo (WorldView 2 satellite image) overlaid with the Specim AisaFENIX image. The red dots mark the location of 257 ground-truth validation points.
Figure 2. PCABC flowchart presenting all stages of the classification process.
Figure 3. Illustration of the process for producing the non-vegetation mask by the PCABC method in a subscene of the image (marked with a red square). (a) Keren Kayemeth LeIsrael (KKL) orthophoto of the subscene showing that the pixels that were masked out are indeed related to roads and shade (non-vegetation pixels). (b) Specim AisaFENIX RGB. (c) PCA iteration 1, component 1 (non-vegetation pixels appear in black). (d) Marking of the non-vegetation pixels using DS (non-vegetation pixels appear in cyan, yellow, and blue; DS values: −51.84 to −5.03). (e) Non-vegetation pixels overlaid on the RGB image. (f) Final product: the non-vegetation pixel mask applied to the image.
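As a rough illustration of the masking step in Figure 3, the sketch below runs a first PCA on the reflectance cube and flags pixels whose first-component scores fall below a cutoff as non-vegetation. This is a minimal sketch under stated assumptions (the array `cube` and the threshold `pc1_cutoff` are hypothetical), not the authors' implementation; in the original workflow the DS value range (−51.84 to −5.03) was selected interactively.

```python
import numpy as np
from sklearn.decomposition import PCA

def non_vegetation_mask(cube, pc1_cutoff):
    """Flag non-vegetation pixels from the first PCA component (cf. Figure 3c,d).

    cube: reflectance array of shape (rows, cols, bands).
    pc1_cutoff: first-component score below which a pixel is treated as
        non-vegetation (roads, shade, bare surfaces).
    """
    rows, cols, bands = cube.shape
    flat = cube.reshape(-1, bands)
    pc1 = PCA(n_components=1).fit_transform(flat)[:, 0]   # first PCA iteration
    return (pc1 < pc1_cutoff).reshape(rows, cols)          # True = non-vegetation

# Toy usage with synthetic data standing in for the AisaFENIX cube.
cube = np.random.rand(60, 60, 40)
mask = non_vegetation_mask(cube, pc1_cutoff=-0.5)
masked_cube = np.where(mask[..., None], np.nan, cube)      # cf. Figure 3f
```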
Figure 4. Illustration of the process for classifying plant species using the PCABC method in a subscene of the image (marked with a red square). (a) PCA iteration 1, component 1, with no differences among the vegetation pixels. (b) PCA iteration 2, component 1, with clear differences among the vegetation pixels (higher PC values appear in white). (c) Marking of plant species using the DS tool; vegetation pixels marked in different colors (based on the legend). (d) Example of one of the classes (marked in cyan) identified based on the highest DS values.
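Figure 4 shows the sequential idea behind PCABC: once the non-vegetation mask is applied, PCA is run again on the vegetation pixels alone, so the leading component now separates plant canopies rather than vegetation from background. Continuing the hypothetical variables from the previous sketch (again an illustration under stated assumptions, not the published code):

```python
import numpy as np
from sklearn.decomposition import PCA

def second_iteration_scores(cube, non_veg_mask, n_components=3):
    """Second PCA iteration on vegetation pixels only (cf. Figure 4b)."""
    rows, cols, bands = cube.shape
    flat = cube.reshape(-1, bands)
    veg = ~non_veg_mask.reshape(-1)                      # keep vegetation pixels

    pca = PCA(n_components=n_components).fit(flat[veg])  # refit on vegetation only
    scores = np.full((rows * cols, n_components), np.nan)
    scores[veg] = pca.transform(flat[veg])
    return scores.reshape(rows, cols, n_components)
```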
Figure 5. Illustration of combining the highest DS values into one DS cluster. (a) The final plant cluster that was produced in the second iteration of the PCABC process. (b) The tree crowns marked with different DS values (before the merger). The full DS values of the cluster are shown in the legend.
Figure 6. Subscenes of the results of classification using two different classifiers on the PCA components image: (a) K-means and (b) ISODATA.
Figure 7. Subscenes of the six classes detected using the PCABC methodology. Different colors represent the locations of the plant species in the image. These were later identified as: (a) P. halepensis. (b) Trees covered by lianas. (c) P. lentiscus. (d) Shrubs. (e) Q. calliprinos. (f) Q. ithaburensis.
Figure 8. Thematic map of the PCABC plant species classes.
Figure 9. A small Pinus halepensis tree identified by PCABC. (a) P. halepensis class based on PCABC (marked in cyan); the red circle marks a small tree with a height of 1.5 m and a canopy diameter of 1 m. (b) The P. halepensis tree in the field.
Figure 10. Example of a tree covered by lianas. (a) Lianas class based on PCABC (marked in magenta); the red circle marks the location of the photographed tree in the field. (b) The tree covered by lianas in the field.
Figure 11. Illustration of the spectral similarity of the detected classes. (a) Mean spectra of the six plant classes. (b) Mean and standard deviation spectra of the Q. calliprinos and Q. ithaburensis classes. (c) Means of Q. calliprinos (blue line) and Q. ithaburensis (orange line); the gray line is the ratio of the two means. The nearly straight ratio line, with values above 0.8, indicates high spectral similarity between the two species.
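The similarity measure in Figure 11c is simply the band-wise ratio of the two class-mean spectra: a ratio curve that stays flat and close to 1 (here above 0.8) indicates that the two oak classes are nearly indistinguishable spectrally. A minimal sketch with stand-in numbers (not the measured spectra):

```python
import numpy as np

# Stand-in mean reflectance spectra for the two oak classes (synthetic values).
mean_calliprinos  = np.array([0.05, 0.08, 0.35, 0.40, 0.38, 0.30, 0.18, 0.12])
mean_ithaburensis = np.array([0.05, 0.09, 0.37, 0.43, 0.40, 0.32, 0.19, 0.13])

ratio = mean_calliprinos / mean_ithaburensis  # gray line in Figure 11c
print(ratio.round(2))  # nearly flat and above 0.8 => high spectral similarity
```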
Table 1. K-means and ISODATA classification parameters.

Parameter | K-means | ISODATA
Number of classes | 5 | 5–10
Maximum iterations | 5 | 5
Change threshold (%) | 5 | 5
Minimum pixels in class | – | 1
Maximum class standard deviation | – | 1
Maximum class distance | – | 5
Maximum merge pairs | 0 | 2
Maximum standard deviation from mean | 0 | 0
Maximum distance error | 0 | 0
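For orientation, the K-means column of Table 1 corresponds to a standard unsupervised clustering of the PCA component image into five classes with five iterations. The sketch below uses scikit-learn as a stand-in (an assumption; the study's classifiers are the conventional ENVI-style K-means and ISODATA, and ISODATA has no direct scikit-learn equivalent):

```python
import numpy as np
from sklearn.cluster import KMeans

def kmeans_classify(pc_image, n_classes=5, max_iter=5):
    """Cluster a PCA-component image into n_classes (cf. Table 1, K-means column).

    pc_image: array of shape (rows, cols, n_components).
    Returns a (rows, cols) label image.
    """
    rows, cols, comps = pc_image.shape
    flat = pc_image.reshape(-1, comps)
    labels = KMeans(n_clusters=n_classes, max_iter=max_iter,
                    n_init=10, random_state=0).fit_predict(flat)
    return labels.reshape(rows, cols)
```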
Table 2. Validation results of AisaFENIX K-means classification on the PCA components image (no mask of non-vegetation pixels). Rows are image classes; the species columns give the number of ground-data points of each class assigned to that image class.

Class | P. halepensis | Lianas | P. lentiscus | Shrubs | Q. ithaburensis | Q. calliprinos | Classified points in image | Producer accuracy (%) | User accuracy (%)
Pinus halepensis | 35 | 0 | 0 | 0 | 0 | 0 | 35 | 100 | 100
Lianas | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0
Pistacia lentiscus | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0
Shrubs | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0
Quercus ithaburensis | 0 | 0 | 0 | 0 | 10 | 7 | 17 | 59 | 59
Quercus calliprinos | 0 | 0 | 0 | 0 | 7 | 52 | 59 | 88 | 88
Number of ground-data points | 35 | 45 | 38 | 63 | 17 | 59 | 257 | – | –

Overall accuracy: 37.7%; Kappa coefficient: 0.788.
Table 3. Validation results of AisaFENIX ISODATA classification on the PCA components image (no mask of non-vegetation pixels). Rows are image classes; the species columns give the number of ground-data points of each class assigned to that image class.

Class | P. halepensis | Lianas | P. lentiscus | Shrubs | Q. ithaburensis | Q. calliprinos | Classified points in image | Producer accuracy (%) | User accuracy (%)
Pinus halepensis | 35 | 0 | 0 | 0 | 0 | 0 | 35 | 100 | 100
Lianas | 0 | 25 | 0 | 8 | 0 | 0 | 33 | 56 | 76
Pistacia lentiscus | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0
Shrubs | 0 | 0 | 0 | 55 | 0 | 7 | 62 | 87 | 89
Quercus ithaburensis | 0 | 0 | 0 | 0 | 10 | 0 | 10 | 59 | 100
Quercus calliprinos | 0 | 2 | 0 | 0 | 7 | 52 | 62 | 88 | 84
Number of ground-data points | 35 | 45 | 38 | 63 | 17 | 59 | 257 | – | –

Overall accuracy: 68.8%; Kappa coefficient: 0.789.
Table 4. PCABC validation results. Rows are image classes; the species columns give the number of ground-data points of each class assigned to that image class.

Class | P. halepensis | Lianas | P. lentiscus | Shrubs | Q. ithaburensis | Q. calliprinos | No class | Classified points in image | Producer accuracy (%) | User accuracy (%)
Pinus halepensis | 35 | 0 | 0 | 0 | 0 | 0 | 0 | 35 | 100 | 100
Lianas | 0 | 37 | 0 | 0 | 0 | 0 | 0 | 37 | 82 | 100
Pistacia lentiscus | 0 | 0 | 31 | 0 | 0 | 0 | 0 | 31 | 82 | 100
Shrubs | 0 | 5 | 6 | 62 | 0 | 2 | 0 | 75 | 98 | 83
Quercus ithaburensis | 0 | 0 | 0 | 0 | 12 | 0 | 0 | 12 | 71 | 100
Quercus calliprinos | 0 | 0 | 0 | 1 | 4 | 57 | 0 | 62 | 97 | 92
No class | 0 | 3 | 1 | 0 | 1 | 0 | 0 | 5 | 0 | 0
Number of ground-data points | 35 | 45 | 38 | 63 | 17 | 59 | 0 | 257 | – | –

Overall accuracy: 91%; Kappa coefficient: 0.918.
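The accuracy measures in Tables 2–4 follow the usual confusion-matrix definitions: user accuracy is the fraction of each image class that matches the ground data (row-wise), producer accuracy is the fraction of each ground-data class that was correctly mapped (column-wise), overall accuracy is the diagonal sum over all 257 validation points, and kappa is the chance-corrected agreement. The sketch below recomputes these quantities for the PCABC matrix as reconstructed in Table 4; it is a worked illustration of the standard definitions rather than a reproduction of the exact published figures.

```python
import numpy as np

def accuracy_report(cm):
    """cm[i, j] = ground-data points of class j assigned to image class i."""
    diag = np.diag(cm).astype(float)
    row_tot = cm.sum(axis=1)      # classified points per image class
    col_tot = cm.sum(axis=0)      # ground-data points per class
    n = cm.sum()

    user = np.divide(diag, row_tot, out=np.zeros_like(diag), where=row_tot > 0)
    producer = np.divide(diag, col_tot, out=np.zeros_like(diag), where=col_tot > 0)
    overall = diag.sum() / n
    expected = (row_tot * col_tot).sum() / n**2           # chance agreement
    kappa = (overall - expected) / (1 - expected)
    return producer, user, overall, kappa

# PCABC confusion matrix from Table 4 (rows = image classes, cols = ground data).
cm = np.array([
    [35,  0,  0,  0,  0,  0, 0],   # Pinus halepensis
    [ 0, 37,  0,  0,  0,  0, 0],   # Lianas
    [ 0,  0, 31,  0,  0,  0, 0],   # Pistacia lentiscus
    [ 0,  5,  6, 62,  0,  2, 0],   # Shrubs
    [ 0,  0,  0,  0, 12,  0, 0],   # Quercus ithaburensis
    [ 0,  0,  0,  1,  4, 57, 0],   # Quercus calliprinos
    [ 0,  3,  1,  0,  1,  0, 0],   # No class
])
producer, user, overall, kappa = accuracy_report(cm)
print(round(overall * 100, 1))     # ~91, matching the reported overall accuracy
```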
