Multi-Spectral Image Classification Based on an Object-Based Active Learning Approach
"> Figure 1
<p>An example to illustrate how the proposed query function works.</p> "> Figure 2
<p>Overall workflow of the proposed AL algorithm. The arrows in the red-solid-line box of “the proposed AL” are numbered to indicate the order of the AL process. Note that the letter after the arrow number, e.g., (3a), means the part (a) of step 3.</p> "> Figure 3
<p>The dataset “T1.” (<b>a</b>,<b>c</b>) are the original images of “T1A” and “T1B,” respectively. Their color configuration is R: near-infrared (NIR); G: red; and B: green. (<b>b</b>,<b>d</b>) are the ground truth sample polygons. The legend at the bottom shows the class types of the samples.</p> "> Figure 3 Cont.
<p>The dataset “T1.” (<b>a</b>,<b>c</b>) are the original images of “T1A” and “T1B,” respectively. Their color configuration is R: near-infrared (NIR); G: red; and B: green. (<b>b</b>,<b>d</b>) are the ground truth sample polygons. The legend at the bottom shows the class types of the samples.</p> "> Figure 4
<p>The dataset “T2.” (<b>a</b>,<b>c</b>) are the original images of “T2A” and “T2B,” respectively. Their color configuration is R: NIR; G: red; and B: green. (<b>b</b>,<b>d</b>) are the ground truth sample polygons. The legend at the bottom shows the class types of the samples.</p> "> Figure 5
<p>The dataset “T3.” (<b>a</b>,<b>c</b>) are the original images of “T3A” and “T3B,” respectively. Their color configuration is R: NIR; G: red; and B: green. (<b>b</b>,<b>d</b>) are the ground truth sample polygons. The legend at the bottom shows the class types of the samples.</p> "> Figure 6
<p>Scale analysis of the Johnson and Xie (JX) approach. (<b>a</b>) “T1A;” (<b>b</b>) “T1B;” (<b>c</b>) “T2A;” (<b>d</b>) “T2B;” (<b>e</b>) “T3A;” and (<b>f</b>) “T3B.”.</p> "> Figure 6 Cont.
<p>Scale analysis of the Johnson and Xie (JX) approach. (<b>a</b>) “T1A;” (<b>b</b>) “T1B;” (<b>c</b>) “T2A;” (<b>d</b>) “T2B;” (<b>e</b>) “T3A;” and (<b>f</b>) “T3B.”.</p> "> Figure 7
<p>“T1”’s segmentation results and the selected samples used for the AL method. (<b>a</b>,<b>d</b>) are the segmentation results of “T1A” and “T1B,” respectively. (<b>b</b>) shows the sample objects selected from “T1A,” and (<b>c</b>) illustrates the class labels of the samples exhibited in (<b>b</b>).</p> "> Figure 8
<p>“T2”’s segmentation results and the selected samples used for AL method. (<b>a</b>,<b>d</b>) are the segmentation results of “T2A” and “T2B,” respectively. (<b>b</b>) shows the sample objects selected from “T2A,” and (<b>c</b>) illustrates the class labels of the samples exhibited in (<b>b</b>).</p> "> Figure 9
<p>“T3”’s segmentation results and the selected samples used for AL method. (<b>a</b>,<b>d</b>) are the segmentation results of “T3A” and “T3B,” respectively. (<b>b</b>) shows the sample objects selected from “T3A,” and (<b>c</b>) illustrates the class labels of the samples exhibited in (<b>b</b>).</p> "> Figure 10
<p>The dependence of AL performance of “T1” and the number of AL-selected training samples for the 4 AL algorithms in the 8 situations of object feature combinations. (<b>a</b>) S; (<b>b</b>) GS; (<b>c</b>) ST; (<b>d</b>) SC; (<b>e</b>) GST; (<b>f</b>) GSC; (<b>g</b>) STC; and (<b>h</b>) GSTC.</p> "> Figure 10 Cont.
<p>The dependence of AL performance of “T1” and the number of AL-selected training samples for the 4 AL algorithms in the 8 situations of object feature combinations. (<b>a</b>) S; (<b>b</b>) GS; (<b>c</b>) ST; (<b>d</b>) SC; (<b>e</b>) GST; (<b>f</b>) GSC; (<b>g</b>) STC; and (<b>h</b>) GSTC.</p> "> Figure 11
<p>The dependence of AL performance of “T2” and the number of AL-selected training samples for the 4 AL algorithms in the 8 situations of object feature combinations. (<b>a</b>) S; (<b>b</b>) GS; (<b>c</b>) ST; (<b>d</b>) SC; (<b>e</b>) GST; (<b>f</b>) GSC; (<b>g</b>) STC; and (<b>h</b>) GSTC.</p> "> Figure 12
<p>The dependence of AL performance of “T3” and the number of AL-selected training samples for the 4 AL algorithms in the 8 situations of object feature combinations. (<b>a</b>) S; (<b>b</b>) GS; (<b>c</b>) ST; (<b>d</b>) SC; (<b>e</b>) GST; (<b>f</b>) GSC; (<b>g</b>) STC; and (<b>h</b>) GSTC.</p> "> Figure 12 Cont.
<p>The dependence of AL performance of “T3” and the number of AL-selected training samples for the 4 AL algorithms in the 8 situations of object feature combinations. (<b>a</b>) S; (<b>b</b>) GS; (<b>c</b>) ST; (<b>d</b>) SC; (<b>e</b>) GST; (<b>f</b>) GSC; (<b>g</b>) STC; and (<b>h</b>) GSTC.</p> "> Figure 13
<p>The optimal classification results of the 4 AL methods for “T1” when the GS feature combination was used. (<b>a</b>) M1; (<b>b</b>) M2; (<b>c</b>) M3; (<b>d</b>) M4; (<b>e</b>) M5; and (<b>f</b>) M6. The circles indicate the erroneously classified objects, and the color of a circle represents the correct class type for the corresponding object.</p> "> Figure 14
<p>The optimal classification results of the 4 AL methods for “T2” when the GST feature combination was used. (<b>a</b>) M1; (<b>b</b>) M2; (<b>c</b>) M3; (<b>d</b>) M4; (<b>e</b>) M5; and (<b>f</b>) M6. The circles indicate the erroneously classified objects, and the color of a circle means the correct class type for the corresponding object.</p> "> Figure 15
<p>The optimal classification results of the 4 AL methods for “T3” when the STC feature combination was used. (<b>a</b>) M1; (<b>b</b>) M2; (<b>c</b>) M3; (<b>d</b>) M4; (<b>e</b>) M5; and (<b>f</b>) M6. The circles indicate the erroneously classified objects, and the color of a circle means the correct class type for the corresponding object.</p> "> Figure 15 Cont.
<p>The optimal classification results of the 4 AL methods for “T3” when the STC feature combination was used. (<b>a</b>) M1; (<b>b</b>) M2; (<b>c</b>) M3; (<b>d</b>) M4; (<b>e</b>) M5; and (<b>f</b>) M6. The circles indicate the erroneously classified objects, and the color of a circle means the correct class type for the corresponding object.</p> "> Figure 16
<p>Effects of parameter <span class="html-italic">q</span> for the proposed AL algorithm in “T1”’s experiment. The feature combination was GSTC.</p> "> Figure 17
<p>Effects of parameter <span class="html-italic">q</span> for the proposed AL algorithm in “T2”’s experiment. The feature combination was GSTC.</p> "> Figure 18
<p>Effects of parameter <span class="html-italic">q</span> for the proposed AL algorithm in “T3”’s experiment. The feature combination was GSTC.</p> "> Figure 19
<p>The improvement of average overall accuracies (learning rates) for the 6 AL algorithms in different feature-combination cases. (<b>a</b>) “T1;” (<b>b</b>) “T2;” and (<b>c</b>) “T3.”</p> "> Figure 20
<p>The highest overall accuracies obtained by using the 6 AL algorithms in different feature-combination cases. (<b>a</b>) “T1;” (<b>b</b>) “T2;” and (<b>c</b>) “T3.”.</p> ">
Abstract
1. Introduction
2. Methodology
2.1. Basics of Active Learning
2.2. Object-Based Active Learning
2.3. Random Forest-Based Query Model
2.4. The Proposed AL Algorithm
2.4.1. Details of the Proposed AL
2.4.2. Details of the Whole Processing Chain
2.5. Object-Based Feature Extraction
3. Dataset
3.1. Satellite Image Data
3.2. Sample Collection
4. Experimental Results
4.1. Results of Image Segmentation
4.2. Results of AL Experiment
4.2.1. Effects of Feature Combinations on AL Performance
4.2.2. Comparison of the AL Classifications
4.2.3. Analysis of Parameter q
5. Discussion
6. Conclusions
Author Contributions
Funding
Acknowledgments
Conflicts of Interest
Appendix A
Abbreviation | Description |
---|---|
AL | Active learning. |
OBIA | Object-based image analysis. |
RF | Random forest. |
DT | Decision tree. |
OAA | One-against-all. |
OAO | One-against-one. |
T | Training sample set. Each element of this set contains a sample and its label. |
C | Classifier. In this paper it refers to a supervised classification algorithm. |
U | Unlabeled sample set. Each element of this set only contains a sample, and its label is unknown. |
Q | Query function. It aims to measure the appropriateness of a sample in U. |
S | Supervisor. In most cases, especially in real operational situations, the user acts as the supervisor. S aims to provide the label information for the unlabeled samples selected by a query function Q. |
Abbreviation | Description |
---|---|
M1 | The proposed AL technique, as delineated in Section 2. |
M2 | An AL scheme based on the entropy query metric [39]. |
M3 | An AL approach based on the breaking-tie criterion [34]. |
M4 | An AL strategy based on random sampling. |
M5 | A multinomial logistic regression-based AL method using a selective variance criterion [62]. |
M6 | An object-based AL algorithm built with a multinomial logistic regression classifier and the breaking-tie metric [63,64]. |
Abbreviation | Description | Defined Situation |
---|---|---|
S | Spectral features only. | Simple combination |
GS | Geometric and spectral features. | Simple combination |
ST | Spectral and textural features. | Simple combination |
SC | Spectral and contextual features. | Simple combination |
GST | Geometric, spectral, and textural features. | Complex combination |
GSC | Geometric, spectral, and contextual features. | Complex combination |
STC | Spectral, textural, and contextual features. | Complex combination |
GSTC | All four feature types are used. | Complex combination |
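For readers reproducing the experimental setup, one possible way to encode these eight combinations is a lookup from abbreviation to per-object feature columns. The following Python sketch is purely illustrative: the dictionary name and the feature column names (e.g., FEATURE_GROUPS, mean_b1) are placeholders, not the paper's exact feature list.

```python
# Hypothetical mapping from the abbreviations above to per-object feature columns.
FEATURE_GROUPS = {
    "G": ["area", "perimeter", "roundness"],          # geometric (placeholder names)
    "S": ["mean_b1", "mean_b2", "std_b1", "std_b2"],  # spectral
    "T": ["glcm_mean", "glcm_std"],                   # textural
    "C": ["contrast_mean", "contrast_std"],           # contextual
}

COMBINATIONS = ["S", "GS", "ST", "SC", "GST", "GSC", "STC", "GSTC"]


def columns_for(combination):
    """Return the feature columns used for one abbreviation, e.g. 'GSC'."""
    return [col for letter in combination for col in FEATURE_GROUPS[letter]]
```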
References
- Guo, H. Steps to the digital silk road. Nature 2018, 554, 25–27.
- Casu, F.; Manunta, M.; Agram, P.S.; Crippen, R.E. Big remotely sensed data: Tools, applications and experiences. Remote Sens. Environ. 2017, 202, 1–2.
- Chi, M.; Plaza, A.; Benediktsson, J.A.; Sun, Z.; Shen, J.; Zhu, Y. Big data for remote sensing: Challenges and opportunities. Proc. IEEE 2016, 104, 2207–2219.
- Wang, Q.; Wan, J.; Nie, F.; Liu, B.; Yan, C.; Li, X. Hierarchical feature selection for random projection. IEEE Trans. Neural Netw. Learn. Syst. 2018, 30, 1581–1586.
- Shi, C.; Pun, C. Adaptive multi-scale deep neural networks with perceptual loss for panchromatic and multispectral images classification. Inf. Sci. 2019, 490, 1–17.
- Chen, G.; Weng, Q.; Hay, G.J.; He, Y. Geographic Object-based Image Analysis (GEOBIA): Emerging trends and future opportunities. GISci. Remote Sens. 2018, 55, 159–182.
- Ye, S.; Pontius, R.G., Jr.; Rakshit, R. A review of accuracy assessment for object-based image analysis: From per-pixel to per-polygon approaches. ISPRS J. Photogramm. Remote Sens. 2018, 141, 137–147.
- Blaschke, T.; Hay, G.J.; Kelly, M.; Lang, S.; Hofmann, P.; Addink, E.; Feitosa, R.Q.; Van der Meer, F.; Van der Werff, H.; Van Coillie, F.; et al. Geographic object-based image analysis towards a new paradigm. ISPRS J. Photogramm. Remote Sens. 2014, 87, 180–191.
- Belgiu, M.; Csillik, O. Sentinel-2 cropland mapping using pixel-based and object-based time weighted dynamic time warping analysis. Remote Sens. Environ. 2018, 204, 509–523.
- Wu, M.; Huang, W.; Niu, Z.; Wang, Y.; Wang, C.; Li, W.; Hao, P.; Yu, B. Fine crop mapping by combining high spectral and high spatial resolution remote sensing data in complex heterogeneous areas. Comput. Electron. Agric. 2017, 139, 1–9.
- Lu, C.; Liu, J.; Jia, M.; Liu, M.; Man, W.; Fu, W.; Zhong, L.; Lin, X.; Su, Y.; Gao, Y. Dynamic analysis of mangrove forests based on an optimal segmentation scale model and multi-seasonal images in Quanzhou Bay, China. Remote Sens. 2018, 10, 2020.
- Niesterowicz, J.; Stepinski, T.F. Pattern-based, multi-scale segmentation and regionalization of EOSD land cover. Int. J. Appl. Earth Obs. Geoinf. 2017, 62, 192–200.
- Cai, J.; Huang, B.; Song, Y. Using multi-source geospatial big data to identify the structure of polycentric cities. Remote Sens. Environ. 2017, 202, 210–221.
- Grinias, I.; Panagiotakis, C.; Tziritas, G. MRF-based segmentation and unsupervised classification for building and road detection in peri-urban areas of high-resolution satellite images. ISPRS J. Photogramm. Remote Sens. 2016, 122, 145–166.
- Ma, L.; Li, M.; Ma, X.; Cheng, L.; Du, P.; Liu, Y. A review of supervised object-based land-cover image classification. ISPRS J. Photogramm. Remote Sens. 2017, 130, 277–293.
- Lv, Z.; Liu, T.; Benediktsson, J.A.; Lei, T.; Wan, Y. Multi-scale object histogram distance for LCCD using bi-temporal very-high-resolution remote sensing images. Remote Sens. 2018, 10, 1809.
- Lv, Z.; Liu, T.; Zhang, P.; Benediktsson, J.A.; Lei, T.; Zhang, X. Novel adaptive histogram trend similarity approach for land cover change detection by using bitemporal very-high-resolution remote sensing images. IEEE Trans. Geosci. Remote Sens. 2019.
- Zanotta, D.C.; Zortea, M.; Ferreira, M.P. A supervised approach for simultaneous segmentation and classification of remote sensing images. ISPRS J. Photogramm. Remote Sens. 2018, 142, 162–173.
- Hossain, M.D.; Chen, D. Segmentation for Object-Based Image Analysis (OBIA): A review of algorithms and challenges from remote sensing perspective. ISPRS J. Photogramm. Remote Sens. 2019, 150, 115–134.
- Lv, Z.; Shi, W.; Benediktsson, J.A.; Ning, X. Novel object-based filter for improving land-cover classification of aerial imagery with very high spatial resolution. Remote Sens. 2016, 8, 1023.
- Chen, X.; Fang, T.; Huo, H.; Li, D. Measuring the effectiveness of various features for thematic information extraction from very high resolution remote sensing imagery. IEEE Trans. Geosci. Remote Sens. 2015, 53, 4837–4851.
- Vidal-Fernández, E.; Piedra-Fernández, J.A.; Almendros-Jiménez, J.M.; Cantón-Garbín, M. OBIA system for identifying mesoscale oceanic structures in SeaWiFS and MODIS-Aqua images. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2015, 8, 1256–1265.
- Lv, Z.; Zhang, P.; Benediktsson, J.A. Automatic object-oriented, spectral-spatial feature extraction driven by Tobler’s first law of geography for very high resolution aerial imagery classification. Remote Sens. 2017, 9, 285.
- Troya-Galvis, A.; Gançarski, P.; Berti-Équille, L. Remote sensing image analysis by aggregation of segmentation-classification collaborative agents. Pattern Recognit. 2018, 73, 259–274.
- Su, T.; Zhang, S. Local and global evaluation for remote sensing image segmentation. ISPRS J. Photogramm. Remote Sens. 2017, 130, 256–276.
- Troya-Galvis, A.; Gançarski, P.; Passat, N.; Berti-Équille, L. Unsupervised quantification of under- and over-segmentation for object-based remote sensing image analysis. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2015, 8, 1936–1945.
- Costa, H.; Foody, G.M.; Boyd, D.S. Supervised methods of image segmentation accuracy assessment in land cover mapping. Remote Sens. Environ. 2018, 205, 338–351.
- Liu, D.; Xia, F. Assessing object-based classification: Advantages and limitations. Remote Sens. Lett. 2010, 1, 187–194.
- Sun, J.; Vu, T.T. Distributed and hierarchical object-based image analysis for damage assessment: A case study of 2008 Wenchuan earthquake, China. Geomat. Nat. Hazards Risk 2016, 7, 1962–1972.
- Hosseini, R.S.; Safari, A.; Homayouni, S. Natural hazard damage detection based on object-level support vector data description of optical and SAR Earth observations. Int. J. Remote Sens. 2017, 38, 3356–3374.
- Lv, Z.; Shi, W.; Zhang, X.; Benediktsson, J.A. Landslide inventory mapping from bitemporal high-resolution remote sensing images using change detection and multiscale segmentation. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2018, 11, 1520–1532.
- Goodin, D.G.; Anibas, K.L.; Bezymennyi, M. Mapping land cover and land use from object-based classification: An example from a complex agricultural landscape. Int. J. Remote Sens. 2015, 36, 4702–4723.
- Liu, C.; He, L.; Li, Z.; Li, J. Feature-driven active learning for hyperspectral image classification. IEEE Trans. Geosci. Remote Sens. 2018, 56, 341–354.
- Tuia, D.; Pasolli, E.; Emery, W.J. Using active learning to adapt remote sensing image classifiers. Remote Sens. Environ. 2011, 115, 2232–2242.
- Rajan, S.; Ghosh, J.; Crawford, M.M. An active learning approach to hyperspectral data classification. IEEE Trans. Geosci. Remote Sens. 2008, 46, 1231–1242.
- Tan, K.; Zhang, Y.; Wang, X.; Chen, Y. Object-based change detection using multiple classifiers and multi-scale uncertainty analysis. Remote Sens. 2019, 11, 359.
- Geiß, C.; Thoma, M.; Taubenböck, H. Cost-sensitive multitask active learning for characterization of urban environments with remote sensing. IEEE Geosci. Remote Sens. Lett. 2018, 15, 922–926.
- Persello, C.; Boularias, A.; Dalponte, M.; Gobakken, T.; Næsset, E.; Schölkopf, B. Cost-sensitive active learning with lookahead: Optimizing field surveys for remote sensing data classification. IEEE Trans. Geosci. Remote Sens. 2014, 52, 6652–6664.
- Xu, J.; Hang, R.; Liu, Q. Patch-based active learning (PTAL) for spectral-spatial classification on hyperspectral data. Int. J. Remote Sens. 2014, 35, 1846–1875.
- Li, J.; Bioucas-Dias, J.M.; Plaza, A. Hyperspectral image segmentation using a new Bayesian approach with active learning. IEEE Trans. Geosci. Remote Sens. 2011, 49, 3947–3960.
- Huo, L.; Tang, P. A batch-mode active learning algorithm using region-partitioning diversity for SVM classifier. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2014, 7, 1036–1046.
- Sun, S.; Zhong, P.; Xiao, H.; Wang, R. Active learning with Gaussian process classifier for hyperspectral image classification. IEEE Trans. Geosci. Remote Sens. 2015, 53, 1746–1760.
- Liu, W.; Yang, J.; Li, P.; Han, Y.; Zhao, J.; Shi, H. A novel object-based supervised classification method with active learning and random forest for PolSAR imagery. Remote Sens. 2018, 10, 1092.
- Xu, Z.; Wu, L.; Zhang, Z. Use of active learning for earthquake damage mapping from UAV photogrammetric point clouds. Int. J. Remote Sens. 2018, 39, 5568–5595.
- Ma, L.; Fu, T.; Li, M. Active learning for object-based image classification using predefined training objects. Int. J. Remote Sens. 2018, 39, 2746–2765.
- Belgiu, M.; Dragut, L. Random forest in remote sensing: A review of applications and future directions. ISPRS J. Photogramm. Remote Sens. 2016, 114, 24–31.
- Bialas, J.; Oommen, T.; Havens, T.C. Optimal segmentation of high spatial resolution images for the classification of buildings using random forests. Int. J. Appl. Earth Obs. Geoinf. 2019, 82, 101895.
- Collins, L.; Griffioen, P.; Newell, G.; Mellor, A. The utility of Random Forests for wildfire severity mapping. Remote Sens. Environ. 2018, 216, 374–384.
- Su, T. Efficient paddy field mapping using Landsat-8 imagery and object-based image analysis based on advanced fractal net evolution approach. GISci. Remote Sens. 2017, 54, 354–380.
- Wang, Q.; Lin, J.; Yuan, Y. Salient band selection for hyperspectral image classification via manifold ranking. IEEE Trans. Neural Netw. Learn. Syst. 2016, 27, 1279–1289.
- Yuan, Y.; Lin, J.; Wang, Q. Hyperspectral image classification via multitask joint sparse representation and stepwise MRF optimization. IEEE Trans. Cybern. 2016, 46, 2966–2977.
- Vieira, M.A.; Formaggio, A.R.; Rennó, C.D.; Atzberger, C.; Aguiar, D.A.; Mello, M.P. Object based image analysis and data mining applied to a remotely sensed Landsat time-series to map sugarcane over large areas. Remote Sens. Environ. 2012, 123, 553–562.
- Bazi, Y.; Melgani, F. Gaussian process approach to remote sensing image classification. IEEE Trans. Geosci. Remote Sens. 2010, 48, 186–197.
- Breiman, L. Random forests. Mach. Learn. 2001, 45, 5–32.
- Baatz, M.; Schäpe, A. Multiresolution segmentation—An optimization approach for high quality multi-scale image segmentation. In Angewandte Geographische Informationsverarbeitung XII; Strobl, J., Blaschke, T., Eds.; Wichmann: Heidelberg, Germany, 2000; Volume 1, pp. 12–23.
- Kim, H.; Yeom, J. Effect of red-edge and texture features for object-based paddy rice crop classification using RapidEye multi-spectral satellite image data. Int. J. Remote Sens. 2014, 35, 7046–7068.
- Trimble Germany GmbH. Reference Book: Trimble eCognition Developer for Windows Operating System, Version 9.3.2; Trimble Germany GmbH: Munich, Germany, 2018; pp. 1–510.
- Witharana, C.; Ouimet, W.B.; Johnson, K.M. Using LiDAR and GEOBIA for automated extraction of eighteenth–late nineteenth century relict charcoal hearths in southern New England. GISci. Remote Sens. 2018, 55, 183–204.
- Johnson, B.; Xie, Z. Unsupervised image segmentation evaluation and refinement using a multi-scale approach. ISPRS J. Photogramm. Remote Sens. 2011, 66, 473–483.
- Duro, D.C.; Franklin, S.E.; Dubé, M.G. A comparison of pixel-based and object-based image analysis with selected machine learning algorithms for the classification of agricultural landscapes using SPOT-5 HRG imagery. Remote Sens. Environ. 2012, 118, 259–272.
- Liaw, A.; Wiener, M. Classification and regression by random forest. R News 2002, 2, 18–22.
- Tan, K.; Wang, X.; Zhu, J.; Hu, J.; Li, J. A novel active learning approach for the classification of hyperspectral imagery using quasi-Newton multinomial logistic regression. Int. J. Remote Sens. 2018, 39, 3029–3054.
- Li, J.; Bioucas-Dias, J.; Plaza, A. Semisupervised hyperspectral image segmentation using multinomial logistic regression with active learning. IEEE Trans. Geosci. Remote Sens. 2010, 48, 4085–4098.
- Guo, J.; Zhou, X.; Li, J.; Plaza, A.; Prasad, S. Superpixel-based active learning and online feature importance learning for hyperspectral image analysis. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2016, 10, 347–359.
Input: T, C, U. Output: an enlarged T.
1. Train C using T; initialize Q using C.
2. Find sample(s) in U using Q; let S provide the label information for the sample(s); add the sample(s) to T; remove the sample(s) from U.
3. Update Q using C and T; go to step 2 if the updated T does not yet meet the stopping criterion.
4. Output T.
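A minimal Python sketch of this generic pool-based loop, assuming a scikit-learn-style classifier standing in for C and an oracle callback standing in for the supervisor S; the names active_learning_loop, query_fn, and oracle are illustrative and not from the paper.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier


def active_learning_loop(X_train, y_train, X_pool, oracle, query_fn,
                         n_iterations=10, q=1):
    """Generic pool-based AL loop: train C on T, query U via Q, grow T."""
    X_train, y_train = X_train.copy(), y_train.copy()
    X_pool = X_pool.copy()
    clf = RandomForestClassifier(n_estimators=200)       # classifier C
    for _ in range(n_iterations):                         # stopping criterion
        clf.fit(X_train, y_train)                         # (re)train C on T
        scores = query_fn(clf, X_pool)                    # query function Q
        picked = np.argsort(scores)[-q:]                  # q most informative samples
        new_labels = oracle(X_pool[picked])               # supervisor S labels them
        X_train = np.vstack([X_train, X_pool[picked]])    # add to T
        y_train = np.concatenate([y_train, new_labels])
        X_pool = np.delete(X_pool, picked, axis=0)        # remove from U
    return X_train, y_train, clf
```

Here query_fn(clf, X) must return one informativeness score per pool sample, e.g., derived from clf.predict_proba(X), with larger values meaning "query first."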
Feature Type | Feature Name | Description |
---|---|---|
Geometric | Area (A) | A is measured as the number of pixels in the object. |
 | Perimeter (P) | P is calculated by counting the number of edge pixels. |
 | Roundness (Ro) | Ro = P²/A. |
 | Rectangular degree (Rrec) | Rrec = A/Ab, where Ab is the area of the outer bounding box of the object. |
 | Length/width ratio (Rlw) | Ratio of the length and width of the outer bounding box. |
 | Shape index (Is) | Is = P/(4·Ps), where Ps is the perimeter of a square that has the same area as the object. |
 | Border index (Ib) | Ib = 0.5·P/(w + l), where w and l denote the width and length of the object, respectively. |
 | Asymmetry (Ra) | Ra is defined by comparing an approximating ellipse with the object. Variances of the pixel coordinates are used to compute this feature; readers are referred to the eCognition reference book for computation details [57]. |
 | Main/secondary direction width ratio (Rms) | Ratio of the object widths in the main and secondary directions, calculated as the ratio of the two eigenvalues of the covariance matrix of the pixel coordinates. |
 | Density (Rd) | Rd = Ps/(1 + (Vx + Vy)^0.5), where Ps is defined as for Is, and Vx and Vy are the coordinate variances in the horizontal and vertical directions, respectively. |
Spectral | Average value (Sa) | The average pixel value for a spectral channel. |
 | Median value (Sm) | The median pixel value for a spectral channel. |
 | Standard deviation (STD) (Ss) | The standard deviation of the pixel values for a spectral channel. |
Textural | Average value (Ta) | Defined analogously to the spectral features, but computed on the textural feature bands. |
 | Median value (Tm) | |
 | STD (Ts) | |
Contextual | Average of contrast (Ca) | The mean difference between the Sa of the object and the Sa of its neighboring objects. |
 | Median of contrast (Cm) | The mean difference between the Sm of the object and the Sm of its neighboring objects. |
 | STD of contrast (Cs) | The mean difference between the Ss of the object and the Ss of its neighboring objects. |
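As a rough illustration of how such per-object features can be derived from a segmentation label image, the NumPy/SciPy sketch below computes a few of the listed quantities (A, P, Ro, Sa, Sm, Ss, Ca). It is a simplified approximation, not the eCognition implementation cited above [57]; the function name, the edge-pixel perimeter estimate, and the precomputed neighbor list are assumptions.

```python
import numpy as np
from scipy import ndimage


def object_features(labels, band, obj_id, neighbor_ids):
    """Compute a few of the per-object features listed above (illustrative).

    labels       : 2-D array of segment ids produced by the segmentation step
    band         : 2-D array holding one spectral channel
    obj_id       : id of the object of interest
    neighbor_ids : ids of its adjacent objects (assumed to be precomputed)
    """
    mask = labels == obj_id
    area = int(mask.sum())                                  # A: number of pixels

    # P: object pixels on the boundary (a simple edge-pixel count)
    perimeter = int((mask & ~ndimage.binary_erosion(mask)).sum())

    roundness = perimeter ** 2 / area                       # Ro = P^2 / A

    pix = band[mask]
    sa, sm, ss = pix.mean(), np.median(pix), pix.std()      # Sa, Sm, Ss

    # Ca: mean difference between the object's Sa and its neighbors' Sa
    neigh = [band[labels == n].mean() for n in neighbor_ids]
    ca = float(np.mean([abs(sa - m) for m in neigh])) if neigh else 0.0

    return {"A": area, "P": perimeter, "Ro": roundness,
            "Sa": sa, "Sm": sm, "Ss": ss, "Ca": ca}
```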
Class Name | Coal Mine | Shadow | Dark Bare Soil | Bright Bare Soil |
---|---|---|---|---|
# polygons (pixels) for “T1A” | 20 (47,727) | 27 (18,082) | 21 (49,431) | 53 (67,455) |
# polygons (pixels) for “T1B” | 5 (13,710) | 29 (10,658) | 35 (39,132) | 49 (31,962) |
Class Name | Vegetation | Watered Field | Bright Bare Soil | Dry Bare Soil | Moist Bare Soil |
---|---|---|---|---|---|
# polygons (pixels) for “T2A” | 34 (23,766) | 34 (42,043) | 39 (24,037) | 38 (33,769) | 36 (29,412) |
# polygons (pixels) for “T2B” | 30 (13,317) | 44 (29,569) | 36 (18,509) | 39 (25,486) | 52 (24,835) |
Class Name | Bright Building | Light Color Building | Dark Color Building | Vegetation |
---|---|---|---|---|
# polygons (pixels) for “T3A” | 46 (17,587) | 111 (44,094) | 82 (15,601) | 32 (17,886) |
# polygons (pixels) for “T3B” | 50 (18,665) | 40 (17,610) | 62 (27,841) | 25 (37,116) |
Class Name | Coal Mine | Shadow | Dark Bare Soil | Bright Bare Soil |
---|---|---|---|---|
# selected objects | 37 | 21 | 22 | 75 |
Class Name | Vegetation | Watered Field | Bright Bare Soil | Dry Bare Soil | Moist Bare Soil |
---|---|---|---|---|---|
# selected objects | 28 | 39 | 50 | 20 | 14 |
Class Name | Bright Building | Light Color Building | Dark Color Building | Vegetation |
---|---|---|---|---|
# selected objects | 51 | 56 | 10 | 10 |
Input: T, C, U, S, a threshold Titer (Titer works as the stopping criterion), and a parameter q (q determines how many samples are selected in each AL iteration). Output: an enlarged T.

Process of M2:
1. Let Niter = 1; train an EM classifier (C) using T.
2. Find q sample(s) in U using the entropy query metric (Equation (17) of [39]); let S provide the label information for the q sample(s); add the sample(s) to T; remove the sample(s) from U; Niter = Niter + 1.
3. Retrain C using the updated T; go to step 2 if Niter < Titer.
4. Output T.

Process of M3:
1. Let Niter = 1; train an EM classifier (C) using T.
2. Find q sample(s) in U using the breaking-tie query metric (Equation (1) of [34]); let S provide the label information for the q sample(s); add the sample(s) to T; remove the sample(s) from U; Niter = Niter + 1.
3. Retrain C using the updated T; go to step 2 if Niter < Titer.
4. Output T.

Process of M4:
1. Let Niter = 1.
2. Randomly select q sample(s) from U; let S provide the label information for the q sample(s); add the sample(s) to T; remove the sample(s) from U; Niter = Niter + 1.
3. Go to step 2 if Niter < Titer.
4. Output T.
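The three query criteria used by M2, M3, and M4 can be expressed compactly on the classifier's class-probability output. The Python sketch below is illustrative only and may differ in detail from Equation (17) of [39] and Equation (1) of [34]; the function names are assumptions, not part of the paper.

```python
import numpy as np


def entropy_query(proba):
    """Entropy-style criterion (as in M2): higher entropy = more uncertain."""
    p = np.clip(proba, 1e-12, 1.0)
    return -(p * np.log(p)).sum(axis=1)


def breaking_tie_query(proba):
    """Breaking-tie-style criterion (as in M3): a small gap between the two
    most probable classes means an uncertain sample; the gap is negated so
    that larger scores mean 'query first'."""
    top2 = np.sort(proba, axis=1)[:, -2:]
    return -(top2[:, 1] - top2[:, 0])


def random_query(proba, rng=None):
    """Random-sampling baseline (as in M4): probabilities are ignored."""
    rng = np.random.default_rng() if rng is None else rng
    return rng.random(proba.shape[0])
```

Each function returns one score per unlabeled sample, with larger values meaning "query first," so any of them can be plugged into a generic AL loop such as the one sketched after the algorithm listing above, e.g., query_fn=lambda clf, X: entropy_query(clf.predict_proba(X)).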
Method | M1 | M2 | M3 | M4 | M5 | M6 |
---|---|---|---|---|---|---|
UA of coal mine (%) | 94.44 | 99.06 | 91.06 | 73.07 | 58.11 | 70.64 |
UA of shadow (%) | 90.92 | 72.10 | 74.60 | 65.56 | 88.96 | 82.31 |
UA of dark bare soil (%) | 87.45 | 83.52 | 76.27 | 83.43 | 91.20 | 81.03 |
UA of bright bare soil (%) | 94.21 | 94.17 | 95.79 | 95.78 | 94.31 | 95.53 |
PA of coal mine (%) | 84.85 | 69.53 | 49.18 | 79.50 | 91.98 | 91.00 |
PA of shadow (%) | 89.68 | 95.05 | 93.27 | 97.46 | 80.70 | 93.76 |
PA of dark bare soil (%) | 94.53 | 91.53 | 94.49 | 80.67 | 77.42 | 81.94 |
PA of bright bare soil (%) | 89.41 | 85.15 | 78.68 | 80.50 | 91.11 | 77.96 |
Overall accuracy (%) | 90.88 | 86.63 | 82.56 | 82.32 | 84.46 | 83.23 |
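The user's accuracy (UA), producer's accuracy (PA), and overall accuracy reported in these tables follow the usual confusion-matrix definitions. The sketch below is a minimal illustration assuming a matrix with predicted classes in rows and reference classes in columns; whether the counts are per object or per pixel is not restated here.

```python
import numpy as np


def accuracy_metrics(confusion):
    """UA, PA and overall accuracy from a confusion matrix whose rows are
    predicted classes and whose columns are reference classes."""
    confusion = np.asarray(confusion, dtype=float)
    correct = np.diag(confusion)
    ua = correct / confusion.sum(axis=1)   # user's accuracy, one value per class
    pa = correct / confusion.sum(axis=0)   # producer's accuracy, one value per class
    oa = correct.sum() / confusion.sum()   # overall accuracy
    return ua, pa, oa
```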
Method | M1 | M2 | M3 | M4 | M5 | M6 |
---|---|---|---|---|---|---|
UA of vegetation (%) | 96.71 | 81.44 | 100.00 | 98.83 | 100.00 | 95.51 |
UA of watered field (%) | 91.65 | 90.10 | 71.34 | 79.34 | 78.15 | 91.18 |
UA of bright bare soil (%) | 100.00 | 100.00 | 99.08 | 100.00 | 100.00 | 100.00 |
UA of dry bare soil (%) | 69.37 | 71.96 | 75.67 | 62.42 | 64.14 | 68.83 |
UA of moist bare soil (%) | 88.10 | 91.26 | 86.07 | 79.22 | 84.64 | 80.30 |
PA of vegetation (%) | 81.33 | 83.65 | 17.98 | 42.42 | 38.92 | 82.17 |
PA of watered field (%) | 98.04 | 97.42 | 98.64 | 97.28 | 98.06 | 97.24 |
PA of bright bare soil (%) | 49.95 | 58.71 | 66.89 | 32.30 | 36.09 | 48.30 |
PA of dry bare soil (%) | 94.20 | 94.20 | 90.41 | 85.80 | 92.35 | 83.40 |
PA of moist bare soil (%) | 88.82 | 80.25 | 88.34 | 91.66 | 88.82 | 93.45 |
Overall accuracy (%) | 85.15 | 84.81 | 79.60 | 76.11 | 77.39 | 83.33 |
Method | M1 | M2 | M3 | M4 | M5 | M6 |
---|---|---|---|---|---|---|
UA of bright building (%) | 95.67 | 68.50 | 65.87 | 95.73 | 76.09 | 73.60 |
UA of light color building (%) | 88.56 | 92.74 | 91.12 | 94.59 | 94.32 | 93.09 |
UA of dark color building (%) | 89.93 | 88.76 | 86.84 | 86.98 | 86.16 | 86.32 |
UA of vegetation (%) | 96.55 | 97.32 | 97.51 | 95.61 | 97.72 | 97.61 |
PA of bright building (%) | 93.38 | 96.68 | 96.88 | 94.82 | 96.24 | 96.24 |
PA of light color building (%) | 89.74 | 45.21 | 39.64 | 87.05 | 59.74 | 55.61 |
PA of dark color building (%) | 91.71 | 95.11 | 95.41 | 93.42 | 96.02 | 95.57 |
PA of vegetation (%) | 95.66 | 95.63 | 93.35 | 94.38 | 93.31 | 93.31 |
Overall accuracy (%) | 93.12 | 86.91 | 85.22 | 92.92 | 88.76 | 87.91 |
© 2020 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).
Share and Cite
Su, T.; Zhang, S.; Liu, T. Multi-Spectral Image Classification Based on an Object-Based Active Learning Approach. Remote Sens. 2020, 12, 504. https://doi.org/10.3390/rs12030504