Classification of Building Damage Using a Novel Convolutional Neural Network Based on Post-Disaster Aerial Images
Figure 1. The framework of the proposed EBDC-Net.
Figure 2. Spatial attention mechanism.
Figure 3. Feature sequence generation.
Figure 4. LSTM processing cell.
Figure 5. Loss value curve.
Figure 6. Confusion matrix in Group 3: (a) baseline in the Ludian dataset; (b) EBDC-Net in the Ludian dataset; (c) baseline in the Yushu dataset; (d) EBDC-Net in the Yushu dataset.
Figure 7. Examples of buildings in different areas: (a) Ludian dataset; (b) Yushu dataset.
Figure 8. Example of assessment results of building damage in the Yangbi dataset: (a) original image; (b) EBDC-Net assessment results; (c) visual interpretation results.
Abstract
1. Introduction
2. Materials and Methods
2.1. Data Sources
2.2. Methods
2.2.1. Feature Extraction Encoder Module
2.2.2. Building Damage Classification Module
3. Results
3.1. Implementation Details of the Experiment
3.2. Results of the Comparison of Different Baseline Models
3.3. Results of Ablation Experiments
3.4. Results of Comparison with Different Building Damage Classification Methods
4. Discussion
5. Conclusions
- (1) We propose a novel deep-learning-based model to solve the fine-grained classification problem of damaged buildings, which is critical to earthquake rescue and post-disaster damage assessment.
- (2) The spatial attention mechanism and the contextual feature extraction module are embedded in EBDC-Net, which can improve the model’s ability to classify buildings with different levels of damage.
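The spatial attention mechanism named in contribution (2) can be illustrated with a minimal NumPy sketch. This is not the authors' implementation: the learned convolution that normally fuses the pooled maps is replaced here by a plain mean, and shapes follow a (C, H, W) feature-map convention.

```python
import numpy as np

def spatial_attention(feature_map: np.ndarray) -> np.ndarray:
    """Sketch of a spatial attention mechanism: pool the channel axis
    (average and max), derive a per-pixel weight map in (0, 1), and
    reweight the input features. Input shape: (C, H, W)."""
    avg_pool = feature_map.mean(axis=0, keepdims=True)     # (1, H, W)
    max_pool = feature_map.max(axis=0, keepdims=True)      # (1, H, W)
    pooled = np.concatenate([avg_pool, max_pool], axis=0)  # (2, H, W)
    # A learned convolution would fuse the two pooled maps; a simple
    # mean stands in for that fusion in this sketch.
    fused = pooled.mean(axis=0, keepdims=True)             # (1, H, W)
    weights = 1.0 / (1.0 + np.exp(-fused))                 # sigmoid
    return feature_map * weights                           # broadcast over C
```

Because the weights lie in (0, 1), the module can only attenuate features, emphasizing spatial locations where the pooled responses are strong.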
Author Contributions
Funding
Data Availability Statement
Acknowledgments
Conflicts of Interest
References
- Taşkin, G.; Erten, E.; Alataş, E.O. A Review on Multi-temporal Earthquake Damage Assessment Using Satellite Images. In Change Detection and Image Time Series Analysis 2: Supervised Methods; John Wiley & Sons, Inc.: Hoboken, NJ, USA, 2021; pp. 155–221.
- Huang, H.; Sun, G.; Zhang, X.; Hao, Y.; Zhang, A.; Ren, J.; Ma, H. Combined multiscale segmentation convolutional neural network for rapid damage mapping from postearthquake very high-resolution images. J. Appl. Remote Sens. 2019, 13, 022007.
- Liu, X.; Deng, Z.; Yang, Y. Recent progress in semantic image segmentation. Artif. Intell. Rev. 2019, 52, 1089–1106.
- Zheng, Z.; Zhong, Y.; Wang, J.; Ma, A.; Zhang, L. Building damage assessment for rapid disaster response with a deep object-based semantic change detection framework: From natural disasters to man-made disasters. Remote Sens. Environ. 2021, 265, 112636.
- Wu, C.; Zhang, F.; Xia, J.; Xu, Y.; Li, G.; Xie, J.; Du, Z.; Liu, R. Building damage detection using U-Net with attention mechanism from pre- and post-disaster remote sensing datasets. Remote Sens. 2021, 13, 905.
- Xiao, H.; Peng, Y.; Tan, H.; Li, P. Dynamic Cross Fusion Network for Building-Based Damage Assessment. In Proceedings of the 2021 IEEE International Conference on Multimedia and Expo (ICME), Shenzhen, China, 5–9 July 2021; pp. 1–6.
- Adriano, B.; Yokoya, N.; Xia, J.; Miura, H.; Liu, W.; Matsuoka, M.; Koshimura, S. Learning from multimodal and multitemporal earth observation data for building damage mapping. ISPRS J. Photogramm. Remote Sens. 2021, 175, 132–143.
- Dong, L.; Shan, J. A comprehensive review of earthquake-induced building damage detection with remote sensing techniques. ISPRS J. Photogramm. Remote Sens. 2013, 84, 85–99.
- Song, D.; Tan, X.; Wang, B.; Zhang, L.; Shan, X.; Cui, J. Integration of super-pixel segmentation and deep-learning methods for evaluating earthquake-damaged buildings using single-phase remote sensing imagery. Int. J. Remote Sens. 2020, 41, 1040–1066.
- Yang, W.; Zhang, X.; Luo, P. Transferability of convolutional neural network models for identifying damaged buildings due to earthquake. Remote Sens. 2021, 13, 504.
- Duarte, D.; Nex, F.; Kerle, N.; Vosselman, G. Multi-resolution feature fusion for image classification of building damages with convolutional neural networks. Remote Sens. 2018, 10, 1636.
- Ji, M.; Liu, L.; Zhang, R.F.; Buchroithner, M. Discrimination of earthquake-induced building destruction from space using a pretrained CNN model. Appl. Sci. 2020, 10, 602.
- Nex, F.; Duarte, D.; Tonolo, F.G.; Kerle, N. Structural building damage detection with deep learning: Assessment of a state-of-the-art CNN in operational conditions. Remote Sens. 2019, 11, 2765.
- Ishraq, A.; Lima, A.A.; Kabir, M.M.; Rahman, M.S.; Mridha, M. Assessment of Building Damage on Post-Hurricane Satellite Imagery using improved CNN. In Proceedings of the 2022 International Conference on Decision Aid Sciences and Applications (DASA), Chiangrai, Thailand, 23–25 March 2022; pp. 665–669.
- Cao, C.; Liu, D.; Singh, R.P.; Zheng, S.; Tian, R.; Tian, H. Integrated detection and analysis of earthquake disaster information using airborne data. Geomat. Nat. Hazards Risk 2016, 7, 1099–1128.
- Ci, T.; Liu, Z.; Wang, Y. Assessment of the degree of building damage caused by disaster using convolutional neural networks in combination with ordinal regression. Remote Sens. 2019, 11, 2858.
- Ma, H.; Liu, Y.; Ren, Y.; Wang, D.; Yu, L.; Yu, J. Improved CNN classification method for groups of buildings damaged by earthquake, based on high resolution remote sensing images. Remote Sens. 2020, 12, 260.
- Matin, S.S.; Pradhan, B. Challenges and limitations of earthquake-induced building damage mapping techniques using remote sensing images: A systematic review. Geocarto Int. 2021, 1–27.
- Guo, H.; Shi, Q.; Du, B.; Zhang, L.; Wang, D.; Ding, H. Scene-driven multitask parallel attention network for building extraction in high-resolution remote sensing images. IEEE Trans. Geosci. Remote Sens. 2020, 59, 4287–4306.
- He, K.; Zhang, X.; Ren, S.; Sun, J. Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 27–30 June 2016; pp. 770–778.
- Fu, J.; Liu, J.; Tian, H.; Li, Y.; Bao, Y.; Fang, Z.; Lu, H. Dual attention network for scene segmentation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Long Beach, CA, USA, 15–20 June 2019; pp. 3146–3154.
- Hochreiter, S.; Schmidhuber, J. Long short-term memory. Neural Comput. 1997, 9, 1735–1780.
- Zhou, F.; Hang, R.; Liu, Q.; Yuan, X. Hyperspectral image classification using spectral-spatial LSTMs. Neurocomputing 2019, 328, 39–47.
- Yin, J.; Qi, C.; Chen, Q.; Qu, J. Spatial-spectral network for hyperspectral image classification: A 3-D CNN and Bi-LSTM framework. Remote Sens. 2021, 13, 2353.
- Liu, Q.; Zhou, F.; Hang, R.; Yuan, X. Bidirectional-convolutional LSTM based spectral-spatial feature learning for hyperspectral image classification. Remote Sens. 2017, 9, 1330.
- Huang, G.; Liu, Z.; Van Der Maaten, L.; Weinberger, K.Q. Densely connected convolutional networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA, 21–26 July 2017; pp. 4700–4708.
- Szegedy, C.; Vanhoucke, V.; Ioffe, S.; Shlens, J.; Wojna, Z. Rethinking the inception architecture for computer vision. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 27–30 June 2016; pp. 2818–2826.
- Chollet, F. Xception: Deep learning with depthwise separable convolutions. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA, 21–26 July 2017; pp. 1251–1258.
- Howard, A.G.; Zhu, M.; Chen, B.; Kalenichenko, D.; Wang, W.; Weyand, T.; Andreetto, M.; Adam, H. MobileNets: Efficient Convolutional Neural Networks for Mobile Vision Applications. arXiv 2017, arXiv:1704.04861.
- Simonyan, K.; Zisserman, A. Very Deep Convolutional Networks for Large-Scale Image Recognition. arXiv 2014, arXiv:1409.1556.
Dataset | Intact | Slightly Damaged | Severely Damaged | Collapsed | Image Size | Resolution |
---|---|---|---|---|---|---|
Yushu Dataset [16] | 778 | 918 | 665 | 1140 | 88 × 88 Pixel | 0.1 m |
Ludian Dataset [16] | 1630 | 3074 | 1685 | 1984 | 88 × 88 Pixel | 0.2 m |
Yangbi Dataset | 928 | 202 | 111 | - | 88 × 88 Pixel | 0.03–0.2 m |
Damage Level | Description | Ludian Dataset | Yushu Dataset | Yangbi Dataset |
---|---|---|---|---|
L0 | Intact | 1630 | 778 | 928 |
L1 | Slightly damaged | 3074 | 918 | 202 |
L2 | Severely damaged | 1685 | 665 | 111 |
L3 | Collapsed | 1984 | 1140 | - |
Total | | 8337 | 3510 | 1241 |
Group | Description | Damage Level |
---|---|---|
Group 1 | Non-collapsed | L0, L1, L2 |
 | Collapsed | L3 |
Group 2 | Intact | L0, L1 |
 | Severely damaged | L2 |
 | Collapsed | L3 |
Group 3 | Intact | L0 |
 | Slightly damaged | L1 |
 | Severely damaged | L2 |
 | Collapsed | L3 |
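The three grouping schemes in the table above amount to a re-mapping of the four fine-grained labels (L0–L3) onto coarser classes. A small Python sketch of that mapping (the helper names are hypothetical, not from the paper):

```python
# Mapping of fine-grained damage levels (L0-L3) to each evaluation
# scheme's coarser classes, as listed in the grouping table.
GROUPS = {
    "Group 1": {"L0": "Non-collapsed", "L1": "Non-collapsed",
                "L2": "Non-collapsed", "L3": "Collapsed"},
    "Group 2": {"L0": "Intact", "L1": "Intact",
                "L2": "Severely damaged", "L3": "Collapsed"},
    "Group 3": {"L0": "Intact", "L1": "Slightly damaged",
                "L2": "Severely damaged", "L3": "Collapsed"},
}

def regroup(labels, scheme):
    """Map a sequence of fine-grained labels to the chosen scheme."""
    table = GROUPS[scheme]
    return [table[label] for label in labels]
```

For example, `regroup(["L0", "L3"], "Group 1")` yields `["Non-collapsed", "Collapsed"]`, turning the four-class problem into the binary collapsed/non-collapsed task.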
Test | OA (%) | Kappa | MSE |
---|---|---|---|
LR-0001-BS-8 | 68.39 | 0.56 | 0.44 |
LR-0001-BS-16 | 75.00 | 0.65 | 0.28 |
LR-0001-BS-32 | 77.49 | 0.69 | 0.26 |
LR-0001-BS-64 | 75.96 | 0.67 | 0.28 |
Test | OA (%) | Kappa | MSE |
---|---|---|---|
LR-001-BS-32 | 29.02 | 0.06 | 0.93 |
LR-0001-BS-32 | 77.49 | 0.69 | 0.26 |
LR-00001-BS-32 | 68.58 | 0.57 | 0.40 |
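The three metrics reported in these tables (overall accuracy, Cohen's kappa, and mean squared error over the ordinal damage levels) can be computed from a set of predictions as follows. This is a generic NumPy sketch, not the authors' evaluation code:

```python
import numpy as np

def classification_metrics(y_true, y_pred, n_classes=4):
    """Compute OA, Cohen's kappa, and MSE for integer class labels.
    MSE is taken over the ordinal label values, so misclassifying by
    two damage levels is penalized more than by one."""
    y_true = np.asarray(y_true)
    y_pred = np.asarray(y_pred)
    cm = np.zeros((n_classes, n_classes), dtype=float)
    for t, p in zip(y_true, y_pred):
        cm[t, p] += 1
    n = cm.sum()
    oa = np.trace(cm) / n
    # Chance agreement from the row/column marginals.
    pe = (cm.sum(axis=1) * cm.sum(axis=0)).sum() / n**2
    kappa = (oa - pe) / (1 - pe)
    mse = np.mean((y_true - y_pred) ** 2)
    return oa, kappa, mse
```

The ordinal MSE explains why confusing adjacent damage levels (e.g. L1 vs. L2) hurts the MSE column far less than confusing intact with collapsed buildings.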
Model Name | Group 1 OA (%) | Group 1 Kappa | Group 1 MSE | Group 2 OA (%) | Group 2 Kappa | Group 2 MSE | Group 3 OA (%) | Group 3 Kappa | Group 3 MSE |
---|---|---|---|---|---|---|---|---|---|
DenseNet | 91.09 | 0.76 | 0.09 | 79.21 | 0.65 | 0.27 | 69.63 | 0.58 | 0.43 |
ResNet50 | 92.52 | 0.79 | 0.07 | 81.32 | 0.68 | 0.20 | 71.26 | 0.61 | 0.37 |
InceptionV3 | 92.62 | 0.79 | 0.07 | 81.41 | 0.68 | 0.23 | 72.99 | 0.63 | 0.35 |
Xception | 92.43 | 0.79 | 0.08 | 79.12 | 0.64 | 0.26 | 65.80 | 0.53 | 0.43 |
MobileNet | 92.81 | 0.80 | 0.07 | 80.46 | 0.66 | 0.24 | 72.22 | 0.62 | 0.33 |
VGG16 | 93.29 | 0.81 | 0.06 | 81.61 | 0.68 | 0.22 | 73.37 | 0.63 | 0.30 |
Baseline | 93.39 | 0.82 | 0.06 | 83.52 | 0.72 | 0.18 | 74.23 | 0.64 | 0.30 |
Model Name | Group 1 OA (%) | Group 1 Kappa | Group 1 MSE | Group 2 OA (%) | Group 2 Kappa | Group 2 MSE | Group 3 OA (%) | Group 3 Kappa | Group 3 MSE |
---|---|---|---|---|---|---|---|---|---|
DenseNet | 92.44 | 0.82 | 0.07 | 76.32 | 0.60 | 0.38 | 63.34 | 0.50 | 0.66 |
ResNet50 | 93.58 | 0.85 | 0.06 | 76.03 | 0.61 | 0.32 | 63.20 | 0.50 | 0.60 |
InceptionV3 | 92.58 | 0.83 | 0.07 | 75.46 | 0.58 | 0.35 | 63.62 | 0.51 | 0.58 |
Xception | 92.86 | 0.84 | 0.07 | 76.46 | 0.61 | 0.29 | 61.91 | 0.48 | 0.59 |
MobileNet | 92.87 | 0.84 | 0.07 | 75.19 | 0.58 | 0.39 | 64.05 | 0.51 | 0.59 |
VGG16 | 93.72 | 0.86 | 0.06 | 76.18 | 0.63 | 0.27 | 63.91 | 0.51 | 0.52 |
Baseline | 93.30 | 0.85 | 0.07 | 77.03 | 0.64 | 0.26 | 65.19 | 0.53 | 0.47 |
Model Name | Group 1 OA (%) | Group 1 Kappa | Group 1 MSE | Group 2 OA (%) | Group 2 Kappa | Group 2 MSE | Group 3 OA (%) | Group 3 Kappa | Group 3 MSE |
---|---|---|---|---|---|---|---|---|---|
Baseline | 93.39 | 0.82 | 0.07 | 83.52 | 0.72 | 0.18 | 74.23 | 0.64 | 0.30 |
Baseline +R | 93.58 | 0.82 | 0.06 | 83.81 | 0.72 | 0.18 | 75.19 | 0.66 | 0.27 |
Baseline +R+P | 93.67 | 0.82 | 0.06 | 84.10 | 0.72 | 0.17 | 76.14 | 0.67 | 0.27 |
Baseline +R+P+C | 94.44 | 0.83 | 0.06 | 85.53 | 0.75 | 0.17 | 77.49 | 0.69 | 0.26 |
Model Name | Group 1 OA (%) | Group 1 Kappa | Group 1 MSE | Group 2 OA (%) | Group 2 Kappa | Group 2 MSE | Group 3 OA (%) | Group 3 Kappa | Group 3 MSE |
---|---|---|---|---|---|---|---|---|---|
Baseline | 93.30 | 0.85 | 0.07 | 77.03 | 0.64 | 0.26 | 65.19 | 0.53 | 0.47 |
Baseline +R | 93.58 | 0.86 | 0.06 | 78.60 | 0.64 | 0.27 | 65.48 | 0.53 | 0.47 |
Baseline +R+P | 93.86 | 0.86 | 0.06 | 78.74 | 0.65 | 0.24 | 66.48 | 0.66 | 0.45 |
Baseline +R+P+C | 94.72 | 0.88 | 0.05 | 79.02 | 0.65 | 0.26 | 67.62 | 0.56 | 0.42 |
Model Name | Group 1 OA (%) | Group 1 Kappa | Group 1 MSE | Group 2 OA (%) | Group 2 Kappa | Group 2 MSE | Group 3 OA (%) | Group 3 Kappa | Group 3 MSE |
---|---|---|---|---|---|---|---|---|---|
Res-CNN [11] | 89.27 | 0.68 | 0.10 | 75.86 | 0.54 | 0.37 | 63.99 | 0.50 | 0.49 |
Dense-CNN [13] | 89.08 | 0.71 | 0.11 | 77.68 | 0.61 | 0.31 | 69.15 | 0.57 | 0.38 |
VGG-GAP [14] | 93.30 | 0.81 | 0.06 | 83.14 | 0.71 | 0.19 | 73.27 | 0.64 | 0.32 |
VGG-OR [16] | 93.29 | 0.81 | 0.06 | 84.58 | 0.74 | 0.17 | 75.57 | 0.66 | 0.26 |
EBDC-Net | 94.44 | 0.83 | 0.06 | 85.53 | 0.75 | 0.17 | 77.49 | 0.69 | 0.26 |
Model Name | Group 1 OA (%) | Group 1 Kappa | Group 1 MSE | Group 2 OA (%) | Group 2 Kappa | Group 2 MSE | Group 3 OA (%) | Group 3 Kappa | Group 3 MSE |
---|---|---|---|---|---|---|---|---|---|
Res-CNN [11] | 91.44 | 0.91 | 0.09 | 75.32 | 0.58 | 0.37 | 58.20 | 0.43 | 0.64 |
Dense-CNN [13] | 92.15 | 0.83 | 0.08 | 76.03 | 0.60 | 0.32 | 59.49 | 0.44 | 0.74 |
VGG-GAP [14] | 94.00 | 0.87 | 0.06 | 78.89 | 0.67 | 0.23 | 63.77 | 0.51 | 0.56 |
VGG-OR [16] | 93.30 | 0.85 | 0.07 | 77.75 | 0.65 | 0.26 | 65.05 | 0.53 | 0.43 |
EBDC-Net | 94.72 | 0.88 | 0.05 | 79.02 | 0.65 | 0.26 | 67.62 | 0.56 | 0.42 |
Classification Results | Image 1 | Image 2 | Image 3 | Image 4 | Image 5 | Image 6 | Image 7 | Image 8 |
---|---|---|---|---|---|---|---|---|
Ground Truth | L0 | L0 | L1 | L1 | L2 | L2 | L3 | L3 |
Dense-CNN | L1 | L1 | L2 | L2 | L2 | L3 | L3 | L3 |
VGG-GAP | L0 | L2 | L2 | L2 | L3 | L2 | L3 | L3 |
VGG-OR | L0 | L1 | L2 | L2 | L2 | L2 | L2 | L2 |
EBDC-Net | L0 | L0 | L1 | L1 | L2 | L2 | L3 | L3 |
Test Name | Group 1 OA (%) | Group 1 Kappa | Group 1 MSE | Group 2 OA (%) | Group 2 Kappa | Group 2 MSE | Group 3 OA (%) | Group 3 Kappa | Group 3 MSE |
---|---|---|---|---|---|---|---|---|---|
Test 1 | 88.30 | 0.75 | 0.12 | 69.19 | 0.52 | 0.39 | 56.63 | 0.41 | 0.76 |
Test 2 | 94.72 | 0.88 | 0.05 | 79.02 | 0.65 | 0.26 | 67.62 | 0.56 | 0.42 |
Test 3 | 95.86 | 0.91 | 0.04 | 80.02 | 0.68 | 0.22 | 68.33 | 0.57 | 0.43 |
© 2022 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Share and Cite
Hong, Z.; Zhong, H.; Pan, H.; Liu, J.; Zhou, R.; Zhang, Y.; Han, Y.; Wang, J.; Yang, S.; Zhong, C. Classification of Building Damage Using a Novel Convolutional Neural Network Based on Post-Disaster Aerial Images. Sensors 2022, 22, 5920. https://doi.org/10.3390/s22155920